
The impact of AI on the labour market and equality

Published by Anonymous (not verified) on Sat, 20/04/2024 - 4:56am in

AI is the new technology likely to have the greatest impact on our economy and our society. But how AI is used and developed is a choice, and so far AI has predominantly continued the emphasis on automation. To realise the full potential of AI and minimise its… Continue reading »

Sharing the benefits of technological progress

Published by Anonymous (not verified) on Fri, 19/04/2024 - 4:55am in

This is the first of three articles discussing how the benefits of technological progress are shared, and thus how they determine the distribution of income and shape our economic and social structures. This first article focuses on how these benefits have been shared historically. Throughout history the growth in living standards has come from increasing productivity and… Continue reading »

On the Navajo Nation, Accurate Mailing Addresses Save Lives

Published by Anonymous (not verified) on Fri, 19/04/2024 - 12:58am in

This story was originally published in the Daily Yonder.

Adaline Sneak lives at the end of a long, unmarked dirt road in a rural area of the Navajo Nation in Utah. Getting there requires a high-clearance vehicle and at least moderate navigation skills.

Residents here don’t have typical addresses with street names and house numbers. Until recently, Sneak’s official address was even vaguer than the directions a gas station clerk might give a lost driver — seven miles south of Montezuma Creek, Utah, County Road 410.

She can’t get mail with an address like that, nor could someone search directions to her house on Google Maps, for example.

But for someone like Sneak, an address like this is more than just inconvenient. It’s life-threatening. Sneak suffers from seizures, and about a year ago, an ambulance got lost on the way to her house because of her ambiguous address.

Adaline Sneak recently registered her Life Alert system with her new Plus Code so that emergency responders can find her house more easily. Credit: Emily Arntsen

“We almost lost her that day,” said Arlene Begay, Sneak’s mother. The ambulance eventually made it to Sneak’s house, but only because someone on the emergency response team happened to know Begay’s sister, whom they called for directions to Sneak’s house.

“That’s happened a few times actually,” Begay continued, recalling other times the ambulance had gotten lost. But now, any confusion over Sneak’s address is hopefully cleared up for good.

This fall, Sneak was one of over 3,000 residents on the Navajo Nation who received a new, more accurate address through an initiative led by a nonprofit called the Rural Utah Project. The new addresses, which were developed by Google, are called Plus Codes. The codes are simple alpha-numeric coordinates based on longitude and latitude.

All locations on Earth have unique, Google-generated Plus Codes, the same way every location on Earth has global coordinates, though the Plus Codes are much shorter than global coordinates, making them easier to share and remember.
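Plus Codes come from Google's open-source Open Location Code scheme, which interleaves latitude and longitude digits drawn from a base-20 alphabet. The sketch below is a minimal illustration of the published encoding for a standard 10-digit code; the official library additionally normalizes out-of-range coordinates, handles longitude wrap-around, and supports shortened and lower-precision codes.

```python
import math

# Base-20 digit set used by Open Location Code (no vowels or
# easily confused characters, so codes are hard to misread).
ALPHABET = "23456789CFGHJMPQRVWX"

def encode_plus_code(lat: float, lng: float) -> str:
    """Encode a lat/lng pair as a 10-digit Plus Code (~14 m x 14 m cell).

    Minimal sketch: assumes lat in [-90, 90) and lng in [-180, 180);
    the official library also clips/normalizes out-of-range input.
    """
    # Work in integer units of the finest resolution (1/8000 of a degree)
    # to avoid floating-point drift while extracting digits.
    lat_units = int(math.floor((lat + 90.0) * 8000))
    lng_units = int(math.floor((lng + 180.0) * 8000))

    code = ""
    for i in range(4, -1, -1):            # five latitude/longitude digit pairs
        divisor = 20 ** i
        code += ALPHABET[(lat_units // divisor) % 20]
        code += ALPHABET[(lng_units // divisor) % 20]

    # The '+' after eight digits separates the area code from the local part.
    return code[:8] + "+" + code[8:]

# Worked example from the Open Location Code specification:
print(encode_plus_code(20.3701125, 2.782234375))  # 7FG49QCJ+2V
```

Because every cell on Earth gets its own code, two houses on the same unnamed dirt road still end up with distinct, shareable addresses.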

Slow beginnings

Plus Codes aren’t new — Google started developing the free, open-source technology in 2015. But the system has been slow to catch on in some areas.

For the Rural Utah Project, whose main mission is to empower disenfranchised voters, educating people on how to use Plus Codes originally started out as a way to increase voter registration on the Navajo Nation.

While registering voters during the 2018 state and county elections, field organizers with the Rural Utah Project realized hundreds of residents on the Navajo Nation were registered in the wrong voting precincts because of mix-ups with their addresses.

“When I got my ballot, I noticed I had the wrong school board member that I was voting for,” said Daylene Redhorse, a field organizer with the Rural Utah Project who lives on the Navajo Nation and spearheaded the addressing initiative.

Plus Codes are simple alpha-numeric codes based on longitude and latitude. The Rural Utah Project partnered with Google to distribute thousands of Plus Code signs on the Navajo Nation. Credit: Emily Arntsen

In rural parts of the Navajo Nation, as with many rural areas in the United States, step-by-step descriptive addresses are the norm. These addresses are valid for most services that require proof of residence, such as enrolling in public schools or registering to vote.

But just because these are technically “official” addresses doesn’t mean the system is particularly functional. For example, when Redhorse registered to vote with her descriptive address — 15 miles southwest of Bluff, Utah, County Road 436 — the county accidentally pinned her in a district north of Bluff.

“It’s discouraging for people, getting the wrong ballot and feeling like their vote doesn’t count,” Redhorse said. “As it is, we already have a lot of people who are skeptical about voting. When I go door-to-door registering people to vote, a lot of them say, ‘Why would I register? I don’t count. Nobody counts us.’”


That attitude, Redhorse explains, stems from a long history of oppression and disenfranchisement for Native Americans, who didn’t receive the nationwide right to vote until 1962, when New Mexico was the last state to grant Native Americans suffrage.

In order to help Utah residents on the Navajo Nation adopt the Plus Codes system, the Rural Utah Project partnered with Google, who helped field organizers match thousands of homes with their new addresses. Those Plus Codes were then printed on blue plastic signs, which were delivered door-to-door, along with information about how to register to vote.

Since starting the initiative, the Rural Utah Project has registered nearly 2,000 new voters with their Plus Codes.

A new address right on time

The day that field organizers arrived at Sneak’s house to deliver her Plus Code sign and explain the new addressing system, she had a seizure. Redhorse’s colleague, Tara Benally, called 9-1-1 and gave the dispatcher Sneak’s new Plus Code.

“They were able to use the Plus Code no problem,” Redhorse said. “They found the house easily.” Sneak is now able to use her Plus Code for her Life Alert system, which, her mother said, is a huge relief.

Herman Chee Jr., chief of the Monument Valley Fire Department. Credit: Emily Arntsen

Herman Chee Jr., chief of the Monument Valley Fire Department, said that most EMS responders on the Navajo Nation already use Google Maps, which is compatible with Plus Codes, unlike descriptive addresses, which mostly rely on local knowledge to pinpoint.

“With our community, we just know where people live,” he said. But memory isn’t always perfect, especially during emergencies. He said there were many times when he made mistakes getting to the scene and had to double back.

“I remember one time, we got paged out to a structure fire. I was communicating with dispatch, and they just told me to take this road, then that road. And that was it. It was dark, and it was really snowing. I just had to guess. I could see the structure fire in the distance, but I still took that wrong turn. Had to go back,” he said. “Took a long time.”


He said Plus Codes have helped responders reach people’s houses faster. But the system only works if people remember to use their new addresses when calling 9-1-1. He was recently called out to a fire using a descriptive address.

“When I finally arrived, I saw that blue sign on their house,” he recalled. “I always tell people, use your Plus Code, remember your Plus Code. It’s so much easier for the dispatch.”

Beyond expectations

Redhorse said that when she started the addressing project for voter registration, she didn’t even think about all of the other benefits.

“Then we started to notice UPS coming down the dirt road, then FedEx coming down the dirt road.”

The United States Postal Service, which handles all voting by mail, doesn’t recognize Plus Codes. Rural residents will still need a post office box to receive mail-in ballots.

But commercial mail carriers, such as the United Parcel Service (UPS) and FedEx, have already started incorporating Plus Codes into their systems.

Daylene Redhorse, a field organizer with the Rural Utah Project, helped distribute over 3,000 Plus Code signs to residents on the Navajo Nation. Credit: Emily Arntsen

“I tell people to put their Plus Codes in the ‘description’ section when they’re buying something online,” Redhorse said. “The delivery person can usually figure it out that way.”

Residents can also use their new Plus Codes to receive at-home medical treatments, which were previously unavailable to them in some cases because of their addresses.

Redhorse used to work in a dialysis clinic in Blanding, Utah. For some of her patients that lived on the reservation, the commute was over two hours.

“The biggest complaint from our patients was that they didn’t want to make the drive every other day, but they couldn’t do home dialysis because they didn’t have an address that the insurance companies would recognize,” she said.

“One guy who used to be my patient used his Plus Code to get on home dialysis, and now I’ve been seeing the same truck that we used to have at the clinic going down the dirt roads,” she said. “When I see that I say, ‘Wow, this has really changed people’s lives.’”

The post On the Navajo Nation, Accurate Mailing Addresses Save Lives appeared first on Reasons to be Cheerful.

Say Hello to this Philosopher’s ExTRA

Published by Anonymous (not verified) on Thu, 18/04/2024 - 5:34am in

Appropriately enough, Luciano Floridi (Yale), known for his work in the philosophy of information and technology, may be the first philosopher with a… well, what should we call this thing?

It’s an AI chatbot trained on his works that can then answer questions about what he says in them, but also can extrapolate somewhat to offer suggestions as to what he might think about topics not covered in those works.

“AI chatbot” doesn’t quite capture the connection it has to the person whose thoughts it is trained on, though. Its creator gave it the name “LuFlot.” But we need a name for the kind of thing LuFlot is, since surely there will end up being many more of them, used for more than just academic purposes.

My suggestion: “Extended Thought and Response Agent”, or “ExTRA” (henceforth, just “extra”).

Floridi’s extra was developed by Nicolas Gertler, a first-year student at Yale, and Rithvik “Ricky” Sabnekar, a high school student, “to foster engagement” with Floridi’s ideas, according to a press release:

Meant to facilitate teaching and learning, the chatbot is trained on all the books that Floridi has published over his more than 30-year academic career. Within seconds of receiving a query, it provides users detailed and easily digestible answers drawn from this vast work. It’s able to synthesize information from multiple sources, finding links between works that even Floridi might not have considered.

In part, it’s like a version of “Hey Sophi,” discussed here three years ago, except that it’s publicly accessible, and not just a personal research tool.

Gertler and Sabnekar founded Mylon Education, “a startup company seeking to transform the educational landscape by reconstructing the systems through which individuals generate and develop their ideas,” according to the press release. “LuFlot is the startup’s first project.”

You can try out Floridi’s extra here.


The post Say Hello to this Philosopher’s ExTRA first appeared on Daily Nous.

Inside the UK’s First Open-Access, Pay-As-You-Go Factory

Published by Anonymous (not verified) on Thu, 11/04/2024 - 6:00pm in

Entrepreneurs Alisha Fredriksson and Roujia Wen spent months in 2022 scouring London for the right space to develop a prototype. Their big idea — to capture carbon emissions from cargo ships by trapping the gas amongst calcium oxide pebbles, through a system fitted on board — required a big, well-equipped space. 

The options their search yielded were less than appealing. Large warehouses with the high ceilings Fredriksson and Wen needed to build their venture, Seabound, were typically empty, leaving tenants to fully equip them with the right machinery, plus the electricity to power it. They tended to be in industrial zones with only the likes of auto shops or dark kitchens for neighbors, and they usually required signing a five-year lease.

Seabound co-founders Alisha Fredriksson and Roujia Wen. Courtesy of Seabound

“As a six-month-old startup at the time, it was a scary proposition,” Fredriksson recalls.

Then Seabound found BLOQS, a 32,000-square-foot converted warehouse in the north London suburb of Enfield, fully kitted out with £1.3 million (around $1.7 million) worth of light industrial equipment for all kinds of manufacturing, including wood processing and metal fabrication, laser cutting and engraving, 3D printing, sewing machines, spray painting and more. If that didn’t already make the case for moving in, the flexible membership structure then quickly sealed the deal for Fredriksson and Wen. 

The initial sign-up is free, with members simply paying a daily rate for the machinery they need to use, as well as for flexible office and storage space if they need it. Raw materials are available to purchase too, price-matched with local suppliers. And if members need to learn to use a particular piece of equipment, they can pay for training. An added bonus is the on-site restaurant, where an award-winning chef serves a seasonable and affordable Mediterranean menu. Yet the biggest draw for the Seabound team was the community of 1,000 other like-minded members.

Credit: Claudia Agati

“It’s a fun place to go to work every day. We have a whole ecosystem of people that we’re a part of. Whereas if we were in our own warehouse on some industrial site, I don’t think we would have friends there — it would be more lonely,” says Fredriksson.

The expertise available at BLOQS has also allowed Seabound to tap into support on an as-needed basis. “We’ve actually also been able to keep our team very lean, because we’ve been able to occasionally work with people at BLOQS as a kind of ‘surge support,’” Fredriksson says. “For instance, there are technicians at BLOQS that have helped us, and there are electricians who are members that we’ve been able to contract with. So we have flexibility in terms of space and resourcing.”

The Seabound co-founders tested their prototype on board an 800-foot commercial container ship in late 2023. Courtesy of Seabound

Seabound was able to leverage everything on offer at BLOQS to test its carbon capture technology, with the team spending two months in late 2023 on board an 800-foot commercial container ship. The Seabound prototype successfully captured around one metric ton of CO2 per day, meaning the team, now back on dry land at BLOQS, can move into their second phase of research, development and testing, aiming to deploy their next system onto a ship in 2025.


BLOQS co-founder Al Parra feels Seabound is one of the best examples of why he and his partners set up the space, which he describes as having “its own dynamism,” to drive innovation. “What this women-led climate tech engineering group is doing is incredible,” says Parra. “They started at BLOQS because they couldn’t take on the risk of their own premises. That very often is the case, that people come to us because they have a physical need of something that we provide, but then they stay because of the community. They’re in this confluence and mix of abilities, skills and knowledge. If you don’t know how to do something, you can be damn sure you’re one handshake away from somebody who does.”

As the UK’s largest open-access professional maker space — and the country’s first pay-as-you-go space of its kind — BLOQS has created 380 full-time jobs and has turned over a collective £15 million a year (around $19.1 million) since it launched in 2012. (It was then in a different location and moved to Enfield in 2022.)

Al Parra is BLOQS’ co-founder and director. Courtesy of BLOQS

As an open-access maker space in London, BLOQS isn’t alone. Thirty-eight maker spaces in the UK capital are listed on the Open Workshop Network, while 3D printing support organization CREATE Education lists community-centric spaces across the country on its site. Discipline-specific workshops also exist for professionals. But where BLOQS is unique, argues Parra, is that it’s the only cross-discipline site out of which someone could run a business. 

“We wanted people to not just make whatever it is that they needed to, but we wanted to provide a facility where somebody was able to do what it is that the world needs,” says Parra.  

Parra has observed that BLOQS members are able to leapfrog the initial set-up period of building up manufacturing contacts, which can take up to 10 years. 

“We simplify access to things which are really expensive. If you don’t come from a privileged background, it’s difficult to get together that money. At BLOQS, you can walk straight in, from something like a building site, from a course or degree, or you can transition from another career, and we’ve got all of the resources,” says Parra. 

“By making all of the technology that we’ve got available and affordable, we are diminishing the barriers between that and the creative mind.”

The DEMAND team at work. Courtesy of DEMAND

Some entrepreneurs see BLOQS as a testing ground for new ideas and stepping stone to a more permanent, private premises, while others see fit to call it their home for the foreseeable. Seabound’s future, for example, looks promising enough that Fredriksson is already forecasting a need for a larger separate space to accommodate dedicated facilities as well as manufacturing partners, although research and development, she thinks, could still be done at BLOQS.

The charity DEMAND, meanwhile, which creates assistive products for people with disabilities, has made its journey to BLOQS in reverse. After having spent the previous 20 years operating out of its own factory just north of London, the team migrated to BLOQS in 2022 after deciding its impact could be greater working in a shared space. Spending time and money on building and machine maintenance was holding the organization back, and with no other similar outfit nearby, the team felt isolated.

“The combination of flexible space and industrial-grade machinery has had a lot of impact on our speed and efficiency. And having access to the community makes it feel like we’re in a much bigger organization — we can lean on, and be inspired by, other people,” says Lynnette Smith, DEMAND’s head of creative.

DEMAND’s push-along “big car” is designed for children with balance issues who are unable to ride a bicycle. Courtesy of DEMAND

“Being here has definitely helped us maximize the impact of each thing we design. We were very skilled at making one of something for a specific individual. While that’s still the purpose of DEMAND, to make something for an individual need, we’ve now got the machinery that helps us make much more repeatable things.”

DEMAND products refined at BLOQS include a ramp for boccia, a Paralympic sport in which athletes use the ramp to propel their ball to get as close to the target ball as possible, as well as a “big car,” a push-along car designed for children with balance issues who are unable to ride a bicycle. BLOQS’ machinery has reduced human error, and accelerated the production process, says Smith. The technology at BLOQS has also streamlined the production of an eye-led communication aid, which was originally designed for one user, Mark, who DEMAND has since collaborated with to enable it to be reproduced for others.

Growing the charity in this way is one of Smith’s key goals, as is collaborating more closely with users like Mark.

“We would love to keep working with BLOQS to make sure that accessibility happens, potentially also in new places that BLOQS open as a partnership — that’s something we’d love to see the impact of,” says Smith.

Expansion is definitely in the cards, according to Parra, with the BLOQS team assessing the feasibility of a second site in either South London, Birmingham, Liverpool, Manchester or Glasgow, to open in 2025. Beyond the UK, Parra sees global demand for spaces like BLOQS. Similar models are already emerging, like South Africa’s Made In Workshop, Ireland’s Benchspace and Artisans Asylum in the US, all offering flexible, affordable models with a range of machinery.

Co-founder Al Parra sees BLOQS as a model that could be replicated in other cities. Courtesy of BLOQS

Parra envisions real potential in developing countries, where microfinance schemes have become common in helping small-scale entrepreneurs build businesses and a livelihood.

“The developing world, where everybody’s one or two generations away from a village, understands this concept of sharing resources so intrinsically, that we’re getting interest from South Asia, Africa and Eastern Europe [to open another BLOQS],” says Parra.

“We’re offering a model for how we can make the things that we need, in a way that is sustainable.”


The post Inside the UK’s First Open-Access, Pay-As-You-Go Factory appeared first on Reasons to be Cheerful.

Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military

Published by Anonymous (not verified) on Wed, 10/04/2024 - 10:00pm in



Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI silently ended its prohibition against military work.

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.

OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.

Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally-binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”

The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble germane, real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.
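The synthetic-data idea is easy to sketch in miniature. The toy below, in plain Python with no ML libraries and with every class name and number invented for illustration, pads a scarce set of “real” labeled samples with a large batch of cheaper, noisier “synthetic” ones before fitting a trivial nearest-centroid classifier:

```python
import random

random.seed(0)

def real_samples(n, cx, cy):
    # Stand-in for scarce real imagery: 2-D feature points near a class center.
    return [(cx + random.gauss(0, 1.0), cy + random.gauss(0, 1.0)) for _ in range(n)]

def synthetic_samples(n, cx, cy):
    # Stand-in for generator output: plausible but noisier points
    # around the same class center.
    return [(cx + random.gauss(0, 1.5), cy + random.gauss(0, 1.5)) for _ in range(n)]

def train_centroids(dataset):
    # "Training" here is just averaging the features seen for each label.
    sums, counts = {}, {}
    for (x, y), label in dataset:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl]) for lbl, (sx, sy) in sums.items()}

def classify(centroids, point):
    return min(centroids, key=lambda lbl: (centroids[lbl][0] - point[0]) ** 2
                                          + (centroids[lbl][1] - point[1]) ** 2)

# A handful of scarce "real" examples per class...
data = [(pt, "airstrip") for pt in real_samples(5, 0, 0)]
data += [(pt, "tank_column") for pt in real_samples(5, 6, 6)]
# ...padded out with many cheap synthetic ones.
data += [(pt, "airstrip") for pt in synthetic_samples(200, 0, 0)]
data += [(pt, "tank_column") for pt in synthetic_samples(200, 6, 6)]

model = train_centroids(data)
print(classify(model, (5.5, 6.2)))  # → tank_column
```

The mechanics are the same whether the samples are 2-D points or generated aerial images: the synthetic set exists only to fill out regions of the input space where real examples are scarce.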

Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “Dall-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of Battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?”
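Khlaaf’s degradation point, often called “model collapse” in the research literature, can be illustrated with a toy simulation far simpler than any image model: each “generation” below fits a one-dimensional Gaussian to samples drawn from the previous generation’s fit, so estimation error compounds instead of averaging out:

```python
import random
import statistics

random.seed(1)

# Ground truth: a population with mean 0.0 and standard deviation 1.0.
mean, stdev = 0.0, 1.0

for generation in range(10):
    # Each generation "trains" only on output sampled from the previous
    # model, never on fresh real-world data.
    samples = [random.gauss(mean, stdev) for _ in range(50)]
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    print(f"gen {generation}: mean={mean:+.3f}  stdev={stdev:.3f}")
```

Because no fresh real-world data ever enters the loop, the fitted mean and standard deviation drift further from the truth with each generation; an image model repeatedly fine-tuned on generated imagery degrades through an analogous feedback loop.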

In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

Lugo, mission commander of the Pentagon’s generative AI task force and member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a “technology specialist” working on the Space Force and Air Force.

The Air Force is currently building the Advanced Battle Management System, its portion of a broader multibillion-dollar Pentagon project called the Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon envisions a near-future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.

On April 3, U.S. Central Command revealed it had already begun using elements of JADC2 in the Middle East.

The Department of Defense didn’t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that “the [Chief Digital and Artificial Intelligence Office’s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission areas.”

While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept’s January report on OpenAI’s military-industrial about-face, the company’s spokesperson Niko Felix said that even under the loosened language, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

“The point is you’re contributing to preparation for warfighting.”

Whether the Pentagon’s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to arguments that the company that helps build the gun or trains the shooter is not responsible for where it’s aimed or pulling the trigger. “They may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “But that would be a spurious distinction in my view, because the point is you’re contributing to preparation for warfighting.”

Unlike OpenAI, Microsoft makes little pretense of forgoing harm in its “responsible AI” document and openly promotes the military use of its machine learning tools.



Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. In a January interview at Davos responding to The Intercept’s reporting, OpenAI vice president of global affairs Anna Makanju assured panel attendees that the company’s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company’s groundbreaking machine learning tools were still forbidden from causing harm or destruction.

Contributing to the development of a battle management system, however, would place OpenAI’s military work far closer to warfare itself. While OpenAI’s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, Khlaaf, the machine learning safety engineer, said, its “use in other systems, such as military operation planning or battlefield assessments” would ultimately impact “where weapons are deployed or missions are carried out.”

Indeed, it’s difficult to imagine a battle whose primary purpose isn’t causing bodily harm and property damage. An Air Force press release from March, for example, describes a recent battle management system exercise as delivering “lethality at the speed of data.”

Other materials from the AI literacy seminar series make clear that “harm” is, ultimately, the point. A slide from a welcome presentation given the day before Microsoft’s asks the question, “Why should we care?” The answer: “We have to kill bad guys.” In a nod to the “literacy” aspect of the seminar, the slide adds, “We need to know what we’re talking about… and we don’t yet.”

Update: April 11, 2024
This article was updated to clarify Microsoft’s promotion of its work with the Department of Defense.

The post Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military appeared first on The Intercept.

Forget a Ban — Why Are Journalists Using TikTok in the First Place?

Published by Anonymous (not verified) on Mon, 08/04/2024 - 12:00am




The TikTok logo displayed on a laptop screen with a glowing keyboard in Krakow, Poland, on March 3, 2024.
Photo: Klaudia Radecka/NurPhoto via Getty Images

As far as I know, there are no laws against eating broken glass. You’re free to doomscroll through your cabinets, smash your favorite water cup, then scarf down the shards.

A ban on eating broken glass would be overwhelmingly irrelevant, since most people just don’t do it, and for good reason. Unfortunately, you can’t say the same about another dangerous habit: TikTok.

As a security researcher, I can’t help but hate TikTok, just like I hate all social media, for creating unnecessary personal exposure.

As a security researcher working in journalism, though, one group of the video-sharing app’s many, many users strikes a particular fear into my heart. That group is, you guessed it, my beloved colleagues, the journalists.

TikTok, of course, isn’t the only app that poses risks for journalists, but it’s been bizarre to watch reporters with sources to protect express concern about a TikTok ban when they shouldn’t be using the platform in the first place. TikTok officials, after all, have explicitly targeted reporters in attempts to reveal their sources.

My colleagues seem to nonetheless be dressing up as bullseyes.

Ignoring TikTok’s Record

Impassioned pleas by reporters to not ban TikTok curiously omit TikTok’s most egregious attacks on reporters.

In his defense of TikTok, the Daily Beast’s Brad Polumbo offers a disclaimer in the first half of the headline — “TikTok Is Bad. Banning It Would Be Much Worse” — but never expands upon why. Instead, the bulk of the piece offers an apologia for TikTok’s parent company, ByteDance.

Meanwhile, Vox’s A.W. Ohlheiser expatiates on the “both/and” of TikTok, highlighting its many perceived benefits and ills. And yet the one specific ill that could have the most impact on Ohlheiser and other reporters is absent from the laundry list of downsides.

The record is well established. In an attempt to identify reporters’ sources, ByteDance accessed IP addresses and other user data of several journalists, according to a Forbes investigation. The intention seems to have been to track the location of the reporters to see if they were in the same locations as TikTok employees who may have been sources for stories about TikTok’s links to China.

Not only did TikTok surveil reporters in attempts to identify their sources, but the company also proceeded to publicly deny having done so.

“TikTok does not collect precise GPS location information from US users, meaning TikTok could not monitor US users in the way the article suggested,” the TikTok communication team’s account posted on X in response to Forbes’s initial reporting. “TikTok has never been used to ‘target’ any members of the U.S. government, activists, public figures or journalists.”

Forbes kept digging, and its subsequent investigation found that an internal email “acknowledged that TikTok had been used in exactly this way,” as reporter Emily Baker-White put it.

TikTok conducted several internal probes into the company’s accessing of U.S. user data; officials were fired and at least one resigned, according to Forbes. That doesn’t change the basic facts: Not only did TikTok surveil reporters in attempts to identify their sources, but the company also proceeded to publicly deny having done so.

And Now, Service Journalism for Journalists

For my journalism colleagues, there may well be times when you need to check TikTok, for instance when researching a story. If this is the case, you should follow the operational security best practice of compartmentalization: keeping various items separated from one another.

In other words, put TikTok on a separate “burner” device, which doesn’t have anything sensitive on it, like your sources saved in its contacts. There’s no evidence TikTok can see, for example, your chat histories, but it can, according to the security research firm Proofpoint, access your device’s location data, contacts list, camera, and microphone. And, as a security researcher, I like to be as safe as possible.

And keep the burner device in a separate location from your regular phone. Don’t walk around with both phones turned on and connected to a cellular or Wi-Fi network and, for the love of everything holy, don’t take the burner to sensitive source meetings.

You can also limit the permissions that your device gives to TikTok — so that you’re not handing the app your aforementioned location data, contacts list, and camera access — and you should. Grant the app only the permissions it strictly needs to run, and run it only long enough to do your research.

And don’t forget, this is all for your research. When you’re done looking up whatever in our hellscape tech dystopia has brought you to this tremendous time suck, the burner device should be wiped and restored to factory defaults.

The security and disinformation risks posed to journalists are, of course, not unique to TikTok. They permeate, one way or another, every single social media platform.

That doesn’t explain journalists’ inscrutable defense of a medium that is actively working against them. It’s as clear as your favorite water cup.

Editor’s note: You can follow The Intercept on TikTok here.

The post Forget a Ban — Why Are Journalists Using TikTok in the First Place? appeared first on The Intercept.

Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List”

Published by Anonymous (not verified) on Fri, 05/04/2024 - 10:00pm in


Technology, World

The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move throughout the ravaged territory, attempting to flee the ongoing bombardment and seeking sustenance for their families.

The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into the popular consumer image organization platform offered through Google Photos. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israelis make a “hit list” of alleged Hamas fighters who participated in the October 7 attack.

The mass surveillance of Palestinian faces resulting from Israel’s efforts to identify Hamas members has caught up thousands of Gaza residents since the October 7 attack. Many of those arrested or imprisoned, often with little or no evidence, later said they had been brutally interrogated or tortured. In its facial recognition story, the Times pointed to Palestinian poet Mosab Abu Toha, whose arrest and beating at the hands of the Israeli military began with its use of facial recognition. Abu Toha, later released without being charged with any crime, told the paper that Israeli soldiers told him his facial recognition-enabled arrest had been a “mistake.”

Putting aside questions of accuracy — facial recognition systems are notoriously less accurate on nonwhite faces — the use of Google Photos’s machine learning-powered analysis features to place civilians under military scrutiny, or worse, is at odds with the company’s clearly stated rules. Under the header “Dangerous and Illegal Activities,” Google warns that Google Photos cannot be used “to promote activities, goods, services, or information that cause serious and immediate harm to people.”

“Facial recognition surveillance of this type undermines rights enshrined in international human rights law.”

Asked how a prohibition against using Google Photos to harm people was compatible with the Israel military’s use of Google Photos to create a “hit list,” company spokesperson Joshua Cruz declined to answer, stating only that “Google Photos is a free product which is widely available to the public that helps you organize photos by grouping similar faces, so you can label people to easily find old photos. It does not provide identities for unknown people in photographs.” (Cruz did not respond to repeated subsequent attempts to clarify Google’s position.)

It’s unclear how such prohibitions — or the company’s long-standing public commitments to human rights — are being applied to Israel’s military.

“It depends how Google interprets ‘serious and immediate harm’ and ‘illegal activity,’ but facial recognition surveillance of this type undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, assembly rights, and more,” said Anna Bacciarelli, the associate tech director at Human Rights Watch. “Given the context in which this technology is being used by Israeli forces, amid widespread, ongoing, and systematic denial of the human rights of people in Gaza, I would hope that Google would take appropriate action.”

Doing Good or Doing Google?

In addition to its terms of service ban against using Google Photos to cause harm to people, the company has for many years claimed to embrace various global human rights standards.

“Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” wrote Alexandria Walden, the company’s global head of human rights, in a 2022 blog post. “That’s why our products, business operations, and decision-making around emerging technologies are all informed by our Human Rights Program and deep commitment to increase access to information and create new opportunities for people around the world.”

This deep commitment includes, according to the company, upholding the Universal Declaration of Human Rights — which forbids torture — and the U.N. Guiding Principles on Business and Human Rights, which notes that conflicts over territory produce some of the worst rights abuses.

The Israeli military’s use of a free, publicly available Google product like Photos raises questions about these corporate human rights commitments, and the extent to which the company is willing to actually act upon them. Google says that it endorses and subscribes to the U.N. Guiding Principles on Business and Human Rights, a framework that calls on corporations “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.”

 Civil defense teams and citizens continue search and rescue operations after an airstrike hits the building belonging to the Maslah family during the 32nd day of Israeli attacks in Deir Al-Balah, Gaza on November 7, 2023. (Photo by Ashraf Amra/Anadolu via Getty Images)


Walden also said Google supports the Conflict-Sensitive Human Rights Due Diligence for ICT Companies, a voluntary framework that helps tech companies avoid the misuse of their products and services in war zones. Among the risks the document urges companies like Google to consider: “Use of products and services for government surveillance in violation of international human rights law norms causing immediate privacy and bodily security impacts (i.e., to locate, arrest, and imprison someone).” (Neither JustPeace Labs nor Business for Social Responsibility, which co-authored the due-diligence framework, replied to a request for comment.)

“Google and Corsight both have a responsibility to ensure that their products and services do not cause or contribute to human rights abuses,” said Bacciarelli. “I’d expect Google to take immediate action to end the use of Google Photos in this system, based on this news.”

Google employees taking part in the No Tech for Apartheid campaign, a worker-led protest movement against Project Nimbus, called on their employer to prevent the Israeli military from using Photos’s facial recognition to prosecute the war in Gaza.

“That the Israeli military is even weaponizing consumer technology like Google Photos, using the included facial recognition to identify Palestinians as part of their surveillance apparatus, indicates that the Israeli military will use any technology made available to them — unless Google takes steps to ensure their products don’t contribute to ethnic cleansing, occupation, and genocide,” the group said in a statement shared with The Intercept. “As Google workers, we demand that the company drop Project Nimbus immediately, and cease all activity that supports the Israeli government and military’s genocidal agenda to decimate Gaza.”

Project Nimbus

This would not be the first time Google’s purported human rights principles contradict its business practices — even just in Israel. Since 2021, Google has sold the Israeli military advanced cloud computing and machine-learning tools through its controversial “Project Nimbus” contract.

Unlike Google Photos, a free consumer product available to anyone, Project Nimbus is a bespoke software project tailored to the needs of the Israeli state. Both Nimbus and Google Photos’s face-matching prowess, however, are products of the company’s immense machine-learning resources.

The sale of these sophisticated tools to a government so regularly accused of committing human rights abuses and war crimes stands in opposition to Google’s AI Principles. The guidelines forbid AI uses that are likely to cause “harm,” including any application “whose purpose contravenes widely accepted principles of international law and human rights.”

Google has previously suggested its “principles” are in fact far narrower than they appear, applying only to “custom AI work” and not the general use of its products by third parties. “It means that our technology can be used fairly broadly by the military,” a company spokesperson told Defense One in 2022.

How, or if, Google ever turns its executive-blogged assurances into real-world consequences remains unclear. Ariel Koren, a former Google employee who said she was forced out of her job in 2022 after protesting Project Nimbus, placed Google’s silence on the Photos issue in a broader pattern of avoiding responsibility for how its technology is used.

“It is an understatement to say that aiding and abetting a genocide constitutes a violation of Google’s AI principles and terms of service,” Koren, now an organizer with No Tech for Apartheid, told The Intercept. “Even in the absence of public comment, Google’s actions have made it clear that the company’s public AI ethics principles hold no bearing or weight in Google Cloud’s business decisions, and that even complicity in genocide is not a barrier to the company’s ruthless pursuit of profit at any cost.”

The post Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List” appeared first on The Intercept.

The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack

Published by Anonymous (not verified) on Thu, 04/04/2024 - 10:05am



On March 29, Microsoft software developer Andres Freund was trying to optimize the performance of his computer when he noticed that one program was using an unexpected amount of processing power. Freund dove in to troubleshoot and “got suspicious.”

Eventually, Freund found the source of the problem, which he subsequently posted to a security mailing list: He had discovered a backdoor in XZ Utils, a data compression utility used by a wide array of Linux-based applications — a constellation of open-source software that, while often not consumer-facing, undergirds key computing and internet functions like secure communications between machines.

By inadvertently spotting the backdoor, which was buried deep in the code in binary test files, Freund averted a large-scale security catastrophe. Any machine running an operating system that included the backdoored utility and met the specifications laid out in the malicious code would have been vulnerable to compromise, allowing an attacker to potentially take control of the system.

The XZ backdoor was introduced by way of what is known as a software supply chain attack, which the National Counterintelligence and Security Center defines as “deliberate acts directed against the supply chains of software products themselves.” The attacks often employ complex ways of changing the source code of the programs, such as gaining unauthorized access to a developer’s system or through a malicious insider with legitimate access.

The malicious code in XZ Utils was introduced by a user calling themself Jia Tan, employing the handle JiaT75, according to Ars Technica and Wired. Tan had been a contributor to the XZ project since at least late 2021 and built trust with the community of developers working on it. Eventually, though the exact timeline is unclear, Tan ascended to being co-maintainer of the project, alongside the founder, Lasse Collin, allowing Tan to add code without needing the contributions to be approved. (Neither Tan nor Collin responded to requests for comment.)

The XZ backdoor betrays a sophisticated, meticulous operation. First, whoever led the attack identified a piece of software that would be embedded in a vast array of Linux operating systems. The development of this widely used technical utility was understaffed, with a single core maintainer, Collin, who later conceded he was unable to maintain XZ, providing the opportunity for another developer to step in. Then, after cultivating Collin’s trust over a period of years, Tan injected a backdoor into the utility. All these moves were underpinned by a technical proficiency evident in the creation and embedding of the backdoor code itself — code sophisticated enough that analysis of its precise functionality and capability is still ongoing.

“The care taken to hide the exploits in binary test files as well as the sheer time taken to gain a reputation in the open-source project to later exploit it are abnormally sophisticated,” said Molly, a system administrator at Electronic Frontier Foundation who goes by a mononym. “However, there isn’t any indication yet whether this was state sponsored, a hacking group, a rogue developer, or any combination of the above.”

Tan’s elevation to being a co-maintainer mostly played out on an email group where code developers — in the open-source, collaborative spirit of the Linux family of operating systems — exchange ideas and strategize to build applications.

On one email list, Collin faced a raft of complaints. A group of users, relatively new to the project, had protested that Collin was falling behind and not making updates to the software quickly enough. He should, some of these users said, hand over control of the project; some explicitly called for the addition of another maintainer. Conceding that he could no longer devote enough attention to the project, Collin made Tan a co-maintainer.

The users involved in the complaints seemed to materialize from nowhere — posting their messages from what appear to be recently created Proton Mail accounts, then disappearing. Their entire online presence is related to these brief interactions on the mailing list dedicated to XZ; their only recorded interest is in quickly ushering along updates to the software.

Various U.S. intelligence agencies have recently expressed interest in addressing software supply chain attacks. The Cybersecurity and Infrastructure Security Agency jumped into action after Freund’s discovery, publishing an alert about the XZ backdoor on March 29, the same day Freund publicly posted about it.

Open-Source Players

In the open-source world of Linux programming — and in the development of XZ Utils — collaboration is carried out through email groups and code repositories. Tan posted on the listserv, chatted with Collin, and contributed code changes on the code repository GitHub, which is owned by Microsoft. GitHub has since disabled access to the XZ repository and disabled Tan’s account. (In February, The Intercept and other digital news firms sued Microsoft and its partner OpenAI for using their journalism without permission or credit.)

Several other figures on the email list participated in efforts — appearing to be diffuse but coinciding in their aims and timing — to install the new co-maintainer, sometimes particularly pushing for Tan.

Later, on a listserv dedicated to Debian, one of the more popular of the Linux family of operating systems, another group of users advocated for the backdoored version of XZ Utils to be included in the operating system’s distribution.

These dedicated groups played discrete roles: In one case, complaining about the lack of progress on XZ Utils and pushing for speedier updates by installing a new co-maintainer; and, in the other case, pushing for updated versions to be quickly and widely distributed.

“I think the multiple green accounts seeming to coordinate on specific goals at key times fits the pattern of using networks of sock accounts for social engineering that we’ve seen all over social media,” said Molly, the EFF system administrator. “It’s very possible that the rogue dev, hacking group, or state sponsor employed this tactic as part of their plan to introduce the back door. Of course, it’s also possible these are just coincidences.”

The pattern seems to fit what’s known in intelligence parlance as “persona management,” the practice of creating and subsequently maintaining multiple fictitious identities. A leaked document from the defense contractor HBGary Federal outlines the meticulousness that may go into maintaining these fictive personas, including creating an elaborate online footprint — something which was decidedly missing from the accounts involved in the XZ timeline.

While these other users employed different emails, in some cases they used providers that give clues as to when their accounts were created. When they used Proton Mail accounts, for instance, the encryption keys associated with these accounts were created on the same day, or mere days before, the users’ first posts to the email group. (Users, however, can also generate new keys, meaning the email addresses may have been older than their current keys.)
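The key-age clue works because an OpenPGP public key embeds its own creation time: in a v4 public-key packet, a one-byte version number is followed by a four-byte big-endian timestamp (RFC 4880). The sketch below parses that field from a fabricated packet body; a real key would be fetched from a keyserver and would carry actual key material after this prefix:

```python
import struct
from datetime import datetime, timezone

def key_creation_time(packet_body: bytes) -> datetime:
    # A v4 OpenPGP public-key packet body: 1-byte version, then a 4-byte
    # big-endian creation timestamp (seconds since the Unix epoch).
    assert packet_body[0] == 4, "only v4 keys handled in this sketch"
    (ts,) = struct.unpack(">I", packet_body[1:5])
    return datetime.fromtimestamp(ts, tz=timezone.utc)

# Fabricated body: version 4, created 2022-04-19 00:00 UTC, algorithm 22
# (EdDSA). A real packet would continue with the public key's MPI data.
fake_body = bytes([4]) + struct.pack(">I", 1650326400) + bytes([22])
print(key_creation_time(fake_body).date())  # → 2022-04-19
```

As the caveat above notes, a user can regenerate keys at any time, so a recent creation date is suggestive, not conclusive.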

One of the earliest of these users on the list used the name Jigar Kumar. Kumar appears on the XZ development mailing list in April 2022, complaining that some features of the tool are confusing. Tan promptly responded to the comment. (Kumar did not respond to a request for comment.)

Kumar repeatedly popped up with subsequent complaints, sometimes building off others’ discontent. After Dennis Ens appeared on the same mailing list, Ens also complained about the lack of response to one of his messages. Collin acknowledged things were piling up and mentioned Tan had been helping him off list; he might soon have “a bigger role with XZ Utils.” (Ens did not respond to a request for comment.)

After another complaint from Kumar calling for a new maintainer, Collin responded: “I haven’t lost interest but my ability to care has been fairly limited mostly due to longterm mental health issues but also due to some other things. Recently I’ve worked off-list a bit with Jia Tan on XZ Utils and perhaps he will have a bigger role in the future, we’ll see.”

The pressure kept coming. “As I have hinted in earlier emails, Jia Tan may have a bigger role in the project in the future,” Collin responded after Ens suggested he hand off some responsibilities. “He has been helping a lot off-list and is practically a co-maintainer already. :-)”

Ens then went quiet for two years — reemerging around the time the bulk of the malicious backdoor code was installed in the XZ software. Ens kept urging ever quicker updates.

After Collin eventually made Tan a co-maintainer, there was a subsequent push to get XZ Utils — which by now had the backdoor — distributed widely. After first showing up on the XZ GitHub repository in June 2023, another figure calling themselves Hans Jansen pushed this March for the new version of XZ to be included in Debian Linux. (Jansen did not respond to a request for comment.)

An employee at Red Hat, a software firm owned by IBM, which sponsors and helps maintain Fedora, another popular Linux operating system, described Tan trying to convince him to help add the compromised XZ Utils to Fedora.

These popular Linux operating systems account for millions of computer users — meaning that huge numbers of users would have been open to compromise if Freund, the developer, had not discovered the backdoor.

“While the possibility of socially engineering backdoors in critical software seems like an indictment of open-source projects, it’s not exclusive to open source and could happen anywhere,” said Molly. “In fact, the ability for the engineer to discover this backdoor before it was shipped was only possible due to the open nature of the project.”

The post The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack appeared first on The Intercept.

Congress Has a Chance to Rein In Police Use of Surveillance Tech

Published by Anonymous (not verified) on Wed, 03/04/2024 - 1:00am

Hardware that breaks into your phone; software that monitors you on the internet; systems that can recognize your face and track your car: The New York State Police are drowning in surveillance tech.

Last year alone, the Troopers signed at least $15 million in contracts for powerful new surveillance tools, according to a New York Focus and Intercept review of state data. While expansive, the State Police’s acquisitions aren’t unique among state and local law enforcement. Departments across the country are buying tools to gobble up civilians’ personal data, plus increasingly accessible technology to synthesize it.

“It’s a wild west,” said Sean Vitka, a privacy advocate and policy counsel for Demand Progress. “We’re seeing an industry increasingly tailor itself toward enabling mass warrantless surveillance.”

So far, local officials haven’t done much about it. Surveillance technology has far outpaced traditional privacy laws, and legislators have largely failed to catch up. In New York, lawmakers launched a years-in-the-making legislative campaign last year to rein in police intrusion — but with Gov. Kathy Hochul pushing for tough-on-crime policies instead, none of their bills have made it out of committee.

So New York privacy proponents are turning to Congress. A heated congressional debate over the future of a spying law offers an opportunity to severely curtail state and local police surveillance through federal regulation.

At issue is Section 702 of the Foreign Intelligence Surveillance Act, or FISA, which expires on April 19. The law is notorious for a provision that allows the feds to access Americans’ communications swept up in intelligence agencies’ international spying. As some members of Congress work to close that “backdoor,” they’re also pushing to close a so-called data broker loophole that allows law enforcement to buy civilians’ personal data from private vendors without a warrant. Closing that loophole would likely make much of the New York State Police’s recently purchased surveillance tech illegal.

Members of the House and Senate judiciary committees, who have introduced bills to close the loopholes, are leading the latest bipartisan charge for reform. Members of the House and Senate intelligence committees, meanwhile, are pushing to keep the warrant workarounds in place. The Democratic leaders of both chambers — House Minority Leader Hakeem Jeffries and Senate Majority Leader Chuck Schumer, both from New York — have so far kept quiet on the spying debate. As Section 702’s expiration date nears, local advocates are trying to get them on board.

On Tuesday, a group of 33 organizations, many from New York, sent a letter to Jeffries and Schumer urging them to close the loopholes. More than 100 grassroots and civil rights groups from across the country sent the lawmakers a similar petition this week.

“These products are deeply invasive, discriminatory, and ripe for abuse.”

“These products are deeply invasive, discriminatory, and ripe for abuse,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, which signed both letters. They reach “into nearly every aspect of our digital and physical lives.”

Jeffries’s office declined to comment. Schumer’s office did not respond to a request for comment before publication.

Both letters cited a Wired report from last month, which revealed that Republican Rep. Mike Turner of Ohio, the chair of the House Intelligence Committee, pointed to New York City protests against Israel’s war on Gaza to argue against the spying law’s reform. Sources told Wired that in a presentation to fellow House Republicans, Turner implied that protesters in New York had ties to Hamas — and therefore should remain subject to Section 702’s warrantless surveillance backdoor. An intelligence committee spokesperson disputed the characterization of Turner’s remarks, but said that the protests had “responded to what appears to be a Hamas solicitation.”

“The real-world impact of such surveillance on protest and dissent is profound and undeniable,” read the New York letter, spearheaded by Empire State Indivisible and NYU Law School’s Brennan Center for Justice. “With Rep. Turner having placed your own constituents in the crosshairs, your leadership is urgently needed.”

Police surveillance today looks much different than it did 10, five, or even three years ago. A report from the U.S. Office of the Director of National Intelligence, declassified last year, put it succinctly: “The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits.”

That report called specific attention to the “data broker loophole”: law enforcement’s practice of buying data from brokers that it would otherwise need a warrant to obtain. The New York State Police have taken greater and greater advantage of the loophole in recent years, buying up seemingly as much tech and data as they can get their hands on.

In 2021, the State Police purchased a subscription to ShadowDragon, software designed to scan websites for clues about targeted individuals and synthesize the findings into in-depth profiles.



“I want to know everything about the suspect: Where do they get their coffee? Where do they get their gas? Where’s their electric bill? Who’s their mom? Who’s their dad?” ShadowDragon’s founder said in an interview unearthed by The Intercept in 2021. The company claims that its software can anticipate crime and violence — a practice, trendy among law enforcement tech companies, known as “predictive policing,” which ethicists and watchdogs warn can be inaccurate and biased.

The State Police renewed their ShadowDragon subscription in January of last year, shelling out $308,000 for a three-year contract. That was one of at least nine web surveillance tools the State Police signed contracts for last year, worth at least $2.1 million in total.

Among the other firms the Troopers contracted with are Cognyte ($310,000 for a three-year contract); Whooster ($110,000 over three years); Skopenow ($280,000); Griffeye ($209,000); the credit reporting agency TransUnion ($159,000); and Echosec ($262,000 over two years), which specializes in using “global social media, discussions, and defense forums” to geolocate people. They also bought Cobwebs software, a mass web surveillance tool created by former Israeli military and intelligence officials — part of that country’s multibillion-dollar surveillance tech industry, which often tests its products on Palestinians.

That’s likely not the full extent of the State Police’s third-party-brokered surveillance arsenal. As New York Focus revealed last year, the State Police have for years been shopping around for programs that take in mass quantities of data from social media, sift through them, and then feed insights — including users’ real-time location information — to law enforcement. Those contracts don’t show up in the state contract data, suggesting that the public disclosures are incomplete. Depending on how the programs obtain their data, closing the data broker loophole could bar their sale to law enforcement.

The State Police refused to answer questions about how their officers use surveillance tools.

“We do not discuss specific strategies or technologies as it provides a blueprint to criminals which puts our members and the public at risk,” State Police spokesperson Deanna Cohen said in an email.

Closing the data broker loophole wouldn’t entirely curtail the police surveillance tech boom. The New York State Police have also been deepening their investments in tech the FISA reforms wouldn’t touch, like aerial drones and automatic license plate readers, which store data from billions of scans to create searchable vehicle location databases.

They’ve also spent millions on mobile device forensic tools, or MDFTs, powerful hacking hardware and software that allow users to download full, searchable copies of a cellphone’s data, including social media messages, emails, web and search histories, and minute-by-minute location information.

Watchdogs warn of potential abuses accompanying the proliferation of MDFTs. The Israeli MDFT company Cellebrite has serviced repressive authorities around the globe, including police in Botswana, who used it to access a journalist’s list of sources, and in Hong Kong, where police deployed it against leaders of the pro-democracy protest movement.

In the United States, law enforcement officials argue that more expansive civil liberties protections prevent them from misusing the tech. But according to the technology advocacy organization Upturn, around half of police departments that have used MDFTs have done so with no internal policies in place. Meanwhile, cops have manipulated people into consenting to having their phones cracked without a warrant — for instance, by having them sign generic consent forms that don’t explain that the police will be able to access the entirety of their phone’s data.

As of October 2020, New York police departments known to use MDFTs had spent less than $2.2 million on the tools, and no known MDFT-using department in the country had passed the million-dollar mark, according to a report by Upturn.

Between September 2022 and November 2023, however, the State Police signed more than $12.1 million in contracts for MDFT products and training, New York Focus and The Intercept found. They signed a five-year, $4 million agreement with Cellebrite, while other contracts went to MDFT firms Magnet Forensics and Teel Technologies. The various products attack phones in different ways, and thus have different strengths and weaknesses depending on the type of phone, according to Emma Weil, senior policy analyst at Upturn.

Cellebrite’s tech initially costs around $10,000–$30,000 for an official license, then tens or low hundreds of thousands of dollars for the ability to hack into a set number of phones. According to Weil, the State Police’s inflated bill could mean either that Cellebrite has dramatically increased its pricing, or that the Troopers are “getting more intensive support to unlock more difficult phones.”

If Congress passes the Section 702 renewal without addressing its warrant workarounds, state and local legislation will become the main battleground in the fight against the data broker loophole. In New York, state lawmakers have introduced at least 14 bills as part of their campaign to rein in police surveillance, but none have gotten off the ground.

If the legislature passes some of the surveillance bills, they may well face opposition when they hit the governor’s desk. Hochul has extolled the virtues of police surveillance technology, and committed to expanding law enforcement’s ability to disseminate the information gathered by it. Every year since entering the governor’s mansion, she has proposed roughly doubling funding to New York’s Crime Analysis Center Network, a series of police intelligence hubs that distribute information to local and federal law enforcement, and she’s repeatedly boosted funding to the State Police’s social media surveillance teams.

The State Police has “ramped up its monitoring,” she said in November. “All this is in response to our desire, our strong commitment, to ensure that not only do New Yorkers be safe — but they also feel safe.”

This story was published in partnership with New York Focus, a nonprofit news site investigating how power works in New York state. Sign up for their newsletter here.

The post Congress Has a Chance to Rein In Police Use of Surveillance Tech appeared first on The Intercept.