Technology

Inside the UK’s First Open-Access, Pay-As-You-Go Factory

Published by Anonymous (not verified) on Thu, 11/04/2024 - 6:00pm in

Entrepreneurs Alisha Fredriksson and Roujia Wen spent months in 2022 scouring London for the right space to develop a prototype. Their big idea — to capture carbon emissions from cargo ships by trapping the gas amongst calcium oxide pebbles, through a system fitted on board — required a big, well-equipped space. 
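
For reference, the chemistry implied by that description is the well-known calcium looping reaction: calcium oxide binds carbon dioxide to form solid calcium carbonate, or limestone, which can later be heated to release the gas for offloading. This is an inference from the description above, not a detail confirmed by Seabound.

```latex
% Calcium looping: calcium oxide binds CO2 as solid calcium carbonate
% (limestone); heating the carbonate later re-releases the gas for storage.
\mathrm{CaO(s) + CO_2(g) \longrightarrow CaCO_3(s)}
```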

The options their search yielded were less than appealing. Large warehouses with the high ceilings Fredriksson and Wen needed to build their venture, Seabound, were typically empty, leaving tenants to fully equip them with the right machinery, plus the electricity to power it. They tended to be in industrial zones with only the likes of auto shops or dark kitchens for neighbors, and they usually required signing a five-year lease.

Seabound co-founders Alisha Fredriksson and Roujia Wen. Courtesy of Seabound

“As a six-month-old startup at the time, it was a scary proposition,” Fredriksson recalls.

Then Seabound found BLOQS, a 32,000-square-foot converted warehouse in the north London suburb of Enfield, fully kitted out with £1.3 million (around $1.7 million) worth of light industrial equipment for all kinds of manufacturing, including wood processing and metal fabrication, laser cutting and engraving, 3D printing, sewing machines, spray painting and more. If that didn’t already make the case for moving in, the flexible membership structure then quickly sealed the deal for Fredriksson and Wen. 

The initial sign-up is free, with members simply paying a daily rate for the machinery they need to use, as well as for flexible office and storage space if they need it. Raw materials are available to purchase too, price-matched with local suppliers. And if members need to learn to use a particular piece of equipment, they can pay for training. An added bonus is the on-site restaurant, where an award-winning chef serves a seasonal and affordable Mediterranean menu. Yet the biggest draw for the Seabound team was the community of 1,000 other like-minded members.

“It’s a fun place to go to work every day. We have a whole ecosystem of people that we’re a part of. Whereas if we were in our own warehouse on some industrial site, I don’t think we would have friends there — it would be more lonely,” says Fredriksson.

The expertise available at BLOQS has also allowed Seabound to tap into support on an as-needed basis. “We’ve actually also been able to keep our team very lean, because we’ve been able to occasionally work with people at BLOQS as a kind of ‘surge support,’” Fredriksson says. “For instance, there are technicians at BLOQS that have helped us, and there are electricians who are members that we’ve been able to contract with. So we have flexibility in terms of space and resourcing.”

The Seabound co-founders tested their prototype on board an 800-foot commercial container ship in late 2023. Courtesy of Seabound

Seabound was able to leverage everything on offer at BLOQS to test its carbon capture technology, with the team spending two months in late 2023 on board an 800-foot commercial container ship. The Seabound prototype successfully captured around one metric ton of CO2 per day, meaning the team, now back on dry land at BLOQS, can move into their second phase of research, development and testing, aiming to deploy their next system onto a ship in 2025.

BLOQS co-founder Al Parra feels Seabound is one of the best examples of why he and his partners set up the space, which he describes as having “its own dynamism,” to drive innovation. “What this women-led climate tech engineering group is doing is incredible,” says Parra. “They started at BLOQS because they couldn’t take on the risk of their own premises. That very often is the case, that people come to us because they have a physical need of something that we provide, but then they stay because of the community. They’re in this confluence and mix of abilities, skills and knowledge. If you don’t know how to do something, you can be damn sure you’re one handshake away from somebody who does.”

As the UK’s largest open-access professional maker space — and the country’s first pay-as-you-go space of its kind — BLOQS has created 380 full-time jobs and has turned over a collective £15 million a year (around $19.1 million) since it launched in 2012. (It was then in a different location and moved to Enfield in 2022.)

Al Parra is BLOQS’ co-founder and director. Courtesy of BLOQS

As an open-access maker space in London, BLOQS isn’t alone. Thirty-eight maker spaces in the UK capital are listed on the Open Workshop Network, while 3D printing support organization CREATE Education lists community-centric spaces across the country on its site. Discipline-specific workshops also exist for professionals. But where BLOQS is unique, argues Parra, is that it’s the only cross-discipline site out of which someone could run a business. 

“We wanted people to not just make whatever it is that they needed to, but we wanted to provide a facility where somebody was able to do what it is that the world needs,” says Parra.  

Parra has observed that BLOQS members are able to leapfrog the initial set-up period of building up manufacturing contacts, which can take up to 10 years. 

“We simplify access to things which are really expensive. If you don’t come from a privileged background, it’s difficult to get together that money. At BLOQS, you can walk straight in, from something like a building site, from a course or degree, or you can transition from another career, and we’ve got all of the resources,” says Parra. 

“By making all of the technology that we’ve got available and affordable, we are diminishing the barriers between that and the creative mind.”

The DEMAND team at work. Courtesy of DEMAND

Some entrepreneurs see BLOQS as a testing ground for new ideas and a stepping stone to more permanent, private premises, while others call it home for the foreseeable future. Seabound’s future, for example, looks promising enough that Fredriksson is already forecasting a need for a larger separate space to accommodate dedicated facilities as well as manufacturing partners, although research and development, she thinks, could still be done at BLOQS.

The charity DEMAND, meanwhile, which creates assistive products for people with disabilities, has made its journey to BLOQS in reverse. Having spent the previous 20 years operating out of its own factory just north of London, the team migrated to BLOQS in 2022 after deciding its impact could be greater in a shared space. Spending time and money on building and machine maintenance was holding the organization back, and with no other similar outfit nearby, the team felt isolated.

“The combination of flexible space and industrial-grade machinery has had a lot of impact on our speed and efficiency. And having access to the community makes it feel like we’re in a much bigger organization — we can lean on, and be inspired by, other people,” says Lynnette Smith, DEMAND’s head of creative.

DEMAND’s push-along “big car” is designed for children with balance issues who are unable to ride a bicycle. Courtesy of DEMAND

“Being here has definitely helped us maximize the impact of each thing we design. We were very skilled at making one of something for a specific individual. While that’s still the purpose of DEMAND, to make something for an individual need, we’ve now got the machinery that helps us make much more repeatable things.”

DEMAND products refined at BLOQS include a ramp for boccia, a Paralympic sport in which athletes use the ramp to propel their ball as close to the target ball as possible, as well as a “big car,” a push-along car designed for children with balance issues who are unable to ride a bicycle. BLOQS’ machinery has reduced human error and accelerated the production process, says Smith. The technology at BLOQS has also streamlined the production of an eye-led communication aid, originally designed for one user, Mark, with whom DEMAND has since collaborated to enable it to be reproduced for others.

Growing the charity in this way is one of Smith’s key goals, as is collaborating more closely with users like Mark.

“We would love to keep working with BLOQS to make sure that accessibility happens, potentially also in new places that BLOQS open as a partnership — that’s something we’d love to see the impact of,” says Smith.

Expansion is definitely in the cards, according to Parra, with the BLOQS team assessing the feasibility of a second site in South London, Birmingham, Liverpool, Manchester or Glasgow, to open in 2025. Beyond the UK, Parra sees global demand for spaces like BLOQS. Similar models are already emerging, like South Africa’s Made In Workshop, Ireland’s Benchspace and Artisans Asylum in the US, all offering flexible, affordable access to a range of machinery.

Co-founder Al Parra sees BLOQS as a model that could be replicated in other cities. Courtesy of BLOQS

Parra envisions real potential in developing countries, where microfinance schemes have become common in helping small-scale entrepreneurs build businesses and a livelihood.

“The developing world, where everybody’s one or two generations away from a village, understands this concept of sharing resources so intrinsically, that we’re getting interest from South Asia, Africa and Eastern Europe [to open another BLOQS],” says Parra.

“We’re offering a model for how we can make the things that we need, in a way that is sustainable.”

The post Inside the UK’s First Open-Access, Pay-As-You-Go Factory appeared first on Reasons to be Cheerful.

Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military

Published by Anonymous (not verified) on Wed, 10/04/2024 - 10:00pm in

Tags 

Technology

Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI silently ended its prohibition against military work.

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.

OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.

Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally-binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”

The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble germane, real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.
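
As a generic illustration of that training pattern (not of anything in the Microsoft deck, which gives no implementation details), synthetic images are typically just mixed into an ordinary supervised pipeline. In the Python sketch below, the folder paths, class labels, and model choice are all assumptions:

```python
# Generic sketch of the synthetic-data training pattern described above.
# Paths, class labels, and model choice are illustrative assumptions,
# not details from the Microsoft presentation.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Two labeled folders: scarce real imagery plus bulk generator-made imagery
# (assumes both folders contain the same class subdirectories).
real = datasets.ImageFolder("data/real", transform=preprocess)
synthetic = datasets.ImageFolder("data/synthetic", transform=preprocess)
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(real.classes))  # new classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the mixed real/synthetic dataset
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```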

Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “Dall-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of Battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?”

In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

Lugo, mission commander of the Pentagon’s generative AI task force and member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a “technology specialist” working on the Space Force and Air Force.

The Air Force is currently building the Advanced Battle Management System, its portion of a broader multibillion-dollar Pentagon project called the Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon envisions a near future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.

On April 3, U.S. Central Command revealed it had already begun using elements of JADC2 in the Middle East.

The Department of Defense didn’t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that “the [Chief Digital and Artificial Intelligence Office’s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission areas.”

While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept’s January report on OpenAI’s military-industrial about-face, the company’s spokesperson Niko Felix said that even under the loosened language, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

Whether the Pentagon’s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to arguments that the company that helps build the gun or trains the shooter is not responsible for where it’s aimed or pulling the trigger. “They may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “But that would be a spurious distinction in my view, because the point is you’re contributing to preparation for warfighting.”

Unlike OpenAI, Microsoft has little pretense about forgoing harm in its “responsible AI” document and openly promotes the military use of its machine learning tools.

Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. In a January interview at Davos responding to The Intercept’s reporting, OpenAI vice president of global affairs Anna Makanju assured panel attendees that the company’s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company’s groundbreaking machine learning tools were still forbidden from causing harm or destruction.

Contributing to the development of a battle management system, however, would place OpenAI’s military work far closer to warfare itself. While OpenAI’s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, Khlaaf, the machine learning safety engineer, said, its “use in other systems, such as military operation planning or battlefield assessments” would ultimately impact “where weapons are deployed or missions are carried out.”

Indeed, it’s difficult to imagine a battle whose primary purpose isn’t causing bodily harm and property damage. An Air Force press release from March, for example, describes a recent battle management system exercise as delivering “lethality at the speed of data.”

Other materials from the AI literacy seminar series make clear that “harm” is, ultimately, the point. A slide from a welcome presentation given the day before Microsoft’s asks the question, “Why should we care?” The answer: “We have to kill bad guys.” In a nod to the “literacy” aspect of the seminar, the slide adds, “We need to know what we’re talking about… and we don’t yet.”

Update: April 11, 2024
This article was updated to clarify Microsoft’s promotion of its work with the Department of Defense.

The post Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military appeared first on The Intercept.

Forget a Ban — Why Are Journalists Using TikTok in the First Place?

Published by Anonymous (not verified) on Mon, 08/04/2024 - 12:00am in

Tags 

Technology

The TikTok logo displayed on a laptop screen with a glowing keyboard in Krakow, Poland, on March 3, 2024.
Photo: Klaudia Radecka/NurPhoto via Getty Images

As far as I know, there are no laws against eating broken glass. You’re free to doomscroll through your cabinets, smash your favorite water cup, then scarf down the shards.

A ban on eating broken glass would be overwhelmingly irrelevant, since most people just don’t do it, and for good reason. Unfortunately, you can’t say the same about another dangerous habit: TikTok.

As a security researcher, I can’t help but hate TikTok, just like I hate all social media, for creating unnecessary personal exposure.

As a security researcher working in journalism, I get particular heartburn from one group of the video-sharing app’s many, many users. This group is — you guessed it — my beloved colleagues, the journalists.

TikTok, of course, isn’t the only app that poses risks for journalists, but it’s been bizarre to watch reporters with sources to protect express concern about a TikTok ban when they shouldn’t be using the platform in the first place. TikTok officials, after all, have explicitly targeted reporters in attempts to reveal their sources.

My colleagues seem to nonetheless be dressing up as bullseyes.

Ignoring TikTok’s Record

Impassioned pleas by reporters to not ban TikTok curiously omit TikTok’s most egregious attacks on reporters.

In his defense of TikTok, the Daily Beast’s Brad Polumbo offers a disclaimer in the first half of the headline — “TikTok Is Bad. Banning It Would Be Much Worse” — but never expands upon why. Instead, the bulk of the piece offers an apologia for TikTok’s parent company, ByteDance.

Meanwhile, Vox’s A.W. Ohlheiser expatiates on the “both/and” of TikTok, highlighting its many perceived benefits and ills. And yet the one specific ill that could have the most impact on Ohlheiser and other reporters is absent from the laundry list of downsides.

The record is well established. In an attempt to identify reporters’ sources, ByteDance accessed IP addresses and other user data of several journalists, according to a Forbes investigation. The intention seems to have been to track the location of the reporters to see if they were in the same locations as TikTok employees who may have been sources for stories about TikTok’s links to China.

“TikTok does not collect precise GPS location information from US users, meaning TikTok could not monitor US users in the way the article suggested,” the TikTok communication team’s account posted on X in response to Forbes’s initial reporting. “TikTok has never been used to ‘target’ any members of the U.S. government, activists, public figures or journalists.”

Forbes kept digging, and its subsequent investigation found that an internal email “acknowledged that TikTok had been used in exactly this way,” as reporter Emily Baker-White put it.

TikTok conducted various probes into the company’s accessing of U.S. user data; officials were fired and at least one resigned, according to Forbes. That doesn’t change the basic facts: Not only did TikTok surveil reporters in attempts to identify their sources, but the company also proceeded to publicly deny having done so.

And Now, Service Journalism for Journalists

For my journalism colleagues, there may well be times when you need to check TikTok, for instance when researching a story. If this is the case, you should follow the operational security best practice of compartmentalization: keeping various items separated from one another.

In other words, put TikTok on a separate “burner” device, which doesn’t have anything sensitive on it, like your sources saved in its contacts. There’s no evidence TikTok can see, for example, your chat histories, but it can, according to the security research firm Proofpoint, access your device’s location data, contacts list, as well as camera and microphone. And, as a security researcher, I like to be as safe as possible.

And keep the burner device in a separate location from your regular phone. Don’t walk around with both phones turned on and connected to a cellular or Wi-Fi network and, for the love of everything holy, don’t take the burner to sensitive source meetings.

You can also limit the permissions that your device gives to TikTok — so that you’re not handing the app your aforementioned location data, contacts list, and camera access — and you should. Grant the app only the permissions it needs to run, and run it only as much as your research requires.
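
On an Android burner, for example, that kind of permission lockdown can be scripted over adb rather than tapped through settings. A minimal sketch, assuming a connected device; the package name is TikTok’s usual Android identifier, and the permission list is illustrative rather than exhaustive:

```python
# Sketch: revoke runtime permissions TikTok doesn't need for passive research.
# Assumes an Android burner connected over adb with USB debugging enabled.
import subprocess

PACKAGE = "com.zhiliaoapp.musically"  # TikTok's Android package name
PERMISSIONS = [
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
]

for perm in PERMISSIONS:
    # `adb shell pm revoke` withdraws a previously granted runtime permission
    subprocess.run(["adb", "shell", "pm", "revoke", PACKAGE, perm], check=False)
```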

And don’t forget, this is all for your research. When you’re done looking up whatever in our hellscape tech dystopia has brought you to this tremendous time suck, the burner device should be wiped and restored to factory defaults.

The security and disinformation risks posed to journalists are, of course, not unique to TikTok. They permeate, one way or another, every single social media platform.

That doesn’t explain journalists’ inscrutable defense of a medium that is actively working against them. It’s as clear as your favorite water cup.

Editor’s note: You can follow The Intercept on TikTok here.

The post Forget a Ban — Why Are Journalists Using TikTok in the First Place? appeared first on The Intercept.

Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List”

Published by Anonymous (not verified) on Fri, 05/04/2024 - 10:00pm in

Tags 

Technology, World

The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move throughout the ravaged territory, attempting to flee the ongoing bombardment and seeking sustenance for their families.

The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into Google Photos, the popular consumer image organization platform. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israelis make a “hit list” of alleged Hamas fighters who participated in the October 7 attack.

The mass surveillance of Palestinian faces resulting from Israel’s efforts to identify Hamas members has caught up thousands of Gaza residents since the October 7 attack. Many of those arrested or imprisoned, often with little or no evidence, later said they had been brutally interrogated or tortured. In its facial recognition story, the Times pointed to Palestinian poet Mosab Abu Toha, whose arrest and beating at the hands of the Israeli military began with its use of facial recognition. Abu Toha, later released without being charged with any crime, told the paper that Israeli soldiers told him his facial recognition-enabled arrest had been a “mistake.”

Putting aside questions of accuracy — facial recognition systems are notoriously less accurate on nonwhite faces — the use of Google Photos’s machine learning-powered analysis features to place civilians under military scrutiny, or worse, is at odds with the company’s clearly stated rules. Under the header “Dangerous and Illegal Activities,” Google warns that Google Photos cannot be used “to promote activities, goods, services, or information that cause serious and immediate harm to people.”

Asked how a prohibition against using Google Photos to harm people was compatible with the Israel military’s use of Google Photos to create a “hit list,” company spokesperson Joshua Cruz declined to answer, stating only that “Google Photos is a free product which is widely available to the public that helps you organize photos by grouping similar faces, so you can label people to easily find old photos. It does not provide identities for unknown people in photographs.” (Cruz did not respond to repeated subsequent attempts to clarify Google’s position.)

It’s unclear how such prohibitions — or the company’s long-standing public commitments to human rights — are being applied to Israel’s military.

“It depends how Google interprets ‘serious and immediate harm’ and ‘illegal activity,’ but facial recognition surveillance of this type undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, assembly rights, and more,” said Anna Bacciarelli, the associate tech director at Human Rights Watch. “Given the context in which this technology is being used by Israeli forces, amid widespread, ongoing, and systematic denial of the human rights of people in Gaza, I would hope that Google would take appropriate action.”

Doing Good or Doing Google?

In addition to its terms of service ban against using Google Photos to cause harm to people, the company has for many years claimed to embrace various global human rights standards.

“Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” wrote Alexandria Walden, the company’s global head of human rights, in a 2022 blog post. “That’s why our products, business operations, and decision-making around emerging technologies are all informed by our Human Rights Program and deep commitment to increase access to information and create new opportunities for people around the world.”

This deep commitment includes, according to the company, upholding the Universal Declaration of Human Rights — which forbids torture — and the U.N. Guiding Principles on Business and Human Rights, which notes that conflicts over territory produce some of the worst rights abuses.

The Israeli military’s use of a free, publicly available Google product like Photos raises questions about these corporate human rights commitments, and the extent to which the company is willing to actually act upon them. Google says that it endorses and subscribes to the U.N. Guiding Principles on Business and Human Rights, a framework that calls on corporations “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.”

Walden also said Google supports the Conflict-Sensitive Human Rights Due Diligence for ICT Companies, a voluntary framework that helps tech companies avoid the misuse of their products and services in war zones. Among the document’s many recommendations is that companies like Google weigh the “Use of products and services for government surveillance in violation of international human rights law norms causing immediate privacy and bodily security impacts (i.e., to locate, arrest, and imprison someone).” (Neither JustPeace Labs nor Business for Social Responsibility, which co-authored the due-diligence framework, replied to a request for comment.)

“Google and Corsight both have a responsibility to ensure that their products and services do not cause or contribute to human rights abuses,” said Bacciarelli. “I’d expect Google to take immediate action to end the use of Google Photos in this system, based on this news.”

Google employees taking part in the No Tech for Apartheid campaign, a worker-led protest movement against Project Nimbus, called on their employer to prevent the Israeli military from using Photos’s facial recognition to prosecute the war in Gaza.

“That the Israeli military is even weaponizing consumer technology like Google Photos, using the included facial recognition to identify Palestinians as part of their surveillance apparatus, indicates that the Israeli military will use any technology made available to them — unless Google takes steps to ensure their products don’t contribute to ethnic cleansing, occupation, and genocide,” the group said in a statement shared with The Intercept. “As Google workers, we demand that the company drop Project Nimbus immediately, and cease all activity that supports the Israeli government and military’s genocidal agenda to decimate Gaza.”

Project Nimbus

This would not be the first time Google’s purported human rights principles contradict its business practices — even just in Israel. Since 2021, Google has sold the Israeli military advanced cloud computing and machine-learning tools through its controversial “Project Nimbus” contract.

Unlike Google Photos, a free consumer product available to anyone, Project Nimbus is a bespoke software project tailored to the needs of the Israeli state. Both Nimbus and Google Photos’s face-matching prowess, however, are products of the company’s immense machine-learning resources.

The sale of these sophisticated tools to a government so regularly accused of committing human rights abuses and war crimes stands in opposition to Google’s AI Principles. The guidelines forbid AI uses that are likely to cause “harm,” including any application “whose purpose contravenes widely accepted principles of international law and human rights.”

Google has previously suggested its “principles” are in fact far narrower than they appear, applying only to “custom AI work” and not the general use of its products by third parties. “It means that our technology can be used fairly broadly by the military,” a company spokesperson told Defense One in 2022.

How, or if, Google ever turns its executive-blogged assurances into real-world consequences remains unclear. Ariel Koren, a former Google employee who said she was forced out of her job in 2022 after protesting Project Nimbus, placed Google’s silence on the Photos issue in a broader pattern of avoiding responsibility for how its technology is used.

“It is an understatement to say that aiding and abetting a genocide constitutes a violation of Google’s AI principles and terms of service,” Koren, now an organizer with No Tech for Apartheid, told The Intercept. “Even in the absence of public comment, Google’s actions have made it clear that the company’s public AI ethics principles hold no bearing or weight in Google Cloud’s business decisions, and that even complicity in genocide is not a barrier to the company’s ruthless pursuit of profit at any cost.”

The post Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List” appeared first on The Intercept.

The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack

Published by Anonymous (not verified) on Thu, 04/04/2024 - 10:05am in

Tags 

Technology

On March 29, Microsoft software developer Andres Freund was trying to optimize the performance of his computer when he noticed that one program was using an unexpected amount of processing power. Freund dove in to troubleshoot and “got suspicious.”

Eventually, Freund found the source of the problem, which he subsequently posted to a security mailing list: He had discovered a backdoor in XZ Utils, a data compression utility used by a wide array of Linux-based computer applications — a constellation of open-source software that, while often not consumer-facing, undergirds key computing and internet functions like secure communications between machines.

By inadvertently spotting the backdoor, which was buried deep in the code in binary test files, Freund averted a large-scale security catastrophe. Any machine running an operating system that included the backdoored utility and met the specifications laid out in the malicious code would have been vulnerable to compromise, allowing an attacker to potentially take control of the system.
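
A practical aside: the malicious code shipped only in XZ Utils releases 5.6.0 and 5.6.1 (a detail from the public disclosure, not from this article), so the immediate check many administrators ran was a simple version comparison. A minimal Python sketch, assuming an xz binary on the PATH; a real audit would also check the liblzma package shipped by the distribution:

```python
# Minimal post-disclosure check: compare the installed xz version against
# the two releases known to contain the backdoor (5.6.0 and 5.6.1).
import re
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}

def installed_xz_version() -> str | None:
    try:
        out = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None  # xz missing or not runnable
    match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", out)
    return match.group(1) if match else None

version = installed_xz_version()
if version in BACKDOORED:
    print(f"xz {version}: matches a known backdoored release -- update immediately")
elif version:
    print(f"xz {version}: not one of the known backdoored releases")
else:
    print("xz not found or version not parseable")
```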

The XZ backdoor was introduced by way of what is known as a software supply chain attack, which the National Counterintelligence and Security Center defines as “deliberate acts directed against the supply chains of software products themselves.” The attacks often employ complex ways of changing the source code of the programs, such as gaining unauthorized access to a developer’s system or through a malicious insider with legitimate access.

The malicious code in XZ Utils was introduced by a user calling themself Jia Tan, employing the handle JiaT75, according to Ars Technica and Wired. Tan had been a contributor to the XZ project since at least late 2021 and built trust with the community of developers working on it. Eventually, though the exact timeline is unclear, Tan ascended to being co-maintainer of the project, alongside the founder, Lasse Collin, allowing Tan to add code without needing the contributions to be approved. (Neither Tan nor Collin responded to requests for comment.)

The XZ backdoor betrays a sophisticated, meticulous operation. First, whoever led the attack identified a piece of software that would be embedded in a vast array of Linux operating systems. The development of this widely used technical utility was understaffed, with a single core maintainer, Collin, who later conceded he was unable to maintain XZ, providing the opportunity for another developer to step in. Then, after cultivating Collin’s trust over a period of years, Tan injected a backdoor into the utility. All these moves were underpinned by a technical proficiency that enabled the creation and embedding of the actual backdoor code — code sophisticated enough that analysis of its precise functionality and capability is still ongoing.

“The care taken to hide the exploits in binary test files as well as the sheer time taken to gain a reputation in the open-source project to later exploit it are abnormally sophisticated,” said Molly, a system administrator at Electronic Frontier Foundation who goes by a mononym. “However, there isn’t any indication yet whether this was state sponsored, a hacking group, a rogue developer, or any combination of the above.”

Tan’s elevation to being a co-maintainer mostly played out on an email group where code developers — in the open-source, collaborative spirit of the Linux family of operating systems — exchange ideas and strategize to build applications.

On one email list, Collin faced a raft of complaints. A group of users, relatively new to the project, had protested that Collin was falling behind and not making updates to the software quickly enough. He should, some of these users said, hand over control of the project; some explicitly called for the addition of another maintainer. Conceding that he could no longer devote enough attention to the project, Collin made Tan a co-maintainer.

The users involved in the complaints seemed to materialize from nowhere — posting their messages from what appear to be recently created Proton Mail accounts, then disappearing. Their entire online presence is related to these brief interactions on the mailing list dedicated to XZ; their only recorded interest is in quickly ushering along updates to the software.

Various U.S. intelligence agencies have recently expressed interest in addressing software supply chain attacks. The Cybersecurity and Infrastructure Security Agency jumped into action after Freund’s discovery, publishing an alert about the XZ backdoor on March 29, the same day Freund publicly posted about it.

Open-Source Players

In the open-source world of Linux programming — and in the development of XZ Utils — collaboration is carried out through email groups and code repositories. Tan posted on the listserv, chatted with Collin, and contributed code changes on the code repository GitHub, which is owned by Microsoft. GitHub has since disabled access to the XZ repository and disabled Tan’s account. (In February, The Intercept and other digital news firms sued Microsoft and its partner OpenAI for using their journalism without permission or credit.)

Several other figures on the email list participated in efforts — appearing to be diffuse but coinciding in their aims and timing — to install the new co-maintainer, sometimes particularly pushing for Tan.

Later, on a listserv dedicated to Debian, one of the more popular of the Linux family of operating systems, another group of users advocated for the backdoored version of XZ Utils to be included in the operating system’s distribution.

These dedicated groups played discrete roles: In one case, complaining about the lack of progress on XZ Utils and pushing for speedier updates by installing a new co-maintainer; and, in the other case, pushing for updated versions to be quickly and widely distributed.

“I think the multiple green accounts seeming to coordinate on specific goals at key times fits the pattern of using networks of sock accounts for social engineering that we’ve seen all over social media,” said Molly, the EFF system administrator. “It’s very possible that the rogue dev, hacking group, or state sponsor employed this tactic as part of their plan to introduce the back door. Of course, it’s also possible these are just coincidences.”

The pattern seems to fit what’s known in intelligence parlance as “persona management,” the practice of creating and subsequently maintaining multiple fictitious identities. A leaked document from the defense contractor HBGary Federal outlines the meticulousness that may go into maintaining these fictive personas, including creating an elaborate online footprint — something which was decidedly missing from the accounts involved in the XZ timeline.

While these other users employed different emails, in some cases they used providers that give clues as to when their accounts were created. When they used Proton Mail accounts, for instance, the encryption keys associated with these accounts were created on the same day, or mere days before, the users’ first posts to the email group. (Users, however, can also generate new keys, meaning the email addresses may have been older than their current keys.)
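
That kind of timestamp check is reproducible: an OpenPGP public key embeds its creation time in the key material itself. A minimal sketch using the third-party pgpy library, with a placeholder filename standing in for any key exported from a provider or keyserver (the specific accounts’ keys are not reproduced here):

```python
# Sketch: read the creation timestamp embedded in an OpenPGP public key.
# Uses the third-party `pgpy` library and a placeholder key file.
import pgpy

# Load an ASCII-armored public key; from_file returns (key, extra_packets).
key, _ = pgpy.PGPKey.from_file("suspect_key.asc")
print(f"Key {key.fingerprint} created {key.created:%Y-%m-%d}")
```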

One of the earliest of these users on the list used the name Jigar Kumar. Kumar appears on the XZ development mailing list in April 2022, complaining that some features of the tool are confusing. Tan promptly responded to the comment. (Kumar did not respond to a request for comment.)

Kumar repeatedly popped up with subsequent complaints, sometimes building off others’ discontent. After Dennis Ens appeared on the same mailing list, Ens also complained about the lack of response to one of his messages. Collin acknowledged things were piling up and mentioned Tan had been helping him off list; he might soon have “a bigger role with XZ Utils.” (Ens did not respond to a request for comment.)

After another complaint from Kumar calling for a new maintainer, Collin responded: “I haven’t lost interest but my ability to care has been fairly limited mostly due to longterm mental health issues but also due to some other things. Recently I’ve worked off-list a bit with Jia Tan on XZ Utils and perhaps he will have a bigger role in the future, we’ll see.”

The pressure kept coming. “As I have hinted in earlier emails, Jia Tan may have a bigger role in the project in the future,” Collin responded after Ens suggested he hand off some responsibilities. “He has been helping a lot off-list and is practically a co-maintainer already. :-)”

Ens then went quiet for two years — reemerging around the time the bulk of the malicious backdoor code was installed in the XZ software. Ens kept urging ever quicker updates.

After Collin eventually made Tan a co-maintainer, there was a subsequent push to get XZ Utils — which by now had the backdoor — distributed widely. After first showing up on the XZ GitHub repository in June 2023, another figure calling themselves Hans Jansen pushed this March for the new version of XZ to be included in Debian Linux. (Jansen did not respond to a request for comment.)

An employee at Red Hat, a software firm owned by IBM, which sponsors and helps maintain Fedora, another popular Linux operating system, described Tan trying to convince him to help add the compromised XZ Utils to Fedora.

These popular Linux operating systems account for millions of computer users — meaning that huge numbers of users would have been open to compromise if Freund, the developer, had not discovered the backdoor.

“While the possibility of socially engineering backdoors in critical software seems like an indictment of open-source projects, it’s not exclusive to open source and could happen anywhere,” said Molly. “In fact, the ability for the engineer to discover this backdoor before it was shipped was only possible due to the open nature of the project.”

The post The Other Players Who Helped (Almost) Make the World’s Biggest Backdoor Hack appeared first on The Intercept.

Congress Has a Chance to Rein In Police Use of Surveillance Tech

Published by Anonymous (not verified) on Wed, 03/04/2024 - 1:00am in

Hardware that breaks into your phone; software that monitors you on the internet; systems that can recognize your face and track your car: The New York State Police are drowning in surveillance tech.

Last year alone, the Troopers signed at least $15 million in contracts for powerful new surveillance tools, according to a New York Focus and Intercept review of state data. While expansive, the State Police’s acquisitions aren’t unique among state and local law enforcement. Departments across the country are buying tools to gobble up civilians’ personal data, plus increasingly accessible technology to synthesize it.

“It’s a wild west,” said Sean Vitka, a privacy advocate and policy counsel for Demand Progress. “We’re seeing an industry increasingly tailor itself toward enabling mass warrantless surveillance.”

So far, local officials haven’t done much about it. Surveillance technology has far outpaced traditional privacy laws, and legislators have largely failed to catch up. In New York, lawmakers launched a years-in-the-making legislative campaign last year to rein in police intrusion — but with Gov. Kathy Hochul pushing for tough-on-crime policies instead, none of their bills have made it out of committee.

So New York privacy proponents are turning to Congress. A heated congressional debate over the future of a spying law offers an opportunity to severely curtail state and local police surveillance through federal regulation.

At issue is Section 702 of the Foreign Intelligence Surveillance Act, or FISA, which expires on April 19. The law is notorious for a provision that allows the feds to access Americans’ communications swept up in intelligence agencies’ international spying. As some members of Congress work to close that “backdoor,” they’re also pushing to ban a so-called data broker loophole that allows law enforcement to buy civilians’ personal data from private vendors without a warrant. Closing that loophole would likely make much of the New York State Police’s recently purchased surveillance tech illegal.

Members of the House and Senate judiciary committees, who have introduced bills to close the loopholes, are leading the latest bipartisan charge for reform. Members of the House and Senate intelligence committees, meanwhile, are pushing to keep the warrant workarounds in place. The Democratic leaders of both chambers — House Minority Leader Hakeem Jeffries and Senate Majority Leader Chuck Schumer, both from New York — have so far kept quiet on the spying debate. As Section 702’s expiration date nears, local advocates are trying to get them on board.

On Tuesday, a group of 33 organizations, many from New York, sent a letter to Jeffries and Schumer urging them to close the loopholes. More than 100 grassroots and civil rights groups from across the country sent the lawmakers a similar petition this week.

“These products are deeply invasive, discriminatory, and ripe for abuse,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, which signed both letters. They reach “into nearly every aspect of our digital and physical lives.”

Jeffries’s office declined to comment. Schumer’s office did not respond to a request for comment before publication.

Both letters cited a Wired report from last month, which revealed that Republican Rep. Mike Turner of Ohio, the chair of the House Intelligence Committee, pointed to New York City protests against Israel’s war on Gaza to argue against the spying law’s reform. Sources told Wired that in a presentation to fellow House Republicans, Turner implied that protesters in New York had ties to Hamas — and therefore should remain subject to Section 702’s warrantless surveillance backdoor. An intelligence committee spokesperson disputed the characterization of Turner’s remarks, but said that the protests had “responded to what appears to be a Hamas solicitation.”

“The real-world impact of such surveillance on protest and dissent is profound and undeniable,” read the New York letter, spearheaded by Empire State Indivisible and NYU Law School’s Brennan Center for Justice. “With Rep. Turner having placed your own constituents in the crosshairs, your leadership is urgently needed.”

Police surveillance today looks much different than it did 10, five, or even three years ago. A report from the U.S. Office of the Director of National Intelligence, declassified last year, put it succinctly: “The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits.”

That report called specific attention to the “data broker loophole”: law enforcement’s practice of buying data from brokers rather than obtaining the warrant that would otherwise be required. The New York State Police have taken greater and greater advantage of the loophole in recent years, buying up seemingly as much tech and data as they can get their hands on.

In 2021, the State Police purchased a subscription to ShadowDragon, which is designed to scan websites for clues about targeted individuals, then synthesize them into in-depth profiles.


“I want to know everything about the suspect: Where do they get their coffee? Where do they get their gas? Where’s their electric bill? Who’s their mom? Who’s their dad?” ShadowDragon’s founder said in an interview unearthed by The Intercept in 2021. The company claims that its software can anticipate crime and violence — a practice, trendy among law enforcement tech companies, known as “predictive policing,” which ethicists and watchdogs warn can be inaccurate and biased.

The State Police renewed their ShadowDragon subscription in January of last year, shelling out $308,000 for a three-year contract. That was one of at least nine web surveillance tools State Police signed contracts for last year, worth at least $2.1 million in total.

Among the other firms the Troopers contracted with are Cognyte ($310,000 for a three-year contract); Whooster ($110,000 over three years); Skopenow ($280,000); Griffeye ($209,000); the credit reporting agency TransUnion ($159,000); and Echosec ($262,000 over two years), which specializes in using “global social media, discussions, and defense forums” to geolocate people. They also bought Cobwebs software, a mass web surveillance tool created by former Israeli military and intelligence officials — part of that country’s multibillion-dollar surveillance tech industry, which often tests its products on Palestinians.

That’s likely not the full extent of the State Police’s third-party-brokered surveillance arsenal. As New York Focus revealed last year, the State Police have for years been shopping around for programs that take in mass quantities of data from social media, sift through them, and then feed insights — including users’ real-time location information — to law enforcement. Those contracts don’t show up in the state contract data, suggesting that the public disclosures are incomplete. Depending on how the programs obtain their data, closing the data broker loophole could bar their sale to law enforcement.

The State Police refused to answer questions about how their officers use surveillance tools.

“We do not discuss specific strategies or technologies as it provides a blueprint to criminals which puts our members and the public at risk,” State Police spokesperson Deanna Cohen said in an email.

Closing the data broker loophole wouldn’t entirely curtail the police surveillance tech boom. The New York State Police have also been deepening their investments in tech the FISA reforms wouldn’t touch, like aerial drones and automatic license plate readers, which store data from billions of scans to create searchable vehicle location databases.
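To make concrete what “searchable vehicle location database” means in practice, here is a minimal, purely hypothetical sketch in Python (standard-library sqlite3 only). It is not any vendor’s actual schema — the table, columns, and plate are invented for illustration — but it shows how individual plate reads accumulate into a retrospective location history:

import sqlite3

# Hypothetical, simplified schema: each row is one plate read
# from a fixed or mobile camera. Real vendor systems differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plate_reads (
        plate      TEXT,   -- normalized plate string from OCR
        read_time  TEXT,   -- ISO-8601 timestamp of the scan
        latitude   REAL,
        longitude  REAL,
        camera_id  TEXT
    )
""")

# A few illustrative reads of the same (made-up) plate.
conn.executemany(
    "INSERT INTO plate_reads VALUES (?, ?, ?, ?, ?)",
    [
        ("ABC1234", "2023-05-01T08:02:11", 40.7128, -74.0060, "cam-17"),
        ("ABC1234", "2023-05-01T17:45:03", 40.7306, -73.9352, "cam-42"),
        ("ABC1234", "2023-05-02T08:05:59", 40.7128, -74.0060, "cam-17"),
    ],
)

# A single query reconstructs the vehicle's movements over time —
# which is why stored scans amount to a searchable location database.
for row in conn.execute(
    "SELECT read_time, latitude, longitude, camera_id "
    "FROM plate_reads WHERE plate = ? ORDER BY read_time",
    ("ABC1234",),
):
    print(row)

Once reads like these are retained for months or years, the privacy question stops being about any single scan and becomes about the history the aggregate reveals.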

They’ve also spent millions on mobile device forensic tools, or MDFTs, powerful hacking hardware and software that allow users to download full, searchable copies of a cellphone’s data, including social media messages, emails, web and search histories, and minute-by-minute location information.

Watchdogs warn of potential abuses accompanying the proliferation of MDFTs. The Israeli MDFT company Cellebrite has supplied repressive authorities around the globe, including police in Botswana, who used its tools to access a journalist’s list of sources, and in Hong Kong, where police deployed them against leaders of the pro-democracy protest movement.

In the United States, law enforcement officials argue that more expansive civil liberties protections prevent them from misusing the tech. But according to the technology advocacy organization Upturn, around half of police departments that have used MDFTs have done so with no internal policies in place. Meanwhile, cops have manipulated people into consenting to having their phones cracked without a warrant — for instance, by having them sign generic consent forms that don’t explain that the police will be able to access the entirety of their phone’s data.

In October 2020, New York police departments known to use MDFTs had spent less than $2.2 million on them, and no known MDFT-using department in the country had hit the million-dollar mark, according to a report by Upturn.

Between September 2022 and November 2023, however, the State Police signed more than $12.1 million in contracts for MDFT products and training, New York Focus and The Intercept found. They signed a five-year, $4 million agreement with Cellebrite, while other contracts went to MDFT firms Magnet Forensics and Teel Technologies. The various products attack phones in different ways, and thus have different strengths and weaknesses depending on the type of phone, according to Emma Weil, senior policy analyst at Upturn.

Cellebrite’s tech initially costs around $10,000–$30,000 for an official license, then tens or low hundreds of thousands of dollars for the ability to hack into a set number of phones. According to Weil, the State Police’s inflated bill could mean either that Cellebrite has dramatically increased its pricing, or that the Troopers are “getting more intensive support to unlock more difficult phones.”

If Congress passes the Section 702 renewal without addressing its warrant workarounds, state and local legislation will become the main battleground in the fight against the data broker loophole. In New York, state lawmakers have introduced at least 14 bills as part of their campaign to rein in police surveillance, but none have gotten off the ground.

If the legislature passes some of the surveillance bills, they may well face opposition when they hit the governor’s desk. Hochul has extolled the virtues of police surveillance technology, and committed to expanding law enforcement’s ability to disseminate the information gathered by it. Every year since entering the governor’s mansion, she has proposed roughly doubling funding to New York’s Crime Analysis Center Network, a series of police intelligence hubs that distribute information to local and federal law enforcement, and she’s repeatedly boosted funding to the State Police’s social media surveillance teams.

The State Police has “ramped up its monitoring,” she said in November. “All this is in response to our desire, our strong commitment, to ensure that not only do New Yorkers be safe — but they also feel safe.”

This story was published in partnership with New York Focus, a nonprofit news site investigating how power works in New York state. Sign up for their newsletter here.

The post Congress Has a Chance to Rein In Police Use of Surveillance Tech appeared first on The Intercept.

Everything Chinese is a national security threat to the United States

Published by Anonymous (not verified) on Sat, 30/03/2024 - 4:50am in

After the battles over 5G, social media and advanced microchips, Chinese electric cars are the new front line of US economic warfare. Like any environmentally correct person in North America, I was toying with the idea of buying a Tesla to replace my beat-up eight-year-old Honda. While doing some research, this came up and suddenly Continue reading »

China warns foreign hackers are infiltrating ‘hundreds’ of business and government networks

Published by Anonymous (not verified) on Fri, 29/03/2024 - 4:50am in

Top spy agency urges Chinese citizens to step up cybersecurity as attacks by overseas agencies have been ‘rampant’ in recent years. The message comes as Beijing broadens scope of anti-espionage law to cover online attacks and prepares to expand penalties for data violations. China’s state security authority warned that the networks of “hundreds” of Chinese Continue reading »

Climate Engineering: Doubling Down on Bad Habits

Published by Anonymous (not verified) on Fri, 29/03/2024 - 12:47am in
by Gary Gardner

Let’s not mess with such perfection. (Wikimedia)

Social psychologists tell us it takes about 66 days to form a new habit. In my experience that’s only half true. Sixty-six days to form a good habit, yes, but about 66 hours to form a bad one. If I reach for a donut at breakfast, then do the same the next two days, I seal the deal and establish a habit of bad eating. And the dynamic has an insidious way of spreading. Soon I skip workouts, watch too much TV, and succumb to other indulgences. Poor choices beget poor choices, in a rapidly descending spiral.

We might frame climate engineering in the same way, as the latest in a downward spiral of bad economic choices. Our original sin was committing uncritically to growth. Then we doubled down using the power of fossil fuels. Now we flirt with climate engineering, a set of technologies that are expensive, risky, and often unproven, as an extension of our fossil energy addiction. Down the slippery slope we go.

Breaking our bad habit requires that we adopt a strict fossil-fuel-free diet. But we’ve reached a point where we may also need some forms of climate engineering—limited and relatively benign—to restore stability to our climate and to reduce climate damage, especially in the most vulnerable regions. In our addiction to growth and fossil fuels, we’ve wrought a vexing ethical tangle that will be difficult to sort through.

What is Climate Engineering?

Climate engineering, also known as geoengineering, is an umbrella term covering a broad set of technologies for avoiding dangerous levels of global heating. Analysts generally split the field into two families of technologies, each with a different approach to addressing warming.

Solar radiation modification (SRM) is a strategy for deflecting away the sun’s rays to reduce the heating of our planet. Scientists imagine increasing the albedo (reflectivity) of the earth, say by covering glaciers in Antarctica with artificial snow or planting high-albedo varieties of barley, corn, wheat, or other crops. More exotic options include spraying reflective sulphates into the stratosphere or placing orbiting mirrors in space. Proponents note that most SRM options are relatively inexpensive (sulphate spraying, for instance, is estimated at tens of billions of dollars per year for each degree Celsius of cooling) and that SRM would provide quick relief from rising temperatures. Other advocates may see SRM as an easy way to skirt emissions reductions and to maintain economies run on fossil fuels. But SRM also has serious downsides, described below.
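For the physical intuition behind albedo-based schemes, here is a back-of-envelope sketch (mine, not the essay’s) using the standard zero-dimensional planetary energy balance:

\[ T_e = \left( \frac{S_0\,(1-\alpha)}{4\sigma} \right)^{1/4} \]

where \(S_0 \approx 1361\ \mathrm{W/m^2}\) is the solar constant, \(\alpha \approx 0.30\) is Earth’s present albedo, and \(\sigma = 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}\) is the Stefan–Boltzmann constant, giving an effective emission temperature \(T_e \approx 255\ \mathrm{K}\). Because \(T_e\) scales as \((1-\alpha)^{1/4}\), raising \(\alpha\) by just 0.01 lowers \(T_e\) by roughly 0.9 K, which is why even small reflectivity gains attract SRM proponents.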

Forms of Solar Radiation Management. (Chelsea Thompson, NOAA/CIRES)

Carbon dioxide removal (CDR) approaches would pull carbon dioxide from the atmosphere, thinning the blanket of heat-trapping CO2 molecules. CDR is a testament to the human imagination, featuring more than a dozen methods of removing excess carbon. Some are nature-based, like planting trees over massive areas. Others are mechanical, like direct air capture (DAC), which uses giant fan-like devices to draw air through adsorbent filters that isolate the CO2, concentrating it for sequestration. Still others alter marine ecosystems by encouraging the growth of carbon-rich plankton (for example, by scattering iron filings through the ocean). The plankton then sink to the ocean floor for a natural burial. All these CDR methods have drawbacks, whether ecological, political, financial, or ethical.

Which approach is best, SRM or CDR? In truth, the question is premature. Before considering any merits of climate engineering, we must tackle the little matter of emissions reductions. This is the elephant in the room in many climate engineering discussions.

The multiple technologies proposed for tinkering with the climate can easily dazzle us. But gee-whiz excitement may blind us to an important fact. None of these solutions addresses the core driver of the climate crisis, the emission of greenhouse gases. None is a complete solution to our climate challenge.

All climate engineering approaches are workarounds that sidestep dealing with emissions. Every one is a temptation to avoid the painful task of building new, carbon-neutral economies. Each makes at best an incomplete contribution to solving the climate crisis.

An honest, fully formed approach to climate policy requires that emissions reductions be not only present, but at the forefront. Climate engineering should play only a distant, secondary role. Even then, only the most benign, ecologically friendly options should be considered.

Fundamental Failure

Remember that geoengineering strategies would not even figure in the climate conversation if humanity had met its responsibility for emissions reductions decades ago. But we didn’t. Between 2000 and 2022, the world’s economies emitted a quantity of carbon equal to 41 percent of all the carbon emitted since 1750. This growth-driven surge boosted concentrations of atmospheric carbon from 370 to 418 parts per million.

Carbon sequestration as nature intended. (Niko photos, Unsplash)

The surge has also tied our hands: Atmospheric concentrations are now so great that emissions reductions alone cannot provide for the 1.5-degree (Celsius) limit on temperature rise set in the 2015 Paris Agreement. Nor are they likely to keep us below 2.0 degrees of temperature increase. In fact, most IPCC scenarios for meeting temperature goals assume some use of CDR technologies to sequester carbon. And the U.S. government seems intent on exploring climate engineering, having provided billions of dollars in subsidies to help boost direct air capture technologies.

In fact, the longer serious mitigation efforts are delayed, the more the mercury rises and the greater the pressure to use climate engineering. More carbon indulgence coaxes us toward more extreme tinkering with Earth systems. Bad habits breed bad habits.

In sum, we err in framing climate engineering as a set of wonder technologies arriving just in time to save us. Instead, they are like crutches, temporary assists that support the hard work of rehabilitation. Our hard work, our rehabilitation of the climate, is to reduce greenhouse gas emissions in a serious way.

What Could Possibly Go Wrong?

As helpful as some limited geoengineering practices may be, policymakers and the public must be clear-eyed about the risks involved. The list of risks is long. Here are just a few:

Rogue actors—What if a nation frustrated over feeble progress in cutting global emissions decided to take climate action into its own hands? It’s not as far-fetched as it may seem, given the relative simplicity and affordability of sulphate spraying, at least for larger economies. Kim Stanley Robinson’s The Ministry for the Future imagined just such a scenario, with India suffering mass deaths due to heat and responding by essentially skywriting with canisters of sulphate. The government, perhaps understandably, frames an exotic and risky venture as necessary to protect its suffering people.

Long-term commitment—SRM in particular could introduce a major new risk. If nations start to reflect away solar rays without serious emissions reductions, they essentially commit to SRM indefinitely. Stopping the practice after a buildup of carbon would produce a spike in global temperature that many species likely could not adapt to. Who has confidence that nations would fund SRM solutions indefinitely to avoid such an outcome?

Would climate engineering be in the interest of this young African? (Seth Doyle, Unsplash)

Unintended consequences—Some measures to restrain global temperature increases could have detrimental effects at the regional or local levels. For example, some forms of solar engineering could cause changes in rainfall in parts of Africa or to the monsoon in India. What safeguards can we put in place to protect vulnerable regions? Are people in those regions part of decision-making on climate engineering?

Resource intensive—For CDR technologies, global-level interventions will require tremendous resources, both financial and physical. Even tree planting is no panacea. Planting trees across 900 million hectares would require the area of 2.74 Indias, raising questions about whose land would be used. Even then, the trees would remove around 8 billion tons of CO2 equivalent, just a fraction of the 52 billion tons emitted each year (the arithmetic is sketched after this list). And young trees absorb far less carbon than mature trees, so the bulk of these gains would not be realized until the second half of the century.

Unproven—In 2023, a panel in the U.N. climate bureaucracy shocked the emerging CDR community when it declared carbon removal technologies “technologically and economically unproven, especially at scale.” It also said that carbon removal poses “unknown environmental and social risks.” Even Ocean Visions, which is bullish on marine-based methods of CDR, acknowledges that many are untried and have unknown effects. This, even as the global community needs climate action urgently.
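To put the tree-planting figures above in perspective, here is the arithmetic behind them. The only input not stated in the text is India’s land area, roughly 329 million hectares:

\[ \frac{900\ \mathrm{Mha}}{329\ \mathrm{Mha}} \approx 2.74, \qquad \frac{8\ \mathrm{Gt\,CO_2e}}{52\ \mathrm{Gt\,CO_2e}} \approx 15\%. \]

In other words, a land commitment nearly three times the size of India would mop up the equivalent of only about a seventh of one year’s emissions.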

Finding the Moral Middle

If we had been tasked in 1990 with laying out an ethics of climate action, a single sentence would have sufficed: Cut greenhouse gas emissions as broadly and quickly as possible, starting with the biggest emitters. But economic growth and technological developments since 2000 have complicated the picture considerably and put us in an ethical bind. Navigating our choices requires ramping up our commitment to emissions reductions while carefully considering other actions that would keep temperature increases under two degrees.

On curbing emissions, we’ll need to be much less permissive than we’ve been in the past two decades. Electric vehicles, solar panels, and other “clean” technologies are part of this effort, but they carry their own moral hazard. It’s tempting to believe that adopting technological fixes is the extent of our responsibility. But we know that technological solutions alone often backfire and can produce more of the very harms we are trying to reduce.

A good effort in itself, but even a trillion trees wouldn’t finish the emissions-mopping job. (Eyoel Kahssey, Unsplash)

The most direct, broad-based, and cheapest way to cut emissions is to stop the economic growth that fuels them. Policy efforts focused here could yield substantial returns. Some of the draft bills from CASSE’s Steady State Economy Act project (such as the Mileage Fee Act, to be introduced next week) would be helpful steps in this direction.

With a serious commitment to emissions reduction in place, we can turn to mopping up as much excess greenhouse gas from the atmosphere as is safely possible. Climate engineering solutions should be those that mesh with ecological restoration, like tree planting, restoring wetlands, and designating “blue carbon” areas. The simultaneous contribution to biodiversity conservation makes such efforts a systems solution to the climate crisis. This stands in contrast to the single-issue, reductionist focus that characterizes many climate engineering approaches.

We got into the climate crisis by relying on tunnel-visioned engineering solutions. Let’s not double down on that mistake by tinkering with the climate. Greta Thunberg captures the idea: “A crisis created by lack of respect for nature will … not be solved by taking that lack of respect to the next level.”

Instead, let’s respect nature—and ourselves—and abandon the naïve notions of perpetual growth. We have the option of the steady state economy to fall back upon. It’s the adult thing to do.

Gary Gardner is Managing Editor at CASSE.

The post Climate Engineering: Doubling Down on Bad Habits appeared first on Center for the Advancement of the Steady State Economy.

Tactical Publishing: Using Senses, Software, and Archives in the Twenty-First Century – review

Published by Anonymous (not verified) on Thu, 28/03/2024 - 9:00pm in

In Tactical Publishing: Using Senses, Software, and Archives in the Twenty-First Century, Alessandro Ludovico assembles a vast repertoire of post-digital publications to make the case for their importance in shaping and proposing alternative directions for the current computational media landscape. Although tilting towards example over practical theory, Tactical Publishing is an inspiring resource for all scholars and practitioners interested in the critical potential of experimenting with the technologies, forms, practices and socio-material spaces that emerge around books, writes Rebekka Kiesewetter.

Tactical Publishing: Using Senses, Software, and Archives in the Twenty-First Century. Alessandro Ludovico. The MIT Press. 2024.

Working at the intersection of art, technology, and media, Alessandro Ludovico is known for his contribution to shaping the term “post-digital” through his book Post-Digital Print: The Mutation of Publishing Since 1894. Ludovico’s notion of the post-digital, in brief, challenges the divide between digital and physical realms by exploring the normalisation and ubiquity of the digital in contemporary culture, urging a nuanced perspective beyond novelty as the boundaries between online and offline experiences blur.

Tactical Publishing is presented as a sequel, evolving and updating Ludovico’s concept for the concerns of a contemporary computational media landscape shaped by technologies and platforms (social media, algorithms, mobile apps and virtual reality environments) owned by large multinational corporations. Through discussing a wide variety of antagonistically situated experimental and activist publishing initiatives, Ludovico discovers fresh roles and purposes for books, publishers, editors, and libraries at the centre of an alternative post-digital publishing system. This system diverges from the “calculated and networked quality of publishing between digital and print … to promote an intrinsic and explicitly cooperative structure that contrasts with the vertical, customer-oriented industry model” (8).

Ludovico develops this argument around a captivating array of well- and lesser-known examples from the realms of analogue, digital, and post-digital publishing, stretching the prevalent boundaries of what a book was, is, and can be: from Asger Jorn’s and Guy Debord’s sandpaper-covered book Mémoires (1958), to Nanni Balestrini’s computer-generated poem “Tape Mark 1” (1961), to Newstweek (2011), a device for manipulating news created by Julian Oliver and Danja Vasiliev. Tactical Publishing also ventures into the complex relationships, practices, and socio-political and economic contexts of the production and reception of books. It draws on these relational contexts to explore their disruptive potential, for example through forms of “liminal librarianship” practiced by DIY libraries, networked archiving practices of historically underrepresented communities, and custodianship in the context of digital piracy.

As in Post-Digital Print, Tactical Publishing offers an abundantly rich resource for scholars interested in exploring the ways in which experimenting with the manifold dimensions that make up books can be a means for creative expression, intellectual exploration, and social change in the digital age. Ludovico dedicates considerable attention to these case studies, allowing them ample space to shine and speak for themselves in support of his argument.

The book is divided into six chapters, each mixing illustrative instances of practical application with theoretical reflection. Chapter one explores how reading is transformed by digital screens. These, as the author explains, tend to enforce industrially standardised experiences, neutralising cultural differences and leading to a potential loss of sensory involvement. Ludovico proposes to reclaim enriched and multisensory reading experiences by combining digital tools and physical qualities, a proposition he illustrates by discussing a series of experiments in music publishing that have used analogue and digital technologies to integrate text and music media.

Chapter two examines the transformation of the role of software in writing. Here, Ludovico presents a transition from an infrastructural to an authorial function that blurs distinctions between human and artificial “subjectivities”. The latter is a simulation of human-like experiences, characteristics, and behaviours often associated with human subjectivity, such as learning, decision-making, or emotional responses. This simulation, Ludovico argues, increasingly obstructs the ability to distinguish between actions and expressions originating from humans and those generated by technological systems. Ludovico contends that the “practice of constructing digital systems, processes, and infrastructures to deal with these new subjectivities can become a political matter” (89), one that requires initiatives intertwining critical and responsible efforts in digitising knowledges, making digital knowledge bases accessible and searchable, and developing and maintaining machine-based services on top of them. However, the origin and nature of these initiatives, and what their efforts might entail, remain unspecified.

Chapter three explores how post-truth arises from a constant construction and deconstruction of meaning in transient digital spaces, and through media and image manipulation. Ludovico emphasises that, in this context, it is important to build “an information dam … to protect our minds from being flooded with data, especially emotionally charged data” (123). Chapter four, “Endlessness: The Digital Publishing Paradigm”, makes the case that the fragmented short formats characteristic of digital publishing underscore the importance of the archival role of print publications and the necessity of networks of “critical human editors” (130). These can act as a counterbalance to this flood of information and foster a more focused and collaborative exchange of information.

Chapter five proposes a transformation of libraries from centralised towards distributed and networked knowledge infrastructures in which librarians strategically contribute to the selection and sharing of “relevant collections” (197). Chapter six concludes Tactical Publishing by synthesising the previous chapters and proposing the strategic integration of analogue and digital realms within an “open media continuum”, rejecting a calculated, networked approach in favour of a cooperative structure sustained by “responsible editors” (212), publishers, librarians, custodians, and distributors. Last but not least, a useful appendix offers a selection of one hundred publications, encompassing both print and digital formats.

Tactical Publishing sits within a well-established canon of critical media studies, digital humanities, and cultural studies, focusing on the materiality of media, historical dimensions of technology, media ecology, politics of information, and socio-cultural implications of post-digital communication. However, its theoretical contributions are at times subdued by the host of examples presented. Some readers may also be left wanting a more pronounced engagement with recent theoretical works that discuss the concept of post-digital publishing and its potential to intervene in dominant publishing systems, norms, and cultures from perspectives critical of cultural hegemony, as well as post-Marxist, feminist, post-hegemonic, and ecologically minded ones. Such an engagement might have helped clarify questions about the politics and ethics of the alternative post-digital publishing system and the “comprehensive liberatory attitude” (4) Ludovico advocates, beyond the motivation to counter the alienation of the current computational media landscape.

Similarly, Tactical Publishing also leaves unresolved related questions of positionality, accountability, and agency. For example: Who is the “we” Ludovico addresses, not least in the final chapter titled “How we Should Publish in the 21st Century”? What drives “the critical human editors” (130) whose role is to “filter the myriad of sources, to preserve their heterogeneity, to … include new sources, but to keep their final number limited, and to confirm them, transparently acknowledged, in order to strengthen trusted networks” (211), and what legitimises their activity? And where, in a post-digital world, is “the personal trusted human network” situated that, according to the author, can be “resistant to mass manipulation by fake news and post-truth strategies” (123)?

However, despite (or perhaps exactly because of) the way the theoretical argument occasionally takes a backseat to numerous meticulously selected and well-arranged examples, Tactical Publishing is an inspiring resource for all scholars and practitioners in design, the arts, humanities, and social sciences who are interested in the ways in which experimental publishing can help question, challenge, and rearrange dominant publishing systems.

Note: This article was initially published on the LSE Impact of Social Science blog.

This post gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science.

Image credit: Dikushin Dmitry on Shutterstock.
