The Intercept — Technology (https://theintercept.com/technology/)

FBI Hired Social Media Surveillance Firm That Labeled Black Lives Matter Organizers “Threat Actors”
https://theintercept.com/2023/07/06/fbi-social-media-surveillance-zerofox/
Thu, 06 Jul 2023

A new Senate report calls out the FBI for lying to Congress about its social media monitoring, pointing out the FBI’s hiring of ZeroFox.

The FBI’s primary tool for monitoring social media threats is the same contractor that labeled peaceful Black Lives Matter protest leaders DeRay McKesson and Johnetta Elzie as “threat actors” requiring “continuous monitoring” in 2015.

The contractor, ZeroFox, identified McKesson and Elzie as posing a “high severity” physical threat, despite including no evidence that McKesson or Elzie were suspected of criminal activity. “It’s been almost a decade since the referenced 2015 incident and in that time we have invested heavily in fine-tuning our collections, analysis and labeling of alerts,” Lexie Gunther, a spokesperson for ZeroFox, told The Intercept, “including the addition of a fully managed service that ensures human analysis of every alert that comes through the ZeroFox Platform to ensure we are only alerting customers to legitimate threats and are labeling those threats appropriately.”

The FBI, which declined to comment, hired ZeroFox in 2021, a fact referenced in the new 106-page Senate report about the intelligence community’s failure to anticipate the January 6, 2021, attack on the U.S. Capitol. The June 27 report, produced by Democrats on the Senate Homeland Security Committee, shows the bureau’s broad authorities to surveil social media content — authorities the FBI previously denied it had, including before Congress. It also reveals the FBI’s reliance on outside companies to do much of the filtering for it.

The FBI’s $14 million contract with ZeroFox for “FBI social media alerting” replaced a similar contract with Dataminr, another firm with a history of scrutinizing racial justice movements. Dataminr, like ZeroFox, subjected the Black Lives Matter movement to web surveillance on behalf of the Minneapolis Police Department, previous reporting by The Intercept has shown.

In testimony before the Senate in 2021, the FBI’s then-Assistant Director for Counterterrorism Jill Sanborn flatly denied that the FBI had the power to monitor social media discourse.

“So, the FBI does not monitor publicly available social media conversations?” asked Arizona Sen. Kyrsten Sinema. 

“Correct, ma’am. It’s not within our authorities,” Sanborn replied, citing First Amendment protections barring such activities. 

Sanborn’s statement was widely publicized at the time and cited as evidence that concerns about federal government involvement in social media were unfounded. But, as the Senate report stresses, Sanborn’s answer was false. 

“FBI leadership mischaracterized the Bureau’s authorities to monitor social media,” the report concludes, calling it an “exaggeration of the limits on FBI’s authorities,” which in fact are quite broad.

It is under these authorities that the FBI sifts through vast amounts of social media content searching for threats, the report reveals.

“Prior to 2021, FBI contracted with the company Dataminr that used pre-defined search terms to identify potential threats from voluminous open-source posts online, which FBI could then investigate further as appropriate,” the report states, citing internal FBI communications obtained as part of the committee’s investigation. “Effective Jan. 1, 2021, FBI’s contract for these services switched to a new company called ZeroFox that would perform similar functions under a new system.”

The FBI has maintained that its “intent is not to ‘scrape’ or otherwise monitor individual social media activity,” instead insisting that it “seeks to identify an immediate alerting capability to better enable the FBI to quickly respond to ongoing national security and public safety-related incidents.” Dataminr has also previously told The Intercept that its software “does not provide any government customers with the ability to target, monitor or profile social media users, perform geospatial, link or network analysis, or conduct any form of surveillance.” 

While it may be technically true that flagging social media posts based on keywords isn’t the same as continuously flagging posts from a specific account, the notion that this doesn’t amount to monitoring specific users is misleading. If an account is routinely using certain keywords (e.g. #BlackLivesMatter), flagging those keywords would surface the same accounts repeatedly.
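To make that dynamic concrete, here is a minimal sketch of keyword-based alerting. The account names, posts, and matching logic are all invented for illustration; this is not ZeroFox’s or Dataminr’s actual implementation. Even though the query targets a keyword rather than a person, the alerts concentrate on whichever accounts use that keyword most.

```python
from collections import Counter

# Hypothetical posts as (account, text) pairs -- invented for illustration,
# not data from ZeroFox, Dataminr, or any real account.
POSTS = [
    ("organizer_a", "Join us downtown today #BlackLivesMatter"),
    ("random_user", "Great weather for a picnic"),
    ("organizer_a", "March recap thread #BlackLivesMatter"),
    ("organizer_b", "Vigil tonight at 7 #BlackLivesMatter"),
    ("organizer_a", "Livestream starting now #BlackLivesMatter"),
]

# The kind of "pre-defined search terms" the Senate report describes.
KEYWORDS = {"#blacklivesmatter"}

def flagged_accounts(posts, keywords):
    """Count how many keyword alerts each account generates."""
    hits = Counter()
    for account, text in posts:
        if any(kw in text.lower() for kw in keywords):
            hits[account] += 1
    return hits

# Although no specific user was ever targeted, the alert stream
# surfaces the most active accounts again and again.
print(flagged_accounts(POSTS, KEYWORDS).most_common())
```

In this toy run, one account generates three of the four alerts; in practice, a standing keyword alert behaves much like monitoring the accounts that use the keyword.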

The 2015 threat report for which ZeroFox was criticized specifically called for “continuous monitoring” of McKesson and Elzie. In an interview with The Intercept, Elzie stressed how incompetent the FBI’s analysis of social media was in her case. She described a visit the FBI paid her parents in 2016, telling them that it was imperative she not attend the Republican National Convention in Cleveland — an event she says she had no intention of attending, though troll accounts on Twitter bearing her name claimed she would be there to foment violence. (The FBI confirmed that it was “reaching out to people to request their assistance in helping our community host a safe and secure convention,” but did not respond to allegations that it was trying to discourage activists from attending the convention.)

“My parents were like, ‘Why would she be going to the RNC?’ And that’s where the conversation ended, because they couldn’t answer that,” Elzie said.

“I don’t think [ZeroFox] should be getting $14 million dollars [from] the same FBI that knocked on my family’s door [in Missouri] and looked for me when it was world news that I was in Baton Rouge at the time,” Elzie told The Intercept. “They’re just very unserious, both organizations.”

The FBI was so dependent on automated social media monitoring for ascertaining threats that the temporary loss of access to such software led to panic among bureau officials.

“This investigation found that FBI’s efforts to effectively detect threats on social media in the lead-up to January 6th were hampered by the Bureau’s change in contracts mere days before the attack,” the report says. “Internal FBI communications obtained by the Committee show how that transition caused confusion and concern as the Bureau’s open-source monitoring capabilities were degraded less than a week before January 6th.” 

One of the FBI communications obtained by the committee was an email from an FBI official at the Washington Field Office, lamenting the loss of Dataminr, which the official deemed “crucial.”

“Their key term search allows Intel to enter terms we are interested in without having to constantly monitor social media as we’ll receive notification alerts when a social media posts [sic] hits on one of our key terms,” the FBI official said.

“The amount of time saved combing through endless streams of social media is spent liaising with partners and collaborating and supporting operations,” the email continued. “We will lose this time if we do not have a social media tool and will revert to scrolling through social media looking for concerning posts.”

But civil libertarians have routinely cautioned against the use of automated social media surveillance tools not just because they place nonviolent, constitutionally protected speech under suspicion, but also for their potential to draw undue scrutiny to posts that represent no threat whatsoever. 

While tools like ZeroFox and Dataminr may indeed spare FBI analysts from poring over timelines, the companies’ in-house definitions of which posts are relevant or constitute a “threat” can be immensely broad. Dataminr has monitored the social media usage of people and communities of color based on law enforcement biases and stereotypes.

A May report by The Intercept also revealed that the U.S. Marshals Service’s contract with Dataminr had the company relaying not only information about peaceful abortion rights protests, but also web content that had no apparent law enforcement relevance whatsoever, including criticism of the Met Gala and jokes about Donald Trump’s weight.

The FBI email closes by noting that “Dataminr is user friendly and does not require an expertise in social media exploitation.” But that same user-friendliness can lead government agencies to rely heavily on the company’s designations of what is important or what constitutes a threat.

The dependence is mutual. In its Securities and Exchange Commission filing, ZeroFox says that “one U.S. government customer accounts for a substantial portion” of its revenue.

Additional reporting by Sam Biddle.

LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes
https://theintercept.com/2023/06/20/lexisnexis-ice-surveillance-license-plates/
Tue, 20 Jun 2023

ICE uses LexisNexis to track people’s cars, gather information on people, and make arrests for its deportation machine, according to a contract.

The legal research and public records data broker LexisNexis is providing U.S. Immigration and Customs Enforcement with tools to target people who may commit a crime — before any actual crime takes place, according to a contract document obtained by The Intercept. LexisNexis data then helps ICE track the purported pre-criminals’ movements.

The unredacted contract overview provides a rare look at the controversial $16.8 million agreement between LexisNexis and ICE, a federal law enforcement agency whose surveillance of and raids against migrant communities are widely criticized as brutal, unconstitutional, and inhumane.


“The purpose of this program is mass surveillance at its core,” said Julie Mao, an attorney and co-founder of Just Futures Law, which is suing LexisNexis over allegations it illegally buys and sells personal data. Mao told The Intercept the ICE contract document, which she reviewed for The Intercept, is “an admission and indication that ICE aims to surveil individuals where no crime has been committed and no criminal warrant or evidence of probable cause.”

While the company has previously refused to answer any questions about precisely what data it’s selling to ICE or to what end, the contract overview describes LexisNexis software as not simply a giant bucket of personal data, but also a sophisticated analytical machine that purports to detect suspicious activity and scrutinize migrants — including their locations.

“This is really concerning,” Emily Tucker, the executive director of Georgetown Law School’s Center on Privacy and Technology, told The Intercept. Tucker compared the contract to controversial and frequently biased predictive policing software, with the added alarm of ICE’s access to license plate databases. “Imagine if whenever a cop used PredPol to generate a ‘hot list’ the software also generated a map of the most recent movements of any vehicle associated with each person on the hot list.”

The document, a “performance of work statement” made as part of the contract with ICE, was obtained by journalist Asher Stockler through a public records request and shared with The Intercept. LexisNexis Risk Solutions, a subsidiary of LexisNexis’s parent company, inked the contract with ICE, a part of the Department of Homeland Security, in 2021.

“LexisNexis Risk Solutions prides itself on the responsible use of data, and the contract with the Department of Homeland Security encompasses only data allowed for such uses,” said LexisNexis spokesperson Jennifer Richman. She told The Intercept the company’s work with ICE doesn’t violate the law or federal policy, but did not respond to specific questions.

The document reveals that over 11,000 ICE officials, including within the explicitly deportation-oriented Enforcement and Removal Operations branch, were using LexisNexis as of 2021. “This includes supporting all aspects of ICE screening and vetting, lead development, and criminal analysis activities,” the document says.

In practice, this means ICE is using software to “automate” the hunt for suspicious-looking blips in the data, or links between people, places, and property. It is unclear how such blips in the data can be linked to immigration infractions or criminal activity, but the contract’s use of the term “automate” indicates that ICE is to some extent letting computers make consequential conclusions about human activity. The contract further notes that the LexisNexis analysis includes “identifying potentially criminal and fraudulent behavior before crime and fraud can materialize.” (ICE did not respond to a request for comment.)

LexisNexis supports ICE’s activities through a widely used data system named the Law Enforcement Investigative Database Subscription, or LEIDS. The contract document provides the most comprehensive window yet into what data tools might be offered to LEIDS clients. Other federal, state, and local authorities who pay a hefty subscription fee for the LexisNexis program could have access to the same powerful surveillance tools used by ICE.

ICE uses the LEIDS program for “the full spectrum of its immigration enforcement,” according to the contract document. LexisNexis’s tools allow ICE to monitor the personal lives and mundane movements of migrants in the U.S., in search of incriminating “patterns” and to help “strategize arrests.”

The ICE contract makes clear the extent to which LexisNexis isn’t simply a resource to be queried but a major power source for the American deportation machine.

LexisNexis is known for its vast trove of public records and commercial data, a constantly updating archive that includes information ranging from boating licenses and DMV filings to voter registrations and cellphone subscriber rolls. In the aggregate, these data points create a vivid mosaic of a person’s entire life, interests, professional activities, criminal run-ins no matter how minor, and far more.

While some of the data is valuable for the likes of researchers, journalists, and law students, LexisNexis has turned the mammoth pool of personal data into a lucrative revenue stream by selling it to law enforcement clients like ICE, who use the company’s many data points on over 280 million different people to not only determine whether someone constitutes a “risk,” but also to locate and apprehend them.

LexisNexis has long deflected questions about its relationship with ICE by citing the agency’s “national security” and “public safety” mission; the agency is responsible for both criminal and civil immigration violations, including smuggling, other trafficking, and customs violations. The contract’s language, however, indicates LexisNexis is empowering ICE to sift through a vast sea of personal data to do exactly what advocates have warned against: busting migrants for civil immigration violations, a far cry from thwarting terrorists and transnational drug cartels.


ICE has a documented history of rounding up and deporting nonviolent immigrants without any criminal history, whose only offense may be on the order of a traffic violation or civil immigration violation. The contract document further suggests LexisNexis is facilitating ICE’s workplace raids, one of the agency’s most frequently criticized practices, by helping immigration officials detect fraud through bulk searches of Social Security and phone numbers.

ICE investigators can use LexisNexis tools, the document says, to pull a large quantity of records about a specified individual’s life and visually map their relationships to other people and property. The practice stands as an exemplar of the digital surveillance sprawl that immigrant advocates have warned unduly broadens the gaze of federal suspicion onto masses of people.

Citing language from the contract, Mao, the lawyer on the lawsuit, said, “‘Patterns of relationships between entities’ likely means family members, one of the fears for immigrants and mixed status families is that LexisNexis and other data broker platforms can map out family relationships to identify, locate, and arrest undocumented individuals.”

The contract shows ICE can combine LexisNexis data with databases from other outside firms, namely PenLink, a controversial company that helps police nationwide request private user data from social media companies.


A license plate reader, center, and surveillance camera, top right, are seen at an intersection in West Baltimore, Md., on April 29, 2020.

Photo: Julio Cortez/AP

The contract’s “performance of work statement” mostly avoids delving into the numerous categories of data LEIDS makes available to ICE, but it does make clear the importance of one: scanned license plates.

The automatic scanning of license plates has created a feast for data-hungry government agencies, providing an effective means of tracking people. Many people are unaware that their license plates are continuously scanned as they drive throughout their communities and beyond — thanks to automated systems affixed to traffic lights, cop cars, and anywhere else a small camera might fit. These automated license plate reader systems, or ALPRs, are employed by an increasingly diverse range of surveillance-seekers, from toll booths to homeowners associations.

Police are a major consumer of the ALPR spigot. For them, tracking the humble license plate is a relatively cheap means of covertly tracking a person’s movements while — as with all the data offered by LexisNexis — potentially bypassing Fourth Amendment considerations. The trade in bulk license plate data is generally unregulated, and information about scanned plates is indiscriminately aggregated, stored, shared, and eventually sold through companies like LexisNexis and Thomson Reuters.

Though LexisNexis explored selling ICE its license plate scanner data, according to the FOIA materials, federal procurement records show Thomson Reuters Special Services, a top LexisNexis Risk Solutions competitor, was awarded a contract in 2021 to provide license plate data. (Thomson Reuters did not immediately respond to a request for comment.)

A major portion of the LEIDS overview document details ICE’s access to and myriad uses of license plate reader data to geolocate its targets, providing the agency with 30 million new plate records monthly. The document says ICE can access data on any license plate query going back years; while the time frames for different kinds of investigations aren’t specified, the contract document says immigration investigations can query location and other data on a license plate going back five years.


The LEIDS license plate bounty provides ICE investigators with a variety of location-tracking surveillance techniques, including the ability to learn which license plates — presumably including people under no suspicion of any wrongdoing — have appeared in a location of interest. Users subscribing to LEIDS can also plug a plate into the system and automatically get updates on the car as they come in, including maps and vehicle images. ICE investigators are allowed to place up to 2,500 different license plates onto their own watchlist simultaneously, the contract notes.

ICE agents can also bring the car-tracking tech on the road through a dedicated smartphone app that allows them to, with only a few taps, snap a picture of someone’s plate to automatically place them on the watchlist. Once a plate of interest is snapped and uploaded, ICE agents then need only to wait for a convenient push notification informing them that there’s been activity detected about the car.


Combining the staggering number of plates with the ability to search them from anywhere provides a potent tool with little oversight, according to Tucker, of Georgetown Law.

Tucker told The Intercept, “This begins to look a lot like indiscriminate, warrantless real-time surveillance capabilities for ICE with respect to any vehicle encountered by any agent in any context.”

In conjunction with Thomson Reuters plate-reader data, the information provided by LexisNexis creates a potential for powerful tracking. Vehicle ownership and registration information from motor vehicle departments, for instance, can tie specific people to plate numbers. In addition, LexisNexis sells many other forms of personal information that can be used to chart a person’s general location and movements over time: Data on jail bookings, home utilities, and other detailed property and financial records tie people to both places and others in a way that’s difficult if not impossible to opt out of.

LexisNexis’s LEIDS program is, crucially, not an outlier in the United States. For-profit data brokers are increasingly tapped by law enforcement and intelligence agencies for both the vastness of the personal information they collect and the fact that this data can be simply purchased rather than legally obtained with a judge’s approval.

“Today, in a way that far fewer Americans seem to understand, and even fewer of them can avoid, CAI includes information on nearly everyone,” warned a recently declassified report from the Office of the Director of National Intelligence on so-called commercially available information. Specifically citing LexisNexis, the report said the breadth of the information “could be used to cause harm to an individual’s reputation, emotional well-being, or physical safety.”

While the ICE contract document is replete with mentions of how these tools will be used to thwart criminality — obscuring the extent to which they end up being used to deport noncriminal migrants guilty of breaking only civil immigration rules — Tucker said the public should take seriously the inflated ambitions of ICE’s parent agency, the Department of Homeland Security.

“What has happened in the last several years is that DHS’s ‘immigration enforcement’ activities have been subordinated to its mass surveillance activities,” Tucker said, “which produce opportunities for immigration enforcement but no longer have the primary purpose of immigration enforcement.”


The federal government allows the general Homeland Security apparatus so much legal latitude, Tucker explained, that an agency like ICE is the perfect vehicle for indiscriminate surveillance of the general public, regardless of immigration status.

“That’s not to say that DHS isn’t still detaining and deporting hundreds of thousands of people every year. Of course they are, and it’s horrific,” Tucker said. “But the main goal of DHS’s surveillance infrastructure is not immigration enforcement, it’s … surveillance.

“Use the agency that operates with the fewest legal and political restraints to put everyone inside a digital panopticon, and then figure out who to target for what kind of enforcement later, depending on the needs of the moment.”

Update: June 21, 2023
This story has been updated to clarify that Thomson Reuters Special Services was contracted in 2021 to provide license plate scanner data for the LEIDS program used by ICE.

Update: June 23, 2023
This story has been updated to include specifics on the types of data LexisNexis makes available to ICE that could allow the agency to geolocate and track people.

Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest
https://theintercept.com/2023/06/13/jordan-world-bank-poverty-algorithm/
Tue, 13 Jun 2023

The algorithm used for the cash relief program is broken, a Human Rights Watch report found.

A program spearheaded by the World Bank that uses algorithmic decision-making to means-test poverty relief money is failing the very people it’s intended to protect, according to a new report by Human Rights Watch. The anti-poverty program in question, known as the Unified Cash Transfer Program, was put in place by the Jordanian government.

Having software systems make important choices is often billed as a means of making those choices more rational, fair, and effective. In the case of the poverty relief program, however, the Human Rights Watch investigation found the algorithm relies on stereotypes and faulty assumptions about poverty.


“The problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances,” the report found. “Its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.”

The program, known in Jordan as Takaful, is meant to solve a real problem: The World Bank provided the Jordanian state with a multibillion-dollar poverty relief loan, but it’s impossible for the loan to cover all of Jordan’s needs.  

Without enough cash to cut every needy Jordanian a check, Takaful works by analyzing the household income and expenses of every applicant, along with nearly 60 socioeconomic factors like electricity use, car ownership, business licenses, employment history, illness, and gender. These responses are then ranked — using a secret algorithm — to automatically determine who are the poorest and most deserving of relief. The idea is that such a sorting algorithm would direct cash to the most vulnerable Jordanians who are in most dire need of it. According to Human Rights Watch, the algorithm is broken.
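Because the real formula is secret, the following is only an illustrative sketch of the general technique, a weighted proxy-means test; the indicator names, weights, and households are invented, not Takaful's actual 57 indicators. It shows how collapsing a household's circumstances into a single score can rank a poorer household as "less poor."

```python
# All indicator names and weights below are invented for illustration;
# Takaful's actual indicators and their weights are secret.
HYPOTHETICAL_WEIGHTS = {
    "monthly_income": 1.0,       # income counts directly toward "less poor"
    "owns_car": 150.0,           # car ownership carries a heavy penalty
    "electricity_use_kwh": 0.5,  # utility use treated as a wealth proxy
}

def welfare_score(indicators):
    """Higher score means the household is ranked 'less poor'."""
    return sum(HYPOTHETICAL_WEIGHTS[k] * indicators.get(k, 0)
               for k in HYPOTHETICAL_WEIGHTS)

def rank_households(households):
    """Sort poorest-first; aid flows from the top until the budget runs out."""
    return sorted(households, key=lambda h: welfare_score(h["indicators"]))

applicants = [
    {"name": "A", "indicators": {"monthly_income": 90, "owns_car": 1,
                                 "electricity_use_kwh": 300}},
    {"name": "B", "indicators": {"monthly_income": 120, "owns_car": 0,
                                 "electricity_use_kwh": 80}},
]

# Household A has less income, but its car and its (possibly
# poverty-driven, e.g. poor insulation) electricity use outweigh that,
# so the single score ranks A as "less poor" than household B.
ranked = rank_households(applicants)
```

Under these made-up weights, the lower-income household is pushed down the list by its car and electricity use, the same pattern Human Rights Watch describes in the findings below.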

The rights group’s investigation found that car ownership seems to be a disqualifying factor for many Takaful applicants, even if they are too poor to buy gas to drive the car.

Similarly, applicants are penalized for using electricity and water based on the presumption that their ability to afford utility payments is evidence that they are not as destitute as those who can’t. The Human Rights Watch report, however, explains that sometimes electricity usage is high precisely for poverty-related reasons. “For example, a 2020 study of housing sustainability in Amman found that almost 75 percent of low-to-middle income households surveyed lived in apartments with poor thermal insulation, making them more expensive to heat.”

In other cases, one Jordanian household may be using more electricity than their neighbors because they are stuck with old, energy-inefficient home appliances.

Beyond the technical problems with Takaful itself are the knock-on effects of digital means-testing. The report notes that many people in dire need of relief money lack the internet access to even apply for it, requiring them to find, or pay for, a ride to an internet café, where they are subject to further fees and charges to get online.

“Who needs money?” asked one 29-year-old Jordanian Takaful recipient who spoke to Human Rights Watch. “The people who really don’t know how [to apply] or don’t have internet or computer access.”

Human Rights Watch also faulted Takaful’s insistence that applicants’ self-reported income match up exactly with their self-reported household expenses, which “fails to recognize how people struggle to make ends meet, or their reliance on credit, support from family, and other ad hoc measures to bridge the gap.”

The report found that the rigidity of this step forced people to simply fudge the numbers so that their applications would even be processed, undermining the algorithm’s illusion of objectivity. “Forcing people to mold their hardships to fit the algorithm’s calculus of need,” the report said, “undermines Takaful’s targeting accuracy, and claims by the government and the World Bank that this is the most effective way to maximize limited resources.”


The report, based on 70 interviews with Takaful applicants, Jordanian government workers, and World Bank personnel, emphasizes that the system is part of a broader trend by the World Bank to popularize algorithmically means-tested social benefits over universal programs throughout the developing economies in the so-called Global South.

Compounding the dysfunction of an algorithmic program like Takaful is the increasingly common, naïve assumption that automated decision-making software is so sophisticated that its results are unlikely to be faulty. Just as dazzled ChatGPT users often accept nonsense outputs from the chatbot because the concept of a convincing chatbot is so inherently impressive, artificial intelligence ethicists warn that the veneer of automated intelligence surrounding automated welfare distribution leads to a similar myopia.

The Jordanian government’s official statement to Human Rights Watch defending Takaful’s underlying technology provides a perfect example: “The methodology categorizes poor households to 10 layers, starting from the poorest to the least poor, then each layer includes 100 sub-layers, using statistical analysis. Thus, resulting in 1,000 readings that differentiate amongst households’ unique welfare status and needs.”


When Human Rights Watch asked the Distributed AI Research Institute to review these remarks, Alex Hanna, the group’s director of research, concluded, “These are technical words that don’t make any sense together.” DAIR senior researcher Nyalleng Moorosi added, “I think they are using this language as technical obfuscation.”

As is the case with virtually all automated decision-making systems, while the people who designed Takaful insist on its fairness and functionality, they refuse to let anyone look under the hood. Though it’s known that Takaful uses 57 different criteria to rank poverty, the report notes that the Jordanian National Aid Fund, which administers the system, “declined to disclose the full list of indicators and the specific weights assigned, saying that these were for internal purposes only and ‘constantly changing.’”

While fantastical visions of “Terminator”-like artificial intelligences have come to dominate public fears around automated decision-making, other technologists argue civil society ought to focus on real, current harms caused by systems like Takaful, not nightmare scenarios drawn from science fiction.

So long as the inner workings of Takaful and its ilk remain government and corporate secrets, the extent of those risks will remain unknown.

The post Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest appeared first on The Intercept.

]]>
https://theintercept.com/2023/06/13/jordan-world-bank-poverty-algorithm/feed/ 0
<![CDATA[Is Bluesky Billionaire-Proof?]]> https://theintercept.com/2023/06/01/bluesky-owner-twitter-elon-musk/ https://theintercept.com/2023/06/01/bluesky-owner-twitter-elon-musk/#respond Thu, 01 Jun 2023 16:15:14 +0000 https://production.public.theintercept.cloud/?p=429809 Here are some answers about the new social media network Bluesky that you don’t need an invite to see.

The post Is Bluesky Billionaire-Proof? appeared first on The Intercept.

]]>
For someone who hasn’t been on Twitter since it became a safe space for the far right under Elon Musk’s leadership, the new invite-only social media network Bluesky can feel like a nostalgic breath of fresh air. The vibes are great. A lot of old communities from Twitter that never quite made the jump to Mastodon — a harder-to-use federated social network — have shown up in Bluesky.

Like Mastodon, Bluesky is an open-source, decentralized social network. Unlike Mastodon, which is notoriously confusing for the uninitiated, Bluesky is simple to get started on. The user interface is clean and familiar to people accustomed to modern commercial apps. Bluesky embraces user control over timelines, both in terms of algorithmic choice — the Mastodon project is hostile to algorithms — and customizable content moderation.

There are other fundamental differences between the two projects. While Mastodon is a scrappy nonprofit, Bluesky PBLLC is a for-profit startup. And while Mastodon is a vibrant network of thousands of independent social media servers that federate with each other, Bluesky’s “decentralization” exists only in theory: So far there’s only one site that uses Bluesky’s decentralized AT Protocol, and that site is Bluesky Social.

It is mostly for these and related reasons that people on Mastodon get very defensive when Bluesky comes up. “Why are you helping oligarchs test their products? Are they paying you or do you do it out of sheer loyalty?” one stranger asked me when I posted about some of Bluesky’s creative moderation features that had recently dropped.

Amid the noise, though, there are genuine concerns about how Bluesky is operated and what the people behind it aim to do. It’s wise to remember that the company started off with $13 million of funding from pre-Musk Twitter, when Jack Dorsey, who is now at Bluesky, was CEO.

The history and the arrangement raise several questions: Who owns Bluesky PBLLC? What is the role of Dorsey, who famously tweeted about Musk’s purchase of Twitter that “Elon is the singular solution I trust”? What is Bluesky’s business model? What prevents another Elon Musk from buying Bluesky PBLLC and destroying it 10 years down the line? Many of the answers are out there — many even posted to Bluesky itself by its employees. Since Bluesky is still a private invite-only site, here are some of these answers for Bluesky skeptics to see.

Who Owns Bluesky?

“Bluesky, the company, is a Public Benefit LLC. It is owned by Jay Graber and the Bluesky team,” according to the site’s Frequently Asked Questions page. This is exactly what Jeromy Johnson, a former engineer for the distributed file system IPFS and a technical adviser to Bluesky who goes by Whyrusleeping, said when asked in early April.

Bluesky technical adviser Jeromy Johnson’s post about who owns Bluesky PBLLC.

Screenshot: Micah Lee/The Intercept

One user — who like nearly everyone else on the site was psyched to be essentially tweeting but without having to deal with Twitter — inquired who owns Bluesky. Why said that “the founding team holds the equity” and that Dorsey himself is not an owner. (You can verify that Why is part of the Bluesky team because of how self-verifying handles work in the AT Protocol; only people who control the domain name bsky.team are able to have handles like that.)
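That verification mechanism can be sketched in miniature. The snippet below is a hedged illustration based on the AT Protocol’s published handle-resolution conventions (a DNS TXT record under an `_atproto` prefix, or an HTTPS well-known file); the function name and the example handle are hypothetical, not part of any real client library. It shows the two places a client checks to confirm that a handle’s domain really vouches for an account:

```python
# Sketch of where an AT Protocol client looks when verifying that a
# handle's owner controls the matching domain. A handle resolves to the
# account's permanent DID via one of two documented locations.
# (Illustrative only; the function name is hypothetical.)

def handle_resolution_targets(handle: str) -> dict:
    """Return the DNS TXT record name and HTTPS URL a client would check."""
    handle = handle.lower().strip(".")
    return {
        # DNS method: a TXT record at _atproto.<handle> containing "did=did:..."
        "dns_txt": f"_atproto.{handle}",
        # HTTPS method: a well-known file served from the handle's own domain
        "https_well_known": f"https://{handle}/.well-known/atproto-did",
    }

targets = handle_resolution_targets("why.bsky.team")
print(targets["dns_txt"])           # _atproto.why.bsky.team
print(targets["https_well_known"])  # https://why.bsky.team/.well-known/atproto-did
```

Since only someone who controls bsky.team can publish those records, a handle under that domain doubles as proof of team membership.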

When asked for clarification about Bluesky’s ownership, Emily Liu, another member of the Bluesky team, told me that Bluesky has been offering employees equity as part of their compensation packages, as is a common practice with startups. She also confirmed that Bluesky PBLLC’s board consists of Graber, Dorsey, and Jeremie Miller, inventor of the open and decentralized chat protocol Jabber.

For burgeoning Twitter skeptics, this should be good news: a much better arrangement than if it were owned by Dorsey or, worse yet, if it were a subsidiary of Twitter. The arrangement also explains why Bluesky PBLLC appears on Dun & Bradstreet’s list of minority and women-owned businesses: Jay Graber, Bluesky PBLLC’s CEO and primary owner, is a woman of color.

What About Twitter’s Role?

In December 2019, Dorsey, who was Twitter’s CEO at the time, announced that the company was funding Bluesky, which he described as “a small independent team of up to five open source architects, engineers, and designers to develop an open and decentralized standard for social media.”

This ultimately turned into the independent company Bluesky PBLLC, incorporated in late 2021, with $13 million in initial funding from Twitter.

Does Twitter, with Musk at the helm, have any power over Bluesky now? As is the habit of other Bluesky team members, Graber explained the situation on Bluesky. According to Graber, she “spent 6 mo of 2021 negotiating for bluesky to be built in an org independent from twitter, and boy was that the right decision.” In response to another question, Graber confirmed that Bluesky doesn’t “owe” Twitter anything.

Jay Graber’s post explaining that Bluesky doesn’t owe Twitter anything.

Screenshot: Micah Lee/The Intercept

Bluesky PBLLC is 100 percent independent from Twitter and Elon Musk.

What is a Public Benefit LLC?

In the name Bluesky PBLLC, PB stands for Public Benefit. PBLLCs are a relatively new type of limited liability company designed for businesses that want to promote a general or specific public benefit rather than just make a profit.

When whistleblower Chelsea Manning asked why Bluesky chose to incorporate as a PBLLC, Graber explained her reasoning.

Jay Graber’s post explaining why Bluesky formed as a Public Benefit LLC.

Screenshot: Micah Lee/The Intercept

According to Graber, they chose PBLLC because it was fast to form and because “being Public Benefit means shareholders can’t sue us for pursing mission over profit.” The mission appears to be the design and promotion of the AT Protocol and its ecosystem of (eventually) other social networks that federate with Bluesky Social, along with the larger Bluesky developer community that has sprung up.

Liu, who answered some of my questions, did not respond when I asked for the exact language Bluesky PBLLC used to describe its public benefit mission when incorporating the company. She also didn’t say whether the company would publish its annual benefit reports; PBLLCs are required to create these each year, but those incorporated in Delaware, as Bluesky was, are not required to make them public.

In her email, Liu said, “We’re generally not taking interviews right now because we’re heads down on work.”

Bluesky’s Business Model

AT Protocol is open, and the code that powers Bluesky Social is open source. Yet Bluesky PBLLC is still a for-profit company. How do they plan to make money? “We’ll be publishing a blog post on our monetization plans in a few weeks, and we’ll share more then,” Liu told me.

In the meantime, the team has openly hinted at some of its potential plans on Bluesky. According to Why, advertising might play a role in the future.

Jeromy Johnson’s post about whether Bluesky will have ads.

Screenshot: Micah Lee/The Intercept

And Paul Frazee, an engineer who’s been livestreaming his Bluesky coding, hinted that the company may be considering some sort of paid subscription component. “[H]ypothetically speaking,” Frazee asked in a post, “if bluesky ever did a paid subscription thing, what would we call it.” Frazee was also quick to point out, though, that he’s not as terrible at business as Musk and wouldn’t use paid subscriptions to destroy the product — à la Twitter’s $8-a-month “verified” blue checkmarks.

Regardless of how Bluesky PBLLC eventually monetizes its product, if it gets its way, this monetization would only affect users of Bluesky Social. In the future, if you didn’t like the ads you were seeing in Bluesky, for example, the AT Protocol would allow you to take your account, including your handle, your followers, and all your posts, and move to a different social network you like better, so long as it also used the AT Protocol.

Resilient to Billionaires?

If we learned anything from Twitter over this last year, it’s that you can’t trust billionaires. By all accounts, the owners of Bluesky appear to be genuinely interested in remaking social media so that users have control instead of big tech companies like Twitter. But it’s possible that one day they could become seduced by obscene amounts of money to sell their shares of the company to an Elon Musk character who is hellbent on owning the libs. What would happen then?

Part of the problem with Twitter’s demise is that so many people have spent the last decade building up an audience there, making it very hard to finally pull the plug and start over from scratch somewhere else — even as several months of Musk’s policies have rapidly made the site at once more toxic and less useful.

The whole idea behind the AT Protocol, though, is that if you don’t like Bluesky Social for whatever reason, you can simply move to a rival social media site without losing your data or social graph. This is called “account portability,” and it’s baked into the core of the AT Protocol. It’s also a feature that Mastodon doesn’t fully support: It is possible to move your Mastodon account from one server to another and keep your followers, but only if your original server cooperates, and only if you’re willing to lose your old data.

So hypothetically, if a billionaire one day buys Bluesky PBLLC and ruins it, it won’t matter: Anyone who doesn’t like how Bluesky Social is run can simply switch to a rival service without losing their post history or their followers. When Musk took over Twitter and started bringing back neo-Nazis and banning antifascists, imagine if you could have simply ported your account over to another social media site and then just kept tweeting like normal. That’s the promise of the AT Protocol.
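The design principle behind that promise can be shown with a toy model. This is not actual AT Protocol code (the class, fields, and identifiers below are invented for illustration); it only demonstrates the idea that relationships reference a permanent DID rather than a server, so changing hosts breaks nothing:

```python
# Toy model (hypothetical names, not real atproto code) of why account
# portability preserves the social graph: accounts are keyed by a
# permanent DID, and the hosting server is just a mutable attribute.

from dataclasses import dataclass, field

@dataclass
class Account:
    did: str        # stable identifier, e.g. "did:plc:alice1"; never changes
    handle: str     # human-readable name, backed by a domain
    host: str       # the server currently storing the account's data
    follows: set = field(default_factory=set)  # follows reference DIDs, not hosts

def migrate(account: Account, new_host: str) -> None:
    """Move an account to a new server; the DID, and every follow, survives."""
    account.host = new_host

alice = Account(did="did:plc:alice1", handle="alice.example.com",
                host="bsky.social", follows={"did:plc:bob2"})
migrate(alice, "rival.example")
assert alice.did == "did:plc:alice1"      # identity unchanged
assert "did:plc:bob2" in alice.follows    # social graph intact
```

Contrast this with an architecture that keys relationships to server-specific usernames, where every migration orphans the social graph.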

Account portability is exactly how, once it begins to federate with other servers, Bluesky hopes to avoid the confusion that Mastodon is famous for. As Frazee explained, keeping Bluesky easy to use is a top priority.

Bluesky engineer Paul Frazee’s posts about emphasizing a good user experience.

Screenshot: Micah Lee/The Intercept

Bluesky’s usability plan is simple: When you install the app and create an account, you’ll get an account on the default server, Bluesky Social (unless you already have a preference). Then, at any point after that, you can simply move your account to any other server that you prefer.

Of course, account portability is only possible if there are other AT Protocol sites to port your account to, and so far, Bluesky Social is the only one.

“Right now, Bluesky is the only option because we haven’t launched federation yet, but we’ll be starting with a sandbox environment for federation soon,” Liu told me, mentioning a recent blog post that gives an overview of how it will work. “Other companies are working on Bluesky and atproto integrations already, and when the federation sandbox launches, we’ll work with community developers and external teams to build more on the AT Protocol.”

It’s too early to tell whether Bluesky will succeed, but if it works out the way the team hopes, social media users will have far more power and tech companies — and the billionaires who own them — will have far less.

The post Is Bluesky Billionaire-Proof? appeared first on The Intercept.

]]>
https://theintercept.com/2023/06/01/bluesky-owner-twitter-elon-musk/feed/ 0
<![CDATA[What We’re Reading and Watching]]> https://theintercept.com/2023/05/28/book-recommendations-summer-reading/ https://theintercept.com/2023/05/28/book-recommendations-summer-reading/#respond Sun, 28 May 2023 10:00:00 +0000 https://production.public.theintercept.cloud/?p=429399 Book recommendations and more from Intercept staffers.

The post What We’re Reading and Watching appeared first on The Intercept.

]]>
Fiction

“Nights of Plague,” Orhan Pamuk
Like many other people during the pandemic, I searched for books that could help me understand the impact of a mass disease outbreak on society. More than any book of epidemiology or history, however, I found this novel by Turkish writer Orhan Pamuk, about an outbreak of plague on a fictional Mediterranean island, to be the most enlightening about how disease can sap the human spirit and break open divisions within a society. His writing is darkly humorous and full of pathos — highly recommended for anyone looking for a novel to immerse themselves in this summer. – Murtaza Hussain

“Cuatro Manos,” Paco Ignacio Taibo II
The novel “Cuatro Manos” was published in 1997 and features major historical characters and events from 20th century Latin America. Taibo, a renowned author and activist in Mexico, guides us through a story of two journalists in the 1980s. They begin to investigate unpublished and undiscovered works by Russian revolutionary Leon Trotsky, written during his exile in Mexico City. The book jumps between the past and the present, and the two journalists’ travels through Latin America overlap with drug traffickers, a Spanish anarchist, a Bulgarian communist, and a shady CIA agent. It’s a light, fun novel, but it may require the reader to stop every few pages and independently research the historical events Taibo narrates, like the CIA’s alleged involvement in the killing of Salvadoran poet Roque Dalton. – José Olivares

“Harrow,” Joy Williams
On the banks of a fetid lake called Big Girl, a cadre of aging rebels plots acts of ecoterrorism. They don’t consider themselves terrorists, though, reserving that appellation for bankers and war-mongers, “exterminators and excavators … those locusts of clattering, clacking hunger.” You can hardly blame them. In this vision of a near-future beset by ecological collapse, oranges and horses are long gone, but Disney World has “rebooted and is going strong.” A girl named Khirsten, or Lamb, who may or may not have been resurrected as an infant, stumbles upon the group after her mother disappears and her boarding school abruptly shuts down.

This is the rough plot of “Harrow” by Joy Williams, but the plot is not really the point. Williams is a worldbuilder, crafting mood and meaning out of layered fragments. Her writing is often called “experimental,” but if anything, oblique prose is the truest way to capture life under the yoke of apocalypse, the dizzying absurdity of deciding to forsake Earth for profit. Sometimes, lucid revelations peek through — “I think the world is dying because we were dead to its astonishments pretty much. It’ll be around but it will become less and less until it’s finally compatible with our feelings for it” — though for the most part, the world of “Harrow” is a labyrinth of decay. But don’t be mistaken: The book is very funny. Apocalypse is a slow creep, and while the Earth might not end with a bang, at least in “Harrow,” it ends with one final, reverberating laugh. – Schuyler Mitchell

“Red Team Blues,” Cory Doctorow
I just started “Red Team Blues,” and I can’t put it down. I’ve always loved Cory Doctorow’s novels, and this one is no exception. The protagonist, a 67-year-old retired forensic accountant who lives alone in his RV called the Unsalted Hash, spent his career tracking down assets of the ultra-rich by unwinding their shady networks of shell companies. He took one final job from an old friend and found himself both incredibly rich and in a world of trouble, trying to escape with his life. This book is a cryptocurrency techno-thriller (full of characters who are skeptical of crypto bros and insist that “crypto means cryptography”), and it’s full of money laundering, tax havens, lawyers for the 1 percent, organized crime and murders, hacking and open source intelligence, and so much more. This is the first book in a new series that I definitely plan on reading as they come out. – Micah Lee

“In Memory of Memory,” Maria Stepanova
Appropriate to its contents: the title is so easy to remember, yet it always escapes memory. – Fei Liu

“Long Way Down,” Jason Reynolds
I don’t often reach for poetry, but I had 15 minutes before I boarded a flight and had neglected to pack a book. The cover was riddled with awards and, most importantly, it was right next to the checkout. “Long Way Down” captures an emotional journey of grief built around a young man’s descent in an elevator after his brother is shot and killed. The book is an intense, quick read (I finished before we landed), written in captivating staccato narrative verse. The anxiety was palpable and fierce, and the structure truly enhances the reading experience. I found myself reflecting on Reynolds’s motivation for structural decisions, just as much as his word choice. Overall, “Long Way Down” is a powerful study in the traumatic and lasting impact of violence on individuals and communities. – Kate Miller

“The Melancholy of Resistance,” László Krasznahorkai
I’ve been — very slowly! — reading “The Melancholy of Resistance” by László Krasznahorkai, a Hungarian writer best known in the U.S. for Béla Tarr’s grueling film adaptation of his novel “Sátántangó.” Written during the collapse of Eastern Bloc communism, “Melancholy” tells the surreal tale of a rubbish-strewn town visited by a mysterious circus exhibiting only the body of a giant whale, which slowly incites the townspeople to madness. As the town’s petty tyrants scheme to use the chaos to their advantage, Krasznahorkai’s novel becomes a striking parable about the appeal of fascism in uncertain times, while his darkly funny stream-of-consciousness prose captures the devilish internal logic of anxiety. “His followers know all things are false pride, but they don’t know why.” Sound familiar? – Thomas Crowley

“The Actual True Story of Ahmed and Zarga,” Mohamedou Ould Slahi
I found myself laughing, loudly, overcome with appreciation and awe during the first few pages of my friend Mohamedou Ould Slahi’s first novel, “The Actual True Story of Ahmed and Zarga.” Mohamedou opens the book by swearing “on the belly button of my only sister” that the story we are about to hear is a thousand percent true and that we must have already heard it before. What begins to unfold is a mystical tale so rich in detail, tradition, Mauritanian culture, and moral guidance that you feel Mohamedou himself is speaking all this to you, and only you, while slurping his hot tea and conjuring the tale with his hands. It’s impossible to put the pages down once you start across the desert with Ahmed, battling djinns, dreams, snakes, and the changing ways of the world as he races to find his missing camel named Zarga. While Mohamedou is best known for captivating the world with his best-selling memoir “Guantánamo Diary” and as the subject of the film “The Mauritanian,” both about his time wrongly imprisoned and tortured at GTMO, it is this stunning novel, rich with wordplay, wit, and unwavering conviction, that lets us know his true heart. – Elise Swain

“The Lathe of Heaven,” Ursula K. Le Guin
Have you ever woken up from a dream so intense that it affected you in real life? George Orr’s dreams change lived reality, so he wants to stop sleeping, and the only person who can cure him is his misguided psychiatrist whose ambitions to make their dystopia, and his own position in it, “better” means that Orr can’t be treated just yet. Le Guin’s topical themes of techno-utopianism, alternate realities, collective false memories, living nightmares, consent, and more make me forget that it was published in 1971. The novel also has aliens, untranslatable words, a Beatles song, plague history, and Hollywood-thriller plot scaffolding (a cinematic climax and almost forced coupling of the passive protagonist who falls in love with the lawyer helping him). Two video artists made a film adaptation in 1980 on a shoestring budget — with Le Guin’s active involvement — that was produced by NYC public television and aired on PBS. I haven’t watched it yet (it’s available on YouTube), but in my dream soundtrack for “The Lathe of Heaven,” I hear the late Pauline Anna Strom’s prelude-to-a-portal “Marking Time” over the opening credits. – Nara Shin

Nonfiction

“The Undertow: Scenes From a Slow Civil War,” Jeff Sharlet
I’ve been reading Jeff Sharlet’s reporting on the varieties of Christian authoritarianism for more than 20 years. In books such as “The Family” and “C Street,” Sharlet exposed the political ambitions and hidden influence of shadowy and well-financed Christian extremists. Looking back, after the Trump presidency, his writings now seem prophetic. In “The Undertow,” Sharlet sets out to understand the movement that coalesced, under Donald Trump, into full-blown messianic fascism. How do we stop this slow-motion slide toward political violence, the strange lure of civil war?

“The Last Honest Man,” James Risen’s political biography of Sen. Frank Church, should be required reading for anyone who wants to understand the dangers of the national security state. Risen’s book might also illuminate the underlying causes of the national pathology described in “The Undertow.” – Roger Hodge

“Black Women Writers at Work,” Claudia Tate
In this powerhouse of a collection, Claudia Tate interviews iconic Black women writers, from Gwendolyn Brooks to Ntozake Shange, about their process, inspirations, critiques, and audience. I was personally thrilled to read about the differences between the structures of their writing processes, as well as their thoughts on craft — it’s a trove of knowledge for any writer, poet, or playwright. Black women writers are often lumped together as a monolith; this book breaks apart that belief throughout every single interview. – Skyler Aikerson

“A World Without Soil,” Jo Handelsman
No time to write! Only to read and garden! – Fei Liu

“Nineteen Reservoirs: On Their Creation and the Promise of Water for New York City,” Lucy Sante
Best known for “Low Life,” her masterpiece history of low-class New York City’s metaphorical underground, Lucy Sante has of late turned her sights on the underwater. Specifically, in “Nineteen Reservoirs,” she tells the stories of upstate New York valleys and ravines, hamlets and farms, all drowned one by one to expand the water supply of the growing metropolis downstate. Sante writes with the verve we expect from her, transmitting an astounding amount of rapid-fire detail with delectable prose that keeps the book humming and makes for easy reading. – Ali Gharib

“Mussolini’s Grandchildren,” David Broder
When it became clear last year that my country was about to elect its most right-wing government since Benito Mussolini gave fascism its name, I found it hard to explain to non-Italians how we had gotten there, so I pointed them to David Broder’s words instead. After speaking with Broder for a story about how new Prime Minister Giorgia Meloni had inspired a surge of far-right threats and attacks against journalists and critics, I picked up his book, “Mussolini’s Grandchildren,” a lucid if terrifying history drawing the direct and rather explicit line between Mussolini’s regime and Meloni’s political triumph. It’s a history even many Italians watched unfold almost without noticing, deluded by the notion that fascism is for the history books alone, or maybe just wishing to look the other way. It’s also by no means an Italian story alone. – Alice Speri

“Chaos: Charles Manson, the CIA, and the Secret History of the Sixties,” Tom O’Neill
I am reading “Chaos” alongside “Women in Love” by D. H. Lawrence. I recommend listening to The Fucktrots while reading. – Daniel Boguslaw

“Strange Tapes” zine
DIY zines oft offer a kaleidoscopic peek down the subcultural spiral. No matter how fringe a particular hobby may look, the deeper you dive into a given genre, the more singular the subject matter becomes. Strange Tapes is a zine devoted to the celebratory archaeology of unearthing VHS ephemera: analog jetsam that’s washed up on the shores of thrift stores and swap meets, or in the dregs of dusty attics and musty basements. The tapes covered range from promotional and instructional videos, to recorded home movies and Z-grade filmmaking efforts. Interspersed with reviews of the tapes are interviews with independent filmmakers, collectors, and other personalities. “Strange Tapes” is a zine for those who marvel at the sheer range of humanity’s knowledge base, and the accompanying desire to share those singular skill sets with the world at large, whether those proficiencies are in the realm of ocular yoga or canine choreography.  – Nikita Mazurov

“Care Work: Dreaming Disability Justice,” Leah Lakshmi Piepzna-Samarasinha
A love letter to the sick and disabled queer and trans community of color in Canada and beyond. This collection of essays discusses everything from chronic suicidal ideation, accessible queer spaces, invisible femme labor, tips for sick and disabled artists who are traveling, and much, much more. Listening to this audiobook (narrated by the author) was such a beautiful, impactful experience; Piepzna-Samarasinha writes with sizzling rage and deep love for their communities in a way that will set you on fire. – Skyler Aikerson

“Arabiyya: Recipes from the Life of an Arab in Diaspora,” Reem Assil
For the past several years, I’ve been learning to recreate the Syrian dishes I ate growing up, begging my mom to commit to writing (or at least a voice note) the recipes she knows via muscle memory and FaceTiming her when something just doesn’t look right. More recently, I’ve sought to expand my repertoire of dishes from Syria and the broader Levant by digging into cookbooks written by chefs from the region. “Arabiyya” by Reem Assil is the most recent addition to my collection, which also includes “The Palestinian Table” by Reem Kassis and “Feast: Food of the Islamic World” by Anissa Helou.

Assil, who was born in the United States to a Syrian father and Palestinian mother, weaves personal stories about her food experiences as a diaspora Arab with recipes that run the gamut from pickled vegetables to a slow-cooked lamb shoulder. I’ve so far attempted her shawarma mexiciyya (Mexican shawarma) — a fusion dish that she describes in English as al pastor-style red-spiced chicken — and her kafta bil bandoura, or meatballs in Arab-spiced tomato sauce. The shawarma recipe features my all-time favorite spice, Aleppo pepper, which I threw into the meatballs as well. (I don’t quite have my mom’s nafas yet, but I’m slowly but surely trying to wean myself off the dictates of a written recipe.) This summer, I’m looking forward to trying my hand at making saj, a flatbread named for the dome-shaped griddle it is prepared on, and musakhan, a Palestinian dish that involves sumac-spiced chicken. – Maryam Saleh

“How to Stand Up to a Dictator,” Maria Ressa
Maria Ressa’s new book, “How to Stand Up to a Dictator,” is both a memoir by a winner of the Nobel Peace Prize and a stirring call to action against the toxic power of social media companies and the autocrats they enable around the world. – James Risen

Films

“Joyland,” Saim Sadiq
I’ve thought about “Joyland” at least once a day since it opened in New York earlier this month. I’ve already seen it twice — that’s how obsessed I am with this gorgeous, emotional tour de force of a film. Haider is an unemployed, acquiescent young man who lives in a joint household in Lahore with his free-spirited wife, his conventionally masculine older brother and his family, and his elderly father, the family patriarch. Haider finds a job as a backup dancer for a fierce trans burlesque performer, on whom he has an instant crush. What happens from there sends a ripple effect through his family, as they each strain against the stifling scripts of gender and sexuality that they impose on themselves and each other.

“Joyland” is a deeply human story about untangling desires from obligations to embody the most honest version of ourselves for a chance to experience connection as we are. It’s a movie you feel just as much as you watch. – Rashmee Kumar

“Return to Seoul,” Davy Chou
This movie is so unusual, a mixture of a transnational adoption documentary and a film noir, created by the French director Davy Chou. “Return to Seoul” follows the journey of a Korean adoptee played by the elusive Park Ji-min, who wasn’t an actor at all until taking the lead role in this film. Park’s character decides on a whim to return to the country where she was born, and the result is a film that goes sideways at every issue and scenario it lands on. Yes, it’s the saga of an adoptee who seeks out her birth parents, but that’s just some of what happens. It unfolds with visual and existential twists you don’t expect, keeping you in suspense until the last note. It also provides an imaginative variation on the discourse about the emotional dislocation that foreign adoption can involve. If you want to know more about that after the credits roll, I highly recommend the landmark “Adopted Territory,” written by anthropologist (and friend) Eleana J. Kim. – Peter Maass

The post What We’re Reading and Watching appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/28/book-recommendations-summer-reading/feed/ 0
<![CDATA[Joe Manchin Rents Office Space to Firm Powering FBI, Pentagon Biometric Surveillance Center]]> https://theintercept.com/2023/05/23/joe-manchin-rents-office-space-to-firm-powering-fbi-pentagon-biometric-surveillance-center/ https://theintercept.com/2023/05/23/joe-manchin-rents-office-space-to-firm-powering-fbi-pentagon-biometric-surveillance-center/#respond Tue, 23 May 2023 10:00:00 +0000 https://theintercept.com/?p=425788 Tygart Technology was founded by Manchin’s daughter in 1991, and it’s headquartered in the same building as his coal company.

The post Joe Manchin Rents Office Space to Firm Powering FBI, Pentagon Biometric Surveillance Center appeared first on The Intercept.

]]>
After killing Joe Biden’s audacious Build Back Better legislation in 2021 and emerging as a constant roadblock to Democrats’ sweeping climate agenda, Sen. Joe Manchin saw his sprawling coal empire become the focus of intense scrutiny for its impact on the citizens and ecosystem of northern West Virginia. What went unnoticed at the time was another company the senator quietly profits from, housed in the very same building where his coal company Enersystems is headquartered, with an even greater reach.

Manchin has said in recent weeks that he won’t rule out running to replace Biden in the 2024 presidential election. He maintains a cozy relationship with the moderate political nonprofit No Labels, which has raised tens of millions of dollars to run a third-party presidential ticket in 2024, and he himself has raised millions from special interest groups cheering on his intransigence. But while Manchin has long cultivated the image of a liberty-loving champion, his financial ties to a biometric surveillance company stand in sharp contrast.

Related

As Manchin Eyes Presidential Run, His Allies at No Labels Face Mounting Legal Challenges

For decades, Manchin has been the landlord of Tygart Technology, a lucrative biometric surveillance firm co-founded in 1991 by his then-23-year-old daughter Heather Bresch, along with her late husband Jack Kirby and Manchin’s brother-in-law, Manuel Llaneza.

According to Tygart Technology’s website, its mission focuses on “leveraging technology to support National Security.” Since at least 1999, the company has operated out of the Manchin Professional Building, where Manchin has collected tens of thousands of dollars in rent over the years, according to deed records, patent applications, and financial disclosures recording rent collection from the enterprise.

The firm received large contracts from the West Virginia state government in the years that Manchin served as secretary of state and then as governor. In more recent years, Tygart has secured tens of millions of dollars in federal contracts from law enforcement and defense agencies to supply biometric data collection services to intelligence operations in West Virginia and across the country.

Bresch has held no financial interests in the company since her divorce from Kirby in 1999, according to reporting from the Charleston Gazette, but she is still registered as an agent for the company, according to West Virginia Secretary of State records. Kirby died in 2019, but Tygart’s new president also has ties to the senator. John Waugaman served on Manchin’s transition team for governor, according to the company’s website, and has donated some $12,000 to Manchin in the past decade. Neither a spokesperson for Manchin nor Tygart Technology responded to The Intercept’s questions.

While the Pentagon and contractors like Tygart justify mass biometric surveillance in the name of national security, both civil liberties advocates and members of Congress have moved to head off what they view as excessive and dangerous data collection.

Federal lawmakers, led by Sen. Ed Markey, D-Mass., have introduced legislation since 2021 to ban biometric surveillance by the federal government, citing civil liberties advocates’ concerns about racial bias in biometric technology and the mass collection of personal data. Manchin has not supported this year’s bill or its previous iterations.

“The year is 2023, but we are living through 1984,” Markey said during the bill’s reintroduction this year. “Between the risks of sliding into a surveillance state and the dangers of perpetuating discrimination, this technology creates more problems than solutions. Every American who values their right to privacy, stands against discrimination, and believes people are innocent until proven guilty should be concerned. Enacting a federal moratorium on this technology is critical to ensure our communities are protected from inappropriate surveillance.”


John Davisson, director of litigation and senior counsel at the Electronic Privacy Information Center, or EPIC, said Manchin’s connection to the mass collection of biometric data — which he described as an “alarming activity” — is cause for concern. “Particularly when in the hands of law enforcement, mass biometric technology poses a heightened risk of civil liberties violations,” he told The Intercept. “For a senator to be attached to an industrial-scale biometrics operation used in a wide range of criminal justice contexts is unsettling.”

Tygart received its first contract from West Virginia in 2000, eventually billing the state for more than $6 million, including web service subcontracts worth tens of thousands of dollars. In 2006, the state auditor launched an investigation into the company as part of a larger audit request by then-Secretary of State Betty Ireland, embroiling Manchin, then governor, in a no-bid contract scandal for services rendered by Tygart Technology.

The audit ultimately found that Tygart’s accounting procedures were error-ridden, but the auditor nonetheless ruled that “on the surface, there seems to be no criminal intent.” The majority of contracts involving Tygart came in under $10,000, the threshold required under state law for a competitive bidding process. In the months following the audit, Manchin signed House Bill 4031, which raised the cap for no-bid contracts from $10,000 to $25,000.

By 2009, Tygart was picking up federal contracts. The company has raked in over $117 million in government contracts to provide technology and software products to a host of federal agencies, including the FBI, the Department of Defense, the U.S. Army, the General Services Administration, and the Department of Health and Human Services. The company’s federal contracts peaked in 2015, when it brought in $19.1 million. So far this year, Tygart has $4.8 million worth of business with federal agencies.

The firm’s Pentagon contracts include providing support for an Automated Biometric Information System, or ABIS, which stores and queries millions of people’s biometric files collected both domestically and abroad.

At the same time that Tygart was doing business with the Defense Department, Manchin was touting the Pentagon’s biometrics surveillance work and warning about looming budget cuts.

“I am a strong supporter of the work done at this facility,” Manchin said during a 2013 Armed Services Committee hearing, referring to a biometrics center in Clarksburg, West Virginia. “More than 6,000 terrorists have been captured or killed as a direct result of the real-time information provided by ABIS to [Special Operations Forces] working in harm’s way. However, the funding for this work will run out on April 4, 2013.”

Manchin went on to vote for the Bipartisan Budget Act of 2013 to raise limits on discretionary appropriations, which allowed for more funding for the Clarksburg facility.


Two years later, Manchin was cheering on investments in biometric surveillance in his home state. In 2015, he welcomed attendees to the Identification Intelligence Expo, which was held in West Virginia for the first time. Tygart was among the attendees, which also included representatives from multiple divisions of the FBI and major defense contractors like Northrop Grumman. That same year, the FBI opened a new biometric technology center on its Clarksburg campus, bringing the Defense Department and FBI’s biometric operations under one roof. “I think we all have to realize it’s a very troubled world we live in,” Manchin said during the ribbon cutting. “We’re going to have to continue to stay ahead of the curve and be on the cutting edge of technology.”

According to a report from the Government Accountability Office, the joint FBI/Defense Department facility can screen an individual through both the military’s massive ABIS and the FBI’s sprawling fingerprint database, known as IAFIS. “The IAFIS database includes the fingerprint records of more than 51 million persons who have been arrested in the United States as well as information submitted by other agencies such as the Department of Homeland Security, the Department of State, and Interpol,” the report reads.

Tygart Technology supplies the hardware used to collect biometric data processed in Clarksburg through its MXSERVER and MatchBox technologies, a contract worth tens of millions of dollars. These facial recognition products are used to search photographic and video databases and monitor surveillance camera streams in real time.

The technology allows law enforcement officials to track a person’s movement, scan through social media to find people, and identify individuals “using smart phones — including the ability to quickly scan crowds for threats using a mobile device’s embedded video camera.”

That the FBI and the Defense Department are jointly using such technologies is a recipe for violating Americans’ civil liberties, said Davisson of EPIC. “Anytime you’ve got a center like this that’s combining these two operations of criminal enforcement and national security,” he said, “there’s a risk and almost a certainty that the center is going to be blurring lines and running afoul of limitations on what the FBI is allowed to do in a law enforcement context.”


]]>
<![CDATA[Profits Skyrocket for AI Gun Detection Used in Schools — Despite Dubious Results]]> https://theintercept.com/2023/05/19/ai-gun-detector-evolv-stock/ https://theintercept.com/2023/05/19/ai-gun-detector-evolv-stock/#respond Fri, 19 May 2023 14:00:00 +0000 https://production.public.theintercept.cloud/?p=428257 Amid aggressive marketing to schools, Evolv announced it had doubled its first-quarter earnings compared to last year.

The post Profits Skyrocket for AI Gun Detection Used in Schools — Despite Dubious Results appeared first on The Intercept.

]]>
“If you are serious about our systems, then let’s jump on a quick call this week,” Anthony Geraci, a sales representative of Evolv Technology, wrote in an email to New Mexico’s Clovis Municipal Schools last November. “This is not a pressure tactic.”

There was, however, pressure: If Clovis didn’t purchase the systems by the end of the year on a four-year agreement, Geraci explained, the prices would escalate. “We just want you to know this option exists and don’t want you upset when you hear that others have taken advantage of this option,” Geraci wrote.

The tactic eventually worked. It would be another high-priced sale for Evolv, a leading company in the world of weapons detection systems that use artificial intelligence.

Local media reported in March that Clovis bought the technology for $345,000, funded by the Federal CARES Act, a Covid-19 relief measure. Evolv, though, didn’t announce the sale until May 9 — timed so that the company could promote the purchase in its first-quarter earnings release.

Earlier in May, before the announcement, Evolv officials had asked Clovis if they could tout the sale in their earnings report, according to internal emails. And on May 10, Evolv named the purchase — alongside half a dozen other school districts — in a webcast.

Evolv, a publicly traded company, had much to brag about. Despite public reports that Evolv had overpromised on its technology’s efficiency and effectiveness, the company’s aggressive marketing to schools paid off: Evolv announced it had doubled its earnings compared to last year’s first quarter and saw its stock price rise 167 percent over the past year.

“The salespeople will use whatever leverage they have, and there is a real, genuine fear about weapons and shootings in America today,” said Andrew Guthrie Ferguson, a professor of law at American University and an expert on surveillance. “It plays right into the salesperson’s game plan to market fear as hard as they can.”


Evolv has come under intense criticism for the faults in its technology, including incidents in which guns and knives bypassed the system in schools — with, in two cases, students being stabbed. Nonetheless, the company announced $18.6 million in total revenue for the first quarter of 2023, an increase of 113 percent compared to the first quarter last year, beating its prior estimates.

CEO Peter George also said Evolv would add at least one more school building daily in the next three months to its roster of clients.

“Weapons detection is not perfect, but it adds a layer of protection that can help deter, detect and mitigate risk,” said Dana Loof, Evolv’s chief marketing officer, in a statement to The Intercept. “We are a partner with our customers and work with them every step of the way towards helping to create a safer environment.”

With its star status and value rising, the company recently hired former Tesla product leader Parag Vaish as chief digital product officer.

“Just like digital advances can bring civilians to space, drive cars autonomously, and help address challenges in climate change,” George said, “developments in artificial intelligence can be applied to the gun violence epidemic gripping the country.”

Public records, obtained by the research publication IPVM and shared with The Intercept, reveal how far the company goes to persuade schools to buy, and advertise, its technology.

In internal emails to the Clovis school district, Evolv sent the school a plan recommending the use of conveyor belts alongside the AI system — offered as a means of efficiency, but in effect rendering Evolv’s technology an auxiliary for more traditional security procedures.

Evolv also sent the district marketing materials, including template letters to send to parents to notify them of the technology.

“One of the things we have seen in the past year is that customers who opt to not make an announcement are oftentimes subject to misinformation by local media and critics,” Beatriz Almeida, Evolv’s marketing director, wrote to Clovis, “and we like to get ahead of these potential situations by helping you craft the story and tell your side before any misconceptions can occur.” (The Clovis school district did not respond to a request for comment.)

Experts say that Evolv’s pressure on schools to correct the narrative could be harmful. “Labeling facts about Evolv’s detection capabilities as ‘misinformation’ distorts the public’s understanding of what Evolv can and cannot do,” said Don Maye, head of operations at IPVM.

Loof, from Evolv, said, “We strive to be transparent with our customers and security professionals about our technology’s capabilities and that our focus is on weapons that could cause mass casualty.”

Prior reports have illustrated how easily metal objects set off Evolv’s alerts, including an incident in which the system misidentified a lunch box as a bomb, but Clovis went ahead with the Evolv collaboration. And officials with the schools agreed to collaborate on the Evolv press release announcing the sale, according to internal emails.

“Evolv gives us the security we need,” Loran Hill, senior director of operations at Clovis, said in Evolv’s press release, “and since it can tell the difference between threats and most of the everyday items people bring into school, our students’ routines won’t change when they come to school, keeping anxiety levels low and the focus on education.”

Related

AI Tries (and Fails) to Detect Weapons in Schools

The public documents obtained by The Intercept indicate that everything was perhaps not as smooth as advertised. The Intercept has previously reported that research shows metallic objects repeatedly trigger alerts, despite Evolv’s claim that it’s not a metal detection system.

The sensitivity to metal came up for the Clovis school district. In an email earlier this month, Hill herself discussed the system’s use during the recent prom. “We all learnt a lot about clutch purses,” Hill wrote.

“Honestly didn’t think about those,” Mark Monfredi from the integrator Stone Security responded. “But being the same construction as the metal eye glass cases” — apparently another item that set off false alarms — “it makes sense.” (Stone Security did not respond to a request for comment.)

Despite Evolv’s initial pitch of efficiency to the school district — the company said a single-lane system could scan up to 2,000 children an hour — other Evolv internal documents sent to the school outline ways to speed up the scanning process. The two options are “The Pass Around Method” for sending students around the machines and “Conveyer Belt Addition,” the latter resembling airport security checkpoints. Both options require students to remove laptops or other “nuisance alarm items” from their bags that may set off the system.
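The efficiency pitch is easy to put in concrete terms. A back-of-the-envelope sketch — the 2,000-per-hour rate is Evolv’s claim; the enrollment figure below is a hypothetical illustration, not a Clovis statistic:

```python
# Back-of-the-envelope check on Evolv's claimed throughput.
# The 2,000-students-per-hour rate is the company's claim; the
# 600-student enrollment is a hypothetical example for illustration.

def seconds_per_student(rate_per_hour: float) -> float:
    """Average scan time per student at a claimed hourly rate."""
    return 3600 / rate_per_hour

def minutes_to_screen(enrollment: int, rate_per_hour: float, lanes: int = 1) -> float:
    """Minutes needed to screen a student body through `lanes` lanes."""
    return enrollment / (rate_per_hour * lanes) * 60

print(seconds_per_student(2000))     # 1.8 seconds per student
print(minutes_to_screen(600, 2000))  # 18.0 minutes for a 600-student school
```

Any secondary check triggered by a laptop or an eyeglass case eats into that 1.8-second budget, which is presumably why the workaround documents exist.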

“We are upfront with our customers and prospects that if they want the potential for a sterile environment, they will need TSA-style screening,” said Loof, referring to the Transportation Security Administration.

In another document, titled “Empowering Student Well-Being,” the company attempts to spin potential faults in its technology — namely false alarms — as potentially beneficial experiences for the students.

“Some of the students who get stopped often for secondary checks, see the interaction as part of their daily routine,” says one school official quoted in Evolv’s materials for its clients. “It gives them a chance to have a positive conversation with an adult to start the day. This even happens for students who don’t set off an alert.”

Despite the need to propose workarounds to make the system function properly, George, the CEO, couldn’t help touting Evolv’s technology on the earnings webcast: “We’re really, really, really good at detecting guns.”


]]>
<![CDATA[U.S. Marshals Spied on Abortion Protesters Using Dataminr]]> https://theintercept.com/2023/05/15/abortion-surveillance-dataminr/ https://theintercept.com/2023/05/15/abortion-surveillance-dataminr/#respond Mon, 15 May 2023 10:00:39 +0000 https://theintercept.com/?p=427574 Twitter’s “official partner” monitored the precise time and location of post-Roe demonstrations, internal emails show.

The post U.S. Marshals Spied on Abortion Protesters Using Dataminr appeared first on The Intercept.

]]>
Dataminr, an “official partner” of Twitter, alerted a federal law enforcement agency to pro-abortion protests and rallies in the wake of the reversal of Roe v. Wade, according to documents obtained by The Intercept through a Freedom of Information Act request.

Internal emails show that the U.S. Marshals Service received regular alerts from Dataminr, a company that persistently monitors social media for corporate and government clients, about the precise time and location of both ongoing and planned abortion rights demonstrations. The emails show that Dataminr flagged the social media posts of protest organizers, participants, and bystanders, leveraging its privileged access to the so-called firehose of unrestricted Twitter data to monitor constitutionally protected speech.

“This is a technique that’s ripe for abuse, but it’s not subject to either legislative or judicial oversight,” said Jennifer Granick, an attorney with the American Civil Liberties Union’s Speech, Privacy, and Technology Project.

The data collection alone, however, can have a deleterious effect on free speech. Mary Pat Dwyer, the academic program director of the Institute for Technology Law and Policy at Georgetown University, told The Intercept, “The more it’s made public that law enforcement is gathering up this info broadly about U.S. residents and citizens, it has a chilling effect on whether people are willing to express themselves and attend protests and plan protests.”

The documents obtained by The Intercept are from April to July 2022, during a period of seismic news from the Supreme Court. Following the leak of a draft decision that the court would overturn Roe v. Wade, the cornerstone of reproductive rights in the U.S., pro-abortion advocates staged massive protests and rallies across the country. This was not the first time Dataminr helped law enforcement agencies monitor mass demonstrations in the wake of political outcry: In 2020, The Intercept reported that the company had surveilled Black Lives Matter protests for the Minneapolis Police Department following the murder of George Floyd.

The Marshals Service’s social media surveillance ingested Roe-related posts nearly as soon as they began to appear. In a typical alert, a Dataminr analyst wrote a caption summarizing the social media data in question, with a link to the original post. On May 3, 2022, the day after Politico’s explosive report on the draft decision, New York-based artist Alex Remnick tweeted about a protest planned later that day in Foley Square, a small park in downtown Manhattan surrounded by local and federal government buildings. Dataminr quickly forwarded their tweet to the Marshals. That evening, Dataminr continued to relay information about the Foley Square rally, now in full swing, with alerts like “protestors block nearby streets near Foley Square,” as well as photos of demonstrators, all gleaned from Twitter.

The following week, Dataminr alerted the Marshals when pro-abortion demonstrators assembled at the Basilica of St. Patrick’s Old Cathedral in Manhattan, coinciding with a regular anti-abortion event held by the church. Between 9:06 and 9:53 that morning, the Marshals received five separate updates on the St. Patrick’s protest, including an estimated number of attendees, again based on the posts of unwitting Twitter users.

In the weeks and months that followed, the emails show that Dataminr tipped off the Marshals to dozens of protests, including many pro-abortion gatherings, from Maine to Wisconsin to Virginia, both before and during the demonstrations. Untold other protests, rallies, and exercises of the First Amendment may have been monitored by the company; in response to The Intercept’s public records request, the Marshals Service identified nearly 5,000 pages of relevant documents but only shared about 800 pages. The U.S. Marshals Service did not respond to a request for comment.

The documents obtained by The Intercept are email digests of social media activity that triggered alerts based on requested search terms, which appear at the bottom of the reports. The subscribed topics have ambiguous names like “SCOTUS Mentions,” “Federal Courthouses and Personnel Hazards_V2,” “Public Safety Critical Events,” “Attorneys,” and “Officials.” The lists suggest that the Marshals were not specifically seeking information on abortion rallies; rather, the agency had cast such a broad surveillance net that large volumes of innocuous First Amendment-protected activity regularly got swept up as potential security threats. What the Marshals did with the information Dataminr collected remains unknown.
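The mechanism the documents describe — posts matched against subscribed topics, then bundled into digests — can be sketched roughly as follows. The topic names are taken from the report; the keyword lists and sample posts are invented stand-ins, not Dataminr’s actual rules:

```python
# Rough sketch of keyword-topic alerting as described in the documents.
# Topic names mirror the Senate report; the keyword lists and sample
# posts are hypothetical stand-ins, not Dataminr's real configuration.

SUBSCRIBED_TOPICS = {
    "SCOTUS Mentions": ["scotus", "supreme court"],
    "Public Safety Critical Events": ["protest", "rally"],
}

def match_topics(post: str, topics: dict[str, list[str]]) -> list[str]:
    """Return every subscribed topic whose keywords appear in the post."""
    text = post.lower()
    return [name for name, words in topics.items()
            if any(w in text for w in words)]

def build_digest(posts: list[str], topics: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Pair each post with each topic that flagged it, dropping non-matches."""
    digest = []
    for post in posts:
        for topic in match_topics(post, topics):
            digest.append((topic, post))
    return digest

posts = [
    "Rally at Foley Square at 6pm tonight",
    "Great pasta recipe, thread below",
]
print(build_digest(posts, SUBSCRIBED_TOPICS))
# Only the first post matches, under "Public Safety Critical Events"
```

The breadth criticism falls directly out of the sketch: a substring match like this has no concept of threat, only of topic, so lawful protest planning matches exactly as readily as anything else.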

“The breadth of these search categories and terms is definitely going to loop in political speech. It’s a certainty,” Granick told The Intercept. “It’s a reckless indifference to the fact that you’re going to end up spying on core constitutionally protected political activity.”


Pro-abortion and anti-abortion supporters confronted each other on Mott Street between the Basilica of St. Patrick’s Old Cathedral and Planned Parenthood in New York City on June 4, 2022.

Photo: Lev Radin/Sipa via AP

The oldest law enforcement agency in the U.S., the Marshals are a niche holdover of early American policing, immortalized in cowboy movies and tales of the Wild West. Today, the Marshals Service retains a unique mission among federal agencies, consisting largely of transporting prisoners, hunting fugitives, and ensuring the safety of federal courts and judicial staff.

While some of the Dataminr alerts aligned with this mission, such as informing the Marshals of protests near courthouses or judges’ homes, others monitored protests in locations without any ostensible relation to the judiciary. The Basilica of St. Patrick’s Old Cathedral is well over a mile from the nearest courthouse and surrounded by trendy cafes and boutiques. Brooklyn’s Barclays Center, a sports and performance venue where a protest organized on Facebook was flagged by Dataminr on May 3, 2022, is nearly a mile from the closest courthouse.

Related

U.S. Marshals Used Drones to Spy on Black Lives Matter Protests in Washington, D.C.

The Marshals’ broad use of social media surveillance is not the first instance of its apparent mission creep in recent years: In 2021, The Intercept reported that a drone operated by the Marshals had spied on Black Lives Matter protests in Washington, D.C.

As an attorney who frequents courthouses, including during protests, Granick rejected the notion that a political rally is a security threat by dint of its proximity to a judiciary building.

“I would say that a tiny, tiny, tiny fraction of protests at courthouses pose any kind of risk of either property damage or personal injury,” she said. “And there’s really no reason to gather information on who is going to that protest, or what their other political views are, or how they’re communicating with other people who also believe in that cause.”

Dataminr sent a regular volley of alerts about planned and ongoing protests at or near the homes of conservative Supreme Court Justices Clarence Thomas, Brett Kavanaugh, and Amy Coney Barrett. On June 24, 2022, Dataminr sent the Marshals an alert that read, “Protest planned for 18:30 at CVS on 5700 Burke Centre Parkway in Burke, VA to travel to residence of US Supreme Court Justice Thomas.” Follow-up alerts noted the protesters were “at entrance to subdivision of neighborhood where US Supreme Court Justice Thomas lives.” A third alert included that the Marshals were already at the protest; it’s unclear why the agency would need to monitor discussion of an event where its marshals were already present.

Only a small fraction of the alerts reviewed by The Intercept include content that could plausibly be construed as threatening, and even those seem to lack any specificity that would make them useful to a federal agency. On May 3, 2022, Dataminr flagged a tweet that read “WE’RE COMING FOR YOU PLANNED PARENTHOOD.” A week later, another tweet exhorted followers to “[b]urn down anti abortion orgs, kick in extremist churches and smash the homes of the oppressors.”


The following month, Dataminr reported two tweets to the Marshals that appeared to be more hyperbolic fantasies than credible threats. One user tweeted that they would pay to watch the Supreme Court justices who overturned Roe burn alive, while another cited an individual who tweeted, “I’m not not advocating for burning down buildings. But trauma and destruction is kind of the thing that I love.”

At other times, Dataminr seemed incapable of distinguishing between slang and violence. Among several tweets about the 2022 Met Gala inexplicably flagged by Dataminr, the Marshals Service was alerted to a fan account of the actor Timothée Chalamet that tweeted, “i would destroy the met gala” — an online colloquialism for something akin to stealing the show.

These alerts show that despite the claims in its marketing materials, Dataminr isn’t necessarily in the business of public safety so much as bulk, automated scrutiny. Given the generally incendiary, keyed-up nature of social media speech, a vast number of people might potentially be treated with suspicion by police in the total absence of a criminal act.

“There’s an assumption underlying this that someone who complains on Twitter is more dangerous than someone who doesn’t complain on Twitter,” Granick said. “Inevitably, you have people making decisions about what anger is legitimate and what anger is not.”


A U.S. Marshal patrols outside the home of Supreme Court Justice Brett Kavanaugh in Chevy Chase, Md., on June 8, 2022.

Photo: Jacquelyn Martin/AP

Aside from alerts about protests near judges’ homes or courthouses, many of the Dataminr notices appear to have no relevance to American law enforcement. Emails reviewed by The Intercept show that Dataminr alerted the Marshals to social media chatter about Saudi airstrikes in Yemen, attacks in Syria using improvised explosive devices, and political protests in Argentina.

Dataminr represents itself as a “real-time AI platform,” but company sources have previously told The Intercept that this is largely a marketing feint and that human analysts conduct the bulk of platform surveillance, scouring the web for posts they think their clients want to see.

Nonetheless, Dataminr is armed with one technological advantage: the Twitter firehose. For companies willing to pay for it, Twitter’s firehose program provides unfettered access to the entirety of the social network and the ability to automatically comb every tweet, topic, and photo in real time.

The Marshals Service emails also show the extent to which Dataminr is drinking from far more than the Twitter firehose. The emails indicate that the agency is notified when internet users merely mention certain political figures, namely judges and state attorneys general, on Telegram channels or in the comments of news articles.

Although most of the Dataminr alerts don’t include the text of the original posts, those that do often flag innocuous content across the political spectrum, including hundreds of mundane comments from blogs and news websites. In July, for instance, Dataminr reported to the Marshals web comments calling New York Attorney General Letitia James a “racist”; a user saying, “God Bless Gov. Youngkin,” referring to the Virginia governor; and another comment arguing that “Trump wants to hide out in the Oval Office from the responsibility and any accountability for what he did on January 6th and before.” When Ohio Attorney General Dave Yost made national headlines after suggesting that reports of a 10-year-old rape victim denied an abortion may have been fabricated, the Marshals received dozens of alerts about blog comments debating his words.

In some cases, Dataminr appeared incapable of differentiating between people with the same name. On May 18, the Marshals received an alert that “New Jersey District Court Magistrate Judge Jessica S. Allen” was mentioned in a Telegram channel used to organize an anti-Covid lockdown rally in Australia. The text in question appears to be automated, semicoherent spam: “I’ve been a victim of scam, was scared of getting scammed again, but somehow I managed to squeeze out some couple of dollars and I invested with Jessica Allen, damn to my surprise I got my profit within 2 hours.”

Even those sharing links to articles without any added commentary on Telegram fell under Dataminr scrutiny. When one Telegram user shared a July 4, 2022, story from The Hill about Kentucky Attorney General Daniel Cameron’s request that the Supreme Court put the state’s abortion ban back in place, it was flagged to the U.S. Marshals within an hour.

“Discussions of how people view political officials governing them, discussions of constitutional rights, planning protests — that’s supposed to be the most protected speech,” Georgetown’s Dwyer said. “And here you have it being swept up and provided to law enforcement.”

At the time the Marshals received the alerts obtained by The Intercept, Dataminr was listed as an “official partner” on Twitter’s website. Since Elon Musk acquired Twitter in October 2022, the company’s partnership with the social media site has continued. Despite his fury at people who track the location of his private jet, Musk does not appear to have similar misgivings about furnishing federal police with the precise real-time locations of peaceful protesters.

Twitter’s longtime policy forbids third parties from “conducting or providing surveillance or gathering intelligence” or “monitoring sensitive events (including but not limited to protests, rallies, or community organizing meetings).” When asked how Dataminr’s surveillance of protests using Twitter could be compatible with the policy banning the surveillance of protests, Dataminr spokesperson Georgia Walker said in a statement:

Dataminr supports all public sector clients with a product called First Alert which was specifically developed with input from Twitter, and fully complies with Twitter’s policies and the policies of all our data providers. First Alert delivers breaking news alerts enabling first responders to respond more quickly to public safety emergencies. First Alert is not permitted to be used for surveillance of any kind by First Alert users. First Alert provides a public good while ensuring maximum protections for privacy and civil liberties.

Both Twitter, which no longer has a communications team in the Musk era, and Dataminr have denied that the persistent real-time monitoring of the platform on behalf of police constitutes “surveillance” because the posts are public. Civil libertarians and scholars of state surveillance generally reject their argument, noting that other forms of surveillance routinely occur in public spaces — security cameras pointed at the sidewalk, for instance — and that Dataminr is surfacing posts that would likely be hard for police to find through a manual search.

“There is a world of difference between reading through some public tweets and having a service which indexes, stores, aggregates, and makes that information searchable.”

“There is a world of difference between reading through some public tweets and having a service which indexes, stores, aggregates, and makes that information searchable,” Granick said. As is typical with surveillance tools, police are inclined to use Dataminr not necessarily because it’s effective in thwarting or solving crimes, she said, but because it’s easy and relatively cheap. Receiving a constant flow of alerts from Dataminr creates the appearance of intelligence-gathering without any clear objective or actual intelligence.
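Granick's distinction between reading public posts and running a service that "indexes, stores, aggregates, and makes that information searchable" can be made concrete with a minimal inverted index, the basic data structure behind such search. This is a toy illustration of the concept, not any vendor's actual system.

```python
from collections import defaultdict

# Minimal inverted index over public posts: once built, every post
# mentioning a term can be retrieved instantly, which is what separates
# systematic monitoring from a person casually reading a timeline.

def build_index(posts):
    index = defaultdict(set)
    for i, text in enumerate(posts):
        for token in text.lower().split():
            index[token.strip(".,;!?")].add(i)
    return index

def search(index, posts, term):
    """Return every stored post containing the query term."""
    return [posts[i] for i in sorted(index.get(term.lower(), []))]

posts = [
    "Protest planned outside the courthouse Saturday",
    "Great pizza near the courthouse",
    "Rally moved to the park",
]
index = build_index(posts)
hits = search(index, posts, "courthouse")
# Two of the three stored posts mention the courthouse.
```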

In the absence of automated tools like Dataminr, police would have to make choices about how to use their finite time to sift through the vastness of social media platforms, which would likely result in more focus on actual criminality instead of harmless political chatter.

“What this technology does is it liberates law enforcement from having to make that economic calculation and enables them to do both,” Granick explained. “And then once the technology does that, in the absence of any kind of regulation, there’s insufficient disincentive to stop them from doing it.”

Following January 6, 2021, lawmakers questioned why police were blindsided by the storming of the U.S. Capitol even though it was openly planned online. There were calls to bolster the government’s ability to monitor social media, which were again sounded in the wake of the recent leak of classified intelligence documents on Discord. These calls, however, ignore the vast scale of social media surveillance already taking place — surveillance that failed to stop either apparent blow to state security.

While Dataminr and its many competitors stand to profit immensely from more government agencies buying these tools, they have little to say about how they’ll avoid generating even more noise in search of signal.

“Collecting more hay,” Granick said, “doesn’t help you find the needle.”

Correction: May 16, 2023
This story has been updated to use Alex Remnick’s correct pronoun.

The post U.S. Marshals Spied on Abortion Protesters Using Dataminr appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/15/abortion-surveillance-dataminr/feed/ 0
<![CDATA[Can the Pentagon Use ChatGPT? OpenAI Won’t Answer.]]> https://theintercept.com/2023/05/08/chatgpt-ai-pentagon-military/ https://theintercept.com/2023/05/08/chatgpt-ai-pentagon-military/#respond Mon, 08 May 2023 10:00:56 +0000 https://theintercept.com/?p=427162 The AI company is silent on ChatGPT’s use by a military intelligence agency despite an explicit ban in its ethics policy.

The post Can the Pentagon Use ChatGPT? OpenAI Won’t Answer. appeared first on The Intercept.

]]>
As automated text generators have rapidly, dazzlingly advanced from fantasy to novelty to genuine tool, they are starting to reach the inevitable next phase: weapon. The Pentagon and intelligence agencies are openly planning to use tools like ChatGPT to advance their mission — but the company behind the mega-popular chatbot is silent.

OpenAI, the nearly $30 billion R&D titan behind ChatGPT, provides a public list of ethical lines it will not cross, business it will not pursue no matter how lucrative, on the grounds that it could harm humanity. Among many forbidden use cases, OpenAI says it has preemptively ruled out military and other “high risk” government applications. Like its rivals, Google and Microsoft, OpenAI is eager to declare its lofty values but unwilling to earnestly discuss what these purported values mean in practice, or how — or even if — they’d be enforced.

“If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves.”

AI policy experts who spoke to The Intercept say the company’s silence reveals the inherent weakness of self-regulation, allowing firms like OpenAI to appear principled to an AI-nervous public as they develop a powerful technology, the magnitude of which is still unclear. “If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves,” said Sarah Myers West, managing director of the AI Now Institute and former AI adviser to the Federal Trade Commission.

The question of whether OpenAI will allow the militarization of its tech is not an academic one. On March 8, the Intelligence and National Security Alliance gathered in northern Virginia for its annual conference on emerging technologies. The confab brought together attendees from both the private sector and government — namely the Pentagon and neighboring spy agencies — eager to hear how the U.S. security apparatus might join corporations around the world in quickly adopting machine-learning techniques. During a Q&A session, the National Geospatial-Intelligence Agency’s associate director for capabilities, Phillip Chudoba, was asked how his office might leverage AI. He responded at length:

We’re all looking at ChatGPT and, and how that’s kind of maturing as a useful and scary technology. … Our expectation is that … we’re going to evolve into a place where we kind of have a collision of you know, GEOINT, AI, ML and analytic AI/ML and some of that ChatGPT sort of stuff that will really be able to predict things that a human analyst, you know, perhaps hasn’t thought of, perhaps due to experience, or exposure, and so forth.

Stripping away the jargon, Chudoba’s vision is clear: using the predictive text capabilities of ChatGPT (or something like it) to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, a relatively obscure outfit compared to its three-letter siblings, is the nation’s premier handler of geospatial intelligence, often referred to as GEOINT. This practice involves crunching a great multitude of geographic information — maps, satellite photos, weather data, and the like — to give the military and spy agencies an accurate picture of what’s happening on Earth. “Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA,” the agency boasts on its site. On April 14, the Washington Post reported the findings of NGA documents that detailed the surveillance capabilities of Chinese high-altitude balloons that had caused an international incident earlier this year.

Forbidden Uses

But Chudoba’s AI-augmented GEOINT ambitions are complicated by the fact that the creator of the technology in question has seemingly already banned exactly this application: Both “Military and warfare” and “high risk government decision-making” applications are explicitly forbidden, according to OpenAI’s “Usage policies” page. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes,” the policy reads. “Repeated or serious violations may result in further action, including suspending or terminating your account.”

By industry standards, it’s a remarkably strong, clear document, one that appears to swear off the bottomless pit of defense money available to less scrupulous contractors, and would appear to be a pretty cut-and-dry prohibition against exactly what Chudoba is imagining for the intelligence community. It’s difficult to imagine how an agency that keeps tabs on North Korean missile capabilities and served as a “silent partner” in the invasion of Iraq, according to the Department of Defense, is not the very definition of high-risk military decision-making.

While the NGA and fellow intel agencies seeking to join the AI craze may ultimately pursue contracts with other firms, for the time being few OpenAI competitors have the resources required to build something like GPT-4, the large language model that underpins ChatGPT. Chudoba’s namecheck of ChatGPT raises a vital question: Would the company take the money? As clear-cut as OpenAI’s prohibition against using ChatGPT for crunching foreign intelligence may seem, the company refuses to say so. OpenAI CEO Sam Altman referred The Intercept to company spokesperson Alex Beck, who would not comment on Chudoba’s remarks or answer any questions. When asked about how OpenAI would enforce its use policy in this case, Beck responded with a link to the policy itself and declined to comment further.

“I think their unwillingness to even engage on the question should be deeply concerning,” Myers West of the AI Now Institute told The Intercept. “I think it certainly runs counter to everything that they’ve told the public about the ways that they’re concerned about these risks, as though they are really acting in the public interest. If when you get into the details, if they’re not willing to be forthcoming about these kinds of potential harms, then it shows sort of the flimsiness of that stance.”

Public Relations

Even the tech sector’s clearest-stated ethics principles have routinely proven to be an exercise in public relations and little else: Twitter simultaneously forbids using its platform for surveillance while directly enabling it, and Google sells AI services to the Israeli Ministry of Defense while its official “AI principles” prohibit applications “that cause or are likely to cause overall harm” and “whose purpose contravenes widely accepted principles of international law and human rights.” Microsoft’s public ethics policies note a “commitment to mitigating climate change” while the company helps Exxon Mobil analyze oil field data, and similarly professes a “commitment to vulnerable groups” while selling surveillance tools to American police.

It’s an issue OpenAI won’t be able to dodge forever: The data-laden Pentagon is increasingly enamored with machine learning, so ChatGPT and its ilk are obviously desirable. The day before Chudoba was talking AI in Arlington, Kimberly Sablon, principal director for trusted AI and autonomy at the Undersecretary of Defense for Research and Engineering, told a conference in Hawaii, “There’s a lot of good there in terms of how we can utilize large language models like [ChatGPT] to disrupt critical functions across the department,” National Defense Magazine reported last month. In February, CIA Director of Artificial Intelligence Lakshmi Raman told the Potomac Officers Club, “Honestly, we’ve seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies.”

Steven Aftergood, a scholar of government secrecy and longtime intelligence community observer with the Federation of American Scientists, explained why Chudoba’s plan makes sense for the agency. “NGA is swamped with worldwide geospatial information on a daily basis that is more than an army of human analysts could deal with,” he told The Intercept. “To the extent that the initial data evaluation process can be automated or assigned to quasi-intelligent machines, humans could be freed up to deal with matters of particular urgency. But what is suggested here is that AI could do more than that and that it could identify issues that human analysts would miss.” Aftergood said he doubted an interest in ChatGPT had anything to do with its highly popular chatbot abilities, but in the underlying machine learning model’s potential to sift through massive datasets and draw inferences. “It will be interesting, and a little scary, to see how that works out,” he added.

The Pentagon seen from above in Washington, D.C, on May 25, 2016.

Photo: U.S. Army

Persuasive Nonsense

One reason it’s scary is because while tools like ChatGPT can near-instantly mimic the writing of a human, the underlying technology has earned a reputation for stumbling over basic facts and generating plausible-seeming but entirely bogus responses. This tendency to confidently and persuasively churn out nonsense — a chatbot phenomenon known as “hallucinating” — could pose a problem for hard-nosed intelligence analysts. It’s one thing for ChatGPT to fib about the best places to get lunch in Cincinnati, and another matter to fabricate meaningful patterns from satellite images over Iran. On top of that, text-generating tools like ChatGPT generally lack the ability to explain exactly how and why they produced their outputs; even the most clueless human analyst can attempt to explain how they reached their conclusion.

Related

U.S. Special Forces Want to Use Deepfakes for Psy-Ops

Lucy Suchman, a professor emerita of anthropology and militarized technology at Lancaster University, told The Intercept that feeding a ChatGPT-like system brand new information about the world represents a further obstacle. “Current [large language models] like those that power ChatGPT are effectively closed worlds of already digitized data; famously the data scraped for ChatGPT ends in 2021,” Suchman explained. “And we know that rapid retraining of models is an unsolved problem. So the question of how LLMs would incorporate continually updated real time data, particularly in the rapidly changing and always chaotic conditions of war fighting, seems like a big one. That’s not even to get into all of the problems of stereotyping, profiling, and ill-informed targeting that plague current data-driven military intelligence.”

OpenAI’s unwillingness to rule out the NGA as a future customer makes good business sense, at least. Government work, particularly of the national security flavor, is exceedingly lucrative for tech firms: In 2020, Amazon Web Services, Google, Microsoft, IBM, and Oracle landed a CIA contract reportedly worth tens of billions of dollars over its lifetime. Microsoft, which has invested a reported $13 billion into OpenAI and is quickly integrating the smaller company’s machine-learning capabilities into its own products, has earned tens of billions in defense and intelligence work on its own. Microsoft declined to comment.

But OpenAI knows this work is highly controversial, potentially both with its staff and the broader public. OpenAI is currently enjoying a global reputation for its dazzling machine-learning tools and toys, a gleaming public image that could be quickly soiled by partnering with the Pentagon. “OpenAI’s righteous presentations of itself are consistent with recent waves of ethics-washing in relation to AI,” Suchman noted. “Ethics guidelines set up what my UK friends call ‘hostages to fortune,’ or things you say that may come back to bite you.” Suchman added, “Their inability even to deal with press queries like yours suggests that they’re ill-prepared to be accountable for their own policy.”

The post Can the Pentagon Use ChatGPT? OpenAI Won’t Answer. appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/08/chatgpt-ai-pentagon-military/feed/ 0
<![CDATA[AI Tries (and Fails) to Detect Weapons in Schools]]> https://theintercept.com/2023/05/07/ai-gun-weapons-detection-schools-evolv/ https://theintercept.com/2023/05/07/ai-gun-weapons-detection-schools-evolv/#respond Sun, 07 May 2023 09:00:53 +0000 https://theintercept.com/?p=427148 Companies like Evolv sell multimillion-dollar AI-powered gun detection systems to schools nationwide, but weapons still slip through.

The post AI Tries (and Fails) to Detect Weapons in Schools appeared first on The Intercept.

]]>
On Halloween day last year, a 17-year-old student walked straight through an artificial intelligence weapons detection system at Proctor High School in Utica, New York. No alert went off.

The 17-year-old then approached a fellow student, pulled a hunting-style knife out of his backpack, and repeatedly stabbed the other student in the hands and back.

The Utica City School District had installed the $4 million weapons detection system across 13 of its schools earlier that summer, mostly with public funds. The scanners, from Massachusetts-based Evolv Technology, look like metal detectors but scan for the “signatures” of “all the guns, all the bombs, and all the large tactical knives” in the world, Evolv’s CEO Peter George has repeatedly claimed.

In Utica, the 17-year-old’s weapon wasn’t the first knife, or gun, to bypass the system. Earlier that month, at a parents’ night, a law enforcement officer had walked through the system twice with his service revolver and was puzzled to find it was never detected. School authorities reached out to Evolv and were subsequently told to increase the sensitivity settings to the highest level.

The detector did finally go off: It identified a 7-year-old student’s lunch box as a bomb. On Halloween, however, it remained silent.

“They’ve tried to backtrack by saying, ‘Oh no, it doesn’t pick up all knives,’” said Brian Nolan, who had been appointed acting superintendent of the Utica City School District 10 days before the stabbing. “They don’t tell you — will it pick up a machete or a Swiss army knife? We’ve got like really nothing back from Evolv.”

Ultimately, the Utica City School District removed the scanners from its high schools and replaced them, costing the district another $250,000. In the elementary and middle schools, which retained Evolv scanners, three knives have been recovered from students — but not because the scanners picked them up, according to Nolan.

Stories about Evolv systems missing weapons have popped up nationwide. Last month, a knife fight erupted between students at Mifflin High School in Ohio. It’s not clear how the knives entered the building, but it was less than three months after the school district spent $3 million installing Evolv scanners.

As school shootings proliferate across the country — there were 46 school shootings in 2022, more than in any year since at least 1999 — educators are increasingly turning to dodgy vendors who market misleading and ineffective technology. Utica City is one of dozens of school districts nationwide that have spent millions on gun detection technology with little to no track record of preventing or stopping violence.

Evolv’s scanners keep popping up in schools across the country. In a video produced by the Charlotte-Mecklenburg district in North Carolina about its new $16.5 million system, students spoke about how the technology reassured them. “I know that I’m not going to be threatened with any firearms, any knives, any sort of metallic weapon at all,” one said.

“Private companies are preying on school districts’ worst fears and proposing the use of technology that’s not going to work and may cause many more problems than it seeks to solve.”

Over 65 school districts have bought or tested artificial intelligence gun detection from a variety of companies since 2018, spending a total of over $45 million, much of it coming from public coffers, according to an investigation by The Intercept.

“Private companies are preying on school districts’ worst fears and proposing the use of technology that’s not going to work,” said Stefanie Coyle, deputy director of the Education Policy Center at the New York Civil Liberties Union, or NYCLU, “and may cause many more problems than it seeks to solve.”

In December, it came out that Evolv, a publicly traded company since 2021, had doctored the results of its software testing. In 2022, the National Center for Spectator Sports Safety and Security, a government body, completed a confidential report showing that in previous field tests the scanners had failed to detect knives and a handgun. When Evolv released a public version of the report, according to IPVM, a surveillance industry research publication, and underlying documents reviewed by The Intercept, the failures had been excised from the results. Though Evolv touted the report as “fully independent,” there was no disclosure that the company itself had paid for the research. (Evolv has said the public version of the report had information removed for security reasons.)

Five law firms recently announced investigations of Evolv Technology — a partner of Motorola Solutions whose investors include Bill Gates — looking into possible violations of securities law, including claims that Evolv misrepresented its technology and its capabilities to investors.

“When you start peeling back the onion on what the technology actually does and doesn’t do, it’s much different than the reality these companies present,” said Donald Maye at IPVM. “And that is absolutely the case with Evolv.”

Evolv told The Intercept it would not comment on any specific situations involving their customers and declined to comment further. (Motorola Solutions did not respond to a request for comment.)

The overpromising of artificial intelligence products is an industrywide problem. The Federal Trade Commission recently released a blog post warning companies, “Keep your AI claims in check.” Among the questions was, “Are you exaggerating what your AI product can do?”

An employee of Evolv Technology demonstrates the Evolv Express weapons detection system, which is showing red lights to flag a weapon, on May 25, 2022, in New York.

Photo: Mary Altaffer/AP


Artificial intelligence gun detection vendors advertise themselves as the solution to the mass school shootings that plague the U.S. While various companies employ differing methods, the Evolv machines use cameras and sensors to capture people as they walk by, after which AI software compares them with object signatures that the system has created. When a weapon is present, the system is supposed to pick up the weapon’s signature and sound an alarm.
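The signature-matching approach described above, combined with the sensitivity setting Utica was told to raise, implies a classic threshold trade-off: raise the threshold and weapons slip through, lower it and lunch boxes alarm. The toy sketch below illustrates that trade-off; the scores and thresholds are invented for illustration and have nothing to do with Evolv's actual algorithm.

```python
# Toy sketch of threshold-based signature detection. All numbers are
# hypothetical: the point is that one dial trades missed weapons
# against false alarms, matching the failures reported in Utica.

def classify(signature_score, threshold):
    """Alarm when an object's similarity to a weapon signature meets the threshold."""
    return signature_score >= threshold

objects = {"hunting knife": 0.55, "lunch box": 0.40, "phone": 0.10}

# Low sensitivity (high threshold): the knife passes undetected.
low_sensitivity = {name: classify(s, 0.70) for name, s in objects.items()}

# High sensitivity (low threshold): the lunch box now alarms too.
high_sensitivity = {name: classify(s, 0.35) for name, s in objects.items()}
```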

At an investor conference in June 2022, Evolv CEO George was asked if the company would have stopped the tragic school shooting in Uvalde, Texas, where 19 students and two teachers were killed. “The answer is when somebody goes through our system and they have a concealed weapon or an open carry weapon, we’re gonna find it, period,” he responded. “We won’t miss it.”

In January, the scanners caught a student trying to enter a high school with a handgun in Guilford, North Carolina. Subsequently, an Evolv spokesperson told WFMY News that their systems had uncovered 100,000 weapons in 2022. In a presentation for investors in the fourth quarter of 2022, George said the detection scanners, on average, stopped 400 guns per day.

There is little peer-reviewed research, however, showing that AI gun detection is effective at preventing shootings. And in the case of Uvalde, the shooter began firing his gun before even entering the school building — and therefore before having passed through a detector.

“The odds of that happening — someone walks in with a displayed gun — are really, really small. It just doesn’t make sense that that’s what you’re investing in.”

“The odds of that happening — someone walks in with a displayed gun — are really, really small,” said Andrew Guthrie Ferguson, a professor of law at American University’s law school and an expert on surveillance. “It just doesn’t make sense that that’s what you’re investing in.”

Even in airports with maximum security protocols, Evolv’s technology has proved to have gaping holes. When an official at Denver International Airport expressed interest in Evolv scanners, he asked a colleague at Oakland International Airport, which uses the machines.

“It is not an explosives detection machine per se,” wrote Douglas Mansel, the aviation security manager in Oakland, in an internal email obtained through a public records request and shared with The Intercept, “So if an employee (or law enforcement during a test) walks through with a brick of C4” — an explosive — “in their hands, the Evolv will not alarm.” (The Oakland Airport told The Intercept it does not comment on its security program.)

In a BBC interview in 2020, Evolv said the density of metal is one key indicator of a weapon’s presence. But the company firmly denies that their scanners are akin to metal detectors. “We’re a weapons detector, not a metal detector,” George said on a conference call in June 2021. (A large competitor of Evolv is CEIA, which manufactures metal detectors without AI, used in airports and schools.)

Yet in many cases, Evolv hasn’t picked up weapons. And researchers have also highlighted how metallic objects, such as laptops, repeatedly set the system off. “They go through great lengths to claim they are not a metal detector,” said Maye of IPVM. “To the extent to which AI is being used, it’s open to interpretation to the consumer.”

Despite claims by George that the system can scan up to 1,000 students in 15 minutes, in the Hemet Unified School District in California, false alarms slowed ingress to school buildings. The solution, according to Evolv, was to simply encourage educators to let students proceed.

Related

Detroit Cops Want $7 Million in Covid Relief Money for Surveillance Microphones

“They only need to clear the threat(s) and not figure out what alarmed the system,” wrote Amy Ferguson, customer manager at Evolv, in an internal email to the school system obtained through a public records request and shared with The Intercept. “I recommended not doing a loop back unless necessary. … Many students were looping back 2 or 3 times.” (The Hemet Unified School District did not respond to a request for comment.)

Across the country in Dorchester County Public Schools in Maryland, the system had 250 false alarms for every real hit in the period from September 2021 to June 2022, according to internal records obtained by IPVM. The school district spent $1.4 million on the Evolv software, which it bought from Motorola.
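The Dorchester County ratio translates directly into a precision figure: at 250 false alarms per real hit, fewer than half a percent of alarms correspond to an actual weapon. The arithmetic is a sketch based only on the ratio reported above.

```python
# Precision implied by the reported Dorchester County ratio:
# 250 false alarms for every real hit.

false_alarms_per_hit = 250
true_hits = 1

precision = true_hits / (true_hits + false_alarms_per_hit)
# precision is roughly 0.004, i.e. about 0.4% of alarms were real weapons
```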

“It plays an important role in our efforts to keep our School District safe,” the district told The Intercept. “And we plan to expand its use within the District.”

Evolv isn’t the only company making bold claims about its sophisticated weapons detection system. ZeroEyes, a Philadelphia-based AI company, states in contracts that “our proactive solution saves lives.” Founded by Navy SEALs in 2018, the firm uses video analytics and object detection to pick up guns.

ZeroEyes’s website lists the timeline for the Sandy Hook shooting, arguing its technology could have materially reduced the response time. When a gun is visible on camera, an alert gets sent to a “24/7/365 ZeroEyes Operations Center Team,” with people monitoring the feed, who in turn confirm the gun and alert the school and police. It claims to do all of this in three to five seconds.

The human team is key to the group’s system, something critics say belies the weakness of the underlying AI claims. “This is one of the fundamental challenges these companies have. Like if they could fully automate it reliably, they wouldn’t need to have a human-in-the-loop,” said Maye. “The human-in-the-loop is because AI isn’t good enough to do it itself.”
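The human-in-the-loop pipeline ZeroEyes describes, where the model only proposes detections and an operations-center analyst must confirm before police are notified, can be sketched as below. Function names, the confidence threshold, and the return values are all hypothetical, chosen only to show where the human sits in the flow.

```python
# Sketch of a human-in-the-loop alert pipeline of the kind ZeroEyes
# describes. The model surfaces candidate detections; a human analyst
# is the final gate before dispatch. All names and values hypothetical.

def model_detect(frame):
    # Stand-in for the vision model: returns a confidence
    # that a gun is visible in the camera frame.
    return frame.get("gun_confidence", 0.0)

def pipeline(frame, human_confirms, threshold=0.8):
    if model_detect(frame) < threshold:
        return "no alert"
    # The AI alone is not trusted as the final arbiter:
    # the candidate is routed to a human operations center.
    return "dispatch alert" if human_confirms else "dismissed by analyst"

result = pipeline({"gun_confidence": 0.95}, human_confirms=True)
```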

“We have never suggested that AI alone is enough,” Olga Shmuklyer, spokesperson for ZeroEyes, told The Intercept. “We would never trust AI alone to determine whether a gun threat is real or fake, nor should anybody else.”

In addition to Philadelphia, the company also has an operations center in Honolulu, Hawaii, “to cater to different time zones.”

ZeroEyes seems determined to overcome its critics and is so far faring well. The company raised $20 million in 2021. According to co-founder Rob Huberty, in a LinkedIn post, the team’s mantra is “F*** you, watch me.”

“We are problem solvers, and this is a difficult problem,” said Shmuklyer, the spokesperson. “Without the mentality proposed in that post, we wouldn’t have a solution to offer to school districts around the country.”

During the pandemic, school shootings rose in tandem with a spike in gun violence in general. The sort of panic that ensued can lead to impulsive and ineffective action, according to safety experts.

“We are seeing some school boards and administrators making knee-jerk reactions by purchasing AI weapons detection systems,” said Kenneth Trump, president of National School Safety and Security Services. “Unfortunately, the purchase of the systems appears to be done with little-to-no professional assessment of overall security threats and needs.”

Schools in Colorado and Texas bought weapons detection software from a now-convicted fraudster. Barry Oberholzer developed SWORD in 2018 under the startup X.Labs, registered as Royal Holdings Technologies, which he claimed was the first mobile phone case to provide gun detection software.

“I can identify you and identify if you are carrying a gun in 1.5 seconds,” Oberholzer told WSFA 12 News in Alabama in February 2019. “You don’t even have to click. You just need to point the device at the person.”

Later that year, it was reported that Oberholzer was on the run from over two dozen fraud and forgery charges in South Africa. (Todd Dunphy, a board member of and investor in X.Labs, denied the charges on Oberholzer’s behalf and produced an unverified letter from South African authorities clearing him.)

His SWORD product was endorsed by former high-level U.S. officials.

Former FBI agent James Gagliano, who was listed as an adviser to X.Labs, praised the product as “next generation public safety threat-detection.” Charles Marino, a retired Secret Service special agent, was listed as the company’s national security adviser.

Marino said he invested in the company but has not been involved for years and did not work on the SWORD project. “He swindled everybody,” Marino told The Intercept, referring to the conspiracy conviction. “Look, you kiss a lot of frogs in this world.”

Gagliano said in an email that he severed ties with Oberholzer after hearing of the fraud charges. “I was as stunned as anyone,” he said. “Have had no contact with him since I learned of his indictment in the Summer of 2021. I was excited about the technology he was seeking to introduce to law enforcement.”

In June 2020, X.Labs announced the rebranding of SWORD to X1, a standing device and “full-featured weapons detection system” in partnership with another firm.

Last month, Oberholzer and his business partner Jaromy Pittario pleaded guilty in federal court to conspiracy to defraud investors and creditors. The Department of Justice accused Oberholzer of posing as Gen. David Petraeus, the former CIA director, while pitching the product to venture capital firms.

“Instead of attracting investors honestly, Oberholzer lied continuously to make his company more appealing to investors,” U.S. Attorney for the Southern District of New York Damian Williams said in a statement.

None of it deterred the company. Its scanners, despite problems, remain in schools — and X.Labs continues to cultivate new business. “All of the devices that are purchased by clients are in their possession and can be used as they see fit,” Dunphy said. “The company, like last year, is run by the board and is working with parties to complete the last phase of development for the purpose of slowing down mass shootings globally.”

Oberholzer is no longer involved with X.Labs, said Dunphy, the board member, who responded to emails addressed to Oberholzer.

“Mr Oberholzer is a professional helicopter pilot and his comings and goings has nothing to do with X.labs,” Dunphy said, “as he resigned from the company in February 2021.”

There is a reason districts in New York, such as Utica, have been a target of gun detection vendors. Most of this technology is being funded by taxpayer money and, in the Empire State, there is a lot to spend.

Under Boards of Cooperative Educational Services aid, school purchases are reimbursed based on a district’s poverty level. Utica City School District, which has a high poverty level, was reimbursed 93 cents on the dollar on the Evolv sale, according to acting superintendent Nolan.

The Boards of Cooperative Educational Services told The Intercept, “As a coalition of the state’s 37 Boards of Cooperative Educational Services, BOCES of NYS has neither authority nor oversight regarding the budgets, purchases, or reimbursement rates of any school district.” The regional Oneida-Herkimer-Madison Counties BOCES office — which covers the Utica school district — did not respond for comment.

While the district gets most of its money back after the disastrous purchase of the Evolv scanners, “New York state taxpayers are still on the hook for the system,” Nolan said.

The Smart Schools Bond Act, passed in 2014, also set aside $2 billion in funding to “finance improved educational technology and infrastructure,” drawing the attention of vendors nationwide.


“Folks in the school security industry got wind that New York State was sitting on this big pot of money that school districts had access to,” said Coyle of the NYCLU. “And that kind of opened the floodgates for companies to try to convince school districts to use that state funding to buy products they don’t need, they don’t know how to use, and are potentially harmful.”

New York isn’t the only state ready to spend a fortune. A 2019 Texas bill allocated $100 million in grants for schools seeking to purchase new equipment.

Federal Covid-19 relief dollars can also be directed to things like school security systems through the Elementary and Secondary School Emergency Relief Fund. Companies including ZeroEyes and similar firms advertise how schools can receive a grant for the “development and implementation of procedures and systems to improve the preparedness and response efforts of a school district.”

“We are targeting sales to all states,” Shmuklyer, of ZeroEyes, said. “A lack of funds should not be the reason why a school cannot be proactive in addressing the mass shooting problem.”

Experts argue schools are just a cheap training ground for technology vendors to test and improve their object detection software so that they can eventually sell it elsewhere.

“Part of the reason why these companies are offering schools the technologies at a relatively cheap price point is that they’re using the schools as their grounds for training,” said Ferguson, the American University professor. “And so those schools or students become data points in a large data set that’s actually improving the technology so they can sell it to other people in other places.”

“They keep saying how the artificial intelligence system they use gets refined after more usage, because they collect more data, more information. But what’s it going to take, 20 years?”

Acting superintendent Nolan himself was told by Evolv the system would get smarter over time with more use. “They keep saying how the artificial intelligence system they use gets refined after more usage, because they collect more data, more information,” he said. “But what’s it going to take, 20 years?”

The lack of regulation leads to a lack of transparency on the use of the data itself. “There’s no protections in place,” said Daniel Schwarz, privacy and technology strategist at NYCLU, “And it raises all these issues around what happens with the data. … Oftentimes, what we’ve caught out is that they actually worsen racial disparities and biases.”


ShotSpotter (renamed SoundThinking) equipment overlooks the intersection of South Stony Island Avenue and East 63rd Street in Chicago on Aug. 10, 2021.

Photo: Charles Rex Arbogast/AP


ShotSpotter — now renamed SoundThinking — a system of microphones that purportedly uses “sensors, algorithms and artificial intelligence” to detect the sound of gunfire, has received intense criticism for being overwhelmingly deployed in communities of color. The system’s frequent false alarms have led to more aggressive policing, as well as the distortion of gunfire statistics.

An analysis by the MacArthur Justice Center found that 89 percent of ShotSpotter alerts in Chicago from 2019-2021 turned up no gun-related crime. “Every unfounded ShotSpotter deployment creates an extremely dangerous situation for residents in the area,” according to the report.

There has been extensive reporting on police departments and other agencies’ use of ShotSpotter nationwide — but not schools. Public records show Brockton Public Schools, in Massachusetts, for instance, bought access to the technology for three years in a row. The school system said in a statement that the public document showing its purchase of ShotSpotter was in error and referred instead to a purchase by the police department; the school spokesperson said Brockton schools received a separate donation of ShotSpotter, but never activated it. (The school system did not say who donated the system, and the police department did not respond to a request for comment.)

“Contrary to claims that the ShotSpotter product leads to over-policing, ShotSpotter alerts allow police to investigate a gunfire incident in a more precise area,” Sara Lattman, a SoundThinking spokesperson, said in a statement to The Intercept. “Additionally, ShotSpotter has maintained a low false positive rate, just 0.5%, across all customers in the last three years.”

For many advocates against gun violence, particularly in schools, gun control measures like an assault weapons ban would go a long way in curtailing the deadly effects of attacks. With Congress failing to enact such policies, experts argue that schools should refrain from turning to shoddy technology to support their students.

“We advise schools to focus on human factors: people, policies, procedures, planning, training, and communications,” said Trump, the National School Safety and Security Services head. “Avoid security theater.”

Vendors, though, continue to emphasize the risk of gun violence and rely on the steady drumbeat of attacks to generate fear in potential clients — and to make sales.

“While recent high visibility attacks at publicly and privately-owned venues and schools have increased market awareness of mass shootings,” said Evolv’s recent annual disclosure report, “if such attacks were to decline or enterprises or governments perceived the general level of attacks has declined, our ability to attract new customers and expand our sales to existing customers could be materially and adversely affected.”

The company even helps schools market the technology to their own communities. In an email from Evolv to the Charlotte-Mecklenburg school district, a bulleted list of talking points makes suggestions for how the school system might respond to public queries about the scanners. One of the talking points said, “Security approaches included multiple layers,” adding that “this approach recognizes the reality that no single layer or single technology is 100% effective.”

When reached for comment by The Intercept, Eddie Perez, a spokesperson for the Charlotte-Mecklenburg school district, quoted the talking point verbatim in an emailed response.

That hedged view is out of step with how people in the district itself speak about the system: as an absolute assurance of gun-free safety. Students in the video produced by the school district said, “You get a certain reassurance that there are no dangerous weapons on campus.”

Correction: May 11, 2023
This story has been updated to use the correct spelling of ZeroEyes spokesperson Olga Shmuklyer’s name. It has also been updated to reflect a clarification received after publication from Brockton Public Schools in Massachusetts that the ShotSpotter system donated to the schools was not received from the police.

The post AI Tries (and Fails) to Detect Weapons in Schools appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/07/ai-gun-weapons-detection-schools-evolv/feed/ 0
<![CDATA[The Pentagon Uses Video Games to Teach “Security Excellence.” You Can Play Them Too.]]> https://theintercept.com/2023/05/02/defense-department-pentagon-video-games/ https://theintercept.com/2023/05/02/defense-department-pentagon-video-games/#respond Tue, 02 May 2023 18:59:31 +0000 https://theintercept.com/?p=426935 The computer games provide clues that could help potential whistleblowers leak intel without getting caught.

The post The Pentagon Uses Video Games to Teach “Security Excellence.” You Can Play Them Too. appeared first on The Intercept.

]]>
In the 1983 movie “WarGames,” a young hacker played by Matthew Broderick inadvertently accesses a fictional supercomputer belonging to the U.S. military. Before realizing he has found a system the North American Aerospace Defense Command uses for war simulations, he searches for computer games. The list he gets back starts with classic games like checkers and bridge, but to his surprise, it also includes games called “Guerrilla Engagement” and “Theaterwide Biotoxic and Chemical Warfare.”

Turns out the Department of Defense likes to play computer games in real life too.

More than 40 “security awareness” games are available for anyone to play on the website of the Center for Development of Security Excellence, or CDSE, a directorate within the Defense Counterintelligence and Security Agency, the largest security agency within the U.S. government. The DCSA, which refers to itself as “America’s Gatekeeper,” specializes in security of government personnel and infrastructure as well as counterintelligence and insider threat detection. (The Defense Department did not immediately respond to a request for comment.)

The games range from crossword puzzles and word searches about how to identify an insider threat, to games with more peculiar titles like “Targeted Violence” and “The Adventures of Earl Lee Indicator.” The trove of games looks like an artifact from the late ’90s: game titles announced in WordArt, award badges that look designed with Microsoft Word, “Matrix”-esque backgrounds of falling numbers, and stock photos (some still watermarked).

Some of the games themselves are presented in formats prone to security vulnerabilities. For example, some look like they were made using freely available PowerPoint magic eight ball templates, despite the file format’s potential for containing malware. Playing the magic eight ball games also requires downloading and opening files, exposing players to potentially malicious attachments. Heightening this risk, it appears not all the games have a carefully guarded provenance: The metadata in a magic eight ball game called “Unauthorized Disclosure,” for instance, indicates that the file was originally stored in a personal Dropbox folder.

The games appear to be used for internal training on topics such as cybersecurity and industrial security as well as insider threats and Special Access Programs, security protocols for handling highly classified information. But they can also reveal what actions Defense Department investigators are taught to flag as an insider threat, like plugging in unauthorized USB devices or downloading eyebrow-raising amounts of files all at once. These clues could potentially help whistleblowers avoid detection when leaking government intelligence.

The Intercept played a selection of the Pentagon’s security games. Here’s what the gameplay was like.

Adjudicative Guidelines Word Search

This word search, based on open-source code, is ostensibly designed to teach the player about the government’s adjudicative guidelines for determining a person’s eligibility for security clearance. The teaching method is to search a 625-letter grid for words like “sexual” and “criminal.” For example, once you spot “sexual,” a pop-up informs you that “[s]exual behavior that involves a criminal offense … raises questions about an individual’s judgment, reliability, trustworthiness, and ability to protect classified or sensitive information.” Seemingly the Defense Department believes that anyone convicted of a sex crime can’t be trusted to protect sensitive information.

Who Is the Risk?

This game is a cross between “The Dating Game” and “To Catch a Predator,” if the participants were suspected of being insider threats. In an upbeat voiceover, the game show host — or interrogator, who is represented by a $12 stock photo — says, “Welcome to America’s favorite game show: ‘Who Is the Risk?’ Your task in this exercise is to determine which of our guests is most likely to pose an indicator risk to your organization.”


The Department of Defense’s “Who Is the Risk?” game.

Screenshot: The Intercept

Each of the three contestants answers six different questions, such as “Have you made any large purchases recently?” and “Do you use social media?” If their answer sounds like a Potential Risk Indicator — a “risky behavior” that, according to the CDSE, may indicate an inclination for becoming an insider threat — you click a checkbox under that person.

One of the contestants admits to just purchasing a Ferrari, while another brags about having high-level government contacts in the European Union. A third admits that they took classified documents home. Add up who has the most checkboxes and you’ve got your perp. Once you’ve identified the correct suspect, the host tells you to “bring these concerns to the appropriate reporting authority,” as one does.

Whodunit Mystery Game


The opening screen for the “Whodunit” game.

Screenshot: The Intercept

The most elaborate game on the site, “Whodunit,” is similar to Clue, except that instead of a murder suspect, you’re trying to identify and locate a leaker, and instead of a murder weapon, you’re trying to find the method they used to leak the data.

The suspect cards include intricate profiles and a number of potential red flags. David Plum, for instance, has “shared that he’s going through a divorce” and is “declining performance evaluations.” Betty Brown has “never taken a polygraph,” and Marge Merlot “frequently travels to several foreign countries.” After pegging the suspect, you can then select a probable location where the data breach occurred, such as in the cubicle farm (which, we’re informed, lacks security cameras) or the sensitive compartmented information facility, a secure facility for handling sensitive information. Finally, you can pick the method the nefarious leaker deployed, such as spillage or a good old-fashioned phishing attack. After you’ve cracked one case, there are six more to try.

Special Access Program Hidden Object Game


The Special Access Program hidden object game.

Screenshot: The Intercept

This is a standard hidden object game: You have two minutes to locate 10 physical security-related objects. These objects range from General Services Administration containers used for storing classified information, to Z-duct ventilation constructions designed to prevent sound from escaping the secure facility, to astragal strips that can seal gaps in a closed door. If you successfully find all the objects, you’re awarded the rank of “security guru” and unlock a bonus hidden object game, where you now have one minute to find five unauthorized objects, including a personal phone and a wireless keyboard.


A Department of Defense poster advising against submitting confidential news tips.

Screenshot: The Intercept


If you want to decorate your gaming room to match the Pentagon games as you’re playing, the CDSE also provides over 100 posters about security topics. Many of them are reminiscent of vintage 1960s National Security Agency posters, but others have been updated to warn about modern threats. For instance, one poster depicts a fictitious media outlet called the Daily News; its tips pop-up is verbatim from the New York Times. The poster cautions against following links to submit tips to news media, advising that “unauthorized disclosure of classified information to the news media or other outlets is not whistleblowing” and, in big red letters, “It’s a crime.”

The post The Pentagon Uses Video Games to Teach “Security Excellence.” You Can Play Them Too. appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/02/defense-department-pentagon-video-games/feed/ 0
<![CDATA[Digital Security Tips to Prevent the Cops From Ruining Your Trip Abroad]]> https://theintercept.com/2023/04/29/phone-laptop-security-international-travel/ https://theintercept.com/2023/04/29/phone-laptop-security-international-travel/#respond Sat, 29 Apr 2023 17:30:03 +0000 https://theintercept.com/?p=426793 Traveling with a phone and laptop? Here are digital security tips to keep your devices and your data safe from the cops.

The post Digital Security Tips to Prevent the Cops From Ruining Your Trip Abroad appeared first on The Intercept.

]]>
Ernest Moret, a foreign rights manager for the French publishing house La Fabrique, boarded a train in Paris bound for London in early April. He was on his way to attend the London Book Fair.

When Moret arrived at St. Pancras station in the United Kingdom, two plainclothes cops who apparently said they were “counter-terrorist police” proceeded to terrorize Moret. They interrogated him for six hours, asking about everything from his views on pension reform to the names of “anti-government” authors his company had published, according to the publisher, before arresting him for refusing to give up the passwords to his phone and laptop. Following his arrest, Moret was released on bail, though his devices were not returned to him.


The case, while certainly showcasing the United Kingdom’s terrifying anti-terror legislation, also highlights the crucial importance of taking operational security seriously when traveling — even when going on seemingly innocuous trips like a two-and-a-half-hour train ride between London and Paris. One never knows what will trigger the authorities to put a damper on your international excursion.

Every trip is unique and, ideally, each would get a custom-tailored threat model: itemizing the risks you foresee, and knowing the steps you can take to avoid them. There are nonetheless some baseline digital security precautions to consider before embarking on any trip.

Travel Devices, Apps, and Accounts

The first digital security rule of traveling is to leave your usual personal devices at home. Go on your trip with “burner” travel devices instead.

Aside from the potential for compromise or seizure by authorities, you also face risks like having your devices lost or stolen during your trip. It’s typically far less dangerous to leave your usual devices behind and bring along devices you use only when traveling. This doesn’t need to be cost prohibitive: You can buy cheap laptops and either inexpensive new phones or refurbished versions of pricier models. (Also get privacy screens for your new phones and laptops, to reduce the information that’s visible to any onlookers.)


Illustration: Pierre Buttin for The Intercept

Your travel devices should not have anything sensitive on them. If you’re ever coerced to provide passwords or at risk of otherwise having the devices be taken away from you, you can readily hand over the credentials without compromising anything important.

If you do need access to sensitive information while traveling, store it in a cloud account, using a cloud encryption tool like Cryptomator to encrypt the data first. Then log out of your cloud account, clear it from your browsing history, and uninstall Cryptomator or other encryption apps; reinstall them and log back in to your accounts only after you’ve reached your destination and are away from your port of entry. (Don’t log in to your accounts while still at the airport or train station.)

Just as you shouldn’t bring your usual devices, you also shouldn’t bring your usual accounts. Make sure you’re logged out of any personal or work accounts which contain sensitive information. If you need to access particular services, use travel accounts you’ve created for your trip. Make sure the passwords to your travel accounts are different from the passwords to your regular accounts, and check if your password manager has a travel mode which lets you access only particular account credentials while traveling.

Before your trip, do your research to make sure the apps you’re planning to use — like your virtual private network and secure chat app of choice — are not banned or blocked in the region you’re visiting.


Maintain a line of sight with your devices at all times while traveling. If, for instance, a customs agent or border officer takes your phone or laptop to another room, the safe bet is to consider that device compromised if it’s brought back later, and to immediately procure new devices in-region, if possible.

If you’re entering a space where it won’t be possible to maintain line of sight — like an embassy or other government building where you’re told to store devices in a locker prior to entry — put the devices into a tamper-evident bag, which you can buy in bulk online before your trip. While this, of course, won’t prevent the devices from being messed with, it will nonetheless give you a ready indication that something may be amiss. Likewise, use tamper-evident bags if ever leaving your devices unattended, like in your hotel room.

Phone Numbers

Sensitive information you may have on your devices doesn’t just mean documents, photos, or other files. It can also include things like contacts and chat histories. Don’t place your contacts in danger by leaving them on your device: Keep them in your encrypted cloud drive until you can access them in a safe location.


Illustration: Pierre Buttin for The Intercept

Much like you shouldn’t bring your usual phone, you also shouldn’t bring your normal SIM card. Instead, use a temporary SIM card to avoid the possibility of authorities taking control of your phone number. Depending on which region you’re going to, it may make more sense to either buy a temporary SIM card when in-region, or buy one beforehand. The advantage of buying a card at your destination is that it may have a higher chance of working, whereas if you buy one in advance, the claims that vendors make about their cards working in a particular region may or may not pan out.

On the other hand, the region you’re traveling to may have draconian identification requirements in order to purchase a SIM. And, if you’re waiting to purchase a card at your destination, you won’t have phone access while traveling and won’t be able to reach an emergency contact number if you encounter difficulties en route.

Heading Back

Keep in mind that the travel precautions outlined here don’t just apply to your inbound trip; they apply just as much to your return trip home. You may be questioned either as you’re leaving the host country or as you’re arriving back at your local port of entry. Follow the same steps, making sure there is nothing sensitive on your devices before heading back home.

Taking precautions like obtaining and setting up travel devices and accounts, or establishing a temporary phone number, may all seem like hassles for a standard trip, but the point of undertaking these measures is that they’re ultimately less hassle than the repercussions of exposing sensitive information or contacts — or of being interrogated and caged.

The post Digital Security Tips to Prevent the Cops From Ruining Your Trip Abroad appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/29/phone-laptop-security-international-travel/feed/ 0
<![CDATA[Crypto Cash Is Powering Kyrsten Sinema’s Reelection Campaign]]> https://theintercept.com/2023/04/28/kyrsten-sinema-crypto-campaign-donations/ https://theintercept.com/2023/04/28/kyrsten-sinema-crypto-campaign-donations/#respond Fri, 28 Apr 2023 09:00:52 +0000 https://theintercept.com/?p=426566 Sinema has grown more friendly to the cryptocurrency industry as investors and industry employees contributed to her congressional races.

The post Crypto Cash Is Powering Kyrsten Sinema’s Reelection Campaign appeared first on The Intercept.

]]>
After leaving the Democratic Party last year, Kyrsten Sinema will run for reelection to the U.S. Senate as an independent in 2024, according to a report by the Wall Street Journal earlier this month. In her bid for reelection as the senior senator from Arizona, Sinema faces a dour approval rating in her home state and will have to fend off a challenge by Democratic Rep. Ruben Gallego, who has launched an aggressive challenge to the incumbent.

In a hyperpolarized state, winning as an independent could be daunting. But Sinema has amassed a hearty war chest, thanks in part to six-figure contributions from crypto stakeholders pouring in since she joined a congressional caucus that sought to explore issues that would affect their industry.

The windfall from the crypto industry completes a two-year arc for Sinema, who went from being a proponent of regulations that some crypto giants opposed to a force for compromise with the industry. The softened legislation on regulations fell by the wayside, though Sinema’s crypto donations kept pouring in.

“There’s not a lot of money in being a crypto skeptic.”

In the last three years, Sinema has taken in almost half a million dollars from crypto businesses and investors. In 2021, as her position on regulating crypto eased, she raised at least $175,000 in campaign cash from the industry. Between 2022 and 2023, her campaign received more than $330,000 from crypto companies and firms with crypto holdings.

“There’s not a lot of money in being a crypto skeptic,” said Mark Hays, a senior policy analyst with Americans for Financial Reform and Demand Progress.

Some of the largest Sinema campaign contributions over the last four years came from employees at massive private equity firms that began investing heavily in crypto and blockchain technologies in the run-up to the formation of Sinema’s new caucus. These include Apollo Global Management and Andreessen Horowitz, both of which started crypto funds worth hundreds of millions of dollars.

According to Sinema’s most recent financial disclosures, fundraising from crypto-aligned interests isn’t slowing down. Donald Trump spokesperson turned cryptocurrency evangelist Anthony Scaramucci donated $6,600 to Sinema’s campaign in February, despite suffering a major loss from the implosion of crypto exchange FTX last year. (Scaramucci remains confident in crypto.)

“I’m not one of these religious figures that’s going to chant ‘Bitcoin über alles’ no matter what is going on in life,” Scaramucci said earlier this month. “So I want to frame it from that perspective, and then tell you that I’m more bullish now than I’ve ever been.”

Financial Innovation Caucus

At its inception, the trajectory of Sinema’s crypto caucus wasn’t set in stone. In May 2021, Sinema and Sen. Cynthia Lummis, R-Wyo., started a coalition to support financial innovations including blockchain technology and central bank digital currencies to promote “financial inclusion and opportunity for all.”

The Financial Innovation Caucus has nine members including Sinema, Lummis, six Republicans, and Sen. John Hickenlooper, D-Colo. (The caucus website still lists Sinema as a Democrat.)

The caucus sprang into action during a congressional debate over the bipartisan infrastructure bill in August 2021. Sinema — along with Sens. Rob Portman, R-Ohio, and Mark Warner, D-Va. — proposed an amendment that would have strengthened cryptocurrency reporting requirements by narrowing certain exemptions.

The amendment was backed by the White House but drew immediate criticism from the cryptocurrency industry, which preferred an alternative proposal that would have loosened reporting requirements. The president of the Blockchain Association, a trade group that advocates for “the future of crypto,” called the Sinema amendment “terrible.” A spokesperson for Andreessen Horowitz, a venture capital firm that launched a $2.2 billion crypto fund that June, said the proposal would be a “stunning loss for America.”

Several days later, Sinema and her co-authors on the amendment came around on some of the crypto industry’s concerns. Sinema, Warner, Portman, Lummis, and former Sen. Pat Toomey, R-Pa., announced that they had compromised on a proposal to exempt certain groups like software developers and crypto miners from enhanced reporting requirements. Both the Blockchain Association and the White House supported the compromise, but the bill failed in the Senate.

Related

Sen. Kyrsten Sinema Privately Blew Up Biden Nominee Needed to Enact Regulatory Agenda

“The entire goal was moving the bipartisan infrastructure law forward,” a spokesperson for Sinema told The Intercept. “Working with the White House, we found a path forward for the bill to ensure it was not held up or derailed due to separate cryptocurrency concerns from a few Senators.”

The spokesperson disputed the characterization of Sinema’s revised bill as a softer proposed regulation. “The amendment does not ‘loosen reporting requirements,’” the spokesperson said. “It clarifies who is a broker so that people who aren’t actually brokers and cannot fulfill reporting requirements aren’t subject to reporting.”

As Congress debated the competing amendments in the third quarter of 2021, employees at crypto companies, along with venture capital and investment firms with nascent crypto holdings, contributed more than $175,000 total to Sinema’s congressional campaign committees, which raised $2 million that quarter.

These donors include employees from Andreessen Horowitz, in addition to Apollo Global Management, a private equity firm that started offering crypto services in October 2022. Apollo would emerge as the second-largest donor base to Sinema’s campaign committee between 2017 and 2022.

Sinema’s congressional campaign committees received more than $51,000 from Andreessen Horowitz and Apollo. The $24,200 she received from Andreessen Horowitz employees — including one from co-founder Benjamin Horowitz — came mostly from employees who had not previously contributed to her campaigns.

Her campaign also received $27,300 in mostly maxed-out contributions from employees at Apollo, including COO Stuart Rothstein. Employees at Apollo had contributed to her campaigns in previous years but less frequently and in smaller amounts. In 2022, Sinema’s campaign received more than $151,000 from Apollo employees.

Scrutiny on Crypto

In the summer of 2022, as the cryptocurrency industry faced increasing scrutiny amid layoffs and failed pilot projects, industry leaders found steady support from Sinema and her colleagues in Congress. Sinema was the only co-sponsor of Toomey’s July 2022 Virtual Currency Tax Fairness Act, which would have exempted small personal crypto transactions from taxation and was widely celebrated by the industry.

Last August, Sinema and her colleagues reintroduced an identical version of the failed 2021 compromise bill with support from industry leaders like the Crypto Council for Innovation, Coin Center, and the Chamber of Digital Commerce. The bill has languished.

Efforts to regulate crypto have since slowed thanks to the recent collapse of Silicon Valley Bank, or SVB, and its effect on national banks, said Hilary Allen, a professor of financial regulation at American University Washington College of Law.

“Any momentum that was building to do crypto legislation has, to some extent, been deflected into dealing with the more present crisis,” Allen said. She was already skeptical about there being a potential crypto bill that could pass through both the House and the Senate. “Now, given the fact that SVB is such a clear focus of the Senate Banking or House Financial Services Committee, I think that’s going to make it even less likely that legislation will go through.”

In addition to the campaign contributions, the crypto industry has touched Sinema’s private life as well. In April 2021, Sinema’s romantic partner Lindsey Buckman received a home equity line of credit from Figure, a blockchain-powered loan provider, on a property in Arizona — the same property where Sinema is actively registered to vote. (Buckman did not respond to a request for comment.)

In July 2021, Apollo — the company whose employees went on to donate to Sinema — entered into an agreement with Figure Technologies, the firm behind the loan provider.

With the agreement between the two companies, Apollo invested funds to further develop Figure’s technology, and last year began experimenting with loan transfers using Figure’s blockchain. (There is no evidence Apollo’s investment in Figure had any bearing on Buckman’s loan.)

When reached for comment, Sinema’s spokesperson told The Intercept, “Lindsey is a private citizen who is not involved in politics. She deserves to make her own financial decisions without public scrutiny just like all other private citizens.”

The post Crypto Cash Is Powering Kyrsten Sinema’s Reelection Campaign appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/28/kyrsten-sinema-crypto-campaign-donations/feed/ 0
<![CDATA[Army Info War Division Wants Social Media Surveillance to Protect “NATO Brand”]]> https://theintercept.com/2023/04/27/army-cyber-command-nato-social-media/ https://theintercept.com/2023/04/27/army-cyber-command-nato-social-media/#respond Thu, 27 Apr 2023 16:01:07 +0000 https://theintercept.com/?p=426687 An Army Cyber Command official sought military contractors that could help “attack, defend, influence, and operate” on global social media.

The post Army Info War Division Wants Social Media Surveillance to Protect “NATO Brand” appeared first on The Intercept.

]]>
The U.S. Army Cyber Command told defense contractors it planned to surveil global social media use to defend the “NATO brand,” according to a 2022 webinar recording reviewed by The Intercept.

The disclosure, made a month after Russia’s invasion of Ukraine, follows years of international debate over online free expression and the influence of governmental security agencies over the web. The Army’s Cyber Command is tasked with both defending the country’s military networks as well as offensive operations, including propaganda campaigns.

The remarks came during a closed-door conference call hosted by the Cyber Fusion Innovation Center, a Pentagon-sponsored nonprofit that helps with military tech procurement, and provided an informal question-and-answer session for private-sector contractors interested in selling data to Army Cyber Command, commonly referred to as ARCYBER.

Though the office has many responsibilities, one of ARCYBER’s key roles is to detect and thwart foreign “influence operations,” a military euphemism for propaganda and deception campaigns, while engaging in the practice itself. The March 24, 2022, webinar was organized to bring together vendors that might be able to help ARCYBER “attack, defend, influence, and operate,” in the words of co-host Lt. Col. David Beskow of the ARCYBER Technical Warfare Center.

While the event was light on specifics — the ARCYBER hosts emphasized that they were keen to learn whatever the private sector thought was “in the realm of possible” — a recurring topic was how the Army can more quickly funnel vast volumes of social media posts from around the world for rapid analysis.

At one point in the recording, a contractor who did not identify themselves asked if ARCYBER could share specific topics they plan to track across the web. “NATO is one of our key brands that we are pushing, as far as our national security alliance,” Beskow explained. “That’s important to us. We should understand all conversations around NATO that has happened on social media.”

He added, “We would want to do that long term to understand how — what is the NATO, for lack of a better word, what’s the NATO brand, and how does the world view that brand across different places of the world?”

Beskow said that ARCYBER wanted to track social media on various platforms used in places where the U.S. had an interest.

“Twitter is still of interest,” Beskow told the webinar audience, adding that “those that have other penetration are of interest as well. Those include VK, Telegram, Sina Weibo, and others that may have penetration in other parts of the world,” referring to foreign-owned chat and social media sites popular in Russia and China. (The Army did not respond to a request for comment.)

The mass social media surveillance appears to be just one component of a broader initiative to use private-sector data mining to advance the Army’s information warfare efforts. Beskow expressed an interest in purchasing access to nonpublic commercial web data, corporate ownership records, supply chain data, and more, according to a report on the call by the researcher Jack Poulson.

“The NATO Brand”

Tracking a brand’s reputation is an extremely common marketing practice. But a crucial difference between a social media manager keeping tabs on Casper mattress mentions and ARCYBER is that the Army is authorized to, in Beskow’s words, “influence-operate the network … and, when necessary, attack.” And NATO is an entity subject to intense global civilian scrutiny and debate.

While the webinar speakers didn’t note whether badmouthing NATO or misrepresenting its positions would be merely monitored or actively countered, ARCYBER’s umbrella includes seven different units dedicated to offense and propaganda. The 1st Information Operations Command provides “Social Media Overwatch,” and the Army Civil Affairs and Psychological Operations Command works to “gain and maintain information dominance by conducting Information Warfare in the Information Environment,” according to ARCYBER’s website.

Related

Pentagon Tries to Cast Bank Runs as National Security Threat

Though these are opaque, jargon-heavy concepts, the term “information operations” encompasses activities the U.S. has been eager to decry when carried out by its geopolitical rivals — the sort of thing typically labeled “disinformation” when emanating from abroad.

The Department of Defense defines “information operations” as those which “influence, disrupt, corrupt or usurp adversarial human and automated decision making while protecting our own,” while “influence operations” are the “United States Government efforts to understand and engage key audiences to create, strengthen, or preserve conditions favorable for the advancement of United States Government interests, policies, and objectives through the use of coordinated programs, plans, themes, messages, and products synchronized with the actions of all instruments of national power.”

ARCYBER is key to the U.S.’s ability to do both.

While the U.S. national security establishment frequently warns against other countries’ “weaponization” of social media and the broader internet, recent reporting has shown the Pentagon engages in some of the very same conduct.

Last August, researchers from Graphika and the Stanford Internet Observatory uncovered a network of pro-U.S. Twitter and Facebook accounts covertly operated by U.S. Central Command, an embarrassing revelation that led to a “sweeping audit of how it conducts clandestine information warfare,” according to the Washington Post. Subsequent reporting by The Intercept showed Twitter had whitelisted the accounts in violation of its own policies.

Despite years of alarm in Washington over the threat posed by deepfake video fabrications to democratic societies, The Intercept reported last month that U.S. Special Operations Command is seeking vendors to help them make their own deepfakes to deceive foreign internet users.

It’s unclear how the Army might go about conducting mass surveillance of social media platforms that prohibit automated data collection.

During the webinar, Beskow told vendors that “the government would provide a list of publicly facing pages that we would like to be crawled at a specific times,” specifically citing Facebook and the Russian Facebook clone VK. But Meta, which owns Facebook and Instagram, expressly prohibits the “scraping” of its pages.

Asked how the Army planned to get around this fact, Beskow demurred: “Right now, we’re really interested in just understanding what’s in the realm of the possible, while maintaining the authorities and legal guides that we’re bound by,” he said. “The goal is to see what’s in the realm of possible in order to allow our, uh, leaders, once again, to understand the world a little bit better, specifically, that of the technical world that we live in today.”

The post Army Info War Division Wants Social Media Surveillance to Protect “NATO Brand” appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/27/army-cyber-command-nato-social-media/feed/ 0
<![CDATA[With Pentagon Leak, the Press Had Their Source and Ate Him Too]]> https://theintercept.com/2023/04/25/discord-leaker-new-york-times/ https://theintercept.com/2023/04/25/discord-leaker-new-york-times/#respond Tue, 25 Apr 2023 17:51:36 +0000 https://theintercept.com/?p=426501 Whatever Jack Teixeira’s motives, he's accused of sharing documents that have underpinned major stories in the same outlets that helped hunt him down.

The post With Pentagon Leak, the Press Had Their Source and Ate Him Too appeared first on The Intercept.

]]>
Members of law enforcement assemble near the home of Air National Guard member Jack Teixeira on April 13, 2023, in Dighton, Mass.

Photo: Steven Senne/AP


Tracing the concept of homo sacer from antiquity to modern life, philosopher Giorgio Agamben cites the ancient Roman lexicographer Festus, who defined the term as someone “whom the people have judged on account of a crime. It is not permitted to sacrifice this man, yet he who kills him will not be condemned for homicide.” Homo sacer is thus an outlaw who is free to be pursued by vigilante lynch mobs but who, crucially, cannot be martyred. The mass media’s treatment of the alleged Pentagon leaker appears to have taken this conceit to heart, codifying him as a justifiable target for persecution, to be “tracked” and “hunt[ed] down.”

Over and over, the mainstream press has employed a rhetoric of exclusion, stripping the leaker bare of any protections that might be afforded to a whistleblower. He is not, they tell us ad nauseum, an Edward Snowden or a Chelsea Manning. “It does not seem to involve a principled whistleblower, calling attention to wrongdoing or a coverup,” according to a Washington Post editorial. The “far-right” is incorrectly calling him a whistleblower, claims the New York Times. This view lets the outlet chastise those who attribute different motives to the alleged leaker, Jack Teixeira, while simultaneously distancing itself from the “far-right,” despite its own notably pro-law enforcement slant.

The motives of Teixeira, a 21-year-old Air National Guardsman, are important and newsworthy. They are also not fully known. Most press accounts have relied solely on interviews with minors who hung out in the same chatrooms as Teixeira. These sources have painted a compelling picture, but many others, including Teixeira himself, have not yet spoken publicly.

Why, just because the leaker didn’t bring his material directly to a news outlet, wasn’t he deserving of either protection or being cultivated as a future source?

Whatever his motives may have been, they don’t change the outcome of the leak: the release of informative documents that have underpinned major news stories in the same outlets that eagerly joined the search for their source. Reporters have argued that since Teixeira wasn’t a whistleblower, he was fair game to be hunted by law enforcement agencies and exposed by the press. This rationale conveniently sidesteps a key question: Why, just because the leaker didn’t bring his material directly to a news outlet, wasn’t he deserving of either protection or being cultivated as a future source? Why, instead, was he viewed solely or primarily as quarry?

The media’s claim that Teixeira is not a whistleblower has been based in part on the environment in which the documents were disclosed and the relatively small number of people with whom they were originally shared. Based on testimony from others in a chatroom, the Times wrote that the documents Teixeira allegedly shared, far from being disseminated in the public interest, “were never meant to leave their small corner of the internet.” Likewise, the Post claimed that “the classified documents were intended only to benefit his online family,” which Bellingcat estimated as having around 20 active users out of what the Times later said was about 50 total members. Yet on Friday, the Times reported that Teixeira had previously shared sensitive documents on another chat server that was publicly listed and had about 600 users. In their haste to reveal further possibly incriminating evidence against him, the authors seem not to have paused to reflect on how this wider distribution, if accurate, might undermine their earlier argument.

“Keeping secrets is essential to a functioning government,” the Post editorialized shortly after the documents began being covered in the mainstream press. “Breaking the laws for a psychic joyride is a despicable betrayal of trust and oaths.” Meanwhile, over on the news side, the paper churned out numerous articles revealing those very same secrets, some accompanied by unredacted copies of the leaked documents themselves.

Not to be outdone, the Times has deployed language that dehumanizes the leaker, evoking images of a threatening wild animal. The reporters don’t unpack the full significance of this hunting metaphor, which presumably ends with a slaughtered animal presented as a trophy. In the wake of the Times story naming the alleged leaker before his arrest (which has since been replaced by another story), Twitter was in full media victory lap mode, with reporters patting themselves on the back for their promptness in deanonymizing Teixeira.

More recently, however, the trophy hunters have begun to deny culpability for even the possibility that their investigations provided material assistance to the government.

Christiaan Triebert, a former Bellingcat staffer and a co-author of the Times investigation that initially named Teixeira, issued a disavowal of liability, explaining that the Times reporting team went to the suspect’s house in the hope of talking to him, but he wasn’t there, so instead, they interviewed his mother and, later, his stepfather. At one point, a man matching Teixeira’s description drove onto the property in a pickup truck, but upon seeing the journalists, he promptly departed.

Yet Triebert’s self-defense doesn’t entirely follow. “There seems to be a misconception that our story naming Teixeira led to his arrest,” Triebert tweeted. “That’s simply not the case.” But how does he know? Certainty about this only seems possible from inside the Department of Justice effort to find Teixeira, which isn’t where Triebert claims to stand. Triebert did not respond to a request for comment.

Aric Toler, a current Bellingcat staffer and the principal author of the Times investigation that first named Teixeira, has likewise been quick to dismiss the possibility that his reporting aided the government’s investigation: “This should have been obvious, but no, our story naming the Pentagon/Discord leaker didn’t help the feds find him. They already knew at least a day before we identified him.” He cites the FBI affidavit, employing zero skepticism about a government document that represents one side in what is about to become a contested legal process. Toler did not reply to multiple requests for comment.

The narrow parameters of these denials are telling. Toler has been careful to focus his disdain on the notion that the Times story naming the leaker helped lead to his arrest. But that was not the first time Toler wrote about the leaker. Four days earlier, on April 9, Toler published a story about the leak on Bellingcat’s site in which he named for the first time the Discord chat server where the documents seemed to have originally been leaked. In that piece, Toler also supplied the username of a member of the chat server where the documents were shared, explaining, “The Thug Shaker Central server was originally named after its original founder, one member of the server with the username ‘Vakhi’ told Bellingcat.”

Related

Why Did Journalists Help the Justice Department Identify a Leaker?

These two pieces of information — the name of the server and the name of one of its users — could have led the FBI to issue a request to Discord to provide identifying information about the user as well as about the owner of the chat server.

The FBI’s affidavit states that on April 10, the day after Toler’s Bellingcat story was posted online, “the FBI interviewed a user of Social Media Platform 1 (‘User 1’).” That user, who is not named in the affidavit, told the FBI that “an individual using a particular username (the ‘Subject Username’) began posting what appeared to be classified information on Social Media Platform.” The “Subject Username,” the affidavit explains, refers to Teixeira.

As with all documentation produced by government investigators, the FBI affidavit must be taken with an iceberg-sized lump of salt. However, it is at least as possible that Toler’s Bellingcat story provided a material lead for the federal investigation as that investigators already knew about Vakhi and Thug Shaker Central before reading it.

Regardless of whether journalists actually provided material assistance to federal investigators, it is concerning that there has been so little public discussion of or reflection by the reporters involved on the ethical ramifications of their work.

After talking to people who knew Teixeira from the Discord server, the investigatory paths of the FBI and Toler diverged. The FBI appears to have identified the suspected leaker based on server records it requested from the platform, while Toler has revealed that he was able to identify the individual by leveraging information supplied by minors.

Though Toler stated that his sources were “all kids,” neither he nor the Times has made any mention of whether they obtained parental consent for these interviews. UNICEF guidelines state that consent from both the child and their guardian should be established prior to conducting an interview and that the intended use of the interview should be made apparent. It’s not clear whether Toler informed the minors that he was going to use clues they offered, like which video games the alleged leaker liked to play, to out Teixeira. The Times did not respond to a request for comment.

In a since-deleted tweet, Times military correspondent David Philipps effectively threatened that if you don’t leak to the Times, the paper will instead “work feverishly” to identify you. Nuanced or not, this tweet perfectly summarizes the media’s messaging regarding this case: Only those who reach out to a media outlet are worthy of protection; those who leak information via other means risk sharing the fate of homo sacer, a traitor to be hunted down.

The post With Pentagon Leak, the Press Had Their Source and Ate Him Too appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/25/discord-leaker-new-york-times/feed/ 0
<![CDATA[AI Art Sites Censor Prompts About Abortion]]> https://theintercept.com/2023/04/22/ai-art-abortion-censorship/ https://theintercept.com/2023/04/22/ai-art-abortion-censorship/#respond Sat, 22 Apr 2023 10:00:14 +0000 https://theintercept.com/?p=426190 “Why are they censoring something that is clearly under attack?”

The post AI Art Sites Censor Prompts About Abortion appeared first on The Intercept.

]]>
Two of the hottest new artificial intelligence programs for people who aren’t tech savvy, DALL-E 2 and Midjourney, create stunning visual images using only written prompts. They will render almost anything, that is, except prompts containing certain language: words associated with women’s bodies, women’s health care, women’s rights, and abortion.

I discovered this recently when I prompted the platforms for “pills used in medication abortion.” I’d added the instruction “in the style of Matisse.” I expected to get colorful visuals to supplement my thinking and writing about right-wing efforts to outlaw the pills.

Neither site produced the images. Instead, DALL-E 2 returned the phrase, “It looks like this request may not follow our content policy.” Midjourney’s message said, “The word ‘abortion’ is banned. Circumventing this filter to violate our rules may result in your access being revoked.”

DALL-E blocks the AI image generator prompt of “abortion pills.”

Photo: DALL-E

Julia Rockwell had a similar experience. A clinical data analyst in North Carolina, Rockwell has a friend who works as a cell biologist studying the placenta, the organ that develops during pregnancy to nourish the developing fetus. Rockwell asked Midjourney to generate a fun image of the placenta as a gift for her friend. Her prompt was banned.

She then found other banned words and sent her findings to MIT Technology Review. The publication reported that reproductive system-related medical terms, including “fallopian tubes,” “mammary glands,” “sperm,” “uterine,” “urethra,” “cervix,” “hymen,” and “vulva,” are banned on Midjourney, but words relating to general biology, such as “liver” and “kidney,” are allowed.

I’ve since found more banned prompt words. They include products to prevent pregnancy, such as “condom” and “IUD,” an intrauterine device used for birth control. The bans also split along gendered lines for medical devices: “stethoscope” prompts Midjourney to produce gorgeous renderings of an antique instrument, but “speculum,” a basic tool that medical providers use to visualize female reproductive anatomy, is not allowed.

The AI developers devising this censorship are “just playing whack-a-mole” with the word prompts they prohibit, said University of Washington AI researcher Bill Howe. They aren’t deliberately censoring information about female reproductive health. They know that AI mirrors our culture’s worst and most virulent biases, including sexism, and they say they want to protect people from hurtful images that their programs scrape from the internet. So far, they haven’t been able to do that, because their efforts are hopelessly superficial: Instead of putting intensive resources into fixing the models that generate the offensive material, the AI firms have tried to cut out the bias by censoring the prompts.

During a time when women’s right to sexual equality and freedom is under increasing assault by the right, the AI bans could be making things worse.

Midjourney rationalizes bans by explaining that it limits its content to the Hollywood equivalent of PG-13. DALL-E 2 uses PG. The program’s user guide prohibits production of images that are “inherently disrespectful, aggressive, or otherwise abusive.” Also banned are “visually shocking or disturbing content, including adult content or gore,” or which “can be viewed as racist, homophobic, disturbing, or in some way derogatory to a community.” Midjourney also bans “nudity, sexual organs, fixation on naked breasts,” and other pornography-like content. DALL-E 2’s prohibitions are similar.

Many users complain about the restrictions. “Do they want a program for creative professionals or for kindergartners?” complained one DALL-E 2 user on Reddit. A Midjourney member was more political, noting that the bans make it “pretty hard to create images with feminist themes.”

Midjourney explains that “abortion” is banned as a prompt for the AI image generator.

Photo: Debbie Nathan

Bias Feedback Loop

The issue of biases in AI-generated art popped up after the launch of DALL-E, the precursor program to DALL-E 2. Some users noticed signs of gender bias (and racial bias too). Prompting with the words “flight attendant” generated only women. “Builder” produced images solely of men. Wired reported that developmental tests with DALL-E 2’s data found that when a prompt was entered simply for a person, without specifying gender, resulting images were usually of white men. When the prompt added negative nouns and adjectives, such as “a man sitting in a prison cell” or “a photo of an angry man,” resulting images almost invariably depicted men of color.

These problems stem from bias produced by algorithms using models containing massive amounts of potentially harmful data. DALL-E 2’s model, for instance, was built from 12 billion parameters trained on text-image pairs scraped from the internet. As a mirror of the real world, the internet contains torrents of sexist pornography that objectifies and degrades people, especially women. As DALL-E itself admitted last year, its model and the images it produces have “the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity.”

Related

Texas Judge Cosplaying as Medical Expert Has Consequences Beyond the Abortion Pill

On the earlier iteration of DALL-E 2, OpenAI, the research lab that created the program, tried to filter the training data to excise prompts that trigger sexism. Howe, the University of Washington researcher, said in an interview with The Intercept that such filtering is ham-fisted and, in some cases, worsens the bias. For instance, the filtering ended up decreasing how often images of women were produced. OpenAI hypothesized that the decrease occurred because images of women put into the data system were more likely than those of men to look sexualized. By filtering out problematic images, women as a class of the population tended to be erased.

In AI text-to-visual programs, written prompts associated with female bodies can trigger sexist, even sadistically sexist, output. This should not surprise: Everyday human society in most of the world remains obstinately patriarchal. And when it comes to the web, as one researcher reports, large-scale evidence exists for “a masculine default in the language of the online English-speaking world.” Another study found that data on the internet is highly influenced by the economics of the male gaze, including its gaze upon objectified, sexualized images of women and upon violence.

DALL-E 2 has tried to solve the problem superficially: not by retraining its model at the front end to remove harmful imagery, but simply by filtering out written prompts that focus on women’s bodies and activities, including the act of obtaining an abortion. Hence the roadblocks I came up against trying to produce images of abortion pills on the platform, and what happened with Midjourney, which employs similar filters.

“Lock Down the Prompts”

It’s easy to sneak past the filters by tweaking words in the prompts. That’s what Rockwell — the digital analyst who gave Midjourney a prompt including “placenta” — discovered. After unsuccessfully requesting an image for “gynecological exam,” she shifted to the British spelling: “gynaecological.” The images she received, later published in MIT Technology Review, were creepy, if not downright pornographic. They featured nudity and body injuries unrelated to medical treatment. The visuals I got by typing the same phrase were even worse than Rockwell’s. One showed a naked woman lying on an exam table, screaming, with a slash on her throat.


A search on Midjourney for “gynaecological exam” provided four AI generated images.

Photo: Debbie Nathan; Midjourney

Aylin Caliskan, a scholar at the University of Washington’s Information School, co-published a study late last year verifying statistically that AI models tend to sexualize women, particularly teenagers. So, avoiding the word “abortion,” I asked Midjourney to render a visual for the phrase “pregnancy termination in 16-year-old girl. Realistic.” I got back a chilling combination of photorealism and soft-porn horror flick. The image depicts a very young white woman with cleavage exposed and with a grotesquely discolored and swollen belly, from which two conjoined baby heads stare fixedly with four zombie eyes.


Midjourney AI’s return images for the prompt “pregnancy termination in 16-year-old girl. Realistic.”

Photo: Debbie Nathan; Midjourney

Howe, who is an associate professor at the Information School, was a member of Caliskan’s team for the study that inspired my experiment. He is also co-founder of the Responsible AI Systems and Experiences center. He speculated that the salacious visual of the girl’s breasts reflected the prevalence of pornography in Midjourney’s model, while the bizarre babies probably showed that the internet has such a relative paucity of positive or normalizing material regarding abortion that the program got confused and generated gibberish — albeit gibberish that, in the current political climate, could be construed as anti-abortion.

The larger issue, Howe added, is that the amount of data in AI models has exploded recently. The text and visuals they are generating now are so detailed that the models may appear to be thinking and working at levels approaching human abilities. But, Howe said, the models possess “no grounding, no understanding, no experience, no other sensor that reifies words with objects or experiences in the real world.” On their own, they are completely incapable of avoiding bias.

There are only three ways to correct the bias they generate, Howe said. One involves filtering the database while the model is being trained and before it is released to the public. “For example,” he said, “scour through the entire training set, determine for each image if it’s sexualized, and either ensure that sexualized male and female images are equal in number, or remove all of them.” Similar techniques can be used midway through the training, Howe said. Either way is expensive and time-consuming.

Instead, he said, the owners do the cheapest and quickest thing: “They lock down the prompts.” But, Howe notes, this produces “tons of false positives and tons of false negatives,” and “makes it basically impossible to have a scientific discussion about reproduction. This is wrong,” he said. “You need to do the right thing from the beginning.”

“And you need to be transparent,” Howe said. Companies including the Microsoft-backed OpenAI, which Elon Musk also financially backed early on, are lately “releasing one model after the other,” Howe noted. Echoing a recent article in Scientific American, he expressed concern about the secrecy with which the new models are being rolled out. “There’s not much science we can do on them because they don’t tell us how they work or what they were trained on.” He attributed the secrecy to competitive fears of having trade secrets copied and to the probability, as he put it, that they are “all using the same bag of tricks.” Howe said that OpenAI no longer talks publicly about DALL-E’s model. Midjourney’s developer and owner, David Holz, said recently that the program never has and won’t.

“Nothing Is Perfect”

Midjourney is gendered as well as racialized. One person’s prompt for male participants at a protest generated serious-looking, fully clothed white men. A prompt for a Black woman fighting for her reproductive rights returned someone with outsized hips, bared breasts, and an angry scowl.

People using Midjourney have also generated anti-abortion images from metaphors rather than direct references. Someone’s prompt last year created a plate with slices of toast and a sunny side up egg with an embryo floating in the yolk. The image is labeled “Planned Parenthood Breakfast,” implying that people who work for the storied women’s reproductive health and abortion provider are cannibals. Midjourney’s current rules provide no way of removing such images from public view.

Midjourney has been using human beings to vet automated first passes of the output. When The Intercept asked Holz to comment on the problem of prompt words generating biased and harmful images, he said he was test-driving a new plan to replace people with algorithms that he claims will be “much smarter and won’t rely on ‘banned words.’” He added, “Nothing is perfect.”

This offhand attitude is unacceptable, said Renee Bracey Sherman, the director of We Testify, a nonprofit that promotes storytelling by people who’ve had abortions and want to normalize the experience. Prompt bans have long existed for text on social media. She said that this year, on the 50th anniversary of Roe v. Wade, she tweeted information about “self-managed abortion” and saw her post flagged by Twitter as dangerous — which led to it hardly being retweeted. She has seen the same happen to postings by reputable public health experts discussing scientific information about abortion.

Bracey Sherman said she was not surprised by the sexist, racist “protest” image I found on Midjourney. “Social media cannot imagine what a pro-abortion or reproductive rights activity looks like, other than something pornographic,” she said. She worries that word bans on platforms like DALL-E 2 and Midjourney cut off marginalized groups, including poor people and women of color, from good information that they desperately need and which does remain in the data.

Policy does not exist yet for regulating AI, but it should, Howe said. “We figured out how to build a plane,” he said, but “do we trust companies to not kill a plane full of people? No. We put regulations in place.” A New York City law, slated to go into effect in July, bans using AI to make job hiring decisions unless the algorithm first passes a bias audit. Other locales are working on similar laws. Last year, the Federal Trade Commission sent a report to Congress expressing concern about bias, inaccuracy, and discrimination in AI. And the White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights “to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.”

Howe said he is “somewhat optimistic” that civil society in the U.S. will develop AI oversight policy. “But will it be enough and in time?” he asked. “It’s just mind-blowing the speed at which these things are being released.”

“Why are they censoring something that is clearly under attack?”

Bracey Sherman excoriated the companies’ lack of concern for the quality of their models prior to release and their piecemeal response after the output interacts with consumers in an increasingly fraught world. “Why are they not paying attention to what’s going on?” she said of the AI companies. “They make something and then say, ‘Oh, we didn’t know!’”

Of abortion information that gets blocked by banned prompts, she asked, “Why are they censoring something that is clearly under attack?”

The post AI Art Sites Censor Prompts About Abortion appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/22/ai-art-abortion-censorship/feed/ 0
<![CDATA[Georgia National Guard Will Use Phone Location Tracking to Recruit High School Children]]> https://theintercept.com/2023/04/16/georgia-army-national-guard-location-tracking-high-school/ https://theintercept.com/2023/04/16/georgia-army-national-guard-location-tracking-high-school/#respond Sun, 16 Apr 2023 11:00:11 +0000 https://theintercept.com/?p=425854 Federal contract materials outline plans to geofence 67 different public high schools throughout the state and to target phones with recruitment ads.

The post Georgia National Guard Will Use Phone Location Tracking to Recruit High School Children appeared first on The Intercept.

]]>
The Georgia Army National Guard plans to combine two deeply controversial practices — military recruiting at schools and location-based phone surveillance — to persuade teens to enlist, according to contract documents reviewed by The Intercept.

The federal contract materials outline plans by the Georgia Army National Guard to geofence 67 different public high schools throughout the state, targeting phones found within a one-mile boundary of their campuses with recruiting advertisements “with the intent of generating qualified leads of potential applicants for enlistment while also raising awareness of the Georgia Army National Guard.” Geofencing refers generally to the practice of drawing a virtual border around a real-world area and is often used in the context of surveillance-based advertising as well as more traditional law enforcement and intelligence surveillance. The Department of Defense expects interested vendors to deliver a minimum of 3.5 million ad views and 250,000 clicks, according to the contract paperwork.
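Mechanically, a circular geofence of the kind described in the contract reduces to a great-circle distance check: is a device’s reported location within one mile of a campus? A minimal sketch of that inclusion test (the coordinates below are hypothetical, chosen only for illustration):

```python
import math

EARTH_RADIUS_MI = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def in_geofence(device, center, radius_miles=1.0):
    """Is a device ping inside a circular geofence around a campus?"""
    return haversine_miles(*device, *center) <= radius_miles

# Hypothetical coordinates, for illustration only
school = (33.8670, -84.6695)
ping_near = (33.8700, -84.6700)   # a few hundred yards from campus
ping_far = (34.0000, -84.0000)    # tens of miles away
```

Ad platforms layer this test on top of brokered location feeds, but the inclusion check itself is no more discriminating than a radius: anything inside the circle is in, whoever it belongs to.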

While the deadline for vendors attempting to win the contract was the end of this past February, no public winner has been announced.

The ad campaign will make use of a variety of surveillance advertising techniques, including capturing the unique device IDs of student phones, tracking pixels, and IP address tracking. It will also plaster recruiting solicitations across Instagram, Snapchat, streaming television, and music apps. The documents note that “TikTok is banned for official DOD use (to include advertising),” owing to allegations that the app is a manipulative, dangerous conduit for hypothetical Chinese government propaganda.

The Georgia Army National Guard did not respond to a request for comment.

While the planned campaign appears primarily aimed at persuading high school students to sign up, the Guard is also asking potential vendors to target “parents or centers of influence (i.e. coaches, school counselors, etc.)” with recruiting ads. The campaign plans not only call for broadcasting recruitment ads to kids at school, but also for pro-Guard ads to follow these students around as they continue using the internet and other apps, a practice known as retargeting. And while the digital campaign may begin within the confines of the classroom, it won’t remain there: One procurement document states the Guard is interested in “retargeting to high school students after school hours when they are at home,” as well as “after school hours. … This will allow us to capture potential leads while at after-school events.”

“Location based tracking is not legitimate. It’s largely based on the collecting of people’s location data that they’re not aware of and haven’t given meaningful permission for.”

Although it’s possible that children caught in the geofence might have encountered a recruiter anyway — the 2001 No Child Left Behind Act mandated providing military recruiters with students’ contact information — critics of the plan say the use of geolocational data is an inherently invasive act. “Location based tracking is not legitimate,” said Jay Stanley, a senior policy analyst with the American Civil Liberties Union. “It’s largely based on the collecting of people’s location data that they’re not aware of and haven’t given meaningful permission for.” The complex technology underpinning a practice like geofencing can obscure what it’s really accomplishing, argues Benjamin Lynde, an attorney with the ACLU of Georgia. “I think we have to start putting electronic surveillance in the context of what we would accept if it weren’t electronic,” Lynde told The Intercept. “If there were military recruiters taking pictures of students and trying to identify them that way, parents wouldn’t think that conduct is acceptable.” Lynde added that the ACLU of Georgia did not believe there were any state laws constraining geofence surveillance.

The sale and use of location data is largely uncontrolled in the United States, and the legal and regulatory vacuum has created an unscrupulous cottage industry of brokers and analytics firms that turn our phones’ GPS pings into a commodity. The practice has allowed for a variety of applications, including geofence warrants that compel companies like Google to give police a list of every device within a targeted area at a given time. Last year, The Intercept reported on a closed-door technology demo in which a private surveillance firm geofenced the National Security Agency and CIA headquarters to track who came and went.

Although critics of geofencing point to the practice’s invasiveness, they also argue that the inherent messiness of Wi-Fi and Bluetooth signals means that the results are prone to inaccuracy. “This creates the possibility of both false positives and false negatives,” the Electronic Frontier Foundation wrote earlier this year in a Supreme Court amicus brief opposing geofence warrants served to Google. “People could be implicated for a crime when they were nowhere near the scene, or the actual perpetrator might not be included at all in the data Google provides to police.”

It’s doubtful that potential vendors for the Georgia Guard have data accurate enough to avoid targeting kids under 17, according to Zach Edwards, a cybersecurity researcher who closely tracks the surveillance advertising sector. “It would also sweep up plenty of families with young kids who gave them phones before they turned 16 and who were using networks that had location-targetable ads,” he explained in a message to The Intercept. “Very, very few advertising networks track the age of kids under 18. It’s one giant bucket.”

In-school recruiting has been hotly debated for decades, both defended as a necessary means of maintaining an all-volunteer military and condemned as a coercive practice that exploits the immaturity of young students. While the state’s plan specifies targeting only high school juniors and seniors ages 17 and above, demographic ad targeting is known to be error-prone, and experts told The Intercept it’s possible the recruiting messages could reach the phones of younger children. “Generally, commercial databases aren’t known for their high levels of accuracy,” explained the ACLU’s Stanley. “If you have some incorrect ages in there, it’s really not a big deal [to the broker].” The accuracy of demographic targeting aside, there’s also a problem of geographic reality: “There are middle schools within a mile of those high schools,” according to Lynde of the ACLU of Georgia. “There’s no way there can be a specific delineation of who they’re targeting in that geofence.”

Indeed, dozens of the schools pegged for geotargeting have middle schools, elementary schools, parks, churches, and other sites where children may congregate within a mile radius, according to Google Maps. A geofence containing Hillgrove High School in Powder Springs, Georgia, would also snare phone-toting students at Still Elementary School and Lovinggood Middle School, the latter a mere thousand feet away. A mile radius around Collins Hill High School in Suwanee, Georgia, would also include the Walnut Grove Elementary School, along with the nearby Oak Meadow Montessori School, a community swim club, a public park, and an aquatic center. Lynde, who himself enlisted with the Georgia National Guard in 2005, added that he’s concerned beaming recruiting ads directly to kids’ phones “could be a means to bypass parental involvement in the recruiting process,” allowing the state to circumvent the scrutiny adults might bring to traditional military recruiting methods like brochures and phone calls to a child’s house. “Parents should be involved from the onset.”

The post Georgia National Guard Will Use Phone Location Tracking to Recruit High School Children appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/16/georgia-army-national-guard-location-tracking-high-school/feed/ 0
<![CDATA[Why Did Journalists Help the Justice Department Identify a Leaker?]]> https://theintercept.com/2023/04/13/why-did-journalists-help-the-justice-department-identify-a-leaker/ https://theintercept.com/2023/04/13/why-did-journalists-help-the-justice-department-identify-a-leaker/#respond Fri, 14 Apr 2023 01:55:07 +0000 https://theintercept.com/?p=426061 If he’d shared the same classified materials with reporters, he would be tirelessly defended as a source.

The post Why Did Journalists Help the Justice Department Identify a Leaker? appeared first on The Intercept.

]]>
In the fallout from the Pentagon document leaks, a troubling trend has emerged: Journalists seem to be eagerly volunteering their efforts to help the Pentagon and Justice Department facilitate an investigation into the source of the leaks, with no discussion of the ethical ramifications. If the individual — whose identity has been published by journalists, and who has now been arrested by federal authorities — had shared precisely the same classified materials with reporters, regardless of his motivations, he would be tirelessly defended as a source.

NPR recently decried being labeled by Twitter as state-affiliated media, writing that this is a label Twitter uses “to designate official state mouthpieces and propaganda outlets.” That unrelated controversy is notable given that an NPR staffer seems to have deputized himself to act as a government investigator by posting image analyses on Twitter. (While NPR has announced that its official organizational accounts have quit Twitter, individual staff accounts still appear to be active.)

NPR senior editor and correspondent Geoff Brumfiel on Monday combed through artifacts visible in the periphery of the photos of the leaks, as well as collating findings others have discovered, itemizing and explaining each one. Though Brumfiel claimed that his roundup was “largely pointless,” he was effectively performing free labor for the Justice Department, and his posts may corroborate the identity of a suspect. For instance, it may be possible for investigators to analyze a suspect’s credit card purchase history to see if he at some point ordered the objects in question. Brumfiel did not respond to a request for comment in time for publication.

The saving grace here appears to be that the analysis — as is all too often the case with open-source sleuthing on social media — was flawed. Less than 15 minutes after proclaiming he was “confident” that a manual partially visible in some of the photos of leaked documents was for a particular model of scope, others pointed out that in fact the manual was clearly for a different model. “I regret the error,” responded Brumfiel. To his credit, Brumfiel does freely admit in his bio to being “Mostly stupid on the Twitter,” though in this case that self-professed stupidity may put someone’s liberty at risk.

See Something, Say Something

It’s not atypical for government agencies to explicitly request this type of image identification help. For instance, Europol maintains a Trace an Object website, where budding image analysts can help identify various objects in photos linked to child abuse cases. In the case of the leaked Pentagon documents, the Justice Department hasn’t even needed to put out such a call, as plenty of volunteers are offering up leads.

Brumfiel is by no means alone in his social media vigilantism. Jake Godin, a visual investigations journalist at Scripps News, has likewise engaged in the Twitter pastime of volunteering his time to help the Justice Department. Bellingcat, meanwhile, went further and virtually handed over the potential origin point of the leak by specifying the exact name of the chatroom where the documents appear to have first been shared. The fact that these identifications may be aiding the Justice Department investigation appears not to have merited any public consideration from those doing the analyses.

On Wednesday, the Washington Post disclosed further information about the peripheral contents of “previously unreported images,” as well as a variety of additional information about the alleged leaker and his underage associates. The Post states that the leaker “may have endangered his young followers by allowing them to see and possess classified information, exposing them to potential federal crimes.” Given this risk, the Post was extremely cavalier in its depiction of one of those teenagers, publishing video with only rudimentary pixelation accompanied by his unaltered voice. The Post notes that the interviewee asked them not to obscure his voice, but one wonders whether he also asked for close-up shots of his laptop, clearly showing missing keys, to be included. In other words, the Post appears to be acknowledging the danger the interviewee faces while also choosing to readily present evidence that could help investigators confirm his identity. (In response to detailed questions from The Intercept, a Post spokesperson reiterated that the reporters obtained parental consent for the interview.)

The New York Times went further still, identifying the suspected leaker by name on Thursday based on a “trail of evidence” they compiled, including matching elements in the margins of the document photos to other posts on social media.

Perhaps the most bizarre entry in this dubious parade was a story published last week by VICE’s Motherboard about a role-playing game character sheet that seems to have been included in a batch of the leaked document photos. Motherboard published the character sheet in full (in stark contrast to the extreme trouble the same publication took just days before to avoid publishing a poorly redacted document revealing the names of minors suspected of using the artificial intelligence chatbot ChatGPT in school). Motherboard notes that it’s not clear whether the errant image was inadvertently or intentionally added to the photo dump, or whether it was added by the original leaker or an intermediary who further disseminated the photo archive. This lack of clarity makes the decision to publish the document even more confusing and suspect, but the author doesn’t seem bothered, as the story morphs into a humorous analysis of the fun and creative things people do in the world of online role-playing games.

The document in question appears to be an extremely niche adaptation of a role-playing game. Let’s say that someone in an online community on Reddit, 4chan, or a Discord server instantly recognizes this particular game and says, “Oh, that’s Alice’s game sheet.” Alice may now be the subject of Justice Department scrutiny or an online lynch mob, or both, courtesy of Motherboard. Or suppose the Justice Department zeroes in on a suspected leaker and uses the handwriting in the published Motherboard document to positively identify them. The story’s author, Matthew Gault, did not respond to a request for comment.

Duty of Care

Why is the media so eager to help the Justice Department by supplying potentially viable leads? Sure, the leaker wasn’t NPR’s or Motherboard’s source, and as far as we know, had no intention of being a whistleblower. But does that give journalists a green light to act as investigative agents for the Justice Department? A duty of care arguably extends beyond one’s immediate source: You don’t have to assist an individual in publicizing the workings of government, but at the very least, you should not intentionally compromise them.

The argument could be made that the identity of the leaker is newsworthy. For instance, as the CIA points out, leakers are often senior officials. But ascertaining a source’s identity can be done by journalists privately, as opposed to all over social media or in published stories. If it emerges that the source’s identity is not, in fact, newsworthy, a life hasn’t been damaged by overzealous state-serving reporting.

There is, of course, the distinct possibility that the Justice Department investigators are already familiar with the ephemera in the photos, seeing as they too have access to reverse image search sites, and that journalists are not telling them anything they don’t already know. Nonetheless, there is a very real possibility that the various clues to the leaker’s and their associates’ identities proffered by various news outlets helped the government in its recent apprehension of a person suspected to be the leaker.

Either way, the zeal of some “reporters” to out the leaker or find a “gotcha” clue tucked away in the marginalia of an image seems distasteful. A different impulse would be to offer guidance that might help sources avoid getting caught; that could facilitate future leaks and thus greater transparency.

The post Why Did Journalists Help the Justice Department Identify a Leaker? appeared first on The Intercept.

]]>
https://theintercept.com/2023/04/13/why-did-journalists-help-the-justice-department-identify-a-leaker/feed/ 0
<![CDATA[What to Do Before Sharing Classified Documents With Your Friends Online]]> https://theintercept.com/2023/04/12/classified-documents-leak/ https://theintercept.com/2023/04/12/classified-documents-leak/#respond Wed, 12 Apr 2023 21:28:50 +0000 https://theintercept.com/?p=425875 While it’s tempting to share a photo to prove your point, you ought to think through the potential repercussions.

The post What to Do Before Sharing Classified Documents With Your Friends Online appeared first on The Intercept.

]]>
Let’s say you’re locked in a heated geopolitical spat with a few of your online friends in a small chatroom, and you happen to be privy to some classified documents that could back up your argument. While it’s tempting to snap a photo and share it to prove your point, especially given the appeal of impressing onlookers and instantly placating naysayers, it would behoove you to take a step back and think through the potential repercussions. Even though you may only plan for the documents to be shared among your small group of 20 or so friends, you should assume that copies may trickle out, and in a few weeks, those very same documents could appear on the front pages of international news sites. Thinking of this as an inevitability instead of a remote prospect may help protect you in the face of an ensuing federal investigation.

Provenance

Thorough investigators will try to establish the provenance of leaked materials from a dual perspective, seeking to ascertain the original points of acquisition and distribution. In other words, the key investigatory questions pertaining to the origins of the leaks are where the leaker obtained the source materials and where they originally shared them.

To establish the point of acquisition, investigators will likely first enumerate all the documents that were leaked, then check via which systems they were originally disseminated, followed by seeing both who had access to the documents and, if access logs permit, who actually viewed them.

What all this means for the budding leaker is that the more documents you share with your friends, the tighter the noose becomes. Consider the probabilities: If you share one document to which 1,000 people had access and that 500 people actually accessed, you’re only one of 500 possible primary leakers. But if you share 10 documents — even if hundreds of people opened each one — the pool of people who accessed all 10 is likely significantly smaller.
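The narrowing effect can be checked with a toy simulation (every number below is invented for illustration): give each of 10 leaked documents 500 random readers out of 10,000 cleared staff, plus one leaker who opened them all, and intersect the access logs.

```python
import random

random.seed(7)

N_STAFF, N_DOCS, READERS_PER_DOC = 10_000, 10, 500
LEAKER = 42  # hypothetical employee ID

# One simulated access log per leaked document: 500 random readers,
# plus the leaker, who opened every document
logs = []
for _ in range(N_DOCS):
    readers = set(random.sample(range(N_STAFF), READERS_PER_DOC))
    readers.add(LEAKER)
    logs.append(readers)

one_doc_pool = logs[0]                   # suspect pool if one document leaks
all_docs_pool = set.intersection(*logs)  # suspect pool if all ten leak
```

One document leaves a pool of roughly 500 suspects; intersecting all ten logs shrinks the pool almost surely to the leaker alone, since the chance of any other reader appearing in every log by coincidence is vanishingly small.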

Keep in mind that access logs may not just be digital — in the form of keeping track of who opened, saved, copied, printed, or otherwise interacted with a file in any way — but also physical, as when a printer produces imperceptible tracking dots. Even if the printer or photocopier doesn’t generate specifically designed markings, it may still be possible to identify the device based on minute imperfections that leave a trace.

In the meantime, investigators will be working to ascertain precisely where you originally shared the leaked contents in question. Though images of documents, for instance, may pass through any number of hands, bouncing seemingly endlessly around the social media hall of mirrors, it will likely be possible with meticulous observation to establish the probable point of origin where the materials were first known to have surfaced online. Armed with this information, investigators may file for subpoenas to request any identifying information about the participants in a given online community, including IP addresses. Those will in turn lead to more subpoenas to internet service providers to ascertain the identities of the original uploaders.

It is thus critically important to foresee how events may eventually unfold, perhaps months after your original post, and to take preemptive measures to anonymize your IP address by using tools such as Tor, as well as by posting from a physical location at which you can’t easily be identified later and, of course, to which you will never return. An old security adage states that you should not rely on security by obscurity; in other words, you should not fall into the trap of thinking that because you’re sharing something in a seemingly private, intimate — albeit virtual — space, your actions are immune from subsequent legal scrutiny. Instead, you must preemptively guard against such scrutiny.

Digital Barrels

Much as crime scene investigators, with varying levels of confidence, try to match a particular bullet to a firearm based on unique striations or imperfections imprinted by the gun barrel, so too can investigators attempt to trace a particular photo to a specific camera. Source camera identification deploys a number of forensic measures to link a camera with a photo or video by deducing that camera’s unique fingerprint. A corollary is that if multiple photos are found to have the same fingerprint, they can all be said to have come from the same camera.

A smudge or nick on the lens may readily allow an inspector to link two photos together, while other techniques rely on imperfections and singularities in camera mechanisms that are not nearly as perceptible to the lay observer, such as the noise a camera sensor produces or the sensor’s unique response to light input, otherwise known as photo-response nonuniformity.
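The intuition behind sensor-fingerprint matching can be seen in a toy model (pure NumPy, with an invented per-camera noise pattern standing in for real photo-response nonuniformity): average many photos from one camera so that scene content washes out and the fixed pattern remains, then correlate a new photo’s residual against that estimated fingerprint.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64

# Each sensor carries a fixed, unique noise pattern — a toy stand-in
# for photo-response nonuniformity (PRNU)
pattern_a = 3 * rng.normal(0, 1, (H, W))
pattern_b = 3 * rng.normal(0, 1, (H, W))

def shoot(pattern):
    """One photo: random scene content plus the camera's fixed pattern."""
    return rng.normal(100, 10, (H, W)) + pattern

def fingerprint(pattern, n=200):
    """Estimate a camera's pattern by averaging many of its photos:
    scene content averages out, the fixed pattern remains."""
    avg = np.mean([shoot(pattern) for _ in range(n)], axis=0)
    return avg - avg.mean()

def match(img, fp):
    """Correlate a single photo's residual against a candidate fingerprint."""
    res = img - img.mean()
    return np.corrcoef(res.ravel(), fp.ravel())[0, 1]

fp_a = fingerprint(pattern_a)
fp_b = fingerprint(pattern_b)
query = shoot(pattern_a)  # a new, unseen photo from camera A
```

In this toy setup, a photo from camera A correlates markedly above zero with camera A’s fingerprint and near zero with camera B’s. Real forensic pipelines use proper denoising filters and normalized cross-correlation rather than this crude averaging, but the matching logic is the same.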


This can quickly become problematic if you opted to take photos or videos of your leaked materials using the same camera you use to post food porn on Instagram. Though the technical minutiae of successful source camera identification forensics can be stymied by factors like low image quality or applied filters, new techniques are being developed to avoid such limitations.

If you’re leaking photos or videos, the best practice is to follow a principle of one-time use: a camera employed specifically and solely for the leak, never used before and disposed of after.

And, of course, when capturing images to share, it would be ideal to keep a tidy and relatively unidentifiable workspace, avoiding extraneous items either along the periphery or even under the document that could corroborate your identity.

In sum, there are any number of methods that investigators may deploy in their efforts to ascertain the source of a leak, from identifying the provenance of the leaked materials, both in terms of their initial acquisition and their subsequent distribution, to identifying the leaker based on links between their camera and other publicly or privately posted images.

Foresight is thus the most effective tool in a leaker’s toolkit, along with the expectation that any documents you haphazardly post in your seemingly private chat group may ultimately be seen by thousands.

The post What to Do Before Sharing Classified Documents With Your Friends Online appeared first on The Intercept.

Elon Musk Wants to Cut Your Social Security Because He Doesn’t Understand Math

https://theintercept.com/2023/04/09/elon-musk-social-security-cuts/ | April 9, 2023

No, Elon, Japan is not a “leading indicator” just because of your billionaire vibes.


Elon Musk, chief executive officer of Tesla Inc., departs court in San Francisco, California, on Jan. 24, 2023.

Photo: Marlena Sloss/Bloomberg via Getty Images

If there’s one thing you can say for sure about Elon Musk, it’s that he has a huge number of opinions and loves to share them at high volume with the world. The problem here is that his opinions are often stunningly wrong.

Generally, these stunningly wrong opinions are the conventional wisdom among the ultra-right and ultra-rich.

In particular, like most of the ultra-right ultra-rich, Musk is desperately concerned that the U.S. is about to be overwhelmed by the costs of Social Security and Medicare.

He’s previously tweeted — in response to the Christian evangelical humor site Babylon Bee — that “True national debt, including unfunded entitlements, is at least $60 trillion.” On the one hand, this is arguably true. On the other hand, you will understand it’s not a problem if you are familiar with 1) this subject and 2) basic math.

More recently, Musk favored us with this perspective on Social Security:

There’s so much wrong with this that it’s difficult to know where to start explaining, but let’s try.

First of all, Musk is saying that the U.S. will have difficulty paying Social Security benefits in the future due to a low U.S. birth rate. People who believe this generally point to the falling ratio of U.S. workers to Social Security beneficiaries. The Peter G. Peterson Foundation, founded by another billionaire, is happy to give you the numbers: In 1960, there were 5.1 workers per beneficiary, and now there are only 2.8. Moreover, the ratio is projected to fall to 2.3 by 2035.

This does sound intuitively like it must be a big problem — until you think about it for five seconds. As in many other cases, this is the five seconds of thinking that Musk has failed to do.

You don’t need to know anything about the intricacies of how Social Security works to see why. Just use your little noggin. The obvious reality is that if a falling ratio of workers to beneficiaries were an enormous problem, it would already have manifested itself.

Again, look at those numbers. In 1960, 5.1. Now, 2.8. The ratio has dropped by almost half. (In fact, it’s dropped by more than that in Social Security’s history. In 1950 the worker-to-beneficiary ratio was 16.5.) And yet despite a plunge in the worker-retiree ratio that has already happened, the Social Security checks today go out every month like clockwork. There is no mayhem in the streets. There’s no reason to expect disaster if the ratio goes down a little more, to 2.3.

The reason this is possible is the same reason the U.S. overall is a far richer country than it was in the past: an increase in worker productivity. Productivity measures how much the U.S. economy produces per worker, and it is probably the most important statistic regarding economic well-being. We invent bulldozers, and suddenly one person can do the work of 30 people with shovels. We invent computer printers, and suddenly one person can do the work of 100 typists. We invent E-ZPass, and suddenly zero people can do the work of thousands of tollbooth operators.

This matters because, when you strip away the complexity, retirement income of any kind is simply money generated by present-day workers being taken from them and given to people who aren’t working. This is true with Social Security, where the money is taken in the form of taxes. But it’s also true with any kind of private savings. The transfer there just uses different mechanisms — say, Dick Cheney, 82, getting dividends from all the stock he owns.

So it’s all about how much present day workers can produce. And if productivity goes up fast enough, it will swamp any fall in the worker-beneficiary ratio — and the income of both present day workers and retirees can rise indefinitely. This is exactly what happened in the past. And we can see that there’s no reason to believe it won’t continue, again using the concept of math.

Related

As Congress Pushes a $2 Trillion Stimulus Package, the “How Will You Pay For It?” Question Is Tossed in the Trash

The economist Dean Baker of the Center for Economic and Policy Research, a Washington think tank, has done this math. U.S. productivity has grown at more than 1 percent per year — sometimes much more — over every 15-year period since World War II. If it grows at 1 percent for the next 15 years, it will be possible for both workers and retirees to see their income increase by almost 9 percent. If it grows at 2 percent — about the average since World War II — the income of both workers and retirees can grow by 20 percent during the next 15 years. This does not seem like the “reckoning” predicted by Musk.
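Baker’s published figures come from his own model, but a back-of-envelope version shows the same effect. Assume, as a deliberate simplification (this is a sanity check, not Baker’s actual calculation), that per-person income scales with productivity times the share of the population that is working:

```python
def income_growth(prod_growth, years, ratio_now, ratio_later):
    """Toy model: per-person income scales with productivity times the
    working share of the population. A rough sanity check, not Dean
    Baker's actual calculation."""
    productivity = (1 + prod_growth) ** years        # compound productivity gain
    working_share_now = ratio_now / (ratio_now + 1)  # e.g., 2.8 workers per beneficiary
    working_share_later = ratio_later / (ratio_later + 1)
    return productivity * working_share_later / working_share_now - 1

for g in (0.01, 0.02):
    change = income_growth(g, years=15, ratio_now=2.8, ratio_later=2.3)
    print(f"{g:.0%} annual productivity growth -> per-person income {change:+.0%}")
```

The exact percentages depend on modeling choices, so this toy version won’t reproduce Baker’s numbers precisely — but in both scenarios per-person income ends up well above today’s, even with the worker-to-beneficiary ratio falling to 2.3: fifteen years of compounding productivity growth swamps the demographic drag.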

What’s even funnier about Musk’s fretting is that it contradicts literally everything about his life. He’s promised for years that Tesla’s cars will soon achieve “full self-driving.” If indeed humans can invent vehicles that can drive without people, this will generate a huge increase in productivity — so much so that some people worry about what millions of truck drivers would do if their jobs are shortly eliminated. Meanwhile, if low birth rates mean there are fewer workers available, the cost of labor will rise, meaning that it will be worth it for Tesla to invest more in creating self-driving trucks. So what Musk is essentially saying is that technology in general, and his car company in particular, are going to fail.

Finally, there’s Musk’s characterization of Japan as a “leading indicator.” Here’s a picture of Tokyo, depicting what a poverty-stricken hellscape Japan has now become due to its low birthrate:

People walk under cherry blossoms in full bloom at a park in the Sumida district of Tokyo on March 22, 2023. (Photo by Philip FONG / AFP) (Photo by PHILIP FONG/AFP via Getty Images)

People walk under cherry blossoms in full bloom at a park in the Sumida district of Tokyo on March 22, 2023.

Photo: Philip Fong/AFP via Getty Images

That is a joke. Japan is an extremely rich country by world standards, and the aging of its population has not changed that. The statistic to pay attention to here is a country’s per capita income. Aging might be a problem if so many people were old and out of the workforce that per capita income fell, but, as the World Bank will tell you, that hasn’t happened in Japan. In fact, thanks to the magic of productivity, per capita income has continued to rise, albeit more slowly than in Japan’s years of fastest growth.

So if you’re tempted by Musk’s words to be concerned about what a low birth rate means for Social Security, you don’t need to sweat it. A much bigger problem, for Social Security and the U.S. in general, is the low-functioning brains of our billionaires.

The post Elon Musk Wants to Cut Your Social Security Because He Doesn’t Understand Math appeared first on The Intercept.
