Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself, HERE.
Please consider supporting THE WORK DAVE EMORY DOES.
This broadcast was recorded in one, 60-minute segment.
Introduction: Continuing the discussion from FTR #1076, the broadcast recaps key aspects of analysis of the Cambridge Analytica scandal.
In our last program, we noted that both the internet (DARPA projects including Project Agile) and the German Nazi Party had their origins as counterinsurgency gambits. Noting Hitler’s speech before The Industry Club of Dusseldorf, in which he equated communism with democracy, we highlight how the Cambridge Analytica scandal reflects the counterinsurgency origins of the Internet, and how the Cambridge Analytica affair embodies anti-democracy as counterinsurgency.
Key aspects of the Cambridge Analytica affair include:
- The use of psychographic personality testing on Facebook that is used for political advantage: ” . . . . For several years, a data firm eventually hired by the Trump campaign, Cambridge Analytica, has been using Facebook as a tool to build psychological profiles that represent some 230 million adult Americans. A spinoff of a British consulting company and sometime-defense contractor known for its counterterrorism ‘psy ops’ work in Afghanistan, the firm does so by seeding the social network with personality quizzes. Respondents — by now hundreds of thousands of us, mostly female and mostly young but enough male and older for the firm to make inferences about others with similar behaviors and demographics — get a free look at their Ocean scores. Cambridge Analytica also gets a look at their scores and, thanks to Facebook, gains access to their profiles and real names. . . .”
- The parent company of Cambridge Analytica–SCL–was deeply involved with counterterrorism “psy-ops” in Afghanistan, embodying the essence of the counterinsurgency dynamic at the root of the development of the Internet. The use of online data to subvert democracy recalls Hitler’s speech to the Industry Club of Dusseldorf, in which he equated democracy with communism: ” . . . . Cambridge Analytica was a company spun out of SCL Group, a British military contractor that worked in information operations for armed forces around the world. It was conducting research on how to scale and digitise information warfare – the use of information to confuse or degrade the efficacy of an enemy. . . . As director of research, Wylie’s original role was to map out how the company would take traditional information operations tactics into the online space – in particular, by profiling people who would be susceptible to certain messaging. This morphed into the political arena. After Wylie left, the company worked on Donald Trump’s US presidential campaign . . . .”
- Cambridge Analytica whistleblower Christopher Wylie’s observations on the anti-democratic nature of the firm’s work: ” . . . . It was this shift from the battlefield to politics that made Wylie uncomfortable. ‘When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy,’ he says. ‘But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.’ . . . .”
- Wylie’s observations on how Cambridge Analytica’s methodology can be used to build a fascist political movement: ” . . . . One of the reasons these techniques are so insidious is that being a target of a disinformation campaign is ‘usually a pleasurable experience’, because you are being fed content with which you are likely to agree. ‘You are being guided through something that you want to be true,’ Wylie says. To build an insurgency, he explains, you first target people who are more prone to having erratic traits, paranoia or conspiratorial thinking, and get them to ‘like’ a group on social media. They start engaging with the content, which may or may not be true; either way ‘it feels good to see that information’. When the group reaches 1,000 or 2,000 members, an event is set up in the local area. Even if only 5% show up, ‘that’s 50 to 100 people flooding a local coffee shop’, Wylie says. This, he adds, validates their opinion because other people there are also talking about ‘all these things that you’ve been seeing online in the depths of your den and getting angry about’. People then start to believe the reason it’s not shown on mainstream news channels is because ‘they don’t want you to know what the truth is’. As Wylie sums it up: ‘What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.’ . . . .”
- Wylie’s observation that Facebook was “All In” on the Cambridge Analytica machinations: ” . . . . ‘Facebook has known about what Cambridge Analytica was up to from the very beginning of those projects,’ Wylie claims. ‘They were notified, they authorised the applications, they were given the terms and conditions of the app that said explicitly what it was doing. They hired people who worked on building the app. I had legal correspondence with their lawyers where they acknowledged it happened as far back as 2016.’ . . . .”
- The decisive participation of “Spy Tech” firm Palantir in the Cambridge Analytica operation: Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. In addition to his opposition to democracy because it allegedly is inimical to wealth creation, Thiel doesn’t think women should be allowed to vote and holds Nazi legal theoretician Carl Schmitt in high regard. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
- The use of “dark posts” by the Cambridge Analytica team. (We have noted that Brad Parscale has reassembled the old Cambridge Analytica team for Trump’s 2020 election campaign. It seems probable that AOC’s millions of online followers, as well as the “Bernie Bots,” will be getting “dark posts” crafted by AIs scanning their online efforts.) ” . . . . One recent advertising product on Facebook is the so-called ‘dark post’: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times. . . .”
Supplementing the discussion about Cambridge Analytica, the program reviews information from FTR #718 about Facebook’s apparent involvement with elements and individuals linked to CIA and DARPA: ” . . . . Facebook’s most recent round of funding was led by a company called Greylock Venture Capital, who put in the sum of $27.5m. One of Greylock’s senior partners is called Howard Cox, another former chairman of the NVCA, who is also on the board of In-Q-Tel. What’s In-Q-Tel? Well, believe it or not (and check out their website), this is the venture-capital wing of the CIA. After 9/11, the US intelligence community became so excited by the possibilities of new technology and the innovations being made in the private sector, that in 1999 they set up their own venture capital fund, In-Q-Tel, which ‘identifies and partners with companies developing cutting-edge technologies to help deliver these solutions to the Central Intelligence Agency and the broader US Intelligence Community (IC) to further their missions’. . . .”
More about the CIA/DARPA links to the development of Facebook: ” . . . . The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company’s key areas of expertise are in ‘data mining technologies’. Breyer also served on the board of R&D firm BBN Technologies, which was one of those companies responsible for the rise of the internet. Dr Anita Jones joined the firm, which included Gilman Louie. She had also served on the In-Q-Tel’s board, and had been director of Defence Research and Engineering for the US Department of Defence. She was also an adviser to the Secretary of Defence and overseeing the Defence Advanced Research Projects Agency (DARPA), which is responsible for high-tech, high-end development. . . .”
Program Highlights Include: Review of Facebook’s plans to use brain-to-computer technology to operate its platform, thereby enabling the recording and databasing of people’s thoughts; Review of Facebook’s employment of former DARPA head Regina Dugan to implement the brain-to-computer technology; Review of Facebook’s Building 8–designed to duplicate DARPA; Review of Facebook’s hiring of the Atlantic Council to police the social medium’s online content; Review of Facebook’s partnering with Narendra Modi’s Hindutva fascist government in India; Review of Facebook’s employment of Ukrainian fascist Kateryna Kruk to manage the social medium’s Ukrainian content.
1a. Facebook personality tests that allegedly let you learn what makes you tick also let whoever set up the test learn what makes you tick. Since the tests are done through Facebook, their creators can link your test results to your real identity.
If the Facebook personality test in question happens to report your “Ocean score” (Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism), that means the test you’re taking was created by Cambridge Analytica, a company with one of Donald Trump’s billionaire sugar-daddies, Robert Mercer, as a major investor. And it’s Cambridge Analytica that gets to learn all those fun facts about your psychological profile too. And Steve Bannon sat on its board:
“The Secret Agenda of a Facebook Quiz” by McKenzie Funk; The New York Times; 1/19/2017.
Do you panic easily? Do you often feel blue? Do you have a sharp tongue? Do you get chores done right away? Do you believe in the importance of art?
If ever you’ve answered questions like these on one of the free personality quizzes floating around Facebook, you’ll have learned what’s known as your Ocean score: How you rate according to the big five psychological traits of Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. You may also be responsible the next time America is shocked by an election upset.
For several years, a data firm eventually hired by the Trump campaign, Cambridge Analytica, has been using Facebook as a tool to build psychological profiles that represent some 230 million adult Americans. A spinoff of a British consulting company and sometime-defense contractor known for its counterterrorism “psy ops” work in Afghanistan, the firm does so by seeding the social network with personality quizzes. Respondents — by now hundreds of thousands of us, mostly female and mostly young but enough male and older for the firm to make inferences about others with similar behaviors and demographics — get a free look at their Ocean scores. Cambridge Analytica also gets a look at their scores and, thanks to Facebook, gains access to their profiles and real names.
Cambridge Analytica worked on the “Leave” side of the Brexit campaign. In the United States it takes only Republicans as clients: Senator Ted Cruz in the primaries, Mr. Trump in the general election. Cambridge is reportedly backed by Robert Mercer, a hedge fund billionaire and a major Republican donor; a key board member is Stephen K. Bannon, the head of Breitbart News who became Mr. Trump’s campaign chairman and is set to be his chief strategist in the White House.
In the age of Facebook, it has become far easier for campaigners or marketers to combine our online personas with our offline selves, a process that was once controversial but is now so commonplace that there’s a term for it, “onboarding.” Cambridge Analytica says it has as many as 3,000 to 5,000 data points on each of us, be it voting histories or full-spectrum demographics — age, income, debt, hobbies, criminal histories, purchase histories, religious leanings, health concerns, gun ownership, car ownership, homeownership — from consumer-data giants.
No data point is very informative on its own, but profiling voters, says Cambridge Analytica, is like baking a cake. “It’s the sum of the ingredients,” its chief executive officer, Alexander Nix, told NBC News. Because the United States lacks European-style restrictions on second- or thirdhand use of our data, and because our freedom-of-information laws give data brokers broad access to the intimate records kept by local and state governments, our lives are open books even without social media or personality quizzes.
Ever since the advertising executive Lester Wunderman coined the term “direct marketing” in 1961, the ability to target specific consumers with ads — rather than blanketing the airwaves with mass appeals and hoping the right people will hear them — has been the marketer’s holy grail. What’s new is the efficiency with which individually tailored digital ads can be tested and matched to our personalities. Facebook is the microtargeter’s ultimate weapon.
The explosive growth of Facebook’s ad business has been overshadowed by its increasing role in how we get our news, real or fake. In July, the social network posted record earnings: quarterly sales were up 59 percent from the previous year, and profits almost tripled to $2.06 billion. While active users of Facebook — now 1.71 billion monthly active users — were up 15 percent, the real story was how much each individual user was worth. The company makes $3.82 a year from each global user, up from $2.76 a year ago, and an average of $14.34 per user in the United States, up from $9.30 a year ago. Much of this growth comes from the fact that advertisers not only have an enormous audience in Facebook but an audience they can slice into the tranches they hope to reach.
One recent advertising product on Facebook is the so-called “dark post”: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times.
Imagine the full capability of this kind of “psychographic” advertising. In future Republican campaigns, a pro-gun voter whose Ocean score ranks him high on neuroticism could see storm clouds and a threat: The Democrat wants to take his guns away. A separate pro-gun voter deemed agreeable and introverted might see an ad emphasizing tradition and community values, a father and son hunting together.
In this election, dark posts were used to try to suppress the African-American vote. According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous “super predator” line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake. Federal Election Commission rules are unclear when it comes to Facebook posts, but even if they do apply and the facts are skewed and the dog whistles loud, the already weakening power of social opprobrium is gone when no one else sees the ad you see — and no one else sees “I’m Donald Trump, and I approved this message.”
While Hillary Clinton spent more than $140 million on television spots, old-media experts scoffed at Trump’s lack of old-media ad buys. Instead, his campaign pumped its money into digital, especially Facebook. One day in August, it flooded the social network with 100,000 ad variations, so-called A/B testing on a biblical scale, surely more ads than could easily be vetted by human eyes for compliance with Facebook’s “community standards.”
1b. Christopher Wylie–the former head of research at Cambridge Analytica who became one of the key insider whistle-blowers about how Cambridge Analytica operated and the extent of Facebook’s knowledge about it–gave an interview last month to Campaign Magazine. (We dealt with Cambridge Analytica in FTR #‘s 946, 1021.)
Wylie recounts how, as director of research at Cambridge Analytica, his original role was to determine how the company could use the information warfare techniques used by SCL Group – Cambridge Analytica’s parent company and a defense contractor providing psy op services for the British military. Wylie’s job was to adapt the psychological warfare strategies that SCL had been using on the battlefield to the online space. As Wylie put it:
“ . . . . When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy…But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists. . . . .”
Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency. It starts with targeting people more prone to having erratic traits, paranoia or conspiratorial thinking, and getting them to “like” a group on social media. The information you’re feeding this target audience may or may not be real. The important thing is that it’s content that they already agree with so that “it feels good to see that information.” Keep in mind that one of the goals of the “psychographic profiling” Cambridge Analytica conducted was to identify traits like neuroticism.
Wylie goes on to describe the next step in this insurgency-building technique: keep building up the interest in the social media group that you’re directing this target audience towards until it hits around 1,000–2,000 people. Then set up a real life event dedicated to the chosen disinformation topic in some local area and try to get as many of your target audience to show up. Even if only 5 percent of them show up, that’s still 50–100 people converging on some local coffee shop or whatever. The people meet each other in real life and start talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”. This target audience starts believing that no one else is talking about this stuff because “they don’t want you to know what the truth is”. As Wylie puts it, “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”
In the early hours of 17 March 2018, the 28-year-old Christopher Wylie tweeted: “Here we go….”
Later that day, The Observer and The New York Times published the story of Cambridge Analytica’s misuse of Facebook data, which sent shockwaves around the world, caused millions to #DeleteFacebook, and led the UK Information Commissioner’s Office to fine the site the maximum penalty for failing to protect users’ information. Six weeks after the story broke, Cambridge Analytica closed. . . .
. . . . He believes that poor use of data is killing good ideas. And that, unless effective regulation is enacted, society’s worship of algorithms, unchecked data capture and use, and the likely spread of AI to all parts of our lives is causing us to sleepwalk into a bleak future.
Not only are such circumstances a threat to adland – why do you need an ad to tell you about a product if an algorithm is choosing it for you? – it is a threat to human free will. “Currently, the only morality of the algorithm is to optimise you as a consumer and, in many cases, you become the product. There are very few examples in human history of industries where people themselves become products and those are scary industries – slavery and the sex trade. And now, we have social media,” Wylie says.
“The problem with that, and what makes it inherently different to selling, say, toothpaste, is that you’re selling parts of people or access to people. People have an innate moral worth. If we don’t respect that, we can create industries that do terrible things to people. We are [heading] blindly and quickly into an environment where this mentality is going to be amplified through AI everywhere. We’re humans, we should be thinking about people first.”
His words carry weight, because he’s been on the dark side. He has seen what can happen when data is used to spread misinformation, create insurgencies and prey on the worst of people’s characters.
The political battlefield
A quick refresher on the scandal, in Wylie’s words: Cambridge Analytica was a company spun out of SCL Group, a British military contractor that worked in information operations for armed forces around the world. It was conducting research on how to scale and digitise information warfare – the use of information to confuse or degrade the efficacy of an enemy. . . .
. . . . As director of research, Wylie’s original role was to map out how the company would take traditional information operations tactics into the online space – in particular, by profiling people who would be susceptible to certain messaging.
This morphed into the political arena. After Wylie left, the company worked on Donald Trump’s US presidential campaign . . . .
. . . . It was this shift from the battlefield to politics that made Wylie uncomfortable. “When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy,” he says.
“But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.”
One of the reasons these techniques are so insidious is that being a target of a disinformation campaign is “usually a pleasurable experience”, because you are being fed content with which you are likely to agree. “You are being guided through something that you want to be true,” Wylie says.
To build an insurgency, he explains, you first target people who are more prone to having erratic traits, paranoia or conspiratorial thinking, and get them to “like” a group on social media. They start engaging with the content, which may or may not be true; either way “it feels good to see that information”.
When the group reaches 1,000 or 2,000 members, an event is set up in the local area. Even if only 5% show up, “that’s 50 to 100 people flooding a local coffee shop”, Wylie says. This, he adds, validates their opinion because other people there are also talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”.
People then start to believe the reason it’s not shown on mainstream news channels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.” . . . .
. . . . Psychographic potential
. . . . But Wylie argues that people underestimate what algorithms allow you to do in profiling. “I can take pieces of information about you that seem innocuous, but what I’m able to do with an algorithm is find patterns that correlate to underlying psychological profiles,” he explains.
“I can ask whether you listen to Justin Bieber, and you won’t feel like I’m invading your privacy. You aren’t necessarily aware that when you tell me what music you listen to or what TV shows you watch, you are telling me some of your deepest and most personal attributes.” . . . .
. . . . Clashes with Facebook
Wylie is opposed to self-regulation, because industries won’t become consumer champions – they are, he says, too conflicted.
“Facebook has known about what Cambridge Analytica was up to from the very beginning of those projects,” Wylie claims. “They were notified, they authorised the applications, they were given the terms and conditions of the app that said explicitly what it was doing. They hired people who worked on building the app. I had legal correspondence with their lawyers where they acknowledged it happened as far back as 2016.” . . . .
1c. In FTR #946, we examined Cambridge Analytica, the Trump- and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.
Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.
The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.
“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .
. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .
. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
. . . . But he [Wylie] said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .
2a. There are indications that elements in and/or associated with CIA and Pentagon/DARPA were involved with Facebook almost from the beginning: ” . . . . Facebook’s most recent round of funding was led by a company called Greylock Venture Capital, who put in the sum of $27.5m. One of Greylock’s senior partners is called Howard Cox, another former chairman of the NVCA, who is also on the board of In-Q-Tel. What’s In-Q-Tel? Well, believe it or not (and check out their website), this is the venture-capital wing of the CIA. After 9/11, the US intelligence community became so excited by the possibilities of new technology and the innovations being made in the private sector, that in 1999 they set up their own venture capital fund, In-Q-Tel, which ‘identifies and partners with companies developing cutting-edge technologies to help deliver these solutions to the Central Intelligence Agency and the broader US Intelligence Community (IC) to further their missions’. . . .”
“With Friends Like These . . .” by Tom Hodgkinson; guardian.co.uk; 1/14/2008.
. . . . The third board member of Facebook is Jim Breyer. He is a partner in the venture capital firm Accel Partners, who put $12.7m into Facebook in April 2005. On the board of such US giants as Wal-Mart and Marvel Entertainment, he is also a former chairman of the National Venture Capital Association (NVCA). Now these are the people who are really making things happen in America, because they invest in the new young talent, the Zuckerbergs and the like. Facebook’s most recent round of funding was led by a company called Greylock Venture Capital, who put in the sum of $27.5m. One of Greylock’s senior partners is called Howard Cox, another former chairman of the NVCA, who is also on the board of In-Q-Tel. What’s In-Q-Tel? Well, believe it or not (and check out their website), this is the venture-capital wing of the CIA. After 9/11, the US intelligence community became so excited by the possibilities of new technology and the innovations being made in the private sector, that in 1999 they set up their own venture capital fund, In-Q-Tel, which “identifies and partners with companies developing cutting-edge technologies to help deliver these solutions to the Central Intelligence Agency and the broader US Intelligence Community (IC) to further their missions”. . . .
2b. More about the CIA/Pentagon link to the development of Facebook: ” . . . . The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company’s key areas of expertise are in ‘data mining technologies’. Breyer also served on the board of R&D firm BBN Technologies, which was one of those companies responsible for the rise of the internet. Dr Anita Jones joined the firm, which included Gilman Louie. She had also served on the In-Q-Tel’s board, and had been director of Defence Research and Engineering for the US Department of Defence. She was also an adviser to the Secretary of Defence and overseeing the Defence Advanced Research Projects Agency (DARPA), which is responsible for high-tech, high-end development. . . .”
“Facebook–the CIA Conspiracy” by Matt Greenop; The New Zealand Herald; 8/8/2007.
. . . . Facebook’s first round of venture capital funding ($US500,000) came from former Paypal CEO Peter Thiel. Author of anti-multicultural tome ‘The Diversity Myth’, he is also on the board of radical conservative group VanguardPAC.
The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company’s key areas of expertise are in “data mining technologies”.
Breyer also served on the board of R&D firm BBN Technologies, which was one of those companies responsible for the rise of the internet.
Dr Anita Jones joined the firm, which included Gilman Louie. She had also served on the In-Q-Tel’s board, and had been director of Defence Research and Engineering for the US Department of Defence.
She was also an adviser to the Secretary of Defence and overseeing the Defence Advanced Research Projects Agency (DARPA), which is responsible for high-tech, high-end development. . . .
3. Facebook wants to read your thoughts.
- ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.
At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.
“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.
“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”
Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.
“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”
She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.
Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.
Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.
The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.
“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”
…
4. The broadcast then reviews (from FTR #1074) Facebook’s inextricable link with the Hindutva fascist BJP of Narendra Modi:
Key elements of discussion and analysis include:
- Indian politics has been largely dominated by fake news, spread by social media: ” . . . . In the continuing Indian elections, as 900 million people are voting to elect representatives to the lower house of the Parliament, disinformation and hate speech are drowning out truth on social media networks in the country and creating a public health crisis like the pandemics of the past century. This contagion of a staggering amount of morphed images, doctored videos and text messages is spreading largely through messaging services and influencing what India’s voters watch and read on their smartphones. A recent study by Microsoft found that over 64 percent Indians encountered fake news online, the highest reported among the 22 countries surveyed. . . . These platforms are filled with fake news and disinformation aimed at influencing political choices during the Indian elections. . . . ”
- Narendra Modi’s Hindutva fascist BJP has been the primary beneficiary of fake news, and his regime has partnered with Facebook: ” . . . . The hearing was an exercise in absurdist theater because the governing B.J.P. has been the chief beneficiary of divisive content that reaches millions because of the way social media algorithms, especially Facebook, amplify ‘engaging’ articles. . . .”
- Rajesh Jain is among those BJP functionaries who serve Facebook, as well as the Hindutva fascists: ” . . . . By the time Rajesh Jain was scaling up his operations in 2013, the BJP’s information technology (IT) strategists had begun interacting with social media platforms like Facebook and its partner WhatsApp. If supporters of the BJP are to be believed, the party was better than others in utilising the micro-targeting potential of the platforms. However, it is also true that Facebook’s employees in India conducted training workshops to help the members of the BJP’s IT cell. . . .”
- Dr. Hiren Joshi is another of the BJP operatives who is heavily involved with Facebook. ” . . . . Also assisting the social media and online teams to build a larger-than-life image for Modi before the 2014 elections was a team led by his right-hand man Dr Hiren Joshi, who (as already stated) is a very important adviser to Modi whose writ extends way beyond information technology and social media. . . . Joshi has had, and continues to have, a close and long-standing association with Facebook’s senior employees in India. . . .”
- Shivnath Thukral, who was hired by Facebook in 2017 to be its Public Policy Director for India & South Asia, worked with Joshi’s team in 2014. ” . . . . The third team, that was intensely focused on building Modi’s personal image, was headed by Hiren Joshi himself who worked out of the then Gujarat Chief Minister’s Office in Gandhinagar. The members of this team worked closely with staffers of Facebook in India, more than one of our sources told us. As will be detailed later, Shivnath Thukral, who is currently an important executive in Facebook, worked with this team. . . .”
- An ostensibly remorseful BJP politician–Prodyut Bora–highlighted the dramatic effect Facebook and its WhatsApp subsidiary have had on India’s politics: ” . . . . In 2009, social media platforms like Facebook and WhatsApp had a marginal impact in India’s 20 big cities. By 2014, however, it had virtually replaced the traditional mass media. In 2019, it will be the most pervasive media in the country. . . .”
- A concise statement about the relationship between the BJP and Facebook was issued by BJP tech official Vinit Goenka: ” . . . . At one stage in our interview with [Vinit] Goenka that lasted over two hours, we asked him a pointed question: ‘Who helped whom more, Facebook or the BJP?’ He smiled and said: ‘That’s a difficult question. I wonder whether the BJP helped Facebook more than Facebook helped the BJP. You could say, we helped each other.’ . . .”
5. In Ukraine, as well, Facebook and the OUN/B successor organizations function symbiotically:
CrowdStrike, at the epicenter of the supposed Russian hacking controversy, is noteworthy. Its co-founder and chief technology officer, Dmitri Alperovitch, is a senior fellow at the Atlantic Council, financed by elements that are at the foundation of fanning the flames of the New Cold War: “In this respect, it is worth noting that one of the commercial cybersecurity companies the government has relied on is Crowdstrike, which was one of the companies initially brought in by the DNC to investigate the alleged hacks. . . . Dmitri Alperovitch is also a senior fellow at the Atlantic Council. . . . The connection between [Crowdstrike co-founder and chief technology officer Dmitri] Alperovitch and the Atlantic Council has gone largely unremarked upon, but it is relevant given that the Atlantic Council—which is funded in part by the US State Department, NATO, the governments of Latvia and Lithuania, the Ukrainian World Congress, and the Ukrainian oligarch Victor Pinchuk—has been among the loudest voices calling for a new Cold War with Russia. As I pointed out in the pages of The Nation in November, the Atlantic Council has spent the past several years producing some of the most virulent specimens of the new Cold War propaganda. . . . ”
(Note that the Atlantic Council is dominant in the array of individuals and institutions constituting the Ukrainian fascist/Facebook cooperative effort. We have spoken about the Atlantic Council in numerous programs, including FTR #943. The organization has deep operational links to elements of U.S. intelligence, as well as the OUN/B milieu that dominates the Ukrainian diaspora.)
” . . . . Facebook is partnering with the Atlantic Council in another effort to combat election-related propaganda and misinformation from proliferating on its service. The social networking giant said Thursday that a partnership with the Washington D.C.-based think tank would help it better spot disinformation during upcoming world elections. The partnership is one of a number of steps Facebook is taking to prevent the spread of propaganda and fake news after failing to stop it from spreading on its service in the run up to the 2016 U.S. presidential election. . . .”
Since autumn 2018, Facebook has looked to hire a public policy manager for Ukraine. The job came after years of Ukrainians criticizing the platform for takedowns of its activists’ pages and the spread of [alleged] Russian disinfo targeting Kyiv. Now, it appears to have one: @Kateryna_Kruk.— Christopher Miller (@ChristopherJM) June 3, 2019
Kateryna Kruk:
- Is Facebook’s Public Policy Manager for Ukraine as of May of this year, according to her LinkedIn page.
- Worked as an analyst and TV host for the Ukrainian ‘anti-Russian propaganda’ outfit StopFake. StopFake is the creation of Irena Chalupa, who works for the Atlantic Council and the Ukrainian government and appears to be the sister of Andrea and Alexandra Chalupa.
- Joined the “Kremlin Watch” team at the European Values think-tank, in October of 2017.
- Received the Atlantic Council’s Freedom award for her communications work during the Euromaidan protests in June of 2014.
- Worked for OUN/B successor organization Svoboda during the Euromaidan protests: “. . . ‘There are people who don’t support Svoboda because of some of their slogans, but they know it’s the most active political party and go to them for help,’ said Svoboda volunteer Kateryna Kruk. . . .”
- Also has a number of articles on the Atlantic Council’s Blog. Here’s a blog post from August of 2018 where she advocates for the creation of an independent Ukrainian Orthodox Church to diminish the influence of the Russian Orthodox Church.
- According to her LinkedIn page has also done extensive work for the Ukrainian government. From March 2016 to January 2017 she was the Strategic Communications Manager for the Ukrainian parliament where she was responsible for social media and international communications. From January-April 2017 she was the Head of Communications at the Ministry of Health.
- Not only was a volunteer for Svoboda during the 2014 Euromaidan protests, but openly celebrated on Twitter the May 2014 massacre in Odessa, when the far right burned dozens of protestors alive. Kruk’s Twitter feed is set to private now, so there isn’t public access to her old tweets, but people have screen captures of them. Here’s a tweet from Yasha Levine with a screenshot of Kruk’s May 2, 2014 tweet, where she writes: “#Odessa cleaned itself from terrorists, proud for city fighting for its identity.glory to fallen heroes..” She even threw in a “glory to fallen heroes” at the end of her tweet celebrating this massacre. Keep in mind that it was a month after this tweet that the Atlantic Council gave her that Freedom Award for her communications work during the protests.
- Tweeted in 2014 that a man had asked her to convince his grandson not to join the Azov Battalion, a neo-Nazi militia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” He then traveled to Luhansk to fight pro-Russian rebels.
- Lionized a Nazi sniper killed in Ukraine’s civil war. In March 2018, a 19-year-old neo-Nazi named Andriy “Dilly” Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press’ Facebook page.)
- Has staunchly defended the use of the slogan “Slava Ukraini,” which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine’s army.
- Has also said that the Ukrainian fascist politician Andriy Parubiy, who co-founded a neo-Nazi party before later becoming the chairman of Ukraine’s parliament, the Rada, is “acting smart,” writing, “Parubiy touche.” . . . .
It sounds like Palantir is experiencing some significant employee morale problems. Why? Because it turns out Palantir’s Investigative Case Management (ICM) system, currently used by Immigration and Customs Enforcement (ICE), has been used to build profiles of and track undocumented immigrants, including immigrant families whose children have been separated from their parents. Palantir’s software is also used to determine targets for arrest. For example, ICE agents relied on Palantir’s ICM during a 2017 operation that targeted families of migrant children. ICE agents were instructed to use ICM to document any interaction they had with unaccompanied children trying to cross the border, and if they determined that the children’s parents or other family members had facilitated smuggling them across the border, the family members could be arrested and prosecuted for deportation. Earlier this month, it emerged that the ICE unit that carried out the recent high-profile raid in Mississippi — where 680 people were arrested and detained during a school day, resulting in hundreds of children being sent home from school to homes without their parents — uses Palantir’s ICM software. As the following article describes, Palantir was contracted in 2014 to build this ICM system, which lets agents access digital profiles of people suspected of violating immigration laws and organize records about them in one place. The data in the profiles includes emails, phone records, text messages and data from automatic license plate cameras, so these are potentially very invasive databases of information on the US immigrant community.
The fact that the ICM system is now being used to identify the parents and children who end up getting separated has understandably resulted in a number of Palantir employees experiencing crises of conscience. Palantir’s leadership, however, hasn’t experienced this crisis. Quite the opposite. As the following article describes, Palantir has in fact used similar stories about employee concerns at Google over work Google was doing for the US military as an opportunity to bash Google and declare that Palantir wouldn’t have such concerns about controversial government work. More recently, the company renewed a $42 million contract with ICE, and CEO Alex Karp has defended the role Palantir plays with ICE during company town hall meetings. In general, it appears that Palantir is actively trying to brand itself in Washington DC as the Silicon Valley company that won’t suffer from moral qualms about the work it’s contracted to do (even if many of its employees are actually suffering moral qualms):
“Ending the contracts with ICE would risk a backlash in Washington, where Palantir was quickly becoming a go-to provider of data-mining services to a wide range of federal agencies. Data mining is a process of compiling multitudes of information from disparate sources to show patterns and relationships. Google’s decision, earlier the same year, to end a contract with the Pentagon over pressure from its employees had chilled the Internet giant’s relationships with some government leaders who accused it of betraying American interests.”
This is the fundamental business problem Palantir faces when confronting fundamental moral problems: its main customer is the US federal government, so if it refuses a contract like the ICE case management software contract, the company risks the rest of those federal contracts. That’s Palantir’s business model. A business model that includes building the Investigative Case Management (ICM) system that allows ICE to create detailed digital profiles on individuals. It’s the kind of powerful technology that all sorts of government agencies might be interested in, and maybe even Palantir’s corporate clients. Building powerful profiles of large numbers of individuals is a generically useful capability to offer clients. But in the end, it’s the US federal government that is Palantir’s core client, and that’s why the company can’t easily dismiss controversial contracts with agencies like ICE even when its tools are being used to break up migrant families:
It’s that business model, built around keeping the US federal government as a core client, that makes it no surprise to learn that Alex Karp not only dismissed the concerns of those 200 employees, but that Palantir recently renewed a contract with ICE worth $42 million. In addition, Thiel has publicly attacked Google for backing out of a federal government contract and suggested that Google was treasonous (as part of an allegation that the Chinese military had infiltrated Google). And Alex Karp recently gave an interview where he shared his view that “I do not believe that these questions should be decided in Silicon Valley by a number of engineers at large platform companies.” So the message from Karp appears to be that Palantir isn’t actually going to engage in any kind of moral decision-making when it comes to its contracts with the federal government at all. Not considering the morality of its actions is part of this business model:
And that ‘amoral contractor for hire’ attitude has clearly paid off. In March of this year, Palantir was awarded a massive $800 million contract to develop a new intelligence gathering network for the US military. Interestingly, in order to win this contract, Palantir first had to win a court case that found that the federal government is required by law to consider commercially available products instead of only the custom products built by contracting firms. This 2016 court ruling essentially forced the military to reconsider its decision to go with the establishment contractor, Raytheon, for this big new contract, and Palantir ended up winning that contest. So given that Palantir’s commercially available software is presumably applicable to a lot more government agencies than currently use it, it’s going to be interesting to see how many new federal contracts the company ends up securing in coming years:
So Palantir is going to be even more deeply embedded into the US national security state and military following the completion of this new giant Army contract to build the nerve center of a vast intelligence gathering network. What kinds of giant databases of personal profiles might this contract involve?
And since Palantir’s case management software (ICM) that allows for the building of detailed profiles on large numbers of people is one of the main products ICE is interested in, and presumably a lot of other government agencies too, it’s worth recalling that the PROMIS mega-scandal involved bugged commercial case management software also developed in cooperation with the US government. It’s especially notable since Palantir has other corporate clients too, as was the case with PROMIS. And, of course, there’s the whole PRISM saga that makes it abundantly clear Palantir is happy to assist with spying. In other words, if we were to see a repeat of PROMIS in the modern age, it’s a good bet Palantir will be involved. At a minimum, we know the company won’t have any moral qualms about being the next PROMIS.
Here’s the latest example of the GOP’s ongoing and growing efforts to ‘work the refs’ in the media and tech industry. We’ve already seen how the laughable claims of anti-conservative bias waged against social media companies have become a central part of the core right-wing strategy of getting favorable social media treatment and ensuring the platforms remain viable outlets for right-wing disinformation campaigns. Now there appears to be a significant fund-raising effort to finance a project dedicated to researching the pasts of journalists working for virtually all major mainstream news outlets, including their past social media postings, and finding anything that can be embarrassing. The effort is being led by Arthur Schwartz, a Steve Bannon ally who is described as Donald Trump Jr.’s “fixer”.
But it gets more devious: this group is claiming that it isn’t just going to engage in deep opposition research of journalists who report things critical of Trump. It is also going to be looking into the family members of journalists who happen to be active in politics, and anyone else who works at a media organization critical of Trump. And any liberal activists or other opponents of Trump will also be subject to this opposition research campaign. In other words, pretty much anyone who doesn’t support Trump, along with their family members, will be subject to this opposition research.
The group has already released damaging anti-Semitic old tweets from a New York Times editor and a CNN editor. The New York Times editor wrote the tweets while he was in college. The CNN editor wrote them while he was a 15- and 16-year-old growing up in Egypt. It underscores how, after more than a decade of widespread social media usage, we now have a large number of people working in media who were teens cluelessly tweeting away years ago, and now all that old teenage-generated content is available for use by this network.
We’re told by former Bannon ally Sam Nunberg that part of the motive of this operation is revenge. Specifically, revenge against the media for its depiction of Trump as a racist. Yep. It’s all part of the generic ‘no, you’re the real racist’ meme that we so often hear these days. But while revenge is the stated goal of this operation, it’s also clearly part of a media intimidation campaign, as evidenced by the fact that they are being very out in the open about this:
“Operatives have closely examined more than a decade’s worth of public posts and statements by journalists, the people familiar with the operation said. Only a fraction of what the network claims to have uncovered has been made public, the people said, with more to be disclosed as the 2020 election heats up. The research is said to extend to members of journalists’ families who are active in politics, as well as liberal activists and other political opponents of the president.”
Do you support Trump? Nope? Well, get ready for opposition research conducted on you. And this is all being framed as ‘revenge’ against Trump’s opponents for portraying him, and/or portraying his supporters, as racist. This is presumably how this kind of intimidation campaign will be sold to the right-wing audiences...as a ‘we’re fighting for you and your honor!’ operation:
And the guy behind it, Arthur Schwartz, is both an informal adviser to Trump Jr. and a longtime collaborator of Steve Bannon. As Bannon describes it, the people targeted by this are just casualties in a culture war:
Of course, the Trump White House and reelection campaign are claiming they have nothing to do with it. So if any journalists point out the clear connections between this operation and the Trump White House, they will presumably become targets:
And as the following article describes, Arthur Schwartz has decided to make this intimidation campaign even more overtly intimidating by now openly fundraising for this effort. He wants to raise at least $2 million to fund this operation (and clearly wants the public to know this):
“CNN, MSNBC, all broadcast networks, NY Times, Washington Post, BuzzFeed, Huffington Post, and all others that routinely incorporate bias and misinformation in to their coverage. We will also track the reporters and editors of these organizations.”
Intimidating all of the media that doesn’t routinely fête Trump isn’t going to be cheap. But Arthur Schwartz is publicly signaling that his intimidation operation is going to have all the resources it needs. And don’t forget that in the age of Big Data and Cambridge Analytica-style mass data-collection operations, a lot of this opposition research will probably be highly automatable. So if you assume that you’re too insignificant to end up being targeted by this operation, that’s probably not a safe assumption. And given that it’s not just journalists, but liberal activists and anyone else who openly opposes Trump (never-Trumpers) who are being targeted too, it points towards the next phase of the far right’s assault on democracy and civil society: micro-targeted intimidation campaigns against political dissidents. Today it’s journalists and liberal activists who don’t support Trump. But in the era of social media and vast databases of billions of tweets and social media posts, there’s no reason the intimidation needs to be limited to journalists or activists. Virtually all citizens will potentially be vulnerable.
So let’s hope today’s teenagers get the memo about their social media use: watch what you post, kids, because some day it might be used against you. Especially by the Republican Party.
There was a recent story in Politico that appears to solve the mystery of who was behind the “stingray” devices found in Washington DC in recent years. The existence of the devices — which collect cell-phone data by mimicking legitimate cell-phone towers — near the White House and other sensitive areas in DC was first publicly acknowledged by the US government in April of 2018. These reports were deemed at the time to be extra alarming given the fact that President Trump was known to use insecure cellphones for sensitive communications. According to the new Politico report, the US government has concluded that the stingray devices were most likely put in place by Israel, and yet there have been no consequences at all following this finding. Israel has denied the reports and Trump himself told Politico, “I don’t think the Israelis were spying on us...My relationship with Israel has been great...Anything is possible but I don’t believe it.”
So we have reports about a US government investigation concluding Israel was behind one of the most mysterious, and potentially significant, spying operations uncovered in DC in recent years, coupled with denials that anything happened. Which is largely what we should have expected given this finding. On the one hand, given the extremely close and long-standing ties between the US and Israeli military and intelligence services, if this really was an operation that Israel was genuinely behind without the tacit approval of the US government, there would likely be an attempt to minimize the diplomatic fallout and deal with these things quietly, out of the public eye. On the other hand, if this was the kind of operation done with the US government’s tacit approval, we would expect at least some downplaying of the scandal too.
But as the following article makes clear, there’s another huge reason we should expect the US government to downplay a story like this: the US and Israel have been increasingly outsourcing their cyber-spying capabilities to the private sector and jointly investing in these companies. Beyond that, Jeffrey Epstein appears to be one of the figures who was working on this merging of US and Israeli cyber-spying technology in recent years. So when we talk about Israeli spying operations in the US involving the covert use of technology, we have to ask whether or not this was an operation involving a company with US national security ties.
The following report, the latest from Whitney Webb at MintPress on the Epstein scandal, describes this growing joint US/Israeli investment in the cyber sector in recent years and some of the figures behind it in addition to Epstein. The piece focuses on Carbyne (Carbyne911), the Israeli company started in 2014 by former members of Israel’s Unit 8200 cyber unit. Carbyne created Reporty, a smartphone app that promises to provide faster and better communications to public emergency first responders. As we’ve seen, Reporty isn’t just a smartphone app. It also appears to work by monitoring public emergency communication systems and national civilian communications infrastructure for the ostensible purpose of ensuring minimal data loss during emergency response calls, which is the kind of capability with obvious dual-use potential.
As we also saw, while former Israeli prime minister Ehud Barak was publicly the big investor who helped start Carbyne back in 2014, it turns out Jeffrey Epstein was quietly the person behind Barak’s financing. Barak was a known associate of Epstein and reportedly frequented Epstein’s Manhattan mansion. So we have Epstein, a figure with clear ties to Israeli intelligence but also very clear ties to US intelligence, investing in Carbyne. Well, as the piece describes, it turns out that one of the other investors in Carbyne is Peter Thiel. And Carbyne’s board of advisors includes former Palantir employee Trae Stephens, who was a member of the Trump transition team. Former Secretary of Homeland Security Michael Chertoff is also an advisory board member. These are the kinds of investors and advisors that make it clear Carbyne isn’t simply an Israeli intelligence front. This is, at a minimum, a joint operation between the US and Israel.
It’s also noteworthy that both Thiel and Epstein appear to have been leading financiers for ‘transhumanist’ projects like longevity and artificial intelligence. Both have a history of sponsoring scientists working in these areas. Both appeared to have very similar interests and moved in the same circles, and yet there previously weren’t indications that Thiel and Epstein had a relationship. Their mutual investments in Carbyne help answer that. The two definitely knew each other because they were secret business partners.
How many other secret business partnerships might Epstein and Thiel have been involved in, and how many of them involved the Israeli tech sector? We obviously don’t know, but as the following article points out, Palantir opened an R&D branch in Israel in 2013 and there have long been suspicions that Palantir’s ‘pre-cog’ predictive crime algorithms have been used against Palestinian populations. So Palantir appears to be well positioned to help lead any quiet joint US-Israeli efforts to develop cyber-intelligence capabilities in the private sector.
Ominously, as the article also describes, the idea of a joint US-Israeli project on ‘pre-crime’ detection is one that goes back to 1982, when the “Main Core” database of 8 million Americans deemed to be potential subversives was developed by Oliver North under the “Continuity of Government” program and maintained using the PROMIS software (which sounds like a complementary program to “Rex 84”). According to anonymous intelligence sources talking to MintPress, this “Main Core” database of US citizens considered “dissidents” still exists today. According to these anonymous U.S. intelligence officials, who reportedly have direct knowledge of the US intelligence community’s use of PROMIS and Main Core from the 1980s to 2000s, Israeli intelligence played a role in the deployment of PROMIS as the software used for Main Core. And Palantir, with its PROMIS-like Investigative Case Management (ICM) software already being offered to the US government for use in tracking immigrants, is the company well positioned to be maintaining the current version of Main Core. The article also reports that Main Core was used by at least one former CIA official on Ronald Reagan’s National Security Council to blackmail members of Congress, Congressional staffers and journalists. That obviously has thematic ties to the Epstein sexual trafficking network, which appears to have had the blackmailing of powerful people as one of its core functions.
Also noteworthy in all this is that Carbyne’s products were initially sold as a solution for mass shootings (‘solution’, in the sense that victims would be able to contact emergency responders). That’s part of what makes Thiel’s investment in Carbyne extra interesting given the pre-crime prediction capabilities Palantir has been offering law enforcement in recent years. As the article notes, this all potentially ties in to the recent push by the Trump administration to create HARPA, a new US government agency modeled after DARPA, that could create tools for tracking the mentally ill using smartphones and smartwatches and predicting when they might become violent. Palantir is perfectly situated to capitalize on an initiative like that.
And that’s all part of the context we have to keep in mind when reading reports that the “stingray” devices in Washington DC were set up by Israel and the response from the US government is a big *yawn*. When figures like Thiel and Epstein are acting as middle-men in some sort of joint US-Israeli cyber-spying privatization drive, it’s hard not to wonder if those stingray devices aren’t also part of some sort of joint initiative:
“Another funder of Carbyne, Peter Thiel, has his own company that, like Carbyne, is set to profit from the Trump administration’s proposed hi-tech solutions to mass shootings. Indeed, after the recent shooting in El Paso, Texas, President Trump — who received political donations from and has been advised by Thiel following his election — asked tech companies to “detect mass shooters before they strike,” a service already perfected by Thiel’s company Palantir, which has developed “pre-crime software” already in use throughout the country. Palantir is also a contractor for the U.S. intelligence community and also has a branch based in Israel.”
As we can see, Peter Thiel and Jeffrey Epstein’s paths did indeed cross with their mutual investments in Carbyne. And while we should have expected their paths to cross given the enormous overlap between their interests and activities, this is the first confirmation we’ve found. It’s also a big reason we shouldn’t assume that stories about Israeli spying on the US government aren’t describing operations done with the US government’s participation. Don’t forget that letting Israel spy on US citizens and others in the DC area could be a means of the US intelligence services getting around legal and constitutional restrictions on domestic surveillance. In other words, there are some potentially huge incentives for a joint US-Israeli spying operation that includes spying on Americans. Especially if that spying allows for the blackmailing of US politicians. And based on the history of programs like the “Main Core” dissident database, which was reportedly used for blackmailing members of Congress, and the supporting role Israeli intelligence reportedly played in setting “Main Core” up, we shouldn’t be surprised by any stories at all about Israeli spying operations in DC. Given that history, the only thing we should be surprised by is if this operation wasn’t done in coordination with US intelligence:
So is the story about Israeli “stingrays” in DC really just a story about an Israeli spying operation? Or is it a story about a joint US-Israeli spying operation? And if it is a joint operation, is it part of a blackmail operation too? Is Palantir involved? These are the kinds of questions we have to ask now that we’ve learned that Peter Thiel and Jeffrey Epstein were quiet co-investors in Israeli tech companies with clear ‘dual use’ capabilities.
Here are some articles worth keeping in mind regarding the ongoing question of who Jeffrey Epstein was coordinating with in his Silicon Valley investments and the people involved in the rehabilitation of Epstein’s reputation in recent years. We’ve already seen how one of Epstein’s co-investors in Carbyne911 — the Israeli tech company that makes emergency responder communication technology with what appears to be possible ‘dual use’ intelligence capabilities — is Peter Thiel. Epstein was reportedly the financier behind the 2015 investments in Carbyne by former Israeli Prime Minister Ehud Barak. Thiel’s Founders Fund invested in Carbyne in 2018. But as the following article describes, Epstein was getting introduced to major Silicon Valley financiers like Thiel back in 2015. And it was apparently Silicon Valley investor Reid Hoffman, a member of the ‘PayPal Mafia’, who arranged an August 2015 dinner where Epstein was a guest along with Elon Musk, Mark Zuckerberg, and Peter Thiel.
Hoffman has subsequently publicly apologized for inviting Epstein to this dinner, saying in an email, “By agreeing to participate in any fundraising activity where Epstein was present, I helped to repair his reputation and perpetuate injustice. For this, I am deeply regretful.” So Hoffman acknowledges that this dinner helped repair Epstein’s reputation.
Hoffman also acknowledges several interactions with Epstein that he says were for the purpose of fundraising for MIT’s Media Lab, which has been reeling from the revelations of the extensive donations it received from Epstein even after his 2009 child sex trafficking conviction. Hoffman asserts that Epstein’s presence at this dinner was at the request of Joi Ito, then the head of the Media Lab, for the purpose of fundraising for the Media Lab. Given that Epstein had already been donating to the MIT Media Lab for years, it’s unclear how Epstein’s presence at the dinner would assist in that fundraising effort. Was Epstein supposed to convince Musk, Thiel, and Zuckerberg to donate too?
Recall that Hoffman was reportedly the figure who financed the operation by New Knowledge to run a fake ‘Russian Bot’ network in the 2017 Alabama special Senate race. Also recall how, while Hoffman’s political donations are primarily to Democrats, he’s also expressed some views strongly against the New Deal and government regulations. If he’s a real Democrat, he’s decidedly in the ‘corporate Democrat’ wing of the party.
So Hoffman invited Epstein to an August 2015 dinner with leading Silicon Valley investors like Thiel, Zuckerberg, and Musk, apparently at the request of the head of the MIT Media Lab to help with fundraising despite Epstein having donated to the lab for years. At least that’s the explanation we’re being given for this August 2015 dinner:
““By agreeing to participate in any fundraising activity where Epstein was present, I helped to repair his reputation and perpetuate injustice. For this, I am deeply regretful,” Hoffman said in the email.”
So the way Hoffman is spinning this, he was helping to repair Epstein’s reputation by having him present at this August 2015 meeting for “fundraising activities” for MIT’s Media Lab. And Epstein’s involvement in this fundraising was done at the behest of Joi Ito:
But, again, Epstein had been donating to the Media Lab for years. So why would he need to attend another fundraising dinner? Was Epstein making future donations contingent on the Media Lab somehow rehabbing his reputation? Or was he at this meeting to make a pitch to Musk, Zuckerberg, and Thiel for why they should donate to the Media Lab too?
Note that, in addition to funding the Media Lab’s Disobedience Award, Hoffman also sits on the Media Lab’s advisory council. So he’s more than just a donor and fundraiser for the Media Lab.
It’s also worth noting that, as the following article describes, someone in Silicon Valley appeared to be trying to assist Epstein in the public rehabilitation of his reputation as late as this summer, after the Miami Herald’s explosive reporting on him in December. So Epstein has some pretty huge mystery fans in Silicon Valley:
“All three interviews seem to have touched on Epstein’s relationship with Silicon Valley. Stewart wrote that he contacted Epstein to confirm a rumor that Epstein was advising Tesla founder Elon Musk, and both The Information and Bowles cover the tech sector. Stewart reached out directly to Epstein, but it’s unclear who brokered the other meetings. The tech focus suggests that someone in Silicon Valley may have been trying to help Epstein connect with reporters.”
Was Hoffman the mystery person who may have been brokering interviews with Epstein? Recall that Peter Thiel became an Epstein co-investor in Carbyne911 last year. Might Thiel have been the mystery broker? We have no idea, and given the number of contacts Epstein had in Silicon Valley, it’s not like Hoffman or Thiel are the only suspects. As the following article by New York Times columnist James B. Stewart describes, Epstein was allegedly involved with helping Elon Musk find a new Tesla chairman (something Musk denies). Beyond that, Epstein told Stewart during an interview last year that he had personally witnessed prominent tech figures taking drugs and arranging for sex. So when we think about the potential blackmail material Epstein probably had on Silicon Valley figures, the list of figures who may have willingly or unwillingly been working to rehabilitate Epstein’s reputation is a pretty long one:
“Mr. Epstein then meandered into a discussion of other prominent names in technology circles. He said people in Silicon Valley had a reputation for being geeky workaholics, but that was far from the truth: They were hedonistic and regular users of recreational drugs. He said he’d witnessed prominent tech figures taking drugs and arranging for sex (Mr. Epstein stressed that he never drank or used drugs of any kind).”
Having Jeffrey Epstein witness you arranging for sex is probably the kind of situation that will make you highly compliant when it comes to helping his reputation. Or make donations...might that be part of the value Epstein provided for that 2015 dinner party that was ostensibly a fundraising operation for Media Lab? Epstein’s presence could presumably make any former ‘clients’ of his much more likely to open their checkbooks.
It’s also worth noting that Mohammed bin Salman could arguably be considered a prominent Silicon Valley individual given the extensive Saudi investments in Silicon Valley companies:
So when Epstein talks about M.B.S. speaking to him often and visiting him many times, while part of the nature of those visits could obviously include prostitution, it’s also very possible M.B.S. was using Epstein as a kind of Silicon Valley investment front too.
And that’s part of what makes the mystery of the identity of Epstein’s main Silicon Valley benefactor so mysterious: there are just way too many viable suspects.
Remember when a group of Republican members of Congress stormed into the secure room for highly sensitive work (the SCIF) where the House Intelligence Committee was holding an impeachment hearing last month, prompting security concerns over the fact that they brought their cell phones into this room where smartphones aren’t allowed? Well, here’s an example of why bringing those smartphones into that room really did pose a very real security risk. It also happens to be an example of how smartphones represent a security risk to pretty much anyone:
A new security flaw was just discovered in Google’s widely-used Android operating system for smartphones. Security firm Checkmarx discovered the flaw and created an app demonstrating the large number of ways it can be exploited. It’s like the perfect flaw for surreptitious targeted spying or mass spying. The flaw enables any app to potentially take control of your smartphone’s camera and microphone. Audio and video recordings and pictures can be made and sent back to a command and control server. The attack appears to rely on the Google Camera app to get around permission restrictions. The flaw also allows the attacker to search through your entire collection of photos and videos already stored on the phone and send them back to the server. It can collect your GPS location data too. So it basically turns your smartphone into the perfect spying device.
But it gets worse. Because while the use of this flaw would be noticeable if it was being executed while a user was looking at their phone (for example, they would see the video being recorded in the app), it’s possible to use a phone’s proximity sensor to determine when the phone is face down, when it would be safe to start recording without the user noticing. Another highly opportune time to exploit this vulnerability is when you are holding your phone up to your ear, allowing for pictures and video to be taken of the surrounding room. This is also something apps can detect. Checkmarx’s example malware had both of these capabilities.
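To make the gating trick concrete, here is a minimal sketch (not Checkmarx's actual code) of the kind of decision logic their proof-of-concept reportedly used, simulated with plain values. A real Android app would read these from the proximity sensor and accelerometer via SensorManager; the function name and thresholds here are invented for illustration.

```python
def safe_to_record(proximity_cm: float, z_accel: float) -> bool:
    """Return True when covert recording would likely go unnoticed.

    proximity_cm: distance reported by the proximity sensor (small when
                  something covers it, e.g. the phone is against an ear).
    z_accel:      acceleration along the screen's axis; roughly -9.8 m/s^2
                  when the phone is lying face down.
    """
    at_ear = proximity_cm < 5.0    # sensor is covered
    face_down = z_accel < -9.0     # gravity points out through the screen
    return at_ear or face_down

# Phone lying face up on a desk, user watching the screen: not "safe".
print(safe_to_record(proximity_cm=100.0, z_accel=9.8))   # False
# Phone held to the ear during a call: an opportune moment for the attacker.
print(safe_to_record(proximity_cm=1.0, z_accel=2.0))     # True
```

The point is how little information an app needs to pick its moment: two sensor readings that require no special permissions at all.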
Perhaps the worst part of this discovered vulnerability is that it demonstrated how apps were able to easily bypass the restrictions in Android’s operating system that are supposed to prevent apps from accessing things like cameras or microphones without users explicitly giving permission. So apps that didn’t request access to cameras and microphones could still potentially access them on Android phones until this vulnerability was found. And to upload photos and videos to the attackers’ command and control server only required that the app be given access to the phone’s storage, which is an extremely common permission for apps to request.
At this point we know that Android phones built by Google and Samsung are vulnerable to this attack. We’re also told by Checkmarx that Google has privately informed them that other manufacturers are vulnerable, but they haven’t been disclosed yet. Google issued a statement claiming that the vulnerability was addressed on impacted Google devices with a July 2019 patch to the Google Camera Application and that patches have been made available to all partners. Note that in the timeline provided by Checkmarx, they informed Google of the vulnerability on July 4th. So it should have hopefully been fixed for at least some of the impacted people back in July. At least for Android phones built by Google or Samsung. But that still leaves the question of how long this kind of vulnerability has been exploitable:
“The skill and luck required to make the attack work reliably and without detection are high enough that this type of exploit isn’t likely to be used against the vast majority of Android users. Still, the ease of sneaking malicious apps into the Google Play store suggests it wouldn’t be hard for a determined and sophisticated attacker to pull off something like this. No wonder phones and other electronics are barred from SCIFs and other sensitive environments.”
Have sophisticated attackers been using this vulnerability all along? We don’t know, but it didn’t sound like Checkmarx had a very hard time discovering this. And given how Checkmarx was able to build their proof-of-concept app to only operate when the phone was either face down or being held up to someone’s ear, it’s possible this has been a widely used hack that no one noticed:
So if you have an Android phone with some questionable apps, especially a phone not manufactured by Google or Samsung and therefore potentially still vulnerable, it might be worth running those apps and then laying the phone face down on a glass surface so you can still see what’s happening on the phone’s screen.
Also note how Checkmarx’s report isn’t just disclosing this vulnerability exploited via the Google Camera app. It’s also a reminder that when apps are given access to a phone’s storage device, there’s nothing really stopping those apps from rooting through all of the other data on your phone’s storage card. Like all your photos and videos. And then uploading them to a server:
As Checkmarx describes in their report, when you give an app in Android access to the storage on the device, you aren’t just giving it access to its own stored data. You are giving the app access to everything stored on that SD card:
“It is known that Android camera applications usually store their photos and videos on the SD card. Since photos and videos are sensitive user information, in order for an application to access them, it needs special permissions: storage permissions. Unfortunately, storage permissions are very broad and these permissions give access to the entire SD card. There are a large number of applications, with legitimate use-cases, that request access to this storage, yet have no special interest in photos or videos. In fact, it’s one of the most common requested permissions observed.”
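The breadth of that permission is easy to demonstrate. Here is a minimal sketch of what any app granted storage access could do: walk the entire shared storage tree and collect every photo and video it finds. The function name, extension list, and directory layout are hypothetical; on Android the shared root is typically /storage/emulated/0.

```python
import os

# File types a snooping app might harvest; this list is illustrative.
MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".mp4", ".3gp"}

def find_all_media(storage_root: str) -> list:
    """Recursively collect every media file under the storage root.

    With broad storage permission, nothing restricts the walk to the
    app's own files: every app's saved photos and videos are visible.
    """
    found = []
    for dirpath, _dirnames, filenames in os.walk(storage_root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in MEDIA_EXTENSIONS:
                found.append(os.path.join(dirpath, name))
    return sorted(found)
```

From there, uploading the list to a command-and-control server is just ordinary networking code, which is why the storage permission alone is such a large exposure.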
So while this recently disclosed vulnerability is primarily focused on how the Google Camera app had this massive flaw that allowed for the hijacking of cameras and microphones, it’s also a reminder that all of the contents of your smartphone’s SD card are potentially available to any app on your phone as long as those apps have been given the “Storage” permission. And that’s not just a vulnerability that needs to be fixed. It’s a basic part of how the Android operating system works.
Also don’t forget that Google was started with seed funding from the CIA. So when we learn about these kinds of vulnerabilities that are almost tailor made for spies, maybe that’s what they are.
It’s all a reminder that the modern technology regime is predicated on systems of trust. Trust in software and hardware developers that the vast majority of users can’t realistically have a basis for giving, and yet must give in order to use the technology. In other words, our modern technology regime is predicated on systems of untrustworthy trust. Which seems like a pretty huge security vulnerability.
Here’s a story about Cambridge Analytica that’s really about a much larger story that’s going to be unfolding over the coming months: a large leak of over 100,000 Cambridge Analytica documents has started trickling out online from the anonymous @HindsightFiles Twitter account. The files came from the email accounts and hard drives of Brittany Kaiser. Recall how Kaiser, the director of business development at SCL between February 2015 and January of 2018, has already come forward and claimed that the ~87 million estimate of the number of people who had their Facebook profile information collected by Cambridge Analytica is too low and the real number is “much greater”. We don’t know yet if Kaiser is the direct source of these anonymous leaks, but it’s her files getting leaked. Kaiser decided to speak out publicly about the full scope of Cambridge Analytica’s activities following the election in the UK last month. The way she puts it, her cache of files contains thousands and thousands more pages which show a “breadth and depth of the work” that went “way beyond what people think they know about ‘the Cambridge Analytica scandal’”. The files also turn out to be the same files subpoenaed by the Mueller investigation.
So what new information has been released so far? Well, it’s quite a tease: we’re told the documents are going to relate to Cambridge Analytica’s work in 68 countries. And the “industrial scale” nature of the operation is going to be laid bare. The document release began on New Year’s Day and included materials on elections in Malaysia, Kenya, and Brazil. The files also include material that suggests Cambridge Analytica was working for a political party in Ukraine in 2017. We don’t yet know which party.
Unsurprisingly, there’s also a Dark Money angle to the story. The documents include emails between major Trump donors discussing ways of obscuring the source of their donations through a series of different financial vehicles. So the unlimited secret financing of political campaigns allowed by US election law includes the secret financing of secret sophisticated social media psychological manipulation campaigns too. Surprise. Only some of the 100,000+ documents have been leaked so far and more are set to be released in coming months. So the @HindsightFiles twitter account is going to be one to watch:
“The release of documents began on New Year’s Day on an anonymous Twitter account, @HindsightFiles, with links to material on elections in Malaysia, Kenya and Brazil. The documents were revealed to have come from Brittany Kaiser, an ex-Cambridge Analytica employee turned whistleblower, and to be the same ones subpoenaed by Robert Mueller’s investigation into Russian interference in the 2016 presidential election.”
So the trove of Kaiser’s documents handed over to the Mueller team is set to be released in coming months. That’s exciting. Especially since she’s describing the full scope of the Cambridge Analytica operation as including the coordination of governments and intelligence agencies, in addition to the political campaigns we already knew about. Hopefully we get to learn which Ukrainian political party Cambridge Analytica was working with in 2017:
So with the much greater scope of the Cambridge Analytica operation in mind, here’s a Grayzone piece from 2018 that describes “Project Titania”, the name for an operation focused on psychologically profiling the Yemeni population for the US military. The article is based on documents that describe SCL’s work as a military contractor in countries around the world and includes some earlier work SCL did in Ukraine. The work is so early it either preceded the formal incorporation of SCL or must have been one of SCL’s very first projects. Because according to the internal SCL documents they obtained, SCL was working on promoting the “Orange Revolution” in Ukraine back in late 2004. SCL was started in 2005. So Ukraine appears to have been one of SCL’s very first projects. The documents obtained by the Grayzone Project also describe operations across the Middle East as a US and UK counter-insurgency contractor, including an operation in Iran in 2009. It points towards a key context to keep in mind as Kaiser’s 100,000+ documents are released in coming months: while much of what Cambridge Analytica and its SCL parent company were doing in those 68 countries was probably done at the behest of private clients, we can’t forget that SCL has a long history as a military contractor too. The US and UK military and intelligence agencies were probably clients in many of those cases, but it’s also probably not limited to the US and UK. As Kaiser warns us, this is a global operation. And these services have been up for sale since as far back as Ukraine’s Orange Revolution:
“Founded in 2005, SCL specializes in what company literature has described as “influence operations” and “psychological warfare” around the globe. An SCL brochure leaked to the BBC revealed how the firm exacerbated ethnic tensions in Latvia to assist their client in 2006.”
SCL’s founding documents going back to 2005 tout its ability to wage “influence operations” and “psychological warfare” around the globe. That’s how far back the Cambridge Analytica story goes. Although it appears to go even further back since SCL’s brochure boasted of its success “in maintaining the cohesion of the coalition to ensure a hard fought victory,” of the 2004 Orange Revolution in Ukraine:
Later, in 2009, SCL was doing some sort of psychological profiling of Iran, along with Libya, Pakistan, and Syria:
So years before the 2016 election, SCL was already acting as a psychological warfare contractor in countries around the world. It points to another important context for the Cambridge Analytica scandal: the US populace targeted in 2016 may have effectively been guinea pigs for this technology in the context of using Facebook to gather psychological profiles on large numbers of people. But they weren’t the first guinea pigs for SCL’s psychological profiling techniques, because societies across the world had already been serving that role for years. Apparently starting in Ukraine.
So this story is promising to get much bigger as more documents are leaked. It also raises an interesting question in the context of President Trump’s decision to drone-assassinate one of Iran’s most revered leaders: from a psychological warfare perspective, was that a good idea? It doesn’t seem like it was, but it would be interesting to know what the regime-change psychological warfare specialists say about that. Since Cambridge Analytica has unfortunately reincorporated as Emerdata, maybe someone can ask them.
The New York Times had a recent piece about a company described as a little-known entity that might end privacy as we know it. Basically, the company, Clearview AI, offers what amounts to a super-facial-recognition service. The company appears to have scraped as much image and identity information as possible from social media sites like Facebook, YouTube, and Venmo, and allows clients to upload a picture of anyone and see personal profiles on all matching individuals. Those profiles include all of the matching pictures as well as links to where those pictures appeared. So it’s like a searchable database of billions of photos and IDs, where you start the search with a photo and it returns more photos and information on everyone who is a close enough match. The database of more than 3 billion pictures is described as being far beyond anything ever constructed by the US government or Silicon Valley giants. In addition, Clearview is developing a pair of glasses that will give the wearer a heads-up display of the names and information of anyone they’re looking at in real time.
And while the company is apparently quite tiny and little known to the public, its services have already been used by over 600 law enforcement agencies in the US. But it’s not just law enforcement using these services. We’re also told the software has been licensed to private companies for security purposes, although we aren’t told the names of those companies.
All in all, it’s a pretty troubling company. But it of course gets much worse. It turns out the company is heavily connected to the Republican Party and largely relying on Republicans to promote it to potential clients. The company was co-founded by an Australian, Hoan Ton-That, and Richard Schwartz. Ton-That worked on developing the initial technology and Schwartz was responsible for lining up potential clients. Schwartz is a long-time senior aide to Rudy Giuliani and has quite an extensive Rolodex. Schwartz reportedly met Ton-That in 2016 at a book event at the conservative Manhattan Institute. So it sounds like Ton-That was already networking within right-wing circles when he met Schwartz.
By the end of 2017, the company had its facial recognition project ready to start pitching to clients. The way Ton-That describes it, they were trying to think of any possible client who might be interested in this technology, like parents who want to do a background check on a potential babysitter, or an add-on feature for security cameras. In other words, they plan on eventually releasing this technology to anyone.
One of the people they made their initial pitch to was Paul Nehlen, the former Republican rising star who ran for Paul Ryan’s former House seat but eventually outed himself as a virulent neo-Nazi. Clearview offered Nehlen its services during his campaign for “extreme opposition research”. In other words, they were presumably going to use the database to find all visual records of Nehlen’s opponents and the people working for their campaigns in order to dig up dirt. So this company was started by a bunch of Republicans and one of the first client pitches it made was to a neo-Nazi Republican. It gives us a sense of the politics of this company.
The failed pitch to Nehlen was made in late 2017 and we’re told that soon after that the company got its first round of funding from outside investors. One of those investors was Peter Thiel, who made a $200,000 investment. According to Thiel’s spokesman, “In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” and, “That was Peter’s only contribution; he is not involved in the company.” So Thiel made one of the first investments which was converted to equity, meaning he’s a shareholder now. But we’re told he’s not involved in the company, which sounds like a typical Thiel deception.
Keep in mind that Thiel is in a position to both encourage the handing of large volumes of faces and IDs to the company while also being in a position to massively exploit Clearview’s technology. Thiel co-founded Palantir, which could obviously have extensive uses for this technology, and Thiel also sits on the board of Facebook, where much of the photo and ID information was scraped. When asked about Clearview’s scraping of Facebook data to populate its database, Facebook said the company is reviewing the situation and “will take appropriate action if we find they are violating our rules.” But Facebook had no comment on the fact that Thiel sits on its board and is personally invested in Clearview. According to Ton-That, “A lot of people are doing it,” and, “Facebook knows.”
Other Republican Party connections to the company include Jessica Medeiros Garrison and Brandon Fricke. Medeiros Garrison, the main contact for customers, managed Luther Strange’s Republican campaign for Alabama attorney general, while Fricke, a “growth consultant” for the company, is engaged to right-wing media personality Tomi Lahren. Clearview claims it has also enlisted Democrats to market its products, but we aren’t given any names of those Democrats.
So how does Clearview assuage concerns about the legality of its services? That job falls to Paul D. Clement, a United States solicitor general under President George W. Bush. Clement, a former clerk to Antonin Scalia, holds an interesting distinction for a Republican lawyer. In 2012, he was the lawyer who led the challenge by 26 Republican-led states to repeal Obamacare over its individual mandate provision. That’s something we would expect from a former Bush administration official. But back in October of 2019, Clement was asked by the Supreme Court to defend an Obama-era law after the independence of the head of the Consumer Financial Protection Bureau (CFPB) was challenged by the Trump administration’s Justice Department. The CFPB itself (which is now headed by a Trump appointee) also joined the Justice Department in the lawsuit, leaving no entity to defend the original law that prevents presidents from firing the heads of the CFPB. The CFPB was one of the entities set up by the Obama administration (and designed by Senator Elizabeth Warren) following the financial crisis, so it was guaranteed the Trump administration would oppose it. The fact that it’s dedicated to providing consumer financial protection is the other reason the Trump administration’s opposition was guaranteed. Republicans don’t do consumer protection. The Obama-appointed head of the CFPB, Richard Cordray, resigned in November of 2017, two years early, after Trump and the Republicans made it abundantly clear they wanted to replace him. The Trump Justice Department argued back in March of 2017 that this restriction on the president’s ability to fire the head of the CFPB made it unconstitutional. In September of 2019, the Justice Department asked the Supreme Court to take the case, and the following month Clement — who has argued before the Supreme Court more than 95 times — was invited by the Court to defend the existing structure of the CFPB.
Oh, and Paul D. Clement also happens to be one of the lawyers who successfully argued on behalf of the Republicans in Rucho v. Common Cause, a case that has now constitutionally enshrined hyper-partisan gerrymandering that the federal courts can do nothing about. So that gives us a sense of the importance of having someone like Paul D. Clement soliciting clients for a company like Clearview: while he’s an extremely high-profile and respected lawyer, he’s also a partisan hack. But the kind of hack whose words will carry a lot of weight when it comes to assuring potential clients about the legality of Clearview’s products.
And if that all wasn’t shady enough, the author of the following report shares an anecdote that should raise big red flags about the character of the people behind this company. When the author tested the system on his own photo by asking a friend in law enforcement to run his picture through it, he got back dozens of pictures of himself, including some pictures he didn’t even know existed. But his law enforcement friend was soon contacted by Clearview to ask if he had been speaking to the media. So Clearview is either actively monitoring and doing its own searches on the people run through its system, or it has a system set up to flag ‘troublemakers’ like journalists. Ton-That claims that the reason this search prompted a call from the company is that the system is set up to flag “possible anomalous search behavior” in order to prevent “inappropriate searches.” But after that incident, the reporter found that his results were removed from future searches, which Ton-That dismissed as a “software bug”. So the company appears to be actively monitoring and manipulating search results. As the article notes, since the primary users of Clearview are police agencies at this point, the company can get a detailed list of people who have received the interest of law enforcement simply by looking at the searches run, which is the kind of information that can be potentially abused. It’s an example of why the character of the people behind a firm offering these kinds of services is particularly important, and why the more we learn about this company, the more cause there is for serious concern.
It remains unclear how many clients outside of law enforcement will be allowed to purchase Clearview’s services. But as the article notes, now that Clearview has broken the taboo of offering facial recognition database services like this, it’s just a matter of time before other companies do the same thing. And that’s why Clearview might end up ending privacy as we know it: by setting a really, really bad example, showing the world this service is possible and there’s a market for it:
“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”
An end to privacy as we know it. Everyone will be able to just look at someone and immediately access a database of personal information about them. That’s the dark path Clearview’s technology is sending us down:
And both police officers and Clearview’s own investors predict that its app will eventually be available to the public. And yet if you ask investor David Scalzo about the privacy concerns, he appears to take the stance that it’s simply impossible to ban the use of this technology, whether or not it leads to a dystopian future:
But while Clearview’s investors appear to have no problem at all with blazing the trail of this dystopian post-privacy future, the company itself has taken pains to get as little exposure as possible. It even freaked out when a journalist’s photo was run through the system:
Beyond that, the technology hasn’t even been validated. According to Ton-That, it works about 75 percent of the time, which sounds pretty good until you realize you’re talking about mismatches that could lead to the wrong person being arrested and charged with a crime:
But while the possible misuse of unproven technology by law enforcement is obviously a problem, it’s the fact that the company appears to be run by partisan Republicans that points towards one of the biggest potential sources of abuse. It’s a political opposition research dream tool, and the firm is using Republicans to find clients:
And “extreme opposition research” is one of the services Clearview offered one of its first potential clients. That client happened to be Paul Nehlen, the GOP rising star who saw his political future implode after it became clear he was an open neo-Nazi. That’s the person Clearview offered services to right after the company finished its initial product in late 2017, and those services happened to be “extreme opposition research”. It tells us A LOT about the real intent of the figures behind this company. It’s not just for law enforcement. It’s also a reminder that the company’s willingness to manipulate search results could be very handy for right-wing politicians who would prefer embarrassing pics not be readily available for opponents to find:
And then, shortly after making that offer to Nehlen, Clearview gets its first outside investment, including $200,000 from Peter Thiel that was later converted to equity. So in addition to co-founding Palantir and sitting on the board of Facebook, Thiel owns an undisclosed amount of this company too. And Ton-That claims Facebook is aware that Clearview’s database is heavily populated with data scraped from Facebook:
Beyond that, Clearview hired high-profile Republican lawyer Paul D. Clement to assure clients that the services are legal. Given who Clement is in the legal world, that’s a major legal endorsement:
Oh, and it turns out the FBI and DHS are also trying out Clearview’s services, along with Canadian law enforcement agencies:
It doesn’t sound like federal agencies have a problem with using a database of images that was improperly scraped off of major social media sites. That’s apparently legal and fine. And that’s why Clearview appears to be on track to becoming the ‘Palantir’ of facial recognition companies: a highly secretive company owned by politically connected shady figures that somehow manages to get massive numbers of government clients by offering services that have obvious intelligence applications. And it’s co-owned by Peter Thiel, further solidifying Thiel’s position as the US’s private intelligence oligarch. It’s quite a position for an open fascist like Thiel.
So when services like this end privacy as we know it, sooner than you expect, don’t forget that this was brought to you by Republicans who didn’t want you to know about those services in the first place.
Oh look at that, Facebook just hired a new head of video strategy to run the video division for the “Facebook News” feature it’s creating for 2020. Guess who: Jennifer Williams, an 18-year veteran of Fox News. Surprise!
And Williams isn’t just a Fox News veteran. She was a long-time senior producer of Fox & Friends (from 1997–2009), one of the channel’s most egregious outlets of disinformation. Fox & Friends is bad even by Fox News standards. That’s who is heading up the video section of Facebook’s new News section:
“Thirteen years later, Facebook has reportedly named Jennifer Williams, who was a Fox & Friends senior producer at the time that memo was sent, to head video strategy for the social media giant’s forthcoming Facebook News, NBC News reported Tuesday. Facebook News will serve its billions of users with a dedicated tab including news content curated by a team of journalists from a list of publishers chosen by the company. As Facebook executives plan a shift in the way the nation consumes news that will almost certainly impact the 2020 presidential elections, they are staffing up with an 18-year veteran of the right-wing cable network that effectively serves as President Donald Trump’s personal mouthpiece.”
An 18-year veteran of Fox News. That’s who is going to be ultimately curating the ‘news’ videos served up to Facebook readers. As the article notes, the fact that Facebook decided to make a special ‘news’ section ostensibly managed with journalistic intent, and not just run by an algorithm, meant the company was going to have to get in the business of having humans make active decisions on whether or not news is worthy of being included in the new News section of the site or whether it’s ‘fake news’. So Facebook chose a veteran of the leading purveyor of fake news.
But Jennifer Williams isn’t the only highly questionable figure who is going to be running Facebook’s new News division. The company already hired Campbell Brown to lead the division. And it turns out Brown is close to Betsy DeVos, the far right sister of Erik Prince and Trump’s Education Secretary. As we should expect, Brown has already decided to credential Breitbart.com as one of the news sites that Facebook News will promote:
And now here’s Judd Legum’s Popular.Info piece with more on Campbell Brown and her extensive ties to Betsy DeVos. As the article notes, the publication Brown co-founded, The 74, is largely focused on education news, which got rather awkward after Betsy DeVos became Trump’s education secretary. DeVos calls Brown a “friend” and The 74 was started, in part, with a $200,000 grant from Betsy DeVos’s family foundation. Most of The 74’s articles covering DeVos have been largely laudatory.
Brown is also a member of the board of The American Federation for Children (AFC), a right-wing non-profit started and chaired by DeVos that spends heavily on getting Republicans elected at the state level. The 74 and the AFC co-sponsored a Republican presidential forum in Iowa in 2015.
It’s also worth recalling the recent story describing how the DeVoses and other far right oligarchs associated with the theocratic Council for National Policy (CNP) have been quietly financing the purchase of local and regional radio stations to ensure the explosive growth of regional right-wing talk radio. It’s a reminder that the damage Betsy DeVos is doing to the intellectual status of America isn’t limited to the damage she’s doing to American education.
As another sign of Brown’s editorial leanings, while editor-in-chief of The 74, the publication featured at least 11 pieces from Eric Owens, a Daily Caller editor with a long history of making transphobic attacks on students and teachers. The 74 also appears to really hate Elizabeth Warren. Interestingly, Mark Zuckerberg’s foundation, the Chan Zuckerberg Initiative, donated $600,000 to The 74, describing it as “a non-profit, nonpartisan news site covering education in America.” Zuckerberg has previously expressed his extreme dislike of Warren’s presidential ambitions, describing her as an “existential” threat to the company.
So that’s who Jennifer Williams is going to be reporting to in her new role as the head of video at Facebook News: Campbell Brown, the right-wing friend of Betsy DeVos:
“In 2015, Brown co-founded The 74, which focuses on the public education system, and served as editor-in-chief. Even after joining Facebook in 2017, Brown has maintained an active role in The 74, where she is a member of the board of directors. According to documents filed with the IRS in 2017, Brown dedicated five hours per week — the equivalent of a month-and-a-half of full-time work — working for The 74.”
The head of Facebook’s new News feature co-founded The 74, a publication focused on education, in 2015, and remained on its board of directors even after joining Facebook. That wouldn’t be a huge deal if The 74 were just a bland, non-ideological outlet. But it turns out to have been founded in part with a grant from Betsy DeVos’s family foundation:
And the content of The 74 has a clear right-wing orientation, with articles that describe Elizabeth Warren as “the second coming of Karl Marx”. And it turns out The 74 received a $600,000 donation from none other than Mark Zuckerberg, who openly fears and loathes Warren:
Then there’s the fact that The 74 features writers like Eric Owens, an editor at The Daily Caller. After Brown was hired by Facebook to head up its news division, The Daily Caller became an official fact-checking partner at Facebook:
Oh, and Brown’s team at Facebook ended up selecting Breitbart, which is banned as a citation source for Wikipedia, as one of its 200 “quality” news sources:
So as we can see, the head of Facebook’s new News feature that’s going to roll out some time in 2020 is a close friend of Betsy DeVos and has already made moves to ensure that right-wing garbage sites, which should be banned from Facebook purely for journalistic integrity purposes, will instead be trusted content producers and fact-checkers. And now long-time Fox News veteran Jennifer Williams will be working under Brown heading up the Facebook News video division. Because of course.
The impeachment of Trump appears to be on course for a quick end following the decision of Senate Republicans to not call any witnesses and proceed to an acquittal vote. The ultimate political consequences of acquitting Trump without calling witnesses in the Senate are hard to estimate, but it seems like a pretty sure bet that the Trump team is going to interpret this acquittal as a green light to engage in pretty much any political dirty tricks campaign it can imagine. After all, when Senator Lamar Alexander — one of the holdout Senators who was reportedly on the fence about whether to vote for calling witnesses — finally decided to vote against witnesses late last night, Alexander’s reasoning was that House Democrats had already proven their case and Trump really did what they accused him of doing, but it doesn’t rise to an impeachable offense, so no witnesses were needed. So the Republicans have basically ruled that inviting and then extorting a foreign government to get involved in a US electoral disinformation campaign is acceptable, even if they don’t necessarily think it’s fine. It’s an open invitation for not just every Republican dirty trick imaginable but for foreign government meddling too. The Trump presidency has now become not just the culmination of America’s inundation with disinformation but a validation of it.
So it’s worth noting that, days before this decision by the Senate, the Bulletin of the Atomic Scientists updated the ‘Doomsday Clock’. It’s now 100 seconds from ‘Midnight’, closer than ever. And the explosion of disinformation campaigns and disinformation technology like ‘deep fakes’ that can send a society into turmoil was apparently a big part of their reasoning:
““Humanity continues to face two simultaneous existential dangers—nuclear war and climate change—that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society’s ability to respond,” said the Bulletin of the Atomic Scientists as it moved the Doomsday Clock from two minutes to midnight to 100 seconds to midnight. This shows that they feel the risk of catastrophe is greater than ever — even higher than during the Cold War.”
A greater risk of man-made catastrophe than during the Cold War. This is where we are. The reasons include ‘oldies’ like the risk of nuclear war. But even there the risks are higher (thanks in large part to Trump’s shredding of nuclear arms treaties). And then there’s the risk of what sounds like a ‘Skynet’ scenario involving militaries relying on AI for decision-making and command and control systems:
But it’s the growing threats to the “information ecosphere” that run the risk of damaging our ability to manage virtually every other threat, because disinformation campaigns are already corrupting the decision-making processes needed to address all those other threats:
And that warning about how disinformation threatens our collective ability to deal with ALL OF THE OTHER existential threats is a reminder that systematic disinformation is a kind of meta-existential threat. It literally makes all other existential threats more likely to happen, which arguably makes it the greatest threat of all. If humanity weren’t so susceptible to disinformation this wouldn’t be such a massive threat. But that’s clearly not the case. Disinformation is winning. It really works and is increasingly cheap and easy to deploy, which is why someone like Trump can become president and why the far right has been rising across the globe with one big lie campaign after another. And that’s what the Senate Republicans just rubber-stamped and endorsed: the meta-existential threat of systematically trashing the information ecosphere and the resulting collective insanity.
Yasha Levine has a short new piece about an interesting historical intersection between the US’s rehabilitation of fascists and Nazi collaborators in the post-WWII era and the counterinsurgency origins of the development of the internet. It’s the kind of history that’s long been important but has suddenly gained a new level of importance now that President Trump appears to feel ‘unleashed’ following his impeachment acquittal and willing to use the power of his office to protect his friends and attack his political enemies:
Levine was given a number of declassified US Army Counter Intelligence Corps files on Mykola Lebed. Lebed was one of the many OUN‑B Ukrainian fascist Nazi collaborators who were basically welcomed into the US’s national security complex, and Levine is working on a short biography of him. One particular file on Lebed was from 1947 and mostly illegible, but it did have a clear stamp at the bottom bearing the name Col. W.P. Yarborough. Yarborough turns out to be a central figure in the development of the US Army’s special forces during this period. Beyond that, he was also a leading figure in the US’s counterintelligence operations in the 1960s, and it’s in that context that Yarborough played a significant role in the development of the internet’s predecessor, the ARPANET. Levine covered Yarborough’s role in the development of the ARPANET as a counterinsurgency tool in his book Surveillance Valley. And as he covered in the book, while the original ARPANET’s counterinsurgency applications were used for the war in Vietnam, it was also used to compile a massive, unprecedented computerized database on domestic political opponents of the war and left-wing groups in general.
That’s the main point of Levine’s new piece: the observation that the figure who led the development of what was a cutting-edge domestic surveillance operation primarily targeting left-wing political movements was also involved with the recruitment and utilization of WWII fascists and Nazis for use in the national security apparatus. It’s one of those historical fun facts that highlights how the US’s long-standing ‘anti-communism’ agenda was really an anti-left-wing agenda that included the covert suppression of domestic left-wing movements. Fascists are fine. Anti-war protestors are subversives that need to be surveilled and ultimately neutralized. It’s a prevailing theme throughout the Cold War, exemplified by Yarborough’s career. A career that should serve as a warning now that President Trump appears to feel like he’s been given permission to use the full force of the government to attack his perceived political enemies:
“By the time the 1960s rolled around, Yarborough was regarded as an expert on anti-guerrilla and counterinsurgency warfare. In 1967, while in charge of the U.S. Army’s Intelligence Command, he initiated a massive, illegal domestic counterinsurgency surveillance program inside America that targeted civil rights activists, antiwar protesters, leftwing student groups, and anyone who sympathized with the oppressed.”
A massive ILLEGAL domestic surveillance operation primarily targeting the left. That’s what the first version of the internet was used for under the CONUS Intel project. And it was Yarborough — someone involved with the early Cold War utilization of fascists and Nazis — who led that initiative:
It’s a historical anecdote that’s a big reminder that the use of state power to suppress and minimize left-wing movements and individuals is a significant chapter of the American history that led to where we are today.
It’s worth recalling at this point the interesting story John Loftus had about the whitewashing of Mykola Lebed involving Whitey Bulger. It turns out Lebed was cast as an anti-Nazi fighter in WWII in order to be allowed to get a US visa and become a US asset working for the CIA. That whitewashing was carried out by Dick Sullivan, a US Army attorney operating out of Boston. Lebed was just one of the fascists and Nazis who had his background covered up by Sullivan. Sullivan also happened to be a secret member of the Irish Republican Army (IRA), an allegiance shared by Bulger. Sullivan eventually told Bulger about an FBI informant inside the IRA, whom Bulger subsequently killed (this is discussed by Loftus on side B of FTR#749).
Now here’s a look at that 1971 NY Times report that initially exposed the Army’s CONUS Intel program. As the article describes, while most of the information fed into this database was provided by local police, the FBI, or public sources, the program still involved sending over 1,000 undercover US Army agents to directly gather intelligence. It was only exposed when Senator Sam J. Ervin Jr., Democrat of North Carolina, contended that prominent political figures in Illinois had been under military surveillance since 1968.
The article also describes how then-General Yarborough was replaced as the head of CONUS Intel in August of 1968 by Maj. Gen. Joseph McChristian. After McChristian was briefed on the program he immediately asked his subordinates for ways to cut it back. But McChristian ran into resistance from the “domestic war room” and other government agencies, particularly the Justice Department, which said it needed this domestic intelligence. All in all, the CONUS Intel chapter of American history is a chapter that’s become ominously relevant for the age of ‘Trump unleashed’:
“In the operation, which was ordered ended last year, 1,000 Army agents gathered personal and political information on obscure persons, as well as the prominent, on advocates of violent protest and participants in legitimate political activity, on the National Association for the Advancement of Colored People and the John Birch Society, on the Black Panthers and the Ku Klux Klan, on the Students for a Democratic Society and the Daughters of the American Revolution. The emphasis was on radicals, black militants and dissenters against the war in Vietnam.”
1,000 Army agents collecting domestic intelligence, sometimes posing as members of the groups under surveillance, or as members of the press, or just as random bystanders. It’s a nightmare situation from a constitutional perspective:
The program emerged from the creation of the Army Intelligence Command at Fort Holabird, Md., in 1965, which connected 300 military intelligence field offices across the US:
Then in 1966, following the race riots of 1965 and the first protests against the US war in Vietnam, when federal troops were called in, the Army Intelligence Command instructed those military intelligence offices to start collecting information that might be useful if the Army was called into a city. A side effect of this order was agents making regular visits to campuses and collecting anti-war literature. This resulted in the Counterintelligence Analysis Detachment monitoring expressions of dissent and black militants:
After race riots broke out in Newark and Detroit in 1967, General Yarborough ordered a CONUS Intel communications center known as “Operations IV” to be set up at Fort Holabird, along with a nationwide teletype network that would feed information to it. It was this early telecommunication infrastructure, allowing for the collection of information from around the country, that we now know was an early incarnation of the internet:
Following a massive anti-war march on the Pentagon in 1967, a review of the role of federal troops in civil disturbances was set up, leading to “city books” plans that detailed how a military commander might need to move troops into an urban area:
Another goal of the Counterintelligence Analysis Detachment was predicting when and where a civil disturbance might break out. Then MLK was assassinated and protests broke out in 100 cities, making clear that predicting when and where civil disturbances would break out might not be feasible:
In 1968, Under Secretary of the Army David E. McGiffert ordered the Army to be prepared to send 10,000 troops on short notice to 25 American cities:
Later that year, RFK was assassinated and Congress passed a resolution giving the Secret Service the authority to draw on the Army to protect national political candidates. On June 8, 1968, Paul Nitze, the Deputy Secretary of Defense, signed an order that gave formal instructions to provide the Pentagon with all essential intelligence data on civil disturbances. This led to the creation of computer databases of civil disturbances and information on individuals of interest:
Also in June of 1968, the Directorate for Civil Disturbance Planning and Operations was set up at the Pentagon. This became known as the “domestic war room”. By the end of 1968, this whole operation gave Army intelligence the information it needed to predict how many protestors were going to show up for a planned counter-demonstration at Nixon’s inauguration and what they were planning on doing:
But it was also around this time, right when this vast surveillance bureaucracy was set up and delivering results, that General Yarborough was replaced by Maj. Gen. Joseph A. McChristian. After being briefed on the operation, McChristian ordered that ways be found to cut back on this vast domestic surveillance operation. But the “domestic war room” and Justice Department pushed back, arguing they needed this information:
Finally, in February of 1969, Under Secretary McGiffert wondered whether the Army might be exceeding its authority and ordered that covert operations end:
But around the same time McGiffert ordered the end of the covert operations, the Army general counsel explored with the Deputy Attorney General Richard G. Kleindienst whether or not the Justice Department could take over these intelligence gathering operations. Kleindienst asserted that the Justice Department lacked the manpower:
And it wasn’t just protestors and dissidents who were targeted for surveillance. Senator Sam J. Ervin Jr., Democrat of North Carolina, charged that CONUS Intel was also spying on politicians:
And that’s what we learned about this operation back in 1971. So were the lessons of this experience actually learned by the American people? We’ll probably find out as we see this ‘Trump unleashed’ period of Trump’s presidency unfold. But it’s pretty clear that all of the pieces are in place for a major domestic operation that utilizes the full power of the state to attack the perceived political enemies of the White House. Trump himself has now made this clear.
Also note how the assassinations of MLK and RFK played into the justification for this domestic military intelligence operation. The mass riots that broke out in cities across the country only fueled the call for the capability of sending thousands of troops into a large number of cities simultaneously on short order, and RFK’s assassination resulted in Congress allowing the Secret Service to call in federal troops to protect candidates. And yet both the MLK and RFK assassinations had government fingerprints all over them. See AFA #46 for much more on the government role in the assassination of MLK (part 2 includes references to Yarborough and the domestic military intelligence gathering going on at this time) and FTR #789 for more on RFK’s assassination and how the US’s progressive leadership was being systematically killed off during this period. It’s a huge and dark aspect of the story of CONUS Intel: it was a military intelligence operation that didn’t just involve gathering massive amounts of data on domestic dissidents. It also involved planning for federal troops to move into cities and took place during a period when America’s left-wing leaders were getting killed off by right-wing forces operating within and outside the government (i.e. the real ‘Deep State’). So now that President Trump has made it clear that he feels emboldened to do pretty much whatever he wants to do against his opponents, now is probably a good time for Americans to revisit America’s long history of coddling fascists while overtly and covertly using the full power of the national security state for domestic political agendas. Domestic political agendas that were virtually always targeting progressives.
Here’s a pair of stories that highlight one of the Big Data areas of information Palantir is given access to by the US government: massive volumes of IRS information.
First, here’s an article from December 2018 about how the IRS has turned to Palantir’s AI to find signs of tax fraud. The IRS had signed a seven-year, $99 million contract with Palantir that September. The contract lets the IRS search for tax cheats using Palantir’s software by mining tax returns, bank reports, property records and even social media posts. As the article notes, part of the motive for relying on Palantir for this work is that the IRS has seen its staff shrink so much in recent years, with over $1 billion in cuts to the IRS budget since 2010. The Criminal Investigation division has lost around 150 agents per year as a result of these cuts. These IRS budget cuts, of course, are the work of the Republicans. As a result, AI and machine learning approaches to finding criminal activity are now seen as necessary for the IRS to do its job with fewer staff and resources. It sounds like the Palantir systems have access to the IRS’s Compliance Data Warehouse, which has 40 data sets on taxpayers stretching back more than 30 years.
Now, the idea of using AI and machine learning in the IRS’s criminal division seems like a very reasonable approach in general. But this isn’t the IRS implementing these approaches. This is the IRS handing over vast volumes of data to Palantir and using Palantir’s tools to do the analysis. Which implies Palantir has access to these IRS databases and, in turn, implicit control over which potential cases get flagged for review by the IRS. So the IRS’s budget and staff get slashed and the result is the effective privatization of the IRS’s crime detection capabilities. And the company that is providing these crime detection capabilities is owned by Peter Thiel, a top Republican donor and one of the biggest anti-tax fascists in the world:
“The information that Egaas and his colleague Benjamin Herndon, the IRS’ chief analytics officer, shared is the first major glimpse of how the revenue agency is using advanced technology since it signed a seven-year, $99 million deal with Palantir Technologies in September to sniff out tax cheats by mining data from tax returns, bank reports, property records and even social media posts.”
After $1 billion in IRS cuts over the past eight years, the IRS signs a seven-year, $99 million deal with Palantir to help make up for the lost manpower. It’s a pretty nice deal for Palantir, which now has access to more than three decades of information from the IRS’s Compliance Data Warehouse:
And we’re assured that any vendors the IRS contracts with to carry out its tasks will have to go through the same security and compliance checks that IRS staff go through because privacy is paramount. So don’t worry about giving even more information to Palantir because its employees given access to this data have to go through security checks. That’s the level of assurance we’re getting about handing over this vast amount of financial data to a company run by a libertarian fascist:
This is also a good time to recall the story about JP Morgan hiring Palantir to provide AI oversight of JP Morgan’s employees. It turns out the JP Morgan security officer who was given access to Palantir’s observation systems, Peter Cavicchia, ‘went rogue’ and started spying on people all over the company, including the executives. Cavicchia had a team of Palantir employees working for him and unprecedented access to the bank’s internal information, like emails, and the Palantir system had no real limits. Cavicchia went wild spying on people at the bank, resulting in JP Morgan curtailing its use of Palantir’s systems. That’s the kind of company that’s being trusted with these databases of US tax records. And keep in mind that there’s nothing stopping Palantir from combining the information it gets from the IRS with the financial information it’s getting from the banks too. It’s literally positioned to become the leading Big Data private repository of sensitive information and it’s run by a Trump-loving fascist.
Now here’s an example, ironically, of an IRS worker who was just sentenced to five years probation for leaking an IRS “suspicious activity report”. The IRS analyst, John Fry, was charged with pulling a “suspicious activity report” related to President Trump’s personal attorney, Michael Cohen. Fry grabbed the report from a confidential law enforcement database and leaked it to Stormy Daniels’s attorney, Michael Avenatti, in May of 2018. Fry grabbed the reports from the Palantir database used by the IRS Criminal Investigation division. It’s an example of the kind of potentially politically powerful information Palantir was given access to with its IRS contract:
“Fry has worked for the IRS since 2008 and was working in the agency’s San Francisco office as of February last year. As an IRS analyst, he had access to various law enforcement databases, including the Palantir database used by the IRS Criminal Investigation division to collect investigative data from multiple sources, according to a criminal complaint filed in February 2019.”
Yep, as an IRS analyst, Fry had access to various law enforcement databases, including the Palantir database used by the IRS Criminal Investigation division to collect investigative data from multiple sources. IRS analysts have access to those databases and now Palantir employees have access too thanks to these kinds of contracts with the IRS. And Fry tried to access even more reports in a separate criminal database but those reports were restricted. It raises the question of whether or not that separate restricted database was one of the databases maintained by Palantir. Because if it was maintained by Palantir we should keep in mind that Palantir’s engineers presumably have access to those restricted files even if IRS agents like Fry don’t. It’s one of the caveats with the assurances we get that the employees of vendors like Palantir who are given access to these databases are going to go through security checks like government employees. Those Palantir employees might effectively have access to ALL of the information, including files their government employee counterparts can’t necessarily access, so if a Palantir employee ‘goes rogue’ the damage they could do is probably far greater than an IRS or other government employee going rogue:
Now, in this case, it was an IRS employee, not a Palantir employee, who did the leaking. But we have no choice about giving IRS employees access to this information. They’re supposed to have access to it and the risk of leaks like this is an unavoidable risk that comes with the territory. But the risk of Palantir employees abusing this kind of information is a completely avoidable risk. It’s a choice to outsource these AI capabilities to Palantir. There’s no compelling reason to outsource these giant sensitive data operations. Yes, it would be more expensive for the IRS to develop these kinds of AI capabilities on its own, but that higher cost comes with the benefit of not handing over giant databases of sensitive information to private companies. At this point, Palantir is the AI/machine learning outsourcing entity of choice for the US government. It has the systems set up to incorporate new clients and teams trained to carry out the work. And that was a choice. There’s no reason there couldn’t have been a government agency set up to provide these services to other government agencies like the IRS. We could have limited access to these vast databases to government employees, but thanks to the religion of privatization that dominates the US government, Palantir was tapped as a Big Data/AI private outsourcing entity that the US government could trust, and now it has access to probably more information on individual Americans than any other single entity on the planet. If the US government set out to create a privatized version of J. Edgar Hoover’s blackmail operation it couldn’t have done a better job than putting Peter Thiel in the position he’s in today with Palantir.
And that’s perhaps the biggest lesson from this to keep in mind: While granting access to these vast troves of government databases to Palantir employees is obviously problematic, there’s one particular individual at Palantir that we need to be extra concerned about having access to this information because he’s a fascist with insatiable personal ambition and appears to be amoral and more than willing to abuse such powers if it suits his personal goals. And he’s not an employee. He’s the owner.
Here’s a disturbing update on the bureaucratic maneuverings involving the US Undersecretary of Defense for Research and Engineering, a leading role for developing next-generation weapon systems and technologies. First, recall how former NASA administrator Mike Griffin was appointed acting Undersecretary of Defense for Research and Engineering with an agenda of overhauling and streamlining the military’s defense technology procurement processes, with the goal of facilitating the rapid development of next-generation technologies by utilizing existing commercially available technologies whenever possible and reducing delays caused by cost/risk assessments. Griffin was also a major advocate of the creation of the Space Development Agency (‘Space Force’), a favorite pet project of President Trump. Also recall how Griffin appeared to be behind the push to end the Pentagon’s contract with the JASON group, which was part of his larger agenda of minimizing the review process for approving the development of new platforms.
So Griffin had major visions for overhauling how the US national security state makes decisions on which hi-tech projects to invest in with an eye on speeding the process up by relying more on commercial technology and dramatically limiting the number of people involved with reviewing the proposals. And while it remains to be seen whether or not Griffin’s vision will be fully realized, we do now know that it won’t be Griffin who completes this vision because he just announced his resignation a few weeks ago, along with his deputy Lisa Porter. The news came a day after the House Armed Services Committee recommended removing the Missile Defense Agency from Griffin’s control. So the Pentagon’s two top technology experts are set to be replaced:
“In his role as R&E head, Griffin had the lead on developing new capabilities for the department, such as hypersonic weapons, directed energy and a variety of space-based programs. Included in his portfolio were the Missile Defense Agency and the Defense Advanced Research Projects Agency.”
As we can see, there’s going to be a new vision for the Pentagon’s approach to developing new weapons, along with all the other projects being developed by DARPA with dual-use military/commercial applications. And note how Griffin’s deputy, Lisa Porter, previously served as executive vice president and director of the CIA’s private investment company, In-Q-Tel Labs. It’s a reflection of how Griffin’s vision of relying more and more on readily available commercial technology was likely going to involve more national security state investments in the private sector via companies like In-Q-Tel:
And since it’s the Trump administration that’s going to be choosing Griffin’s replacement in the middle of this push to cut reviews and incorporate more off-the-shelf commercial technology into the development of the Pentagon’s next-generation systems, we have to wonder who the Trump administration is going to find, especially given that we’re months away from an election. And we just got our answer: Mike Griffin, who for all his faults was actually technically extremely competent, is going to be replaced by the White House’s chief technology officer Michael Kratsios. So is Kratsios qualified for a position like this? Well, he’s the White House’s chief technology officer, so one might assume he’s well qualified. But as we’ll see, it turns out Kratsios has no technical education and his primary qualification is that he worked for Peter Thiel’s investment company, Clarium Capital, and ended up becoming Thiel’s chief of staff. So the main qualification of the next Undersecretary of Defense for Research and Engineering is whatever experience he acquired as an unqualified White House chief technology officer. Kratsios will continue serving as the White House’s chief technology officer. It’s the kind of situation that suggests Kratsios’s real qualifications are largely going to be limited to his enthusiasm for steering more defense spending towards Thiel’s companies like Palantir:
“Kratsios graduated from Princeton with a bachelor’s degree in political science and a focus on ancient Greek democracy. The person he’s replacing, Michael Griffin, holds a Ph.D. in aerospace engineering and served as a NASA administrator. Indeed, Kratsios will be less academically credentialled than most of the program-managers he oversees. So how did he get here?”
Yes, how exactly did Kratsios get the job of the Pentagon’s top technology officer despite having no discernible technology expertise? He knows the right people. Specifically, Peter Thiel:
But note how part of the sales pitch for Kratsios getting this position is that he knows people in Silicon Valley and that will help facilitate relationships between the Pentagon and Silicon Valley firms. But as one former official described it, it’s not like Kratsios is actually widely liked in Silicon Valley, in part due to his ties to Thiel and the fact that Thiel has created so many enemies. But Kratsios’s selection is unambiguously good for “the Peter Thiel portion of Silicon Valley.” And obviously obscenely good news for Thiel, who now has even more power than ever. If you’re a Silicon Valley firm that wants to do business with the Pentagon you had better not piss off Thiel:
Of course, the kind of power wielded by Kratsios is only going to last for as long as he’s the acting Undersecretary, and that may not last long beyond the first months of 2021 if Trump isn’t reelected. But any contracts set up could potentially last much longer. In other words, for Kratsios and Thiel to fully take advantage of this moment they are going to have to move fast and get as many long-term Pentagon contracts set up with Thiel-affiliated firms as possible.
So while the ascension of Mike Griffin to the Undersecretary of Defense for Research and Engineering served as a warning that the defense acquisition process was going to be dramatically sped up, it’s Griffin’s resignation that’s serving as a warning that this process could be kicked into overdrive.
And in probably related news, guess which company just announced it’s going to be doing an IPO this year: yep, Palantir. It just announced it’s filed the IPO papers. So that’s going to be interesting to watch, especially with respect to how any new contracts that get announced this year might impact Palantir’s IPO valuation. But as the following article describes, part of what makes this IPO announcement interesting is that it means Palantir is going to have to be more open with the public than before about the types of contracts it has with clients. Clients that include governments:
“Palantir said this week that it confidentially filed paperwork with the US Securities and Exchange Commission to go public. As with any publicly-traded company, Palantir would need to disclose more of its financial history and open itself to investor scrutiny. And as with any tech company of its size — with a roughly $20 billion valuation — its initial public offering would likely be a high-profile event.”
It’s quite a convergence of events: Thiel gets Kratsios installed in the perfect position to shovel all sorts of Pentagon contracts at Palantir right at the same time Palantir files for an IPO. And there’s potentially just months left in the Trump administration so they have to move fast. And yet in order for this IPO to happen Palantir needs to open itself up to investor scrutiny to a degree it’s never had to deal with before. What kind of horrible secrets will be revealed? And will those horrible secrets actually harm Palantir’s perceived valuation? It’s a defense contractor, after all. Horrible secrets might be seen as an investor perk if they’re profitable horrible secrets. So there’s plenty of questions raised by the prospect of a Palantir IPO taking place right when Thiel’s chief of staff becomes the new Pentagon head of technology procurement, including the question of what horrible company Peter Thiel is going to start next with all that new money he’s about to make.
Here’s a ‘good news’/‘bad news’ pair of stories related to the January 6 storming of the Capitol and the subsequent investigation into the identities of people in that insurrectionary mob:
First, here’s a story from a couple of weeks ago that points towards one of the good news aspects of this story. The story is about a wrongful arrest lawsuit emerging from a case where facial recognition AI was used to identify a suspect. The man suing for wrongful arrest, Nijeer Parks, is Black, and as the article notes, studies have repeatedly shown that facial recognition software does not perform as well on Black and Asian faces. In February 2019, Nijeer Parks was accused of shoplifting candy and trying to hit a police officer with a car at a Hampton Inn in New Jersey.
Curiously, there’s also a mystery as to who conducted the facial recognition search that wrongfully fingered Parks. Parks’s initial lawsuit accused Clearview AI of running the face match search for the New Jersey police. Recall how Clearview is the extremely controversial private facial recognition company that appears to have scraped virtually all of the publicly available internet to amass a vast database of billions of photographs. Also recall how Clearview’s investors include Peter Thiel and the firm appears to have close ties to the far right and the Republican Party.
But there’s a question as to whether or not Clearview’s tools were actually used in Parks’s case. His attorney said he concluded that Clearview AI did the match based on previous reports that New Jersey law enforcement was already working with Clearview to provide these services. But Clearview denies that its software was used for the match. And according to the police report of Parks’s arrest, the match was to a license photo, which would reside in a government database that Clearview AI technically cannot access. And yet the state agencies asked to run the face recognition search — the New York State Intelligence Center and New Jersey’s Regional Operations Intelligence Center — said they did not make the match. What’s going on here? Is Clearview AI getting access to state databases of license photo information it’s not supposed to have? We don’t know at this point. But it’s becoming increasingly clear that Clearview AI’s relationship with US law enforcement is deepening.
So what’s the good news here? Well, the good news is that, should AI facial recognition technology be used on the pro-Trump mob, at least it was a predominantly white mob. And that means the existing facial recognition algorithms should probably be a lot more accurate, making it much less likely that innocent people will get erroneously accused and face the same kind of nightmare Nijeer Parks faced based on bad facial recognition software:
“Facial recognition technology is known to have flaws. In 2019, a national study of over 100 facial recognition algorithms found that they did not work as well on Black and Asian faces. Two other Black men — Robert Williams and Michael Oliver, who both live in the Detroit area — were also arrested for crimes they did not commit based on bad facial recognition matches. Like Mr. Parks, Mr. Oliver sued over the wrongful arrest.”
Over and over, studies keep finding that facial recognition software doesn’t work as well on non-white faces. And that means there’s inevitably going to be a lot more lawsuits over facial recognition-driven wrongful arrests. In the case of Nijeer Parks, it appeared Clearview AI was the company that generated the wrong match, but as the case has unfolded, who actually created the match remains an open question. The wrong match appears to have come from nowhere:
Did Clearview get improper access to the New Jersey license database? Giving the company access would have obvious utility to New Jersey’s law enforcement so it’s not inconceivable that such access was given. But if so, it’s a sign of how deep Clearview AI’s relationship is getting with government agencies...something perhaps not unexpected given Peter Thiel’s investment in the company and the obscenely close relationship between government agencies and Thiel’s Palantir.
So will Clearview AI’s tools be used to help identify the individuals who participated in the raid on the Capitol? We don’t know but it certainly seems like a possibility. And if so, at least we shouldn’t have to be as worried about mismatches.
But even if we assume the issue of accidental mismatches will be largely addressed when the matching is done on an overwhelmingly white crowd, there’s another form of mismatch that we should probably keep in mind: missed matches that arise from the exceedingly close relationship between Clearview AI and the far right. The kind of relationship that should raise serious questions about whether or not Clearview AI can be trusted not to run cover for its fellow far right allies. As the following BuzzFeed piece from back in March describes, Clearview AI has different “company type” categories of users for its search database: “Government”, “Bank”, “Investor”, etc. But there’s also a “Friend” category. And based on the documents BuzzFeed received, those “Friends” include companies like SHW Partners LLC, a company founded by top Trump campaign official Jason Miller. And guess who turned out to be one of Clearview’s “test users”: Alt Right arch-troll Charles C. Johnson. So while we haven’t yet seen any indication that Clearview AI is going to be used to identify the insurrectionary mob, and we haven’t seen any indication that Clearview AI is willing to run cover for far right suspects, we’ve certainly seen strong indications that Clearview is being used by law enforcement agencies and that the company has disturbingly close ties to the far right:
After reading coverage about a new facial recognition tool, James deduced that Johnson had identified him using Clearview AI, a secretive company that’s claimed to have scraped more than 3 billion photos from social media and the web. Last month, a BuzzFeed News investigation found that people at more than 2,200 organizations have tried Clearview’s facial recognition technology, including federal entities such as Immigration and Customs Enforcement, the FBI, and private companies like Macy’s, the NBA, and Bank of America.
Clearview AI previously claimed its tools were exclusively for law enforcement. But BuzzFeed found more than 2,200 entities had used the tool, including a disturbing number of entities and figures associated with the Republican Party and the Trump White House. Jason Miller’s company was even given “Friend” status:
And then there’s the fact that Charles C. Johnson appears to be a test user with full access to just run searches whenever he wants:
Keep in mind that someone like Charles C. Johnson probably personally knows a number of the people who stormed the Capitol. That’s why his ties to Clearview are potentially so significant in this case.
So the good news is that contemporary facial recognition software shouldn’t produce too many racially biased inaccurate matches if applied to Trump’s Capitol militia. The bad news is that the rioters are literally going to be ‘friends of friends’ of the company that’s probably doing the matching.
Worse than Watergate? It’s one of the meta questions for the Trump era that is once again being asked following the growing revelations about the Trump Department of Justice spying on not just Democratic members of Congress but also their family members in a quest to find government leakers. It’s the kind of story that raises questions about who wasn’t being spied on by the Trump administration. So with questions about secret government spying once again being asked, it’s worth keeping in mind one of the contemporary contexts of secret government spying operations, in particular spying by Republican administrations: much of the US’s national security analytical capabilities are being carried out by private entities like Palantir. And since Palantir’s services to clients include the identification of leakers, we can’t rule out the possibility that the Trump administration wasn’t just tasking the Department of Justice with its leak hunt. A private entity like Palantir would almost be ideal for a scandalous operation of that nature, especially for the Trump administration, which benefited from an extremely close political alliance between Trump and Palantir co-founder Peter Thiel.
So was Palantir at all involved in this latest ‘worse than Watergate’-level Trump scandal? We have no idea. More importantly, we have no idea if the question is even being asked by investigators. But as the following 2019 piece in Vice makes clear, Palantir was definitely interested in offering leak-hunting services, the kind of service that was almost ideal for working with the Palantir Big Data model of knowing as much as possible about as many people as possible:
“Through its presence on YouTube, Praescient explains its commitment to “applying cutting edge analytic technologies and methodologies to support government and commercial clients.” For example, in one video, the company demonstrates how an organization can use Palantir’s software to find out if one of its employees leaked confidential information to a blogger.”
While we don’t have any direct evidence the Trump administration utilized Palantir’s leak-hunting services, it seems highly likely the Trump administration was at least aware such services existed. Which raises the question of whether or not the US government was already utilizing these leak-hunting services before this scandal even started. The US government is a major Palantir client and helped start the company in the first place, after all. In that context, it would almost be surprising if these services weren’t being utilized by US agencies:
And that’s why one of the big questions surrounding this story is whether or not questions about Palantir’s potential involvement are being asked at all. Palantir is an obvious suspect for any Trump-related Big Data abuse scandal. Perhaps the obvious suspect. And yet the US government’s relationship with Palantir is also obviously a highly sensitive topic and a large number of people both inside and outside the US national security state probably don’t want to see major public scrutiny of that relationship. For example, it turns out Joe Biden’s current Director of National Intelligence, Avril Haines, was a Palantir consultant from July 5, 2017 to June 23, 2020, placing her at the company during the period of this newly discovered Trump administration spying.
So what was Haines doing at Palantir during this period? Well, here’s where it starts looking bad. Because as the following article describes, Haines scrubbed her Palantir work from her public biography shortly after being selected for a potential Biden transition team in the summer of 2020. It’s not a great look.
But another part of the reason the selection of Haines as a national security figure for the Biden administration raised the ire of so many on the Left was the role she played in investigating the Bush administration’s War on Terror torture interrogation programs and the Obama administration’s drone warfare programs. The way critics see it, Haines effectively shielded the CIA from meaningful repercussions over the role it played in the torture and normalized the drone program. She also voiced her support for former CIA director Gina Haspel in 2018 despite the role Haspel played in formulating those torture programs. Haines’s defenders view these as nitpicky criticisms of someone who successfully reined in US drone warfare policies and pressed for maximal disclosures in the torture report.
So Haines is a rather controversial figure outside of her work for Palantir. But it’s also not hard to imagine why Palantir would have been very interested in hiring her. Haines has the crucial experience of legally vetting intelligence programs, something that would obviously be an invaluable skill set for a company like Palantir. And that brings us to Haines’s answer as to what it was she was doing at Palantir: according to Haines, she was mostly just focused on diversity development and mentoring the careers of the young women working there. That was her role for nearly three years. Diversity training.
Sure, it’s possible Palantir hired Haines primarily for diversity training for three years and the company simply ignored her invaluable experience vetting intelligence programs. But is that a realistic answer? Of course not. It completely smacks of a cover-up. Now, the fact that Haines doesn’t want to talk about what she actually did at Palantir doesn’t mean she was involved with a ‘Worse than Watergate’ Trump administration illegal domestic spying operation. But it does suggest it’s going to be harder than it should be to get answers about what role Palantir may have played in this latest scandal:
“After the Obama administration ended, Haines took several academic and consulting positions. One of them was with Palantir, the data firm allied with Trump that, among other things, aided ICE in rounding up undocumented immigrants. According to Palantir, Haines consulted on promoting diversity within the company’s hiring from July 5, 2017 to June 23, shortly after her position with the Biden transition was announced. As The Intercept first reported, Palantir quickly disappeared from her Brookings Institution biography, smacking of a whitewash. Brookings told The Daily Beast that Haines’ office had requested an update scrubbed of non-active affiliations broader than Palantir. A Biden transition official said Haines removed several affiliations from her bio, not just Palantir, after ending those affiliations as part of her onboarding to the transition.”
All of a sudden her three years of work at Palantir disappeared from her Brookings Institution biography. It’s not hard to imagine reasons for this. Palantir is a scandalous company, especially for a putative Democratic administration, with or without a spying scandal. But it’s also not hard to imagine that the work Haines actually did for Palantir is the kind of work she really doesn’t want to talk about, which is why her claims of focusing on diversity and inclusion ring hollow. Why scrub your diversity and inclusion work?
We’ll see if any questions about potential roles Palantir may have played in the Trump administration’s domestic spying activities actually end up getting asked. It’s unlikely. But if those questions do end up getting asked it will be interesting to learn more about the diversity and inclusion training being done at one of the world’s leading fascist-owned Big Data NSA-for-hire service providers.
This article discusses how Palantir, the US software developer for US intelligence agencies (funded by Peter Thiel), signed a secretive contract with the Greek government and held secretive talks with European Commission President Ursula von der Leyen (originally from Germany) as well as with the EU’s then competition commissioner, Margrethe Vestager, who is now in charge of making the EU fit for the digital age. The article raises concerns about violations of the EU’s data protection laws, including Palantir’s access to Europol data, investigations and witness testimony.
Palantir’s software “Gotham” has been used by intelligence services in the UK, the Netherlands, Denmark and France and was built for investigative analysis. Some Palantir engineers call what it does “needle-in-haystack” analysis that agencies can use to look for bad actors hiding in complex networks.
Vendors also claim the software can predict crime, but its accuracy is controversial and has not been disclosed. Critics warn of an imbalance of power and knowledge over data use between software firms and the public interest: private power over public processes is growing exponentially with access to data and talent.
Palantir is also getting in on the ground floor of GAIA‑X, a new EU initiative setting cloud-computing rules.
Reading between the lines of the article, Palantir markets its software for intelligence gathering, but it is likely an espionage tool used to acquire data on individuals for political manipulation.
Important connections to note: Palantir’s CEO, Alex Karp, studied in Germany at Frankfurt University under the influential philosopher Jürgen Habermas. Michael Kratsios was chief technology adviser to then-president Donald Trump. Kratsios joined the White House from a role as chief of staff to Peter Thiel, the billionaire Silicon Valley tech investor, founder of Palantir and PayPal, and a key early investor in Facebook.
The Guardian, April 2, 2021
Seeing stones: pandemic reveals Palantir’s troubling reach in Europe
Covid has given Peter Thiel’s secretive US tech company new opportunities to operate in Europe in ways some campaigners find worrying
by Daniel Howden, Apostolis Fotiadis, Ludek Stavinoha, Ben Holst.
https://www.theguardian.com/world/2021/apr/02/seeing-stones-pandemic-reveals-palantirs-troubling-reach-in-europe?CMP=Share_iOSApp_Other
24 March 2020 will be remembered by some for the news that Prince Charles tested positive for Covid and was isolating in Scotland. In Athens it was memorable as the day the traffic went silent. Twenty-four hours into a hard lockdown, Greeks were acclimatising to a new reality in which they had to send an SMS to the government in order to leave the house. As well as millions of text messages, the Greek government faced extraordinary dilemmas. The European Union’s most vulnerable economy, its oldest population along with Italy, and one of its weakest health systems faced the first wave of a pandemic that overwhelmed richer countries with fewer pensioners and stronger health provision. The carnage in Italy loomed large across the Adriatic.
One Greek who did go into the office that day was Kyriakos Pierrakakis, the minister for digital transformation, whose signature was inked in blue on an agreement with the US technology company, Palantir. The deal, which would not be revealed to the public for another nine months, gave one of the world’s most controversial tech companies access to vast amounts of personal data while offering its software to help Greece weather the Covid storm. The zero-cost agreement was not registered on the public procurement system, nor did the Greek government carry out a data impact assessment – the mandated check to see whether an agreement might violate privacy laws.
The questions that emerge in pandemic Greece echo those from across Europe during Covid and show Palantir extending into sectors from health to policing, aviation to commerce and even academia. A months-long joint investigation by the Guardian, Lighthouse Reports and Der Spiegel used freedom of information laws, official correspondence, confidential sources and reporting in multiple countries to piece together the European activities of one of the most secretive companies in the world. The findings raise serious questions over the way public agencies work with Palantir and whether its software can work within the bounds of European laws in the sensitive areas where it is being used, or perform in the way the company promises.
Greece was not the only country tempted by a Covid-related free trial. Palantir was already embedded in the NHS, where a no-bid contract valued at £1 was only revealed after data privacy campaigners threatened to take the UK government to court. When that trial period was over the cost of continuing with Palantir came in at £24m.
The company has also been contracted as part of the Netherlands’ Covid response and pitched at least four other European countries, as well as a clutch of EU agencies. The Palantir one-pager that Germany’s health ministry released after a freedom of information request described Europe as the company’s “focus of activities”.
Founded in California in 2003, Palantir may not have been cold-calling around European governments. It has, at times, had a uniquely powerful business development ally in the form of the US government.
On 23 March, the EU’s Centre for Disease Control (ECDC) received an email from their counterparts at the US CDC, extolling their work with Palantir and saying the company had asked for an introduction.
Palantir said it was normal practice for some of its “government customers to serve as reference for other prospective customers”. It said the ECDC turned down its invitation “out of concern of a risk of the contact being perceived as prejudicing ECDC’s independence”.
PHOTO CAPTION: A Palantir banner outside the New York Stock Exchange on the day of its initial public offering on 30 September, 2020. Photograph: Andrew Kelly/Reuters
The Greek government has declined to say how it was introduced to Palantir. But there were senior-level links between Palantir, the Trump administration and the Greek government. The US ambassador to Greece, Geoffrey Pyatt, has spoken publicly of the contacts between Pierrakakis and Michael Kratsios, a Greek-American and chief technology adviser to then-president, Donald Trump. Kratsios joined the White House from a role as chief of staff to Peter Thiel, the billionaire Silicon Valley tech investor and founder of Palantir.
When news of Greece’s relationship with Palantir was disclosed, it was not by government officials or local media but by ambassador Pyatt. A teleconference followed in December between Greece’s prime minister, Kyriakos Mitsotakis, and Palantir CEO Alex Karp, where the latter spoke of “deepening cooperation” between them.
Journalists who asked for a copy of the agreement were refused and it took opposition MPs to force disclosure via parliament. The tone then abruptly changed.
Eleftherios Chelioudakis, a data protection lawyer and member of digital rights group Homo Digitalis, was among the first people to read the two-page document and was stunned by what he found. It appeared to give Palantir phenomenal access to data of exactly the scale and sensitivity that would seem to require an impact assessment. Worse, a revision of the agreement one week after the first deleted any reference to the need to “pseudonymise” the data – to prevent it being relatable to specific individuals. This appears to be in breach of the General Data Protection Regulation (GDPR), the EU law in place since 2018 that governs how the personal information of people living in the EU can be collected and processed. Palantir says that, to its knowledge, processing was limited to “open-source pandemic and high-level Greek state-owned demographic data directly relevant to managing the Covid-19 crisis”.
PHOTO CAPTION: The Greek prime minister, Kyriakos Mitsotakis (centre), and the minister of digital governance, Kyriakos Pierrakakis (left), chat with the US ambassador to Greece, Geoffrey Pyatt (right), in Thessaloniki, Greece, in September 2019. Photograph: Kostas Tsironis/EPA
The Greek government has denied sharing patient data with Palantir, claiming that the software was used to give the prime minister a dashboard summarising key data during the pandemic. However, the contract, seen by the Guardian, specifically refers to categories of data that can be processed and includes personal data. It also includes a clause that has come to be known as an “improvement clause”. These clauses, identified in the rare examples of Palantir contracts released in answer to freedom of information requests, have been studied by Privacy International, a privacy watchdog in the UK. “The improvement clauses in Palantir’s contracts, together with the lack of transparency, are concerning because it enables Palantir to improve its products based on its customers’ use of the Palantir products,” said Privacy International’s Caitlin Bishop.
The company rejects this reading of their activities and states: “Palantir does not train algorithms on customer data for Palantir’s own benefit or to commercialise and sell to Palantir’s other customers.”
“We do not collect, mine, or sell personal data from or for our customers,” it said, adding: “Palantir does not use its customers’ data to build, deploy, transfer, resell, or repurpose machine learning or artificial intelligence models or ‘algorithms’ to other customers.”
Greece’s data protection authority has since launched an investigation. The government says it has ended cooperation with Palantir and that all data has been deleted.
Lord of the Rings mystique
Even by the standards of Silicon Valley tech companies, Palantir has been an outlier in creating a mythology around itself. The name is taken from the powerful and perilous “seeing stones” in Tolkien’s Lord of the Rings. Its leadership often claims the mantle of defenders of the western realm. Early employees cast themselves as brave hobbits and one of Thiel’s co-founders wrote about his departure from the company in a post entitled “leaving the Shire”.
But Palantir polarised opinion in the US before the backlash against big tech. Its critics do not focus on the fortune its founder Thiel made with PayPal or as an early investor in Facebook but on his support for Trump. Palantir has faced protests in the US over its role in facilitating the Trump administration’s mass deportation of undocumented migrants through its contract with US immigration enforcement agency ICE.
Palantir was also reported to have been involved in discussions over a campaign of disinformation and cyberattacks directed against WikiLeaks and journalists such as Glenn Greenwald. It later insisted that the project was never put into effect and said its association with smear tactics had “served as a teachable moment”.
And Palantir was willing to step in at the Pentagon after Google employees rebelled over its involvement in Project Maven, which seeks to use AI in battlefield targeting.
Until Palantir undertook a public listing in September last year, relatively little was known about its client list beyond services to the US military, border enforcement and intelligence agencies.
Media coverage of Palantir has been shaped by its unusual protagonists as well as its national security clients. The company’s CEO is Alex Karp, who studied in Germany at Frankfurt University under the influential philosopher Jürgen Habermas, and often makes corporate announcements in philosophical language in unconventional clothing or locations. His most recent message was tweeted from a snowy forest.
PHOTO CAPTION: Palantir’s CEO Alex Karp. Photograph: Thibault Camus/AP
Rumours over Palantir’s possible involvement [with the CIA] in the operation to find Osama bin Laden have been met with coy non-denials.
The colourful backstory has added mystique to a company which, when it listed on the New York stock exchange, had only 125 customers.
Why did Palantir meet Von Der Leyen?
Sophie in ’t Veld, a Dutch MEP, has tracked Palantir’s lobbying of Europe’s centres of power. She notes the company’s unusual “proximity to power” and questions how it was that an EU delegation to Washington in 2019 met with US government officials and only one private company, Palantir. What was discussed, she wanted to know, when Karp met the president of the European commission, Ursula von der Leyen or when Palantir met the then EU’s competition commissioner, Margrethe Vestager, who is now in charge of making the EU fit for the digital age?
PHOTO CAPTION: EU commission president Ursula Von der Leyen (left) and executive vice-president of the European Commission for A Europe Fit for the Digital Age, Margrethe Vestager (right), in Brussels, Belgium, on 19 February 2020. Photograph: Olivier Hoslet/EPA
In June 2020, In ‘t Veld sent detailed questions to the commission and published her concerns in a blogpost headlined: “Palantir is not our friend”. The commission took eight months to give even partial answers but the company emailed In ‘t Veld three days after she went public with her questions, offering a meeting. She talked to them but questions why the company felt the need to contact “an obnoxious MEP” to reassure her.
In ‘t Veld characterises the commission’s eventual answers as “evasive” with officials saying no minutes were kept of the conversation between Von Der Leyen and Karp because it was on the sidelines of the World Economic Forum at Davos and they already knew each other.
PHOTO CAPTION: Member of European Parliament, Sophie in ‘t Veld, at European parliament headquarters in Brussels, Belgium. Photograph: Wiktor Dąbkowski/ZUMA Press/Alamy
“There’s something that doesn’t add up here between the circumventing of procurement practices, meetings at the highest level of government,” said In ‘t Veld, “there’s a lot more beneath the surface than a simple software company.”
For its part, Palantir says it is “not a data company” and all data it interacts with is “collected, owned, and controlled by the customers themselves, not by Palantir.” The company says “it is essential to preserve fundamental principles of privacy and civil liberties while using data” and that Palantir does not build algorithms off its customers’ data in any form but provides software platforms that serve as the central operating systems for a wide variety of public and private sector institutions.
Palantir said: “We build software products to help our customers integrate and understand their own data, but we don’t collect, hold, mine, or monetize data on our own. Of course, our engineers may be required to interact with some customer data when they are at customer sites, but we are not in the business of collecting, maintaining, or selling data.”
Europol entanglement
Covid has been the occasion for a new business drive but Palantir did not arrive in Europe with the pandemic. It has also found opportunities in European fear of terrorism and its sense of technological inferiority to Silicon Valley.
When health concerns are driving business, the software product Palantir sells is Foundry; when terrorism fears are opening up budgets, it is Gotham.
Foundry is built to meet the needs of commercial clients. One of its champions in Europe is Airbus, which says the system has helped identify supply chain efficiencies. Foundry has more recently found its way into governments, and Palantir’s CEO, Karp, has called Foundry an “operating system for governments”.
Gotham has long been used by intelligence services in the UK, the Netherlands, Denmark and France and was built for investigative analysis. Some Palantir engineers call what it does “needle-in-haystack” analysis that agencies can use to look for bad actors hiding in complex networks.
Since 2013 Palantir has made a sustained drive to embed itself via Gotham in Europe’s police systems.
The first major opportunity to do this came at the EU’s law enforcement agency, Europol, when it won a tender to create a system to store and crunch the reams of data from member states’ police forces. The Europol Analysis System was meant both to store millions of items of information – from criminal records, to witness statements to police reports – and crunch this data into actionable intelligence.
The agreement, signed in December 2012 with the French multinational Capgemini, subcontracted the work to Palantir and Gotham.
Over the next three years, heavily redacted Europol documents, obtained under freedom of information laws, tell a story of repeated delays, “low delivery quality” and “performance issues” related to Gotham. Amid the blacked-out lines there is mention of technical shortcomings such as the “inability to properly visualize large datasets”.
By May 2016 the issues were so entrenched that Europol agreed a settlement with Palantir, the terms of which they have refused to disclose. Capgemini, the contractor which brought in Palantir, also declined to comment.
It is also clear that Europol considered suing Palantir and Capgemini. In an internal briefing document ahead of an October 2018 meeting of the organisation’s management board, it is made clear that litigation was considered but rejected: “despite the performance issues identified [litigation] is likely to lead to costly court proceedings for which the outcome is uncertain.”
Palantir declined to comment on these issues specifically but said: “Any issues arising at Europol had nothing to do with the software’s ability to meet GDPR or data protection requirements, and were solely the result of a large, complex software implementation with multiple stakeholders.”
The caution was well advised. Palantir has form for suing large public bodies, including the US army, and winning.
When access was requested from Europol to all records relating to contractual matters with Palantir, 69 documents were identified, but the EU agency twice refused full access to 67 on the grounds of “public security”. An appeal has been lodged with the European ombudsman’s office, a complaint that was ruled admissible and a decision is pending.
The settlement did not disentangle Europol: the project was brought in-house and the effort to use Gotham as a data repository was abandoned, but it remained the main analysis component. In July 2017, a real-world trial of the system on counter-terrorism work found Gotham “suffering from significant performance issues”.
Despite these issues, Palantir has received €4m (£3.4m) from Europol.
The concerns went beyond performance when the EU’s privacy watchdog, the European data protection supervisor, began inspections. Heavily redacted copies of their reports in 2018 and 2019 register the inspectors’ concern that Gotham was not designed to ensure that the Europol analysts made it clear how people’s data had come to be entered into the system. The absence of this “personal implication” meant the system could not be guaranteed to distinguish whether someone was a victim, witness, informant or suspect in a crime. This raises the prospect of people being falsely implicated in criminal investigations or, at the very least, that their data may not have been handled in compliance with data protection laws.
Europol, as the data controller, said that such data was “treated with the greatest care”.
PHOTO CAPTION: The Europol building in The Hague, Netherlands. Photograph: Eva Plevier/Reuters
‘The hottest shit ever in policing’
In 2005, 15 European countries signed a deal to boost counter-terror efforts by exchanging DNA, fingerprints and vehicle registration data. This led to an IT buying spree as police authorities sought ways to get their systems to talk to each other. Norway was a latecomer when it signed up in 2009 but in 2016 a high-ranking delegation from the Norwegian police flew to Silicon Valley to meet Palantir. When they returned the force decided to set up a more far-reaching system to be called Omnia, running on Gotham.
The abrupt decision caught the attention of Ole Martin Mortvedt, a former senior police officer nearing retirement who was editing the national police union’s in-house magazine. When he started asking questions he found it impossible to establish who had gone to Silicon Valley and why the project had been expanded. The only representative of Palantir whom he could talk to in Norway was a relatively junior lawyer.
A frustrated Mortvedt started calling his former pupils from the police academy where he taught for many years who were now in mid-ranking positions in the police. Over the next three years, his police sources described a litany of missed deadlines.
“Those people who went to Silicon Valley, they were turned around by what Palantir had to offer,” said Mortvedt.
The system was handed over in 2020 but is still not functional. Palantir said that the problems were “not a function of our collaboration and, to the best of our knowledge, have their root cause elsewhere.”
The Norwegian police confirmed that Omnia has cost 93m Norwegian kroner, or slightly less than €10m.
Palantir met Danish officials in Silicon Valley two years earlier than their Norwegian counterparts. The Danes ended up buying Gotham for both the police and intelligence services as part of a counter-terrorism drive.
Christian Svanberg, who would become the data protection officer for the system, named POL-INTEL, said he wrote the relevant legislation enabling POL-INTEL.
The tender, which was made public, called for a system with cross-cutting access to existing police and intelligence databases, information exchange with Europol and open-source collection of new information. It also foresaw the need for algorithms to provide pattern recognition and social media analysis.
It was, in other words, a prescription for a predictive policing system, which vendors claim can help police predict where crimes will occur (place-based) and who might commit them (person-based). One of Denmark’s district police chiefs called it a “quantum leap into modern policing”.
Palantir said it understood from the Danish police that they did not use POL-INTEL for predictive policing.
Danish authorities pronounce themselves happy with the performance of POL-INTEL but have so far refused to release an internal evaluation or disclose data to enable any independent assessment of the results.
The police have refused to disclose even redacted versions of the internal evaluations of POL-INTEL. Despite Danish insistence on privacy safeguards with POL-INTEL, the only known internal assessment of the system found that police users had been using it to spy on the whereabouts of former Arsenal footballer, Nicklas Bendtner. A number of police officers were disciplined over the matter.
Norway and Denmark were not alone in their senior police’s enthusiasm for predictive policing. The German state of Hesse purchased a similar tool from Palantir in a tender that the state parliament’s opposition considered so opaque that a committee of inquiry took it up.
A German police official familiar with the development of predictive tools at the time says that senior officers had bought into the hype: “What was promoted three years ago was the hottest shit ever in policing. What we got wasn’t what was expected. You can’t predict crime.”
The Interior Ministry in Hesse said: “The Hessian police has had consistently positive experiences in its cooperation with Palantir.”
A bunker in The Hague
Since the EU passed its GDPR legislation in 2018, setting a global standard for the privacy rights of its citizens, it has talked itself up as a safe haven where digital rights are protected as human rights. While GDPR may still be poorly understood and mainly associated with browser requests to accept cookies, there is a watchdog. The European data protection supervisor and his staff of 75 face the immense task of ensuring that European agencies and the private companies they contract play by the rules. The supervisor himself is Polish lawyer Wojciech Wiewiórowski, who led the inspections at Europol previously. Predictably cautious in his choice of words, he stops short of calling for controversial companies such as Palantir to be kept away from sensitive European data. But he does counsel caution.
“It doesn’t make a difference if systems have been produced in the EU or outside of it when considering their compliance with data protection requirements. But software produced by companies that might have connections with intelligence services of countries outside the EU should be of special interest for us.”
It is not always clear who is taking more interest in whom. Palantir has shown it has reach and influence over the shaping of knowledge around data and privacy in Europe. Some of the continent’s leading thinkers on big data, artificial intelligence and ethics have worked with the company in a paid capacity. One of them is Nico van Eijk, who held a professorship at the University of Amsterdam. Meeting Van Eijk in his current job is an involved process. These days his office is in a bunker in The Hague in the same building as the Netherlands’ Council of State. It is here that he runs the committee that oversees the Dutch intelligence services.
You can only enter if you leave all digital devices at the entrance – no phones, laptops, no recording devices. Throughout the Covid crisis employees could not work from home as their communications cannot be trusted to an internet connection. The committee has real-time access to all data and investigations by the military and general intelligence services of the Netherlands.
At a meeting in January 2021, Van Eijk declined to discuss a previous role he held on Palantir’s advisory board but commended the company on having an ethical board in the first place. Palantir said Van Eijk was an adviser on privacy and civil liberties and that board members are “neither asked nor expected to agree with or endorse decisions made by Palantir” and are “compensated for their time”.
Corporations, including those in the tech industry, are sponsoring an increasing number of academics, with potential implications for the production of knowledge on data and privacy.
Many of Van Eijk’s colleagues at the University of Amsterdam take a different view of Palantir. Ahead of the 2018 Amsterdam Privacy Conference (APC), one of Europe’s premier events on the subject, more than 100 leading scholars signed a complaint that stated: “The presence of Palantir as a sponsor of this conference legitimises the company’s practices and gives it the opportunity to position itself as part of the agenda … Palantir’s business model is based on a particular form of surveillance capitalism that targets marginalised communities and accelerates the use of discriminatory technologies such as predictive policing.”
Palantir said it is not a surveillance company. “We do not provide data collection services, including tools that enable surveillance of individual citizens or consumers.”
Inferiority complex
Europe’s dependence on US tech is not a matter of concern only for human rights advocates and privacy scholars. Some of the biggest businesses in Germany and France have been in talks over the creation of something akin to a safe haven for their own commercially sensitive data. Those discussions revealed that German car manufacturers were just as nervous as any privacy campaigner about releasing their data to US cloud services, such as Amazon Web Services.
Marietje Schaake, the director of Stanford’s Cyber Policy Centre, warned that Europe’s “tech inferiority complex” was leading to bad decisions: “We’re building a software house of cards which is sold as a service to the public but can be a liability to society. There’s an asymmetry of knowledge and power and accountability, a question of what we’re able to know in the public interest. Private power over public processes is growing exponentially with access to data and talent.”
Palantir says that “it successfully operates within and promotes the goals of the GDPR and its underlying principles”. It insists it is not a data company but rather a software company that provides data management platforms. It has for a decade, it says, worked in Europe with commercial and government organisations, “helping them successfully meet data protection requirements at scale as mandated at a European and national level”.
The latest European bid for greater digital sovereignty is GAIA‑X, wrongly billed in some quarters as a project to make a Euro-cloud. It is, in fact, an association that will seek to set the rules by which Europe-based companies do business with cloud computing services. Just as GDPR means that Europeans’ personal data has to be treated differently on Facebook than that of users outside the EU, GAIA‑X would mean commercial data is more tightly controlled on the cloud. Despite its relative obscurity, GAIA‑X may go on to have profound implications for the business model of US tech companies, or hyperscalers.
It was a surprise therefore when Palantir proclaimed itself, among other companies, a “day 1 partner” of GAIA‑X three months before any decision had been made. Officials at the association complained of “delinquent partners” who had jumped the gun for reasons of commercial advantage. Ultimately, Palantir was allowed to join.
Palantir says it did nothing that other companies involved with GAIA‑X did not do.
The chairman of GAIA‑X, Hubert Tardieu, formerly a senior executive at French tech firm ATOS, noted that the association did not want to get mired in lawsuits from “companies in California who know a lot about antitrust law.”
Get ready. It’s coming. What’s coming? We don’t know. And it’s unclear whether Palantir knows. But following reports that Palantir just purchased $50.7 million in gold bars and announced that it’s now accepting payment in both gold and bitcoin for its software in anticipation of another “black swan event”, we have to ask: what is Palantir seeing that it isn’t telling us? Whatever it is, it doesn’t appear to bode well for the US. At least not the dollar. The move comes roughly a year after the company relocated from San Francisco to Denver.
This is probably a good time to recall that President Biden’s Director of National Intelligence, Avril Haines, was a Palantir consultant from July 5, 2017 to June 23, 2020, when she left to join the Biden campaign. It’s a reminder that Palantir’s intelligence assessments probably include plenty of information flows from the numerous people in the intelligence community with ties to the company.
At the same time, it’s worth keeping in mind that when a company known for its threat analysis capabilities makes big public purchases like this, that’s kind of an advertisement for Palantir’s services. We could be looking at some creative marketing tactics. Either way, for a company that’s effectively a privatized NSA, it’s quite a signal to send to the world:
“The company spent $50.7 million this month on gold, part of an unusual investment strategy that also includes startups, blank-check companies and possibly Bitcoin. Palantir had previously said it would accept Bitcoin as a form of payment before adding precious metals more recently.”
Yeah, it’s certainly an unusual investment strategy. And note the explanation for this unusual strategy, according to the company’s COO: it’s not that there’s a specific black swan event. It “reflects more a worldview”, where “You have to be prepared for a future with more black swan events”:
And that’s possibly the most ominous answer we could have received. There’s no specific black swan event the company is protecting against. Instead, it seems the company has adopted a worldview that assumes a higher rate of black swan events in the future. A worldview rooted in a deepening sense of impending doom.
Although who knows, maybe there is something very specific the company is preparing for. It’s not like they would tell us. Well, other than perhaps telling us indirectly through weird public investment strategies like this.
Just how much data is Amazon collecting on us? That was the question asked in a new Reuters report, when a group of seven reporters requested from Amazon profiles of all of the information the company has on them, taking advantage of a new feature Amazon began making available to US customers in early 2020 after failing to defeat a 2018 California ballot measure requiring such disclosures.
This is far from the first time these kinds of questions have been asked about Amazon’s highly invasive products designed for areas like the bedroom. Recall how Amazon’s Echo device, which comes with cameras and an AI, was capturing incredible amounts of information that could potentially be sold to third parties. So as we might expect, the reporters who requested their data summaries were in for a bit of a shock when they got the stunningly detailed array of information on themselves and their families gathered from all the different Amazon-sold products available, from the Amazon.com website to Kindle e‑readers, Ring smart doorbells, and Alexa smart speakers. Even with their jaded expectations, the reporters were stunned to learn the level of detail collected about them. One reporter found Alexa alone had been capturing roughly 70 voice recordings from their household daily, on average, for the prior three and a half years. And while Amazon has long assured customers that any recordings it stores only include the questions asked by the user, the reporters found the recordings often went on for much longer. Amazon said it’s working on fixing those bugs.
Perhaps the most surprising finding was a captured recording of children asking Alexa how they could get their parents to let them “play”. Alexa apparently retrieved exactly this kind of advice from the website wikiHow, advising the children to refute common parent arguments such as “too violent,” “too expensive,” and “you’re not doing well enough in school.” Amazon said it does not own wikiHow and that Alexa sometimes responds to requests with information from websites. So while the information being captured by Amazon’s ubiquitous products is a major part of this story, there’s also the question of what kind of information their products are feeding users, in particular all the kids who have discovered that Alexa will act as a child ally in child-parent intra-household struggles.
Finally, we got an update on Amazon’s annual reports on how it complies with law enforcement requests for this kind of data. The update is that Amazon is no longer giving that info out. Why the restriction? The company explains that it expanded its law enforcement compliance report into a global report and therefore decided to streamline the data. Yep. A nonsense non-answer. Which is the kind of answer that suggests governments are probably having a field day, with shades of the NSO Group story here. So while the main story here is about Amazon’s collection of all of this private data, we can’t forget that there’s nothing stopping Amazon from sharing that data, especially with the governments whose permission it needs to continue operating:
“One reporter’s dossier revealed that Amazon had collected more than 90,000 Alexa recordings of family members between December 2017 and June 2021 – averaging about 70 daily. The recordings included details such as the names of the reporter’s young children and their favorite songs.”
70 recordings daily. That’s what Alexa alone was capturing in one reporter’s household. Information from the whole spectrum of Amazon products is collated into a single customer record, gathering everything from Amazon.com website searches and purchases (something we should expect to be tracked) down to the words highlighted in your Kindle e‑reader (something one would probably not expect to be happening). But what is arguably the most scandalous aspect of this situation is that the reporter was only learning about these daily captures for the first time after they had been going on for nearly four years. Yes, Amazon technically discloses all of this information capture, but that’s all part of the scandal. According to the rules of commerce, you can apparently collect whatever information you want on customers as long as you tuck away a disclosure of that data capture somewhere in a massive privacy policy:
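The averaging claim in the Reuters figures is easy to sanity-check. A back-of-the-envelope sketch, assuming the span runs from December 1, 2017 to June 30, 2021 (the report gives only months, so the exact endpoint dates are assumptions):

```python
from datetime import date

# "More than 90,000 Alexa recordings ... between December 2017 and
# June 2021 - averaging about 70 daily." Check that the numbers are
# mutually consistent. Endpoint days are assumed, not reported.
span_days = (date(2021, 6, 30) - date(2017, 12, 1)).days
per_day = 90_000 / span_days
print(span_days, round(per_day, 1))  # ~1,307 days, about 69 per day
```

Roughly 69 recordings a day over about three and a half years, which matches both the “about 70 daily” and the “nearly four years” framing in the reporting.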
And then there’s this remarkable anecdote about the kinds of questions children are posing to Alexa: kids were literally getting advice on how to argue with their parents from a website that Alexa was accessing. And then, of course, this was all recorded. It’s an ironic indication of the scale of the potential scandal here: the company is literally recording so much data it’s capturing the data on its other abuses:
Finally, note that when it comes to potential abuses of this captured data, it’s in the transfer of that data to government agencies where the damage can really explode. This is a globally sold product, after all. It’s not just the US national security state that’s likely getting access to all of this incredibly private data. Pretty much any government is going to potentially have a right to request access to it under certain circumstances. What kind of circumstances? Well, that presumably depends in part on the local laws. That’s all part of why Amazon’s decision last year to stop disclosing how often it complies with US law enforcement requests is potentially so alarming. Amazon’s explanation for ending the report was that it expanded the report to include sharing compliance globally and therefore streamlined the available information. It’s not exactly a compelling explanation. So how much of this data is being shared with different governments? We don’t know, other than we can be pretty sure it’s enough to embarrass Amazon into ‘streamlining’ its reports and limiting that info:
Did Amazon “streamline” the information on how often it complies with US law enforcement requests right out of its reports out of a sense of customer convenience? That’s the absurd story the company is telling us. It’s not a great sign.
So the overall Reuters update on the collection of personal information by Amazon appears to be that it is indeed worse than previously recognized. Which is about as bad as we should have expected.
Right-wing outrage over ‘Big Tech censorship’ of conservative voices has long been a bad-faith argument made in the spirit of ‘working the refs’ and gaslighting. It’s no secret that the social media giants have been repeatedly caught giving special treatment to right-wing voices on their platforms and making special exceptions to excuse and facilitate far right disinformation. Disinformation that synergizes with Big Tech’s algorithms that prioritize ‘engagement’, in particular the anger- and fear-driven engagement the far right specializes in.
So it’s worth pointing out that the GOP’s ‘war on Big Tech’ of recent years, endlessly railing against alleged mass censorship by treating each individual instance of a conservative user’s content being pulled for violating the rules as an example of political discrimination, isn’t just a cynical strategy designed to give social media platforms the ‘space’ to give right-wing users more lenient treatment than they would otherwise receive. It’s also a strategy that advocates for the unchecked exploitation of those profit-maximizing algorithms by the platforms themselves. In other words, if Big Tech ever truly did completely cave to these far right demands, and allowed the platforms’ algorithms to be completely unchecked amplifiers of ‘engaging’ far right content as they have been in the past, that wouldn’t just help the far right. It would also be a great way for these social media giants to maximize their profits.
It’s long been clear that Big Tech and the GOP are playing some sort of cynical game of political footsie with all of these phony ‘Big Tech is censoring us’ memes. It’s win-win. The GOP can pretend to take a populist stance on something and Big Tech can pretend it’s actually doing something to adequately address the fact that its platforms remain the key tools of fascist politics globally. But given how this conservative political campaign is literally fighting for Big Tech’s right to operate in an unchecked profit-maximizing manner, we have to ask: just how much secret coordination is there between the GOP and Big Tech in creating and orchestrating the GOP’s anti-Big Tech propaganda? Because at this point, you almost couldn’t come up with a more effective lobby for maximizing Big Tech’s profits than the army of Republican officials claiming to be very upset with them:
“This is the economic context in which disinformation wins. As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.”
An economic paradigm centered on maximizing profits by processing ever-increasing volumes of personal information for the purpose of predicting user behavior. And yet this paradigm can’t actually discern truth. A giant information-processing-and-delivering system that can’t determine whether the information it’s processing and delivering is corrupt. Corrupt or not, it’s the collection and delivery of information that maximizes profits. Peddling disinformation is how these companies maximize their profits. If profit-maximizing is the overarching imperative driving the actions of these entities, the promotion of disinformation is a necessary consequence. You can’t disentangle the two:
It’s that inextricable link between the profit-maximizing motives of these Silicon Valley giants and the imperative to promote misinformation that points us towards what ultimately must be part of the solution here: acknowledging that democracy can’t survive in an environment where disinformation is algorithmically promoted under the cold directive of profit maximization. It really is a choice of which system will ultimately reign supreme. Democracy or surveillance capitalism:
It’s worth recalling at this point the reports of the secret dinner in the fall of 2019 between Mark Zuckerberg, Peter Thiel, Jared Kushner and Donald Trump at the White House during one of Zuckerberg’s trips to DC. Zuckerberg and Trump apparently came to an agreement during the dinner where Zuckerberg promised that Facebook would take a hands-off approach to the policing of misinformation from conservative sites. So as we see this farcical spat between the GOP and Big Tech play out to the synergistic benefit of both the GOP and Big Tech’s investors, we should probably be asking what else was agreed upon at that secret meeting and the other secret meetings that have undoubtedly been taking place all along between the Silicon Valley giants and powerful forces on the far right. Was this phony GOP-vs-Big Tech campaign actively discussed during that meeting? Because as Shoshana Zuboff observes, this really is a choice between democracy and maximum profits, and it’s pretty clear Big Tech and the GOP both made the same choice a while ago.
The domination in the social media space of companies with deep ties to the US military industrial complex is nothing new, as Yasha Levine documented in his book Surveillance Valley. So with Elon Musk having just taken personal control of Twitter, it’s worth noting that Musk isn’t just a libertarian billionaire who is clearly finding joy in trolling the left with his new power over this key social media platform. As Levine reminds us below, he’s a US defense contractor and that role is poised to only grow.
It’s a fun fact that adds context to Musk’s hyper-trollish tweet a couple of days ago of a cartoon depicting the classic far right trope that the polarization in US politics is exclusively due to Democrats and liberals lurching to the extreme left, pushing former liberals like Musk into the conservative camp. The cartoon shows three stick figures at three different time periods: in 2008, it’s “my fellow liberal” on the left, “me” (Musk) in the center left, and a conservative on the right. A 2012 scene shows the “my fellow liberal” running quickly to the left, moving “me” to the center. Finally, there’s a 2021 scene showing the liberal far out to the left shouting “Bigot!”, with “me” now in the center-right part of the plot and the conservative stick figure exclaiming “LOL!”. Musk basically came out as a ‘former liberal’ in the tweet.
And as Greg Sargent points out in the following piece, that tweeted cartoon wasn’t just an expression of Musk’s politics. It was basically a statement of intent. An intent to allow Twitter to revert into an Alt Right fantasy platform where ‘anything goes’ and far right disinformation dominates.
And this is of course all happening in the midst of the GOP’s deepening embrace of the politics of QAnon and insurrection. At this point, the GOP’s quasi-official stance is that the Democratic Party consists of ‘groomers’ trying to change the law to make it easier to prey on children. How is Musk planning on handling the inevitable deluge of tweets promoting insurrection and calling for the death of pedophile Democrats?
These are the kinds of questions Musk is going to have to answer at some point and based on his public comments thus far it’s not at all clear that he’s thought it through at all. Or maybe he has thought it through and the plan really is to just allow Twitter to revert back into an ‘anything goes’ platform. We’ll see.
At the same time, there are certainly some areas where social media platforms really could use a loosening of their moderation policies, in particular when it comes to global events involving Russia or China. Recall how Ukrainian Jewish activist Eduard Dolinsky was literally banned from Facebook for showing examples of the kind of anti-Semitic graffiti that has become rampant in Ukraine. Also recall how Twitter itself locked the official Twitter account of the Chinese embassy in the US back in January 2021 over a tweet defending Beijing’s treatment of Uyghurs. Perhaps Musk can address this kind of censorship being done on behalf of the US national security state. But that returns us to the fact that Musk is very much a US defense contractor and that relationship with the US national security state is only getting deeper. Musk really is part of ‘the Deep State’. A ‘Deep State’ that has decades of working relationships with far right elements around the globe. But unlike most elements of the Deep State, he’s got a right-wing fan base that seems to fancy Musk as some sort of fellow-traveler ‘outsider’. It’s a fascinating situation. A fascinating situation that doesn’t bode well.
Ok, first, here’s Sargent’s piece on Musk’s recent tweet where he basically comes out as a Republican. What is the fallout going to be now that Musk is more or less promising to revert Twitter into an ‘anything goes’ disinformation machine? We’ll find out...probably during the next insurrection fueled by waves of retweeted deep fake videos portraying Democrats as satanic pedophiles:
“It may be that Musk might not end up allowing anything like this to happen, once his vague “free speech” bromides collide with messy moderation realities. But when he displays his determination to downplay the radicalization of the right wing of the GOP, he’s showing us a potential future information landscape that far-right Republicans are surely dreaming about.”
The floodgates are being opened. Which means it’s just a matter of time before the worst kind of disinformation is once again flooding that platform. But it’s not just going to be a return to the bad old days of yesteryear. Deep Fake technologies didn’t exist back when Twitter was last a free-for-all far right playground. It’s a brave new world. There’s more than one way to release a Kraken:
So Musk is coming out as a Republican at the same time he’s making this purchase of Twitter seemingly in opposition to lefty ‘wokeism’. It certainly gives us a major hint as to what to expect from Musk, at least when it comes to disinformation in US politics. But how about Twitter’s other problem area when it comes to moderation: the overmoderation of anything involving China or Russia that doesn’t fit the prevailing narratives coming out of the US national security state? Can we at least expect some improvements there? Sure, if you believe someone who is anxiously courting more and more Pentagon contracts is going to do anything to piss off his biggest customer:
“I mean here you have Elon — an “outsider” — mounting a hostile takeover of a major global communication platform. And the thing about him is that he’s not just a successful lithium battery salesman, he’s also a major military contractor doing business with the most secretive and “strategically important” spooks in America.”
Musk is clearly more than happy to piss off ‘the left’. He’s kind of making that his personal brand at this point. But how about the Pentagon? As the conservative stick figure in Musk’s tweet put it, LOL!
Following up on the uproar over Elon Musk’s purchase of Twitter and, as Yasha Levine pointed out, the complete lack of any acknowledgement in that uproar of Musk’s growing status as a major US national security contractor, here’s a post on the Lawfare blog from last month that underscores another aspect of Musk’s relationship with the US national security state: the dual-use nature of Musk’s Starlink satellite network and the fact that it’s already being used for military purposes. In Ukraine. Yep, it turns out Musk’s Starlink satellite network has been playing a crucial role in providing internet connectivity for Ukraine’s military. A role that was encouraged by USAID. In fact, USAID issued a press release last month touting how it set up a public-private partnership with Starlink to send 5,000 Starlink terminals to Ukraine to maintain internet connectivity during the war. And as the following Lawfare blog post points out, that use hasn’t been limited to civilian purposes. One Ukrainian commander told the Times of London that they “must” use Starlink to target Russian soldiers at night with thermal imaging.
So Musk delivered a large number of Starlink terminals to Ukraine under a USAID program to provide civilians with internet connectivity, and they ended up getting used by Ukraine’s military. It’s the kind of situation that creates a number of possible legal headaches. As we’re going to see, the US Space Command has already set up a program for incorporating commercial infrastructure operating in space into military efforts, and these Starlink satellites are readily capable of handling the Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) functions necessary for modern military operations.
But perhaps the biggest possible headache that could emerge from this is the one experts have been warning us about ever since Musk hatched this Starlink scheme: the threat of a space junk cascade that renders the earth’s low-orbit space effectively unusable. That kind of scenario was already a risk just from things going wrong. And now we’re learning that Musk is allowing Starlink to be used for exactly the kind of activities that could prompt a physical attack on the Starlink cluster:
“Musk corresponds with the Ukrainian government against the backdrop of a complex legal landscape. This post explores several tenets of international humanitarian law as it might govern Russian targeting of Starlink infrastructure. It then assesses how and why Musk’s actions threaten to draw the U.S. in as a party to the conflict. Finally, it proposes modifications to domestic policy that could help avoid such an outcome now and in the future.”
What are the implications of Elon Musk’s Starlink satellites being used by the Ukrainian military? Well, for starters, those Starlink satellites — which Musk has admitted are capable of executing the Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) functions required by modern militaries — are clearly “dual use” pieces of infrastructure. And under international law that means these satellites could potentially be legally attacked by Russia. So one very direct implication of the Ukrainian military’s use of Musk’s network is a possible Russian attack on a commercial satellite system operated by a US company:
And if Russia does indeed decide to launch some sort of attack against Starlink, what can we expect the US to do in response? Well, under Article VI of the Outer Space Treaty of 1967, states bear responsibility for national activities in outer space, including those launched from a state’s territory, even when the activity is carried out by a nongovernmental entity. Beyond that, the US Space Command has already set up a Commercial Integration Cell (CIC) program designed to enlist the use of commercial satellite capabilities in armed conflicts. So if there’s a Russian attack on Starlink, it’s not necessarily going to be easy for the US to avoid escalating the situation, because the US government will be legally responsible, in part, for sanctioning and coordinating the military use of this commercial infrastructure. In other words, a Russian attack on Starlink could create a very messy situation:
But perhaps the biggest mess that could be created by a Russian attack on Starlink would be the literal mess in space that could result from any kinetic attacks on those satellites. As we’ve seen, Starlink already poses an unprecedented threat to the earth’s orbital space, with a worst-case scenario that could litter the space around the planet with so much space junk that it becomes effectively impossible to launch new objects into orbit. And as this Lawfare post notes, while the risk of creating a bunch of space junk could certainly give Russia pause when considering whether or not to carry out a kinetic attack on those satellites, that risk doesn’t bar Russia from carrying out such an attack. There’s no international law against it, so it’s really up to Russia at that moment to weigh the costs and benefits. So if these decisions are being made at a time when Ukraine’s military is eviscerating Russian forces in part through the use of those satellites, don’t be shocked if Russia’s cost/benefit analysis doesn’t leave clean planetary orbits at the top of the priority list:
Also keep in mind that the unique vulnerability of the Starlink system — creating a nightmare space junk cascade due to the large number of tiny low-orbit satellites Musk just launched into orbit without any serious consideration of the risks — is exactly the kind of thing opposing militaries might be tempted to exploit as part of a big gamble. Who would be hurt more by a space junk cascade that cripples commercial space activity? An already economically crippled Russia, or the US? It wouldn’t necessarily take the fragmentation of that many of these satellites to get a cascade started. Or maybe they’ll just threaten to do it. Either way, let’s hope the Russian government, and any other governments directly threatened by Starlink in future conflicts, are actually taking these risks seriously, because it’s obvious the people deploying and using the system are not.
Who is going to prevent Elon Musk’s Starlink network of microsatellites from turning the earth’s lower orbits into a swarm of lethal space junk that threatens to incapacitate our ability to operate in space? No one, probably. That’s the likely answer we can infer from the following pair of articles about Starlink.
The first article, from last month, highlights a rather interesting anomaly observed between Starlink and USAID. As we’ve seen, USAID created some sort of public-private partnership with Starlink for the delivery of 5,000 Starlink terminals to Ukraine to help deliver internet services to the country. Including vital internet services for Ukraine’s military, raising obvious questions about whether or not Starlink could potentially come under attack by Russia.
How much money did USAID provide for this initiative? Well, that’s part of the mystery. The other part of the mystery is what exactly USAID paid for. We’re told that USAID paid SpaceX $1,500 per Starlink terminal for 1,333 terminals, adding up to $2 million. The standard Starlink terminal costs $600, while a more advanced version sells for $2,500. So was USAID paying $1,500 for the $600 terminals? If so, that would be some rather outrageous price gouging, so maybe it was $1,500 for the $2,500 terminals. We don’t know, but adding to the mystery is that USAID altered its public statements on this public-private partnership. The initial April 5 statement released by USAID noted that SpaceX donated 3,667 terminals while USAID purchased an additional 1,333 terminals. Those numbers were removed from an update released later that day.
So for whatever reason, USAID behaved in a way that suggested some degree of sensitivity about these numbers. We don’t know why, but what is clear from this story is that the US government sees a lot of value in Starlink’s capabilities. Which is rather problematic when it comes to regulating Starlink and ensuring it doesn’t pose an unreasonable risk of a space junk cascade catastrophe. And that brings us to the second article below from back in August of last year. The story is about a study done by researchers on the rate of orbital close encounters since the launch of Starlink. Basically, they’ve doubled in the last couple of years, with half of the close encounters involving Starlink satellites. So Starlink is already proving itself to be a major space collision hazard, and it’s barely even finished yet. Yes, as of the time of that article, only 1,700 Starlink satellites were in orbit. The plan is for tens of thousands of them to eventually be launched into orbit. That’s why these researchers were predicting that 90% of orbital close encounters in the future are likely to involve Starlink satellites.
And that’s why the mystery regarding Starlink’s relationship to USAID, and SpaceX’s larger relationship to the US national security state, could end up being a rather crucial question in terms of whether or not anything is going to be done to prevent an orbital space junk cascade catastrophe. Because it sure doesn’t look like the US government is overly concerned with these risks right now. Quite the opposite:
“Despite SpaceX implying that the US didn’t give money to send Starlink terminals to Ukraine in March, a report from The Washington Post reveals that the government actually paid millions of dollars for equipment and transportation. The report found that the US Agency for International Development, or USAID, paid $1,500 apiece for 1,333 terminals, adding up to around $2 million. USAID disclosed the number of terminals it bought from the company in a press release from early April that has since been altered to remove mentions of the purchase.”
For whatever reason, USAID decided to alter its press release on the ‘public-private’ partnership it started with SpaceX to deliver 5,000 Starlink terminals to Ukraine. Why the alteration? It’s a mystery, along with the mystery of whether the $1,500 USAID was paying for these units was for the $600 terminals or the more advanced $2,500 terminals. If it was $1,500 going towards $2,500 terminals, well, ok, that would be an obvious subsidy towards SpaceX’s ‘charitable contributions’. But if it was $1,500 going towards the $600 units, you have to wonder what exactly was going on here:
And it’s the mystery of that relationship between SpaceX and USAID that brings us to the following article from back in August of last year about a profoundly disturbing study of the impact SpaceX is already having on the status of space junk and orbital close encounters. As researchers found, roughly half of the ~1,600 close encounters measured weekly involved Starlink satellites. Half. We’re talking about a satellite constellation that didn’t even exist several years ago. It now accounts for half of the orbital close encounters. And as the article notes, only around 1,700 Starlink satellites had been launched by that point last year. The ultimate plan is the creation of an orbital network that consists of tens of thousands of these Starlink microsatellites. That’s why these researchers are predicting that Starlink satellites are on track to account for 90% of orbital close encounters in coming years.
So Elon Musk is apparently developing a monopoly on orbital close encounters. And no one appears to be doing anything to stop it. Quite the contrary, Starlink is involved with ‘public-private partnerships’ with the US government. And that’s why the stories about Starlink’s USAID-sponsored role in the war in Ukraine and the growing threat it poses to the planetary orbital space are really part of the same story:
“SpaceX’s Starlink satellites alone are involved in about 1,600 close encounters between two spacecraft every week, that’s about 50 % of all such incidents, according to Hugh Lewis, the head of the Astronautics Research Group at the University of Southampton, U.K. These encounters include situations when two spacecraft pass within a distance of 0.6 miles (1 kilometer) from each other.”
Half of all orbital close encounters today involve the Starlink constellation of satellites, something that didn’t exist a few years ago. In other words, we don’t simply need to worry about these microsatellites causing collisions and generating space junk. These things already are space junk. And 1,700 of these things have been launched so far. The plan is to put tens of thousands of these microsatellites into orbit. So we’re just experiencing a taste of the orbital traffic jams yet to come.
And it’s not like these close encounters only involve Starlink satellites threatening non-Starlink satellites. Some of these close encounters involve two Starlink satellites. The Starlink constellation is literally a threat to itself, and it’s not even close to being fully launched yet. That’s why these experts are predicting that 90% of the close encounters in the future are going to involve Starlink satellites. It’s a space junk monopoly, seemingly being built with the endorsement of the US government:
But as these experts point out, the growing threat posed by the Starlink constellation isn’t just the direct threat of a space collision. There’s also the threat that this abundance of close encounters is going to cause satellite operators to become far more risk tolerant than they should be. Repositioning satellites takes time and fuel. Satellite operators are going to be forced to make judgement calls on whether or not a close encounter warning is worth responding to, and it’s just a matter of time before they make a mistake. The kind of mistake that can have cascading costs:
How long before the world’s satellite operators hit ‘close encounter fatigue’ and just stop moving their satellites out of the way? The only government in a position to prevent that eventuality is subsidizing it instead, so we’ll find out eventually.
Following up on the role Elon Musk’s Starlink is playing in the conflict in Ukraine — subsidized by USAID — and the potential risks of a cascading orbital catastrophe (Kessler’s Syndrome) that comes with the militarization of the Starlink low orbit constellation of mini-satellites, here’s a pair of articles that should serve as a warning that we should probably expect the Starlink system to be treated as a military target one of these days, with all of the cascading consequences that could arise from that. Because as the articles describe, Starlink has already come under a kind of Russian attack. Specifically, a signal jamming effort that appears to have worked for at least a few hours in Ukraine before Starlink was able to issue a patch that fixed the problem.
The attack took place back in early March. We aren’t given any details on the Russian signal jamming attack, but it was presumably some sort of electronic warfare measure that disrupted the ability of the Starlink terminals located on the ground in Ukraine to communicate with the satellites. We can also infer that the fix didn’t require any updates to the terminals themselves since they wouldn’t have been able to receive the updates. So some sort of update was delivered to the software operating the satellites themselves that fixed the jamming. That’s about all we know about the incident.
Overall, it sounds like a relatively simple form of electronic warfare. It doesn’t sound like the attack involved actually hacking the software operating these satellites. But the fact that the countermeasure for the attack involved a rapid software patch underscores the basic fact that this constellation of satellites has the ability to have its software rapidly and remotely updated. Because of course it has the capability. It’s an absolute necessity for managing a growing chaotic cluster. Don’t forget what researchers concluded last year: Starlink satellites are currently responsible for roughly half the close encounter events and will likely be the source of 90% of close encounters by the time SpaceX is done launching the tens of thousands of mini-satellites it has planned. Some of the close encounters involve two Starlink satellites careening towards each other. Having the ability to remotely update the Starlink software and remotely adjust the orbits of each one of those satellites is an absolute necessity.
But that necessity for remotely piloting this unprecedented satellite cluster also obviously poses a hacking risk. Yes, there’s no indication that any Starlink satellites were hacked as part of this signal jamming campaign. But the potential is obviously there. It’s not like satellites are immune to hacking. Quite the contrary. Satellites are notoriously easy to hack.
And not only are there plenty of examples of hackers hacking satellites for fun, don’t forget that you don’t necessarily need to hack the satellite directly. Hacking the satellite operator could potentially give you remote access to those satellites too. Russia’s military was accused of hacking satellite company Viasat, whose network served Ukraine, at the beginning of the conflict. We don’t have any indication that the hack gave Russia control over Viasat’s satellites. But as we’ve seen with the SolarWinds hacks, once a sophisticated hacker is allowed into a corporate network it can be very difficult to get them out. Was Starlink hit by the SolarWinds hack? How about some Starlink contractors? It only takes one compromised partner.
Finally, also recall how Starlink relies in part on automated orbital adjustments to avoid collisions. Imagine a hack that pushes faulty code to the systems handling that part of Starlink’s functionality. You could theoretically send the entire cluster careening into itself and the rest of the satellites in low earth orbit.
And that’s all why the successful repelling of a Russian signal jamming attempt shouldn’t necessarily be a relief for anyone concerned about the potential risk these constellations of microsatellites pose to humanity’s ability to operate in space. Yes, this particular attack didn’t succeed. But with Starlink, we’re still one successful hack away from an orbital catastrophe:
““The next day [after reports about the Russian jamming effort hit the media], Starlink had slung a line of code and fixed it,” Tremper said. “And suddenly that [Russian jamming attack] was not effective anymore. From [the] EW technologist’s perspective, that is fantastic … and how they did that was eye-watering to me.””
A software update ended the attack. On the one hand, that’s a nice sign for Starlink’s robustness in the face of an outside attack like a jamming signal. But it’s also a reminder that if hackers in the future manage to hack Starlink’s own systems they just might find themselves with the capacity to update Starlink’s satellite software. So when Elon Musk tweeted out that “SpaceX reprioritized to cyber defense & overcoming signal jamming” in response to the incident back in March, let’s hope that includes protecting not just the satellites but all of the systems tasked with remotely controlling these satellites:
““SpaceX reprioritized to cyber defense & overcoming signal jamming,” he wrote Friday. Musk quipped that the measures were a bit of unexpected quality assurance work for the Starlink system.”
SpaceX had to reprioritize not just overcoming the direct attack of signal jamming, but also cyber defense. It’s an implicit acknowledgement that the Starlink system’s vulnerabilities don’t just involve some sort of direct physical attack. Starlink can potentially get hacked too, whether we’re talking about the direct hacking of these satellites or the indirect hacking of the Starlink command and control centers where these kinds of remote software updates can get pushed to the network.
So with Starlink having already been weaponized for battlefield uses and already having come under at least an indirect disruption of its services in response to that weaponization, we have to ask: how high was cyber defense on the priority list when SpaceX was originally designing the Starlink system? Don’t forget that Starlink is a platform that’s already been rushed through without a number of other proper safety assessments, like the basic assessment of whether or not it’s safe to suddenly launch thousands of microsatellites into low earth orbit without triggering some sort of Kessler Syndrome cascade catastrophe. Was cybersecurity also rushed through in the race to be the first company with a ‘megaconstellation’ of satellites in orbit? Starlink represents a kind of orbital land grab, after all. How high a priority was cybersecurity in this land grab? It’s a question that is quite literally looming over all of us. Well, looming over most of us. If you happen to be serving on a space station, the threat is more adjacent.
Following up on the recent reports about the increasing sophistication of the military hardware — longer-range missiles and artillery — being delivered to Ukraine by the US, along with the reports about the increasingly important role Elon Musk’s Starlink satellite cluster network has been playing in providing internet services for Ukraine’s military, here’s a report giving us a better idea of the now vital role Starlink is playing in Ukraine’s military efforts. The kind of military role that already has China freaking out.
At least that’s what we can infer from recent commentary in the official newspaper of the Chinese armed forces warning about a US push for space domination using Starlink. Domination both in terms of the military operations Starlink enables in otherwise remote regions of the planet, but also domination just in terms of the space taken up in the Earth’s orbit. As the commentary pointed out, Low Earth Orbit (LEO) only has space for around 50,000 satellites. If Starlink ends up launching the full 42,000 satellites that it claims is its goal, that would occupy 80 percent of the Earth’s LEO.
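Taking the commentary’s own numbers at face value (50,000 satellites of LEO capacity, a 42,000-satellite Starlink goal), the share works out as:

```python
# Share of Low Earth Orbit capacity a completed Starlink constellation would
# claim, using the figures cited in the Chinese military commentary.
leo_capacity = 50_000    # satellites LEO can reportedly accommodate
starlink_goal = 42_000   # SpaceX's stated full-constellation target

share = starlink_goal / leo_capacity
print(f"Starlink share of LEO capacity: {share:.0%}")
# 84% by straight division -- the commentary rounds this to roughly 80 percent.
```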
Beyond that, the piece warns that Starlink could effectively turn itself into a second independent internet. An independent internet potentially globally accessible and a clear risk to the internet sovereignty of countries like China.
Of course, there’s also the inherent risk associated with filling the LEO with as many satellites as can fit in that space: the risk of setting off a space junk chain reaction that triggers Kessler’s Syndrome and makes the LEO space effectively non-traversable. After all, Starlink is now operating as a military asset. A vital military asset in the case of this conflict. And a potentially even more vital military asset in the wars of the future that are increasingly going to be fought with UAVs and other forms of remotely guided warfare. So while Russia obviously has cause for trying to disable Starlink in the context of this war, we shouldn’t assume that Russia is the only military power that’s working on ways of disabling this ‘private’ network of satellites:
“Chinese military observers have repeatedly said that the US is having a head start in space – regarded as a future battlefield by militaries across the world – by rushing to establish the next-generation military communications network based on satellite internet capability.”
Is Elon Musk’s rush to get Starlink up and running as soon as possible, damn the consequences, actually the Pentagon’s rush? That’s how this Chinese military analysis appears to view the situation. Quite understandably. The Pentagon and the Ukrainians clearly haven’t been wasting time testing out Starlink’s potential military applications. Applications that are only going to become more and more important as wars are increasingly fought by remotely controlled vehicles and smart munitions that rely on precise targeting:
Then there’s the possibility of Starlink establishing itself as a second internet. A second internet potentially accessible anywhere, one that governments will have no ability to influence. Well, except for the US government, implicitly:
Finally, there’s the orbital land grab underway. If Starlink is finished, it will occupy 80% of the available LEO space. That’s one company’s product taking up 80 percent of the entire planet’s orbit. What right does Starlink have to take this space? Well, it claimed it first. That’s it. So Starlink is being rewarded with a space monopoly for its decision to rush this entire project. You’d think more governments would have noticed this by now:
What are the odds that this orbital internet system that is increasingly demonstrating its enormous military utility and operates in a low orbit isn’t attacked some day? And what are the odds of avoiding something like Kessler’s Syndrome should that attack succeed? These are the questions we had better hope Elon Musk and the US military have already been asking. And no doubt they’ve indeed been asking these questions. It’s the fact that they’ve obviously determined that the risks are worth it that makes this such an ominous story. Starlink was always a giant gamble. And not just a giant initial gamble. It’s the kind of giant gamble that just keeps growing the longer the gamble goes.
Here’s a series of articles that underscore how the conflict in Ukraine is ushering in a new kind of Cold War 2.0 “Space Race”: the race for military-capable satellite clusters. As we’ve seen, SpaceX’s Starlink cluster of thousands of low-orbit satellites has enormous military potential. Potential that was put on display with the Russian invasion of Ukraine and Starlink’s rapid rollout of internet services for the country, with financial backing from the US government via USAID. The system proved itself so invaluable for modern warfare methods that it’s already been forced to deal with Russian electronic warfare attacks. As we’re going to see, it sounds like the Pentagon and other militaries have been mightily impressed with Starlink’s abilities to function while under attack. So much so that other militaries are looking into creating their own satellite clusters. And a new space race is born. The race to fill the planet’s orbit with as many satellite clusters as possible.
And while Starlink has apparently warded off Russia’s attacks so far, the cluster still carries an implicit giant existential risk of things going wrong. Specifically, the out-of-control chain-reaction destruction of satellites from space debris that could render the planet’s low orbit effectively unworkable (“Kessler syndrome”). It points toward the new form of mutually assured destruction (MAD) in the context of this race: once you have enough rival satellite clusters operating in the same space, the physical destruction of one cluster will potentially destroy all of them as the chain reaction plays out. It’s a better form of MADness than everyone nuking each other but still obviously not great.
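The chain-reaction logic behind the Kessler syndrome scenario can be sketched with a deliberately toy branching model. Every number here is an illustrative assumption, not a measured collision rate; the point is only that once each collision spawns, on average, more than one follow-on collision, the total runs away:

```python
# Toy model of a debris chain reaction: one collision produces fragments,
# and each fragment has some small chance of striking another satellite.
def cascade_collisions(fragments_per_hit: int, hit_probability: float,
                       generations: int) -> float:
    """Expected cumulative collisions after a single seed collision."""
    expected_total = 1.0   # the seed collision itself
    current = 1.0          # expected new collisions in the current generation
    for _ in range(generations):
        # each collision yields fragments; each fragment may cause a new hit
        current *= fragments_per_hit * hit_probability
        expected_total += current
    return expected_total

# With 100 fragments per hit and a 2% strike chance per fragment, each
# collision breeds 2 more -- after five generations, 63 expected collisions.
print(cascade_collisions(100, 0.02, 5))   # 63.0
```

The runaway condition is simply fragments-per-hit times strike probability exceeding 1, which is why packing ever more satellites into the same orbital shell (raising the strike probability) is the variable that matters.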
And that brings us to the following Politico article about what appears to be the next phase in the US’s arming of Ukraine: advanced Gray Eagle drones. They’re the US Army’s version of the notorious “Reaper” drones, capable of flying for 30 hours at a time and firing precision-guided Hellfire missiles. It sounds like the plan is to start delivering them to Ukraine and give a crash course in training that could result in them being unleashed on the battlefield in 4–5 weeks. It’s a potentially huge boost to Ukraine’s military potential. The kind of boost that will make Starlink’s internet services in Ukraine that much more of a vital military asset:
“The possible sale of the Gray Eagles, the Army’s version of the better-known Reaper, represents a new chapter in arms deliveries to Ukraine and could open the door to sending Kyiv even more sophisticated systems. The Gray Eagle would be a significant leap for the embattled Ukrainians because it can fly for up to 30 hours, gather vast amounts of surveillance data, and fire precision Hellfire missiles. The system is also reusable, unlike the smaller Switchblade loitering munitions the U.S. has already sent to the front lines.”
The advanced Gray Eagle drones won’t just be a major step up in terms of the drone technology already being delivered to Ukraine. It’s also seen as opening the door for even more sophisticated weapon systems. Sophisticated weapon systems that will presumably also be remotely piloted and highly dependent on satellite communications. And depending on how the war goes for Ukraine, those advanced remotely piloted weapons systems could even theoretically be piloted from outside Ukraine:
Yes, IF Ukrainian forces had satellite coverage of the entire country, they could potentially operate drones from countries like Poland. Or from anywhere in the world, really. The key factor is maintaining internet coverage throughout the battlefield. And that brings us back to SpaceX’s Starlink cluster of low-orbit satellites already playing a crucial role in Ukraine’s military operations. Including the piloting of drones. Which has already led to Russian electronic warfare attacks on the cluster. So as the reliance on more sophisticated drones becomes a larger part of Ukraine’s military strategy, the military significance of that low-orbit satellite cluster — and its validity as a military target that Russia might reasonably attack — is only growing too:
“Ukrainian drones have relied on Starlink to drop bombs on Russian forward positions. People in besieged cities near the Russian border have stayed in touch with loved ones via the encrypted satellites. Volodymyr Zelenskyy, the country’s president, has regularly updated his millions of social media followers on the back of Musk’s network, as well as holding Zoom calls with global politicians from U.S. President Joe Biden to French leader Emmanuel Macron.”
You can’t operate military drones without satellites. And with Starlink being the only satellite service left in Ukraine, that makes Starlink absolutely vital for the use of all those advanced drones Ukraine is slated to receive, along with long-range guided missile systems. Starlink is quickly becoming an absolutely vital military asset for the Ukrainian military. Which, of course, makes it a key strategic target for the Russians. It’s that dynamic that makes the apparent touting of Starlink’s supposed security so ominous. As we see, this report is highlighting how Starlink’s model of a low-orbit cluster of thousands of tiny satellites is fundamentally different from the traditional model of a few high-orbit satellites and far more robust against attacks like Russian hacking attempts. The report also highlights how each individual satellite can have its code modified as a means of attempting to thwart hacking attempts. And sure, those are all wonderful security features. But what we don’t see in this article is any mention of the enormous downside of the Starlink model of thousands of low orbit satellites: the risk of cascading space collisions, leading to an unstoppable chain reaction (the “Kessler syndrome” scenario). It’s a rather massive security downside to the Starlink model if you think about it. Sure, it’s robust against certain kinds of attacks...until it’s not, at which point it’s a completely unprecedented orbital disaster that could end up destroying far more than just Starlink satellites.
Also recall how there are so many Starlink satellites already in orbit — which is still just a fraction of the planned 40k+ satellites — that the system relies on the automated dynamic repositioning of the satellites to avoid collisions. In other words, there are so many satellites in this system that they couldn’t feasibly plan for each satellite to have its own independent orbital space. They have to share that space and just keep moving around to avoid collisions. What happens if just a handful of those satellites are hacked in a manner that causes them to lose the ability to accurately self-correct their orbits?
Also keep in mind that we shouldn’t be assuming that satellite clusters are solely vulnerable to remote hacking attacks. As Chinese military researchers reminded the world back in April, there are plenty of methods for physically attacking this cluster already. This includes microwave jammers that can disrupt communications or fry electrical components; millimeter-resolution lasers that can blind satellite sensors; and long-range anti-satellite (ASAT) missiles. But as these researchers also acknowledged, the risk of space junk from physical attacks on the cluster poses obvious major risks to China’s own satellites. At the same time, they point out that the decentralized nature of the network could keep it functional even after much of it has been incapacitated. As such, the researchers advise that China invest in new low-cost methods for effectively neutralizing the entire cluster, which could include China launching its own tinier satellites that could swarm Starlink. In other words, we’re already on track for a military satellite cluster space race:
“The Chinese researchers were particularly concerned by the potential military capabilities of the constellation, which they claim could be used to track hypersonic missiles; dramatically boost the data transmission speeds of U.S. drones and stealth fighter jets; or even ram into and destroy Chinese satellites. China has had some near misses with Starlink satellites already, having written to the U.N. last year to complain that the country’s space station was forced to perform emergency maneuvers to avoid “close encounters” with Starlink satellites in July and October 2021.”
From boosting the transmissions of drones and stealth fighters to tracking hypersonic missiles, the advanced military applications are endless. There’s even the possibility that Starlink satellites could be used to physically ram other satellites. So we shouldn’t be surprised to learn that military powers like China are endlessly alarmed by its existence and working on “soft kill” and “hard kill” methods for disabling it. Methods that might include creating a network of even smaller satellites that could swarm the Starlink cluster. But whatever those methods are, they have to address the fact that physically attacking the Starlink cluster could end up taking out a lot of other satellites in the process, and the cluster might still be operational as long as enough satellites remain functional. So if you’re going to physically incapacitate Starlink, you might just have to accept that Kessler syndrome is the price that must be paid. It points towards one of the dark dynamics at work here: due to Starlink’s relative robustness against physical attacks, there’s an incentive to conclude that inducing Kessler syndrome — and ‘leveling the playing field’ by hopefully incapacitating everyone’s satellites — could be seen as the best military option in a situation where the presence of Starlink is deemed to be an existential threat in the midst of a military conflict:
And don’t forget that SpaceX didn’t ask anyone for permission to start launching thousands of tiny satellites into orbit. It just did it. There’s nothing stopping other countries from doing the same. And there’s a highly compelling logic guiding them to do just that. The Cold War 2.0 logic of MADness. Along those lines, we have to ask: will the US create an even larger swarm of micro-satellites to take out the Chinese mini-satellite swarm before it takes out Starlink? And will the Chinese make an even larger swarm of nano-satellites? We’ll see, but as crazy as that sounds, it would all make a lot of sense in the context of our new space race MADness.
Here’s a story that has a prelude kind of feel to it: experts are warning that the Sun’s 11-year solar weather cycle is scheduled for its peak activity over the next five years, with direct implications for the thousands of satellites operating in Earth’s orbit. The risk of solar storms threatening satellites isn’t new. What is relatively new is the fact that Earth’s low orbits are now bristling with thousands of microsatellites, most notably SpaceX’s Starlink cluster. With around 2k microsatellites already in orbit, SpaceX is less than 1/20 of the way to its goal of 42k microsatellites in low orbit. The risk of Kessler’s syndrome — the out-of-control chain reaction of space junk — is growing with each new batch of satellites. And as we’re going to see, the warnings experts are issuing about the next five years are specifically warnings about small low-orbit satellites.
The gist of it is that increased solar radiation effectively causes the atmosphere to rise slightly. That rising atmosphere, in turn, creates drag on any low orbit satellites, with the smallest satellites experiencing the most drag. With enough drag, those satellites can end up plunging back to earth. It’s not a hypothetical. It’s exactly what happened to 40 out of 49 freshly launched Starlink satellites back in early February. As we’re going to see, SpaceX had plenty of warning about the increased solar activity but went ahead with the launch anyway and decided to have its satellites just try to ride out the increased atmospheric drag by positioning the satellites into a ‘low drag’ orientation. The strategy worked for just 9 out of the 49 satellites.
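For what it’s worth, the loss figures cited above work out as follows (the 40-of-49 numbers come from the reporting; nothing new is added here):

```python
# Quick check of the February launch losses described above.
launched = 49
lost = 40
survived = launched - lost

print(f"Satellites that rode out the storm: {survived}")   # 9
print(f"Share of the batch lost: {lost / launched:.0%}")   # 82%, i.e. roughly
# four-fifths of the payload, in line with the ~80% figure cited here.
```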
SpaceX declared it a success in crisis management. And that’s really the story here: the company that has been spearheading the reckless plan to populate the planet’s low orbits with microsatellites is turning out to be reckless in its deployment of that giant cluster. SpaceX could have just postponed the launch for a week but decided otherwise, losing 80% of the payload. And it’s less than 5% of the way done with launching all of the 42k planned satellites. There are presumably going to be a lot more launches over the next five years.
But increased solar activity doesn’t just pose a risk to the freshly-launched satellites sitting just above the atmosphere. It sounds like solar weather can also impact the ability to calculate the trajectories of objects in orbit. This could be particularly perilous for the Starlink cluster given that, as we’ve seen, the cluster operates on the assumption that the satellites are not in unique orbits and will routinely need to self-correct to avoid collisions using “autonomous collision avoidance systems”. So any solar storms that disrupt that ability to predict collisions and self-correct could be utterly disastrous, even if those capabilities are knocked out for relatively short periods of time.
And, of course, as we’ve also seen, Starlink has managed to turn itself into a viable military target now that it’s proving to be vital for Ukraine’s war efforts. The risk of some sort of direct military attack on the cluster is rising too for the foreseeable future. Imagine what kinds of military opportunities a powerful solar storm might create.
And that’s all why the warnings about increased solar storm threats to satellites aren’t quite like the similar warnings we’ve heard in solar cycles past. It’s a lot more crowded up there this time:
““Whatever you’ve experienced in the past two years doesn’t matter,” Fang said, as reported in SpaceNews. “Whatever you learned the past two years is not going to apply in the next five years.””
It’s like climate change for space weather. Except largely predictable. And not caused by human activity. But as experts warn, any satellites inhabiting the lower orbits of the planet are going to experience an extra choppy ride over the next five years. How many plunging satellites are we going to see during this period? Time will tell. Time and the occasional ‘shooting star’:
But note that it’s not just that an inflated atmosphere creates extra drag that threatens to capture the lowest-orbiting satellites. All that drag also makes calculating orbital trajectories more difficult. Don’t forget that the Starlink system avoids collisions by constantly watching for collisions and adjusting orbits as needed. It’s one of the requirements of throwing thousands of satellites into the same low orbit space. That whole process is going to be extra hard over the next five years as Starlink continues to flood that space with tens of thousands more micro-satellites:
Finally, as the experts remind us, this isn’t just some hypothetical risk to Starlink. Solar storms caused 40 Starlink satellites to plunge from space back in February:
A minor storm forced 40 Starlink satellites out of orbit. It’s not a great sign. Don’t forget that SpaceX plans to launch over 40,000 microsatellites once this cluster is completed. They aren’t even 1/20 of the way there yet and these problems are already happening. How many more satellites will SpaceX have in low orbit by the time the solar activity peaks over the next five years?
But as the following Time article from back in February describes, the botched launch of 40 out of 49 freshly launched satellites as a result of a minor solar storm was really more ominous than it might initially appear. Ominous because it indicates a high risk-taking threshold on the part of Starlink’s decision-making. Because as the article points out, Starlink had plenty of warning about the storm and could have simply postponed the launch for a week. Instead, they went ahead with the launch and just planned on putting the 49 satellites into ‘low-drag’ mode in the hopes of riding out the storm. In the end, SpaceX predictably tried to spin it all as a grand success in crisis management.
So just as increasingly powerful solar storms in coming years are something we can predict with a high degree of confidence based on past observations, reckless decision-making on the part of Starlink is also something observers can reasonably predict during this same period. A giant orbital gamble is scheduled for the next five years:
“SpaceX applauded itself for handling the problem with minimal risk to other satellites or to people or property on the ground—while ignoring the question of whether it would have been wiser simply to postpone the launch for a week or so. “This unique situation demonstrates the great lengths the Starlink team has gone to ensure the system is on the leading edge of on-orbit debris mitigation,” the company wrote.”
A job well done. That’s how SpaceX spun the loss of 40 out of 49 newly-launched satellites back in February. Observers weren’t quite as impressed. Especially given that this solar storm rated a mere 2 on the 5-point scale. And it was typical. So typical that the company had warning that it was coming. But for whatever reason, SpaceX decided to ignore those warnings and go ahead with the launch anyway. And that’s really the takeaway lesson here when it comes to assessing the upcoming orbital risks: SpaceX currently has a rather reckless track record. The whole idea of flooding the Earth’s lower orbit with tens of thousands of microsatellites is reckless to begin with. But even the implementation of that reckless project has been reckless. The reckless implementation of a reckless project is generally a recipe for bad outcomes:
And that brings us to NASA’s curiously timed warning issued on Feb 7, as the 40 satellites were in the process of plunging: NASA issued a five-page letter to the FCC expressing concerns about Starlink creating “the potential for a significant increase in the frequency of conjunction events, and possible impacts to NASA’s science and human space flight missions”. An increased frequency of conjunction events. It’s a polite way of warning about orbital disasters like the Kessler syndrome’s unstoppable chain reaction.
And don’t forget that the Starlink cluster was found to be responsible for over half of the weekly orbital encounters in the Fall of 2021. When NASA wrote that letter it already had plenty of evidence regarding the risks of Starlink:
Keep in mind that, with Starlink’s pivotal role in the conflict in Ukraine, odds are we’re not going to see too many attempts by NASA to rein in the platform any time soon. It’s too important for that project. Which, of course, is what makes Starlink a viable military target. A military target that only grows in military importance the more it grows in physical size. The risks just keep growing as the cluster grows. And as SpaceX demonstrated back in February, it has a plan to deal with those risks: just launch more satellites.
And who knows, maybe just launching more satellites will work. For a while. The issue is what happens when it doesn’t work anymore. And we already sort of know what happens. Kessler’s syndrome happens. A lack of warnings isn’t the problem. We’ve been warned. We just don’t seem to be actively heeding those warnings.
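The chain-reaction dynamic behind the Kessler syndrome warnings above can be sketched with a toy Monte Carlo model. Every parameter here (satellite count, strike probability, fragments per collision) is invented purely for illustration; real debris models are vastly more sophisticated. The point is the feedback loop: every collision creates fragments, and every fragment raises the odds of further collisions:

```python
import random

# Toy Monte Carlo sketch of a Kessler-style debris cascade.
# All parameters are illustrative assumptions, not real orbital data.

def cascade(n_sats=10_000, p_hit=1e-6, frags_per_hit=100,
            years=10, seed=42):
    """Each year, every debris fragment has a small chance of striking
    a satellite; each strike spawns more fragments, which raises the
    odds of further strikes. Returns the debris count year by year."""
    rng = random.Random(seed)
    debris = 100  # initial fragment population (assumption)
    history = []
    for _ in range(years):
        p = p_hit * n_sats  # per-fragment annual strike probability
        hits = sum(rng.random() < p for _ in range(debris))
        debris += hits * frags_per_hit
        history.append(debris)
    return history

print(cascade())
```

Note that the per-fragment strike probability scales with the number of satellites in the shell, which is why a constellation of tens of thousands of satellites changes the math so dramatically compared to a few hundred.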
Following up on the story about the growing risks that solar radiation poses to the low-orbit Starlink constellation of satellites and the troublingly casual response SpaceX had in the face of these risks — resulting in the loss of almost all of the satellites launched in early February — here’s a story about another vulnerability to the Starlink system that the company doesn’t appear to be taking very seriously:
A Belgian security researcher just published a how-to manual on hacking into the Starlink system. This isn’t the first story about hacking attempts being waged against Starlink. The system has already become a military hacking target given the role it’s playing in Ukraine’s military efforts. But we hadn’t heard about successful hacks before. That’s changed, and all you need to carry out the hack is access to one of the satellite receiver dishes and a $25 Raspberry Pi-based ‘modchip’. The researcher, Lennert Wouters, published the details of the hack on his GitHub page this month.
Now, it doesn’t sound like the hack gave Wouters control over the satellite. But it did reportedly give him access to layers of the communication network that users normally cannot access. He claims to have even managed to figure out how to communicate with the backend servers, making this attack a possible vector for accessing Starlink’s own computer networks.
Wouters informed Starlink of this vulnerability last year. The company has issued a software update that reportedly makes the hack more difficult, but not impossible. And here’s one of the key elements of this story: there’s no way to patch the hardware already deployed, because the vulnerability resides in software hardcoded onto a chip. So hackers are potentially going to be able to exploit this hack for as long as the ~2,000+ satellites already in orbit remain in orbit.
Here’s the other key detail to keep in mind: Wouters informed SpaceX about this vulnerability last year. So, ideally, SpaceX has already dealt with it and modified the hardcoded vulnerable software before launching any more satellites into space this year — like the ill-fated launch of 49 satellites back in February despite the incoming solar storm — and yet we’re getting no indication that the company has actually taken these steps. Instead, we’re getting assurances from the company that Starlink users don’t need to be at all concerned about their own security. So all of the satellites launched this year could have this vulnerability for all we know at this point:
“The researcher notified Starlink of the flaws last year and the company paid Wouters through its bug bounty scheme for identifying the vulnerabilities. Wouters says that while SpaceX has issued an update to make the attack harder (he changed the modchip in response), the underlying issue can’t be fixed unless the company creates a new version of the main chip. All existing user terminals are vulnerable, Wouters says.”
This is clearly a ‘White Hat’ hacking story. Lennert Wouters, a Belgian academic security researcher, isn’t trying to take down the Starlink constellation. But should any ‘black hat’ hackers decide to replicate Wouters’ attack, it sounds like they will be able to do so. At least for the Starlink hardware that is already deployed, because it sounds like the vulnerability resides in firmware stored on a chip that can’t be updated. SpaceX has issued some sort of patch that apparently makes the attack more difficult to execute, but it’s still possible.
So when we learn that Wouters informed SpaceX about this vulnerability last year, and that it’s a vulnerability that can’t be fixed after the satellites are launched, we have to ask: has SpaceX updated that hardwired firmware on all the satellites launched since Wouters disclosed the vulnerability? Note that we are hearing nothing about the company updating the hardware on the satellites launched this year. That’s part of what makes this story rather unsettling. It’s another indication that SpaceX is prioritizing speed and ‘getting there first’ over security. So when we learn that Wouters is describing this hack on his GitHub account, we can be pretty confident A LOT of other people are going to be attempting this exact hack, because SpaceX can’t actually patch it. At least not entirely:
Thankfully, Wouters makes it sound like the hack doesn’t actually give the attacker the ability to take down the satellite systems, which would be a recipe for Kessler’s Syndrome. Don’t forget that the Starlink satellites don’t fly fixed, independent orbits; the ability to make on-the-fly course corrections is crucial for how the system operates while avoiding a chain reaction of space junk. And yet Wouters also warns that the hack can be used to learn about how the Starlink network operates. He has even communicated with backend servers with it! So while the hack itself may not be devastating, it sounds like it could be used to learn how to execute genuinely devastating attacks:
Don’t forget that as Ukraine becomes more and more reliant on long-range missile platforms and drones, Starlink is only going to become more and more of a tempting military target. We can be pretty confident Russian hackers have already figured out how to replicate this hack and are currently working out what additional attacks can be piggy-backed off of it. What will they find? We’ll see. Or rather, they’ll see. The hackers presumably aren’t going to tell the world if they figure out how to exploit this hack to spy on traffic. But it’s also worth noting how this kind of vulnerability could actually increase the physical safety of the Starlink cluster. How so? Because inducing some sort of catastrophic Kessler’s Syndrome chain reaction of space junk as a means of disabling this system will be a lot less attractive if Russia’s military is able to easily hack Starlink and just spy on its traffic instead. Silver linings and all that.
How massive is the Pentagon’s fake online activity? Who exactly are they targeting? And why are they repeatedly getting caught? Those are the big questions raised by a new Washington Post report about the review of the Pentagon’s online ‘persuasion’ activities. The review was prompted by a report issued last month by Graphika and the Stanford Internet Observatory. The report basically describes a situation where fake online personas are being extensively created by Pentagon employees — or contractors — and also extensively caught and deleted by platforms like Facebook. It’s that propensity for getting caught that appears to be a major factor in this review.
So is the lying and disinformation spread as part of these influence operations also part of the review? Sort of. It sounds like there’s an assessment regarding whether or not the lies actually work. That’s sort of the good news in this story: the Pentagon might dial back on the online deception. Not because it’s wrong but because it doesn’t seem to actually work. That includes the fake personas. They just don’t seem to be as persuasive as someone operating a social media account as an overt employee of the DoD.
And what about the years of hysterics about ‘Russian Trolls’ and the Internet Research Agency tampering with Americans’ fragile psyches? Yeah, that all appears to be part of the justification for all this. In fact, as the article points out, Congress passed a law in 2019 affirming the military’s right to conduct operations in the “information environment” to defend the United States and to push back against foreign disinformation aimed at undermining its interests. But as the second and third article excerpts — from 2011 and 2009 — remind us, this didn’t start in 2019. The Pentagon’s budget for foreign influence operations in 2009 alone was $4.7 billion. That’s all part of the context of the Pentagon’s ongoing review of its global PsyOp activities. A review that will presumably have a PsyOp-ed version eventually issued to the public, where we’re told everything is great and there’s no problem at all: