Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself, HERE.
Please consider supporting THE WORK DAVE EMORY DOES.
This broadcast was recorded in one 60-minute segment.
Introduction: Continuing the discussion from FTR #1076, the broadcast recaps key aspects of analysis of the Cambridge Analytica scandal.
In our last program, we noted that both the internet (DARPA projects including Project Agile) and the German Nazi Party had their origins as counterinsurgency gambits. Noting Hitler’s speech before the Industry Club of Dusseldorf, in which he equated communism with democracy, we highlight how the Cambridge Analytica scandal reflects the counterinsurgency origins of the Internet, and how the Cambridge Analytica affair embodies anti-democracy as counterinsurgency.
Key aspects of the Cambridge Analytica affair include:
- The use of psychographic personality testing on Facebook that is used for political advantage: ” . . . . For several years, a data firm eventually hired by the Trump campaign, Cambridge Analytica, has been using Facebook as a tool to build psychological profiles that represent some 230 million adult Americans. A spinoff of a British consulting company and sometime-defense contractor known for its counterterrorism ‘psy ops’ work in Afghanistan, the firm does so by seeding the social network with personality quizzes. Respondents — by now hundreds of thousands of us, mostly female and mostly young but enough male and older for the firm to make inferences about others with similar behaviors and demographics — get a free look at their Ocean scores. Cambridge Analytica also gets a look at their scores and, thanks to Facebook, gains access to their profiles and real names. . . .”
- The parent company of Cambridge Analytica–SCL–was deeply involved with counterterrorism “psy-ops” in Afghanistan, embodying the essence of the counterinsurgency dynamic at the root of the development of the Internet. The use of online data to subvert democracy recalls Hitler’s speech to the Industry Club of Dusseldorf, in which he equated democracy with communism: ” . . . . Cambridge Analytica was a company spun out of SCL Group, a British military contractor that worked in information operations for armed forces around the world. It was conducting research on how to scale and digitise information warfare – the use of information to confuse or degrade the efficacy of an enemy. . . . As director of research, Wylie’s original role was to map out how the company would take traditional information operations tactics into the online space – in particular, by profiling people who would be susceptible to certain messaging. This morphed into the political arena. After Wylie left, the company worked on Donald Trump’s US presidential campaign . . . .”
- Cambridge Analytica whistleblower Christopher Wylie’s observations on the anti-democratic nature of the firm’s work: ” . . . . It was this shift from the battlefield to politics that made Wylie uncomfortable. ‘When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy,’ he says. ‘But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.’ . . . .”
- Wylie’s observations on how Cambridge Analytica’s methodology can be used to build a fascist political movement: ” . . . . One of the reasons these techniques are so insidious is that being a target of a disinformation campaign is ‘usually a pleasurable experience’, because you are being fed content with which you are likely to agree. ‘You are being guided through something that you want to be true,’ Wylie says. To build an insurgency, he explains, you first target people who are more prone to having erratic traits, paranoia or conspiratorial thinking, and get them to ‘like’ a group on social media. They start engaging with the content, which may or may not be true; either way ‘it feels good to see that information’. When the group reaches 1,000 or 2,000 members, an event is set up in the local area. Even if only 5% show up, ‘that’s 50 to 100 people flooding a local coffee shop’, Wylie says. This, he adds, validates their opinion because other people there are also talking about ‘all these things that you’ve been seeing online in the depths of your den and getting angry about’. People then start to believe the reason it’s not shown on mainstream news channels is because ‘they don’t want you to know what the truth is’. As Wylie sums it up: ‘What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.’ . . . .”
- Wylie’s observation that Facebook was “All In” on the Cambridge Analytica machinations: ” . . . . ‘Facebook has known about what Cambridge Analytica was up to from the very beginning of those projects,” Wylie claims. “They were notified, they authorised the applications, they were given the terms and conditions of the app that said explicitly what it was doing. They hired people who worked on building the app. I had legal correspondence with their lawyers where they acknowledged it happened as far back as 2016.’ . . . .”
- The decisive participation of “Spy Tech” firm Palantir in the Cambridge Analytica operation: Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. In addition to his opposition to democracy because it allegedly is inimical to wealth creation, Thiel doesn’t think women should be allowed to vote and holds Nazi legal theoretician Carl Schmitt in high regard. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
- The use of “dark posts” by the Cambridge Analytica team. (We have noted that Brad Parscale has reassembled the old Cambridge Analytica team for Trump’s 2020 election campaign. It seems probable that AOC’s millions of online followers, as well as the “Bernie Bots,” will be getting “dark posts” crafted by AIs scanning their online efforts.) ” . . . . One recent advertising product on Facebook is the so-called ‘dark post’: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times. . . .”
Supplementing the discussion about Cambridge Analytica, the program reviews information from FTR #718 about Facebook’s apparent involvement with elements and individuals linked to CIA and DARPA: ” . . . . Facebook’s most recent round of funding was led by a company called Greylock Venture Capital, who put in the sum of $27.5m. One of Greylock’s senior partners is called Howard Cox, another former chairman of the NVCA, who is also on the board of In-Q-Tel. What’s In-Q-Tel? Well, believe it or not (and check out their website), this is the venture-capital wing of the CIA. After 9/11, the US intelligence community became so excited by the possibilities of new technology and the innovations being made in the private sector, that in 1999 they set up their own venture capital fund, In-Q-Tel, which ‘identifies and partners with companies developing cutting-edge technologies to help deliver these solutions to the Central Intelligence Agency and the broader US Intelligence Community (IC) to further their missions’. . . .”
More about the CIA/DARPA links to the development of Facebook: ” . . . . The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company’s key areas of expertise are in ‘data mining technologies’. Breyer also served on the board of R&D firm BBN Technologies, which was one of those companies responsible for the rise of the internet. Dr Anita Jones joined the firm, which included Gilman Louie. She had also served on the In-Q-Tel’s board, and had been director of Defence Research and Engineering for the US Department of Defence. She was also an adviser to the Secretary of Defence and overseeing the Defence Advanced Research Projects Agency (DARPA), which is responsible for high-tech, high-end development. . . .”
Program Highlights Include: Review of Facebook’s plans to use brain-to-computer technology to operate its platform, thereby enabling the recording and databasing of people’s thoughts; Review of Facebook’s employment of former DARPA head Regina Dugan to implement the brain-to-computer technology; Review of Facebook’s Building 8–designed to duplicate DARPA; Review of Facebook’s hiring of the Atlantic Council to police the social medium’s online content; Review of Facebook’s partnering with Narendra Modi’s Hindutva fascist government in India; Review of Facebook’s employment of Ukrainian fascist Kateryna Kruk to manage the social medium’s Ukrainian content.
1a. Facebook personality tests that allegedly let you learn what makes you tick also allow whoever set up the test to learn what makes you tick. Since the test is done through Facebook, its creators can match your results to your real identity.
If the Facebook personality test in question happens to report your “Ocean score” (Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism), that means the test you’re taking was created by Cambridge Analytica, a company with one of Donald Trump’s billionaire sugar-daddies, Robert Mercer, as a major investor. And it’s Cambridge Analytica that gets to learn all those fun facts about your psychological profile too. And Steve Bannon sat on its board:
“The Secret Agenda of a Facebook Quiz” by McKenzie Funk; The New York Times; 1/19/2017.
Do you panic easily? Do you often feel blue? Do you have a sharp tongue? Do you get chores done right away? Do you believe in the importance of art?
If ever you’ve answered questions like these on one of the free personality quizzes floating around Facebook, you’ll have learned what’s known as your Ocean score: How you rate according to the big five psychological traits of Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. You may also be responsible the next time America is shocked by an election upset.
For several years, a data firm eventually hired by the Trump campaign, Cambridge Analytica, has been using Facebook as a tool to build psychological profiles that represent some 230 million adult Americans. A spinoff of a British consulting company and sometime-defense contractor known for its counterterrorism “psy ops” work in Afghanistan, the firm does so by seeding the social network with personality quizzes. Respondents — by now hundreds of thousands of us, mostly female and mostly young but enough male and older for the firm to make inferences about others with similar behaviors and demographics — get a free look at their Ocean scores. Cambridge Analytica also gets a look at their scores and, thanks to Facebook, gains access to their profiles and real names.
Cambridge Analytica worked on the “Leave” side of the Brexit campaign. In the United States it takes only Republicans as clients: Senator Ted Cruz in the primaries, Mr. Trump in the general election. Cambridge is reportedly backed by Robert Mercer, a hedge fund billionaire and a major Republican donor; a key board member is Stephen K. Bannon, the head of Breitbart News who became Mr. Trump’s campaign chairman and is set to be his chief strategist in the White House.
In the age of Facebook, it has become far easier for campaigners or marketers to combine our online personas with our offline selves, a process that was once controversial but is now so commonplace that there’s a term for it, “onboarding.” Cambridge Analytica says it has as many as 3,000 to 5,000 data points on each of us, be it voting histories or full-spectrum demographics — age, income, debt, hobbies, criminal histories, purchase histories, religious leanings, health concerns, gun ownership, car ownership, homeownership — from consumer-data giants.
No data point is very informative on its own, but profiling voters, says Cambridge Analytica, is like baking a cake. “It’s the sum of the ingredients,” its chief executive officer, Alexander Nix, told NBC News. Because the United States lacks European-style restrictions on second- or thirdhand use of our data, and because our freedom-of-information laws give data brokers broad access to the intimate records kept by local and state governments, our lives are open books even without social media or personality quizzes.
Ever since the advertising executive Lester Wunderman coined the term “direct marketing” in 1961, the ability to target specific consumers with ads — rather than blanketing the airwaves with mass appeals and hoping the right people will hear them — has been the marketer’s holy grail. What’s new is the efficiency with which individually tailored digital ads can be tested and matched to our personalities. Facebook is the microtargeter’s ultimate weapon.
The explosive growth of Facebook’s ad business has been overshadowed by its increasing role in how we get our news, real or fake. In July, the social network posted record earnings: quarterly sales were up 59 percent from the previous year, and profits almost tripled to $2.06 billion. While active users of Facebook — now 1.71 billion monthly active users — were up 15 percent, the real story was how much each individual user was worth. The company makes $3.82 a year from each global user, up from $2.76 a year ago, and an average of $14.34 per user in the United States, up from $9.30 a year ago. Much of this growth comes from the fact that advertisers not only have an enormous audience in Facebook but an audience they can slice into the tranches they hope to reach.
One recent advertising product on Facebook is the so-called “dark post”: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times.
Imagine the full capability of this kind of “psychographic” advertising. In future Republican campaigns, a pro-gun voter whose Ocean score ranks him high on neuroticism could see storm clouds and a threat: The Democrat wants to take his guns away. A separate pro-gun voter deemed agreeable and introverted might see an ad emphasizing tradition and community values, a father and son hunting together.
In this election, dark posts were used to try to suppress the African-American vote. According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous “super predator” line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake. Federal Election Commission rules are unclear when it comes to Facebook posts, but even if they do apply and the facts are skewed and the dog whistles loud, the already weakening power of social opprobrium is gone when no one else sees the ad you see — and no one else sees “I’m Donald Trump, and I approved this message.”
While Hillary Clinton spent more than $140 million on television spots, old-media experts scoffed at Trump’s lack of old-media ad buys. Instead, his campaign pumped its money into digital, especially Facebook. One day in August, it flooded the social network with 100,000 ad variations, so-called A/B testing on a biblical scale, surely more ads than could easily be vetted by human eyes for compliance with Facebook’s “community standards.”
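To make the mechanics of the “dark post” microtargeting described above a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of how an ad variant might be selected from a voter’s Ocean scores. The ad copy, trait mapping and field names are invented for illustration; nothing here is drawn from Cambridge Analytica’s actual systems.

```python
# Illustrative sketch only: selecting an ad variant from Big Five ("Ocean") scores.
# The trait-to-message mapping below is a hypothetical example, not anything
# taken from Cambridge Analytica's real code or ad library.

def pick_ad_variant(ocean_scores: dict) -> str:
    """Return the ad message keyed to the voter's most pronounced trait."""
    variants = {
        "neuroticism": "Threat-framed ad: storm clouds, 'they want to take your guns away'",
        "agreeableness": "Community-framed ad: tradition, a father and son hunting together",
        "openness": "Change-framed ad: 'a fresh start, a break from the old politics'",
        "conscientiousness": "Order-framed ad: 'restore law, order and accountability'",
        "extraversion": "Social-proof ad: 'join thousands of your neighbors already on board'",
    }
    # Pick the trait on which this person scores highest and serve the matching copy.
    dominant_trait = max(ocean_scores, key=ocean_scores.get)
    return variants[dominant_trait]

# Example: a voter scoring high on neuroticism sees the fear-based variant.
voter = {"openness": 0.4, "conscientiousness": 0.5, "extraversion": 0.3,
         "agreeableness": 0.45, "neuroticism": 0.8}
print(pick_ad_variant(voter))
```

In practice, the article describes thousands of such variants being A/B tested simultaneously (100,000 in a single day, by its account) rather than a handful of hand-written messages.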
1b. Christopher Wylie–the former head of research at Cambridge Analytica who became one of the key insider whistle-blowers about how Cambridge Analytica operated and the extent of Facebook’s knowledge about it–gave an interview last month to Campaign Magazine. (We dealt with Cambridge Analytica in FTR #‘s 946, 1021.)
Wylie recounts how, as director of research at Cambridge Analytica, his original role was to determine how the company could use the information warfare techniques used by SCL Group – Cambridge Analytica’s parent company and a defense contractor providing psy op services for the British military. Wylie’s job was to adapt the psychological warfare strategies that SCL had been using on the battlefield to the online space. As Wylie put it:
“ . . . . When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy…But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists. . . . .”
Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency. It starts with targeting people more prone to having erratic traits, paranoia or conspiratorial thinking, and getting them to “like” a group on social media. The information you’re feeding this target audience may or may not be real. The important thing is that it’s content they already agree with, so that “it feels good to see that information.” Keep in mind that one of the goals of the ‘psychographic profiling’ Cambridge Analytica engaged in was to identify traits like neuroticism.
Wylie goes on to describe the next step in this insurgency-building technique: keep building up the interest in the social media group that you’re directing this target audience towards until it hits around 1,000–2,000 people. Then set up a real life event dedicated to the chosen disinformation topic in some local area and try to get as many of your target audience to show up. Even if only 5 percent of them show up, that’s still 50–100 people converging on some local coffee shop or whatever. The people meet each other in real life and start talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”. This target audience starts believing that no one else is talking about this stuff because “they don’t want you to know what the truth is”. As Wylie puts it, “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”
In the early hours of 17 March 2018, the 28-year-old Christopher Wylie tweeted: “Here we go….”
Later that day, The Observer and The New York Times published the story of Cambridge Analytica’s misuse of Facebook data, which sent shockwaves around the world, caused millions to #DeleteFacebook, and led the UK Information Commissioner’s Office to fine the site the maximum penalty for failing to protect users’ information. Six weeks after the story broke, Cambridge Analytica closed. . . .
. . . . He believes that poor use of data is killing good ideas. And that, unless effective regulation is enacted, society’s worship of algorithms, unchecked data capture and use, and the likely spread of AI to all parts of our lives is causing us to sleepwalk into a bleak future.
Not only are such circumstances a threat to adland – why do you need an ad to tell you about a product if an algorithm is choosing it for you? – it is a threat to human free will. “Currently, the only morality of the algorithm is to optimise you as a consumer and, in many cases, you become the product. There are very few examples in human history of industries where people themselves become products and those are scary industries – slavery and the sex trade. And now, we have social media,” Wylie says.
“The problem with that, and what makes it inherently different to selling, say, toothpaste, is that you’re selling parts of people or access to people. People have an innate moral worth. If we don’t respect that, we can create industries that do terrible things to people. We are [heading] blindly and quickly into an environment where this mentality is going to be amplified through AI everywhere. We’re humans, we should be thinking about people first.”
His words carry weight, because he’s been on the dark side. He has seen what can happen when data is used to spread misinformation, create insurgencies and prey on the worst of people’s characters.
The political battlefield
A quick refresher on the scandal, in Wylie’s words: Cambridge Analytica was a company spun out of SCL Group, a British military contractor that worked in information operations for armed forces around the world. It was conducting research on how to scale and digitise information warfare – the use of information to confuse or degrade the efficacy of an enemy. . . .
. . . . As director of research, Wylie’s original role was to map out how the company would take traditional information operations tactics into the online space – in particular, by profiling people who would be susceptible to certain messaging.
This morphed into the political arena. After Wylie left, the company worked on Donald Trump’s US presidential campaign . . . .
. . . . It was this shift from the battlefield to politics that made Wylie uncomfortable. “When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy,” he says.
“But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.”
One of the reasons these techniques are so insidious is that being a target of a disinformation campaign is “usually a pleasurable experience”, because you are being fed content with which you are likely to agree. “You are being guided through something that you want to be true,” Wylie says.
To build an insurgency, he explains, you first target people who are more prone to having erratic traits, paranoia or conspiratorial thinking, and get them to “like” a group on social media. They start engaging with the content, which may or may not be true; either way “it feels good to see that information”.
When the group reaches 1,000 or 2,000 members, an event is set up in the local area. Even if only 5% show up, “that’s 50 to 100 people flooding a local coffee shop”, Wylie says. This, he adds, validates their opinion because other people there are also talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”.
People then start to believe the reason it’s not shown on mainstream news channels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.” . . . .
. . . . Psychographic potential
. . . . But Wylie argues that people underestimate what algorithms allow you to do in profiling. “I can take pieces of information about you that seem innocuous, but what I’m able to do with an algorithm is find patterns that correlate to underlying psychological profiles,” he explains.
“I can ask whether you listen to Justin Bieber, and you won’t feel like I’m invading your privacy. You aren’t necessarily aware that when you tell me what music you listen to or what TV shows you watch, you are telling me some of your deepest and most personal attributes.” . . . .
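Wylie’s Justin Bieber example describes a standard statistical technique: fit a model that maps seemingly innocuous signals (page likes, music and TV preferences) to a personality trait measured on the subset of users who actually took a quiz, then apply that model to everyone else. The following is a minimal sketch of that idea with fabricated data and made-up feature names; the real models were reportedly trained at vastly larger scale on harvested Facebook profiles.

```python
# Minimal sketch of inferring a personality trait from "like" signals.
# All data and feature names here are made up; this only illustrates the
# pattern-finding idea Wylie describes, not any real model.
import numpy as np

# Columns: did the user like [Justin Bieber, a horror-movie page, a gun-rights page, a poetry page]?
X = np.array([
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

# Hypothetical self-reported neuroticism scores (0-1) from users who took the quiz.
y = np.array([0.3, 0.8, 0.6, 0.5, 0.4, 0.7])

# Ordinary least squares finds weights correlating each "like" with the trait.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Those weights can then score users who never took the quiz at all.
new_user_likes = np.array([0, 1, 1, 0], dtype=float)
predicted_neuroticism = new_user_likes @ weights
print(f"Predicted neuroticism: {predicted_neuroticism:.2f}")
```

The particular model is beside the point; any regression or classifier that finds correlations between likes and quiz scores will do, which is why the “Ocean score” quizzes themselves were the critical data-collection step.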
. . . . Clashes with Facebook
Wylie is opposed to self-regulation, because industries won’t become consumer champions – they are, he says, too conflicted.
“Facebook has known about what Cambridge Analytica was up to from the very beginning of those projects,” Wylie claims. “They were notified, they authorised the applications, they were given the terms and conditions of the app that said explicitly what it was doing. They hired people who worked on building the app. I had legal correspondence with their lawyers where they acknowledged it happened as far back as 2016.” . . . .
1c. In FTR #946, we examined Cambridge Analytica, the Trump- and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.
Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.
The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.
“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .
. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .
. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
. . . . But he [Wylie] said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .
2a. There are indications that elements in and/or associated with CIA and Pentagon/DARPA were involved with Facebook almost from the beginning: ” . . . . Facebook’s most recent round of funding was led by a company called Greylock Venture Capital, who put in the sum of $27.5m. One of Greylock’s senior partners is called Howard Cox, another former chairman of the NVCA, who is also on the board of In-Q-Tel. What’s In-Q-Tel? Well, believe it or not (and check out their website), this is the venture-capital wing of the CIA. After 9/11, the US intelligence community became so excited by the possibilities of new technology and the innovations being made in the private sector, that in 1999 they set up their own venture capital fund, In-Q-Tel, which ‘identifies and partners with companies developing cutting-edge technologies to help deliver these solutions to the Central Intelligence Agency and the broader US Intelligence Community (IC) to further their missions’. . . .”
“With Friends Like These . . .” by Tom Hodgkinson; guardian.co.uk; 1/14/2008.
. . . . The third board member of Facebook is Jim Breyer. He is a partner in the venture capital firm Accel Partners, who put $12.7m into Facebook in April 2005. On the board of such US giants as Wal-Mart and Marvel Entertainment, he is also a former chairman of the National Venture Capital Association (NVCA). Now these are the people who are really making things happen in America, because they invest in the new young talent, the Zuckerbergs and the like. Facebook’s most recent round of funding was led by a company called Greylock Venture Capital, who put in the sum of $27.5m. One of Greylock’s senior partners is called Howard Cox, another former chairman of the NVCA, who is also on the board of In-Q-Tel. What’s In-Q-Tel? Well, believe it or not (and check out their website), this is the venture-capital wing of the CIA. After 9/11, the US intelligence community became so excited by the possibilities of new technology and the innovations being made in the private sector, that in 1999 they set up their own venture capital fund, In-Q-Tel, which “identifies and partners with companies developing cutting-edge technologies to help deliver these solutions to the Central Intelligence Agency and the broader US Intelligence Community (IC) to further their missions”. . . .
2b. More about the CIA/Pentagon link to the development of Facebook: ” . . . . The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company’s key areas of expertise are in ‘data mining technologies’. Breyer also served on the board of R&D firm BBN Technologies, which was one of those companies responsible for the rise of the internet. Dr Anita Jones joined the firm, which included Gilman Louie. She had also served on the In-Q-Tel’s board, and had been director of Defence Research and Engineering for the US Department of Defence. She was also an adviser to the Secretary of Defence and overseeing the Defence Advanced Research Projects Agency (DARPA), which is responsible for high-tech, high-end development. . . .”
“Facebook–the CIA Conspiracy” by Matt Greenop; The New Zealand Herald; 8/8/2007.
. . . . Facebook’s first round of venture capital funding ($US500,000) came from former Paypal CEO Peter Thiel. Author of anti-multicultural tome ‘The Diversity Myth’, he is also on the board of radical conservative group VanguardPAC.
The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company’s key areas of expertise are in “data mining technologies”.
Breyer also served on the board of R&D firm BBN Technologies, which was one of those companies responsible for the rise of the internet.
Dr Anita Jones joined the firm, which included Gilman Louie. She had also served on the In-Q-Tel’s board, and had been director of Defence Research and Engineering for the US Department of Defence.
She was also an adviser to the Secretary of Defence and overseeing the Defence Advanced Research Projects Agency (DARPA), which is responsible for high-tech, high-end development. . . .
3. Facebook wants to read your thoughts.
- ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.
At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.
“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.
“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”
Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.
“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”
She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.
Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.
Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.
The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.
“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”
…
4. The broadcast then reviews (from FTR #1074) Facebook’s inextricable link with the Hindutva fascist BJP of Narendra Modi:
Key elements of discussion and analysis include:
- Indian politics has been largely dominated by fake news, spread by social media: ” . . . . In the continuing Indian elections, as 900 million people are voting to elect representatives to the lower house of the Parliament, disinformation and hate speech are drowning out truth on social media networks in the country and creating a public health crisis like the pandemics of the past century. This contagion of a staggering amount of morphed images, doctored videos and text messages is spreading largely through messaging services and influencing what India’s voters watch and read on their smartphones. A recent study by Microsoft found that over 64 percent Indians encountered fake news online, the highest reported among the 22 countries surveyed. . . . These platforms are filled with fake news and disinformation aimed at influencing political choices during the Indian elections. . . . ”
- Narendra Modi’s Hindutva fascist BJP has been the primary beneficiary of fake news, and his regime has partnered with Facebook: ” . . . . The hearing was an exercise in absurdist theater because the governing B.J.P. has been the chief beneficiary of divisive content that reaches millions because of the way social media algorithms, especially Facebook, amplify ‘engaging’ articles. . . .”
- Rajesh Jain is among those BJP functionaries who serve Facebook, as well as the Hindutva fascists: ” . . . . By the time Rajesh Jain was scaling up his operations in 2013, the BJP’s information technology (IT) strategists had begun interacting with social media platforms like Facebook and its partner WhatsApp. If supporters of the BJP are to be believed, the party was better than others in utilising the micro-targeting potential of the platforms. However, it is also true that Facebook’s employees in India conducted training workshops to help the members of the BJP’s IT cell. . . .”
- Dr. Hiren Joshi is another of the BJP operatives who is heavily involved with Facebook. ” . . . . Also assisting the social media and online teams to build a larger-than-life image for Modi before the 2014 elections was a team led by his right-hand man Dr Hiren Joshi, who (as already stated) is a very important adviser to Modi whose writ extends way beyond information technology and social media. . . . Joshi has had, and continues to have, a close and long-standing association with Facebook’s senior employees in India. . . .”
- Shivnath Thukral, who was hired by Facebook in 2017 to be its Public Policy Director for India & South Asia, worked with Joshi’s team in 2014. ” . . . . The third team, that was intensely focused on building Modi’s personal image, was headed by Hiren Joshi himself who worked out of the then Gujarat Chief Minister’s Office in Gandhinagar. The members of this team worked closely with staffers of Facebook in India, more than one of our sources told us. As will be detailed later, Shivnath Thukral, who is currently an important executive in Facebook, worked with this team. . . .”
- An ostensibly remorseful BJP politician–Prodyut Bora–highlighted the dramatic effect Facebook and its WhatsApp subsidiary have had on India’s politics: ” . . . . In 2009, social media platforms like Facebook and WhatsApp had a marginal impact in India’s 20 big cities. By 2014, however, it had virtually replaced the traditional mass media. In 2019, it will be the most pervasive media in the country. . . .”
- A concise statement about the relationship between the BJP and Facebook was issued by BJP tech officer Vinit Goenka: ” . . . . At one stage in our interview with [Vinit] Goenka that lasted over two hours, we asked him a pointed question: ‘Who helped whom more, Facebook or the BJP?’ He smiled and said: ‘That’s a difficult question. I wonder whether the BJP helped Facebook more than Facebook helped the BJP. You could say, we helped each other.’ . . .”
5. In Ukraine, as well, Facebook and the OUN/B successor organizations function symbiotically:
CrowdStrike–at the epicenter of the supposed Russian hacking controversy–is noteworthy. Its co-founder and chief technology officer, Dmitri Alperovitch, is a senior fellow at the Atlantic Council, financed by elements that are at the foundation of fanning the flames of the New Cold War: “In this respect, it is worth noting that one of the commercial cybersecurity companies the government has relied on is Crowdstrike, which was one of the companies initially brought in by the DNC to investigate the alleged hacks. . . . Dmitri Alperovitch is also a senior fellow at the Atlantic Council. . . . The connection between [Crowdstrike co-founder and chief technology officer Dmitri] Alperovitch and the Atlantic Council has gone largely unremarked upon, but it is relevant given that the Atlantic Council—which is funded in part by the US State Department, NATO, the governments of Latvia and Lithuania, the Ukrainian World Congress, and the Ukrainian oligarch Victor Pinchuk—has been among the loudest voices calling for a new Cold War with Russia. As I pointed out in the pages of The Nation in November, the Atlantic Council has spent the past several years producing some of the most virulent specimens of the new Cold War propaganda. . . . ”
(Note that the Atlantic Council is dominant in the array of individuals and institutions constituting the Ukrainian fascist/Facebook cooperative effort. We have spoken about the Atlantic Council in numerous programs, including FTR #943. The organization has deep operational links to elements of U.S. intelligence, as well as the OUN/B milieu that dominates the Ukrainian diaspora.)
” . . . . Facebook is partnering with the Atlantic Council in another effort to combat election-related propaganda and misinformation from proliferating on its service. The social networking giant said Thursday that a partnership with the Washington D.C.-based think tank would help it better spot disinformation during upcoming world elections. The partnership is one of a number of steps Facebook is taking to prevent the spread of propaganda and fake news after failing to stop it from spreading on its service in the run up to the 2016 U.S. presidential election. . . .”
Since autumn 2018, Facebook has looked to hire a public policy manager for Ukraine. The job came after years of Ukrainians criticizing the platform for takedowns of its activists’ pages and the spread of [alleged] Russian disinfo targeting Kyiv. Now, it appears to have one: @Kateryna_Kruk.— Christopher Miller (@ChristopherJM) June 3, 2019

Oleh Tyahnybok, leader of the OUN/B successor organization Svoboda, for which Kateryna Kruk worked.
Kateryna Kruk:
- Is Facebook’s Public Policy Manager for Ukraine as of May of this year, according to her LinkedIn page.
- Worked as an analyst and TV host for the Ukrainian ‘anti-Russian propaganda’ outfit StopFake. StopFake is the creation of Irena Chalupa, who works for the Atlantic Council and the Ukrainian government and appears to be the sister of Andrea and Alexandra Chalupa.
- Joined the “Kremlin Watch” team at the European Values think-tank, in October of 2017.
- Received the Atlantic Council’s Freedom award for her communications work during the Euromaidan protests in June of 2014.
- Worked for OUN/B successor organization Svoboda during the Euromaidan protests: “ . . . ‘There are people who don’t support Svoboda because of some of their slogans, but they know it’s the most active political party and go to them for help,’ said Svoboda volunteer Kateryna Kruk. . . .”
- Also has a number of articles on the Atlantic Council’s Blog. Here’s a blog post from August of 2018 where she advocates for the creation of an independent Ukrainian Orthodox Church to diminish the influence of the Russian Orthodox Church.
- According to her LinkedIn page has also done extensive work for the Ukrainian government. From March 2016 to January 2017 she was the Strategic Communications Manager for the Ukrainian parliament where she was responsible for social media and international communications. From January-April 2017 she was the Head of Communications at the Ministry of Health.
- Was not only a volunteer for Svoboda during the 2014 Euromaidan protests, but openly celebrated on Twitter the May 2014 massacre in Odessa, when the far right burned dozens of protestors alive. Kruk’s Twitter feed is set to private now, so there isn’t public access to her old tweets, but people have screen captures. Here’s a tweet from Yasha Levine with a screenshot of Kruk’s May 2, 2014 tweet, in which she writes: “#Odessa cleaned itself from terrorists, proud for city fighting for its identity.glory to fallen heroes..” She even threw in a “glory to fallen heroes” at the end of her tweet celebrating this massacre. Keep in mind that it was a month after this tweet that the Atlantic Council gave her that Freedom Award for her communications work during the protests.
- In 2014, . . . tweeted that a man had asked her to convince his grandson not to join the Azov Battalion, a neo-Nazi militia. “I couldn’t do it,” she said. “I thanked that boy and blessed him.” And he then traveled to Luhansk to fight pro-Russian rebels.
- Lionized a Nazi sniper killed in Ukraine’s civil war. In March 2018, a 19-year-old neo-Nazi named Andriy “Dilly” Krivich was shot and killed by a sniper. Krivich had been fighting with the fascist Ukrainian group Right Sector, and had posted photos on social media wearing Nazi German symbols. After he was killed, Kruk tweeted an homage to the teenage Nazi. (The Nazi was also lionized on Euromaidan Press’ Facebook page.)
- Has staunchly defended the use of the slogan “Slava Ukraini,” which was first coined and popularized by Nazi-collaborating fascists, and is now the official salute of Ukraine’s army.
- Has also said that the Ukrainian fascist politician Andriy Parubiy, who co-founded a neo-Nazi party before later becoming the chairman of Ukraine’s parliament the Rada, is “acting smart,” writing, “Parubiy touche.” . . . .
It sounds like Palantir is experiencing some significant employee morale problems. Why? Because it turns out that Palantir’s Investigative Case Management (ICM) system, currently used by Immigration and Customs Enforcement (ICE), has been used to build profiles of and track undocumented immigrants, including families whose children have been separated from their parents. Palantir’s software is also used to determine targets for arrest. For example, ICE agents relied on Palantir’s ICM during a 2017 operation that targeted families of migrant children. Agents were instructed to use ICM to document any interaction they had with unaccompanied children trying to cross the border; if they determined that the children’s parents or other family members had facilitated smuggling them across the border, those family members could be arrested, prosecuted and deported. Earlier this month, it emerged that the ICE unit that carried out the recent high-profile raid in Mississippi — where 680 people were arrested and detained during a school day, resulting in hundreds of children being sent home from school to houses without their parents — uses Palantir’s ICM software. As the following article describes, Palantir was contracted in 2014 to build the ICM system, which lets agents access digital profiles of people suspected of violating immigration laws and organize records about them in one place. The data in these profiles includes emails, phone records, text messages and data from automatic license plate cameras, making them potentially very invasive databases of information on the US immigrant community.
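As a rough illustration of what “organizing records about a person in one place” means in software terms, here is a small, entirely hypothetical sketch of merging disparate record feeds into a single per-person profile keyed on an identifier. The record types mirror the ones listed above (emails, phone records, license-plate reads), but the structures, field names and data are invented and are not a description of Palantir’s actual ICM system.

```python
# Hypothetical sketch of aggregating disparate records into one profile per person.
# Field names and data are invented; this does not describe Palantir's ICM software.
from dataclasses import dataclass, field

@dataclass
class Profile:
    person_id: str
    emails: list = field(default_factory=list)
    phone_calls: list = field(default_factory=list)
    plate_reads: list = field(default_factory=list)

def build_profiles(email_records, call_records, plate_records):
    """Group every record source under a single per-person profile."""
    profiles = {}
    def get(pid):
        # Create the profile the first time an identifier is seen, reuse it afterward.
        return profiles.setdefault(pid, Profile(person_id=pid))
    for rec in email_records:
        get(rec["person_id"]).emails.append(rec["subject"])
    for rec in call_records:
        get(rec["person_id"]).phone_calls.append(rec["number"])
    for rec in plate_records:
        get(rec["person_id"]).plate_reads.append(rec["location"])
    return profiles

# Tiny example: three separate data feeds collapse into one linked profile.
profiles = build_profiles(
    [{"person_id": "A17", "subject": "re: travel plans"}],
    [{"person_id": "A17", "number": "+1-555-0100"}],
    [{"person_id": "A17", "location": "Hwy 59 camera 12"}],
)
print(profiles["A17"])
```

The invasiveness described above comes less from any single feed than from the linkage itself: once records share a key, everything known about a person from every source becomes visible in one view.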
The fact that the ICM system is now being used to identify the parents and children who end up getting separated has understandably resulted in a number of Palantir employees experiencing crises of conscience. Palantir’s leadership, however, has not experienced this crisis. Quite the opposite. As the following article describes, Palantir has in fact used similar stories about employee concerns at Google over work Google was doing for the US military as an opportunity to bash Google and declare that Palantir wouldn’t have such concerns about controversial government work. More recently, the company renewed a $42 million contract with ICE, and CEO Alex Karp has defended the role Palantir plays with ICE during company town hall meetings. In general, it appears that Palantir is actively trying to brand itself in Washington, DC as the Silicon Valley company that won’t suffer moral qualms about the work it’s contracted to do (even if many of its employees actually are suffering moral qualms):
“Ending the contracts with ICE would risk a backlash in Washington, where Palantir was quickly becoming a go-to provider of data-mining services to a wide range of federal agencies. Data mining is a process of compiling multitudes of information from disparate sources to show patterns and relationships. Google’s decision, earlier the same year, to end a contract with the Pentagon over pressure from its employees had chilled the Internet giant’s relationships with some government leaders who accused it of betraying American interests.”
This is the fundamental business problem Palantir faces when confronting fundamental moral problems: its main customer is the US federal government, so if it refuses a contract like the ICE case management software contract, the company risks the rest of those federal contracts. That’s Palantir’s business model. A business model that includes building the Investigative Case Management (ICM) system that allows ICE to create detailed digital profiles on individuals. It’s the kind of powerful technology that all sorts of government agencies might be interested in, and maybe even Palantir’s corporate clients. Building powerful profiles of large numbers of individuals is a generically useful capability to offer clients. But in the end, it’s the US federal government that is Palantir’s core client, and that’s why the company can’t easily dismiss controversial contracts with agencies like ICE, even when its tools are being used to break up migrant families:
It’s that business model built around keeping the US federal government as a core client that makes it no surprise to learn that Alex Karp not only dismissed the concerns of those 200 employees, but Palantir recently renewed a contract with ICE worth $42 million. In addition, Thiel has publicly attacked Google for backing out of a federal government contract and suggested that Google was treasonous (as part of an allegation that the Chinese military had infiltrated Google). And Alex Karp recently gave an interview where he shared his view that “I do not believe that these questions should be decided in Silicon Valley by a number of engineers at large platform companies.” So the message from Karp appears to be that Palantir isn’t actually going to engage in any kind of moral decision-making when it comes to its contracts with the federal government at all. Not considering the morality of its actions is part of this business model:
And that ‘amoral contractor for hire’ attitude has clearly paid off. In March of this year, Palantir was awarded a massive $800 million contract to develop a new intelligence gathering network for the US military. Interestingly, in order to win this contract, Palantir first had to win a court case that found that the federal government is required by law to consider commercially available products instead of only the custom products built by contracting firms. This 2016 court ruling essentially forced the military into reconsidering its decision to go with the establishment contractor, Raytheon, for this big new contract, and Palantir ended up winning the contest. So given that Palantir’s commercially available software is presumably applicable to a lot more government agencies than currently use it, it’s going to be interesting to see how many new federal contracts the company ends up securing in coming years:
So Palantir is going to be even more deeply embedded into the US national security state and military following the completion of this new giant Army contract to build the nerve center of a vast intelligence gathering network. What kinds of giant databases of personal profiles might this contract involve?
And since Palantir’s case management software (ICM) that allows for the building of detailed profiles on large numbers of people is one of the main products ICE is interested in, and presumably a lot of other government agencies too, it’s worth recalling that the PROMIS mega-scandal involved bugged commercial case management software also developed in cooperation with the US government. It’s especially notable since Palantir has other corporate clients too, as was the case with PROMIS. And, of course, there’s the whole PRISM saga that makes it abundantly clear Palantir is happy to assist with spying. In other words, if we were to see a repeat of PROMIS in the modern age, it’s a good bet Palantir will be involved. At a minimum, we know the company won’t have any moral qualms about being the next PROMIS.
Here’s the latest example of the GOP’s ongoing and growing efforts to ‘work the refs’ in the media and tech industry. We’ve already seen how the laughable claims of anti-conservative bias leveled against social media companies have become a central part of the core right-wing strategy of getting favorable social media treatment and ensuring the platforms remain viable outlets for right-wing disinformation campaigns. Now there appears to be a significant fund-raising effort to finance a project dedicated to researching the pasts of journalists working for virtually all major mainstream news outlets, including their past social media postings, and finding anything that can be embarrassing. The effort is being led by Arthur Schwartz, a Steve Bannon ally who is described as Donald Trump Jr’s “fixer”.
But it gets more devious: this group is claiming that they aren’t just going to engage in deep opposition research of journalists who report things critical of Trump. They are also going to be looking into the family members of journalists who happen to be active in politics and anyone else who works at a media organization critical of Trump. And any liberal activists or other opponents of Trump will also be subject to this opposition research campaign. In other words, pretty much anyone who doesn’t support Trump, along with their family members, will be subject to this opposition research.
The group has already released damaging anti-Semitic old tweets from a New York Times editor and a CNN editor. The New York Times editor wrote the tweets while he was in college. The CNN editor wrote them while he was a 15- and 16-year-old growing up in Egypt. It underscores how, after more than a decade of widespread social media usage, we now have a large number of people working in media who were teens cluelessly tweeting away years ago, and now all of that old teenage-generated content is available for use by this network.
We’re told by former Bannon ally Sam Nunberg that part of the motive of this operation is revenge. Specifically, revenge against the media for its depiction of Trump as a racist. Yep. It’s all part of the generic ‘no, you’re the real racist’ meme that we so often hear these days. But while revenge is the stated goal of this operation, it’s also clearly part of a media intimidation campaign, as evidenced by the fact that they are being very out in the open about this:
“Operatives have closely examined more than a decade’s worth of public posts and statements by journalists, the people familiar with the operation said. Only a fraction of what the network claims to have uncovered has been made public, the people said, with more to be disclosed as the 2020 election heats up. The research is said to extend to members of journalists’ families who are active in politics, as well as liberal activists and other political opponents of the president.”
Do you support Trump? Nope? Well, get ready for opposition research conducted on you. And this is all being framed as ‘revenge’ against Trump’s opponents for portraying him, and/or portraying his supporters, as racist. This is presumably how this kind of intimidation campaign will be sold to right-wing audiences...as a ‘we’re fighting for you and your honor!’ operation:
And the guy behind it, Arthur Schwartz, is an informal adviser to Trump Jr. with a history of working with Steve Bannon. As Bannon describes it, the people targeted by this are just casualties in a culture war:
Of course, the Trump White House and reelection campaign are claiming they have nothing to do with it. So if any journalists point out the clear connections between this operation and the Trump White House, they will presumably become targets:
And as the following article describes, Arthur Schwartz has decided to make this intimidation campaign even more overtly intimidating by now openly fundraising for this effort. He wants to raise at least $2 million to fund this operation (and clearly wants the public to know this):
“CNN, MSNBC, all broadcast networks, NY Times, Washington Post, BuzzFeed, Huffington Post, and all others that routinely incorporate bias and misinformation in to their coverage. We will also track the reporters and editors of these organizations.”
Intimidating all of the media outlets that don’t routinely fête Trump isn’t going to be cheap. But Arthur Schwartz is publicly signaling that his intimidation operation is going to have all the resources it needs. And don’t forget that in the age of Big Data and Cambridge Analytica-style mass data-collection operations, a lot of this opposition research will probably be highly automatable. So if you assume that you’re too insignificant to end up being targeted by this operation, that’s probably not a safe assumption. And given that it’s not just journalists, but liberal activists and anyone else who openly opposes Trump (including never-Trumpers) being targeted too, it points towards the next phase of the far right’s assault on democracy and civil society: micro-targeted intimidation campaigns against political dissidents. Today it’s journalists and liberal activists who don’t support Trump. But in the era of social media and vast databases of billions of tweets and social media posts, there’s no reason the intimidation needs to be limited to journalists or activists. Virtually all citizens will potentially be vulnerable.
So let’s hope today’s teenagers get the memo about their social media use: watch what you post, kids, because some day it might be used against you. Especially by the Republican Party.
There was a recent story in Politico that appears to solve the mystery of who was behind the “stingray” devices found in Washington DC in recent years. The existence of the devices — which collect cell-phone data by mimicking legitimate cell-phone towers — near the White House and other sensitive areas in DC was first publicly acknowledged by the US government in April of 2018. These reports were deemed at the time to be extra alarming given the fact that President Trump was known to use an insecure cellphone for sensitive communications. According to the new Politico report, the US government has concluded that the stingray devices were most likely put in place by Israel, and yet there have been no consequences at all following this finding. Israel has denied the reports and Trump himself told Politico, “I don’t think the Israelis were spying on us...My relationship with Israel has been great...Anything is possible but I don’t believe it.”
So we have reports about a US government investigation concluding Israel was behind one of the most mysterious, and potentially significant, spying operations uncovered in DC in recent years, coupled with US government denials that this happened. Which is largely what we should have expected given this finding. On the one hand, given the extremely close and long-standing ties between US and Israeli military and intelligence, if this really was an operation that Israel was genuinely behind without the tacit approval of the US government, there would likely be an attempt to minimize the diplomatic fallout and deal with these things quietly and out of the public eye. On the other hand, if this was the kind of operation done with the US government’s tacit approval, we would expect at least some downplaying of the scandal too.
But as the following article makes clear, there’s another huge reason we should expect downplaying by the US government of a story like this: the US and Israel have been increasingly outsourcing their cyber-spying capabilities to the private sector and jointly investing in these companies. Beyond that, Jeffrey Epstein appears to be one of the figures who has been working on this merging of US and Israeli cyber-spying technology in recent years. So when we talk about Israeli spying operations in the US involving the covert use of technology, we have to ask whether or not this was an operation involving a company with US national security ties.
The following report, the latest from Whitney Webb at MintPress on the Epstein scandal, describes this growing joint US/Israeli investment in the cyber sector in recent years and some of the figures behind it in addition to Epstein. The piece focuses on Carbyne (Carbyne911), the Israeli company started in 2014 by former members of Israel’s Unit 8200 cyber team. Carbyne created Reporty, a smartphone app that promises to provide faster and better communications to public emergency first responders. As we’ve seen, Reporty isn’t just a smartphone app. It also appears to work by monitoring public emergency communication systems and national civilian communications infrastructure for the ostensible purpose of ensuring minimal data loss during emergency response calls, which is the kind of capability with obvious dual use potential.
As we also saw, while former Israeli prime minister Ehud Barak was publicly the big investor who helped start Carbyne back in 2014, it turns out Jeffrey Epstein was quietly the person behind Barak’s financing. Barak was a known associate of Epstein and reportedly frequented Epstein’s Manhattan mansion. So we have Epstein, a figure with clear ties to Israeli intelligence but also very clear ties to US intelligence, investing in Carbyne. Well, as the piece describes, it turns out that one of the other investors in Carbyne is Peter Thiel. And Carbyne’s board of advisors includes former Palantir employee Trae Stephens, who was a member of the Trump transition team. Former Secretary of Homeland Security Michael Chertoff is also an advisory board member. These are the kinds of investors and advisors that make it clear Carbyne isn’t simply an Israeli intelligence front. This is, at a minimum, a joint operation between the US and Israel.
It’s also noteworthy that both Thiel and Epstein appear to have been leading financiers of ‘transhumanist’ projects like longevity and artificial intelligence. Both have a history of sponsoring scientists working in these areas. Both appeared to have very similar interests and moved in the same circles, and yet there previously weren’t indications that Thiel and Epstein had a relationship. Their mutual investments in Carbyne help answer that. The two definitely knew each other because they were secret business partners.
How many other secret business partnerships might Epstein and Thiel have been involved in, and how many of them involve the Israeli tech sector? We obviously don’t know, but as the following article points out, Palantir opened an R&D branch in Israel in 2013 and there have long been suspicions that Palantir’s ‘pre-cog’ predictive crime algorithms have been used against Palestinian populations. So Palantir appears to be well positioned to help lead any quiet joint US-Israeli efforts to develop cyber-intelligence capabilities in the private sector.
Ominously, as the article also describes, the idea of a joint US-Israeli project on ‘pre-crime’ detection is one that goes back to 1982, when the “Main Core” database of 8 million Americans deemed to be potential subversives was developed by Oliver North under the “Continuity of Government” program and maintained using the PROMIS software (which sounds like a complementary program to “Rex 84”). According to anonymous intelligence sources talking to MintPress, this “Main Core” database of US citizens considered “dissidents” still exists today. According to these anonymous U.S. intelligence officials, who reportedly have direct knowledge of the US intelligence community’s use of PROMIS and Main Core from the 1980s to 2000s, Israeli intelligence played a role in the deployment of PROMIS as the software used for Main Core. And Palantir, with its PROMIS-like Investigative Case Management (ICM) software already being offered to the US government for use in tracking immigrants, is well positioned to be maintaining the current version of Main Core. The article also reports that Main Core was used by at least one former CIA official on Ronald Reagan’s National Security Council to blackmail members of Congress, Congressional staffers and journalists. That obviously has thematic ties to the Epstein sexual trafficking network, which appears to have had the blackmailing of powerful people as one of its core functions.
Also noteworthy in all this is that Carbyne’s products were initially sold as a solution for mass shootings (‘solution’, in the sense that victims would be able to contact emergency responders). That’s part of what makes Thiel’s investment in Carbyne extra interesting, given the pre-crime prediction capabilities Palantir has been offering law enforcement in recent years. As the article notes, this all potentially ties in to the recent push by the Trump administration to create HARPA, a new US government agency modeled after DARPA, that could create tools for tracking the mentally ill using smartphones and smartwatches and predicting when they might become violent. Palantir is perfectly situated to capitalize on an initiative like that.
And that’s all part of the context we have to keep in mind when reading reports about “stingray” devices in Washington DC being set up by Israel and the response from the US government being a big *yawn*. When figures like Thiel and Epstein are acting as middle-men in some sort of joint US-Israeli cyber-spying privatization drive, it’s hard not to wonder if those stingray devices aren’t also part of some sort of joint initiative:
“Another funder of Carbyne, Peter Thiel, has his own company that, like Carbyne, is set to profit from the Trump administration’s proposed hi-tech solutions to mass shootings. Indeed, after the recent shooting in El Paso, Texas, President Trump — who received political donations from and has been advised by Thiel following his election — asked tech companies to “detect mass shooters before they strike,” a service already perfected by Thiel’s company Palantir, which has developed “pre-crime software” already in use throughout the country. Palantir is also a contractor for the U.S. intelligence community and also has a branch based in Israel.”
As we can see, Peter Thiel and Jeffrey Epstein’s paths did indeed cross with their mutual investments in Carbyne. And while we should have expected their paths to cross given the enormous overlap between their interests and activities, this is the first confirmation we’ve found. It’s also a big reason we shouldn’t assume that stories about Israeli spying on the US government aren’t being done with the US government’s participation. Don’t forget that letting Israel spy on US citizens and others in the DC area could be a means of the US intelligence services getting around legal and constitutional restrictions on domestic surveillance. In other words, there are some potentially huge incentives for a joint US-Israeli spying operation that includes spying on Americans. Especially if that spying allows for the blackmailing of US politicians. And based on the history of programs like the “Main Core” dissident database that was reportedly used for blackmailing members of Congress, and the supporting role Israeli intelligence reportedly played in setting “Main Core” up, we shouldn’t be surprised by any stories at all about Israeli spying operations in DC. Given that history, the only thing we should be surprised by is if this operation wasn’t done in coordination with US intelligence:
So is the story about Israeli “stingrays” in DC really just a story about an Israeli spying operation? Or is it a story about a joint US-Israeli spying operation? And if it is a joint operation, is it part of a blackmail operation too? Is Palantir involved? These are the kinds of questions we have to ask now that we’ve learned that Peter Thiel and Jeffrey Epstein were quiet co-investors in Israeli tech companies with clear ‘dual use’ capabilities.
Here are some articles worth keeping in mind regarding the ongoing question of who Jeffrey Epstein was coordinating with in his Silicon Valley investments and the people involved with the rehabilitation of Epstein’s reputation in recent years. We’ve already seen how one of Epstein’s co-investors in Carbyne911 — the Israeli tech company that makes emergency responder communication technology with what appears to be possible ‘dual use’ intelligence capabilities — is Peter Thiel. Epstein was reportedly the financier behind the 2015 investments in Carbyne by former Israeli Prime Minister Ehud Barak. Thiel’s Founders Fund invested in Carbyne in 2018. But as the following article describes, Epstein was getting introduced to major Silicon Valley financiers like Thiel back in 2015. And it was apparently Silicon Valley investor Reid Hoffman, a member of the ‘PayPal Mafia’, who arranged for an August 2015 dinner where Epstein was a guest along with Elon Musk, Mark Zuckerberg, and Peter Thiel.
Hoffman has subsequently publicly apologized for inviting Epstein to this dinner, saying in an email, “By agreeing to participate in any fundraising activity where Epstein was present, I helped to repair his reputation and perpetuate injustice. For this, I am deeply regretful.” So Hoffman acknowledges that this dinner helped repair Epstein’s reputation.
Hoffman also acknowledges several interactions with Epstein that he says were for the purpose of fundraising for MIT’s Media Lab, which has been reeling from the revelations of the extensive donations it received from Epstein even after his 2009 child sex trafficking convictions. Hoffman asserts that Epstein’s presence at this dinner was at the request of Joi Ito, then the head of Media Lab, for the purpose of fund-raising for Media Lab. Given that Epstein had already been donating to MIT Media Lab for years, it’s unclear how Epstein’s presence at the dinner would assist in that fundraising effort. Was Epstein supposed to convince Musk, Thiel, and Zuckerberg to donate too?
Recall that Hoffman was reportedly the figure who financed the operation by New Knowledge to run a fake ‘Russian Bot’ network in the 2017 Alabama special Senate race. Also recall how, while Hoffman’s political donations are primarily to Democrats, he’s also expressed some views strongly against the New Deal and government regulations. If he’s a real Democrat, he’s decidedly in the ‘corporate Democrat’ wing of the party.
So Hoffman invited Epstein to an August 2015 dinner with leading Silicon Valley investors like Thiel, Zuckerberg, and Musk, apparently at the request of the head of the MIT Media Lab to help with fundraising despite Epstein having donated to the lab for years. At least that’s the explanation we’re being given for this August 2015 dinner:
““By agreeing to participate in any fundraising activity where Epstein was present, I helped to repair his reputation and perpetuate injustice. For this, I am deeply regretful,” Hoffman said in the email.”
So the way Hoffman is spinning this, he was helping to repair Epstein’s reputation by having him present at this August 2015 meeting for “fundraising activities” for MIT’s Media Lab. And Epstein’s involvement in this fundraising was done at the behest of Joi Ito:
But, again, Epstein had been donating to the Media Lab for years. So why would he need to attend another fundraising dinner? Was Epstein making future donations contingent on Media Lab somehow rehabbing his reputation? Or was he at this meeting to make a pitch to Musk, Zuckerberg, and Thiel for why they should donate to Media Lab too?
Note that, in addition to Hoffman funding the Media Lab’s Disobedience Award, he also sits on Media Lab’s advisory council. So he’s more than just a donor and fundraiser for Media Lab.
It’s also worth noting that, as the following article describes, someone in Silicon Valley appeared to be trying to assist Epstein in the public rehabilitation of his reputation as late as this summer, after the Miami Herald’s explosive reporting on him in December. So Epstein has some pretty huge mystery fans in Silicon Valley:
“All three interviews seem to have touched on Epstein’s relationship with Silicon Valley. Stewart wrote that he contacted Epstein to confirm a rumor that Epstein was advising Tesla founder Elon Musk, and both The Information and Bowles cover the tech sector. Stewart reached out directly to Epstein, but it’s unclear who brokered the other meetings. The tech focus suggests that someone in Silicon Valley may have been trying to help Epstein connect with reporters.”
Was Hoffman the mystery person who may have been brokering interviews with Epstein? Recall that Peter Thiel became an Epstein co-investor in Carbyne911 last year. Might Thiel have been the mystery broker? We have no idea, and given the number of contacts Epstein had in Silicon Valley, it’s not like Hoffman or Thiel are the only suspects. As the following article by Epstein’s biographer, James B. Stewart, describes, Epstein was allegedly involved with helping Elon Musk find a new Tesla chairman (something Musk denies). Beyond that, Epstein told Stewart during an interview last year that he had personally witnessed prominent tech figures taking drugs and arranging for sex. So when we think about the potential blackmail material Epstein probably had on Silicon Valley figures, the list of people who may have willingly or unwillingly been working to rehabilitate Epstein’s reputation is a pretty long one:
“Mr. Epstein then meandered into a discussion of other prominent names in technology circles. He said people in Silicon Valley had a reputation for being geeky workaholics, but that was far from the truth: They were hedonistic and regular users of recreational drugs. He said he’d witnessed prominent tech figures taking drugs and arranging for sex (Mr. Epstein stressed that he never drank or used drugs of any kind).”
Having Jeffrey Epstein witness you arranging for sex is probably the kind of situation that will make you highly compliant when it comes to helping his reputation. Or make donations...might that be part of the value Epstein provided for that 2015 dinner party that was ostensibly a fundraising operation for Media Lab? Epstein’s presence could presumably make any former ‘clients’ of his much more likely to open their checkbooks.
It’s also worth noting that Mohammed bin Salman could arguably be considered a prominent Silicon Valley individual given the extensive Saudi investments in Silicon Valley companies:
So when Epstein talks about M.B.S. speaking to him often and visiting him many times, while part of the nature of those visits could obviously include prostitution, it’s also very possible M.B.S. was using Epstein as a kind of Silicon Valley investment front too.
And that’s part of what makes the mystery of the identity of Epstein’s main Silicon Valley benefactor so mysterious: there are just way too many viable suspects.
Remember when a group of Republican members of Congress stormed into the secure room for highly sensitive work (the SCIF) where the House Intelligence Committee was holding an impeachment hearing last month, prompting security concerns over the fact that they brought their cell phones into this room where smartphones aren’t allowed? Well, here’s an example of why bringing those smartphones into that room really did pose a very real security risk. It also happens to be an example of how smartphones represent a security risk to pretty much anyone:
A new security flaw was just discovered in Google’s widely-used Android operating system for smartphones. Security firm Checkmarx discovered the flaw and created an app demonstrating the large number of ways it can be exploited. It’s like the perfect flaw for surreptitious targeted spying or mass spying. The flaw enables any app to potentially take control of your smartphone’s camera and microphone. Audio and video recordings and pictures can be made and sent back to a command and control server. The attack appears to rely on the Google Camera app to get around the camera and microphone permissions. The flaw also allows the attacker to search through your entire collection of photos and videos already stored on the phone and send them back to the server. It can collect your GPS location data too. So it basically turns your smartphone into the perfect spying device.
But it gets worse. Because while the use of this flaw would be noticeable if it was being executed while a user was looking at their phone (for example, they would see the video being recorded in the app), it’s possible to use a phone’s proximity sensor to determine when the phone is face down, when it would be safe to start recording without the user noticing. Another highly opportune time to exploit this vulnerability is when you are holding your phone up to your ear, allowing pictures and video of the surrounding room to be taken. This is also something apps can detect. Checkmarx’s example malware had both of these capabilities.
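To make the face-down trick concrete, here is a minimal sketch of how any Android app can read the proximity sensor through the standard SensorManager API (no special permission is required for this sensor). This illustrates the general mechanism Checkmarx described; it is not their actual proof-of-concept code.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.Bundle
import android.util.Log
import androidx.appcompat.app.AppCompatActivity

class ProximityDemoActivity : AppCompatActivity(), SensorEventListener {
    private lateinit var sensorManager: SensorManager
    private var proximitySensor: Sensor? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        // The proximity sensor reports how close an object is to the screen,
        // which lets an app infer "face down on a table" or "held to the ear".
        proximitySensor = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)
    }

    override fun onResume() {
        super.onResume()
        proximitySensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    override fun onPause() {
        super.onPause()
        sensorManager.unregisterListener(this)
    }

    override fun onSensorChanged(event: SensorEvent) {
        // A reading below the sensor's maximum range means something is covering it:
        // the phone is likely face down or pressed against an ear.
        val covered = event.values[0] < (proximitySensor?.maximumRange ?: 0f)
        Log.d("ProximityDemo", "sensor covered = $covered")
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) { /* not needed here */ }
}
```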
Perhaps the worst part of this discovered vulnerability is that it demonstrated how apps were able to easily bypass the restrictions in Android’s operating system that are supposed to prevent apps from accessing things like cameras or microphones without users explicitly giving their permission. So apps that didn’t request access to cameras and microphones could still potentially access them on Android phones until this vulnerability was found. And uploading photos and videos to the attackers’ command and control server only required that the app be given access to the phone’s storage, which is an extremely common permission for apps to request.
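For context, here is a minimal sketch of the normal permission gate the researchers showed could be sidestepped: under ordinary Android rules, an app has to declare the CAMERA permission and have the user grant it at runtime before it can capture photos or video directly. This is standard Android API usage, shown only to illustrate what the bypass skipped.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Under normal Android rules, an app must hold the CAMERA permission
// (declared in its manifest and granted by the user at runtime on Android 6.0+)
// before it can use the camera directly.
fun requestCameraIfNeeded(activity: AppCompatActivity, requestCode: Int) {
    val granted = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.CAMERA
    ) == PackageManager.PERMISSION_GRANTED

    if (!granted) {
        // This shows the system dialog asking the user to approve camera access.
        ActivityCompat.requestPermissions(
            activity, arrayOf(Manifest.permission.CAMERA), requestCode
        )
    }
}
```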
At this point we know that Android phones built by Google and Samsung are vulnerable to this attack. We’re also told by Checkmarx that Google has privately informed them that other manufacturers are vulnerable, but they haven’t been disclosed yet. Google issued a statement claiming that the vulnerability was addressed on impacted Google devices with a July 2019 patch to the Google Camera Application and that patches have been made available to all partners. Note that in the timeline provided by Checkmarx, they informed Google of the vulnerability on July 4th. So it should have hopefully been fixed for at least some of the impacted people back in July. At least for Android phones built by Google or Samsung. But that still leaves the question of how long this kind of vulnerability has been exploitable:
“The skill and luck required to make the attack work reliably and without detection are high enough that this type of exploit isn’t likely to be used against the vast majority of Android users. Still, the ease of sneaking malicious apps into the Google Play store suggests it wouldn’t be hard for a determined and sophisticated attacker to pull off something like this. No wonder phones and other electronics are barred from SCIFs and other sensitive environments.”
Have sophisticated attackers been using this vulnerability all along? We don’t know, but it didn’t sound like Checkmarx had a very hard time discovering this. And given how Checkmarx was able to build their proof-of-concept app to only operate when the phone was either face down or being held up to someone’s ear, it’s possible this has been a widely used hack that no one noticed:
So if you have an Android phone with some questionable apps on it, especially a phone not manufactured by Google or Samsung and therefore potentially still vulnerable, it might be worth running that app and then laying the phone face down on a glass surface so you can still see what’s happening on the phone’s screen.
Also note how Checkmarx’s report isn’t just disclosing this vulnerability exploited via the Google Camera app. It’s also a reminder that when apps are given access to a phone’s storage, there’s nothing really stopping those apps from rooting through all of the other data on your phone’s storage card. Like all your photos and videos. And then uploading them to a server:
As Checkmarx describes in their report, when you give an app in Android access to the storage on the device, you aren’t just giving it access to its own stored data. You are giving the app access to everything stored on that SD card:
“It is known that Android camera applications usually store their photos and videos on the SD card. Since photos and videos are sensitive user information, in order for an application to access them, it needs special permissions: storage permissions. Unfortunately, storage permissions are very broad and these permissions give access to the entire SD card. There are a large number of applications, with legitimate use-cases, that request access to this storage, yet have no special interest in photos or videos. In fact, it’s one of the most common requested permissions observed.”
So while this recently disclosed vulnerability is primarily focused on how the Google Camera app had this massive flaw that allowed for the hijacking of cameras and microphones, it’s also a reminder that all of the contents of your smartphone’s SD card are potentially available to any app on your phone as long as those apps have been given the “Storage” permission. And that’s not just a vulnerability that needs to be fixed. It’s a basic part of how the Android operating system works.
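To illustrate that last point, here is a minimal sketch of how an app holding nothing more than the ordinary storage permission could enumerate every photo in shared storage through the standard MediaStore API, reflecting how broad that permission was on the Android versions current at the time (later "scoped storage" changes tightened this somewhat).

```kotlin
import android.content.ContentResolver
import android.provider.MediaStore

// With nothing more than the storage permission, an app could walk the entire
// shared photo collection via MediaStore -- not just files it created itself.
fun listAllPhotos(resolver: ContentResolver): List<String> {
    val names = mutableListOf<String>()
    val projection = arrayOf(
        MediaStore.Images.Media._ID,
        MediaStore.Images.Media.DISPLAY_NAME
    )
    resolver.query(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        projection,
        null, null, null
    )?.use { cursor ->
        val nameCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DISPLAY_NAME)
        while (cursor.moveToNext()) {
            // Every photo in shared storage is visible here.
            names.add(cursor.getString(nameCol))
        }
    }
    return names
}
```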
Also don’t forget that Google was started with seed funding from the CIA. So when we learn about these kinds of vulnerabilities that are almost tailor made for spies, maybe that’s what they are.
It’s all a reminder that the modern technology regime is predicated on systems of trust. Trust in software and hardware developers that the vast majority of users can’t realistically have a basis for giving and yet must give in order to use the technology. In other words, our modern technology regime is predicated on systems of untrustworthy trust. Which seems like a pretty huge security vulnerability.
Here’s a story about Cambridge Analytica that’s really about a much larger story about Cambridge Analytica that’s going to be unfolding over the coming months: a large leak of over 100,000 Cambridge Analytica documents has started trickling online from the anonymous @HindsightFiles Twitter account. The files came from the email accounts and hard drives of Brittany Kaiser. Recall how Kaiser, the director of business development at SCL between February 2015 and January of 2018, has already come forward and claimed that the ~87 million estimate of the number of people who had their Facebook profile information collected by Cambridge Analytica is too low and the real number is “much greater”. We don’t know yet if Kaiser is the direct source of these anonymous leaks, but it’s her files getting leaked. Kaiser decided to speak out publicly about the full scope of Cambridge Analytica’s activities following the election in the UK last month. The way she puts it, her cache of files contains thousands and thousands more pages which show a “breadth and depth of the work” that went “way beyond what people think they know about ‘the Cambridge Analytica scandal’”. The files also turn out to be the same files subpoenaed by the Mueller investigation.
So what new information has been released so far? Well, it’s quite a tease: we’re told the documents are going to relate to Cambridge Analytica’s work in 68 countries. And the “industrial scale” nature of the operation is going to be laid bare. The document release began on New Year’s Day and included materials on elections in Malaysia, Kenya, and Brazil. The files also include material that suggests Cambridge Analytica was working for a political party in Ukraine in 2017. We don’t yet know which party.
Unsurprisingly, there’s also a Dark Money angle to the story. The documents include emails between major Trump donors discussing ways of obscuring the source of their donations through a series of different financial vehicles. So the unlimited secret financing of political campaigns allowed by US election law includes the secret financing of secret sophisticated social media psychological manipulation campaigns too. Surprise. Only some of the 100,000+ documents have been leaked so far and more are set to be released in coming months. So the @HindsightFiles twitter account is going to be one to watch:
“The release of documents began on New Year’s Day on an anonymous Twitter account, @HindsightFiles, with links to material on elections in Malaysia, Kenya and Brazil. The documents were revealed to have come from Brittany Kaiser, an ex-Cambridge Analytica employee turned whistleblower, and to be the same ones subpoenaed by Robert Mueller’s investigation into Russian interference in the 2016 presidential election.”
So the trove of Kaiser’s documents handed over to the Mueller team is set to be released in coming months. That’s exciting. Especially since she’s describing the full scope of the Cambridge Analytica operation as including the coordination of governments and intelligence agencies, in addition to the political campaigns we already knew about. Hopefully we get to learn which Ukrainian political party Cambridge Analytica was working with in 2017:
So with the much greater scope of the Cambridge Analytica operation in mind, here’s a Grayzone piece from 2018 that describes “Project Titania”, the name for an operation focused on psychologically profiling the Yemeni population for the US military. The article is based on documents that describe SCL’s work as a military contractor in countries around the world and includes some earlier work SCL did in Ukraine. The work is so early it either preceded the formal incorporation of SCL or must have been one of SCL’s very first projects. Because according to the internal SCL documents they obtained, SCL was working on promoting the “Orange Revolution” in Ukraine back in late 2004. SCL was started in 2005. So Ukraine appears to have been one of SCL’s very first projects. The documents obtained by the Grayzone Project also describe operations across the Middle East as a US and UK counter-insurgency contractor, including an operation in Iran in 2009. It points towards a key context to keep in mind as Kaiser’s 100,000+ documents are released in coming months: while much of what Cambridge Analytica and its SCL parent company were doing in those 68 countries was probably done at the behest of private clients, we can’t forget that SCL has a long history as a military contractor too. The US and UK military and intelligence agencies were probably clients in most of those cases, but it’s also probably not limited to the US and UK. As Kaiser warns us, this is a global operation. And these services have been up for sale since as far back as Ukraine’s Orange Revolution:
“Founded in 2005, SCL specializes in what company literature has described as “influence operations” and “psychological warfare” around the globe. An SCL brochure leaked to the BBC revealed how the firm exacerbated ethnic tensions in Latvia to assist their client in 2006.”
SCL’s founding documents going back to 2005 tout its ability to wage “influence operations” and “psychological warfare” around the globe. That’s how far back the Cambridge Analytica story goes. Although it appears to go even further back, since SCL’s brochure boasted of its success “in maintaining the cohesion of the coalition to ensure a hard fought victory” in the 2004 Orange Revolution in Ukraine:
Later, in 2009, SCL was doing some sort of psychological profiling in Iran. Along with Libya, Pakistan, and Syria:
So years before the 2016 election, SCL was already acting as a psychological warfare contractor in countries around the world. It points to another important context for the Cambridge Analytica scandal: the US populace targeted in 2016 may have effectively been guinea pigs for this technology in the context of using Facebook to gather psychological profiles on large numbers of people. But they weren’t the first guinea pigs for SCL’s psychological profiling techniques, because that’s what SCL has been doing for years in societies across the world. Apparently starting in Ukraine.
So this story is promising to get much bigger as more documents are leaked. It also raises an interesting question in the context of President Trump’s decision to drone assassinate one of Iran’s most revered leaders: from a psychological warfare perspective, was that a good idea? It doesn’t seem like it was a very good idea, but it would be interesting to know what the regime change psychological warfare specialists say about that. Since Cambridge Analytica has unfortunately reincorporated as Emerdata, maybe someone can ask them about that.
The New York Times had a recent piece about a company that’s described as a little-known entity that might end privacy as we know it. Basically, the company, Clearview AI, offers what amounts to a super-facial recognition service. The company appears to have scraped as much image and identity information as possible from social media sites like Facebook, YouTube, and Venmo and allows clients to upload a picture of anyone and see personal profiles for all of the matches. Those profiles include all of the matching pictures as well as links to where those pictures appeared. So it’s like a searchable database of billions of photos and IDs, where you start the search with a photo and it returns more photos and information on everyone who is a close enough match. The database of more than 3 billion pictures is described as being far beyond anything ever constructed by the US government or Silicon Valley giants. In addition, Clearview is developing a pair of glasses that will give the wearer a heads-up display of the names and information of anyone they’re looking at in real time.
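The search-by-photo capability described here is, at its core, a nearest-neighbor lookup over face ‘embeddings’. Here is a heavily simplified sketch of that idea; the embed() function stands in for whatever proprietary face-recognition model Clearview actually uses, and every name in this snippet is hypothetical rather than anything from the company’s real system.

```kotlin
import kotlin.math.sqrt

// Hypothetical profile entry scraped from a public site: an embedding vector
// for one face photo plus the URL where the photo was found.
data class FaceEntry(val embedding: FloatArray, val sourceUrl: String)

// Stand-in for a face-recognition model that turns a photo into a vector.
// Clearview's actual model and pipeline are not public; this is illustrative only.
fun embed(photoBytes: ByteArray): FloatArray = TODO("proprietary model")

// Standard cosine similarity between two embedding vectors.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Double {
    var dot = 0.0; var normA = 0.0; var normB = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Given a query photo, return the stored entries whose faces look most similar.
// Against billions of entries this brute-force scan would be replaced by an
// approximate nearest-neighbor index, but the underlying idea is the same.
fun searchByFace(queryPhoto: ByteArray, database: List<FaceEntry>, topK: Int = 10): List<FaceEntry> {
    val query = embed(queryPhoto)
    return database
        .sortedByDescending { cosineSimilarity(query, it.embedding) }
        .take(topK)
}
```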
And while the company is apparently quite tiny and little known to the public, its services have already been used by over 600 law enforcement agencies in the US. But it’s not just law enforcement using these services. We’re also told the software has been licensed to private companies for security purposes, although we aren’t told the names of those companies.
All in all, it’s a pretty troubling company. But it of course gets much worse. It turns out the company is heavily connected to the Republican Party and largely relying on Republicans to promote it to potential clients. The company was co-founded by an Australian, Hoan Ton-That, and Richard Schwartz. Ton-That worked on developing the initial technology and Schwartz was responsible for lining up potential clients. Schwartz is a long-time senior aide to Rudy Giuliani and has quite an extensive Rolodex. Schwartz reportedly met Ton-That in 2016 at a book event at the conservative Manhattan Institute. So it sounds like Ton-That was already working on networking within right-wing circles when he met Schwartz.
By the end of 2017, the company had its facial recognition product ready to start pitching to clients. The way Ton-That describes it, they were trying to think of any possible client who might be interested in this technology, like parents who want to do a background check on a potential baby-sitter, or as an add-on feature for security cameras. In other words, they plan on eventually releasing this technology to anyone.
One of the people they made their initial pitch to was Paul Nehlen, the former Republican rising star who ran for Paul Ryan’s former House seat but eventually outed himself as a virulent neo-Nazi. Clearview offered Nehlen its services for “extreme opposition research” during his campaign. In other words, they were presumably going to use the database to find all visual records of Nehlen’s opponents and the people working for their campaigns in order to dig up dirt. So this company was started by a bunch of Republicans and one of the first client pitches they made was to a neo-Nazi Republican. It gives us a sense of the politics of this company.
The failed pitch to Nehlen was made in late 2017 and we’re told that soon after that the company got its first round of funding from outside investors. One of those investors was Peter Thiel, who made a $200,000 investment. According to Thiel’s spokesman, “In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” and, “That was Peter’s only contribution; he is not involved in the company.” So Thiel made one of the first investments which was converted to equity, meaning he’s a shareholder now. But we’re told he’s not involved in the company, which sounds like a typical Thiel deception.
Keep in mind that Thiel is in a position to both encourage the handing over of large volumes of faces and IDs to the company while also being in a position to massively exploit Clearview’s technology. Thiel co-founded Palantir, which could obviously have extensive uses for this technology, and Thiel also sits on the board of Facebook, from which much of the photo and ID information was scraped. When asked about Clearview’s scraping of Facebook data to populate its database, Facebook said the company is reviewing the situation and “will take appropriate action if we find they are violating our rules.” But Facebook had no comment on the fact that Thiel sits on its board and is personally invested in Clearview. According to Ton-That, “A lot of people are doing it,” and, “Facebook knows.”
Other Republican Party connections to the company include Jessica Medeiros Garrison and Brandon Fricke. Medeiros Garrison, the main contact for customers, managed Luther Strange’s Republican campaign for Alabama attorney general, while Fricke, a “growth consultant” for the company, is engaged to right-wing media personality Tomi Lahren. Clearview claims it’s also enlisted Democrats to market its products too, but we aren’t given any names of those Democrats.
So how does Clearview assuage concerns about the legality of its services? That job falls to Paul D. Clement, a United States solicitor general under President George W. Bush. Clement, a former clerk to Antonin Scalia, has an interesting distinction for a Republican lawyer. In 2012, he was the lawyer who led the Republican challenge by 26 states to repeal Obamacare over its individual mandate provision. That’s something we would expect of a former Bush administration official. But back in October of 2019, Clement was asked by the Supreme Court to defend an Obama-era law after the independence of the head of the Consumer Financial Protection Bureau (CFPB) was challenged by the Trump administration’s Justice Department. The CFPB itself (which is now headed by a Trump appointee) also joined the Justice Department in the lawsuit, leaving no entity to defend the original law that prevents presidents from firing the heads of the CFPB. The CFPB was one of the entities set up by the Obama administration (and designed by Senator Elizabeth Warren) following the financial crisis, so it was guaranteed the Trump administration would oppose it. The fact that it’s dedicated to providing consumer financial protection is the other reason it was guaranteed the Trump administration would oppose it. Republicans don’t do consumer protection. The Obama-appointed head of the CFPB, Richard Cordray, resigned in November of 2017, two years early, after Trump and the Republicans made it abundantly clear they wanted to replace him. The Trump Justice Department argued back in March of 2017 that this restriction on the president’s ability to fire the head of the CFPB made it unconstitutional. In September of 2019, the Justice Department asked the Supreme Court to take the case, and the following month Clement — who has argued before the Supreme Court more than 95 times — was invited by the Supreme Court to defend the existing structure of the CFPB.
Oh, and Paul D. Clement also happens to be one of the lawyers who successfully argued on behalf of the Republicans in Rucho v. Common Cause, a case that has now constitutionally enshrined hyper-partisan gerrymandering that the federal courts can do nothing about. So that gives us a sense of the importance of having someone like Paul D. Clement soliciting clients for a company like Clearview: while he’s an extremely high profile and respected lawyer, he’s also a partisan hack. But the kind of hack whose words will carry a lot of weight when it comes to assuring potential clients about the legality of Clearview’s products.
And if that all wasn’t shady enough, the author of the following report shares an anecdote that should raise big red flags about the character of the people behind this company: when the author tested the system on his own photo by asking a friend in law enforcement to run his picture through it, the journalist got dozens of pictures of himself back, including some pictures he didn’t even know existed. But his law enforcement friend was soon contacted by Clearview to ask if he had been speaking to the media. So Clearview is either actively monitoring and doing its own searches on the people run through its system or it has a system set up to flag ‘troublemakers’ like journalists. Ton-That claims that the reason this search prompted a call from the company is because the system is set up to flag “possible anomalous search behavior” in order to prevent “inappropriate searches.” But after that incident, the reporter found that his results were removed from future searches, which Ton-That dismissed as a “software bug”. So the company appears to be actively monitoring and manipulating search results. As the article notes, since the primary users of Clearview are police agencies at this point, the company can get a detailed list of people who have received the interest of law enforcement simply by looking at the searches being run, which is the kind of information that can potentially be abused. It’s an example of why the character of the people behind a firm offering these kinds of services is particularly important, and why the more we learn about this company the more cause there is for serious concern.
It remains unclear how many clients outside of law enforcement will be allowed to purchase Clearview’s services. But as the article notes, now that Clearview has broken the taboo of offering facial recognition database services like this, it’s just a matter of time before other companies do the same thing. And that’s why Clearview might end up ending privacy as we know it: by setting a really, really bad example by showing the world this service is possible and there’s a market for it:
“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”
An end to privacy as we know it. Everyone will be able to just look at someone and immediately access a database of personal information about them. That’s the dark path Clearview’s technology is sending us down:
And both police officers and Clearview’s own investors predict that its app will eventually be available to the public. And yet if you ask investor David Scalzo about the privacy concerns, he appears to take the stance that it’s simply impossible to ban the use of this technology, whether it leads to a dystopian future or not:
But while Clearview’s investors appear to have no problem at all with blazing the trail of this dystopian post-privacy future, the company itself has taken pains to get as little exposure as possible. It even freaked out when a journalist’s photo was run through the system:
Beyond that, the technology hasn’t even been validated. According to Ton-That, it works about 75 percent of the time, which sounds pretty good until you realize you’re talking about mismatches that could lead to the wrong person being arrested and charged with a crime:
But while the possible misuse of unproven technology by law enforcement is obviously a problem, it’s the fact that the company appears to be run by partisan Republicans that points towards one of the biggest potential sources of abuse. It’s a political opposition research dream tool and the firm is using Republicans to find clients:
And “extreme opposition research” is one of the services Clearview offered one of its first potential clients. That client happened to be Paul Nehlen, the GOP rising star who saw his political future implode after it became clear he was an open neo-Nazi. That’s the person Clearview offered services to right after the company finished its initial product in late 2017, and those services happened to be “extreme opposition research”. It tells us A LOT about the real intent of the figures behind this company. It’s not just for law enforcement. It’s also a reminder that the company’s willingness to manipulate the search results could be very handy for right-wing politicians who would prefer embarrassing pics not be readily available for opponents to find:
And then, shortly after making that offer to Nehlen, Clearview gets its first outside investment, including $200,000 from Peter Thiel that was later converted to equity. So in addition to co-founding Palantir and sitting on the board of Facebook, Thiel owns an undisclosed amount of this company too. And Ton-That claims Facebook is aware that Clearview’s database is heavily populated with data scraped from Facebook:
Beyond that, Clearview hired high-profile Republican lawyer Paul D. Clement to assure clients that the services are legal. Given who Clement is in the legal world that’s a major legal endorsement:
Oh, and it turns out the FBI and DHS are also trying out Clearview’s services, along with Canadian law enforcement agencies:
It doesn’t sound like federal agencies have a problem with using a database of images that was improperly scraped off of major social media sites. That’s apparently legal and fine. And that’s why Clearview appears to be on track to becoming the ‘Palantir’ of facial recognition companies: a highly secretive company owned by politically connected shady figures that somehow manages to get massive numbers of government clients by offering services that have obvious intelligence applications. And it’s co-owned by Peter Thiel, further solidifying Thiel’s position as the US’s private intelligence oligarch. It’s quite a position for an open fascist like Thiel.
So when services like this end privacy as we know it, sooner than you expect, don’t forget that this was brought to you by Republicans who didn’t want you to know about those services in the first place.
Oh, look at that: Facebook just hired a new head of video strategy to head up the video division for the “Facebook News” feature that it’s creating for 2020. Guess who: Jennifer Williams, an 18-year veteran of Fox News. Surprise!
And Williams isn’t just a Fox News veteran. She was a long-time senior producer of Fox & Friends (from 1997–2009), one of the channel’s most egregious outlets of disinformation. Fox & Friends is bad even by Fox News standards. That’s who is heading up the video section of Facebook’s new News section:
“Thirteen years later, Facebook has reportedly named Jennifer Williams, who was a Fox & Friends senior producer at the time that memo was sent, to head video strategy for the social media giant’s forthcoming Facebook News, NBC News reported Tuesday. Facebook News will serve its billions of users with a dedicated tab including news content curated by a team of journalists from a list of publishers chosen by the company. As Facebook executives plan a shift in the way the nation consumes news that will almost certainly impact the 2020 presidential elections, they are staffing up with an 18-year veteran of the right-wing cable network that effectively serves as President Donald Trump’s personal mouthpiece.”
An 18-year veteran of Fox News. That’s who is going to be ultimately curating the ‘news’ videos served up to Facebook readers. As the article notes, the fact that Facebook decided to make a special ‘news’ section ostensibly managed with journalistic intent, and not just run by an algorithm, meant the company was going to have to get into the business of having humans make active decisions on whether or not news is worthy of being included in the new News section of the site or if it’s ‘fake news’. So Facebook chose a veteran of the leading purveyor of fake news.
But Jennifer Williams isn’t the only highly questionable figure who is going to be running Facebook’s new News division. The company already hired Campbell Brown to lead the News division. And it turns out Brown is close to Betsy DeVos, the far right sister of Erik Prince and Trump’s Education Secretary. As we should expect, Brown has already decided to credential Breitbart.com as one of the news sites that Facebook News will promote:
And now here’s Judd Legum’s Popular Information piece with more on Campbell Brown and her extensive ties to Betsy DeVos. As the article notes, the publication Brown co-founded, The 74, is largely focused on education news, which got rather awkward after Betsy DeVos became Trump’s education secretary. DeVos calls Brown a “friend” and The 74 was started, in part, with a $200,000 grant from Betsy DeVos’s family foundation. Most of the articles in The 74 covering DeVos have been largely laudatory.
Brown is also a member of the board of The American Federation for Children (AFC), a right-wing non-profit started and chaired by DeVos that spends heavily on getting Republicans elected at the state level. The 74 and the AFC co-sponsored a Republican presidential forum in Iowa in 2015.
It’s also worth recalling the recent story describing how the DeVoses and other far-right oligarchs associated with the theocratic Council for National Policy (CNP) have been quietly financing the purchase of local and regional radio stations to ensure the explosive growth of regional right-wing talk radio. It’s a reminder that the damage Betsy DeVos is doing to the intellectual status of America isn’t limited to the damage she’s doing to American education.
As another sign of Brown’s editorial leanings, while editor-in-chief of The 74, the publication featured at least 11 pieces from Eric Owens, a Daily Caller editor with a long history of making transphobic attacks on students and teachers. The 74 also appears to really hate Elizabeth Warren. Interestingly, Mark Zuckerberg’s foundation, the Chan Zuckerberg Initiative, donated $600,000 to The 74, describing it as “a non-profit, nonpartisan news site covering education in America.” Zuckerberg has previously expressed his extreme dislike of Warren’s presidential ambitions, describing her as an “existential” threat to the company.
So that’s who Jennifer Williams is going to be reporting to in her new role as the head of video at Facebook News: Campbell Brown, the right-wing friend of Betsy DeVos:
“In 2015, Brown co-founded The 74, which focuses on the public education system, and served as editor-in-chief. Even after joining Facebook in 2017, Brown has maintained an active role in The 74, where she is a member of the board of directors. According to documents filed with the IRS in 2017, Brown dedicated five hours per week — the equivalent of a month-and-a-half of full-time work — working for The 74.”
The head of Facebook’s new News feature co-founded The 74, a publication focused on education, in 2015 and remained on its board of directors even after joining Facebook. That wouldn’t be a huge deal if The 74 were just a blah, non-ideological outlet. But it turns out to have been founded in part with a grant from Betsy DeVos’s family foundation:
And the content of The 74 has a clear right-wing orientation, with articles that describe Elizabeth Warren as “the second coming of Karl Marx”. And it turns out The 74 received a $600,000 donation from none other than Mark Zuckerberg, who openly fears and loathes Warren:
Then there’s the fact that The 74 features writers like Eric Owens, an editor at The Daily Caller. After Brown was hired by Facebook to head up its news division, The Daily Caller was an official fact-checking partner at Facebook:
Oh, and Brown’s team at Facebook ended up selecting Breitbart, which is banned as a citation source for Wikipedia, as one of its 200 “quality” news sources:
So as we can see, the head of Facebook’s new News feature that’s going to roll out some time in 2020 is a close friend of Betsy DeVos and has already made moves to ensure right-wing garbage sites that should be banned from Facebook purely for journalistic integrity purposes will instead be trusted content producers and fact-checkers. And now long-time Fox News veteran Jennifer Williams will be working under Brown heading up the Facebook News video division. Because of course.
The impeachment of Trump appears to be on course for a quick end following the decision of Senate Republicans to not call any witnesses and proceed to an acquittal vote. The ultimate political consequences of acquitting Trump without calling witnesses in the Senate are hard to estimate, but it seems like a pretty sure bet that the Trump team is going to interpret this acquittal as a greenlight to engage in pretty much any political dirty tricks campaign it can imagine. After all, when Senator Lamar Alexander — one of the holdout Senators who was reportedly on the fence about whether to vote for calling witnesses or not — finally decided to vote against witnesses late last night, Alexander’s reasoning was that House Democrats had already proven their case and Trump really did what they accused him of doing, but it doesn’t rise to an impeachable offense so no witnesses were needed. So the Republicans have basically ruled that inviting and then extorting a foreign government to get involved in a US electoral disinformation campaign is acceptable even if they don’t necessarily think it’s fine. It’s an open invitation for not just every Republican dirty trick imaginable but an invitation for foreign government meddling too. The Trump presidency has now become not just the culmination of America’s inundation with disinformation but a validation of it.
So it’s worth noting that, days before this decision by the Senate, the Bulletin of the Atomic Scientists updated the ‘Doomsday Clock’. It’s now 100 seconds from ‘Midnight’, closer than ever. And the explosion of disinformation campaigns and disinformation technology like ‘deep fakes’ that can send a society into turmoil was apparently a big part of their reasoning:
““Humanity continues to face two simultaneous existential dangers—nuclear war and climate change—that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society’s ability to respond,” said the Bulletin of the Atomic Scientists as it moved the Doomsday Clock from two minutes to midnight to 100 seconds to midnight. This shows that they feel the risk of catastrophe is greater than ever — even higher than during the Cold War.”
A greater risk of man-made catastrophe than during the Cold War. This is where we are. The reasons include ‘oldies’ like the risk of nuclear war. But even there the risks are higher (thanks in large part to Trump’s shredding of nuclear arms treaties). And then there’s the risk of what sounds like a ‘Skynet’ scenario involving militaries relying on AI for decision-making and command-and-control systems:
But it’s the growing threats to the “information ecosphere” that run the risk of damaging our ability to manage virtually every other threat, because disinformation campaigns are already corrupting the decision-making processes needed to address all those other threats:
And that warning about how disinformation threatens our collective ability to deal with ALL OF THE OTHER existential threats is a reminder that systematic disinformation is a kind of meta-existential threat. It literally makes all other existential threats more likely to happen, which arguably makes it the greatest threat of all. If humanity wasn’t so susceptible to disinformation this wouldn’t be such a massive threat. But that’s clearly not the case. Disinformation is winning. It really works and is increasingly cheap and easy to deploy, which is why someone like Trump can become president and why the far right has been rising across the globe with one big lie campaign after another. And that’s what the Senate Republicans just rubber-stamped and endorsed: the meta-existential threat of systematically trashing the information ecosphere and the resulting collective insanity.
Yasha Levine has a short new piece about an interesting historical intersection between the US’s rehabilitation of fascists and Nazi collaborators in the post-WWII era and the counterinsurgency origins of the development of the internet. It’s the kind of history that’s long been important but has suddenly gained a new level of importance now that President Trump appears to feel ‘unleashed’ following his impeachment acquittal and willing to use the power of his office to protect his friends and attack his political enemies:
Levine was given a number of declassified US Army Counter Intelligence Corps files on Mykola Lebed. Lebed was one of the many OUN‑B Ukrainian fascist Nazi collaborators who were basically welcomed into the US’s national security complex, and Levine is working on a short biography of him. One particular file on Lebed was from 1947 and mostly illegible, but it did have a clear stamp at the bottom bearing the name Col. W.P. Yarborough. Yarborough turns out to be a central figure in the development of the US Army’s special forces during this period. Beyond that, he was also a leading figure in the US’s counterintelligence operations in the 1960s, and it’s in that context that Yarborough played a significant role in the development of the internet’s predecessor, the ARPANET. Levine covered Yarborough’s role in the development of the ARPANET as a counterinsurgency tool in his book Surveillance Valley. And as he covered in the book, while the original ARPANET’s counterinsurgency applications were used for the war in Vietnam, it was also used to compile a massive, unprecedented computerized database on domestic political opponents of the war and left-wing groups in general.
That’s the main point of Levine’s new piece: the observation that the figure who led the development of what was a cutting-edge domestic surveillance operation primarily targeting left-wing political movements was also involved with the recruitment and utilization of WWII fascists and Nazis for use in the national security apparatus. It’s one of those historical fun-facts that highlights how the US’s long-standing ‘anti-communism’ agenda was really an anti-left-wing agenda that included the covert suppression of domestic left-wing movements. Fascists are fine. Anti-war protestors are subversives that need to be surveilled and ultimately neutralized. It’s a prevailing theme throughout the Cold War exemplified by Yarborough’s career. A career that should serve as a warning now that President Trump appears to feel like he’s been given permission to use the full force of the government to attack his perceived political enemies:
“By the time the 1960s rolled around, Yarborough was regarded as an expert on anti-guerrilla and counterinsurgency warfare. In 1967, while in charge of the U.S. Army’s Intelligence Command, he initiated a massive, illegal domestic counterinsurgency surveillance program inside America that targeted civil rights activists, antiwar protesters, leftwing student groups, and anyone who sympathized with the oppressed.”
A massive ILLEGAL domestic surveillance operation primarily targeting the left. That’s what the first version of the internet was used for under the CONUS Intel project. And it was Yarborough — someone involved with the early Cold War utilization of fascists and Nazis — who led that initiative:
It’s a historical anecdote that’s a big reminder that the use of state power to suppress and marginalize left-wing movements and individuals is a significant chapter of the American history that led to where we are today.
It’s worth recalling at this point the interesting story John Loftus had about the whitewashing of Mykola Lebed involving Whitey Bulger. It turns out Lebed was cast as an anti-Nazi fighter in WWII in order to be allowed to get a US visa and become a US asset working for the CIA. That whitewashing was carried out by Dick Sullivan, a US Army attorney operating out of Boston. Lebed was just one of the fascists and Nazis who had their backgrounds covered up by Sullivan. Sullivan also happened to be a secret member of the Irish Republican Army (IRA), an allegiance shared by Bulger. Sullivan eventually told Bulger about an IRA FBI informant, whom Bulger subsequently killed (this is discussed by Loftus on side B of FTR#749).
Now here’s a look at that 1971 NY Times report that initially exposed the Army’s CONUS Intel program. As the article describes, while most of the information fed into this database was provided by local police, the FBI, or public sources, the program still involved sending over 1,000 undercover US Army agents to directly gather intelligence. It was only exposed when Senator Sam J. Ervin Jr., Democrat of North Carolina, contended that prominent political figures in Illinois had been under military surveillance since 1968.
The article also describes how then-General Yarborough was replaced as the head of CONUS Intel in August of 1968 by Maj. Gen. Joseph McChristian. After McChristian was briefed on the program he immediately asked his subordinates for ways to cut it back. But McChristian ran into resistance from the “domestic war room” and other government agencies, particularly the Justice Department, which said it needed this domestic intelligence. All in all, the CONUS Intel chapter of American history is a chapter that’s become ominously relevant for the age of ‘Trump unleashed’:
“In the operation, which was ordered ended last year, 1,000 Army agents gathered personal and political information on obscure persons, as well as the prominent, on advocates of violent protest and participants in legitimate political activity, on the National Association for the Advancement of Colored People and the John Birch Society, on the Black Panthers and the Ku Klux Klan, on the Students for a Democratic Society and the Daughters of the American Revolution. The emphasis was on radicals, black militants and dissenters against the war in Vietnam.”
1,000 Army agents collecting domestic intelligence. Sometimes posing as members of the groups under surveillance, or members of the press, or just random bystanders. It’s kind of a nightmare situation from a constitutional perspective:
The program emerged from the creation of the Army Intelligence Command at Fort Holabird, Md., in 1965, which connected 300 military intelligence field offices across the US:
Then in 1966, following the race riots of 1965 and the first protests against the US war in Vietnam (when federal troops were called in), the Army Intelligence Command instructed those military intelligence offices to start collecting information that might be useful if the Army was called into a city. A side effect of this order was agents making regular visits to campuses and collecting anti-war literature. This resulted in the Counterintelligence Analysis Detachment monitoring expressions of dissent and black militants:
After race riots broke out in Newark and Detroit in 1967, General Yarborough ordered a CONUS Intel communications center known as “Operations IV” to be set up at Fort Holabird, along with a nationwide teletype network that would feed information to it. It was this early telecommunications infrastructure, allowing the collection of information from around the country, that we now know was an early incarnation of the internet:
Following a massive anti-war march on the Pentagon in 1967, a review of the role of federal troops in civil disturbances was set up, leading to “city books,” plans that detailed how a military commander might need to move troops into an urban area:
Another goal of the Counterintelligence Analysis Detachment was predicting when and where a civil disturbance might break out. Then MLK was assassinated and protests broke out in 100 cities, making clear that predicting when and where civil disturbances would break out might not be feasible:
In 1968, the Under Secretary of the Army, David E. McGiffert, ordered the Army to be prepared to send 10,000 troops on short notice to 25 American cities:
Later that year, RFK was assassinated and Congress passed a resolution giving the Secret Service the authority to draw on the Army to protect national political candidates. On June 8, 1968, Paul Nitze, the Deputy Secretary of Defense, signed an order that gave formal instructions to provide the Pentagon with all essential intelligence data on civil disturbances. This led to the creation of computer databases of civil disturbances and information on individuals of interest:
Also in June of 1968, the Directorate for Civil Disturbance Planning and Operations was set up at the Pentagon. This became known as the “domestic war room”. By the end of 1968, this whole operation gave Army intelligence the information it needed to predict how many protestors were going to show up for a planned counter-demonstration at Nixon’s inauguration and what they were planning on doing:
But it was also around this time, right when this vast surveillance bureaucracy was set up and delivering results, that General Yarborough was replaced by Maj. Gen. Joseph A. McChristian. After being briefed on the operation, McChristian ordered that ways be found to cut back on this vast domestic surveillance operation. But the “domestic war room” and the Justice Department pushed back, arguing they needed this information:
Finally, in February of 1969, Under Secretary McGiffert wondered whether the Army might be exceeding its authority and ordered that covert operations end:
But around the same time McGiffert ordered the end of the covert operations, the Army general counsel explored with the Deputy Attorney General Richard G. Kleindienst whether or not the Justice Department could take over these intelligence gathering operations. Kleindienst asserted that the Justice Department lacked the manpower:
And it wasn’t just protestors and dissidents who were targeted for surveillance. Senator Sam J. Ervin Jr., Democrat of North Carolina, charged that CONUS Intel was also spying on politicians:
And that’s what we learned about this operation back in 1971. So were the lessons of this experience actually learned by the American people? We’ll probably find out as we see this ‘Trump unleashed’ period of Trump’s presidency unfold. But it’s pretty clear that all of the pieces are in place for a major domestic operation that utilizes the full power of the state to attack the perceived political enemies of the White House. Trump himself has now made this clear.
Also note how the assassinations of MLK and RFK played into the justification for this domestic military intelligence operation. The mass riots that broke out in cities across the country only fueled the call for the capability of sending thousands of troops into a large number of cities simultaneously in short order, and RFK’s assassination resulted in Congress allowing the Secret Service to call in federal troops to protect candidates. And yet both the MLK and RFK assassinations had government fingerprints all over them. See AFA #46 for much more on the government role in the assassination of MLK (part 2 includes references to Yarborough and the domestic military intelligence gathering going on at this time) and FTR#789 for more on RFK’s assassination and how the US’s progressive leadership was being systematically killed off during this period. It’s a huge and dark aspect of the story of CONUS Intel: it was a military intelligence operation that didn’t just involve gathering massive amounts of data on domestic dissidents. It also involved planning for federal troops to move into cities, and it took place during a period when America’s left-wing leaders were getting killed off by right-wing forces operating within and outside the government (i.e. the real ‘Deep State’). So now that President Trump has made it clear that he feels emboldened to do pretty much whatever he wants to do against his opponents, now is probably a good time for Americans to revisit America’s long history of coddling fascists while overtly and covertly using the full power of the national security state for domestic political agendas. Domestic political agendas that were virtually always targeting progressives.
Here’s a pair of stories that highlight one of the Big Data areas of information Palantir is given access to by the US government: massive volumes of IRS information.
First, here’s an article from December 2018 about how the IRS has turned to Palantir’s AI to find signs of tax fraud. The IRS had signed a $99 million, seven-year contract with Palantir that September. The contract lets the IRS search for tax cheats using Palantir’s software by mining tax returns, bank reports, property records and even social media posts. As the article notes, part of the motive for relying on Palantir for this work is that the IRS has seen its staff shrink so much in recent years, with over $1 billion in cuts to the IRS budget since 2010. The Criminal Investigation division has lost around 150 agents per year as a result of these cuts. These IRS budget cuts, of course, are the work of the Republicans. As a result, AI and machine learning approaches to finding criminal activity are now seen as necessary for the IRS to do its job with fewer staff and resources. It sounds like the Palantir systems have access to the IRS’s Compliance Data Warehouse, which has 40 data sets on taxpayers stretching back more than 30 years.
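To get a concrete sense of what this kind of data mining involves, here’s a minimal sketch of the general technique: unsupervised anomaly detection over per-taxpayer features drawn from multiple data sources. The feature names, toy numbers and model choice below are purely illustrative assumptions on my part, not a description of Palantir’s actual system:

```python
# Minimal sketch of anomaly detection over taxpayer records.
# Feature names, values and the model are illustrative assumptions only;
# nothing here describes Palantir's actual software.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy stand-in for joined data: tax returns, bank reports, property records.
returns = pd.DataFrame({
    "reported_income": [52_000, 48_000, 51_000, 47_000, 45_000, 300_000],
    "bank_deposits":   [53_000, 49_000, 52_000, 48_000, 46_000, 1_200_000],
    "property_value":  [180_000, 175_000, 190_000, 170_000, 165_000, 2_500_000],
    "deductions":      [8_000, 7_500, 8_200, 7_000, 6_800, 250_000],
})

# Fit an isolation forest and flag the most anomalous filings for human review.
model = IsolationForest(contamination=0.1, random_state=0)
flags = model.fit_predict(returns)          # -1 means "flagged as anomalous"
print(returns[flags == -1])
```

The point of the sketch is simply that once the data sets are joined, flagging “suspicious” filings is a routine machine learning exercise; the sensitive part is who gets to hold and join the data in the first place.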
Now, the idea of using AI and machine learning in the IRS’s criminal division seems like a very reasonable approach in general. But this isn’t the IRS implementing these approaches. This is the IRS handing over vast volumes of data to Palantir and using Palantir’s tools to do the analysis. Which implies Palantir has access to these IRS databases and, in turn, implicit control over which potential cases get flagged for review by the IRS. So the IRS’s budget and staff get slashed and the result is the effective privatization of the IRS’s crime detection capabilities. And the company that is providing these crime detection capabilities is owned by Peter Thiel, a top Republican donor and one of the biggest anti-tax fascists in the world:
“The information that Egaas and his colleague Benjamin Herndon, the IRS’ chief analytics officer, shared is the first major glimpse of how the revenue agency is using advanced technology since it signed a seven-year, $99 million deal with Palantir Technologies in September to sniff out tax cheats by mining data from tax returns, bank reports, property records and even social media posts.”
After $1 billion in IRS cuts over the past eight years, the IRS signs a seven-year, $99 million deal with Palantir to help make up for the lost manpower. It’s a pretty nice deal for Palantir, which now has access to more than three decades of information from the IRS’s Compliance Data Warehouse:
And we’re assured that any vendors the IRS contracts with to carry out its tasks will have to go through the same security and compliance checks that IRS staff go through because privacy is paramount. So don’t worry about giving even more information to Palantir because its employees given access to this data have to go through security checks. That’s the level of assurance we’re getting about handing over this vast amount of financial data to a company run by a libertarian fascist:
This is also a good time to recall the story about JP Morgan hiring Palantir to provide AI oversight of JP Morgan’s employees. It turns out the JP Morgan security officer who was given access to Palantir’s observation systems, Peter Cavicchia, ‘went rogue’ and started spying on people all over the company, including the executives. Cavicchia had a team of Palantir employees working for him and unprecedented access to the bank’s internal information, like emails, and the Palantir system had no real limits. Cavicchia went wild spying on people at the bank, resulting in JP Morgan curtailing its use of Palantir’s systems. That’s the kind of company that’s being trusted with these databases of US tax records. And keep in mind that there’s nothing stopping Palantir from combining the information it gets from the IRS with the financial information it’s getting from the banks too. It’s literally positioned to become the leading private Big Data repository of sensitive information and it’s run by a Trump-loving fascist.
Now here’s an example of, ironically, an IRS worker who was just sentenced to five years’ probation for leaking an IRS “suspicious activity report”. The IRS analyst, John Fry, was charged with pulling a “suspicious activity report” related to President Trump’s personal attorney, Michael Cohen. Fry grabbed the report from a confidential law enforcement database and leaked it to Stormy Daniels’s attorney, Michael Avenatti, in May of 2018. Fry grabbed the reports from the Palantir database used by the IRS Criminal Investigation division. It’s an example of the kind of potentially politically powerful information Palantir was given access to with its IRS contract:
“Fry has worked for the IRS since 2008 and was working in the agency’s San Francisco office as of February last year. As an IRS analyst, he had access to various law enforcement databases, including the Palantir database used by the IRS Criminal Investigation division to collect investigative data from multiple sources, according to a criminal complaint filed in February 2019.”
Yep, as an IRS analyst, Fry had access to various law enforcement databases, including the Palantir database used by the IRS Criminal Investigation division to collect investigative data from multiple sources. IRS analysts have access to those databases and now Palantir employees have access too, thanks to these kinds of contracts with the IRS. And Fry tried to access even more reports in a separate criminal database, but those reports were restricted. It raises the question of whether or not that separate restricted database was one of the databases maintained by Palantir. Because if it was maintained by Palantir, we should keep in mind that Palantir’s engineers presumably have access to those restricted files even if IRS agents like Fry don’t. It’s one of the caveats with the assurances we get that the employees of vendors like Palantir who are given access to these databases are going to go through security checks like government employees. Those Palantir employees might effectively have access to ALL of the information, including information their government employee counterparts can’t necessarily access, so if a Palantir employee ‘goes rogue’ the damage they could do is probably far greater than an IRS or other government employee going rogue:
Now, in this case, it was an IRS employee, not a Palantir employee, who did the leaking. But we have no choice about giving IRS employees access to this information. They’re supposed to have access to it, and the risk of leaks like this is an unavoidable risk that comes with the territory. But the risk of Palantir employees abusing this kind of information is a completely avoidable risk. It’s a choice to outsource these AI capabilities to Palantir. There’s no compelling reason to outsource these giant sensitive data operations. Yes, it would be more expensive for the IRS to develop these kinds of AI capabilities on its own, but that higher cost comes with the benefit of not handing over giant databases of sensitive information to private companies. At this point, Palantir is the AI/machine learning outsourcing entity of choice for the US government. It has the systems set up to incorporate new clients and teams trained to carry out the work. And that was a choice. There’s no reason there couldn’t have been a government agency set up to provide these services to other government agencies like the IRS. We could have limited access to these vast databases to government employees, but thanks to the religion of privatization that dominates the US government, Palantir was tapped as a Big Data/AI private outsourcing entity that the US government could trust, and now it has access to probably more information on individual Americans than any other single entity on the planet. If the US government set out to create a privatized version of J. Edgar Hoover’s blackmail operation, it couldn’t have done a better job than putting Peter Thiel in the position he’s in today with Palantir.
And that’s perhaps the biggest lesson from this to keep in mind: While granting access to these vast troves of government databases to Palantir employees is obviously problematic, there’s one particular individual at Palantir that we need to be extra concerned about having access to this information because he’s a fascist with insatiable personal ambition and appears to be amoral and more than willing to abuse such powers if it suits his personal goals. And he’s not an employee. He’s the owner.
Here’s a disturbing update on the bureaucratic maneuverings involving the US Undersecretary of Defense for Research and Engineering, a leading role for developing next-generation weapon systems and technologies. First, recall how former NASA administrator Mike Griffin was appointed as acting Undersecretary of Defense for Research and Engineering with an agenda of overhauling and streamlining the military’s defense technology procurement processes, with the goal of facilitating the rapid development of next-generation technologies utilizing existing commercially available technologies whenever possible and reducing delays caused by cost/risk assessments. Griffin was also a major advocate of the creation of the Space Development Agency (‘Space Force’), a favorite pet project of President Trump. Also recall how Griffin appeared to be behind the push to end the Pentagon’s contract with the JASON group, which was part of his larger agenda of minimizing the review process for approving the development of new platforms.
So Griffin had major visions for overhauling how the US national security state makes decisions on which hi-tech projects to invest in with an eye on speeding the process up by relying more on commercial technology and dramatically limiting the number of people involved with reviewing the proposals. And while it remains to be seen whether or not Griffin’s vision will be fully realized, we do now know that it won’t be Griffin who completes this vision because he just announced his resignation a few weeks ago, along with his deputy Lisa Porter. The news came a day after the House Armed Services Committee recommended removing the Missile Defense Agency from Griffin’s control. So the Pentagon’s two top technology experts are set to be replaced:
“In his role as R&E head, Griffin had the lead on developing new capabilities for the department, such as hypersonic weapons, directed energy and a variety of space-based programs. Included in his portfolio were the Missile Defense Agency and the Defense Advanced Research Projects Agency.”
As we can see, there’s going to be a new vision for the Pentagon’s approach to developing new weapons, along with all the other projects being developed by DARPA with dual-use military/commercial applications. And note how Griffin’s deputy, Lisa Porter, previously served as executive vice president of the CIA’s private investment company In-Q-Tel and director of In-Q-Tel Labs. It’s a reflection of how Griffin’s vision of relying more and more on readily available commercial technology was likely going to involve more national security state investments in the private sector via companies like In-Q-Tel:
And since it’s the Trump administration that’s going to be choosing Griffin’s replacement in the middle of this push to cut reviews and incorporate more off-the-shelf commercial technology into the development of the Pentagon’s next-generation systems, we have to wonder who the Trump administration is going to find, especially given the fact that we’re months away from an election. And we just got our answer: Mike Griffin — who for all his faults was actually technically extremely competent — is going to be replaced by the White House’s chief technology officer, Michael Kratsios. So is Kratsios qualified for a position like this? Well, he’s the White House’s chief technology officer, so one might assume he’s well qualified for a position like this. But as we’ll see, it turns out Kratsios has no technical education and his primary qualification is that he worked for Peter Thiel’s investment company, Clarium Capital, and ended up becoming Thiel’s chief of staff. So the main qualification of the next Undersecretary of Defense for Research and Engineering is whatever experience he acquired as an unqualified White House chief technology officer. Kratsios will continue serving as the White House’s chief technology officer. It’s the kind of situation that suggests Kratsios’s real qualifications are largely going to be limited to his enthusiasm for steering more defense spending towards Thiel’s companies like Palantir:
“Kratsios graduated from Princeton with a bachelor’s degree in political science and a focus on ancient Greek democracy. The person he’s replacing, Michael Griffin, holds a Ph.D. in aerospace engineering and served as a NASA administrator. Indeed, Kratsios will be less academically credentialled than most of the program-managers he oversees. So how did he get here?”
Yes, how exactly did Kratsios get the job of the Pentagon’s top technology officer despite having no discernible technology expertise? He knows the right people. Specifically, Peter Thiel:
But note how part of the sales pitch for Kratsios getting this position is that he knows people in Silicon Valley and that will help facilitate relationships between the Pentagon and Silicon Valley firms. But as one former official described it, it’s not as if Kratsios is actually widely liked in Silicon Valley, in part due to his ties to Thiel and the fact that Thiel has created so many enemies. But Kratsios’s selection is unambiguously good for “the Peter Thiel portion of Silicon Valley.” And obviously obscenely good news for Thiel, who now has even more power than ever. If you’re a Silicon Valley firm that wants to do business with the Pentagon, you had better not piss off Thiel:
Of course, the kind of power wielded by Kratsios is only going to last for as long as he’s the acting Undersecretary, and that may not last long beyond the first months of 2021 if Trump isn’t reelected. But any contracts set up could potentially last much longer. In other words, for Kratsios and Thiel to fully take advantage of this moment they are going to have to move fast and get as many long-term Pentagon contracts set up with Thiel-affiliated firms as possible.
So while the ascension of Mike Griffin to the Undersecretary of Defense for Research and Engineering served as a warning that the defense acquisition process was going to be dramatically sped up, it’s Griffin’s resignation that’s serving as a warning that this process could be kicked into overdrive.
And in probably related news, guess which company just announced it’s going to be doing an IPO this year: yep, Palantir. It just announced it’s filed the IPO papers. So that’s going to be interesting to watch, especially with respect to how any new contracts that get announced this year might impact Palantir’s IPO valuation. But as the following article describes, part of what makes this IPO announcement interesting is that it means Palantir is going to have to be more open to the public than before about the types of contracts it has with clients. Clients that include governments:
“Palantir said this week that it confidentially filed paperwork with the US Securities and Exchange Commission to go public. As with any publicly-traded company, Palantir would need to disclose more of its financial history and open itself to investor scrutiny. And as with any tech company of its size — with a roughly $20 billion valuation — its initial public offering would likely be a high-profile event.”
It’s quite a convergence of events: Thiel gets Kratsios installed in the perfect position to shovel all sorts of Pentagon contracts at Palantir right at the same time Palantir files for an IPO. And there are potentially just months left in the Trump administration, so they have to move fast. And yet in order for this IPO to happen Palantir needs to open itself up to investor scrutiny to a degree it’s never had to deal with before. What kind of horrible secrets will be revealed? And will those horrible secrets actually harm Palantir’s perceived valuation? It’s a defense contractor, after all. Horrible secrets might be seen as an investor perk if they’re profitable horrible secrets. So there are plenty of questions raised by the prospect of a Palantir IPO taking place right when Thiel’s chief of staff becomes the new Pentagon head of technology procurement, including the question of what horrible company Peter Thiel is going to start next with all that new money he’s about to make.
Here’s a ‘good news’/‘bad news’ pair of stories related to the January 6 storming of the Capitol and the subsequent investigation into the identities of people in that insurrectionary mob:
First, here’s a story from a couple of weeks ago that points towards one of the good news aspects of this story. The story is about a wrongful arrest lawsuit emerging from a case where facial recognition AI was used to identify a suspect. The man suing for wrongful arrest, Nijeer Parks, is Black, and as the article notes, studies have repeatedly shown that facial recognition software does not perform as well on Black and Asian faces. In February 2019, Parks was accused of shoplifting candy and trying to hit a police officer with a car at a Hampton Inn in New Jersey.
Curiously, there’s also a mystery as to who conducted the facial recognition search that wrongfully fingered Parks. Parks’s initial lawsuit accused Clearview AI of running the face match search for the New Jersey police. Recall how Clearview is the extremely controversial private facial recognition company that appears to have scraped virtually all of the publicly available internet to amass a vast database of billions of photographs. Also recall how Clearview’s investors include Peter Thiel and the firm appears to have close ties to the far right and the Republican Party.
But there’s a question as to whether or not Clearview’s tools were actually used in Parks’s case. His attorney said he based the conclusion that Clearview AI did the match on previous reports that New Jersey law enforcement was already working with Clearview to provide these services. But Clearview denies that its software was used for the match. And according to the police report of Parks’s arrest, the match was to a license photo, which would reside in a government database that Clearview AI technically cannot access. And yet the state agencies asked to run the face recognition search — the New York State Intelligence Center and New Jersey’s Regional Operations Intelligence Center — said they did not make the match. What’s going on here? Is Clearview AI getting access to state databases of license photo information it’s not supposed to have? We don’t know at this point. But it’s becoming increasingly clear that Clearview AI’s relationship with US law enforcement is deepening.
So what’s the good news here? Well, the good news is that, should AI facial recognition technology be used on the pro-Trump mob of people, at least it was a predominantly white mob. And that means the existing facial recognition algorithms should probably be a lot more accurate, making it much less likely that innocent people will get erroneously accused and face the same kind of nightmare Nijeer Parks faced based on bad facial recognition software:
“Facial recognition technology is known to have flaws. In 2019, a national study of over 100 facial recognition algorithms found that they did not work as well on Black and Asian faces. Two other Black men — Robert Williams and Michael Oliver, who both live in the Detroit area — were also arrested for crimes they did not commit based on bad facial recognition matches. Like Mr. Parks, Mr. Oliver sued over the wrongful arrest.”
Over and over, studies keep finding that facial recognition software doesn’t work as well on non-white faces. And that means there are inevitably going to be a lot more lawsuits over facial recognition-driven wrongful arrests. In the case of Nijeer Parks, it appeared Clearview AI was the company that generated the wrong match, but as the case has unfolded the question of who actually created the match remains open. The wrong match appears to have come from nowhere:
Did Clearview get improper access to the New Jersey license database? Giving the company access would have obvious utility to New Jersey’s law enforcement so it’s not inconceivable that such access was given. But if so, it’s a sign of how deep Clearview AI’s relationship is getting with government agencies...something perhaps not unexpected given Peter Thiel’s investment in the company and the obscenely close relationship between government agencies and Thiel’s Palantir.
So will Clearview AI’s tools be used to help identify the individuals who participated in the raid on the Capitol? We don’t know, but it certainly seems like a possibility. And if so, at least we shouldn’t have to be as worried about mismatches.
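For readers wondering what a “mismatch” actually is in this context, here’s a minimal sketch of how embedding-based face matching generally works: a face image gets reduced to a numeric vector and compared against a gallery by similarity, with a tunable threshold deciding what counts as a “match”. The vectors and threshold below are made-up toy values, not any vendor’s actual model, and the point is only that the threshold is where false matches and missed matches get traded off:

```python
# Toy sketch of embedding-based face matching: a probe vector is compared
# against a gallery by cosine similarity; anything at or above a threshold
# is reported as a "match". Vectors and threshold are illustrative only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

gallery = {  # identity -> face embedding (toy 4-dimensional vectors)
    "person_a": np.array([0.9, 0.1, 0.3, 0.2]),
    "person_b": np.array([0.2, 0.8, 0.5, 0.1]),
    "person_c": np.array([0.4, 0.4, 0.9, 0.3]),
}
probe = np.array([0.85, 0.15, 0.35, 0.25])  # embedding of the unknown face

THRESHOLD = 0.95  # lower it and you get more candidate "matches" (and more false positives)
matches = {}
for name, vec in gallery.items():
    score = cosine_similarity(probe, vec)
    if score >= THRESHOLD:
        matches[name] = round(score, 3)
print(matches or "no match above threshold")
```

The studies cited in these stories amount to the observation that, for non-white faces, the scores produced by commercial models push the wrong gallery entries over that threshold more often.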
But even if we assume the issue of accidental mismatches will be largely addressed when the matching is done on an overwhelmingly white crowd, there’s another form of mismatch that we should probably keep in mind: missed matches that arise from the exceedingly close relationship between Clearview AI and the far right. The kind of relationship that should raise serious questions about whether or not Clearview AI can be trusted not to run cover for its fellow far right allies. As the following BuzzFeed piece from back in March describes, Clearview AI has different “company type” categories of users for its search database. The categories include “Government”, “Bank”, and “Investor”. But there’s also a “Friend” category. And based on the documents BuzzFeed received, those “Friends” include companies like SHW Partners LLC, a company founded by top Trump campaign official Jason Miller. And guess who turned out to be one of Clearview’s “test users”: Alt Right arch-troll Charles C. Johnson. So while we haven’t yet seen any indication that Clearview AI is going to be used to identify the insurrectionary mob, and we haven’t seen any indication that Clearview AI is willing to run cover for far right suspects, we’ve certainly seen strong indications that Clearview is being used by law enforcement agencies and that the company has disturbingly close ties to the far right:
After reading coverage about a new facial recognition tool, James deduced that Johnson had identified him using Clearview AI, a secretive company that’s claimed to have scraped more than 3 billion photos from social media and the web. Last month, a BuzzFeed News investigation found that people at more than 2,200 organizations have tried Clearview’s facial recognition technology, including federal entities such as Immigration and Customs Enforcement, the FBI, and private companies like Macy’s, the NBA, and Bank of America.
Clearview AI previously claimed its tools were exclusively for law enforcement. But BuzzFeed found more than 2,200 entities had used the tool. Including a disturbing number of entities and figures associated with the Republican Party and Trump White House. Jason Miller’s company was even given “Friend” status:
And then there’s the fact that Charles C. Johnson appears to be a test user with full access to just run searches whenever he wants:
Keep in mind that someone like Charles C. Johnson probably personally knows a number of the people who stormed the Capitol. That’s why his ties to Clearview are potentially so significant in this case.
So the good news is that contemporary facial recognition software shouldn’t suffer from too many racially biased inaccurate matches if applied to Trump’s Capitol militia. The bad news is that the rioters are literally going to be ‘friends of friends’ of the company that’s probably doing the matching.
Worse than Watergate? It’s one of the meta questions of the Trump era that is once again being asked following the growing revelations about the Trump Department of Justice spying on not just Democratic members of Congress but also their family members in a quest to find government leakers. It’s the kind of story that raises questions about who wasn’t being spied on by the Trump administration. So with questions about secret government spying once again being asked, it’s worth keeping in mind one of the contemporary contexts of secret government spying operations, in particular spying by Republican administrations: much of the US’s national security analytical capability is being carried out by private entities like Palantir. And since Palantir’s services to clients include the identification of leakers, we can’t rule out the possibility that the Trump administration wasn’t just tasking the Department of Justice in its leak hunt. A private entity like Palantir would almost be ideal for a scandalous operation of that nature, especially for a Trump administration that benefited from an extremely close political alliance between Trump and Palantir co-founder Peter Thiel.
So was Palantir at all involved in this latest ‘worse than Watergate’-level Trump scandal? We have no idea. More importantly, we have no idea if the question is even being asked by investigators. But as the following 2019 piece in Vice makes clear, Palantir was definitely interested in offering leak-hunting services, the kind of service that was almost ideal for working with the Palantir Big Data model of knowing as much as possible about as many people as possible:
“Through its presence on YouTube, Praescient explains its commitment to “applying cutting edge analytic technologies and methodologies to support government and commercial clients.” For example, in one video, the company demonstrates how an organization can use Palantir’s software to find out if one of its employees leaked confidential information to a blogger.”
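The kind of leak-hunting analysis described in that demo, cross-referencing who touched a document against when it surfaced publicly, can be sketched in a few lines. The log format, field names and dates below are invented for illustration; this is not a description of Palantir’s actual workflow, only the generic idea:

```python
# Hypothetical sketch: intersect document-access logs with a leak window
# to produce a shortlist of candidate leakers for investigators.
# Field names and the log format are invented for illustration only.
from datetime import datetime

access_log = [  # (employee, document_id, access timestamp)
    ("alice", "doc-42", datetime(2019, 3, 1, 9, 15)),
    ("bob",   "doc-42", datetime(2019, 3, 2, 14, 5)),
    ("carol", "doc-17", datetime(2019, 3, 2, 16, 40)),
]

leaked_doc = "doc-42"
leak_published = datetime(2019, 3, 3, 8, 0)  # when the blogger posted it

# Anyone who opened the leaked document before publication is a candidate.
candidates = sorted({user for user, doc, ts in access_log
                     if doc == leaked_doc and ts < leak_published})
print(candidates)  # ['alice', 'bob']
```

The technique itself is mundane; what makes it powerful (and dangerous) is the breadth of logs, emails and metadata a platform like this can pull into that intersection.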
While we don’t have any direct evidence the Trump administration utilized Palantir’s leak-hunting services, it seems highly likely the Trump administration was at least aware such services existed. Which raises the question of whether or not the US government was already utilizing these leak-hunting services before this scandal even started. The US government is a major Palantir client and helped start the company in the first place, after all. In that context, it would almost be surprising if these services weren’t being utilized by US agencies:
And that’s why one of the big questions surrounding this story is whether or not questions about Palantir’s potential involvement are being asked at all. Palantir is an obvious suspect for any Trump-related Big Data abuse scandal. Perhaps the obvious suspect. And yet the US government’s relationship with Palantir is also obviously a highly sensitive topic and a large number of people both inside and outside the US national security state probably don’t want to see major public scrutiny of that relationship. For example, it turns out Joe Biden’s current Director of National Intelligence, Avril Haines, was a Palantir consultant from July 5, 2017 to June 23, 2020, placing her at the company during the period of this newly discovered Trump administration spying.
So what was Haines doing at Palantir during this period? Well, here’s where it starts looking bad. Because as the following article describes, Haines scrubbed her work at Palantir shortly after being selected for a potential Biden transition team in the summer of 2020. It’s not a great look.
But another part of the reason the selection of Haines as a national security figure for the Biden administration raised the ire of so many on the Left was the role she played in investigating the Bush administration’s War on Terror torture interrogation programs and the Obama administration’s drone warfare programs. The way critics see it, Haines effectively protected the CIA from meaningful repercussions over the role it played in the torture program and normalized the drone program. She also voiced her support for former CIA director Gina Haspel in 2018 despite the role Haspel played in formulating those torture programs. Haines’s defenders view these as nitpicky criticisms of someone who successfully reined in US drone warfare policies and pressed for maximal disclosures in the torture report.
So Haines is a rather controversial figure outside of her work for Palantir. But it’s also not hard to imagine why Palantir would have been very interested in hiring her. Haines has the crucial experience of legally vetting intelligence programs, something that would obviously be an invaluable skill set for a company like Palantir. And that brings us to Haines’s answer as to what it was she was doing at Palantir: according to Haines, she was mostly just focused on diversity development and mentoring the careers of the young women working there. That was her role for nearly three years. Diversity training.
Sure it’s possible Palantir hired Haines primarily for diversity training for three years and the company just ignored her invaluable experience vetting intelligence programs. But is that a realistic answer? Of course not. It completely smacks of being a cover up. Now, the fact that Haines doesn’t want to talk about what she actually did at Palantir doesn’t mean she was involved with a ‘Worse than Watergate’ Trump administration illegal domestic spying operation. But it does suggest it’s going to be harder than it should be getting answers about what role Palantir may have played in this latest scandal:
“After the Obama administration ended, Haines took several academic and consulting positions. One of them was with Palantir, the data firm allied with Trump that, among other things, aided ICE in rounding up undocumented immigrants. According to Palantir, Haines consulted on promoting diversity within the company’s hiring from July 5, 2017 to June 23, shortly after her position with the Biden transition was announced. As The Intercept first reported, Palantir quickly disappeared from her Brookings Institution biography, smacking of a whitewash. Brookings told The Daily Beast that Haines’ office had requested an update scrubbed of non-active affiliations broader than Palantir. A Biden transition official said Haines removed several affiliations from her bio, not just Palantir, after ending those affiliations as part of her onboarding to the transition.”
All of a sudden her three years of work at Palantir disappeared from her Brookings Institution biography. It’s not hard to imagine reasons for this. Palantir is a scandalous company, especially for a putative Democratic administration, with or without a spying scandal. But it’s also not hard to imagine that the work Haines actually did for Palantir is the kind of work she really doesn’t want to talk about, which is why her claims of focusing on diversity and inclusion ring so hollow. Why scrub your diversity and inclusion work?
We’ll see if any questions about potential roles Palantir may have played in the Trump administration’s domestic spying activities actually end up getting asked. It’s unlikely. But if those questions do end up getting asked it will be interesting to learn more about the diversity and inclusion training being done at one of the world’s leading fascist-owned Big Data NSA-for-hire service providers.
This article talks about how Palantir, the US software developer for US intelligence (funded by Peter Thiel), signed secretive contracts with the Greeks and had secretive talks with European Commission president Ursula von der Leyen (originally from Germany) as well as with the then EU competition commissioner, Margrethe Vestager, who is now in charge of making the EU fit for the digital age. The article raises concerns about violations of the EU’s data protection laws, including Palantir’s access to Europol data, investigations and witness testimony.
Palantir’s software “Gotham” has been used by intelligence services in the UK, the Netherlands, Denmark and France and was built for investigative analysis. Some Palantir engineers call what it does “needle-in-haystack” analysis that agencies can use to look for bad actors hiding in complex networks.
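“Needle-in-haystack” link analysis of this sort can be illustrated with a generic graph library: build a network of entities and relationships, then rank the nodes that bridge otherwise separate clusters. The toy graph below is an assumption for illustration and says nothing about Gotham’s actual data model or algorithms:

```python
# Generic link-analysis sketch: rank entities that bridge otherwise
# separate clusters in a relationship graph. Toy data, not Gotham.
import networkx as nx

G = nx.Graph()
# Two loosely connected clusters of phone/bank/travel relationships...
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster B
    ("a3", "broker"), ("broker", "b1"),          # ...joined by one intermediary
])

# Betweenness centrality surfaces the node sitting on paths between clusters.
scores = nx.betweenness_centrality(G)
needle = max(scores, key=scores.get)
print(needle, round(scores[needle], 2))  # 'broker' scores highest
```

The analytic idea is simple; the controversy is about the scale and sensitivity of the data the graph is built from.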
The software is also claimed to be predictive of crime, but its accuracy is controversial and has not been disclosed. There is a concern about an imbalance of power, and of knowledge about how data is used, between software firms and the public interest. Private power over public processes is growing exponentially with access to data and talent.
Palantir is also getting in on the ground floor of a new EU cloud infrastructure initiative called GAIA‑X.
Implicitly, if you read between the lines of the article, Palantir’s software is marketed for intelligence gathering but is likely an espionage tool used to acquire data on individuals to be used for political manipulation.
Important connections to note: Palantir’s CEO, Alex Karp, studied in Germany at Frankfurt University under the influential philosopher Jürgen Habermas. Michael Kratsios was chief technology adviser to then-president Donald Trump. Kratsios joined the White House from a role as chief of staff to Peter Thiel, the billionaire Silicon Valley tech investor, co-founder of Palantir and PayPal, and key investor in Facebook.
The Guardian, April 2, 2021
by Daniel Howden, Apostolis Fotiadis, Ludek Stavinoha, Ben Holst.
https://www.theguardian.com/world/2021/apr/02/seeing-stones-pandemic-reveals-palantirs-troubling-reach-in-europe?CMP=Share_iOSApp_Other
Seeing stones: pandemic reveals Palantir’s troubling reach in Europe
Covid has given Peter Thiel’s secretive US tech company new opportunities to operate in Europe in ways some campaigners find worrying
The 24 March, 2020 will be remembered by some for the news that Prince Charles tested positive for Covid and was isolating in Scotland. In Athens it was memorable as the day the traffic went silent. Twenty-four hours into a hard lockdown, Greeks were acclimatising to a new reality in which they had to send an SMS to the government in order to leave the house. As well as millions of text messages, the Greek government faced extraordinary dilemmas. The European Union’s most vulnerable economy, its oldest population along with Italy, and one of its weakest health systems faced the first wave of a pandemic that overwhelmed richer countries with fewer pensioners and stronger health provision. The carnage in Italy loomed large across the Adriatic.
One Greek who did go into the office that day was Kyriakos Pierrakakis, the minister for digital transformation, whose signature was inked in blue on an agreement with the US technology company, Palantir. The deal, which would not be revealed to the public for another nine months, gave one of the world’s most controversial tech companies access to vast amounts of personal data while offering its software to help Greece weather the Covid storm. The zero-cost agreement was not registered on the public procurement system, neither did the Greek government carry out a data impact assessment – the mandated check to see whether an agreement might violate privacy laws.
The questions that emerge in pandemic Greece echo those from across Europe during Covid and show Palantir extending into sectors from health to policing, aviation to commerce and even academia. A months-long joint investigation by the Guardian, Lighthouse Reports and Der Spiegel used freedom of information laws, official correspondence, confidential sources and reporting in multiple countries to piece together the European activities of one of the most secretive companies in the world. The findings raise serious questions over the way public agencies work with Palantir and whether its software can work within the bounds of European laws in the sensitive areas where it is being used, or perform in the way the company promises.
Greece was not the only country tempted by a Covid-related free trial. Palantir was already embedded in the NHS, where a no-bid contract valued at £1 was only revealed after data privacy campaigners threatened to take the UK government to court. When that trial period was over the cost of continuing with Palantir came in at £24m.
The company has also been contracted as part of the Netherlands’ Covid response and pitched at least four other European countries, as well as a clutch of EU agencies. The Palantir one-pager that Germany’s health ministry released after a freedom of information request described Europe as the company’s “focus of activities”.
Founded in California in 2003, Palantir may not have been cold-calling around European governments. It has, at times, had a uniquely powerful business development ally in the form of the US government.
On 23 March, the EU’s Centre for Disease Control (ECDC) received an email from their counterparts at the US CDC, extolling their work with Palantir and saying the company had asked for an introduction.
Palantir said it was normal practice for some of its “government customers to serve as reference for other prospective customers”. It said the ECDC turned down its invitation “out of concern of a risk of the contact being perceived as prejudicing ECDC’s independence”.
PHOTO CAPTION: A Palantir banner outside the New York Stock Exchange on the day of its initial public offering on 30 September, 2020. Photograph: Andrew Kelly/Reuters
The Greek government has declined to say how it was introduced to Palantir. But there were senior-level links between Palantir, the Trump administration and the Greek government. The US ambassador to Greece, Geoffrey Pyatt, has spoken publicly of the contacts between Pierrakakis and Michael Kratsios, a Greek-American and chief technology adviser to then-president, Donald Trump. Kratsios joined the White House from a role as chief of staff to Peter Thiel, the billionaire Silicon Valley tech investor and founder of Palantir.
When news of Greece’s relationship with Palantir was disclosed, it was not by government officials or local media but by ambassador Pyatt. A teleconference followed in December between Greece’s prime minister, Kyriakos Mitsotakis, and Palantir CEO Alex Karp, where the latter spoke of “deepening cooperation” between them.
Journalists who asked for a copy of the agreement were refused and it took opposition MPs to force disclosure via parliament. The tone then abruptly changed.
Eleftherios Chelioudakis, a data protection lawyer and member of digital rights group Homo Digitalis, was among the first people to read the two-page document and was stunned by what he found. It appeared to give Palantir phenomenal access to data of exactly the scale and sensitivity that would seem to require an impact assessment. Worse, a revision of the agreement one week after the first deleted any reference to the need to “pseudonymise” the data – to prevent it being relatable to specific individuals. This appears to be in breach of the General Data Protection Regulation (GDPR), the EU law in place since 2018 that governs how the personal information of people living in the EU can be collected and processed. Palantir says that, to its knowledge, processing was limited to “open-source pandemic and high-level Greek state-owned demographic data directly relevant to managing the Covid-19 crisis”.
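To clarify the term for readers: under the GDPR, "pseudonymising" data means replacing direct identifiers with artificial ones so that records can no longer be linked to specific individuals without a separately held key. Here is a minimal sketch of the idea in Python; the field names and the keyed-hash approach are illustrative assumptions, not a description of how the Greek system or Palantir's software actually handles data.

```python
import hashlib
import hmac

# Hypothetical secret key, held separately from the data set (illustrative only).
PSEUDONYMISATION_KEY = b"replace-with-a-separately-stored-secret"

# Assumed direct identifiers for the example; a real schema would differ.
DIRECT_IDENTIFIERS = {"name", "national_id", "phone"}

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with keyed hashes so the record can no
    longer be linked to a person without access to the key."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hmac.new(PSEUDONYMISATION_KEY, str(value).encode(),
                              hashlib.sha256).hexdigest()
            out[field] = digest[:16]  # shortened pseudonym
        else:
            out[field] = value  # non-identifying attributes keep their analytic value
    return out

# Example record (entirely fictional):
print(pseudonymise({
    "name": "Maria P.",
    "national_id": "XX123456",
    "phone": "+30 690 000 0000",
    "age": 67,
    "region": "Attica",
    "covid_test": "positive",
}))
```

The clause deleted from the revised Greek agreement was meant to guarantee exactly this separation: whoever processes the data sees only pseudonyms, while the information linking them back to real people is held elsewhere.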
PHOTO CAPTION: The Greek prime minister, Kyriakos Mitsotakis (centre), and the minister of digital governance, Kyriakos Pierrakakis (left), chat with the US ambassador to Greece, Geoffrey Pyatt (right), in Thessaloniki, Greece, in September 2019. Photograph: Kostas Tsironis/EPA
The Greek government has denied sharing patient data with Palantir, claiming that the software was used to give the prime minister a dashboard summarising key data during the pandemic. However, the contract, seen by the Guardian, specifically refers to categories of data that can be processed and includes personal data. It also includes a clause that has come to be known as an “improvement clause”. These clauses, identified in the rare examples of Palantir contracts released in answer to freedom of information requests, have been studied by Privacy International, a privacy watchdog in the UK. “The improvement clauses in Palantir’s contracts, together with the lack of transparency, are concerning because it enables Palantir to improve its products based on its customers’ use of the Palantir products,” said Privacy International’s Caitlin Bishop.
The company rejects this reading of their activities and states: “Palantir does not train algorithms on customer data for Palantir’s own benefit or to commercialise and sell to Palantir’s other customers.”
“We do not collect, mine, or sell personal data from or for our customers,” it said, adding: “Palantir does not use its customers’ data to build, deploy, transfer, resell, or repurpose machine learning or artificial intelligence models or ‘algorithms’ to other customers.”
Greece’s data protection authority has since launched an investigation. The government says it has ended cooperation with Palantir and that all data has been deleted.
Lord of the Rings mystique
Even by the standards of Silicon Valley tech companies, Palantir has been an outlier in creating a mythology around itself. The name is taken from the powerful and perilous “seeing stones” in Tolkien’s Lord of the Rings. Its leadership often claims the mantle of defenders of the western realm. Early employees cast themselves as brave hobbits and one of Thiel’s co-founders wrote about his departure from the company in a post entitled “leaving the Shire”.
But Palantir polarised opinion in the US before the backlash against big tech. Its critics do not focus on the fortune its founder Thiel made with PayPal or as an early investor in Facebook but on his support for Trump. Palantir has faced protests in the US over its role in facilitating the Trump administration’s mass deportation of undocumented migrants through its contract with US immigration enforcement agency ICE.
Palantir was also reported to have been involved in discussions over a campaign of disinformation and cyberattacks directed against WikiLeaks and journalists such as Glenn Greenwald. It later insisted that the project was never put into effect and said its association with smear tactics had “served as a teachable moment”.
And Palantir was willing to step in at the Pentagon after Google employees rebelled over its involvement in Project Maven, which seeks to use AI in battlefield targeting.
Until Palantir undertook a public listing in September last year, relatively little was known about its client list beyond services to the US military, border enforcement and intelligence agencies.
Media coverage of Palantir has been shaped by its unusual protagonists as well as its national security clients. The company’s CEO is Alex Karp, who studied in Germany at Frankfurt University under the influential philosopher Jürgen Habermas, and often makes corporate announcements in philosophical language in unconventional clothing or locations. His most recent message was tweeted from a snowy forest.
PHOTO CAPTION: Palantir’s CEO Alex Karp. Photograph: Thibault Camus/AP
Rumours over Palantir’s possible involvement [with the CIA] in the operation to find Osama bin Laden have been met with coy non-denials.
The colourful backstory has added mystique to a company which, when it listed on the New York stock exchange, had only 125 customers.
Why did Palantir meet Von Der Leyen?
Sophie in ’t Veld, a Dutch MEP, has tracked Palantir’s lobbying of Europe’s centres of power. She notes the company’s unusual “proximity to power” and questions how it was that an EU delegation to Washington in 2019 met with US government officials and only one private company, Palantir. What was discussed, she wanted to know, when Karp met the president of the European commission, Ursula von der Leyen, or when Palantir met the EU’s then competition commissioner, Margrethe Vestager, who is now in charge of making the EU fit for the digital age?
PHOTO CAPTION: EU commission president Ursula Von der Leyen (left) and executive vice-president of the European Commission for A Europe Fit for the Digital Age, Margrethe Vestager (right), in Brussels, Belgium, on 19 February 2020. Photograph: Olivier Hoslet/EPA
In June 2020, In ‘t Veld sent detailed questions to the commission and published her concerns in a blogpost headlined: “Palantir is not our friend”. The commission took eight months to give even partial answers but the company emailed In ‘t Veld three days after she went public with her questions, offering a meeting. She talked to them but questions why the company felt the need to contact “an obnoxious MEP” to reassure her.
In ‘t Veld characterises the commission’s eventual answers as “evasive” with officials saying no minutes were kept of the conversation between Von Der Leyen and Karp because it was on the sidelines of the World Economic Forum at Davos and they already knew each other.
PHOTO CAPTION: Member of European Parliament, Sophie in ‘t Veld, at European parliament headquarters in Brussels, Belgium. Photograph: Wiktor Dąbkowski/ZUMA Press/Alamy
“There’s something that doesn’t add up here between the circumventing of procurement practices, meetings at the highest level of government,” said In ‘t Veld, “there’s a lot more beneath the surface than a simple software company.”
For its part, Palantir says it is “not a data company” and all data it interacts with is “collected, owned, and controlled by the customers themselves, not by Palantir.” The company says “it is essential to preserve fundamental principles of privacy and civil liberties while using data” and that Palantir does not build algorithms off its customers’ data in any form but provides software platforms that serve as the central operating systems for a wide variety of public and private sector institutions.
Palantir said: “We build software products to help our customers integrate and understand their own data, but we don’t collect, hold, mine, or monetize data on our own. Of course, our engineers may be required to interact with some customer data when they are at customer sites, but we are not in the business of collecting, maintaining, or selling data.”
Europol entanglement
Covid has been the occasion for a new business drive but Palantir did not arrive in Europe with the pandemic. It has also found opportunities in European fear of terrorism and its sense of technological inferiority to Silicon Valley.
When health concerns are driving business, the software product Palantir sells is Foundry; when terrorism fears are opening up budgets, it is Gotham.
Foundry is built to meet the needs of commercial clients. One of its champions in Europe is Airbus, which says the system has helped identify supply chain efficiencies. Foundry has more recently found its way into governments, and Palantir’s CEO, Karp, has called Foundry an “operating system for governments”.
Gotham has long been used by intelligence services in the UK, the Netherlands, Denmark and France and was built for investigative analysis. Some Palantir engineers call what it does “needle-in-haystack” analysis that agencies can use to look for bad actors hiding in complex networks.
Since 2013 Palantir has made a sustained drive to embed itself via Gotham in Europe’s police systems.
The first major opportunity to do this came at the EU’s law enforcement agency, Europol, when it won a tender to create a system to store and crunch the reams of data from member states’ police forces. The Europol Analysis System was meant both to store millions of items of information – from criminal records, to witness statements to police reports – and crunch this data into actionable intelligence.
The agreement signed in December 2012 with the French multinational Capgemini, subcontracted the work to Palantir and Gotham.
Over the next three years, heavily redacted Europol documents, obtained under freedom of information laws, tell a story of repeated delays, “low delivery quality” and “performance issues” related to Gotham. Amid the blacked-out lines there is mention of technical shortcomings such as the “inability to properly visualize large datasets”.
By May 2016 the issues were so entrenched that Europol agreed a settlement with Palantir, the terms of which they have refused to disclose. Capgemini, the contractor which brought in Palantir, also declined to comment.
It is also clear that Europol considered suing Palantir and Capgemini. In an internal briefing document ahead of an October 2018 meeting of the organisation’s management board, it is made clear that litigation was considered but rejected: “despite the performance issues identified [litigation] is likely to lead to costly court proceedings for which the outcome is uncertain.”
Palantir declined to comment on these issues specifically but said: “Any issues arising at Europol had nothing to do with the software’s ability to meet GDPR or data protection requirements, and were solely the result of a large, complex software implementation with multiple stakeholders.”
The caution was well advised. Palantir has form for suing large public bodies, including the US army, and winning.
When access was requested from Europol to all records relating to contractual matters with Palantir, 69 documents were identified, but the EU agency twice refused full access to 67 on the grounds of “public security”. An appeal has been lodged with the European ombudsman’s office, a complaint that was ruled admissible and a decision is pending.
The settlement did not disentangle Europol from Palantir: the project was brought in-house and the effort to use Gotham as a data repository was abandoned, but Gotham remained the main analysis component. In July 2017, a real-world trial of the system on counter-terrorism work found Gotham “suffering from significant performance issues”.
Despite these issues, Palantir has received €4m (£3.4m) from Europol.
The concerns went beyond performance when the EU’s privacy watchdog, the European data protection supervisor, began inspections. Heavily redacted copies of their reports in 2018 and 2019 register the inspectors’ concern that Gotham was not designed to ensure that the Europol analysts made it clear how people’s data had come to be entered into the system. The absence of this “personal implication” meant the system could not be guaranteed to distinguish whether someone was a victim, witness, informant or suspect in a crime. This raises the prospect of people being falsely implicated in criminal investigations or, at the very least, that their data may not have been handled in compliance with data protection laws.
Europol, as the data controller, said that such data was “treated with the greatest care”.
PHOTO CAPTION: The Europol building in The Hague, Netherlands. Photograph: Eva Plevier/Reuters
‘The hottest shit ever in policing’
In 2005, 15 European countries signed a deal to boost counter-terror efforts by exchanging DNA, fingerprints and vehicle registration data. This led to an IT buying spree as police authorities sought ways to get their systems to talk to each other. Norway was a latecomer when it signed up in 2009 but in 2016 a high-ranking delegation from the Norwegian police flew to Silicon Valley to meet Palantir. When they returned the force decided to set up a more far-reaching system to be called Omnia, running on Gotham.
The abrupt decision caught the attention of Ole Martin Mortvedt, a former senior police officer nearing retirement who was editing the national police union’s in-house magazine. When he started asking questions he found it impossible to establish who had gone to Silicon Valley and why the project had been expanded. The only representative of Palantir whom he could talk to in Norway was a relatively junior lawyer.
A frustrated Mortvedt started calling his former pupils from the police academy where he taught for many years who were now in mid-ranking positions in the police. Over the next three years, his police sources described a litany of missed deadlines.
“Those people who went to Silicon Valley, they were turned around by what Palantir had to offer,” said Mortvedt.
The system was handed over in 2020 but is still not functional. Palantir said that the problems were “not a function of our collaboration and, to the best of our knowledge, have their root cause elsewhere.”
The Norwegian police confirmed that Omnia has cost 93m Norwegian kroner, or slightly less than €10m.
Palantir met Danish officials in Silicon Valley two years earlier than their Norwegian counterparts. The Danes ended up buying Gotham for both the police and intelligence services as part of a counter-terrorism drive.
Christian Svanberg, who would become the data protection officer for the system, named POL-INTEL, said he wrote the relevant legislation enabling POL-INTEL.
The tender, which was made public, called for a system with cross-cutting access to existing police and intelligence databases, information exchange with Europol and open-source collection of new information. It also foresaw the need for algorithms to provide pattern recognition and social media analysis.
It was, in other words, a prescription for a predictive policing system, which vendors claim can help police predict where crimes will occur (place-based) and who might commit them (person-based). One of Denmark’s district police chiefs called it a “quantum leap into modern policing”.
Palantir said it understood from the Danish police that they did not use POL-INTEL for predictive policing.
Danish authorities pronounce themselves happy with the performance of POL-INTEL but have so far refused to release an internal evaluation or disclose data to enable any independent assessment of the results.
The police have refused to disclose even redacted versions of the internal evaluations of POL-INTEL. Despite Danish insistence on privacy safeguards with POL-INTEL, the only known internal assessment of the system found that police users had been using it to spy on the whereabouts of former Arsenal footballer, Nicklas Bendtner. A number of police officers were disciplined over the matter.
Norway and Denmark were not alone in their senior police’s enthusiasm for predictive policing: the German state of Hesse purchased a similar tool from Palantir in a tender that the opposition in the state parliament considered so opaque that a committee of inquiry dealt with it.
A German police official familiar with the development of predictive tools at the time says that senior officers had bought into the hype: “What was promoted three years ago was the hottest shit ever in policing. What we got wasn’t what was expected. You can’t predict crime.”
The Interior Ministry in Hesse said: “The Hessian police has had consistently positive experiences in its cooperation with Palantir.”
A bunker in The Hague
Since the EU passed its GDPR legislation in 2018, setting a global standard for the privacy rights of its citizens, it has talked itself up as a safe haven where digital rights are protected as human rights. While GDPR may still be poorly understood and mainly associated with browser requests to accept cookies, there is a watchdog. The European data protection supervisor and his staff of 75 face the immense task of ensuring that European agencies and the private companies they contract play by the rules. The supervisor himself is Polish lawyer Wojciech Wiewiórowski, who led the inspections at Europol previously. Predictably cautious in his choice of words, he stops short of calling for controversial companies such as Palantir to be kept away from sensitive European data. But he does counsel caution.
“It doesn’t make a difference if systems have been produced in the EU or outside of it when considering their compliance with data protection requirements. But software produced by companies that might have connections with intelligence services of countries outside the EU should be of special interest for us.”
It is not always clear who is taking more interest in who. Palantir has shown it has reach and influence over the shaping of knowledge around data and privacy in Europe. Some of the continent’s leading thinkers on big data, artificial intelligence and ethics have worked with the company in a paid capacity. One of them is Nico van Eijk, who held a professorship at the University of Amsterdam. Meeting Van Eijk in his current job is an involved process. These days his office is in a bunker in The Hague in the same building as the Netherlands’ Council of State. It is here that he runs the committee that oversees the Dutch intelligence services.
You can only enter if you leave all digital devices at the entrance – no phones, laptops, no recording devices. Throughout the Covid crisis employees could not work from home as their communications cannot be trusted to an internet connection. The committee has real-time access to all data and investigations by the military and general intelligence services of the Netherlands.
At a meeting in January 2021, Van Eijk declined to discuss a previous role he held on Palantir’s advisory board but commended the company on having an ethical board in the first place. Palantir said Van Eijk was an adviser on privacy and civil liberties and that board members are “neither asked nor expected to agree with or endorse decisions made by Palantir” and are “compensated for their time”.
Corporations, including those in the tech industry, are sponsoring an increasing number of academics, with potential implications for the production of knowledge on data and privacy.
Many of Van Eijk’s colleagues at the University of Amsterdam take a different view of Palantir. Ahead of the 2018 Amsterdam Privacy Conference (APC), one of Europe’s premier events on the subject, more than 100 leading scholars signed a complaint that stated: “The presence of Palantir as a sponsor of this conference legitimises the company’s practices and gives it the opportunity to position itself as part of the agenda … Palantir’s business model is based on a particular form of surveillance capitalism that targets marginalised communities and accelerates the use of discriminatory technologies such as predictive policing.”
Palantir said it is not a surveillance company. “We do not provide data collection services, including tools that enable surveillance of individual citizens or consumers.”
Inferiority complex
Europe’s dependence on US tech is not a matter of concern only for human rights advocates and privacy scholars. Some of the biggest businesses in Germany and France have been in talks over the creation of something akin to a safe haven for their own commercially sensitive data. Those discussions revealed that German car manufacturers were just as nervous as any privacy campaigner about releasing their data to US cloud services, such as Amazon Web Services.
Marietje Schaake, the director of Stanford’s Cyber Policy Centre, warned that Europe’s “tech inferiority complex” was leading to bad decisions: “We’re building a software house of cards which is sold as a service to the public but can be a liability to society. There’s an asymmetry of knowledge and power and accountability, a question of what we’re able to know in the public interest. Private power over public processes is growing exponentially with access to data and talent.”
Palantir says that “it successfully operates within and promotes the goals of the GDPR and its underlying principles”. It insists it is not a data company but rather a software company that provides data management platforms. It has for a decade, it says, worked in Europe with commercial and government organisations, “helping them successfully meet data protection requirements at scale as mandated at a European and national level”.
The latest European bid for greater digital sovereignty is GAIA‑X, wrongly billed in some quarters as a project to make a Euro-cloud. It is, in fact, an association that will seek to set the rules by which Europe-based companies do business with cloud computing services. Just as GDPR means that Europeans’ personal data has to be treated differently on Facebook than that of users outside the EU, GAIA‑X would mean commercial data is more tightly controlled on the cloud. Despite its relative obscurity, GAIA‑X may go on to have profound implications for the business model of US tech companies, or hyperscalers.
It was a surprise therefore when Palantir proclaimed itself, among other companies, a “day 1 partner” of GAIA‑X three months before any decision had been made. Officials at the association complained of “delinquent partners” who had jumped the gun for reasons of commercial advantage. Ultimately, Palantir was allowed to join.
Palantir says it did nothing that other companies involved with GAIA‑X did not do.
The chairman of GAIA‑X, Hubert Tardieu, formerly a senior executive at French tech firm ATOS, noted that the association did not want to get mired in lawsuits from “companies in California who know a lot about antitrust law.”
Get ready. It’s coming. What’s coming? We don’t know. And it’s unclear whether Palantir knows. But following reports that Palantir just purchased $50.7 million in gold bars and announced that it’s now accepting payment in both gold and bitcoin for its software in anticipation of another “black swan event”, we have to ask: what is Palantir seeing that it isn’t telling us? Whatever it is, it doesn’t appear to bode well for the US. At least not for the dollar. The move comes roughly a year after the company relocated from San Francisco to Denver.
This is probably a good time to recall that President Biden’s Director of National Intelligence, Avril Haines, was a Palantir consultant from July 5, 2017 to June 23, 2020, when she left to join the Biden campaign. It’s a reminder that Palantir’s intelligence assessments probably include plenty of information flows from the numerous people in the intelligence community with ties to the company.
At the same time, it’s worth keeping in mind that when a company known for its threat analysis capabilities makes big public purchases like this, that’s kind of an advertisement for Palantir’s services. We could be looking at some creative marketing tactics. Either way, for a company that’s effectively a privatized NSA, it’s quite a signal to send to the world:
“The company spent $50.7 million this month on gold, part of an unusual investment strategy that also includes startups, blank-check companies and possibly Bitcoin. Palantir had previously said it would accept Bitcoin as a form of payment before adding precious metals more recently.”
Yeah, it’s certainly an unusual investment strategy. And note the explanation for this unusual strategy, according to the company’s COO: it’s not that there’s a specific black swan event. It “reflects more a worldview”, where “You have to be prepared for a future with more black swan events”:
And that’s possibly the most ominous answer we could have received. There’s no specific black swan event the company is protecting against. Instead, it seems the company has adopted a worldview that assumes a higher rate of black swan events in the future. A worldview rooted in a deepening sense of foreboding doom.
Although who knows, maybe there is something very specific the company is preparing against. It’s not like they would tell us. Well, other than indirectly telling us maybe through weird public investment strategies like this.
Just how much data is Amazon collecting on us? That was the question asked in a new Reuters report when a group of seven reporters requested from Amazon profiles of all of the information the company has on them, taking advantage of a new feature Amazon began making available to US customers in early 2020 after failing to defeat a 2018 California ballot measure requiring such disclosures.
This is far from the first time these kinds of questions have been asked about Amazon’s highly invasive products designed for areas like the bedroom. Recall how Amazon’s Echo device — which comes with cameras and an AI — was capturing incredible amounts of information that was potentially being sold to third parties. So, as we might expect, the reporters who requested their data summaries were in for a bit of a shock when they got the stunningly detailed array of information on themselves and their families gathered from all the different Amazon-sold products available, from the Amazon.com website to Kindle e‑readers, Ring smart doorbells, and Alexa smart speakers. Even with their jaded expectations, the reporters were stunned to learn the level of detail collected about them. One reporter found Alexa alone had been capturing roughly 70 voice recordings from their household daily, on average, for the prior three and a half years. And while Amazon has long assured customers that any recordings it stores only include the questions asked by the user, the reporters found the recordings often went on for much longer. Amazon said it’s working on fixing those bugs.
Perhaps the most surprising finding was the captured recordings of children asking Alexa how they could get their parents to let them play. Alexa apparently retrieved exactly this kind of advice from the website wikiHow, advising the children on how to refute common parental arguments such as “too violent,” “too expensive,” and “you’re not doing well enough in school.” Amazon said it does not own wikiHow and that Alexa sometimes responds to requests with information from websites. So while the information being captured by Amazon’s ubiquitous products is a major part of this story, there’s also the question of what kind of information those products are feeding users, in particular all the kids who have discovered that Alexa will act as an ally in intra-household child-parent struggles.
Finally, we got an update on Amazon’s annual reports on how it complies with law enforcement requests for this kind of data. The update is that Amazon is no longer giving that info out. Why the restriction? The company explains that it expanded its law enforcement compliance report to be a global report and therefore decided to streamline the data. Yep. A nonsense non-answer. Which is the kind of answer that suggests governments are probably having a field day, with shades of the NSO Group story here. So while the main story here is about the collection of all of this private data by Amazon, we can’t forget that there’s nothing stopping Amazon from sharing that data, especially with the governments it needs permission from to continue operating:
“One reporter’s dossier revealed that Amazon had collected more than 90,000 Alexa recordings of family members between December 2017 and June 2021 – averaging about 70 daily. The recordings included details such as the names of the reporter’s young children and their favorite songs.”
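As a quick back-of-the-envelope check, the two figures in that passage are consistent with each other: December 2017 through June 2021 is roughly 1,300 days, and 90,000 recordings over that span works out to about 70 per day. A few lines of Python (with approximate dates, since the report only gives months) make the arithmetic explicit.

```python
from datetime import date

# Approximate span reported by Reuters: December 2017 through June 2021.
span_days = (date(2021, 6, 30) - date(2017, 12, 1)).days  # about 1,307 days

recordings = 90_000
print(span_days, round(recordings / span_days, 1))  # roughly 69 recordings per day
```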
70 recordings daily. That’s what Alexa alone was capturing in one reporter’s household. Information from the whole spectrum of Amazon products is collated into a single customer record, gathering everything from Amazon.com website searches and purchases (something we should expect to be tracked) down to the words highlighted in your Kindle e‑reader (something one would probably not assume was happening). But what is arguably the most scandalous aspect of this situation is that the reporter was learning about these daily captures for the first time after they had been going on for nearly four years. Yes, Amazon technically discloses all of this information capture, but that’s all part of the scandal. According to the rules of commerce, you can apparently collect whatever information you want on customers as long as you tuck away a disclosure of that data capture somewhere in the massive privacy policy:
And then there’s this remarkable anecdote about the kinds of questions children are posing to Alexa: kids were literally getting advice on how to argue with their parents from a website that Alexa was accessing. And then, of course, this was all recorded. It’s an ironic indication of the scale of the potential scandal here: the company is literally recording so much data it’s capturing the data on its other abuses:
Finally, note that when it comes to potential abuses of this captured data, it’s in the transfer of that data to government agencies where the damage can really explode. This is a globally sold product, after all. It’s not just the US national security state that’s likely getting access to all of this incredibly private data. Pretty much any government is potentially going to have a right to request access to it under certain circumstances. What kind of circumstances? Well, that presumably depends in part on the local laws. That’s all part of why Amazon’s decision last year to stop disclosing how often it complies with US law enforcement requests is potentially so alarming. Amazon’s explanation for ending the report was that it expanded the report to cover compliance globally and therefore streamlined the available information. It’s not exactly a compelling explanation. So how much of this data is being shared with different governments? We don’t know, other than that we can be pretty sure it’s enough to embarrass Amazon into ‘streamlining’ its reports and limiting that info:
Did Amazon “streamline” the information on how often it complies with US law enforcement requests right out of its reports out of a sense of customer convenience? That’s the absurd story the company is telling us. It’s not a great sign.
So the overall Reuters update on the collection of personal information by Amazon appears to be that it is indeed worse than previously recognized. Which is about as bad as we should have expected.
Right-wing outrage over ‘Big Tech censorship’ of conservative voices has long been a bad-faith argument made in the spirit of ‘working the refs’ and gaslighting. It’s no secret that the social media giants have been repeatedly caught giving special treatment to right-wing voices on their platforms and making special exceptions to excuse and facilitate far right disinformation. Disinformation that synergizes with Big Tech’s algorithms that prioritize ‘engagement’, in particular the anger- and fear-driven engagement the far right specializes in.
So it’s worth pointing out that when the GOP has been waging its ‘war on Big Tech’ in recent years — endlessly railing against alleged mass censorship by treating each individual instance of a conservative user’s content being pulled for violating the rules as an example of political discrimination — this isn’t just a cynical strategy designed to give social media platforms the ‘space’ to give right-wing users more lenient treatment than they would otherwise receive. It’s also a strategy that advocates for the unchecked exploitation of those profit-maximizing algorithms by the platforms themselves. In other words, if Big Tech ever truly did completely cave to these far right demands, and allowed the platforms’ algorithms to be completely unchecked amplifiers of ‘engaging’ far right content as they have been in the past, that wouldn’t just help the far right. It would also be a great way for these social media giants to maximize their profits.
It’s long been clear that Big Tech and the GOP are playing some sort of cynical game of political footsie with all of these phony ‘Big Tech is censoring us’ memes. It’s win-win. The GOP can pretend to take a populist stance on something and Big Tech can pretend it’s actually doing something to adequately address the fact that its platforms remain the key tools of fascist politics globally. But given how this conservative political campaign is literally fighting for Big Tech’s right to operate in an unchecked, profit-maximizing manner, we have to ask: just how much secret coordination is there between the GOP and Big Tech in creating and orchestrating the GOP’s anti-Big Tech propaganda? Because at this point, you almost couldn’t come up with a more effective lobby for maximizing Big Tech’s profits than the army of Republican officials claiming to be very upset with them:
“This is the economic context in which disinformation wins. As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.”
An economic paradigm centered on maximizing profits by processing ever-increasing volumes of personal information for the purpose of predicting user behavior. And yet this paradigm can’t actually discern truth. A giant information-processing-and-delivering system that can’t determine whether or not the information it’s processing or delivering is corrupt information. Corrupt or not, it’s the collection and delivery of information that maximizes profits. Peddling disinformation is how these companies maximize their profits. If profit maximization is the overarching imperative driving the actions of these entities, the promotion of disinformation is a necessary consequence. You can’t disentangle the two:
It’s that inextricable link between the profit-maximizing motives of these Silicon Valley giants and the imperative to promote misinformation that points us towards what ultimately must be part of the solution here: acknowledging that democracy can’t survive in an environment where disinformation is algorithmically promoted under the cold directive of profit maximization. It really is a choice of which system will ultimately reign supreme. Democracy or surveillance capitalism:
It’s worth recalling at this point the reports of the secret dinner in the fall of 2019 between Mark Zuckerberg, Peter Thiel, Jared Kushner and Donald Trump at the White House during one of Zuckerberg’s trips to DC. Zuckerberg and Trump apparently came to an agreement during the dinner, where Zuckerberg promised that Facebook would take a hands-off approach to the policing of misinformation from conservative sites. So as we see this farcical spat between the GOP and Big Tech play out to the synergistic benefit of both the GOP and Big Tech’s investors, we should probably be asking what else was agreed upon at that secret meeting and at the other secret meetings that have undoubtedly been taking place all along between the Silicon Valley giants and powerful forces on the far right. Was this phony GOP-vs-Big Tech campaign actively discussed during that meeting? Because as Shoshana Zuboff observes, this really is a choice between democracy and maximum profits, and it’s pretty clear Big Tech and the GOP both made the same choice a while ago.
The domination in the social media space of companies with deep ties to the US military industrial complex is nothing new, as Yasha Levine documented in his book Surveillance Valley. So with Elon Musk having just taken personal control of Twitter, it’s worth noting that Musk isn’t just a libertarian billionaire who is clearly finding joy in trolling the left with his new power over this key social media platform. As Levine reminds us below, he’s a US defense contractor and that role is poised to only grow.
It’s a fun fact that adds context to Musk’s hyper-trollish tweet a couple of days ago of a cartoon depicting the classic far right trope that the polarization in US politics is exclusively due to Democrats and liberals lurching to the extreme left, pushing former liberals like Musk into the conservative camp. The cartoon shows three stick figures at three different time periods: in 2008, it’s “my fellow liberal” on the left, “me” (Musk) in the center left, and a conservative on the right. A 2012 scene shows “my fellow liberal” running quickly to the left, moving “me” to the center. Finally, there’s a 2021 scene showing the liberal far out to the left shouting “Bigot!”, with “me” now in the center-right part of the plot and the conservative stick figure exclaiming “LOL!”. Musk basically came out as a ‘former liberal’ in the tweet.
And as Greg Sargent points out in the following piece, that tweeted cartoon wasn’t just an expression of Musk’s politics. It was basically a statement of intent. An intent to allow Twitter to revert back into an Alt Right fantasy platform where ‘anything goes’ and far right disinformation dominates.
And this is of course all happening in the midst of the GOP’s deepening embrace of the politics of QAnon and insurrection. At this point, the GOP’s quasi-official stance is that the Democratic Party consists of ‘groomers’ trying to change the law to make it easier to prey on children. How is Musk planning on handling the inevitable deluge of tweets promoting insurrection and calling for the death of pedophile Democrats?
These are the kinds of questions Musk is going to have to answer at some point, and based on his public comments thus far it’s not at all clear that he’s thought them through. Or maybe he has thought it through and the plan really is to just allow Twitter to revert back into an ‘anything goes’ platform. We’ll see.
At the same time, there are certainly some areas where social media platforms really could use a loosening of their moderation policies, in particular when it comes to global events involving Russia or China. Recall how Ukrainian Jewish activist Eduard Dolinsky was literally banned from Facebook for showing examples of the kind of anti-Semitic graffiti that has become rampant in Ukraine. Also recall how Twitter itself locked the official Twitter account of the Chinese embassy in the US back in January 2021 over a tweet defending Beijing’s treatment of Uyghurs. Perhaps Musk can address this kind of censorship being done on behalf of the US national security state. But that returns us to the fact that Musk is very much a US defense contractor and that his relationship with the US national security state is only getting deeper. Musk really is part of ‘the Deep State’. A ‘Deep State’ that has decades of working relationships with far right elements around the globe. But unlike most elements of the Deep State, he’s got a right-wing fan base that seems to fancy Musk as some sort of fellow traveler ‘outsider’. It’s a fascinating situation. A fascinating situation that doesn’t bode well.
Ok, first, here’s Sargent’s piece on Musk’s recent tweet where he basically comes out as a Republican. What is the fallout going to be now that Musk is more or less promising to revert Twitter back into an ‘anything goes’ disinformation machine? We’ll find out...probably during the next insurrection fueled by waves of retweeted deep fake videos portraying Democrats as satanic pedophiles:
“It may be that Musk might not end up allowing anything like this to happen, once his vague “free speech” bromides collide with messy moderation realities. But when he displays his determination to downplay the radicalization of the right wing of the GOP, he’s showing us a potential future information landscape that far-right Republicans are surely dreaming about.”
The floodgates are being opened. Which means it’s just a matter of time before the worst kind of disinformation is once again flooding that platform. But it’s not just going to be a return to the bad old days of yesteryear. Deep Fake technologies didn’t exist back when Twitter was last a free-for-all far right playground. It’s a brave new world. There’s more than one way to release a Kraken:
So Musk is coming out as a Republican at the same time he’s making this purchase of Twitter, seemingly in opposition to lefty ‘wokeism’. It certainly gives us a major hint as to what to expect from Musk, at least when it comes to disinformation in US politics. But how about Twitter’s other problem area when it comes to moderation: the overmoderation of anything involving China or Russia that doesn’t fit with the prevailing narratives coming out of the US national security state? Can we at least expect some improvements there? Sure, if you believe someone who is anxiously courting more and more Pentagon contracts is going to do anything to piss off his biggest customer:
“I mean here you have Elon — an “outsider” — mounting a hostile takeover of a major global communication platform. And the thing about him is that he’s not just a successful lithium battery salesman, he’s also a major military contractor doing business with the most secretive and “strategically important” spooks in America.”
Musk is clearly more than happy to piss off ‘the left’. He’s kind of making that his personal brand at this point. But how about the Pentagon? As the conservative stick figure in Musk’s tweet put it, LOL!
Following up on the uproar over Elon Musk’s purchase of Twitter and, as Yasha Levine pointed out, the complete lack of any acknowledgement in that uproar over Musk’s growing status as a major US national security contractor, here’s a post on the Lawfare blog from last month that underscores another aspect of Musk’s relationship with the US national security state: the dual use nature of Musk’s Starlink satellites network and the fact that it’s already being used for military purposes. In Ukraine. Yep, it turns out Musk’s Starlink satellite network has been playing a crucial role in providing internet connectivity for Ukraine’s military. A role that was encouraged by USAID. In fact, USAID issued a press release last month touting how it set up a public-private partnership with Starlink to send 5000 Starlink terminals to Ukraine to maintain internet connectivity during the war. And as the following Lawfare blog post points out, that use hasn’t been limited to civilian uses. One Ukrainian commander told the Times of London that they “must” use Starlink to target Russian soldiers at night with thermal imaging.
So Musk delivered a large number of Starlink terminals to Ukraine under a USAID program to provide civilians with internet connectivity and they end up getting used by Ukraine’s military. It’s the kind of situation that creates a number of possible legal headaches. As we’re going to see, the US Space Command has already set up a program for incorporating commercial infrastructure operating in space into military efforts and these Starlink satellites are readily capable of handling the Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) functions necessary for modern military operations.
But perhaps the biggest possible headache that could emerge from this is the one experts have been warning us about ever since Musk hatched this Starlink scheme: the threat of a space junk cascade that makes the earth’s low orbit space effectively unworkable. That kind of scenario was already a risk just from things going wrong. And now we’re learning that Musk is allowing Starlink to be used for exactly the kind of activities that could prompt a physical attack on the Starlink cluster:
“Musk corresponds with the Ukrainian government against the backdrop of a complex legal landscape. This post explores several tenets of international humanitarian law as it might govern Russian targeting of Starlink infrastructure. It then assesses how and why Musk’s actions threaten to draw the U.S. in as a party to the conflict. Finally, it proposes modifications to domestic policy that could help avoid such an outcome now and in the future.”
What are the implications of Elon Musk’s Starlink satellites being used by the Ukrainian military? Well, for starters, those Starlink satellites — which Musk has admitted are capable of executing the Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) functions required by modern militaries — are clearly “dual use” pieces of infrastructure. And under international law that means these satellites could potentially be legally attacked by Russia. So one very direct implication of the Ukrainian military’s use of Musk’s network is a possible Russian attack on a commercial satellite system operated by a US company:
And if Russia does indeed decide to launch some sort of attack against Starlink, what can we expect the US to do in response? Well, under Article VI of the Outer Space Treaty of 1967, states bear responsibility for activities in outer space, including those launched from a state’s territory, even when the activity is carried out by a nongovernmental entity. Beyond that, the US Space Command has already set up a Commercial Integration Cell (CIC) program designed to enlist the use of commercial satellite capabilities in armed conflicts. So if there’s a Russian attack on Starlink, it’s not necessarily going to be easy for the US to avoid escalating the situation, because the US government will be legally responsible, in part, for sanctioning and coordinating the military use of this commercial infrastructure. In other words, a Russian attack on Starlink could create a very messy situation:
But perhaps the biggest mess that could be created by a Russian attack on Starlink would be the literal mess in space that could result from any kinetic attacks on those satellites. As we’ve seen, Starlink already poses an unprecedented threat to the earth’s orbital space, with a worst-case scenario that could litter the space around the planet with so much space junk that it becomes effectively impossible to launch new objects into orbit. And as this Lawfare post notes, while the risk of creating a bunch of space junk could certainly give Russia pause when considering whether or not to carry out a kinetic attack on those satellites, that risk doesn’t bar Russia from carrying out such an attack. There’s no international law against it, so it’s really up to Russia at that moment to weigh the costs and benefits. So if these decisions are being made at a time when Ukraine’s military is eviscerating Russian forces based in part on the use of those satellites, don’t be shocked if Russia’s cost/benefit analysis doesn’t leave clean planetary orbits at the top of the priority list:
Also keep in mind that the unique vulnerability of the Starlink system — the possibility of a nightmare space junk cascade due to the large number of tiny low-orbit satellites Musk just launched into orbit without any serious consideration of the risks — is exactly the kind of thing opposing militaries might be tempted to exploit as part of a big gamble. Who would be hurt more by a space junk cascade that cripples commercial space activity? An already economically crippled Russia, or the US? It wouldn’t necessarily take the fragmentation of that many of these satellites to get a cascade started. Or maybe they’ll just threaten to do it. Either way, let’s hope the Russian government, and any other governments directly threatened by Starlink in future conflicts, are actually taking these risks seriously, because it’s obvious the people deploying and using the system are not.
Who is going to prevent Elon Musk’s Starlink network of microsatellites from turning the earth’s lower orbits into a swarm of lethal space junk that threatens to incapacitate our ability to operate in space? No one, probably. That’s the likely answer we can infer from the following pairs of articles about Starlink.
The first article, from last month, highlights a rather interesting anomaly in the dealings between Starlink and USAID. As we’ve seen, USAID created some sort of public-private partnership with Starlink for the delivery of 5,000 Starlink terminals to Ukraine to help deliver internet services to the country. Including vital internet services for Ukraine’s military, raising obvious questions about whether or not Starlink could potentially come under attack by Russia.
How much money did USAID provide for this initiative? Well, that’s part of the mystery. The other part of the mystery is what exactly USAID paid for. We’re told that USAID paid SpaceX $1,500 per Starlink terminal for 1,333 terminals, adding up to roughly $2 million. The standard Starlink terminal costs $600 while a more advanced version sells for $2,500. So was USAID paying $1,500 for the $600 terminals? If so, that would be some rather outrageous price gouging, so maybe it was $1,500 for the $2,500 terminals. We don’t know, but adding to the mystery is that USAID altered its public statements on this public-private partnership. The initial April 5 statement released by USAID noted that SpaceX donated 3,667 terminals while USAID purchased an additional 1,333 terminals. Those numbers were removed from an update released later that day.
So for whatever reason, USAID behaved in a way that suggested some degree of sensitivity about these numbers. We don’t know why, but what is clear from this story is that the US government sees a lot of value in Starlink’s capabilities. Which is rather problematic when it comes to regulating Starlink and ensuring it doesn’t pose an unreasonable risk of a space junk cascade catastrophe. And that brings us to the second article below from back in August of last year. The story is about a study done by researchers on the rate of orbital close encounters since the launch of Starlink. Basically, they’ve doubled in the last couple of years, with half of the close encounters involving Starlink satellites. So Starlink is already proving itself to be a major space collision hazard, and it’s barely even finished yet. Yes, as of the time of that article, only 1,700 Starlink satellites were in orbit. The plan is for tens of thousands of them to eventually be launched into orbit. That’s why these researchers were predicting that 90% of orbital close encounters in the future are likely to involve Starlink satellites.
And that’s why the mystery regarding Starlink’s relationship to USAID, and SpaceX’s larger relationship to the US national security state, could end up being a rather crucial question in terms of whether or not anything is going to be done to prevent an orbital space junk cascade catastrophe. Because it sure doesn’t look like the US government is overly concerned with these risks right now. Quite the opposite:
“Despite SpaceX implying that the US didn’t give money to send Starlink terminals to Ukraine in March, a report from The Washington Post reveals that the government actually paid millions of dollars for equipment and transportation. The report found that the US Agency for International Development, or USAID, paid $1,500 apiece for 1,333 terminals, adding up to around $2 million. USAID disclosed the number of terminals it bought from the company in a press release from early April that has since been altered to remove mentions of the purchase.”
For whatever reason, USAID decided to alter its press release on the ‘public-private’ partnership it started with SpaceX to deliver 5,000 Starlink terminals to Ukraine. Why the alteration? It’s a mystery, along with the mystery of whether the $1,500 USAID was paying for these units was for the $600 terminals or the more advanced $2,500 terminals. If it was $1,500 going towards $2,500 terminals, well, ok, that would be an obvious subsidy towards SpaceX’s ‘charitable contributions’. But if it was $1,500 going towards the $600 units, you have to wonder what exactly was going on here:
And it’s the mystery of that relationship between SpaceX and USAID that brings us to the following article, from back in August of last year, about a profoundly disturbing study of the impact SpaceX is already having on space junk and orbital close encounters. As researchers found, roughly half of the ~1,600 close encounters measured weekly involved Starlink satellites. Half. We’re talking about a satellite constellation that didn’t even exist several years ago. It now accounts for half of the orbital close encounters. And as the article notes, only around 1,700 Starlink satellites had been launched by that point last year. The ultimate plan is the creation of an orbital network that consists of tens of thousands of these Starlink microsatellites. That’s why these researchers are predicting that Starlink satellites are on track to account for 90% of orbital close encounters in coming years.
So Elon Musk is apparently developing a monopoly on orbital close encounters. And no one appears to be doing anything to stop it. Quite the contrary, Starlink is involved with ‘public-private partnerships’ with the US government. And that’s why the stories about Starlink’s USAID-sponsored role in the war in Ukraine and the growing threat it poses to the planetary orbital space are really part of the same story:
“SpaceX’s Starlink satellites alone are involved in about 1,600 close encounters between two spacecraft every week, that’s about 50 % of all such incidents, according to Hugh Lewis, the head of the Astronautics Research Group at the University of Southampton, U.K. These encounters include situations when two spacecraft pass within a distance of 0.6 miles (1 kilometer) from each other.”
Half of space close encounters today involve the Starlink constellation of satellites, something that didn’t exist a few years ago. In other words, we don’t simply need to worry about these microsatellites causing a collision and generating space junk. These things already are space junk. And 1,700 of them have been launched so far. The plan is to put tens of thousands of these microsatellites into orbit. So we’re just experiencing a taste of the orbital traffic jams yet to come.
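To put the cited numbers in perspective, here’s a minimal back-of-the-envelope sketch in Python using only the figures reported above (roughly 1,600 close approaches per week, about half involving Starlink, ~1,700 satellites in orbit versus the tens of thousands planned). The linear-scaling projection at the end is purely illustrative and is not the researchers’ actual model.

```python
# Back-of-the-envelope sketch using only figures cited in the articles above.
# Not the researchers' model -- just the arithmetic behind the headline numbers.

weekly_close_encounters = 1600      # approaches within 1 km per week (cited)
starlink_share_now = 0.5            # roughly half involve a Starlink satellite (cited)
starlink_sats_now = 1700            # satellites in orbit at the time of the article
starlink_sats_planned = 42000       # eventual constellation size cited elsewhere

starlink_encounters_now = weekly_close_encounters * starlink_share_now
other_encounters = weekly_close_encounters - starlink_encounters_now
print(f"Starlink-involved close encounters today: ~{starlink_encounters_now:.0f} per week")

# Crude projection: assume Starlink-involved encounters grow in proportion to the
# number of Starlink satellites while everything else stays flat. Even this naive
# linear scaling pushes the Starlink share past the ~90% the researchers project.
future_starlink = starlink_encounters_now * (starlink_sats_planned / starlink_sats_now)
future_share = future_starlink / (future_starlink + other_encounters)
print(f"Projected Starlink share at full build-out: ~{future_share:.0%}")
```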
And it’s not as if these close encounters only involve Starlink satellites threatening non-Starlink satellites. Some of these close encounters involve two Starlink satellites. The Starlink constellation is literally a threat to itself, and it’s not even close to being fully launched yet. That’s why these experts are predicting that 90% of the close encounters in the future are going to involve Starlink satellites. It’s a space junk monopoly, seemingly being built with the endorsement of the US government:
But as these experts point out, the growing threat posed by the Starlink constellation isn’t just the direct threat of a space collision. There’s also the threat that this abundance of close encounters is going to cause satellite operators to become far more risk-tolerant than they should be. Repositioning satellites takes time and fuel. Satellite operators are going to be forced to make judgement calls on whether or not a close encounter warning is worth responding to, and it’s just a matter of time before they make a mistake. The kind of mistake that can have cascading costs:
How long before the world’s satellite operators hit ‘close encounter fatigue’ and just stop moving their satellites out of the way? The only government in a position to prevent that eventuality is subsidizing it instead, so we’ll find out eventually.
Following up on the role Elon Musk’s Starlink is playing in the conflict in Ukraine — subsidized by USAID — and the potential risks of a cascading orbital catastrophe (Kessler’s Syndrome) that comes with the militarization of the Starlink low orbit constellation of mini-satellites, here’s a pair of articles that should serve as a warning that we should probably expect the Starlink system to be treated as a military target one of these days, with all of the cascading consequences that could arise from that. Because as the articles describe, Starlink has already come under a kind of Russian attack. Specifically, a signal jamming effort that appears to have worked for at least a few hours in Ukraine before Starlink was able to issue a patch that fixed the problem.
The attack took place back in early March. We aren’t given any details on the Russian signal jamming attack, but it was presumably some sort of electronic warfare measure that disrupted the ability of the Starlink terminals located on the ground in Ukraine to communicate with the satellites. We can also infer that the fix didn’t require any updates to the terminals themselves, since they wouldn’t have been able to receive the updates. So some sort of update was delivered to the software operating the satellites themselves that fixed the jamming. That’s about all we know about the incident.
Overall, it sounds like a relatively simple form of electronic warfare. It doesn’t sound like the attack involved actually hacking the software operating these satellites. But the fact that the countermeasure for the attack involved a rapid software patch underscores the basic fact that this constellation of satellites can have its software rapidly and remotely updated. Of course it has that capability. It’s an absolute necessity for managing a growing, chaotic cluster. Don’t forget what researchers concluded last year: Starlink satellites are currently responsible for roughly half of the close encounter events and will likely be the source of 90% of close encounters by the time SpaceX is done launching the tens of thousands of mini-satellites it has planned. Some of the close encounters involve two Starlink satellites careening towards each other. Having the ability to remotely update the Starlink software and remotely adjust the orbits of each one of those satellites is an absolute necessity.
But that necessity for remotely piloting this unprecedented satellite cluster also obviously poses a hacking risk. Yes, there’s no indication that any Starlink satellites were hacked as part of this signal jamming campaign. But the potential is obviously there. It’s not like satellites are immune to hacking. Quite the contrary. Satellites are notoriously easy to hack.
And not only are there plenty of examples of hackers hacking satellites for fun, don’t forget that you don’t necessarily need to hack the satellite directly. Hacking the satellite operator could potentially give you remote access to those satellites too. Russia’s military was accused of hacking the satellite company Viasat, whose network served Ukraine, at the beginning of the conflict. We don’t have any indication that the hack gave Russia control over Viasat’s satellites. But as we’ve seen with the SolarWinds hacks, once a sophisticated hacker is allowed into a corporate network it can be very difficult to get them out. Was Starlink hit by the SolarWinds hack? How about some Starlink contractors? It only takes one compromised partner.
Finally, also recall how Starlink relies in part on automated orbital adjustments to avoid collisions. Imagine a hack that pushes faulty code to the systems handling that part of Starlink’s functionality. You could theoretically send the entire cluster careening into itself and the rest of the satellites in low earth orbit.
And that’s all why the successful repelling of a Russian signal jamming attempt shouldn’t necessarily be a relief for anyone concerned about the potential risk these constellations of microsatellites pose to humanity’s ability to operate in space. Yes, this particular attack didn’t succeed. But with Starlink, we’re still one successful hack away from an orbital catastrophe:
““The next day [after reports about the Russian jamming effort hit the media], Starlink had slung a line of code and fixed it,” Tremper said. “And suddenly that [Russian jamming attack] was not effective anymore. From [the] EW technologist’s perspective, that is fantastic … and how they did that was eye-watering to me.””
A software update ended the attack. On the one hand, that’s a nice sign for Starlink’s robustness in the face of an outside attack like a jamming signal. But it’s also a reminder that if hackers in the future manage to hack Starlink’s own systems, they just might find themselves with the capacity to update Starlink’s satellite software. So when Elon Musk tweeted out that “SpaceX reprioritized to cyber defense & overcoming signal jamming” in response to the incident back in March, let’s hope that includes protecting not just the satellites but all of the systems tasked with remotely controlling these satellites:
““SpaceX reprioritized to cyber defense & overcoming signal jamming,” he wrote Friday. Musk quipped that the measures were a bit of unexpected quality assurance work for the Starlink system.”
SpaceX had to reprioritize not just overcoming the direct attack of signal jamming, but also cyber defense. It’s an implicit acknowledgement that the Starlink system’s vulnerabilities don’t just involve some sort of direct physical attack. Starlink can potentially get hacked too, whether we’re talking about the direct hacking of these satellites or the indirect hacking of the Starlink command and control centers where these kinds of remote software updates get pushed to the network.
So with Starlink having already been weaponized for battlefield uses and already having come under at least an indirect disruption of its services in response to that weaponization, we have to ask: how high was cyber defense on the priority list when SpaceX was originally designing the Starlink system? Don’t forget that Starlink is a platform that’s already been rushed through without a number of other proper safety assessments, like the basic assessment of whether or not it’s safe to suddenly launch thousands of microsatellites into low earth orbit without triggering some sort of Kessler Syndrome cascade catastrophe. Was cybersecurity also rushed through in the race to be the first company with a ‘megaconstellation’ of satellites in orbit? Starlink represents a kind of orbital land grab, after all. How high a priority was cybersecurity in this land grab? It’s a question that is quite literally looming over all of us. Well, looming over most of us. If you happen to be serving aboard a space station, the threat is less looming and more immediate.
Following up on the recent reports about the increasing sophistication of the military hardware — longer-range missiles and artillery — being delivered to Ukraine by the US, along with the reports about the increasingly important role Elon Musk’s Starlink satellite cluster network has been playing in providing internet services for Ukraine’s military, here’s a report giving us a better idea of the now vital role Starlink is playing in Ukraine’s military efforts. The kind of military role that has China already freaking out.
At least that’s what we can infer from recent commentary in the official newspaper of the Chinese armed forces warning about a US push for space domination using Starlink. Domination both in terms of the military operations Starlink enables in otherwise remote regions of the planet, and in terms of the space taken up in the Earth’s orbit. As the commentary pointed out, Low Earth Orbit (LEO) only has space for around 50,000 satellites. If Starlink ends up launching the full 42,000 satellites it has claimed as its goal, that would occupy 80 percent of the Earth’s LEO.
Beyond that, the piece warns that Starlink could effectively turn itself into a second independent internet. An independent internet potentially globally accessible and a clear risk to the internet sovereignty of countries like China.
Of course, there’s also the inherent risk associated with filling LEO with as many satellites as can fit in that space: the risk of setting off a space junk chain reaction (Kessler’s Syndrome) that makes LEO effectively non-traversable. After all, Starlink is now operating as a military asset. A vital military asset in the case of this conflict. And a potentially even more vital military asset in the wars of the future that are increasingly going to be fought with UAVs and other forms of remotely guided warfare. So while Russia obviously has cause for trying to disable Starlink in the context of this war, we shouldn’t assume that Russia is the only military power that’s working on ways of disabling this ‘private’ network of satellites:
“Chinese military observers have repeatedly said that the US is having a head start in space – regarded as a future battlefield by militaries across the world – by rushing to establish the next-generation military communications network based on satellite internet capability.”
Is Elon Musk’s rush to get Starlink up and running as soon as possible, damn the consequences, actually the Pentagon’s rush? That’s how this Chinese military analysis appears to view the situation. Quite understandably. The Pentagon and the Ukrainians clearly haven’t been wasting time testing out Starlink’s potential military applications. Applications that are only going to become more and more important as wars are increasingly fought by remotely controlled vehicles and smart munitions that rely on precise targeting:
Then there’s the possibility of Starlink establishing itself as a second internet. A second internet potentially accessible anywhere, and one that governments will have no ability to influence. Well, except for the US government, implicitly:
Finally, there’s the orbital land grab underway. If Starlink is finished, it will occupy 80% of the available LEO space. That’s one company’s product taking up 80 percent of the entire planet’s orbit. What right does Starlink have to take this space? Well, it claimed it first. That’s it. So Starlink is being rewarded with a space monopoly for its decision to rush this entire project. You’d think more governments would have noticed this by now:
What are the odds that this orbital internet system that is increasingly demonstrating its enormous military utility and operates in a low orbit isn’t attacked some day? And what are the odds of avoiding something like Kessler’s Syndrome should that attack succeed? These are the questions we had better hope Elon Musk and the US military have already been asking. And no doubt they’ve indeed been asking these questions. It’s the fact that they’ve obviously determined that the risks are worth it that makes this such an ominous story. Starlink was always a giant gamble. And not just a giant initial gamble. It’s the kind of giant gamble that just keeps growing the longer the gamble goes.
Here’s a series of articles that underscore how the conflict in Ukraine is ushering in a new kind of Cold War 2.0 “Space Race”: the race for military-capable satellite clusters. As we’ve seen, SpaceX’s Starlink cluster of thousands of low-orbit satellites has enormous military potential. Potential that was put on display with the Russian invasion of Ukraine and Starlink’s rapid rollout of internet services for the country, with financial backing from the US government via USAID. The system proved itself so invaluable for modern warfare methods that it’s already been forced to deal with Russian electronic warfare attacks. As we’re going to see, it sounds like the Pentagon and other militaries have been mightily impressed with Starlink’s abilities to function while under attack. So much so that other militaries are looking into creating their own satellite clusters. And a new space race is born. The race to fill the planet’s orbit with as many satellite clusters as possible.
And while Starlink has apparently warded off Russia’s attacks so far, the cluster still carries a giant implicit existential risk of things going wrong. Specifically, the out-of-control chain reaction of satellite destruction from space debris that could render the planet’s low orbit effectively unworkable (“Kessler syndrome”). It points toward the new form of mutually assured destruction (MAD) in the context of this race: once you have enough rival satellite clusters operating in the same space, the physical destruction of one cluster will potentially destroy all of them as the chain reaction plays out. It’s a better form of MADness than everyone nuking each other, but still obviously not great.
And that brings us to the following Politico article about what appears to be the next phase in the US’s arming of Ukraine: advanced Gray Eagle drones. They’re the US Army’s version of the notorious “Reaper” drones capable of flying for 30 hours at a time and firing precision-guided hellfire missiles. It sounds like the plan is to start delivering them to Ukraine and give a crash course in training that could result in them being unleashed on the battlefield in 4–5 weeks. It’s a potentially huge boost to Ukraine’s military potential. The kind of boost that will make Starlink’s internet services in Ukraine that much more of a vital military asset:
“The possible sale of the Gray Eagles, the Army’s version of the better-known Reaper, represents a new chapter in arms deliveries to Ukraine and could open the door to sending Kyiv even more sophisticated systems. The Gray Eagle would be a significant leap for the embattled Ukrainians because it can fly for up to 30 hours, gather vast amounts of surveillance data, and fire precision Hellfire missiles. The system is also reusable, unlike the smaller Switchblade loitering munitions the U.S. has already sent to the front lines.”
The advanced Gray Eagle drones won’t just be a major step up in terms of the drone technology already being delivered to Ukraine. It’s also seen as opening the door for even more sophisticated weapon systems. Sophisticated weapon systems that will presumably also be remotely piloted and highly dependent on satellite communications. And depending on how the war goes for Ukraine, those advanced remotely piloted weapons systems could even theoretically be piloted from outside Ukraine:
Yes, IF Ukrainian forces had satellite coverage of the entire country, they could potentially operate drones from countries like Poland. Or from anywhere in the world, really. The key factor is maintaining internet coverage throughout the battlefield. And that brings us back to SpaceX’s Starlink cluster of low-orbit satellites already playing a crucial role in Ukraine’s military operations. Including the piloting of drones. Which has already led to Russian electronic warfare attacks on the cluster. So as the reliance on more sophisticated drones becomes a larger part of Ukraine’s military strategy, the military significance of that low-orbit satellite cluster — and its validity as a military target that Russia might reasonably attack — is only growing too:
“Ukrainian drones have relied on Starlink to drop bombs on Russian forward positions. People in besieged cities near the Russian border have stayed in touch with loved ones via the encrypted satellites. Volodymyr Zelenskyy, the country’s president, has regularly updated his millions of social media followers on the back of Musk’s network, as well as holding Zoom calls with global politicians from U.S. President Joe Biden to French leader Emmanuel Macron.”
You can’t operate military drones without satellites. And with Starlink being the only satellite service left in Ukraine, that makes Starlink absolutely vital for the use of all those advanced drones Ukraine is slated to receive. Along with long-range guided missile systems. Starlink is quickly becoming an absolutely vital military asset for the Ukrainian military. Which, of course, makes it a key strategic target for the Russians. It’s that dynamic that makes the apparent touting of Starlink’s supposed security so ominous. As we see, this report highlights how Starlink’s model of a low-orbit cluster of thousands of tiny satellites is fundamentally different from the traditional model of a few high-orbit satellites and far more robust against attacks like Russian hacking attempts. The report also highlights how each individual satellite can have its code modified as a means of attempting to thwart hacking attempts. And sure, those are all wonderful security features. But what we don’t see in this article is any mention of the enormous downside of the Starlink model of thousands of low-orbit satellites: the risk of cascading space collisions, leading to an unstoppable chain reaction (the “Kessler syndrome” scenario). It’s a rather massive security downside to the Starlink model if you think about it. Sure, it’s robust against certain kinds of attacks...until it’s not, at which point it’s a completely unprecedented orbital disaster that could end up destroying far more than just Starlink satellites.
Also recall how there are so many Starlink satellites already in orbit — which is still just a fraction of the planned 40k+ satellites — that the system relies on the automated dynamic repositioning of the satellites to avoid collisions. In other words, there are so many satellites in this system that they couldn’t feasibly plan for each satellite to have its own independent orbital space. They have to share that space and just keep moving around to avoid collisions. What happens if just a handful of those satellites are hacked in a manner that causes them to lose the ability to accurately self-correct their orbits?
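Since the whole constellation depends on this kind of automated screening, here’s a deliberately tiny sketch in Python of the basic idea: propagate two state vectors over a short window and flag any predicted approach inside the 1 km ‘close encounter’ threshold cited earlier. The state vectors are hypothetical, and real conjunction screening uses full orbital propagation and uncertainty estimates rather than straight-line motion; this only illustrates the miss-distance check that has to be automated across thousands of satellites.

```python
import numpy as np

# Toy conjunction screen: propagate two satellites along straight-line tracks
# over a short window and flag any predicted approach closer than 1 km, the
# "close encounter" threshold cited earlier. The state vectors below are
# hypothetical; real screening uses full orbit propagation plus position
# uncertainty. This only illustrates the basic miss-distance check.

def min_miss_distance(r1, v1, r2, v2, window_s=600.0, step_s=0.1):
    """Minimum separation in meters between two objects over a time window."""
    times = np.arange(0.0, window_s, step_s)
    separation = (r1 - r2)[None, :] + times[:, None] * (v1 - v2)[None, :]
    return float(np.linalg.norm(separation, axis=1).min())

# Hypothetical position (m) and velocity (m/s) vectors for two ~550 km satellites.
sat_a_pos = np.array([6.921e6, 0.0, 0.0]);     sat_a_vel = np.array([0.0, 7590.0, 0.0])
sat_b_pos = np.array([6.921e6, 1.0e5, 200.0]); sat_b_vel = np.array([0.0, -7590.0, 10.0])

miss = min_miss_distance(sat_a_pos, sat_a_vel, sat_b_pos, sat_b_vel)
if miss < 1_000.0:
    print(f"Close encounter: predicted miss distance ~{miss:.0f} m -- plan an avoidance maneuver")
else:
    print(f"No action needed: predicted miss distance ~{miss / 1000:.1f} km")
```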
Also keep in mind that we shouldn’t assume that satellite clusters are solely vulnerable to remote hacking attacks. As Chinese military researchers reminded the world back in April, there are plenty of methods for physically attacking this cluster already. This includes microwave jammers that can disrupt communications or fry electrical components; millimeter-resolution lasers that can blind satellite sensors; and long-range anti-satellite (ASAT) missiles. But as these researchers also acknowledged, the space junk generated by physical attacks on the cluster poses obvious major risks to China’s own satellites. At the same time, they point out that the decentralized nature of the network could leave it functional even after much of it has been incapacitated. As such, the researchers advise that China invest in new low-cost methods for effectively neutralizing the entire cluster, which could include China launching its own tinier satellites that could swarm Starlink. In other words, we’re already on track for a military satellite cluster space race:
“The Chinese researchers were particularly concerned by the potential military capabilities of the constellation, which they claim could be used to track hypersonic missiles; dramatically boost the data transmission speeds of U.S. drones and stealth fighter jets; or even ram into and destroy Chinese satellites. China has had some near misses with Starlink satellites already, having written to the U.N. last year to complain that the country’s space station was forced to perform emergency maneuvers to avoid “close encounters” with Starlink satellites in July and October 2021.”
From boosting the transmissions of drones and stealth fighters to tracking hypersonic missiles, the advanced military applications are endless. There’s even the possibility that Starlink satellites could be used to physically ram other satellites. So we shouldn’t be surprised to learn that military powers like China are endlessly alarmed by its existence and working on “soft kill” and “hard kill” methods for disabling it. Methods that might include creating a network of even smaller satellites that could swarm the Starlink cluster. But whatever those methods are, they have to address the fact that physically attacking the Starlink cluster could end up taking out a lot of other satellites in the process, and the cluster might still be operational as long as enough satellites remain functional. So if you’re going to physically incapacitate Starlink, you might just have to accept that Kessler syndrome is the price that must be paid. It points towards one of the dark dynamics at work here: due to Starlink’s relative robustness against physical attacks, there’s an incentive to conclude that inducing Kessler’s syndrome — and ‘leveling the playing field’ by hopefully incapacitating everyone’s satellites — could be seen as the best military option in a situation where the presence of Starlink is deemed to be an existential threat in the midst of a military conflict:
And don’t forget that SpaceX didn’t ask anyone for permission to start launching thousands of tiny satellites into orbit. It just did it. There’s nothing stopping other countries from doing the same. In fact, there’s a highly compelling logic guiding them to do just that. The Cold War 2.0 logic of MADness. Along those lines, we have to ask: will the US create an even larger swarm of micro-satellites to take out the Chinese mini-satellite swarm before it takes out Starlink? And will the Chinese then make an even larger swarm of nano-satellites? We’ll see, but as crazy as that sounds, it would all make a lot of sense in the context of our new space race MADness.
Here’s a story that has a prelude kind of feel to it: experts are warning that the Sun’s 11-year solar weather cycle is scheduled to reach peak activity over the next five years, with direct implications for the thousands of satellites operating in Earth’s orbit. The risk of solar storms threatening satellites isn’t new. What is relatively new is the fact that Earth’s low orbits are now bristling with thousands of microsatellites, most notably SpaceX’s Starlink cluster. With around 2,000 microsatellites already in orbit, SpaceX is less than 1/20 of the way to its goal of 42,000 microsatellites in low orbit. The risk of Kessler’s syndrome — the out-of-control chain reaction of space junk — is growing with each new batch of satellites. And as we’re going to see, the warnings experts are issuing about the next five years are specifically warnings about small low-orbit satellites.
The gist of it is that increased solar radiation effectively causes the atmosphere to rise slightly. That rising atmosphere, in turn, creates drag on any low orbit satellites, with the smallest satellites experiencing the most drag. With enough drag, those satellites can end up plunging back to earth. It’s not a hypothetical. It’s exactly what happened to 40 out of 49 freshly launched Starlink satellites back in early February. As we’re going to see, SpaceX had plenty of warning about the increased solar activity but went ahead with the launch anyway and decided to have its satellites just try to ride out the increased atmospheric drag by positioning the satellites into a ‘low drag’ orientation. The strategy worked for just 9 out of the 49 satellites.
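To make the drag mechanism concrete, here’s a rough, heavily simplified sketch in Python of how the decay rate of a circular orbit scales with upper-atmosphere density. The satellite numbers (mass, drag area, drag coefficient) and the density values are illustrative guesses, not SpaceX or NOAA figures; the takeaway is simply that decay scales linearly with density, so an atmosphere inflated by solar activity eats into an already-thin margin at a ~210 km insertion altitude.

```python
import math

# Toy drag comparison at a ~210 km insertion altitude, quiet Sun vs. a modest
# geomagnetic storm. All satellite numbers and densities are rough illustrative
# guesses. The point is the scaling: drag, and therefore orbital decay, grows
# in direct proportion to upper-atmosphere density, which rises when solar
# activity heats and inflates the atmosphere.

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # Earth radius, m
ALT = 210e3            # insertion altitude, m
a = R_EARTH + ALT      # semi-major axis of a circular parking orbit

mass = 260.0           # kg, illustrative small-satellite mass
cd = 2.2               # typical drag-coefficient guess
area = 5.0             # m^2, assumed effective frontal area

def decay_rate_km_per_day(density):
    """Approximate circular-orbit decay rate: |da/dt| = (Cd*A/m) * rho * sqrt(mu*a)."""
    da_dt = (cd * area / mass) * density * math.sqrt(MU * a)   # m/s
    return da_dt * 86400.0 / 1000.0                            # km/day

rho_quiet = 3.0e-10            # kg/m^3, rough quiet-time density near 210 km
rho_storm = rho_quiet * 1.5    # assume ~50% denser air during the storm

print(f"orbital speed: {math.sqrt(MU / a) / 1000:.2f} km/s")
print(f"decay rate, quiet Sun: ~{decay_rate_km_per_day(rho_quiet):.0f} km/day")
print(f"decay rate, storm:     ~{decay_rate_km_per_day(rho_storm):.0f} km/day")
# Decay scales linearly with density, so a 50% denser atmosphere removes 50% of
# the already-thin margin for satellites parked this low before they can climb.
```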
SpaceX declared it a success in crisis management. And that’s really the story here: the company that has been spearheading the reckless plan to populate the planet’s low orbits with microsatellites is turning out to be reckless in the deployment of that giant cluster. SpaceX could have just postponed the launch for a week but decided otherwise, losing 80% of the payload. And it’s less than 5% of the way done with launching all of the 42,000 planned satellites. There are presumably going to be a lot more launches over the next five years.
But increased solar activity doesn’t just pose a risk to freshly launched satellites sitting just above the atmosphere. It sounds like solar weather can also impair the ability to calculate trajectories of objects in orbit. This could be particularly perilous for the Starlink cluster given that, as we’ve seen, the cluster operates on the assumption that the satellites are not in unique orbits and will routinely need to self-correct to avoid collisions using “autonomous collision avoidance systems”. So any solar storms that disrupt that ability to predict collisions and self-correct could be utterly disastrous, even if those capabilities are knocked out for relatively short periods of time.
And, of course, as we’ve also seen, Starlink has managed to turn itself into a viable military target now that it’s proving to be vital for Ukraine’s war efforts. The risk of some sort of direct military attack on the cluster is rising too for the foreseeable future. Imagine what kinds of military opportunities a powerful solar storm might create.
And that’s all why the warnings about the threat increased solar storm activity poses to satellites aren’t quite like the similar warnings we’ve heard in solar cycles past. It’s a lot more crowded up there this time:
““Whatever you’ve experienced in the past two years doesn’t matter,” Fang said, as reported in SpaceNews. “Whatever you learned the past two years is not going to apply in the next five years.””
It’s like climate change for space weather. Except largely predictable. And not caused by human activity. But as experts warn, any satellites inhabiting the lower orbits of the planet are going to experience an extra choppy ride over the next five years. How many plunging satellites are we going to see during this period? Time will tell. Time and the occasional ‘shooting star’:
But note that it’s not just that an inflated atmosphere creates extra drag that threatens to pull down the lowest-orbiting satellites. All that drag also makes calculating orbital trajectories more difficult. Don’t forget that the Starlink system avoids collisions by constantly watching for potential collisions and adjusting orbits as needed. It’s one of the requirements of throwing thousands of satellites into the same low-orbit space. That whole process is going to be extra hard over the next five years as Starlink continues to flood that space with tens of thousands more micro-satellites:
Finally, as the experts remind us, this isn’t just some hypothetical risk to Starlink. A solar storm caused 40 Starlink satellites to plunge from space back in February:
A minor storm forced 40 Starlink satellites out of orbit. It’s not a great sign. Don’t forget that SpaceX plans to launch over 40,000 microsatellites once this cluster is completed. They aren’t even 1/20 of the way there yet and these problems are already happening. How many more satellites will SpaceX have in low orbit by the time the solar activity peaks over the next five years?
But as the following Time article from back in February describes, the loss of 40 of the 49 freshly launched satellites to a minor solar storm was really more ominous than it might initially appear. Ominous because it indicates a high risk-taking threshold on the part of Starlink’s decision-makers. Because as the article points out, SpaceX had plenty of warning about the storm and could have simply postponed the launch for a week. Instead, they went ahead with the launch and just planned on putting the 49 satellites into ‘low-drag’ mode in the hopes of riding out the storm. In the end, SpaceX predictably tried to spin it all as a grand success in crisis management.
So just as increasingly powerful solar storms in coming years are something we can predict with a high degree of confidence based on past observations, reckless decision-making on the part of Starlink is also something observers can reasonably predict during this same period. A giant orbital gamble is scheduled for the next five years:
“SpaceX applauded itself for handling the problem with minimal risk to other satellites or to people or property on the ground—while ignoring the question of whether it would have been wiser simply to postpone the launch for a week or so. “This unique situation demonstrates the great lengths the Starlink team has gone to ensure the system is on the leading edge of on-orbit debris mitigation,” the company wrote.”
A job well done. That’s how SpaceX spun the loss of 40 out of 49 newly-launched satellites back in February. Observers weren’t quite as impressed. Especially given that this solar storm was a mere 2 on a scale of 5. And typical. So typical that the company had a warning that this was coming. But for whatever reason, SpaceX decided to ignore those warnings and go ahead with the launch anyway. And that’s really the takeaway lesson here when it comes to assessing the upcoming orbital risks: SpaceX currently has a rather reckless track record. The whole idea of flooding the Earth’s lower orbit with tens of thousands of microsatellites is reckless to begin with. But even the implementation of that reckless project has been reckless. The reckless implementation of a reckless project is generally a recipe for bad outcomes:
And that brings us to NASA’s curiously-timed warning issued on Feb 7, as the 40 satellites were in the process of plunging: NASA issued a five-page letter to the FCC expressing concerns about Starlink creating “the potential for a significant increase in the frequency of conjunction events, and possible impacts to NASA’s science and human space flight missions”. An increased frequency of conjunction events. It’s a polite way of warning about orbital disasters like the unstoppable chain reaction of Kessler’s syndrome.
And don’t forget that the Starlink cluster was found to be responsible for over half of the weekly orbital encounters in the Fall of 2021. When NASA wrote that letter it already had plenty of evidence regarding the risks of Starlink:
Keep in mind that, with Starlink’s pivotal role in the conflict in Ukraine, odds are we’re not going to be seeing too many attempts by NASA to rein in the platform any time soon. It’s too important for that project. Which, of course, is what makes Starlink a viable military target. A military target that only grows in military importance the more it grows in physical size. The risks just keep growing as the cluster grows. And as SpaceX demonstrated back in February, it has a plan to deal with those risks: just launch more satellites.
And who knows, maybe just launching more satellites will work. For a while. The issue is what happens when it doesn’t work anymore. And we already sort of know what happens. Kessler’s syndrome happens. A lack of warnings isn’t the problem. We’ve been warned. We just don’t seem to be actively heeding those warnings.
Following up on the story about the growing risks that solar radiation poses to the low-orbit Starlink constellation of satellites and the troublingly casual response SpaceX had in the face of these risks — resulting in the loss of almost all of the satellites launched in early February — here’s a story about another vulnerability in the Starlink system that the company doesn’t appear to be taking very seriously:
A Belgian security researcher just published a how-to manual on hacking into the Starlink system. This isn’t the first story about hacking attempts being waged against Starlink. The system has already become a military hacking target given the role it’s playing in Ukraine’s military efforts. But we hadn’t heard about successful hacks before. That’s changed, and all you need to carry out the hack is access to one of the satellite receiver dishes and a homemade $25 Raspberry Pi-based ‘modchip’. The researcher, Lennert Wouters, published the details of the hack on his GitHub page this month.
Now, it doesn’t sound like the hack gave Wouters control over the satellite. But it did reportedly give him access to layers of the communication network that users normally cannot access. He claims to have even managed to figure out how to communicate with the backend servers, making this attack a possible vector for accessing Starlink’s own computer networks.
Wouters informed Starlink of this vulnerability last year. The company has issued a software update that reportedly makes the hack more difficult, but not impossible. And here’s one of the key elements of this story: the vulnerability can’t simply be patched away, because it’s rooted in software hardcoded onto the main chip of the hardware already in the field. So hackers are potentially going to be able to exploit this hack for as long as that existing hardware remains in service.
Here’s the other key detail to keep in mind: Wouters informed SpaceX about this vulnerability last year. So, ideally, SpaceX has already dealt with it and modified that hardcoded, vulnerable software in all of the hardware it has shipped since the disclosure. And yet we are getting no indication that the company has actually taken that step. Instead, we’re getting assurances from the company that Starlink users don’t need to be at all concerned about their own security. So, for all we know at this point, the hardware shipped this year could carry the same vulnerability. And this is the same company that, remember, went ahead with the ill-fated launch of 49 satellites back in February despite the incoming solar storm:
“The researcher notified Starlink of the flaws last year and the company paid Wouters through its bug bounty scheme for identifying the vulnerabilities. Wouters says that while SpaceX has issued an update to make the attack harder (he changed the modchip in response), the underlying issue can’t be fixed unless the company creates a new version of the main chip. All existing user terminals are vulnerable, Wouters says.”
This is clearly a ‘White Hat’ hacking story. Lennert Wouters, a Belgian academic security researcher, isn’t trying to take down the Starlink constellation. But should any ‘black hat’ hackers decide to replicate Wouters’s attack, it sounds like they will be able to do so. At least on the Starlink hardware already in the field, because it sounds like the vulnerability resides in firmware stored on a chip that can’t be updated. SpaceX has issued some sort of patch that apparently makes the attack more difficult to execute, but it’s still possible.
So when we learn that Wouters informed SpaceX about this vulnerability last year, and that it’s a vulnerability that can’t be fixed in hardware that has already shipped, we have to ask: has SpaceX actually fixed that hardwired firmware in everything it has shipped since the vulnerability was disclosed? Note how we are hearing nothing about the company doing so. That’s part of what makes this story rather unsettling. It’s another indication that SpaceX is prioritizing speed and ‘getting there first’ over security. So when we learn that Wouters is describing this hack on his GitHub account, we can be pretty confident A LOT of other people are going to be engaging in this exact hack because SpaceX can’t actually patch it. At least not entirely:
Thankfully, Wouters makes it sound like the hack doesn’t actually give the attacker the ability to take down the satellite systems, which would be a recipe for Kessler’s Syndrome. Don’t forget that Starlink assumes the satellites aren’t going to be in entirely independent orbits, and the ability to make on-the-fly course corrections is crucial for how the system operates while avoiding a chain reaction of space junk. And yet Wouters also warns that the hack can be used to learn about how the Starlink network operates. He’s even communicating with backend servers with it! So while the hack itself may not be devastating, it also sounds like it could be used to learn how to execute genuinely devastating attacks:
Don’t forget that as Ukraine becomes more and more reliant on long-range missile platforms and drones, Starlink is only going to become more and more of a tempting military target. We can be pretty confident Russian hackers have already figured out how to replicate this hack and are currently working out what additional attacks can be piggy-backed off of it. What will they find? We’ll see. Or rather, they’ll see. The hackers presumably aren’t going to tell the world if they figure out how to exploit this hack to spy on traffic. But it’s also worth noting how this kind of vulnerability could actually increase the physical safety of the Starlink cluster. How so? Because inducing some sort of catastrophic Kessler’s Syndrome chain reaction of space junk as a means of disabling this system will be a lot less incentivized if Russia’s military is able to easily hack Starlink and just spy on its traffic instead. Silver linings and all that.
How massive is the Pentagon’s fake online activity? Who exactly are they targeting? And why are they repeatedly getting caught? Those are the big questions raised by a new Washington Post report about the review of the Pentagon’s online ‘persuasion’ activities. The review was prompted by a report issued last month by Graphika and the Stanford Internet Observatory. The report basically describes a situation where fake online personas are being extensively created by Pentagon employees — or contractors — and also being extensively caught and deleted by platforms like Facebook. It’s that propensity for getting caught that appears to be a major factor in this review.
So is the lying and disinformation spread as part of these influence operations also part of the review? Sort of. It sounds like there’s an assessment regarding whether or not the lies actually work. That’s sort of the good news in this story: the Pentagon might dial back on the online deception. Not because it’s wrong but because it doesn’t seem to actually work. That includes the fake personas. They just don’t seem to be as persuasive as someone operating a social media account as an overt employee of the DoD.
And what about the years of hysterics about ‘Russian Trolls’ and the Internet Research Agency tampering with Americans’ fragile psyches? Yeah, that all appears to be part of the justification for all this. In fact, as the article points out, Congress passed a law in 2019 affirming the military’s right to conduct operations in the “information environment” to defend the United States and to push back against foreign disinformation aimed at undermining its interests. But as the second and third article excerpts — from 2011 and 2009 — remind us, this didn’t start in 2019. The Pentagon’s budget for foreign influence operations in 2009 alone was $4.7 billion. That’s all part of the context of the Pentagon’s ongoing review of its global PsyOp activities. A review that will presumably have a PsyOp-ed version eventually issued to the public where we’re told everything is great and there’s no problem at all:
“A key issue for senior policymakers now is determining whether the military’s execution of clandestine influence operations is delivering results. “Is the juice worth the squeeze? Does our approach really have the potential for the return on investment we hoped or is it just causing more challenges?” one person familiar with the debate said.”
Do the lies even work? That’s the big question policymakers are facing as part of the fallout from a report issued last month by Graphika and the Stanford Internet Observatory showing how Facebook and Twitter have been identifying and taking down fake accounts. And while that report didn’t explicitly name the Pentagon as being behind the fake accounts, that was pretty obvious from the Pentagon’s response, ordering a review of the US military’s internet activities. Internet activities focused on persuading foreign populations through fake internet personas. Activities that are not only authorized by US law and policy, but were expanded by Congress in 2019 with the passage of a law affirming that the military could conduct operations in the “information environment” to defend the United States and to push back against foreign disinformation aimed at undermining its interests. Fake personas ostensibly tasked with countering disinformation. That’s the kind of policy environment these activities have been operating in:
Also note how these fake accounts were apparently easily detected by Facebook. It raises the question: what percentage of the fake accounts operating on Facebook are known by Facebook to be fake? Is the knowledge that governments are often behind fake accounts part of the reason Facebook and other social media platforms have done so little to address rampant bot activity? Because that’s the scenario kind of depicted in this report: Facebook knew about the fake accounts and was complaining directly to the DoD about those accounts and taking some of them down, but the policies never changed. The bot activity went unabated:
And regarding the growing trend of social media giants hiring senior figures out of national security positions in government, note how the Facebook employee who was lodging these complaints with the Pentagon, David Agranovich, previously worked on Donald Trump’s National Security Council. Also note who Agranovich spoke with when he issued those complaints: Christopher Miller. Recall how Miller was one of the figures ominously promoted by Trump in the days following the November 2020 election, with questions about what exactly he did in the lead-up to the January 6 Capitol insurrection still yet to be adequately answered, in part due to the mysterious loss of text messages. So it’s rather interesting to note that the guy elevated to acting Secretary of Defense had previously overseen the Pentagon’s online influence operations:
Finally, the disturbing reference to the use of “deep fakes”. Just imagine how sophisticated military deep fakes could ultimately be. It’s a rather terrifying prospect. You almost can’t come up with a more effective means of getting global populations to ‘not believe their lying eyes’ than by flooding the internet with convincing deep fakes. That’s the kind of fire being played with here:
But let’s also not assume that the activity covered in that Graphika report somehow just started in 2019 following the passage of that law clarifying the US military’s right to engage in information warfare. We’ve been hearing reports about this kind of activity for years. For example, recall the following report from 2011: The US Air Force was already using sophisticated software designed to allow a single user to operate up to 50 fake social media personas. This was 11 years ago:
“As the rest of the contract explains, the Air Force would be able to manipulate IP addresses to make these “individuals” appear to be located in any part of the world. That is explicitly to protect the “identity of government agencies and enterprise organizations,” otherwise known as large defense contractors. The system would be used at MacDill Air Force Base near Tampa as well as in Kabul, Afghanistan and Baghdad, Iraq.”
That sure sounds a lot like the kind of activity described in that Graphika report. Again, this was 2011. We can reasonably assume the scale of these programs has expanded significantly over the following decade of social media explosion.
And that 2011 report was far from the first time we’d received reports like this. As that article points out, the AP had already found that the US spends billions of dollars annually trying to influence global opinion. At least $4.7 billion in 2009 alone. And while that 2009 AP report doesn’t explicitly mention fake online personas, we can be pretty confident that there were at least a few fake online personas created from those multi-billion-dollar annual budgets. And as the AP report also points out, we can be pretty confident the propaganda getting pumped out by this machine isn’t just impacting foreign populations. It’s just assumed that domestic populations are receiving this propaganda too. And this was 2009:
“An Associated Press investigation found that over the past five years, the money the military spends on winning hearts and minds at home and abroad has grown by 63 percent, to at least $4.7 billion this year, according to Department of Defense budgets and other documents. That’s almost as much as it spent on body armor for troops in Iraq and Afghanistan between 2004 and 2006.”
At least $4.7 billion for 2009 alone. That was the Pentagon’s budget for ‘winning hearts and minds’ that the AP discovered after its investigation. It was a massive operation involving tens of thousands of people. 13 years ago. Again, it’s presumably a lot bigger now:
So what is the full impact of these billions of dollars and thousands of people? We don’t really get to know. By definition. It has to remain a secret. Like the secret Pentagon-written news articles percolating throughout the media landscape:
And those examples of Pentagon-crafted news articles underscore the unavoidable nature of these kinds of operations: there is no realistic way to avoid having this propaganda impact domestic populations, despite the laws to the contrary. It’s an inevitability the Pentagon appears to have acknowledged, since these foreign persuasion operations appear to be operating with the full expectation that domestic audiences are going to be persuaded too. And that’s been the case as far back as 2003, when then-Secretary of Defense Donald Rumsfeld was championing the agenda:
They knew what they were doing back in the early days of the War on Terror. It was a global propaganda war. A war deemed to be just as important as the kinetic war on the battlefield, with billion dollar budgets to back it up. That’s the thing currently under review. Because it kept getting caught and wasn’t lying effectively enough, apparently.
It sounds like the ongoing game of ‘space chicken’ over Ukraine is intensifying. Russia just issued a warning at the UN that should be taken very seriously. And yet, as we’re going to see, it’s not clear it’s being taken seriously at all. Even worse, it appears that not taking Russia’s threat seriously might be part of the US’s strategy for dealing with the threat. A strategy that is ultimately a game of space chicken. A game to see who ‘blinks’ first.
The Russian warnings weren’t specifically targeting SpaceX’s Starlink cluster of mini-satellites, but it’s pretty obvious what Russia was referring to when it warned that “Quasi-civilian infrastructure may be a legitimate target for a retaliatory strike.” And as we’ve seen, Starlink isn’t just operating as a quasi-military piece of infrastructure for Ukraine’s military. It’s turning out to be an absolutely vital platform for Ukraine’s military capabilities and has already faced Russian military hacking attempts as a consequence. That’s important context for Russia’s warnings. Russia has already been probing and testing Starlink’s defenses, so when we hear these warnings it’s a sign that a significant attack on Starlink is possible.
But as we’re going to see in the second article excerpt below, a report published just days before Russia’s warning at the UN, part of the context of that warning is the fact that the US Space Force doesn’t appear to be recognizing the biggest risk of an attack on Starlink. That would obviously be the risk of triggering an out-of-control space junk chain reaction, or Kessler’s Syndrome. A risk that’s been dramatically enhanced by the proliferation of Starlink’s thousands of mini-satellites in low orbit. But if we listen to Space Force, Starlink has been a wild success as a military tool. Russia hasn’t even shot down a single Starlink satellite, a fact that Space Force attributes to the “resilience” of Starlink in the face of lost satellites. In other words, Russia hasn’t bothered taking down Starlink satellites precisely because Russia knows that taking down a few satellites would do nothing. This “resilience” rooted in having a large number of satellites is what so excites the Pentagon about Starlink. As Space Force put it, as the Department of Defense looks at future scenarios when satellites could be targeted, “what we base the resiliency off of is proliferation.” The more satellites the merrier, from a military perspective. A message that completely ignores the obvious and growing risk of these satellite clusters triggering Kessler’s syndrome.
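To see why ‘resilience through proliferation’ is such an attractive pitch, here’s a toy counting exercise in Python: randomly scatter a hypothetical constellation’s surviving satellites across coverage cells and see how much of the service survives as satellites are knocked out. Every number in it is made up for illustration, and, pointedly, it models none of the debris each destroyed satellite would leave behind, which is exactly what the ‘proliferation equals resiliency’ framing leaves out.

```python
import random

# Toy illustration of "resilience through proliferation": how much of a large
# constellation has to be knocked out before coverage meaningfully degrades?
# Purely a counting exercise with made-up parameters; it deliberately models
# none of the debris each destroyed satellite would leave behind.

random.seed(0)

def surviving_coverage(total_sats, sats_lost, needed_per_cell, cells, trials=300):
    """Average fraction of coverage cells still served after random losses."""
    served = 0.0
    for _ in range(trials):
        counts = [0] * cells
        for _ in range(total_sats - sats_lost):
            counts[random.randrange(cells)] += 1   # scatter survivors over cells
        served += sum(c >= needed_per_cell for c in counts) / cells
    return served / trials

TOTAL = 4000    # hypothetical constellation size
CELLS = 100     # hypothetical coverage cells
NEEDED = 20     # satellites needed overhead per cell (made up)

for lost in (0, 500, 1000, 2000):
    coverage = surviving_coverage(TOTAL, lost, NEEDED, CELLS)
    print(f"{lost:>5} satellites lost -> ~{coverage:.0%} of cells still covered")
```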
That was the message publicly delivered by Space Force just days before Russia issued its ominous warning at the UN. A message that makes the US’s strategy in space much clearer: just throw up a constellation of mini-satellites that get used for military purposes and dare your rivals to blow it up. That’s ‘space chicken’. And that’s the context of Russia’s UN warning: a warning about attacks on Starlink taking place in the middle of a giant game of ‘space chicken’. The kind of game that results in Kessler’s Syndrome if no one ‘blinks’:
““Quasi-civilian infrastructure may be a legitimate target for a retaliatory strike,” senior foreign ministry official Konstantin Vorontsov told the United Nations, reiterating Moscow’s position that Western civilian and commercial satellites helping Ukrainian’s war effort was “an extremely dangerous trend.””
The formal warnings are being issued: commercial satellites used by Ukraine for military purposes represent “legitimate targets” for a military strike. Not that a formal warning was really necessary. It was already pretty obvious the Starlink satellite cluster was operating as a Ukrainian military asset. What’s more remarkable is that the cluster hasn’t actually been incapacitated yet, given that it’s clearly operating as an absolutely crucial military asset.
At the same time, it’s not actually surprising that Russia hasn’t physically attacked Starlink yet given the enormous potential fallout of such an attack. The fallout of Kessler’s Syndrome and an out-of-control space-junk chain reaction. As the CEO of Iridium warns, “If somebody starts shooting satellites in space, I’d imagine it would quickly make space unusable.” That’s a key piece of context for Russia’s warning: the best operational defense for the Starlink cluster is the fact that if Russia attacks it, the consequences could be disastrous for everyone:
Adding to the ambiguity of the situation is that it’s not really clear whether or not an attack on Starlink could be considered an act of war on US infrastructure:
And yet, despite the warnings from Russia that it could use its obvious capabilities to destroy the Starlink cluster if it chooses to do so, we’re getting comments from US Space Force officials touting Starlink’s “resilience” against potential anti-satellite attacks. It’s like two completely separate conversations happening in parallel:
Now, it’s worth noting that Russia doesn’t necessarily have to blow the Starlink satellites up to disable them. Some sort of electronic attack that disables satellites could be deployed, which would make Kessler’s Syndrome a less likely outcome. So in that sense, yes, Starlink should be robust against an electronic warfare attack that manages to disrupt the operations of some subset of satellites without disrupting the entire network. It’s possible that’s the “resilience” the US Space Force was celebrating. But also recall how the system relies on the automated dynamic repositioning of the satellites to avoid collisions. So knocking those mini-satellites out of commission does still pose the risk of a collision. It’s just not as immediate a risk as there would be if you blow them up with anti-satellite missiles.
Still, it’s pretty remarkable just how excited the US Space Force sounds when it comes to Starlink’s “resilience”. In fact, it was just days before Russia issued its warning at the UN that we got the following report about Space Force’s enthusiasm for Starlink’s “resilience” in the face of military attacks. Resilience rooted in the large number of satellites and the fact that the network can still operate even if an enemy disables a large number of them. And yes, it is indeed resilient, much like how the internet is resilient to individual nodes being knocked out. But knocked-out internet nodes don’t turn into space-junk chain reactions that threaten to take down the rest of the internet. Only Starlink’s internet possesses that vulnerability. And yet we see no acknowledgement from Space Force of that paramount space junk chain-reaction risk. Instead, it’s just a celebration of how Russia hasn’t managed to shoot one down yet:
“To date, however, “how many Starlink satellites have the Russians shot down? … zero,” noted Derek Tournear, director of the U.S. Space Force’s Space Development Agency.”
It’s a rather bizarre metric for the US Space Force to be touting. Yes, Russia has yet to shoot down any Starlink satellites. And that decision not to shoot down any satellites is rooted in the “resilience” of the network and Russia’s awareness that disabling just a few satellites would have no impact on the overall network’s capabilities. At least that’s the narrative we’re getting from Space Force. The “resilience” of Starlink against the disabling of a few of its satellites is itself the deterrent against an attack. It’s a convenient narrative that ignores the fact that attacks on satellites remain unprecedented and are something Russia presumably isn’t going to casually engage in. Hence the UN warning. But it’s a narrative that also completely ignores the fact that Russia is probably highly wary about launching physical attacks that could trigger Kessler’s Syndrome.
And, again, don’t forget that Russia’s warning at the UN about Starlink being a legitimate military target was issued just a few days after this report about Space Force pushing a narrative of Starlink being immune to attack:
What kind of space MADness are we looking at here? Is there a new kind of space-cluster mutually assured destruction (MAD) showdown shaping up? A game of ‘space chicken’ where countries launch dual-use satellite clusters and just dare rivals to shoot them down and risk Kessler’s Syndrome? That does appear to be the plan. At least that’s the US’s plan. A plan that has already been put into effect. Starlink is already a giant space-based dare. Will Russia be willing to risk a catastrophe in space? Or a catastrophe on the battlefield? That’s the MADness at work here. Space chicken MADness. Don’t look up.
It’s been a good month for critics of Silicon Valley. If there’s one thing more satisfying to watch than the steady meltdown of Meta, it’s the active blowup of Twitter. And all signs are that the end to the woes of these social media giants is nowhere in sight. It’s just going to keep getting worse.
And that brings us to a fascinating study recently put out by researchers at the University of Adelaide about the active propaganda networks they found on Twitter and Facebook. Specifically, propaganda networks pumping out either ‘pro-Ukrainian’ or ‘pro-Russian’ content since the outbreak of the conflict in Ukraine this year. As the article notes, the study differs from previous studies on the topic of online disinformation in several key ways. For starters, the researchers examined over 5 million tweets, dwarfing the data sets used for other studies. Crucially, they also didn’t limit their analysis to accounts that had already been flagged by Twitter for violating its rules, something previous studies had done. This turns out to have been vital for their analysis, since over 90 percent of the tweets they examined were ‘pro-Ukrainian’ while the ‘pro-Russian’ accounts were systematically purged by Twitter.
This is a good time to recall how the Pentagon ordered a review of its online influence operations back in September following a report by the Stanford Internet Observatory/Graphika that found the Pentagon was heavily involved in creating fake social media personas for online influence operations. Online bots that were repeatedly getting caught and purged from platforms and weren’t actually influencing people. As we also saw, Congress passed a law in 2019 affirming the military’s right to conduct operations in the “information environment” to defend the United States and to push back against foreign disinformation aimed at undermining its interests. So when we read about this new report exposing a vast ‘pro-Ukrainian’ online influence operation, it’s pretty obvious that much of this is the handiwork of the tens of thousands of people employed by the US government for clandestine influence operations.
Interestingly, as the researchers also discovered, the Russian government’s influence operations appeared to be largely non-existent for the first week of the war. And when it did get underway, the pro-Russian accounts were aggressively purged. It was a surprising dynamic given Russia’s reputation as being a master of online manipulations. A reputation that is, of course, largely a product of Western propaganda.
So a giant pro-Ukrainian bot army was just revealed in a report that makes clear that the bulk of the propaganda Western audiences are exposed to in this conflict is propaganda put out by Western governments. How will the Western media respond to this report? Presumably by ignoring it, which is a reminder that lies of omission are at the core of any propaganda campaign. It’s the other side of this coin: a massive bot-powered megaphone pumping out disinformation that doesn’t just misinform but also distracts from all the real content getting systematically ignored. The ‘fog of war’ now includes massively misinformed populations:
“An anti-Russia propaganda campaign originating from a ‘bot army’ of fake automated Twitter accounts flooded the internet at the start of the war. The research shows of the more than 5‑million tweets studied, 90.2 percent of all tweets (both bot and non-bot) came from accounts that were pro-Ukraine, with fewer than 7 percent of the accounts being classed as pro-Russian.”
It’s not even a contest. Of the 5 million tweets related to the war in Ukraine studied by the University of Adelaide’s researchers, over 90% were from pro-Ukrainian sources. And crucially, the researchers didn’t skew their analysis by limiting the analyzed tweets to those put out by accounts that had been flagged by Meta (Facebook) or Twitter for violating guidelines. It was a finding that ran counter to the narratives we’ve often gotten from much smaller studies put out by groups like the Stanford Internet Observatory/Graphika team that are focused entirely on detecting ‘Russian trolls’. For example, recall how the Pentagon ordered a review of its social media manipulation initiatives after Graphika issued a report describing a vast network of fake personas pushing pro-Western narratives that were repeatedly getting caught and purged from platforms. While Graphika’s researchers didn’t identify these fake personas as being created by the Pentagon or Pentagon contractors, observers noted that this was obviously the case, hence the Pentagon review ordered following the report. That’s part of the context of this report: its findings were very different from other reports on the topic because it didn’t limit its data set to accounts that were identified as breaking the rules. And yet, at the same time, it appears that the fake personas getting pumped out by the Pentagon and other Western governments are so aggressive in the disinformation they’re putting out that they’re still getting repeatedly banned. It’s an indication of the sheer volume of fake accounts at work here:
But there’s also the fact that ‘pro-Russian’ accounts were just getting much more aggressively purged, in part because ‘pro-Ukrainian authorities’ were playing a role in guiding that purging:
Also note how the lack of pro-Russian accounts seems to defy the common wisdom about vast, sophisticated Russian online influence operations. Because of course. Reality has a way of doing that:
Also note the interesting juxtaposition of the nature of the content between the ‘pro-Russian’ and ‘pro-Ukrainian’ sides: The non-bot pro-Russian accounts involved significant information flows, something not observed with the non-bot pro-Ukrainian accounts. At the same time, they found the pro-Ukrainian bots were focused on promoting angst and panic. Information flows vs angst and panic:
Finally, there’s the reference to the elephant in the room: the US military has been openly conducting “full spectrum: offensive, defensive, [and] information operations” with a focus on targeting ‘Russian disinformation’. Don’t forget how Congress passed a law in 2019 affirming the military’s right to conduct operations in the “information environment” to defend the United States and to push back against foreign disinformation aimed at undermining its interests. And that was just building off of decades of global influence operations run by the US military. So when the researchers found that the pro-Ukrainian tweets tended to peak between 6 and 9 PM across US time zones, it underscores just how much propaganda the public in the West is routinely exposed to when it comes to foreign policy issues. It’s just an avalanche of propaganda, justified under the guise of countering ‘Russian disinformation’:
It’s all lawful and approved by Congress. In other words, it’s going to continue and expand. That’s what we can more or less expect. More bots pushing angst and panic. And more sophisticated bots that defy Twitter’s and Facebook’s bot-detection algorithms. Along with no meaningful mainstream coverage of the fact that this vast propaganda network is dominating the West’s online discourse over Ukraine. Don’t forget that when the Pentagon ordered that review back in September, the problem wasn’t that it was caught running a vast propaganda network. The problem was that the network wasn’t seen as working. So don’t expect this vast propaganda bot network to just continue doing what it’s doing. Expect it to get much larger and better at what it’s doing, using the most advanced techniques modern militaries can deploy. And also don’t expect very much reporting on this.
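For readers who want a sense of what the Adelaide-style tallying described above actually involves, here is a minimal sketch. The file name and column names (created_at, stance, is_bot) are assumptions for illustration, not the study’s actual data or code:

```python
import pandas as pd

# Hypothetical input: one row per tweet, with the columns an Adelaide-style tally
# would need. The file and column names are assumptions for illustration only.
# Assumed columns: created_at (UTC, naive), stance ("pro_ukraine"/"pro_russia"), is_bot (0/1)
tweets = pd.read_csv("ukraine_war_tweets.csv", parse_dates=["created_at"])

# 1. Overall stance share across ALL accounts, not just ones already flagged or suspended.
print(tweets["stance"].value_counts(normalize=True))

# 2. How much of each side's volume is bot-driven.
print(tweets.groupby("stance")["is_bot"].mean())

# 3. Posting-hour profile: shift to a US time zone and look for an evening bulge.
eastern = tweets["created_at"].dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
print(eastern.dt.hour.value_counts().sort_index())  # an 18:00-21:00 peak would match the report
```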
We’ve been hearing warnings about the risks of government abuses related to COVID data-tracking since the start of the pandemic almost three years ago. Warnings that clearly weren’t listened to by governments around the world, as the following AP report describes. From China, to Israel, India, Australia, and the US, COVID-related data collected by governments ostensibly for pandemic-related purposes has been retooled for general use. Data ranging from cellphone location information, to facial recognition, to even highly sensitive and invasive personal health information like substance abuse histories is finding its way into law enforcement databases and who knows where else.
And as we’re also going to see, when we look at the list of government abuses of this data described in the following AP piece from around the world, it’s the abuses carried out via Palantir on behalf of the US government that sound like the most invasive. As we’ve seen, Palantir was given US government contracts in 2020 to use its data mining and surveillance technology for the pandemic. And based on Freedom of Information Act documents recently obtained about the government’s plans for how to use this data, federal officials were contemplating how to share data that went far beyond COVID-19 data and included integrating “identifiable patient data,” such as mental health, substance use and behavioral health information from group homes, shelters, jails, detox facilities and schools. There was also reportedly a lack of information safeguards or usage restrictions.
It’s the kind of revelation that raises the obvious question: does Palantir already have access to all of this highly invasive personal health information the US government was considering sharing? And if so, who else is Palantir selling this information to? This is a good time to recall Palantir’s keen interest in acquiring health data analytics firms in the UK. Just how much personal health information is Palantir collecting and reselling? And who are its clients? These are the kinds of questions raised by this AP report.
Questions with some disturbing, readily available answers, as we should expect. As we’re going to see in the second article excerpt below, Palantir is just buying a lot of this data from commercial data brokerage giants like LexisNexis and Thomson Reuters, two Palantir “partners” who reportedly pipe their vast databases, with highly detailed information on virtually everyone living in the US, directly into Palantir’s systems. It also turns out that LexisNexis’s parent company, RELX, was an early Palantir investor. That’s another big part of this story: when we’re talking about the explosion of the US surveillance state, we’re inevitably talking about an explosion in Palantir’s business. But not just Palantir’s business. All of Palantir’s partners too.
Ok, first, here’s that AP report on COVID-data government abuses from around the world. With Palantir seemingly leading the way:
“For more than a year, AP journalists interviewed sources and pored over thousands of documents to trace how technologies marketed to “flatten the curve” were put to other uses. Just as the balance between privacy and national security shifted after the Sept. 11 terrorist attacks, COVID-19 has given officials justification to embed tracking tools in society that have lasted long after lockdowns.”
A global investigation into how COVID-fighting technologies have evolved into new permanent pieces of the surveillance state. It’s guaranteed to be a depressing read. But as we can see, it’s remarkable how it’s the US where this phenomenon appears to be at its most extreme.
First, there are the examples out of China, where provincial governments created smartphone apps that were used to regulate the flow of people during the pandemic based on infection status records. Apps that linked health information with location and even credit information. It’s an example of the ongoing abuse potential of all the various new tools created just for the pandemic:
Then there’s the example out of Israel, where cellphone location information that was first being used by Israeli security services for anti-terror purposes was retooled for the pandemic to track large gatherings of people. But, of course, those retooled tools were then re-retooled into even more invasive security measures that threatened to ensnare innocent bystanders. It’s an example of how the pandemic didn’t just trigger a wave of new technologies but also became a force multiplier for existing surveillance state technologies:
Similarly, in India we find that the facial recognition systems the government has been investing in since Narendra Modi won office in 2014 became a key go-to technology for enforcing masking requirements. But it did a lot more than just enable the enforcement of those new rules. It created an excuse for authorities to further the establishment of a 360-degree surveillance society:
Then we get to Australia, where, as in China, government-built apps took on a leading role in implementing strict lockdown measures. But those apps did a lot more than that, with data “incidentally” ending up in the hands of the nation’s intelligence agencies and even being used to investigate the shooting of a biker gang boss, despite prior assurances that the data would ONLY be used for contact-tracing purposes. It’s an example of how all of the assurances about how data will be used after getting collected are nothing more than that: assurances. Not guarantees:
Finally, we get to the examples of COVID-data abuses in the US, where we find multiple contracts between the US government and Palantir involving data-mining. Alarmingly, files released under the Freedom of Information Act revealed that federal officials discussed how to share data that went far beyond COVID-19 status and included “identifiable patient data,” such as mental health, substance use and behavioral health information from group homes, shelters, jails, detox facilities and schools. That is some very invasive personal health data flowing through Palantir’s systems. Adding to the alarming nature of these findings is the fact that there didn’t appear to be any information safeguards or usage restrictions in the contracts. Don’t forget that Palantir has a lot more clients than just the US government. Who else was Palantir potentially selling this information to? It’s an example of how this story of the potential government abuses of COVID-related data isn’t actually limited to abuses by governments. When you’re feeding highly sensitive data streams into companies like Palantir, that data could end up in all sorts of public and private hands:
That is some wildly invasive data collection. It’s basically ‘all the data we can possibly collect on everyone’ as a model. Taking place in the US, an alleged bastion of civil liberties. And don’t forget that Palantir has government clients all around the world, like its contracts with the UK’s NHS national health provider and its aggressive purchases of UK health data analytics firms. We have every reason to suspect Palantir has been offering similar services to other governments.
So given that we’re now learning that US federal officials were looking into handing Palantir troves of sensitive patient health data and then sharing that data with various government agencies, we have to ask what exactly is the data Palantir has access to and who else is being sold access to that data? That brings us to the following remarkable piece in The Intercept from last April about the incredible stream of highly detailed data getting pumped directly into Palantir’s databases by two rather unexpected entities: LexisNexis and Thomson Reuters. Yep, the two firms known for their giant databases of news articles happen to have another increasingly lucrative type of service: data brokerage. Highly detailed data brokerage services with a myriad of data points on hundreds of millions of people in the US. That information is being fed directly into Palantir, with both LexisNexis and Thomson Reuters listed as Palantir “partners”. Beyond that, LexisNexis’s parent company, RELX, was an early Palantir investor, so this is probably a relationship that’s been going on for years. As we’re going to see, DHS has had a contract with LexisNexis since at least 2016 to use this data for immigration enforcement, so it’s clearly a data trove with a lot of ‘actionable’ data.
That’s all part of the context of the AP’s report: when we’re learning about federal officials looking into taking the information provided by Palantir and sharing it across the government for non-COVID-related activities, we’re actually talking about sharing the massive data streams fed into Palantir that have been made commercially available to government agencies and private entities for years through the growing data brokerage industry that doesn’t appear to have any meaningful regulations:
“For those seeking to surveil large populations, the scope of the data sold by LexisNexis and Thomson Reuters is equally clear and explains why both firms are listed as official data “partners” of Palantir, a software company whose catalog includes products designed to track down individuals by feasting on enormous datasets. This partnership lets law enforcement investigators ingest material from the companies’ databases directly into Palantir data-mining software, allowing agencies to more seamlessly spy on migrants or round them up for deportation. “I compare what they provide to the blood that flows through the circulation system,” explained City University of New York law professor and scholar of government data access systems Sarah Lamdan. “What would Palantir be able to do without these data flows? Nothing. Without all their data, the software is worthless.” Asked for specifics of the company’s relationship with Palantir, the LexisNexis spokesperson told The Intercept only that its parent company RELX was an early investor in Palantir and that “LexisNexis Risk Solutions does not have an operational relationship with Palantir.””
Yep, both LexisNexis and Thomson Reuters — two companies not typically associated with selling massive troves of personal information — are listed as “partners” of Palantir. The kind of partners who apparently just pipe their torrents of data directly into Palantir’s databases. Beyond that, we’re learning that LexisNexis’s parent company, RELX, was an early investor in Palantir. So the relationship between Palantir and LexisNexis presumably goes back a number of years:
And note how ICE has been using this LexisNexis database since at least 2016. Presumably via Palantir. It’s another reminder that these invasive data aggregation practices didn’t just start with the pandemic. The pandemic merely turbo-charged an existing phenomenon:
Keep in mind that, as expansive as these databases might seem today, they’re only getting more expansive with time. More and more of everything we do is being collected, databased, and resold in the vast barely-regulated commercial data brokerage space.
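As a purely hypothetical illustration of what “piping” broker data into an analysis platform amounts to in practice, consider the following sketch: broker dossiers get indexed by identity keys and joined onto an agency’s subject list. None of the field names, files, or matching rules here reflect any vendor’s actual schema or Palantir’s actual software:

```python
# Hypothetical sketch of what a data-broker "pipe" amounts to in practice:
# joining a broker's person dossiers onto an agency's subject list by identity keys.
# None of these field names or files reflect any vendor's actual schema.
import csv

def load_broker_dossiers(path):
    """Index broker records by a normalized (name, date_of_birth) key."""
    index = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["name"].strip().lower(), row["dob"])
            index.setdefault(key, []).append(row)   # addresses, phones, vehicles, etc.
    return index

def enrich_subjects(subjects_path, dossiers):
    """Attach every matching broker record to each case subject."""
    with open(subjects_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["name"].strip().lower(), row["dob"])
            yield row["case_id"], row["name"], dossiers.get(key, [])

# Example usage with hypothetical files:
# dossiers = load_broker_dossiers("broker_export.csv")
# for case_id, name, records in enrich_subjects("case_subjects.csv", dossiers):
#     print(case_id, name, len(records), "broker records attached")
```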
So will Palantir end up getting an extension on its federal COVID-tracking contract? Time will tell, but as these articles make clear, there’s a lot more than just COVID tracking going on with these contracts. And presumably a lot more than just governments ultimately buying this data.
Here we are again. It’s another MLK Day in America. Which means another year went by without any real national discussion about the ongoing coverup surrounding the MLK assassination. But it’s still worth noting an interesting report published back in October about another Black civil-rights-era figure who ended up with a shockingly extensive FBI case file of her own: Aretha Franklin. Yes, the FBI managed to assemble a 270-page dossier on the singer, which was recently released to the public following a FOIA request. A dossier spanning four decades, from 1967 to 2007. In other words, this isn’t just another story about the excesses of the J. Edgar Hoover-era FBI. This is a contemporary, ongoing story.
And as we also have to keep in mind, a modern story about extensive FBI dossiers on public figures isn’t just a “Hoover redux” kind of story. This is the era of Big Data. And the era of the mass privatization of the most sensitive government services. It’s the era of Palantir’s capture of the most sensitive data flowing through the US.
And that brings us to a little-noticed story from back in August of 2021 that really should be kept in mind when reading about the FBI’s decades of deeply invasive snooping on figures like Aretha Franklin. A story about the apparent mess taking place inside the Palantir platform used by the FBI and US federal prosecutors for handling federal investigative files. A mess involving documents that were supposed to be accessible only to the prosecutors involved with a case but ended up being accessible to everyone with access to the platform. And that exposure wasn’t merely theoretical: FBI agents not involved with the case ended up accessing the files multiple times. And while it would be tempting to brush the story off as a lone anomaly, it turns out the improper security settings that exposed these case files were the default settings. As observers point out, this suggests these kinds of ‘oops’ situations involving the FBI’s case files are a lot more common than this lone case would suggest.
And that’s really the big story here: the FBI’s mass domestic surveillance of targeted groups doesn’t appear to have ever really ended. At the same time, the era of mass privatization of government services is only becoming more entrenched. The mass government surveillance of yesteryear has been fused with the Big Data privatized infrastructure of today. It’s a huge story that can never really be told. So we just have to take these hints.
Ok, first, here’s a NY Times piece describing the release of a 270 page FBI dossier on Franklin created by an agency that was clearly terrified about the kinds of public passions she could inflame:
““Picking up in 1967 and 1968 through the early 1970s, the F.B.I. was keeping files on almost every major Black figure and particularly anyone who seemed to be, or was suspected of being, involved in civil rights or Black politics,” said Beverly Gage, a professor of history and American studies at Yale and the author of a forthcoming biography, “G‑Man: J. Edgar Hoover and the Making of the American Century.””
Yep, by the time MLK was assassinated, the FBI had files on virtually every major Black figure in America. That’s the chilling context of this story about the extensive files on Aretha Franklin. It wasn’t an anomaly that the FBI had a 270-page file on Franklin. That was the norm for an agency that largely viewed African Americans as subversive second-class citizens:
And now here’s the original Rolling Stone report on the story that includes a rather crucial detail: the pervasive spying on Franklin didn’t end until 2007. This is a contemporary phenomenon taking place in the modern Big Data era:
“Despite the four decades of surveillance and hundreds of pages of notes, the bureau ultimately never discovered anything linking Queen of Soul to any type of extremist or “radical” activities. “It does make me feel a certain way knowing the FBI had her targeted and wanted to know her every move” Kecalf Franklin says. “But at the same time knowing my mother and the way she ran her business I know she had nothing to hide so they wouldn’t have found anything and were wasting their time. As you see…they found nothing at all.””
Four decades, from 1967 to 2007. That’s how intensive the FBI surveillance of Aretha Franklin was, which obviously went well beyond the FBI’s J. Edgar Hoover era! That’s part of the story here: we aren’t just talking about more revelations about the out-of-control domestic spying by the Hoover-era FBI. This is apparently an ongoing modern phenomenon. At least that’s what we can reasonably infer from this story. It’s not like the FBI is going to just tell us about it. This all came out via a FOIA request, after all:
So with that additional dark chapter in US civil rights history finally revealed, here’s a pair of articles that should serve as a reminder that nightmare situations involving the abuse of FBI case files are potentially a lot more nightmarish in the modern era. This is the era of Big Data, after all. We’re still living through a golden age of systematic abuses of privacy and undue surveillance.
So we have to ask: who are the subversive second-class citizens of today’s FBI and just how extensive are their dossiers? And that brings us to the following story that was really just a blip back in August of 2021. A story about an FBI case file f#ck up. The kind of f$ck up that should raise all sorts of questions about how routinely these types of f@ck ups are happening.
It was a f*ck up with the FBI’s case file system. But it wasn’t just an FBI f%ck up. It was Palantir’s f!ck up too. Yes, the FBI is using Palantir software to host its vast case file system. And in this case, a set of files that were only supposed to be accessible to the US prosecutors involved with a particular case was instead left accessible to everyone with access to the system, and was indeed accessed by several FBI agents uninvolved with the case over a period of 15 months.
And as we’re also going to see, while Palantir is claiming that this was all the fault of the government employees using its platform who improperly set the security settings for these files when they uploaded them to Palantir’s platform, Palantir’s alibi isn’t quite that clean. Because the problem wasn’t just that the files were uploaded with improper security settings. They were uploaded with the default security settings. In other words, the f&ck up described in this story is the default f^ck up for Palantir’s system. Meaning this presumably wasn’t the only instance of federal case files that were supposed to be restricted ending up accessible to anyone with access to this Palantir-hosted system:
“The data had been meant to be segregated so that it was available only to those prosecuting the case. However, the error meant four FBI employees unconnected to the prosecution were able to view the data for over a 15-month period, Strauss said.”
An FBI case file that was supposed to be limited to just the prosecutors in a case was instead accessed by four agents completely unconnected to the case for over 15 months. It’s quite an ‘oopsy’ for the FBI. But also Palantir:
So how rare is this ‘oops’ situation with the FBI’s case files hosted on Palantir’s systems? Well, as the following New York Post article describes, the problem didn’t arise from FBI agents accidentally, manually giving the rest of the FBI access to these files. No, the problem arose by default. Yes, the mistake made when uploading these files to Palantir’s system was forgetting to change the default security settings for the files. Default settings that give the entire FBI access to files in the system. And as Albert Fox Cahn, the founder of the privacy and civil rights group Surveillance Technology Oversight Project, points out, that explanation suggests this is happening in a lot more than just this one case. It’s the default setting, after all:
“Manhattan prosecutors instructed Palantir employees to delete the data on Aug. 17 and said they do not intend on using the information in their case against Griffith, according to the letter.”
It’s not just FBI employees or federal prosecutors with access to this sensitive data. Palantir’s engineers presumably have access too. Access to all of it.
And note how the FBI agent working the case against accused hacker Virgil Griffith wasn’t automatically informed by Palantir’s platform about this unauthorized access. He only became aware of it after another agent emailed him about it. It’s a detail that hints at a lack of internal safeguards in a system intended to handle the most sensitive kind of data:
And as Albert Fox Cahn, the founder of Surveillance Technology Oversight Project, points out, the fact that this all happened because someone accidentally used the default settings on Palantir’s platform suggests this is happening in a lot more cases:
How many other sensitive cases are there with files just floating around, accessible to anyone with access to Palantir’s platform? Who knows, but let’s not forget that Palantir’s engineers can presumably access all of it as part of their systems administrator roles.
Still, in light of the recent revelations about modern day mass FBI domestic spying we have to ask: so is access to the FBI’s ongoing collection of mass domestic spying dossiers available to all other FBI personnel by default on Palantir’s platform? Or just the agents involved? We shouldn’t really have to ask the question because mass domestic spying dossiers shouldn’t actually exist. But since they do, we have to ask.
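To see why an “open by default” setting practically guarantees this kind of exposure, here is a toy access-control model. It is not Palantir’s actual system; it just illustrates the design flaw the reporting describes: restriction only happens if the uploader remembers to ask for it.

```python
# Toy access-control model (NOT Palantir's actual software). It illustrates why an
# "open by default" audience guarantees this kind of exposure: files are only
# restricted if the uploader remembers to say so.

EVERYONE = "EVERYONE_ON_PLATFORM"

class CaseFile:
    def __init__(self, name, allowed=None):
        self.name = name
        # The dangerous part: when no audience is specified, default to everyone.
        self.allowed = set(allowed) if allowed else {EVERYONE}

    def can_read(self, user, user_groups=()):
        return (EVERYONE in self.allowed
                or user in self.allowed
                or bool(self.allowed & set(user_groups)))

# Uploader remembers to restrict the file to the prosecution team:
restricted = CaseFile("discovery_motion.pdf", allowed={"prosecution_team"})
print(restricted.can_read("uninvolved_agent", ["fbi_all_agents"]))  # False

# Uploader forgets; the default silently exposes it to the whole platform:
forgotten = CaseFile("discovery_motion.pdf")
print(forgotten.can_read("uninvolved_agent", ["fbi_all_agents"]))   # True
```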
With all the reports about the US and German decision to open the floodgates for heavy tanks to Ukraine, it’s worth noting another form of escalation that’s quietly taking place in terms of the military capabilities of the aid delivered to Ukraine: Germany just announced the planned delivery of 10,000 more Starlink terminals to Ukraine.
And while there isn’t any indication that these terminals have more capabilities than the thousands of Starlink terminals already sent to Ukraine, there appears to have been a major upgrade in what the Ukrainians are capable of doing with the terminals. The kind of major upgrade that simultaneously upgrades the military threat posed by the Starlink system and implicitly makes the odds of some sort of attack on the platform — and all of the potential fallout, including the Kessler Syndrome space catastrophe — all the more likely.
As we’ve seen, the heavy reliance that Ukraine has on Starlink for many of its key military communications like controlling drones has already threatened to turn that microsatellite cluster into a valid military target. As we’ve also seen, Ukraine’s drone warfare capabilities now include bomb-dropping quadcopter militarized drones built from off-the-shelf parts. We’re now getting reports of Starlink terminals getting built into the quadcopters themselves, extending the operating range of the drones dramatically to effectively anywhere in the world.
Which obviously includes not just Russian-held territory in Ukraine but even deep inside Russia. It’s also worth recalling the reports about the CIA leading a behind-the-lines sabotage operation inside Russia with the help of an unnamed NATO ally. Starlink-powered drones would have some powerful potential for an operation like that. In other words, Starlink is set to become a much more significant threat to not just Russian forces on the battlefield but much of the rest of Russia too. With the communications link no longer the constraint, the range of attack will be limited mainly by the drones’ batteries, as the rough arithmetic below suggests.
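Here is a rough, back-of-envelope illustration of that battery constraint. The endurance and speed figures are illustrative assumptions, not the specs of any actual drone used in Ukraine:

```python
# Back-of-envelope range for a battery-powered quadcopter with a satellite link.
# All numbers are illustrative assumptions, not the specs of any drone used in Ukraine.
endurance_min = 40      # assumed flight time on one charge, in minutes
cruise_kmh = 60         # assumed cruise speed, in km/h

max_distance_km = cruise_kmh * endurance_min / 60
print("one-way reach:", max_distance_km, "km")            # 40.0 km on these assumptions
print("out-and-back radius:", max_distance_km / 2, "km")  # 20.0 km

# The point: the satellite link can reach anywhere, but the battery keeps a quadcopter
# a short-range weapon. Genuinely long range needs a different airframe, which is where
# the jet-powered designs discussed below come in.
```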
That’s part of the context of Germany’s announced plans to deliver 10,000 Starlink terminals to Ukraine in the coming months. Long-range drone warfare is set to become a reality in Ukraine. At least until something somehow incapacitates Starlink itself.
Ok, first, here’s a PC Magazine article from November of last year about the new Starlink-enabled extended-range drones being built by Canadian firm RDARS, which, as the company touts, promise a global range. Global range with a catch: RDARS’s drones simply use a Starlink dish at the remote ground stations the drones communicate with. So it’s not really the drones themselves that got the capability to communicate with the Starlink satellites. And at the end of the article, we hear the company predict that “Starlink antennas will be small enough to put into small quad type drones and even smartphones. You might not get more than 1 to 2mb/s but it will be highly reliable and low latency, which is all a drone needs.” As we’re going to see, that was quite a prescient prediction:
“Although RDARS is installing the Starlink equipment only to the drone’s ground station, Braverman sees potential in adding the Starlink terminal on the drone itself. “I believe one day Starlink antennas will be small enough to put into small quad type drones and even smartphones. You might not get more than 1 to 2mb/s but it will be highly reliable and low latency, which is all a drone needs,” he added.”
We’re heading towards a time when Starlink dishes will be built into the drones themselves. Someday. That was the prediction from RDARS just a few months ago. And then, a couple of weeks later, we got the following update:
“On Wednesday, a pro-Russian paramilitary group called KCPN posted photos of a captured drone that seems to come from Ukraine. KCPN investigated how the unmanned drone was communicating with its handlers, and discovered the retrofitted Starlink equipment attached to the machine. ”
It was just a matter of time. Not a lot of time either. These drones are now a reality. And while the drone photographed in the article appears to be the kind of ‘quadcopter’ style of drone that’s inherently going to have a limited range due to its limited battery power, there’s nothing stopping this method from being extended to other types of drones with far greater operating ranges. In other words, just as it was just a matter of time before we saw Starlink drones, it’s also just a matter of time before we see Starlink drones that actually have the kind of extended ranges needed to really make use of that global communications potential. Which means drone strikes potentially deep inside Russia. It’s just a matter of time. And therefore just a matter of time before more counter-measures are developed too:
Is Starlink-dish-hunting part of the next phase of the conflict in Ukraine? It’s looking probable. And probably already happening:
“The technology can supposedly pinpoint a Starlink dish within 5 to 60 meters (16 to 196 feet) of its actual location. In addition, it can be fitted on top of a moving vehicle, allowing it to detect Starlink activity across the front lines on a battlefield.”
We’ll find out if this Starlink-dish-hunting technology actually works. But let’s hope it does. Because with the increasing military applications of Starlink, the Russian military is going to face two choices: destroy those dishes. Or destroy the entire Starlink satellite cluster. Only one of those scenarios risks turning the world’s fleet of satellites into a floating junk yard.
And don’t forget: what happens in Ukraine doesn’t stay in Ukraine. It’s not Vegas. So while we don’t know exactly when long-range weaponized Starlink drones capable of executing remote attacks will become a reality, we know this technology is coming. Strikes anywhere, launched from anywhere else, remotely piloted by someone sitting who knows where. That’s going to become a reality. Sooner rather than later, thanks to this war.
Following up on the reports about the incorporation of Starlink terminals into Ukrainian military drones — potentially giving them the capacity to strike deep inside Russia — here’s a set of articles about just how extensive those plans already are. Last week, Ukraine announced the creation of new drone assault companies inside its armed forces. It sounds like these units are specifically going to be equipped with Starlink terminals, a further indication of just how reliant the Ukrainian military effort is on the ‘civilian’ Starlink infrastructure. As we’re going to see, that wasn’t the first announcement of Ukraine’s long-range drone strike development efforts. In fact, the mysterious apparent drone strike at Engels air base deep inside Russia back in December came one day after Ukraine announced it was conducting “final” tests on its long-range drone capabilities. So Ukraine has already demonstrated both the capability to attack large swathes of Russian territory and the willingness to do so.
So how is SpaceX addressing the fact that its Starlink platform is a key part of Ukraine’s growing drone capabilities that now include striking deep inside Russia? Well, in response to recent “war criminal” accusations from a Russian pundit over Starlink’s use by Ukraine, Musk assured Russia that Starlink isn’t allowing itself to be used for long-range drone strikes.
That’s the situation developing: Ukraine is basically declaring that it has developed long-range drone strike capabilities that it’s planning on rolling out soon, at the same time Musk is assuring Russia that Starlink won’t be used for long-range drone strikes. What’s going on here?
Ok, first, here’s a report from last week about the new drone assault units Ukraine is deploying. With Starlink technology at their core:
It’s not exactly a shocking development. Inevitable, really. But it is a potentially destabilizing development for the planet’s space junk challenges when we’re hearing that these new drone assault units are going to be equipped with Starlink terminals. Starlink is becoming more and more of a valid military target. Especially with the development of new drones equipped with their own Starlink terminals, potentially giving them a global operating space and allowing for strikes deep inside Russia. Long-range drone strikes that appear to have already started with the reports back in December on drone strikes against Engels air base. Ukraine isn’t just planning on striking deep inside Russia with long-range drones. It’s already happening. With big plans for a lot more long-range attacks:
“The project to develop an unmanned aerial vehicle with a range of more than 1,000 kilometers and a payload of up to 75 kilograms “has reached such a stage that, unfortunately, we cannot talk about it,” Sad added.”
It’s quite an announcement: Ukraine’s long-range drone program has reached such a stage that they can’t talk about it.
But, of course, we’ve already been talking about Ukraine’s long-range drone program for months after the drone attacks deep inside Russia back in early December. Drone attacks that, as the following Meduza.io article points out, came just a day after Ukraine announced its long-range drones were undergoing their “final” tests.
As the article notes, it appears that the drones used in those long-range attacks were modified Tu-141 Strizh Soviet-era jet-powered drones. Crucially, it appears that these old-school drones lack the kind of targeting capabilities that would make them effective for missions like bombing runs. But that problem may have been solved in recent years using “civilian satellite technology”. And while Starlink isn’t mentioned in the report, it’s the obvious candidate for the “civilian satellite technology” service that would actually be used. So given the reports about modified quadcopter-style drones with Starlink dishes built into them that are already being used on the battlefield, we have to ask: did those long-range drones have Starlink terminals built into them too?:
“In October 2022, Ukraine reported that it had developed its own model of suicide drone capable of traveling 1,000 kilometers (620 miles) and carrying a warhead weighing 75 kilograms (165 pounds). On December 4, the day before the strikes on Russia’s Engels and Dyagilevo air bases, the Ukrainian military reported that the drones were undergoing their “final tests.” The drones in question are likely an upgraded version of the Tu-141. After all, Ukraine has the facilities and expertise necessary to make them: in the Soviet area, Strizh drones were manufactured at the Kharkiv Aviation Factory.”
It was an undoubtedly successful “test” of Ukraine’s long-range drone capabilities. Capabilities that appear to include precision targeting using “civilian satellite technology”. That sure sounds like a reference to Starlink:
And note how these Soviet-era jet-powered drones appear to be a temporarily available weapon. There’s a limited number and no more are being produced. But with Ukraine ramping up its production of cheaper long-range modern drones, it’s just a matter of time before Ukraine is capable of launching the kind of large-scale simultaneous attacks involving dozens of drones deep inside Russia. It’s coming. At least that’s the plan:
And that brings us to the following ‘clarification’ made by the head of the leading ‘civilian’ satellite technology provider that has become the lynchpin of Ukraine’s drone ambitions: SpaceX CEO Elon Musk felt the need to point out that SpaceX is explicitly NOT allowing Ukraine to use its Starlink satellite network to launch “long-range” drone strikes:
“In response, Musk tweeted: “SpaceX Starlink has become the connectivity backbone of Ukraine all the way up to the front lines. This is the damned if you do part.””
The “connectivity backbone of Ukraine all the way up to the front lines.” That’s how SpaceX’s own CEO characterizes Starlink. Which raises obvious questions about the ‘civilian’ status of that infrastructure. Questions that Musk was undoubtedly trying to ward off with his public assurances that Starlink cannot be used for long-range drone strikes:
It’s not exactly clear how Starlink would prevent a drone with a Starlink terminal embedded in it from operating while inside Russian space, although that seems like the kind of restriction that should be technically possible for Starlink to impose.
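For what it’s worth, here is a minimal sketch of how a service-side geofence could work in principle: compare a terminal’s reported position against restricted polygons and refuse connectivity inside them. This is emphatically not SpaceX’s actual implementation, and the polygon below is a made-up box:

```python
# Sketch of a service-side geofence in principle (NOT SpaceX's implementation):
# deny connectivity when a terminal's reported position falls inside a restricted
# polygon. The polygon below is a made-up rectangle for illustration only.

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test; polygon is a list of (lat, lon) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        if (lon_i > lon) != (lon_j > lon):
            crossing_lat = (lat_j - lat_i) * (lon - lon_i) / (lon_j - lon_i) + lat_i
            if lat < crossing_lat:
                inside = not inside
        j = i
    return inside

RESTRICTED_ZONES = [
    # Made-up rectangle standing in for "do not serve here" territory.
    [(50.0, 36.0), (50.0, 40.0), (46.0, 40.0), (46.0, 36.0)],
]

def allow_service(lat, lon):
    return not any(point_in_polygon(lat, lon, zone) for zone in RESTRICTED_ZONES)

print(allow_service(48.0, 38.0))  # False: inside the made-up restricted box
print(allow_service(49.0, 32.0))  # True: outside it
```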
So at the same time we’re getting reports about Ukraine’s intent to develop its long-range drone strike capacity, and to do so in part with the help of ‘civilian’ satellite technology, we’re also hearing assurances from Elon Musk that Starlink won’t be used for that exact purpose. Those are some pretty mixed signals. And that brings us to the latest reminder that Elon Musk is effectively a major US defense contractor whose SpaceX business interests are deeply intertwined with the US national security complex: astronomers got an unusual, but not unprecedented, visual treat of a bizarre spiral in the sky recently thanks to another SpaceX launch. But it wasn’t the launch of more Starlink satellites. It was the launch of a US military GPS satellite:
“The observatory also provided a time lapse video of the spiral, showing its evolution over time, along with an unsettling number of satellites zipping by. “Earlier that day, SpaceX launched a satellite to medium-Earth orbit,” Subaru Telescope said in the video. “We believe this phenomenon is related [to] its orbital deployment operation.” SpaceX’s GPS III Space Vehicle 06 mission did in fact launch earlier in the day, delivering a GPS satellite for the U.S. Space Force.”
The mystery of the swirl was solved: it was the launching of a US Space Force military GPS satellite. One of many launches for the US government, SpaceX’s biggest client. And that deep relationship with the US government hangs over any claims by SpaceX that its Starlink cluster is simply a ‘civilian’ satellite technology system. Starlink is, in Musk’s own words, the “connectivity backbone” providing key military infrastructure “all the way up to the front lines” in a conflict the US is deeply invested in. How is Russia going to respond to Starlink in the event of a wave of dozens of simultaneous drone strikes hitting deep inside Russia? Because that’s Ukraine’s plan: massive long-range drone strikes. Will Musk’s assurances that Starlink wasn’t used in those strikes be enough to ward off a Russian response to the threat Starlink poses? And will those assurances by Musk even be true, or will they just be disinformation bluster? We’ll find out, but it appears that Ukraine has big plans for long-range strikes. So if you’ve ever wondered what kind of bizarre orbital light shows the world might get to see following the triggering of the Kessler Syndrome, keep your eyes on the skies. Especially in the days following any reports about waves of drone strikes deep inside Russia.
Here’s a rather interesting story following up on the emerging Ukrainian practice of embedding Starlink terminals directly into drones, potentially turning them into long-range offensive platforms that could strike deep inside Russia. The story also potentially relates to those explosive charges laid out by Sy Hersh describing the US’s direct role in planning and executing the Nord Stream attacks, which is obviously the kind of story that might have Russia looking for opportunities to ‘return the favor’ when it comes to major US infrastructure. Starlink is a major piece of US infrastructure, after all. Infrastructure that is increasingly getting weaponized by Ukraine.
And that brings us to the announcement made on Wednesday by SpaceX president Gwynne Shotwell about the limits the company is placing on Ukraine’s use of Starlink. Limits intended to prevent the “weaponization” of the platform. As Shotwell put it, “It was never intended to be weaponized...However, Ukrainians have leveraged it in ways that were unintentional and not part of any agreement”:
““We know the military is using them for comms, and that’s OK,” Shotwell added. “But our intent was never to have them use it for offensive purposes.””
Starlink knew Ukraine was using its service for military communications...just not military communications for offensive purposes. That’s the new spin we’re suddenly hearing from SpaceX, along with vague references to apparent acts of geofencing that the company has already deployed to restrict Ukraine’s use of Starlink on the front lines:
Who knows what exactly is prompting these public declarations. It’s not like the weaponization of Starlink was a secret. But something clearly prompted this shift by Starlink. So, again, it’s hard not to notice that this happened on the same day as the publication of Sy Hersh’s explosive claims about the US planning behind the Nord Stream attacks. The kind of story that undoubtedly has Russia looking for opportunities for retaliation. Retaliation Starlink has been courting for quite a while now. And the kind of retaliation that Ukrainian drone ingenuity is going to make all the more inviting.
Knowing is half the battle. It’s not just a cartoon slogan. Of course, there’s an obvious flip side to that kernel of wisdom: confusing the enemy is also half the battle. And then there are all the relevant bystanders, like members of the general public, both foreign and domestic. What they know, or think they know, is part of this battlefield too. But perhaps most important of all are foreign leaders and elite influencers. Knowing, and simultaneously confusing and misdirecting, is really half the battle if we’re honest about it. Of course, honesty is obviously one of the first casualties for a topic like this.
So with that awareness of the importance of possessing meaningful knowledge in mind, here’s an announcement that should give pause to anyone with an interest in getting an accurate understanding of the conflicts facing the world, whether they live inside or outside the US: the Office of Strategic Influence — created in 2002 by Donald Rumsfeld with the mission of influencing global public opinion, including US opinion — appears to be back. Back and now operating in an environment where charges of ‘Russian and Chinese disinformation’ targeting US audiences are routine.
Yes, the newly created Influence and Perception Management Office (IPMO) — created in March of 2022 — has been tasked with “perception management” and ‘countering disinformation’. This is a good time to recall the September 2022 Washington Post review of the Pentagon’s online ‘persuasion’ activities, which described a situation where large numbers of fake online personas were being created by the Pentagon, only to be caught and deleted by platforms like Facebook. Also recall that 2021 report by Bill Arkin describing a vast secret army of tens of thousands of undercover military and intelligence personnel operating under the ‘Signature Reduction’ program designed to give cover stories for their national-security-related jobs. Both of these stories are presumably intertwined with the IPMO’s agenda.
And while the IPMO is just one of the many new ‘anti-disinformation’ government agencies that have popped up since the 2016 election and all of the charges of ‘Russian meddling’, it stands out in one key respect: while most of the Department of Homeland Security’s counter-disinformation efforts are unclassified in nature, much of the IPMO’s operations are highly classified.
Another major difference between the IPMO and its Office of Strategic Influence predecessor is the broader geopolitical context: while the Office of Strategic Influence was created as the ‘War on Terror’ was just ramping up, the IPMO is explicitly focused on upcoming ‘great power competitions’. Which is obviously a reference to ongoing or planned conflicts with Russia and China. With Russia, there’s the obvious need to assure domestic audiences that all the Nazis and fascists gaining influence and power in Ukraine are nothing to worry about. And in the case of China, there’s the ongoing US government push to blame the COVID-19 pandemic on a secret Chinese biowarfare program.
Finally, there’s the difference in the overall legality of what these agencies were tasked with doing: while military propaganda targeting domestic audiences was made illegal in 1948 with the passage of the Smith-Mundt Act, the 2012 Smith-Mundt Modernization Act ended that domestic propaganda ban, arguing that the global nature of the internet made such a ban impractical to maintain. So while the US population should probably expect more domestically targeted propaganda, it’s not like the floodgates were just opened. We’ve been drowning in this for years now. We’re just going to drown a little more:
“Now, two decades later, “perception management” is once again becoming a central focus for the national security state. On March 1, 2022, the Pentagon established a new office with similar goals to the one once deemed too controversial to remain open. Very little has been made public about the effort, which The Intercept learned about through a review of budget documents and an internal memo we obtained. This iteration is called the Influence and Perception Management Office, or IPMO, according to the memo, which was produced by the office for an academic institution, and its responsibilities include overseeing and coordinating the various counter-disinformation efforts being conducted by the military, which can include the U.S.’s own propaganda abroad.”
The Office of Strategic Influence created by the Bush administration over two decades ago is back. This time as the Influence and Perception Management Office (IPMO), a new agency under the command of acting director, James Holly, himself a former director of special programs for U.S. Special Operations Command. It’s a spooky new entity:
But unlike the Office of Strategic Influence’s focus on the ‘War on Terror’, the IPMO appears to have been made in anticipation of “great power competition”. In other words, it’s going to be used for the conflict in Ukraine and, eventually, a war against China. And note how various ‘perception management offices’ started springing up after all the allegations of Russia’s meddling in the 2016 election. It’s some rather ominous context given all the evidence that later came out pointing at Israel and the UAE being behind much of that ‘Russian’ election meddling:
And as an example of one of the many other ‘perception management’ offices that have popped up inside the US government, 2022 also saw the creation of the Defense Military Deception Program Office, tasked with “Sensitive Messaging, Deception, Influence and other Operations in the Information Environment.”:
And as the article reminds us, while military propaganda targeting domestic audiences has long been seen as out of bounds for these kinds of Pentagon-directed influence operations, that’s not really the case anymore following the ‘update’ to US law in 2012 that determined the global reach of the internet made the distinction between foreign and domestic propaganda effectively moot. That’s part of the legal context of the new IPMO: it was preceded by a decade of effectively legal domestic military propaganda operations:
Finally, while the IPMO looks to be a modern-day reboot of the Office of Strategic Influence created in 2002, it’s important to recall how this kind of domestically targeted “perception management” has its origins in the Iran-Contra scandal. In particular, all of the efforts to convince the US public that the ruthless and brutal far-right Contras were honorable freedom fighters while the socialist Sandinistas were abusive authoritarians:
What the US government did back then was clearly very illegal. But that was the 80s. As we’ve seen, times change. So do laws and general ethics. Everyone’s mind is fair game in the age of the internet, itself a creation of the Pentagon.
And while this is all very ominous and disturbing, we’re presumably going to be getting all sorts of propaganda about how counter-disinformation propaganda is actually good for us and beneficial. And in no time you’ll be convinced that it’s all fine. And also that war with Russia and China is good and necessary. Give it time.
“How am I in this war?” That’s the question Elon Musk posed in Walter Isaacson’s new biography. A question asked in the face of the seemingly unwinnable position Musk has found himself in with respect to Ukraine and Ukraine’s use of SpaceX’s Starlink satellite constellation in the war with Russia. Specifically, it was a question raised by an incident covered in Isaacson’s book that has blown up into a public relations nightmare for Musk: Musk’s refusal to extend Starlink’s coverage to include the major Russian port in Sevastopol, Crimea.
As the story is typically told, Musk cut off Starlink’s coverage in an area where Ukrainian drone boats were heading towards the port, killing the operation and forcing the drones to drift harmlessly to shore. As we’re going to see, it appears that Ukraine actually contacted Musk after the boats were already heading towards their destination, making an “emergency request” to extend the coverage all the way to Sevastopol. In other words, Ukraine launched a sneak attack before it actually had any assurance that Starlink’s services would be extended, and is now crying bloody murder over Musk’s refusal to go along with the last-minute ’emergency’ request.
Keep in mind the metaphorical Sword of Damocles looming over this entire situation, one that raises another big question. A question not just for Musk: the extreme vulnerability Starlink has with respect to military attacks due to the possibility of the Kessler Syndrome and an unstoppable cascade of orbital space junk. In virtually ALL of the coverage of this incident, there has been NO mention at all of the obvious risk of a Kessler Syndrome scenario had Musk gone along with Ukraine’s request. How is it possible that everyone seems to have missed this glaring detail in this story?
But also note the other major detail that’s left out of this story: what did the US government say to Musk about the incident? Because we are told Musk quickly got on the phone with National Security Advisor Jake Sullivan and General Mark Milley, in addition to speaking with the Russian ambassador. But we still have no idea what they told Musk. Did they agree that going along with the plan would pose an unacceptable risk of escalation? Or were they sympathetic to the Ukrainians?
There’s also the more general question of whether or not the US knew about the Ukrainian plan in the first place. It’s clear Ukraine would like to see this conflict blow up into a NATO-vs-Russia conflict. Was Ukraine trying to wrangle Starlink into a major escalation that would enrage Russia into striking US assets? And if that was the plan, who in the US government knew about it? We have no answers yet to these major questions.
But as we’re going to see, we are getting an idea of how the US is planning on responding to the incident: the Air Force is now openly musing about the need to get clarity on commercial platforms used for military purposes. Clarity that the platforms will be available for use upon request.
And as we’re also going to see, it’s not just the ongoing conflict in Ukraine that the incident has war planners worried about. What about Taiwan and the plans for a conflict with China? Will Musk, who has Tesla factories in China, allow Taiwan to use Starlink in the event of a Chinese invasion? It’s the kind of question that has some scrambling for solutions, including the solution of basically creating more Starlink competitors. Yep, so if it seemed like the threat of the Kessler Syndrome wasn’t looming large enough, just wait for the coming era of multiple competing satellite clusters. Sure, we already know that countries like China are now planning on creating satellite clusters of their own, but that doesn’t mean there can’t be more commercial competitors occupying that same space. The sky is the limit!
Of course, the sky is also limited, at least when it comes to how many satellites you can fit into low earth orbits at the same time. But we haven’t hit that limit yet and filling the skies with even more satellites in preparation for more conflicts appears to be the current plan. So while the world may have dodged the Kessler Syndrome bullet with this one incident, it appears the plan going forward is to make Kessler Syndrome an inevitability. It’s a matter of when, not if, at this point:
““There was an emergency request from government authorities to activate Starlink all the way to Sevastopol,” Musk posted on X, the platform formally known as Twitter that he owns. Sevastopol is a port city in Crimea. “The obvious intent being to sink most of the Russian fleet at anchor. If I had agreed to their request, then SpaceX would be explicitly complicit in a major act of war and conflict escalation.””
An emergency request by Ukrainian authorities to extend Starlink’s operating range. That appears to be what actually happened, as opposed to Musk suddenly withdrawing Starlink services from an area where it had previously operated. So Ukraine tried to use this ’emergency request’ to stage a powerful sneak attack — an emergency in the form of having launched the drone boats before actually getting permission from Musk — but got rebuffed:
And note how Musk was reportedly soon on the phone with Jake Sullivan and General Mark Milley to discuss the situation. But we aren't told what they told Musk, which is a rather massive question looming over this story. Did Sullivan and Milley back the Ukrainian attack, or did they agree with Musk that such a move would be potentially destabilizing and a dramatic escalation? We have no idea. But it sounds like Gwynne Shotwell, SpaceX's president, was ready to go in granting Ukraine's emergency request:
So what was the US government's stance on this whole episode? It's rather notable how there's been no real update on that major facet of this story as it's been playing out in the press over the past week. Were Sullivan and Milley aware of Ukraine's plans for this "emergency request" and already on board? Or were they as surprised as Musk and shared his concerns about a major escalation? We have no idea. But the peril posed by the risk of a Kessler Syndrome orbital chain-reaction resulting from any sort of attack on Starlink had to have been a factor in this discussion, right? Let's hope so, but also note the complete lack of any mention of such a risk in any of the coverage of this event. It's as if everyone is pretending Starlink has a magical shield despite that extreme vulnerability to attack being at the center of this story. Musk was obviously worried about an attack on Starlink. Why wouldn't he be? And, more importantly, why is no one else seemingly concerned?
And as the following article describes, while we don’t know what the Pentagon’s stance was on this particular Ukrainian sneak attack plan, the US military is indeed planning on responding to this incident. Plans that Air Force Secretary Frank Kendall described as gaining clarity and assurances that commercial platforms used for military operations will indeed be available upon request in the future:
“But the Pentagon is reliant on SpaceX for far more than the Ukraine response, and the uncertainty that Musk or any other commercial vendor could refuse to provide services in a future conflict has led space systems military planners to reconsider what needs to be explicitly laid out in future agreements, Kendall said during a roundtable with reporters at the Air Force Association convention at National Harbor, Maryland, on Monday.”
Note the framing of this incident by the Air Force: Musk's refusal to extend the Starlink range apparently wasn't seen as a sober refusal to play along with a surprise 'emergency' scheme that would have significantly escalated tensions between the US and Russia. Which, again, raises the question: what did Jake Sullivan and General Milley tell Musk? Did they demand he go along with the plan, but lack the legal power to compel him to do so? We don't know. But the fact that the Air Force is now thinking about the kinds of laws and regulations that might have prevented Musk from having the power to turn down Ukraine's request in the first place kind of hints at what Sullivan and Milley may have told him, and it doesn't bode well for the future of this conflict. This was, after all, not just a sneak attack on the Russian fleet. It was a planned massive escalation of the conflict and one that would have made direct conflict between the US and Russia all the more likely. On top of a giant Kessler Syndrome gamble:
So should we expect a wave of new laws limiting the independence of commercial platforms used for military purposes? It's possible given the level of public outrage this story has produced. But as the following Atlantic article suggests, there could be another response: more Starlink competitors. Presumably competitors who are far more likely to allow their platforms to be used for sneak attacks on Russian territory...and on China:
“The concerns about relying on Musk don’t end with Ukraine or even with questions of temperament. Musk’s commercial holdings could expose Washington to unwanted entanglements. Take, for example, his ownership of Tesla, which has a large factory and market presence in China. In the event of an invasion of Taiwan, would Musk willingly provide Starlink terminals to Taiwanese forces—at the behest of the United States—and take huge financial losses as a result? Last October, Musk told the Financial Times that China had already pressured him about Starlink, seeking “assurances” that he will not give satellite internet to Chinese citizens. He did not make clear in the interview how he responded, but Starlink was then and remains unavailable in China.”
Will Musk allow Starlink to be used in a war with China? It's one of the big questions people are now asking. And while options like compelling SpaceX with the Defense Production Act or nationalizing Starlink do exist, note the complication with those approaches: As a private company, Starlink can provide products that assist Ukrainian forces even while claiming that it's simply offering a service and not taking sides. It's an ambiguity that is seen as making it less likely Starlink will be attacked. In other words, the more the US makes it clear that Starlink has to operate as a US military asset, the more likely it is that we'll all wake up one day to shooting stars across the skies as Kessler Syndrome envelops the earth's orbital space:
Are we in store for a newly invigorated satellite constellation space race? It sounds like that’s what some want to see. Although, as the article noted, it’s not like this can happen very easily for any company that doesn’t also own its own rocket launching capacity. It’s hard to see any competitors replacing Starlink any time soon.
Of course, if you’re a company looking to make a big mark in space, setting up another satellite cluster competitor of Starlink isn’t necessarily the best investment. The future of orbital opportunities is mostly going to be cleaning up giant orbital messes.
Here's a set of articles about the growing orbital space race. As we should expect, the race to fill up the limited low orbit space currently getting populated with satellite clusters like Starlink is only heating up as the military utility of these satellite clusters continues to be proven on the battlefields of Ukraine.
But there’s a new innovation that could accelerate the development of military applications for this ostensibly civilian technology: direct smartphone-to-satellite communication.
As we’ve seen, Ukraine has already managed to repurpose the Starlink satellite dishes, allowing for drones with Starlink connectivity built-in and potentially giving these drones a much larger operational range. Now imagine doing that same thing with a tiny smartphone. That’s the potential that’s just around the corner, based on various announcements.
For starters, China announced the start of a new satellite cluster of its own, intended to compete directly with Starlink in providing consumer satellite services, specifically for China and most of the other regions involved in the Belt and Road Initiative: parts of Russia, Southeast Asia, Mongolia, India, and the Indian and Pacific Oceans. But it's not a competing cluster of cheap low-orbit satellites. Instead, it's just three high-orbit satellites. That's it. And Huawei has already released a 5G smartphone capable of communicating directly with the network.
But China won't have a monopoly on satellite networks capable of directly communicating with smartphones. SpaceX has already announced plans to upgrade the satellites it's going to launch in the future with celltower equipment, which, in theory, would allow any standard smartphone to communicate directly with Starlink. Neat technology, but again, just imagine the military applications.
And, of course, when we're talking about the military applications of civilian satellite infrastructure, we're also talking about the horrific potential for a Kessler Syndrome cascade of space junk that would be produced should these satellite networks ever end up as military targets. Which brings us to the last update on this story: Speaking at the UK Space Conference in Belfast last month, Maj Jeremy Grunert, of the US Air Force Judge Advocate General Corps, warned that companies needed to be careful about straying into conflicts. Grunert pointed out that Russia has already warned that the use of civilian infrastructure by Ukraine would make it a legal target. Grunert added that, “There was some shock at the time that those comments were made. But in the context of the law of war, the Russians are likely not wrong on that, because of the military benefits that those sorts of things can provide. It doesn't mean that civilian satellites would be targeted or targeted all the time. But it does mean that they potentially could be.”
And that’s our update: satellite-to-smartphone connectivity is coming, with all of the implications that come with it. Including military implications in response to the clear military applications for that kind of tech upgrade.
Ok, first, here's a piece on China's new competition for Starlink: a network of three high-orbit, high-throughput satellites that promises to serve nearly the entire Belt and Road region of the world. And with direct-to-satellite smartphones already available for purchase, China becomes the first country in the world to offer that kind of service:
“The network, which consists of three high-throughput satellites named ChinaSat 16, 19, and 26, is expected to compete with SpaceX’s Starlink, a low-orbit satellite system developed by the American aerospace company, according to a Beijing-based communications expert.”
Yes, Starlink's new competitor is kind of the opposite of Starlink: a network of just three high-orbit, high-throughput satellites. That's what's expected to compete with Starlink. And not just in China. These three satellites are going to cover most of the regions involved in the Belt and Road Initiative: parts of Russia, Southeast Asia, Mongolia, India, and the Indian and Pacific Oceans. China is getting so invested in this high-orbit approach to consumer telecommunications that it became the first country to offer satellite smartphone service with a 5G Huawei phone capable of connecting to these high-orbit satellites:
And yet, as Professor Sun Yaohua of Beijing University of Posts and Telecommunications cautions, China is still going to have to invest in low-orbit networks when it comes to next-generation 6G technology, in part because satellite orbits and radio frequencies are “first-come, first served” resources. In other words, there is an unavoidable race for orbital space. It's the new space race:
It's a race for low and high orbit space. The future is satellite clusters at all orbits. And satellite-connected smartphones apparently, which is the kind of technology that could have obvious extensive military applications. Just stick a satellite-connected smartphone in a drone and now you have a satellite-controlled drone. No repurposed Starlink dishes required.
And that future of satellite-connected smartphones could be here a lot sooner than expected. And without a phone upgrade. As soon as next year. Those were the ambitious plans announced by Starlink. Thanks to newer, larger Starlink satellites with additional celltower equipment, the Starlink satellite cluster will be able to directly communicate with existing phones that rely on LTE connectivity:
“The plan for Starlink Direct to Cell is different thanks to a lot of foundational improvements over what’s currently available. First, those other two networks are in a higher orbit: the iPhone’s Globalstar network is at 1,400 km above Earth, and Iridium is at 781 km. Starlink currently operates a lot closer to Earth, in the 550 km range. The other major shift is that SpaceX is developing the world’s largest rocket, Starship, and having the world’s largest rocket means you get to launch the world’s biggest satellites. Bigger satellites can involve bigger, more sensitive antennas than what generally are launched into space, and this part of the operation isn’t rocket science: Your tiny smartphone will have a much easier time connecting to the closer, bigger satellites, leading to a level of cellular space service that wasn’t possible before.”
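As a rough back-of-the-envelope illustration of why the altitude figures quoted above matter, here is a minimal sketch comparing free-space path loss from a phone to satellites at those three altitudes. The ~2 GHz carrier frequency is an assumption for illustration only, and real link budgets depend on antenna size, transmit power, and much else, but the lower orbit alone is worth roughly 8 dB relative to Globalstar's altitude:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for a given slant distance and frequency."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Altitudes quoted in the excerpt above (straight-overhead case, so slant range ~ altitude)
altitudes_km = {"Starlink": 550, "Iridium": 781, "Globalstar": 1400}
freq_hz = 2.0e9  # ~2 GHz, a stand-in for an LTE-band carrier (assumption)

losses = {name: fspl_db(alt * 1000, freq_hz) for name, alt in altitudes_km.items()}
for name, loss in losses.items():
    print(f"{name:10s} {altitudes_km[name]:5d} km  FSPL ~ {loss:.1f} dB")

print(f"Starlink advantage over Globalstar ~ {losses['Globalstar'] - losses['Starlink']:.1f} dB")
```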
As we can see, Starlink already has plans to deliver smartphone-to-satellite connectivity to existing phones. No new phones needed. It's one of the benefits of those lower orbits. But there's a catch: those low orbit satellites have to get bigger. And that's part of Starlink's plan. Once the SpaceX Starship is ready, the thousands of Starlink satellites yet to be launched are slated to get bigger and heavier. Keep in mind that Starlink already avoids collisions using an automated collision avoidance system where the satellites shift their orbits if potential collisions are detected. Heavier satellites are going to be harder to shift. Also keep in mind that Starlink's plans are to put around 42,000 satellites in orbit. Tens of thousands have yet to be launched, with plenty of time for additional equipment. Which makes this story a reminder that these satellites are probably going to get larger and more sophisticated the longer this Starlink project goes on. Sure, they could get smaller and lighter too. But that's not the trend:
And while it's not hard to imagine a wide variety of potential applications for turning cell phones into satellite phones, keep in mind one of the most obvious and explosive applications: warfare. In particular, Ukraine. We've already heard about Ukraine incorporating Starlink dishes into military drones. Now imagine a simple cellphone accomplishing the same connectivity. It really could be a military technological breakthrough, unleashed right into the middle of the conflict in Ukraine.
Also keep in mind one of the other obvious implications of a dramatic expansion of Starlink's military applications: it makes it that much more likely that Starlink will be treated as a military target. Something that Russia could legally do, as Maj Jeremy Grunert, of the US Air Force Judge Advocate General Corps, cautioned last month at the UK Space Conference in Belfast. As Grunert warned the audience, if Starlink is used for military applications, Russia can legally target it according to international rules of war:
“Speaking this week at the UK Space Conference in Belfast, Maj Jeremy Grunert, of the US Air Force Judge Advocate General Corps, said that companies needed to be careful about straying into conflicts.”
Be careful about allowing your civilian infrastructure to be used for war. It might become a military target. Legally, under international law. That was the warning issued last month by Maj Jeremy Grunert of the US Air Force Judge Advocate General Corps. When Russia warned Starlink that it could be targeted, that wasn’t bluster. That was a legal military right based on how Starlink is being used:
Try not to be shocked if one of these satellite clusters ends up on the receiving end of a military strike. That was Maj Grunert's warning. The kind of warning that Starlink has obviously heard and partially heeded. But only partially. Starlink is still very much being used as a military asset and it's hard to see how it doesn't become even more of a military asset after it implements the direct-to-phone upgrades.
Here’s a pair of updates about the growing number of military capabilities of the Starlink satellite cluster. The kind of military capabilities that could end up getting the cluster targeted by hostile militaries someday according to international law, as Russia reminded the world a couple of months ago:
First, we got reports last month that Starlink completed testing the viability of the Starlink platform in the Arctic, where the remoteness and harsh conditions limit the ability to use existing military satellites. The tests were done on behalf of the Pentagon and declared a success. The Pentagon now appears ready to use Starlink in the Arctic, a part of the world seen by the Pentagon as an important area of competition between the US, Russia, and China in coming years.
And then there's the report we got a few days ago out of Germany, where researchers have discovered a new application for Starlink: passive radar detection. The idea is to use the emissions from the satellites themselves as a kind of passive radar system. And not only is such a system difficult to detect, but it can potentially detect stealthed objects.
We don't have to ask whether or not Starlink will be used someday for military purposes. It's already been used by Ukraine extensively, hence Russia's warning about the legal right to treat it as a military target. So when we're asking how Starlink's status as a military platform might play out, keep in mind that it's not really a matter of “if” but “when” there's some sort of military attempt to neutralize Starlink:
“The testing suggests that Starlink has the potential to become a crucial asset in what’s becoming an increasingly important area of competition with Russia and China, which have both sought to expand their influence in the Arctic. But the region’s rough climate and remoteness limit communications through existing military satellites.”
A crucial asset for the “area of competition” with Russia and China in the Arctic. That's how the Pentagon described Starlink following a series of tests in the region, making it one of the growing number of military services now provided by SpaceX to the Pentagon:
It's one of the latest military applications for Starlink that we're learning about, but not the latest. That prize goes to the following report about a new method developed by researchers in Germany to use the Starlink satellite cluster to conduct passive radar detection. It sounds like the idea is to use the electromagnetic emissions from the satellites themselves as a kind of radar to detect objects. The system involves just two antennas: a high-gain reference antenna to track a Starlink satellite and copy its signal, and a second surveillance antenna pointed towards the area of interest where you're trying to track targets. Not only does this technique allow for the passive detection of objects — making it more difficult for adversaries to throw up interference radiation — but it potentially allows for the detection of stealth objects. Which could end up making Starlink — and any other satellite cluster with similar capabilities — extremely valuable in the stealthed aerial battlefields of tomorrow:
“The opportunistic use of existing transmitters from the Starlink network opens the door for covert operation that is robust against jamming and better at detecting stealth targets, according to the report.”
Better detection of stealth targets. And not just better detection but passive covert detection. That's the promise of this new application of the Starlink constellation of satellites. Which, of course, makes this technology something that will be of even greater interest in future conflicts.
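The researchers' actual processing chain isn't spelled out in this report, but the general principle of passive bistatic radar described above — a reference antenna copying the satellite's downlink, a surveillance antenna listening for faint echoes, and a cross-correlation between the two channels — can be sketched in a few lines. Everything below (sample rate, delay, echo strength) is invented for illustration, and real systems also have to suppress direct-path leakage into the surveillance channel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only
fs = 1_000_000          # sample rate, 1 MHz
n = 100_000             # samples in one processing interval
true_delay = 250        # echo delay in samples (set by target geometry)
echo_gain = 0.01        # echoes are far weaker than the direct signal
noise_level = 0.5

# Reference channel: the direct Starlink downlink copied by the reference antenna
# (modeled as a noise-like waveform, since the real signal structure is irrelevant here)
reference = rng.standard_normal(n)

# Surveillance channel: a faint, delayed echo of that waveform plus receiver noise
# (direct-path leakage, clutter, and Doppler are all ignored in this toy version)
surveillance = noise_level * rng.standard_normal(n)
surveillance[true_delay:] += echo_gain * reference[:n - true_delay]

# Cross-correlate the two channels; a peak at lag k indicates an echo delayed by k samples
max_lag = 1000
corr = np.array([np.dot(reference[:n - k], surveillance[k:]) for k in range(max_lag)])

detected = int(np.argmax(np.abs(corr)))
extra_path_km = detected / fs * 3e8 / 1000
print(f"strongest echo at lag {detected} samples (~{extra_path_km:.1f} km of extra bistatic path)")
```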
And note how the ability to utilize Starlink in this way appears to be directly dependent on the number of these satellites in orbit. This is a good time to recall that there are currently a little over 5,000 Starlink satellites in orbit, which is just 1/8th of the long-term plans for around 42k. In other words, this passive radar technique is poised to become a lot more powerful over time as more and more constellations of satellites are put into orbit:
And note how part of the appeal of this form of radar is the passive nature that makes it difficult to detect by adversaries which, in turn, makes it difficult for them to impose interference radiation to obscure the radar signal. And while that may be true, it's not like these satellite clusters are immune to physical military attacks. Sure, on one level, they are robust against military attacks in the sense that knocking a handful of satellites out of commission won't disrupt the overall network. But that's assuming such attacks don't end up triggering the kind of Kessler Syndrome scenario that could end up taking down virtually all of the satellites in low orbit (and maybe a lot of other orbits):
Don't forget: the more Starlink is used for military applications, the likelier it is that it will become a military target someday. That's how it works. And not only is it becoming more and more militarily useful with time, but also more useful with the raw number of satellites in orbit. We'll see if Starlink makes it to its goal of 42k satellites before that attack happens, but it really is just a matter of time at this point. You can't keep building a military platform that is both increasingly capable and increasingly fragile and vulnerable, spanning the globe, and assume everything is going to go fine. Even though those appear to be the prevailing assumptions at the moment.
There's so much data they don't know how to deal with it. It's a 'good' problem for the US intelligence establishment to have, at the end of the day, but still a problem. A problem with potential solutions in development according to a recent Bloomberg report on the growing interest by US spy agencies in the efficient exploitation of all the 'open source' data now available. As the article describes, the explosion of commercially available data — whether it's satellite data of a region in China or social media about anyone on the planet — has simply overwhelmed spy agencies' ability to efficiently find the data that matters the most. It's a big enough problem that the Office of the Director of National Intelligence (ODNI) hired cyber expert Jason Barrett to help the US intelligence community develop a joint solution.
Now, as the article also notes, it's not like the US government hasn't relied on commercially available open source information to arrive at conclusions in the past, with the use of such data regarding China's alleged genocide of the Uyghur population of Xinjiang as a prominent example of how the US would use this kind of intelligence. Of course, as we've seen, those open source 'intelligence' campaigns focused on China have been based on seriously questionable 'analysis' provided by private individuals like Adrian Zenz and uber-hawkish think-tanks like the Australian Strategic Policy Institute (ASPI). So it sounds like the US intelligence community would like to amplify those kinds of 'open source'-based accusations by delving even further into publicly available data. In fact, the National Geospatial-Intelligence Agency created the Tearline Institute in 2017 — a collaboration with think-tanks and universities — for the purpose of tracking China's “Belt & Road” initiative. The head of Tearline, Chris Rasmussen, is now pushing for a new independent intelligence agency focused just on open source. Rasmussen hopes that such an agency would provide policymakers with daily briefings akin to the presidential daily briefings.
But there’s another potentially massive new development in this area: the CIA has been working on its own version of a ChatGPT tool specifically for the purpose of sifting through open source data. The plan is to make the tool available to all 18 US intelligence agencies. The vision is to have a tool that pushes the most ‘relevant’ data to human analysts for further review. So the solution the US intelligence community appears to have arrived at for dealing with the avalanche of data is some sort of ChatGPT AI blackbox tasked with doing the first pass of the data and determining what merits further investigation.
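The CIA's actual tool is not public, so any code can only gesture at the general idea. Here is a deliberately crude sketch of the 'first pass' concept described above — scoring incoming open-source items against analyst priorities and surfacing the best matches for human review — using simple bag-of-words cosine similarity in place of a large language model. All of the priorities and items below are invented:

```python
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Crude bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical analyst priorities and a stream of open-source items (all invented)
priorities = ["satellite launch activity near contested ports",
              "new export controls on semiconductor equipment"]
items = ["local news report describes unusual port traffic and a satellite launch window",
         "blog post reviews a new smartphone camera",
         "trade journal notes revised export controls on chip-making equipment"]

# Score each item against its best-matching priority and surface the top matches for review
scored = sorted(
    ((max(cosine(vectorize(item), vectorize(p)) for p in priorities), item) for item in items),
    reverse=True)
for score, item in scored:
    print(f"{score:.2f}  {item}")
```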
Of course, when we're talking about the exploitation of massive publicly available datasets by the US government, there are obviously going to be serious privacy concerns. And that would potentially include ChatGPT-like tools, which are derived from massive data sets that implicitly include all sorts of personal information. And yet, at the same time, it's not hard to imagine that the reliance on AI will be used as a kind of privacy assurance. After all, it won't be humans looking over the raw data going into these systems. It will be blackbox AIs that no one really understands. It's all a reminder that we should probably add 'increasingly sophisticated surveillance states' to the list of areas that will be transformed with ChatGPT-like AI technology:
“The challenge with other forms of intelligence-gathering, such as electronic surveillance or human intelligence, can be secretly collecting enough information in the first place. With OSINT, the issue is sifting useful insights out of the unthinkable amount of information available digitally. “Our greatest weakness in OSINT has been the vast scale of how much we collect,” says Randy Nixon, director of the CIA’s Open Source Enterprise division.”
Our greatest weakness is the vast scale of data in our possession. That's quite an admission from the CIA! But that's apparently the big problem in need of a solution at not just the CIA but the entire US intelligence establishment. Hence the appointment of Jason Barrett as the US intelligence community's new 'open source' coordinator. Also, hence the growing privacy concerns. After all, the more this vast trove of data is exploited, the more privacy violations will inevitably transpire. It's kind of unavoidable. And note the assurances from the NSA regarding these privacy concerns: that the NSA won't collect location data for phones known to be used inside the United States. As we're going to see below, the collection of such location data isn't really necessary anymore. It's all for sale commercially:
But beyond the privacy concerns is the potential weaponization of open source data. In particular, weaponized data that delivers policy-makers a desired conclusion, regardless of the underlying veracity of the analysis. So it should be particularly alarming to see open source analysis of satellite imagery from Xinjiang, China, cited as an example of the kind of application for open source data they are trying to facilitate. As we've seen, the years-long public relations campaign pressing genocide claims against the Chinese government has been basically entirely dependent on the highly questionable open source 'analysis' of figures like Adrian Zenz and non-profit think-tanks like the Newlines Institute and the uber-hawkish Australian Strategic Policy Institute (ASPI). And now we're hearing from Chris Rasmussen, the creator of the Tearline Institute — a collaboration between the National Geospatial-Intelligence Agency and universities and non-profits — about how he wants to see a new separate open source intelligence agency that delivers daily reports to lawmakers. In other words, we should probably expect these strategic 'open source' influence operations to become a lot more influential with policy-makers:
And then we get to this intriguing detail: the CIA has been working on a ChatGPT-like tool to help analysts sift through the vast trove of data:
And as the following article notes, the CIA’s version of ChatGPT won’t be limited to the CIA. It’s going to be available to all 18 US intelligence agencies. A tool that promises to sift through the vast streams of data and push the most important nuggets to humans for review:
“The AI tool will be available across the 18-agency US intelligence community, which includes the CIA, National Security Agency, the Federal Bureau of Investigation and agencies run by branches of the military. It won’t be available to policy makers or the public. Nixon said the agency closely follows US privacy laws.”
Is the CIA's ChatGPT intelligence tool providing policy-makers with skewed data? Maybe, but policy-makers aren't exactly going to be in a position to know since they won't be getting access. Although, who knows, that might be for the best given the incredible damage a motivated lawmaker could do with a tool that potentially facilitates the weaponization of intelligence and the generation of desired conclusions. But with the creation of a tool that just feeds intelligence to analysts and appears to decide what's important and what isn't, someone is presumably shaping the tool's priorities. Which raises all sorts of questions about the kind of AI-driven 'group think' risks that such a tool poses:
And then we get to the privacy concerns associated with the massive datasets that go into the creation of these kinds of ChatGPT-like tools. Along with an admission from the Office of the Director of National Intelligence about how US intelligence agencies are turning to the unregulated commercial space to acquire all sorts of privacy-violating data, including phone location data. In addition to being a reminder of the woefully unregulated commercial data brokerage industry that continues to thrive and grow in the US, it's also a reminder that the kinds of ChatGPT-like tools developed by the CIA probably aren't just going to rely on text data but will also incorporate a much broader scope of data like location data and any other kind of signals intelligence. Which is another way of saying that these ChatGPT tools will potentially double as mass surveillance tools into which all of the different information streams are fed. With a kind of 'blackbox' analysis that no human actually understands churning through all that data:
So how tempting will it be for policymakers and the US intelligence community to now brush off the privacy concerns associated with the creation of an increasingly sophisticated surveillance state with assurances that it's only AIs that have raw access to these troves of data? Because it's not hard to imagine “It's only ChatGPT, not a human, looking through all your most intimate details” actually being used as a kind of public assurance. On one level, it really is better to imagine it's just an AI looking over all these details instead of a human. But at the same time, this isn't just some random AI. This is going to be an AI designed to sift through all of the details available on each one of us and arrive at a conclusion about the risks we pose. Over and over. Day after day. As more and more information is gathered. In that sense, it's kind of super extra creepy to imagine such a system deployed on a populace.
It's part of what's going to be grimly interesting to see play out: we're all getting a new Big Brother. It's an AI Big Brother that's probably going to know us better than we know ourselves in some respects. It won't judge us personally with all it knows about us. It's not a person. But it will still have the ability to assign each one of us 'potential danger' digital flags. Will humans find that reassuring? Or creepy as hell? Either way, we're going to find out. Probably creepily.
It's not an uncontrollable problem yet. And hopefully it never will be. But hope isn't the best plan, especially in the face of increasingly risky gambles. And yet, as we're going to see, when it comes to humanity's plans for dealing with uncontrollable chain reactions of orbital space junk — the Kessler syndrome nightmare — the plans at this point appear to consist of hope that something viable is going to be developed in the future.
It's not pure hope. A growing number of private companies are jumping into the space debris removal industry with a variety of different approaches under development. But it doesn't sound like there's been too much success so far. In fact, the European Space Agency experienced a setback of its own last year after a piece of rocket debris that was planned to be deorbited as part of a 2026 test of the ClearSpace startup's technology ended up getting struck by more debris, splitting it into pieces. It was the Kessler syndrome chain reaction happening in real-time.
Other planned deorbiting technologies include earth-based lasers that can potentially slow the debris enough to force it out of orbit and into the atmosphere. Which is also the kind of technology that could be great for attacking rival satellites, which is a reminder that space cleanup technology is often going to have dual use military applications. In other words, we should fully expect the growing interest in space debris removal technology to follow the growing trend of the militarization of space. And, of course, it's hard to imagine a scenario more likely to create some sort of space debris catastrophe than space-based warfare. That increasing militarization of space as part of the necessary development of space debris removal technology is part of the story we're looking at here.
But as we're also going to see, the need for effective space debris removal technologies isn't just something that we can be confident we'll need decades from now. There is an immediate need. At least potentially, given our orbital state of affairs. At least that's what we can infer from a troubling story from back in February about plans by SpaceX to deorbit 100 Starlink satellites due to a 'flaw' discovered in these early generation Starlink satellites. According to the company, the issue caused at least 17 Starlink satellites to become “currently non-maneuverable,” increasing the “probability of failure.”
Keep in mind that Starlink doesn't operate by putting all of its satellites in independent orbits. There are too many for that. Instead, Starlink satellites rely on “automated orbital adjustments” to avoid collisions. So when we learn that 17 of them already lost the ability to maneuver, that kind of suggests 17 of them effectively became large pieces of orbital debris, albeit debris that can apparently still be deorbited.
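SpaceX hasn't published the details of its automated collision-avoidance system, but the basic screening logic implied above — estimate the closest approach between a satellite and another object, and plan a maneuver if the miss distance falls below some threshold — can be sketched roughly like this. The state vectors, the straight-line relative-motion approximation, and the 1 km threshold are all invented for illustration:

```python
import numpy as np

def closest_approach(r1, v1, r2, v2):
    """Time and distance of closest approach assuming straight-line relative motion
    over a short screening window (a crude stand-in for real orbit propagation)."""
    dr = np.asarray(r2, float) - np.asarray(r1, float)   # relative position, km
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity, km/s
    t = 0.0 if np.dot(dv, dv) == 0 else max(0.0, -np.dot(dr, dv) / np.dot(dv, dv))
    return t, float(np.linalg.norm(dr + dv * t))

# Invented state vectors for two objects in the same altitude band
r_sat, v_sat = [6921.0, 0.0, 0.0], [0.0, 7.6, 0.0]       # a maneuverable satellite
r_deb, v_deb = [6921.0, 40.0, 0.5], [0.0, -7.6, 0.0]     # a piece of debris, head-on-ish

t, miss_km = closest_approach(r_sat, v_sat, r_deb, v_deb)
THRESHOLD_KM = 1.0   # illustrative screening threshold, not SpaceX's actual value
print(f"closest approach {miss_km:.2f} km in {t:.1f} s "
      f"-> {'plan a maneuver' if miss_km < THRESHOLD_KM else 'no action'}")
```

The point of the sketch is simply that the whole scheme presumes the satellite can actually maneuver; a “currently non-maneuverable” satellite can still be flagged by this kind of screening, but it can no longer do anything about it.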
The deorbiting process was slated to take around six months, so Starlink is presumably about halfway through that process by now. No accidents reported yet. But that lack of accidents isn't exactly something to celebrate given that we're dealing with a scenario that's just a matter of time. You can't keep flooding the planet's orbit with more and more objects and expect that accidents won't happen. Or war won't happen. It's a 'when, not if' situation. We know growing space debris is an inevitability. What isn't yet inevitable is the technology to effectively clean it up:
“I’ve asked the company for further details and will update this post if I hear back, but based on the description and context, it seems likely that the “failure” in question would mean a loss of control. Seventeen Starlink satellites are “currently non-maneuverable,” but SpaceX did not say whether this was due to the same issue as the 100 being de-orbited.”
A potential loss of control is reportedly the culprit behind the preemptive deorbiting of these 100 Starlink satellites. That's the plan that was initiated back in February, with the idea being that the process would take place over the following six months as the satellites are 'nudged' out of orbit. Except, with 17 of those satellites already classified as “currently non-maneuverable”, that raises some questions about how smoothly this process is going to go. Especially since it's Starlink's satellites that will also “take maneuver responsibility for any high-risk conjunctions.” It's a bit of a puzzle: satellites are being taken out of orbit over concerns about a loss of control but it's these same satellites that are expected to maneuver out of the way in case there's a potential collision:
So let's hope this process happens without a hitch. Because don't forget about one of the inherent risks associated with an increasingly populated orbital space: it potentially only takes one nasty incident to set up the Kessler syndrome chain-reaction. Just one event to act as the catalyst. That's how chain-reactions work. With Starlink satellites relying on “automated orbital adjustments” to avoid collisions and some of those satellites already in a “currently non-maneuverable” status, what are the odds that we're not going to have a lot more problems with 'non-maneuverable' satellites in the future that can't be preemptively deorbited?
And that growing challenge brings us to the following article about the growing excitement around an industry seen as both vital for the future of space and also potentially quite lucrative: the space junk clean up industry. With a quintupling of satellites over just the last 5 years alone, all expectations are that growing volumes of space junk are going to have to be removed sooner or later. As a result, a variety of technologies are being developed, ranging from orbital tracking systems that can deliver warnings to devices that can be attached to space junk to help drive them out of orbit. Earth-based lasers that can slow the objects down and potentially drive them out of orbit are also under development.
Of course, deorbiting space trash is easier said than done, with some companies’ experiments already resulting in collisions and even more space trash. But there’s another more basic problem: it’s unclear who is responsible and who will pay. It’s the classic “tragedy of the commons”, literally looming over us all:
“The number of active satellites has roughly quintupled over five years, driven largely by SpaceX’s Starlink constellation, and will grow as everyone from Amazon.com Inc. to Chinese state-owned enterprises target low-Earth orbit. The Federal Communications Commission has applications for more than 50,000 satellites, Chairwoman Jessica Rosenworcel said in a speech on March 18.”
A quintupling of satellites over just the past 5 years alone. And all expectations are that this trend is only going to continue as more and more companies and countries begin constructing satellite clusters of their own. Starlink isn’t going to be allowed to have a satellite cluster monopoly. It’s not hard to see why there’s so much interest in space debris removal. What is more difficult to see at this point is how they’re actually going to do it. As companies have discovered, this isn’t an easy business, with rocket debris planned to be used for a test of ClearSpace’s own cleanup-satellite being struck by more debris and shattering into pieces:
Then there’s the question of who is actually going to pay, which is a question left all the more ambiguous thanks to international laws that require getting permission first to even attempt to remove a piece of debris. Everyone is put at risk by this debris, but it’s unclear who is responsible for paying for it. It’s a growing market without an actual marketplace of clear buyers:
And then there's the obvious dual use nature of much of this technology. After all, if you can take out a piece of space debris, you can presumably take out functioning satellites. Similarly, any collision early-warning systems potentially double as a satellite military attack early-warning system too. It's a reminder that these debris removal technologies are potentially going to be deeply intertwined with the growing trend of the militarization of space:
Similarly, ground-based lasers that can cause a piece of debris to fall out of orbit would be incredibly useful for cleaning orbits if they work. But, of course, if they work at de-orbiting debris, they presumably could potentially deorbit a functioning rival satellite. Don’t be surprised if anti-laser technologies, and other technologies designed to protect against deorbiting methods, end up getting incorporated into highly sensitive military satellites:
And with simply dropping objects out of orbit into the atmosphere as the method of choice for disposing of this debris, note the predictions for the number of objects expected to plummet to earth as a consequence: by 2035, the expectation is someone is going to be injured or killed by falling space debris once every couple of years. Which is the kind of statistic that suggests we should be seeing debris falling on human infrastructure, but not necessarily killing someone, a lot more frequently:
And, again, keep in mind that when we are talking about Kessler syndrome, we are talking about a chain reaction. All it takes is one bad incident to set things in motion. So when we continue to populate earth’s orbits with more and more satellites, it’s more or less an inevitability that some sort of incident is going to happen. What isn’t guaranteed at this point is whether or not adequate deorbiting technology is going to be available in time. That’s part of the context of this growing space junk cleanup industry: it’s going to be an absolutely vital industry in the future, whether we actually have it ready to go or not. Fingers crossed:
Finally, note the words of wisdom from Abhishek Tripathi, director of mission operations at the University of California Berkeley's Space Sciences Laboratory and veteran of NASA and SpaceX, who warns that the best option is to avoid creating space debris in the first place. It's an approach that is obviously incompatible with filling earth's orbit with multiple mega-constellations and therefore an approach we're not going to take. But it's the approach we would have taken if we were a wiser species. Oh well:
And in more recent news, SpaceX announced the launching of 23 more Starlink satellites into orbit. Let's hope these newer satellites aren't harboring any yet-to-be-discovered flaws that turn them into more orbital debris. Or if that does happen, it happens long after a space debris removal industry has been thoroughly developed. An industry in its infancy that could take years, maybe decades, to become a reality. Hopefully that industry arrives sooner rather than later. And in the meantime, we'll be sure to keep putting more and more stuff in an increasingly crowded space.
Elon Musk is at it again. He just can’t stop showing the world his fascist sensibilities. This time, he decided to retweet an ‘Alt Right’-style online rant from 4Chan about how only high-testosterone males should be allowed to vote. The gist of the rant is that only high testosterone males, and in particular those on the autistic spectrum, are capable of truly independent thought and therefore that’s the only group that should be involved in decision-making. It was the kind of rant one might expect to hear at one of Charles Haywood’s Society for American Civic Renewal (SACR) gatherings. Musk replied to the rant with the statement “interesting observation”.
It was, on one level, a now-typical form of trolling we've come to expect from Musk. Part of the ongoing normalization of fascist thought he's embraced since becoming the owner of Twitter. But part of what made this story extra disturbing is that it seemed to be Musk's rebuttal to an opinion piece by former US Labor Secretary Robert Reich in The Guardian arguing that Musk is “out of control” and listing options the US government can take to help rein him in. It turns out the 4Chan commenter who wrote that rant has a history of defending Musk and had been attacking Reich the day Musk decided to retweet them. So in response to a column calling Musk “out of control”, Musk retweets a fascist who openly loves him.
But it’s also worth noting one of the arguments put forward by Reich in his piece. As Reich observes, the risks of doing business with Musk aren’t limited to his far right personality. SpaceX is effectively already a monopoly in the rocket launch sector. A monopoly built with massive US government subsidies and with large and growing numbers of US government contracts. Isn’t this exactly the time anti-trust steps should be taken?
And that brings us to the latest development in that deepening relationship between SpaceX and the US government: the US Navy recently announced that it's planning on incorporating Starlink with its surface vessel fleet, bringing high-speed internet connectivity to ships anywhere in the world, ships that had previously been reliant on relatively slow and aging traditional satellites.
As we've seen, Starlink has a number of potential military applications. The war in Ukraine, in particular, has seen Starlink deployed in a number of ways, especially when it comes to guiding drones. And then there's the fact that German researchers learned how to turn Starlink satellites into a passive radar system.
Now, based on the US Navy's statements it doesn't sound like Starlink will be used for enhanced war fighting capabilities. But it's not hard to imagine those applications quietly being developed anyway. In other words, this declaration of the US Navy's use of Starlink is one more reason we can expect Starlink to eventually become a military target, with all of the disastrous Kessler syndrome consequences that could follow. So while there's plenty of reason to be highly perturbed by Musk's decision to rebut Reich's column with a retweeted 4Chan piece about ending democracy, it's important to keep in mind that Musk's relationship with the US military is only deepening at this point:
““Interesting observation,” Musk said, in response to the post.”
The idea that only high-testosterone males — in particular autistic high-testosterone males — should be allowed to vote because only they are capable of truly independent thought is an “interesting observation”, according to Elon Musk. So interesting, apparently, that Musk decided to retweet a screenshot of the rant from 4Chan. Musk just can't seem to contain his 'Alt Right' sentiments anymore.
But it’s the context of Musk’s decision to retweet these ‘interesting observations’ that makes it extra disturbing: he seemed to be retweeting these ideas in response to a piece in the Guardian by Robert Reich calling Musk “out of control” and in need of being stopped. This is a good time to recall how the notion that ‘high-testosterone males’ should seize control of society sure has a number of parallels with Charles Haywood’s Society for American Civic Renewal (SACR). So in an apparent response to a column arguing he’s “out of control”, Musk seemingly ‘jokes’ about how society needs to be run by a self-appointed elite:
It’s the kind of antics we’ve now come to expect from Musk. But it’s worth taking a closer look at that piece by Robert Reich that seems to have triggered Musk so effectively. Because while Reich brings up many of the critiques of Musk that we’ve long heard about his embrace and amplification of far right content, Reich brings up another critique we don’t hear very often but one that really should weigh on US lawmakers. The kind of critique that one could apply to Palantir or any other company offering key national security services despite being run by individuals with palpable fascist sentiments: Musk’s SpaceX is effectively a monopoly at this point. A government subsidized monopoly that should be of increasing concern to the US government given the ‘Alt Right’ ideology of its owner:
“Meanwhile, SpaceX is cornering the rocket launch market. Its rockets were responsible for two-thirds of flights from US launch sites in 2022 and handled 88% in the first six months of this year.”
There’s no denying that SpaceX is the dominant player for rocket launches. With the US government as one of the biggest customers. What kind of risks do Musk’s increasingly open fascist sentiments pose for governments planning on relying on his companies’ services now that SpaceX is effectively a monopoly?
And that brings us to the following piece in Wired about the latest development in the US government’s relationship with Starlink. It’s the kind of development we should expect: The US Navy is planning on rolling out Starlink access for its fleet of surface vessels.
Interestingly, while the promise of high-speed internet for surface vessels across the globe is described as a significant upgrade in internet connectivity for the US Navy, it doesn't sound like there are plans to use this connectivity for enhanced war-fighting capabilities. Instead, we're told that this will really just be used for things like entertainment or communication with loved ones back home.
Now, as the article also notes, the Navy isn't the only branch of the US military interested in SpaceX's satellite clusters. The US Space Force and Army both have Starlink contracts and SpaceX is already building a network of hundreds of specialized Starshield satellites for the National Reconnaissance Office. And then it brings up that now-infamous episode where Starlink refused to extend services to Crimea to assist in a Ukrainian naval drone attack on the Russian fleet. An incident highlighted as an example of Elon Musk's personal volatility and how it might impact the reliability of these services. Of course, had Musk approved that use of Starlink, he would have turned it into a major military target and invited a Russian attack. It's a reminder that Elon Musk's fascist sensibilities aren't the only major problem with this story:
“In a now deleted press release from the Naval Information Warfare Systems Command (NAVWAR), the Navy recently announced that it is experimenting with bringing reliable and persistent high-speed internet to its surface warships. The connectivity comes via a new system developed under its Sailor Edge Afloat and Ashore (SEA2) initiative, which uses satellites from the Starlink network maintained by Musk’s SpaceX and other spaceborne broadband internet providers to maintain a constant and consistent internet connection for sailors—a system that NAVWAR says has “applications across the entire Navy.””
The US Navy is going all in on Starlink. It’s not exactly a surprise. As we’ve seen, the US military has been eagerly adopting Starlink for a variety of applications including in the Arctic.
And as the article notes, while it doesn't sound like Starlink will be directly used to enhance naval war fighting capabilities, SpaceX is building out a special military network of Starshield spy satellites for the National Reconnaissance Office. And that's presumably just a start. Plus, it's not like there won't ever be applications for Starlink directly in combat. For example, German researchers learned how to turn Starlink satellites into a passive radar system. And then there's Ukraine's use of Starlink for guiding drones. The number of potential military applications for high-speed internet available around the globe is only going to grow. So while we're told this isn't really going to be used directly in warfare, it's hard to imagine that remaining the case in the long-run:
And, of course, all of these potential military applications bring us back to the giant disaster waiting to happen: turning Starlink into a military target, with all the “Kessler Syndrome” consequences that could follow. So when we see all of this fretting over Elon Musk’s refusal to allow the use of Starlink in a Ukrainian naval drone attack on the Russian fleet in Crimea, keep in mind that, had he allowed Starlink to be used in that manner, we could be looking at Kessler Syndrome already today. In other words, the more we’re seeing Starlink getting embraced by militaries, the more we should assume Kessler syndrome is just a matter of time:
As we can see, while the things Elon Musk says and does are often quite troubling, so are the things his companies are going to be asked to do on behalf of the Pentagon. Again, had Musk agreed to Starlink's use in that attack on Crimea, we could already be living in the 'Kessler syndrome' world of broken satellites today. Take a moment and think about what a perilous situation this is: had Musk not been as erratic as he is, the situation could have been worse. That's not an excuse for Musk's fascist sentiments. But it's a reflection of how broken the situation is. Musk's fascism is obviously the biggest problem at the center of this overall 'Musk situation' with the US government. But he's not the only giant problem here.
It’s not just a catastrophic scenario but an inevitability. Kessler’s syndrome is a matter of when, not if. That was the conclusion NASA scientist Donald Kessler arrived at back in 2009 when he observed that modeling results had concluded that “the current debris environment is “unstable”, or above a critical threshold, such that any attempt to achieve a growth-free small debris environment by eliminating sources of past debris will likely fail because fragments from future collisions will be generated faster than atmospheric drag will remove them.” The debris environment was already unstable in 2009. And here we are, 15 years later, in an unrestrained race to fill the earth’s orbit with as many satellite clusters as humanly possible.
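Kessler's 'unstable environment' point can be illustrated with a toy model: if collisions among orbiting objects generate fragments faster than atmospheric drag removes them, the population grows even with no new launches. The coefficients below are invented purely to show the threshold behavior; real debris-environment models are vastly more detailed:

```python
# A toy, discrete-time cascade model. All coefficients are invented for illustration;
# this is not NASA's modeling, just a demonstration of the instability threshold idea.

def simulate(debris0: float, years: int,
             collision_rate: float = 1e-9,        # expected collisions per object-pair per year
             fragments_per_collision: float = 500.0,
             drag_removal_fraction: float = 0.02  # fraction of objects deorbited by drag per year
             ) -> float:
    debris = debris0
    for year in range(1, years + 1):
        pairs = debris * (debris - 1) / 2
        created = collision_rate * pairs * fragments_per_collision
        removed = drag_removal_fraction * debris
        debris += created - removed
        if year % 10 == 0:
            print(f"year {year:3d}: ~{debris:,.0f} tracked objects")
    return debris

# Below the instability threshold the population slowly decays; above it, growth runs away.
simulate(debris0=20_000, years=50)
print("---")
simulate(debris0=100_000, years=50)
```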
That's all part of the context of a rather troubling story we just got about a satellite incident. No, it didn't involve the SpaceX Starlink satellite cluster, thankfully. Instead, it was more bad news for Boeing: a Boeing-built geostationary satellite just suddenly exploded. No one knows why. We just know it was a high energy explosion, with all the resulting space junk one should expect from an exploding satellite. The US Space Force is currently tracking 20 chunks of debris while the Russian space agency has reportedly detected up to 80 chunks. It's not yet known if these chunks pose a risk to other satellites. But, at a minimum, it nudges us that much closer to Kessler's syndrome. A syndrome that, as Kessler observed in 2009, is already here, slowly getting underway.
So should we suspect this suddenly exploding satellite was a consequence of Kessler's syndrome? Not yet. There's simply too little information. And being a geostationary satellite in a high orbit, we wouldn't necessarily expect it to be the victim of this kind of random event. In other words, it would be pretty bad luck if true. And yet, as we're going to see, this isn't the first time one of these Boeing-built geostationary satellites experienced a catastrophe. Back in 2019, another one of these satellites, one of 6 built by Boeing for the company Intelsat, also went out of commission following an incident that was attributed to either a wiring flaw or a meteoroid strike. And if indeed it was a 'meteoroid strike', we have to ask, was that 'meteoroid' actually a piece of space junk? Are we seeing early symptoms of Kessler's syndrome, where these low probability events get steadily more probable?
But then there's another increasingly ominous part of this story: with Donald Trump on the cusp of returning to the White House, the Kessler syndrome timeline is about to get turbocharged thanks to the reality that Elon Musk is now joined at the hip with Trump. In fact, Starlink was initially launched under Trump's first term, over the complaints of rivals that such a platform was going to clutter the skies with junk. Get ready for that clutter to jump significantly under a second Trump term.
But even if Trump doesn’t somehow end up back in the White House, the reality is that Starlink has already become a major Pentagon contractor. In fact, the company is on track to build the first real-time total surveillance network with the ability to collect high resolution imagery of the entire planet. Visual Total Information Awareness. And thanks to SpaceX’s launch capacity, Starlink could be in such a dominant position in the satellite industry that it could effectively drive the competition out of business. Musk is on track to being a satellite monopolist, fueled by massive Pentagon contracts. And at the core of this monopoly is a network of, what will be, tens of thousands of satellites, dwarfing the current number of objects in the skies. An explosion of objects in orbit, most with a military mission.
So while we don’t yet know what caused the Boeing satellite to suddenly explode into dozens of pieces, we can be confident Kessler’s Syndrome is a little bit closer to its inevitable fruition. But don’t get too worried about this one exploding satellite. It’s the massive unchecked explosion of objects in orbit, and in inevitable consequences of such hubris, that you should be worried about:
“The U.S. Space Force, which confirmed the breakup, said it is tracking 20 associated pieces. The Air Force branch said it is conducting routine assessments but has found no immediate threats. Roscosmos, Russia’s space agency, on Tuesday said it had found more than 80 fragments from the destroyed satellite.”
20 more chunks potentially capable of starting a chain reaction, according to the US Space Force. 80 more chunks according to Roscosmos. Chunks generated by a high-energy instantaneous explosion. We don’t know how many chunks of high velocity space junk were created by this explosion for sure or why it happened, but we can be very confident Kessler’s Syndrome just got a boost. Closer and closer to the inevitable:
And note how this wasn't the only one of these Intelsat satellites to experience some sort of incident. Another satellite experienced a failure that was attributed to either a wiring flaw or a meteoroid. Keep in mind that a piece of space junk would behave a lot like a meteoroid when it comes to slamming into satellites:
And, of course, this is all on top of the growing list of Boeing related incidents, which now includes the failed Boeing Starliner episode that has resulted in the US being even more reliant on SpaceX for NASA’s 2025 Commercial Crew Program:
It’s obviously not a great story for Boeing. And, on the surface, a pretty great story for SpaceX. And yet, let’s not ignore the elephant in the room here: was this a space junk incident? On one level, being a geostationary satellite should make space junk hitting this satellite a pretty low-probability event. But on the other hand, this is the second of these satellites to experience some sort of catastrophe in the last five years, with the previous incident potentially attributed to a meteoroid. Which is effectively the same as being potentially attributed to space junk.
Did both of these Intelsat geostationary satellites experience space junk events? We have no idea, but the possibility is a grim one. Which brings us to the following update on Starlink’s prospects. Prospects poised to explode under a second Trump administration. As the article reminds us, not only is Musk increasingly close to Trump politically at this point and poised to even serve in some capacity in his administration, but it was the first Trump administration that gave the green light for Starlink in the first place in 2018. At a time when many were warning that such a system could end up cluttering the skies. Flash forward to today, and not only is SpaceX a large and growing Pentagon contractor — currently building the largest network of spy satellites on the planet — but SpaceX’s share of the satellite industry is so dominant that observers warn rivals could be driven out of business. In other words, SpaceX is on track to becoming a major military contractor monopoly. In control of a surveillance network that could, for the first time ever, provide complete high-resolution real-time surveillance of the entire planet. It’s like a visual version of the Total Information Awareness program, owned and operated by Elon Musk, America’s preeminent fascist oligarch. Or at least that’s going to be the case until Kessler’s syndrome inevitably takes hold and brings the whole thing down:
“Musk’s shift to supporting Trump appears to be driven largely by conviction on social issues, according to people familiar with him who spoke on the condition of anonymity to discuss his thinking. But the tech executive’s business empire also stands to benefit if Trump wins the election — potentially by a far larger amount than the billionaire has splashed out to support Trump’s campaign.”
Musk’s motives for backing Trump aren’t a grand mystery. The potential payoffs dwarf the contributions Musk has made to Trump’s campaign. Payoffs in the form of government contracts that go beyond Starlink or SpaceX and could include Tesla or maybe even Twitter/X. But beyond those direct financial payoffs is the possibility that Trump will appoint Musk to lead a “government efficiency commission”. It amounts to the promise of some sort of undefined raw government power. It’s not hard to see what Musk finds so alluring about a second Trump White House. He’s going to be a big part of it:
But Musk likely isn’t just motivated by the future prospects for SpaceX under a second Trump administration. There’s also the reality that Starlink, which remains a highly experimental, unprecedented enterprise, was greenlit by the Trump administration. Of course Musk wants Trump back in the White House. All the zany schemes he can imagine will be approved, probably with major federal financing involved. If you think Starlink is a high-risk gambit, just wait:
But it’s not like there aren’t already big federal plans for Starlink. Military plans, with a special Pentagon version of Starlink, Starshield, already under construction for the National Reconnaissance Office. It’s the kind of massive Pentagon contract that’s only going to grow:
And if Starlink manages to build the world’s first real-time, high-resolution, full-earth surveillance system, not only will the Pentagon be more reliant on the company than ever, but Starlink could effectively crush the rest of the competition. Between the head start it already has and the launch capacity of SpaceX, the competition doesn’t really stand a chance:
Beyond all the military applications for Starlink are the potentially massive federal subsidies for consumer broadband. Spending poised to grow significantly thanks to the $42 billion Broadband Equity, Access, and Deployment Program launched under the Biden administration. It’s not hard to imagine Starlink getting a big part of those contracts. Especially under a second Trump administration:
Finally, we can’t ignore the reality that all of these grand ambitions are just building towards the inevitable. Kessler’s Syndrome isn’t just something that’s going to happen in the event of a catastrophe. It’s an inevitability as long as we keep launching things into space. A matter of when, not if. So when we see an apparently unchecked explosion of satellites, heavily subsidized by the Pentagon, for the purpose of building systems with both civilian and military applications, it’s also just a matter of time before these platforms become military targets, with all of the space junk debris potential that comes with that kind of scenario. So when we see Musk’s refusal to extend Starlink’s reach in order to allow Ukraine to launch a drone strike on the Russian Navy — which was absolutely the correct move by Musk in that instance, regardless of his personal motives — it’s a warning of another “when, not if” scenario on the horizon. Don’t forget that Russia could have treated Starlink as a legal military target under international law in that kind of scenario. It really was a kind of game of ‘Kessler’s chicken’ over whether or not Russia would be willing to risk Kessler’s syndrome while, legally, responding to the use of Starlink in this manner. A horribly irresponsible game of Kessler’s chicken:
It’s going to be grimly interesting to see how these federal contracts end up getting distributed under a second Trump term. Especially with Amazon as one of the other major competitors. But as Donald Kessler warned, the situation was already unstable back in 2009. Before Donald Trump gave Musk the green light to go wild. And all the green lights to follow.
With Elon Musk on track to become the world’s first trillionaire pretty soon now that he’s become Donald Trump’s new favorite billionaire, empowered to basically serve as a shadow president who can write himself whatever lucrative government contract he wants, the growing questions about the reckless nature of Starlink are blaring louder than ever. Musk is obviously going to have massive new government contracts involving SpaceX, both commercial and military, and it’s hard to imagine Trump reining him in at all. Space, or at least orbital space, is going to be Musk’s plaything for at least the next four years. All around the earth.
And that situation, of course, brings us back to the growing catastrophe in orbit: the Kessler’s syndrome space junk nightmare. A nightmare that was already a reality back in 2009 when NASA scientist Donald Kessler warned that the space junk situation had already reached an uncontrollable level, with future collisions creating more space junk at a faster rate than the existing space junk will burn up in the atmosphere. That was 2009, almost a decade before the first Starlink launch. And here we are with Starlink now comprising the majority of satellites in orbit and plans to launch tens of thousands more in coming years. Plans that have presumably accelerated significantly with Donald Trump’s win and Musk’s ascension to the role of Trump’s billionaire whisperer.
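To make that ‘creation outpaces decay’ dynamic concrete, here is a minimal toy sketch in Python. Every parameter value is an illustrative assumption rather than a real orbital-debris figure; the only point is the qualitative threshold Kessler described, where collisions scale with the square of the debris population (any piece can hit any other piece), so below a critical population the junk slowly clears while above it the cascade feeds on itself.

def simulate_debris(pieces, years,
                    collision_coeff=1e-7,       # collisions per year per (pieces squared) -- assumed
                    fragments_per_collision=1,  # trackable fragments created per collision -- assumed
                    decay_fraction=0.02,        # share of pieces dragged down and burned up per year -- assumed
                    cap=10_000_000):
    """Crude yearly balance: fragments created by collisions minus pieces removed
    by atmospheric decay. Collisions scale with the square of the population
    because any piece can strike any other piece."""
    for year in range(years):
        created = collision_coeff * pieces ** 2 * fragments_per_collision
        removed = decay_fraction * pieces
        pieces = pieces + created - removed
        if pieces > cap:  # growth is now self-sustaining
            return f"runaway after roughly {year + 1} years"
    return f"about {pieces:,.0f} pieces after {years} years"

# Critical population in this toy model: decay_fraction / (collision_coeff * fragments) = 200,000 pieces.
print("below threshold:", simulate_debris(100_000, years=100))
print("above threshold:", simulate_debris(400_000, years=100))

Note that even in the runaway case nothing dramatic happens in any single year; the population just keeps compounding, which is exactly the slow-boil logic discussed below.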
Now, as we’ve also seen, the risks of this explosion of satellites aren’t limited to a Kessler syndrome scenario where the future of space is skies littered with broken satellites that will eventually burn up in the atmosphere. These satellites are polluting the upper atmosphere too, with the aluminum oxide released as they burn up reacting with ozone and blowing holes in the ozone layer. It’s a new form of pollution that didn’t really exist before. And it’s pollution specifically associated with the one act that ‘cleans’ orbits: burning up satellites and junk in the atmosphere. We aren’t just creating a space junk nightmare. It’s a space junk AND ozone hole nightmare.
Now, in fairness, we should note one aspect of Starlink that could be a lot worse: the Falcon 9 and Falcon Heavy rockets that are launching all of these satellites into orbit use a form of liquid fuel that is far less polluting to the atmosphere than the solid fuel of traditional rockets. That’s great. But while parts of the rockets are returned to earth for reuse, not all of the rocket is reused. Instead, the 4‑ton upper stages become space junk before descending back down and burning up in the atmosphere, releasing aluminum oxide in the process. Also keep in mind that thousands of launches will be required to complete the megaconstellation. Plus, these satellites are built to be regularly replaced. Starlink is designed to create a steady stream of decommissioned satellites indefinitely, with Starlink satellites already falling into the atmosphere daily. In other words, it will be more of a ‘death by a thousand cuts’ collapse of the ozone layer.
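As a rough sense of scale for that ‘steady stream’, here is a back-of-envelope sketch. The roughly 42,000-satellite target is the goal mentioned later in this discussion; the satellites-per-launch figure, design lifetime, and aluminum-oxide yield per burn-up are assumed, illustrative numbers, not published specifications.

# All figures below except the constellation target are assumptions for illustration.
TARGET_SATELLITES  = 42_000   # planned constellation size discussed in this program
SATS_PER_LAUNCH    = 20       # assumed satellites delivered per Falcon 9 launch
LIFETIME_YEARS     = 5        # assumed design life before a satellite is replaced
ALUMINA_KG_PER_SAT = 30       # assumed aluminum oxide released per satellite burn-up (kg)

launches_to_build       = TARGET_SATELLITES / SATS_PER_LAUNCH
replacements_per_year   = TARGET_SATELLITES / LIFETIME_YEARS
reentries_per_day       = replacements_per_year / 365
alumina_tonnes_per_year = replacements_per_year * ALUMINA_KG_PER_SAT / 1000

print(f"launches just to build the constellation: ~{launches_to_build:,.0f}")
print(f"replacement satellites needed per year: ~{replacements_per_year:,.0f}")
print(f"satellite reentries per day at steady state: ~{reentries_per_day:.0f}")
print(f"aluminum oxide sent into the upper atmosphere: ~{alumina_tonnes_per_year:,.0f} tonnes per year")

Even with these placeholder numbers, a fully built-out and continuously replaced constellation implies on the order of twenty reentries a day, every day, indefinitely, which is why the ‘death by a thousand cuts’ framing fits.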
And, again, all of these trends are poised to accelerate dramatically now that Elon Musk has unbridled power and will effectively write his own government contracts. And let’s not forget that, while a Kessler syndrome disaster might be more like a “boiling frog” scenario than a single catastrophic event, there’s no reason to assume a catastrophic event won’t happen. Especially as these platforms continue to be used for military purposes. As is, the explosion in the number of satellites in orbit guarantees a steady stream of satellites burning up ozone for decades to come. But that assumes these satellites don’t get knocked out of orbit ahead of schedule. What happens to the ozone layer if there’s a space catastrophe that sends thousands of satellites falling to earth all at once?
These are the kinds of scenarios the US is going to be aggressively inviting in coming years. Don’t look up:
““There is now a Starlink reentry almost every day,” McDowell told Space.com. “Sometimes multiple.””
Daily Starlink satellite burnups in the atmosphere. That’s a thing now. It wasn’t always a thing in earth’s history, but it is now. More and more every year. And as we’ve seen, every time those satellites burn up in the atmosphere, they release ozone-destroying aluminum oxide. In other words, humanity has found a new way to blow holes in the ozone layer:
Now, it’s also worth noting that, while the liquid-fuel rockets used by SpaceX really are much less polluting than traditional solid-fuel rockets, the 4‑ton upper stages of the rockets used to launch Starlink not only become space debris but eventually also burn up in the atmosphere, releasing more aluminum oxide. And there are going to be thousands of these launches required to complete the mega-constellation:
But, again, this growing threat to the ozone layer isn’t just an issue of old satellites returning to earth and burning up on a planned schedule. Space junk really is getting out of control, and that space junk creates non-functioning satellites that can fall back to earth well ahead of schedule. Maybe many satellites all at once, should some sort of accelerated Kessler syndrome disaster unfold. And as the following piece notes, the Starlink satellites increasingly filling low Earth orbit had already performed nearly 50,000 collision-avoidance maneuvers in the first half of 2024 alone. As we’ve seen, Starlink satellites carry collision-avoidance systems that need to be ready to deploy at any moment, because that low-orbit space is already so full that satellites can’t be placed in independent, non-colliding orbits. That translates to well over 200 avoidance maneuvers each day. And Starlink isn’t even close to reaching its goal of roughly 42,000 satellites. And that’s on top of all the other competing satellite mega-constellations that are inevitably going to follow. It’s the kind of situation that has experts warning that the collision-avoidance systems may not be adequate in the long run. And, again, the way Kessler’s syndrome builds is one collision at a time. Each collision makes the problem a little worse. There doesn’t have to be one giant catastrophic event. As Andy Lawrence, Regius Professor of Astronomy at the University of Edinburgh, puts it, the challenge is more like the “boiling the frog” problem, where it just keeps getting worse and worse whether or not there’s a single catastrophic triggering event:
“But according to Andy Lawrence, Regius Professor of Astronomy at University of Edinburgh, it’s more insidious than that. “This idea that eventually there will be some sort of catastrophe is not quite right. It’s more like the infamous ‘boiling the frog’ problem,” he says.”
Don’t expect a sudden space catastrophe. It’ll be a slow boil towards that long-term disaster. Which presumably means humanity will be about as helpless at preventing that long-term catastrophe as we’ve proven to be with climate change. Short-term greed will win out. Damn the consequences:
And while Starlink satellites do have collision-avoidance systems, there’s no guarantee those measures will be adequate as these orbits get more and more crowded. Also keep in mind that much of the space junk may be too small to track. These satellites can only respond to junk large enough to detect:
Finally, as we are again reminded, the only real solution to the growing problem of space junk — other than not launching all that junk in the first place — is to somehow get that junk to burn up in the atmosphere, releasing more aluminum oxide and other atmospheric pollutants:
And while this latest warning about our lurch towards Kessler’s syndrome cautions that it doesn’t have to be a big ‘event’, but instead will be more of a slow boil, let’s not forget that some sort of big event could still happen. Especially if space becomes a military battlefield of the future. Don’t forget about Starshield, the special militarized version of Starlink that SpaceX is building for the Pentagon, or the ‘Total Information Awareness’-style real-time surveillance of the entire planet being built for the Pentagon. The risk of drawing these platforms into a military conflict is only going to grow. Especially now that Musk gets to write his own government contracts.