Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.
WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.
You can subscribe to e‑mail alerts from Spitfirelist.com HERE.
You can subscribe to RSS feed from Spitfirelist.com HERE.
You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.
This broadcast was recorded in one, 60-minute segment.
Introduction: This program follows up FTR #‘s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.
The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”
Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and it’s registered to AIQ.
The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.
” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”
In addition, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.
Next, we note that Facebook is getting sued by an app developer for acting like the mafia and using access to all that user data as its key enforcement tool:
“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”
The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.
Is this truly the end of Cambridge Analytica?
No.
They’re rebranding under a new company, Emerdata. Intriguingly, Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . ”
In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”
Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.
Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.
” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”
Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
Program Highlights Include:
- Facebook’s project to incorporate brain-to-computer interface into its operating system: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
- Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
- Further exposition of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
- Observations about the possibilities of Facebook’s goal of having AI govern the editorial functions of its content, as noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
- Microsoft’s Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”
An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources, a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.
Those questionable practices primarily involve a hands-off approach to flagged and reported content like graphic violence, hate speech, and racist and other bigoted rhetoric from far-right groups. The undercover reporter says he was also instructed to ignore users who looked as if they were under 13 years of age, which is the minimum age requirement to sign up for Facebook in accordance with the Child Online Protection Act, a 1998 privacy law passed in the US designed to protect young children from exploitation and harmful and violent content on the internet. The documentary insinuates that Facebook takes a hands-off approach to such content, including blatantly false stories parading as truth, because it engages users for longer and drives up advertising revenue. . . .
. . . . And as the Channel 4 documentary makes clear, that threshold appears to be an ever-changing metric that has no consistency across partisan lines and from legitimate media organizations to ones that peddle in fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be incentivized by any number of performance metrics and personal biases. . . . .
Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . . The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .
2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and it’s registered to AIQ.
The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.
” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”
Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. It is a service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.
It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees, a goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. An ideological purge of the State Department is reportedly already underway.
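To make the shape of such a person-lookup concrete, here is a minimal illustrative sketch in Python. It is an assumption about how a tool of this general kind could work, not a reconstruction of AIQ’s actual product; the records, field names and the search_person function are invented for illustration.

from collections import defaultdict

# Purely illustrative sketch of a person-centric political-content lookup.
# All records and names below are hypothetical; nothing is taken from AIQ's software.
scraped_records = [
    ("Jane Doe", "post", "Like if you agree with Reagan that 'government is the problem'"),
    ("Jane Doe", "like", "Candidate X for Senate"),
    ("John Roe", "post", "Attending the rally downtown this Saturday"),
]

# Index everything by person, so a single name query returns all of that person's posts and "Likes".
index = defaultdict(list)
for person, kind, text in scraped_records:
    index[person].append((kind, text))

def search_person(name):
    """Return every recorded post and 'Like' for the named person."""
    return index.get(name, [])

for kind, text in search_person("Jane Doe"):
    print(kind + ": " + text)

The point of the sketch is simply that once public posts and “Likes” have been scraped and indexed by person, serving up someone’s political history is a trivial lookup.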
AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.
The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.
Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”
According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.
Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.
“I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”
He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.
AggregateIQ has sought to distance itself from an international privacy scandal engulfing Facebook and Cambridge Analytica, despite allegations from Christopher Wylie, a whistleblower at the now-defunct UK firm, that it had acted as the Canadian branch of the organisation.
The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.
The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said. . . .
. . . . “The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.” . . . .
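The matching task Mr. Silvester describes above is, in essence, a join between a list of page “Likes” and a voter file on identifying fields. The Python sketch below shows the general shape of such a match under that assumption; the records and field names are invented, and this is not a reconstruction of AggregateIQ’s software.

# Illustrative sketch of matching Facebook page "Likes" against a voter file.
# All records and field names here are hypothetical.
facebook_likers = [
    {"name": "Jane Doe", "email": "jane@example.com"},
    {"name": "John Roe", "email": "john@example.com"},
]

voter_file = [
    {"name": "Jane Doe", "email": "jane@example.com", "address": "1 Main St", "phone": "555-0100"},
    {"name": "Alex Poe", "email": "alex@example.com", "address": "2 Oak Ave", "phone": "555-0101"},
]

# Index the voter file by normalized e-mail, then enrich each matching liker
# with the address and phone number from the electoral records.
by_email = {row["email"].lower(): row for row in voter_file}
matches = [
    {**liker, **by_email[liker["email"].lower()]}
    for liker in facebook_likers
    if liker["email"].lower() in by_email
]

for match in matches:
    print(match)

On those assumptions, a table of hundreds of thousands of users carrying home addresses, phone numbers and e-mail addresses, like the one described in the files seen by the Financial Times, is exactly what the output of such a join would look like.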
3. Facebook is getting sued by an app developer for acting like the mafia and using access to all that user data as its key enforcement tool:
“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”
Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.
A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.
Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.
It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.
Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University, said, in her opinion, this exposed a potential tension for Facebook.
“Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.”
The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.
The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.
It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours. . . .
. . . . The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.
The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.
“The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.
The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.
Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.
“Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.
In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends.
Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.
Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.
The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.
It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents. . . .
. . . . David Godkin, Six4Three’s lead counsel said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”
Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and he was warned not to look into the issue.
“They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.” . . .
4. Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.
Is this truly the end of Cambridge Analytica?
No.
They’re rebranding under a new company, Emerdata. Intriguingly, Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . ”
. . . . In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.
But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . .
. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.
Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.
An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .
5. In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”
The president of BlackRock, the world’s biggest asset manager, is among those who think big technology firms could invade the financial industry’s turf. Google and Facebook have thrived by collecting and storing data about consumer habits—our emails, search queries, and the videos we watch. Understanding of our financial lives could be an even richer source of data for them to sell to advertisers.
“I worry about the data,” said BlackRock president Robert Kapito at a conference in London today (Nov. 2). “We’re going to have some serious competitors.”
If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said.
Kapito is worried because the effort to win control of payment systems is already underway—Apple will allow iMessage users to send cash to each other, and Facebook is integrating person-to-person PayPal payments into its Messenger app.
As more payments flow through mobile phones, banks are worried they could get left behind, relegated to serving as low-margin utilities. To fight back, they’ve started initiatives such as Zelle to compete with payment services like PayPal.
…
Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.
And banks still have one thing going for them: There’s a massive fortress of rules and regulations surrounding the industry. “No one wants to be regulated like we are,” Staley said.
6. Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.
Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.
” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”
Facebook Inc. wants your financial data.
The social-media giant has asked large U.S. banks to share detailed financial information about their customers, including card transactions and checking-account balances, as part of an effort to offer new services to users.
Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter.
Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said.
Data privacy is a sticking point in the banks’ conversations with Facebook, according to people familiar with the matter. The talks are taking place as Facebook faces several investigations over its ties to political analytics firm Cambridge Analytica, which accessed data on as many as 87 million Facebook users without their consent.
One large U.S. bank pulled away from talks due to privacy concerns, some of the people said.
Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow.
Facebook said it wouldn’t use the bank data for ad-targeting purposes or share it with third parties. . . .
. . . . Alphabet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to provide basic banking services on applications such as Google Assistant and Alexa, according to people familiar with the conversations. . . .
7. In FTR #946, we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.
Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”
As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.
The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.
“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .
. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .
. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
. . . . But he [Wylie] said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .
8a. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”
Facebook wants to read your thoughts.
- ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.
At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.
“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.
“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”
Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.
“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”
She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.
Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.
Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.
The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.
“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”
…
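For a rough sense of the scale behind Zuckerberg’s comparison, take some ballpark figures that are our assumptions rather than numbers from the article: an HD video stream on the order of 5 Mbit/s, and a late-1980s modem at roughly 2,400 bit/s. Then:

\[
4 \times 5\ \text{Mbit/s} \approx 2 \times 10^{7}\ \text{bit/s}
\quad \text{versus} \quad
2.4 \times 10^{3}\ \text{bit/s},
\qquad
\frac{2 \times 10^{7}}{2.4 \times 10^{3}} \approx 8 \times 10^{3}.
\]

On those assumptions, the gap he describes between what the brain “produces” and what speech can transmit is roughly four orders of magnitude.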
8b. More about Facebook’s brain-to-computer interface:
- ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
- ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”
Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.
Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”
So at least there will be independent ethicists working to minimize the potential for malicious use of Facebook’s brain-reading technology to steal or police people’s thoughts.
During our interview, Dugan showed her cognizance of people’s concerns, repeating the start of her keynote speech today saying, “I’ve never seen a technology that you developed with great impact that didn’t have unintended consequences that needed to be guardrailed or managed. In any new technology you see a lot of hype talk, some apocalyptic talk and then there’s serious work which is really focused on bringing successful outcomes to bear in a responsible way.”
In the past, she says the safeguards have been able to keep up with the pace of invention. “In the early days of the Human Genome Project there was a lot of conversation about whether we’d build a super race or whether people would be discriminated against for their genetic conditions and so on,” Dugan explains. “People took that very seriously and were responsible about it, so they formed what was called a ELSI panel . . . By the time that we got the technology available to us, that framework, that contractual, ethical framework had already been built, so that work will be done here too. That work will have to be done.” . . . .
Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”
Facebook’s domination of social networking and advertising give it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology.…
- In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via a brain-to-computer interface. Facebook’s R&D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
- ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
- ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
. . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .
9b. Mr. Oakes’ comments are related in detail in another Times article. ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
. . . . Adolf Hitler “didn’t have a problem with the Jews at all, but people didn’t like the Jews,” he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims.
This sort of campaign, he continued, did not require bells and whistles from technology or social science.
“What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,” he told Dr. Briant. “Trump had the balls, and I mean, really the balls, to say what people wanted to hear.” . . .
9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!
Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”
@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” . . .
But like all teenagers, she seems to be angry with her mother.
Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”
@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”
In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
9d. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
And we keep showing it our very worst selves.
We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.
Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:
Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.
The real point of Tay however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.
Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.
Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11″ spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.
But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.
If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.
So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.
Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .
. . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.
Oh look, Facebook actually banned someone for posting neo-Nazi content on their platform. But there’s a catch: They banned Ukrainian activist Eduard Dolinsky for 30 days because he was posting examples of antisemitic graffiti. Dolinsky is the director of the Ukrainian Jewish Committee. According to Dolinsky, his far right opponents have a history of reporting his posts to Facebook in order to get him suspended. And this time it worked. Dolinsky appealed the ban but to no avail.
So that happened. But first let’s take a quick look at an article from back in April that highlights how absurd this action was. The article is about a Ukrainian school teacher in Lviv, Marjana Batjuk, who posted birthday greetings to Adolf Hitler on her Facebook page on April 20 (Hitler’s birthday). She also taught her students the Nazi salute and even took some of her students to meet far right activists who had participated in a march wearing the uniform of the 14th Waffen Grenadier Division of the SS.
Batjuk, who is a member of Svoboda, later claimed her Facebook account was hacked, but a news organization found that she has a history of posting Nazi imagery on social media networks. And there’s no mention in this report of Batjuk getting banned from Facebook:
“Marjana Batjuk, who teaches at a school in Lviv and also is a councilwoman, posted her greeting on April 20, the Nazi leader’s birthday, Eduard Dolinsky, director of the Ukrainian Jewish Committee, told JTA. He called the incident a “scandal.””
She’s not just a teacher. She’s also a councilwoman. A teacher and councilwoman who likes to post positive things about Hitler on her Facebook page. And it was Eduard Dolinsky who was talking to the international media about this.
But Batjuk doesn’t just post pro-Nazi things on her Facebook page. She also takes her students to meet far right activists:
Batjuk later claimed that her Facebook page was hacked, and yet a media organization was able to find plenty of previous examples of similar posts on social media:
And if you look at that Strana news summary of her social media posts, a number of them are clearly Facebook posts. So if the Strana news organization was able to find these old posts, that’s a pretty clear indication Facebook wasn’t removing them.
That was back in April. Flash forward to today and we find a sudden willingness to ban people for posting Nazi content...except it’s Eduard Dolinsky getting banned for making people aware of the pro-Nazi graffiti that has become rampant in Ukraine:
“Dolinsky, the director of the Ukrainian Jewish Committee, said he was blocked by the social media giant for posting a photo. “I had posted the photo which says in Ukrainian ‘kill the yid’ about a month ago,” he says. “I use my Facebook account for distributing information about antisemitic incidents and hate speech and hate crimes in Ukraine.””
The director of the Ukrainian Jewish Committee gets banned for posting antisemitic content. That’s some world class trolling by Facebook.
And while it’s only a 30 day ban, that’s 30 days where Ukraine’s media and law enforcement won’t be getting Dolinsky’s updates. So it’s not just a morally absurd banning, it’s also effectively going to promote pro-Nazi graffiti in Ukraine by silencing one of the key figures covering it:
And this isn’t the first time Dolinsky has been banned from Facebook for posting this kind of content. But it’s the longest he’s been banned. And the fact that this isn’t the first time he’s been banned suggests this isn’t just an ‘oops!’ genuine mistake:
Dolinsky also notes that he has people trying to silence him precisely because of the job he does highlighting Ukraine’s official embrace of Nazi collaborating historical figures:
So we likely have a situation where antisemites successfully got Dolinsky silenced, with Facebook ‘playing dumb’ the whole time. And as a consequence Ukraine is facing a month without Dolinsky’s reports. Except it’s not even clear that Dolinsky is going to be allowed to clarify the situation and continue posting updates of Nazi graffiti after this month long ban is up. Because he says he’s been trying to appeal the ban, but with no success:
Given Dolinsky’s powerful criticisms of Ukraine’s embrace and historic whitewashing of the far right, it would be interesting to learn if the decision to ban Dolinsky originally came from the Atlantic Council, which is one of the main organizations Facebook outsourced its troll-hunting duties to.
So for all we know, Dolinsky is effectively going to be banned permanently from using Facebook to make Ukraine and the rest of the world aware of the epidemic of pro-Nazi antisemitic graffiti in Ukraine. Maybe if he sets up a pro-Nazi Facebook persona he’ll be allowed to keep doing his work.
It looks like we’re in for another round of right-wing complaints about Big Tech political bias designed to pressure companies into pushing right-wing content onto users. Recall how complaints about Facebook suppressing conservatives in the Facebook News Feed resulted in a change in policy in 2016 that unleashed a flood of far right disinformation on the platform. This time, it’s Google’s turn to face the right-wing faux-outrage machine and it’s President Trump leading it:
Trump just accused Google of biasing the search results in its search engine to give negative stories about him. Apparently he googled himself and didn’t like the results. His tweet came after a Fox Business report on Monday evening that made the claim that 96 percent of Google News results for “Trump” came from the “national left-wing media.” The report was based on some ‘analysis’ by right-wing media outlet PJ Media.
Later, during a press conference, Trump declared that Google, Facebook, and Twitter “are treading on very, very troubled territory,” and his economic advisor Larry Kudlow told the press that the issue is being investigated by the White House. And as Facebook already demonstrated, while it seems highly unlikely that the Trump administration will actually take some sort of government action to force Google to promote positive stories about Trump, it’s not like loudly complaining can’t get the job done:
“Trump told reporters in the Oval Office Tuesday that the three technology companies “are treading on very, very troubled territory,” as he added his voice to a growing chorus of conservatives who claim internet companies favor liberal viewpoints.”
The Trumpian warning shots have been fired: feed the public positive news about Trump, or else...
“Republican/Conservative & Fair Media is shut out. Illegal.”
And he literally charged Google with illegality over allegedly shutting out “Republican/Conservative & Fair Media.” Which is, of course, an absurd charge for anyone familiar with Google’s news portal. But that was part of what made the tweet so potentially threatening to these companies since it implied there was a role the government should be playing to correct this perceived law-breaking.
At the same time, it’s unclear what, legally speaking, Trump could actually do. But that didn’t stop him from issuing such threats, as he’s done in the past:
Ironically, when Trump muses about reinstating long-ended rules requiring equal time for opposing views (the “Fairness Doctrine” overturned by Reagan in 1987), he’s musing about doing something that would effectively destroy the right-wing media model, a model that is predicated on feeding the audience exclusively right-wing content. As many have noted, the demise of the Fairness Doctrine — which led to the explosion of right-wing talk radio hosts like Rush Limbaugh — probably played a big role in intellectually neutering the American public, paving the way for someone like Trump to eventually come along.
And yet, as unhinged as this latest threat may be, the administration is actually going to do “investigations and analysis” into the issue according to Larry Kudlow:
And as we should expect, this all appears to have been triggered by a Fox Business piece on Monday night that covered a ‘study’ done by PJ Media (a right-wing media outlet) that found 96 percent of Google News results for “Trump” come from the “national left-wing media”:
Putting aside the general questions of the scientific veracity of this PJ Media ‘study’, it’s kind of amusing to realize that it was a study conducted specifically on a search for “Trump” on Google News. And if you had to choose a single topic that is going to inevitably have an abundance of negative news written about it, that would be the topic of “Trump”. In other words, if you were to actually conduct a real study that attempts to assess the political bias of Google News’s search results, you almost couldn’t have picked a worse search term to test that theory on than “Trump”.
Google, not surprisingly, disputes these charges. But it’s the people who work for companies dedicated to improving how their clients fare in Google’s search results who give the most convincing responses, since their businesses literally depend on understanding Google’s algorithms:
All that said, it’s not like the blackbox nature of the algorithms behind things like Google’s search engine isn’t a legitimate topic of public interest. And that’s part of why these farcical tweets are so dangerous: the Big Tech giants like Google, Facebook, and Twitter know that it’s not impossible that they’ll be subject to algorithmic regulation someday. And they’re going to want to push that day off for as long as possible. So when Trump makes these kinds of complaints, it’s not at all inconceivable that he’s going to get the response from these companies that he wants as they attempt to placate him. It’s also highly likely that if these companies do decide to placate him, they’re not going to publicly announce this. Instead they’ll just start rigging their algorithms to serve up more pro-Trump content and more right-wing content in general.
Also keep in mind that, despite the reputation of Silicon Valley as being run by a bunch of liberals, the reality is Silicon Valley has a strong right-wing libertarian faction, and there’s going to be no shortage of people at these companies that would love to inject a right-wing bias into their services. Trump’s stunt gives that right-wing faction of Silicon Valley leadership an excuse to do exactly that from a business standpoint.
So if you use Google News to see what the latest news is on “Trump” and you suddenly find that it’s mostly good news, keep in mind that that’s actually really, really bad news because it means this stunt worked.
The New York Times published a big piece on the inner workings of Facebook’s response to the array of scandals that have enveloped the company in recent years, from the charges of Russian operatives using the platform to spread disinformation to the Cambridge Analytica scandal. Much of the story focuses on the actions of Sheryl Sandberg, who appears to be the top person at Facebook overseeing the company’s response to these scandals. It describes a general pattern of Facebook’s executives first ignoring problems and then using various public relations strategies to deal with the problems when they are no longer able to ignore them. And it’s the choice of public relations firms that is perhaps the biggest scandal revealed in this story: In October of 2017, Facebook hired Definers Public Affairs, a DC-based firm founded by veterans of Republican presidential politics that specialized in applying the tactics of political races to corporate public relations.
And one of the political strategies employed by Definers was simply putting out articles that put their clients in a positive light while simultaneously attacking their clients’ enemies. That’s what Definers did for Facebook, with Definers utilizing an affiliated conservative news site, NTK Network. NTK shares offices and staff with Definers, and many NTK stories are written by Definers staff and are basically attack ads on Definers’ clients’ enemies. So how does NTK get anyone to read their propaganda articles? By getting them picked up by other popular conservative outlets, including Breitbart.
Perhaps most controversially, Facebook had Definers attempt to tie various groups that are critical of Facebook to George Soros, implicitly harnessing the existing right-wing meme that George Soros is a super wealthy Jew who secretly controls almost everything. This attack by Definers centered around the Freedom from Facebook coalition. Back in July, the group had crashed the House Judiciary Committee hearings when a Facebook executive was testifying, holding up signs depicting Sheryl Sandberg and Mark Zuckerberg as two heads of an octopus stretching around the globe. The group claimed the sign was a reference to old cartoons about the Standard Oil monopoly. But such imagery also evokes classic anti-Semitic tropes, made more acute by the fact that both Sandberg and Zuckerberg are Jewish. So Facebook enlisted the ADL to condemn Freedom from Facebook over the imagery.
But charging Freedom from Facebook with anti-Semitism isn’t the only strategy Facebook used to address its critics. After the protest in congress, Facebook had Definers basically accuse the groups behind Freedom from Facebook of being puppets of George Soros and encouraged reporters to investigate the financial ties of the groups with Soros. And this was part of a broader push by Definers to cast Soros as the man behind all of the anti-Facebook sentiments that have popped up in recent years. This, of course, is playing right into the growing right-wing meme that Soros, a billionaire Jew, is behind almost everything bad in the world. And it’s a meme that also happens to be exceptionally popular with the ‘Alt Right’ neo-Nazi wing of contemporary conservatism. So Facebook dealt with its critics by first charging them with indirect anti-Semitism and then using their hired Republican public relations firm to make indirect anti-Semitic attacks on those same critics:
“While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.”
Imagine if your job was to handle Facebook’s bad press. That was apparently Sheryl Sandberg’s job behind the scenes while Mark Zuckerberg was acting as the apologetic public face of Facebook.
But both Zuckerberg and Sandberg appeared to have largely the same response to the scandals involving Facebook’s growing use as a platform for spreading hate and extremism: keep Facebook out of those disputes by arguing that it’s just a platform, not a publisher:
Sandberg also appears to have increasingly relied on Joel Kaplan, Facebook’s vice president of global public policy, for advice on how to handle these issues and scandals. Kaplan previously served in the George W. Bush administration. When Donald Trump first ran for president in 2015 and announced his plan for a “total and complete shutdown” on Muslims entering the United States, and that message was shared more than 15,000 times on Facebook, Zuckerberg raised the question of whether or not Trump had violated the platform’s terms of service. Sandberg turned to Kaplan for advice. Kaplan, unsurprisingly, recommended that any sort of crackdown on Trump’s use of Facebook would be seen as obstructing free speech and prompt a conservative backlash. Kaplan’s advice was taken:
And note how, after Trump won, Facebook hired a former aide to Jeff Sessions and lobbying firms linked to Republican lawmakers who had jurisdiction over internet companies. Facebook was making pleasing Republicans in Washington a top priority:
Kaplan also encouraged Facebook to avoid investigating too closely the alleged Russian troll campaigns. This was his advice even in 2016, while the campaign was ongoing, and after the campaign in 2017. Interestingly, Facebook apparently found accounts linked to ‘Russian hackers’ that were using Facebook to look up information on presidential campaigns. This was in the spring of 2016. Keep in mind that the initial reports of the hacked emails didn’t start until mid-June of 2016. Summer technically started about a week later. So how did Facebook’s internal team know these accounts were associated with Russian hackers before the ‘Russian hacker’ scandal erupted? That’s unclear. But the article goes on to say that this same team also found accounts linked with the Russian hackers messaging journalists to share contents of the hacked emails. Was “Guccifer 2.0” using Facebook to talk with journalists? That’s also unclear. But it sounds like Facebook was indeed actively observing what it thought were Russian hackers using the platform:
Alex Stamos, Facebook’s head of security, directed a team to examine the Russian activity on Facebook. And yet Zuckerberg and Sandberg apparently never learned about their findings until December of 2016, after the election. And when they did learn, Sandberg got angry at Stamos for not getting approval before looking into this because it could leave the company legally exposed, highlighting again how not knowing about the abuses on its platform is a legal strategy of the company. By January of 2017, Stamos wanted to issue a public paper on their findings, but Joel Kaplan shot down the idea, arguing that doing so would cause Republicans to turn on the company. Sandberg again agreed with Kaplan:
“Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016, after Mr. Zuckerberg publicly scoffed at the idea that fake news on Facebook had helped elect Mr. Trump, Mr. Stamos — alarmed that the company’s chief executive seemed unaware of his team’s findings — met with Mr. Zuckerberg, Ms. Sandberg and other top Facebook leaders.”
Both Zuckerberg and Sandberg were apparently unaware of the findings of Stamos’s team that had been looking into Russian activity since the spring of 2016 and found early signs of the ‘Russian hacking teams’ setting up Facebook pages to distribute the emails. Huh.
And then we get to Definers Public Affairs, the company founded by Republican political operatives and specializing in bringing political tactics to corporate public relations. In October of 2017, Facebook appears to have decided to double down on the Definers strategy, which appears to revolve around simultaneously pushing out positive Facebook coverage while attacking Facebook’s opponents and critics to muddy the waters:
Then, in March of this year, the Cambridge Analytica scandal blew open. In response, Kaplan convinced Sandberg to promote another Republican to help deal with the damage. Kevin Martin, a former FCC chairman and a Bush administration veteran, was chosen to lead Facebook’s US lobbying efforts. Definers was also tapped to deal with the scandal. And as part of that response, Definers used its affiliated NTK network to pump out waves of articles slamming Google and Apple for various reasons:
Finally, in July of this year, we find Facebook accusing its critics of anti-Semitism at the same time Definers uses an arguably anti-Semitic attack on these exact same critics as part of a general strategy by Definers to define Facebook’s critics as puppets of George Soros:
So as we can see, Facebook’s response to scandals appears to fall into the following pattern:
1. Intentionally ignore the scandal.
2. When it’s no longer possible to ignore, try to get ahead of it by going public with a watered down admission of the problem.
3. When getting ahead of the story doesn’t work, attack Facebook’s critics (like suggesting they are all pawns of George Soros).
4. Don’t piss off Republicans.
Also, regarding the discovery of Russian hackers setting up Facebook accounts in the spring of 2016 to distribute the hacked emails, here’s a Washington Post article from September of 2017 that talks about this. And according to the article, Facebook discovered these alleged Russian hacker accounts in June of 2016 (technically still spring) and promptly informed the FBI. The Facebook cybersecurity team was reportedly tracking APT28 (Fancy Bear) as just part of their normal work and discovered this activity as part of that work. They told the FBI, and then shortly afterwards they discovered that pages for Guccifer 2.0 and DCLeaks were being set up to promote the stolen emails. And recall in the above article that the Facebook team apparently discovered messages from these accounts to journalists.
Interestingly, while the article says this was in June of 2016, it doesn’t say when in June of 2016. And that timing is rather important since the first Washington Post article on the hack of the DNC happened on June 14, and Guccifer 2.0 popped up and went public just a day later. So did Facebook discover this activity before the reports about the hacked emails? That remains unclear, but it sounds like Facebook knows how to track APT28/Fancy Bear’s activity on its platform and just routinely does this, and that’s how they discovered the email distribution operation. And that implies that if APT28/Fancy Bear really did run this operation, they did it in a manner that allowed cybersecurity researchers to track their activity all over the web and on sites like Facebook, which would be one more example of the inexplicably poor operational security by these elite Russian hackers:
“It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.”
It’s kind of an amazing story. Just by accident, Facebook’s cybersecurity experts were already tracking APT28 somehow and noticed a bunch of activity by the group on Facebook. They alert the FBI. This is in June of 2016. “Soon thereafter”, Facebook finds evidence that members of APT28 were setting up accounts for Guccifer 2.0 and DCLeaks. Facebook again informed the FBI:
So Facebook allegedly detected APT28/Fancy Bear activity in the spring of 2016. It’s unclear how they knew these were APT28/Fancy Bear hackers and unclear how they were tracking their activity. And then they discovered these APT28 hackers were setting up pages for Guccifer 2.0 and DC Leaks. And as we saw in the above article, they also found messages from these accounts to journalists discussing the emails.
It’s a remarkable story, in part because it’s almost never told. We learn that Facebook apparently has the ability to track exactly the same Russian hacker group that’s accused of carrying out these hacks, and we learn that Facebook watched these same hackers set up the Facebook pages for Guccifer 2.0 and DC Leaks. And yet this is almost never mentioned as evidence that Russian government hackers were indeed behind the hacks. Thus far, the attribution of these hacks to APT28/Fancy Bear has relied on Crowdstrike and the US government and the direct investigation of the hacked Democratic Party servers. But here we’re learning that Facebook apparently has its own pool of evidence that can tie APT28 to Facebook accounts set up for Guccifer 2.0 and DCLeaks. A pool of evidence that’s almost never mentioned.
And, again, as we saw in the above article, Facebook’s chief of security, Alex Stamos, was alarmed in December of 2016 that Mark Zuckerberg and Sheryl Sandberg didn’t know about the findings of his team looking into this alleged ‘Russian’ activity. So Facebook discovered Guccifer 2.0 and DCLeaks accounts getting set up and Zuckerberg and Sandberg didn’t know or care about this during the 2016 election season. It all highlights one of the meta-problems facing Facebook. A meta-problem we saw on display with the Cambridge Analytica scandal and the charges by former executive Sandy Parakilas that Facebook’s management warned him not to look into problems because they determined that knowing about a problem could make the company liable if the problem is exposed. So it’s a meta-problem of an apparent desire of top management to not face problems. Or at least pretend to not face problems while they knowingly ignore them and then unleash companies like Definers Public Affairs to clean up the mess after the fact.
And in related news, both Zuckerberg and Sandberg claim they had no idea who at Facebook hired Definers, or even that the company had hired Definers at all, until that New York Times report. In other words, Facebook’s upper management is claiming they had no idea about this latest scandal. Of course.
Now that the UK parliament’s seizure of internal Facebook documents from the Six4Three lawsuit threatens to expose what Six4Three argues was an app developer extortion scheme personally managed by Mark Zuckerberg — a bait-and-switch scheme that enticed app developers with offers of a wealth of access to user information and then extorted the most successful apps with threats of cutting off access to the user data unless they gave Facebook a bigger cut of their profits — just how many high-level Facebook scandals have yet to be revealed to the public is a much more topical question. Because based on what we know so far about Facebook’s out of control behavior that appears to have been sanctioned by the company’s executives, there’s no reason to assume there isn’t plenty of scandalous behavior yet to be revealed.
So in the spirit of speculating about just how corrupt Mark Zuckerberg might truly be, here’s an article that gives us some insight into the kinds of historical figures Zuckerberg spends time thinking about: Surprise! He really looks up to Caesar Augustus, the Roman emperor who took “a really harsh approach” and “had to do certain things” to achieve his grand goals:
“Powerful men do love a transhistorical man-crush – fixating on an ancestor figure, who can be venerated, perhaps surpassed. Facebook’s Mark Zuckerberg has told the New Yorker about his particular fascination with the Roman emperor, Augustus – he and his wife, Priscilla Chan, have even called one of their children August.”
He literally named his daughter after the Roman emperor. That hints at more than just a casual historical interest.
So what is it about Caesar Augustus’s rule that Zuckerberg is so enamored with? Well, based on Zuckerberg’s own words, it sounds like it was the way Augustus took a “really harsh approach” to making decisions with difficult trade-offs in order to achieve Pax Romana, 200 years of peace for the Roman empire:
And while focusing on 200 years of peace puts an obsession with Augustus in the most positive possible light, it’s hard to ignore the fact that Augustus was still a master of propaganda and the man who saw the end of the Roman Republic and the imposition of an imperial model of government:
And that’s a little peek into Mark Zuckerberg’s mind that gives us a sense of what he spends time thinking about: historic figures who did a lot of harsh things to achieve historic ‘greatness’. That’s not a scary red flag or anything.
Here’s a new reason to hate Facebook: if you hate Facebook on Facebook, Facebook might put you on its “Be on the lookout” (BOLO) list and start using its location tracking technology to track your location. That’s according to a new report based on a number of current and former Facebook employees who discussed how the company’s BOLO list policy works. And according to security experts, while Facebook isn’t unique in having a BOLO list for company threats, it is highly unusual in that it can use its own technology to track the people on the BOLO list. Facebook can track BOLO users’ locations using their IP address or the smartphone’s location data collected through the Facebook app.
So how does one end up on this BOLO list? Well, there are the reasonable ways, like if someone posts a specific threat against Facebook or one of its employees on one of Facebook’s social media platforms. But it sounds like the standards are a lot more subjective and people are placed on the BOLO list for simply posting things like “F— you, Mark,” or “F— Facebook”. Another group routinely put on the list is former employees and contractors. Again, it doesn’t sound like it takes much to get on the list. Simply getting emotional if your contract isn’t extended appears to be enough. Given those standards, it’s almost surprising that it sounds like the BOLO list is only hundreds of people long and not thousands of people:
“Several of the former employees questioned the ethics of Facebook’s security strategies, with one of them calling the tactics “very Big Brother-esque.””
Yeah, “very Big Brother-esque” sounds like a pretty good description of the situation. In part because Facebook is doing the tracking with its own technology:
Getting on the list also sounds shockingly easy. A simple “F— you, Mark,” or “F— Facebook” post on Facebook is all it apparently takes. Given that, it’s almost unbelievable that the list only contains hundreds of people. Although it sounds like that “hundreds of people” estimate is based on former security employees who left the company since 2016. You have to wonder how much longer the BOLO list could be today compared to 2016 simply given the amount of bad press Facebook has received just in the last year alone:
And it sounds like former employees and contractors can get thrown on the list for basically no reason at all. If you’re fired from Facebook, don’t get emotional. Or the company will track your location indefinitely:
And as Facebook itself makes clear with its anecdote about how it tracked the location of a team of interns after the company became concerned about their safety on a camping trip, the BOLO list is just one reason the company might decide to track the locations of specific people. Employees being unresponsive to emails is another reason for the potential tracking. Given that Facebook is using its own in-house location tracking capabilities to do this there are probably all sorts of different excuses for using the technology:
So now you know, if you’re a former Facebook employee/contractor and/or have ever written a nasty thing about Facebook on Facebook’s platforms, Facebook is watching you.
Of course, Facebook is tracking the locations and everything else it can track about everyone to the greatest extent possible anyway. Tracking everyone is Facebook’s business model. So the distinction is really just whether or not Facebook’s security team is specifically watching you. Facebook the company is watching you whether you’re on the list or not.
So remember those reports from 2017 about how Facebook’s ad targeting options allowed users to target ads for Facebook users who have expressed an interest in topics like “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”? And remember how Facebook explained that this was an accident caused by their algorithms that auto-generate user interest groups and the company promised that they’ll have humans reviewing these auto-generated topics going forward? Surprise! It turns out the human reviewers are still allowing ads targeting users interested in topics like “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” and the neo-nazi punk band Skrewdriver:
” Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.”
Yes, despite Facebook’s promises of greater oversight following the previous reports of Nazi ad targeting categories, the Nazi ad targeting continues. And these ad categories don’t have just a handful of Facebook users. Each of the categories the LA Times tested had hundreds of thousands of users. And with just a $25 purchase, over 4,000 users saw the test ad in 24 hours, demonstrating that Facebook remains a remarkably cost-effective platform for directly reaching out to people with Nazi sympathies:
And these ads show up as Instant Articles, so they would show up in the same part of the Facebook page where articles from sites like CNN and BBC might show up:
Of course, Facebook pledged to remove these neo-Nazi ad categories...just like they did before:
So how confident should we be that Facebook is actually going to purge its system of neo-Nazi ad categories? Well, as the article notes, Facebook’s current ad system earned the company a record $55 billion in ad revenue in 2018 with over 40% profit margins. And a big reason for those big profit margins is the lack of human oversight and the high degree of automation in the running of this system. In other words, Facebook’s record profits depends on exactly the kind of lack of human oversight that allowed for these neo-Nazi ad categories to proliferate:
Of course, we shouldn’t necessarily assume that Facebook’s ongoing problems with Nazi ad categories are simply due to a lack of human oversight. It’s also quite possible that Facebook simply sees the promotion of extremism as a great source of revenue. After all, the LA Times reporters discovered that the number of users Facebook categorized as having an interest in Joseph Mengele actually grew from 117,150 users to 127,010 users during their investigation. That’s a growth of over 8%! So the extremist ad market might simply be seen as a lucrative growth market that the company can’t resist:
Could it be that the explosive growth of extremism is simply making the hate demographic irresistible? Perhaps, although as we’ve seen with virtually all of the major social media platforms like Twitter and YouTube, when it comes to social media platforms profiting off of extremism it’s very much a ‘chicken & egg’ situation.
Oh look at that: A new Wall Street Journal study discovered that several smartphone apps are sending sensitive information to Facebook without getting user consent. This included “Flo Health”, an app for women to track their periods and ovulation. Facebook was literally collecting information on users’ ovulation status. Another app, Instant Heart Rate: HR Monitor, was also sending Facebook data, along with the real-estate app Realtor.com. This is all happening using the toolkit Facebook provides app developers. And while Facebook defended itself by pointing out that its terms require that developers not send the company sensitive information, Facebook also appears to be accepting this information without telling developers to stop:
“In this case it looks like the main violators were the companies that wrote those applications...Facebook in this case is more the enabler than the bad actor.”
That’s one way to spin it: Facebook is more of the enabler than the primary bad actor in this case. That’s sort of an improvement. Specifically, Facebook’s “App Events” tool is enabling app developers to send sensitive user information back to Facebook despite Facebook’s instructions to developers not to send sensitive information. And the fact that Facebook was clearly accepting this sensitive data without telling developers to stop sending it certainly adds to the enabling behavior. Even when that sensitive data included whether or not a woman is ovulating:
And the range of sensitive data includes everything from heart rate monitors to real estate apps. In other words, pretty much any app might be sending data to Facebook but we don’t necessarily know which apps because the apps aren’t informing users about this data collection and don’t give users a way to stop it:
And as the following BuzzFeed report from December describes, while app developers tend to assume that the information their apps are sending back to Facebook is anonymized because it doesn’t have your personal name attached, that’s basically a garbage conclusion because Facebook doesn’t need your name to know who you are. There’s plenty of other identifying information in what these apps are sending. Even if you don’t have a Facebook profile. And about half of the smartphone apps found to be sending information back to Facebook don’t even mention this in their privacy policies according to a study by the German mobile security initiative Mobilsicher. So what percent of smartphone apps overall are sending information back to Facebook? According to the estimates of privacy researcher collective App Census, about 30 percent of all apps on the Google Play store contact Facebook at startup:
“Major Android apps like Tinder, Grindr, and Pregnancy+ are quietly transmitting sensitive user data to Facebook, according to a new report by the German mobile security initiative Mobilsicher. This information can include things like religious affiliation, dating profiles, and health care data. It’s being purposefully collected by Facebook through the Software Developer Kit (SDK) that it provides to third-party app developers. And while Facebook doesn’t hide this, you probably don’t know about it.”
It’s not just the handful of apps described in the Wall Street Journal report. Major Android apps are routinely passing information to Facebook. And this information can include things like religious affiliation and dating profiles in addition to health care data. And while developers might be doing this, in part, because they assume the data is anonymized, it’s not. At least not in any meaningful way. And even non-Facebook users are getting their data sent:
How common is this? According to privacy researcher collective App Census estimates, it’s about 30 percent of all apps in the Google Play store. And half of the apps tested by Mobilsicher didn’t even mention Facebook Analytics in their privacy policy:
And according to the following article, that 30 percent estimate might be low. According to a Privacy International study, at least 20 out of 34 popular Android apps that they tested were transmitting sensitive information back to Facebook without asking for permission:
“It’s not just dating and health apps that might be violating your privacy when they send data to Facebook. A Privacy International study has determined that “at least” 20 out of 34 popular Android apps are transmitting sensitive information to Facebook without asking permission, including Kayak, MyFitnessPal, Skyscanner and TripAdvisor. This typically includes analytics data that sends on launch, including your unique Android ID, but can also include data that sends later. The travel search engine Kayak, for instance, apparently sends destination and flight search data, travel dates and whether or not kids might come along.”
So if you don’t know exactly whether or not an app is sending Facebook your data, it appears to be a safe bet that, yes, that app is sending Facebook your data.
And if you’re tempted to delete all of the apps off of your smartphone, recall all the stories about device makers, including smartphone manufacturers, sending and receiving large amounts of user data with Facebook and literally being treated as “extensions” of Facebook by the company. So while smartphone apps are certainly going to be a major source of personal data leakage, don’t forget there’s a good chance your smartphone itself is basically working for Facebook.
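To make the mechanism described above a bit more concrete, here is a minimal, hypothetical sketch of how a third-party Android app can forward a custom event to Facebook through the SDK’s “App Events” logging. The app, event name, and parameters below are illustrative assumptions, not taken from any of the apps named in the reporting; only the AppEventsLogger calls reflect the SDK’s documented entry points.

```kotlin
import android.os.Bundle
import com.facebook.appevents.AppEventsLogger

// Hypothetical reporter inside a period-tracking app. The class and event names
// are invented for illustration; the logging calls are the Facebook SDK's
// standard App Events interface.
class CycleEntryReporter(private val logger: AppEventsLogger) {

    // Called whenever the (hypothetical) app saves a cycle entry.
    fun reportEntry(isOvulating: Boolean) {
        val params = Bundle().apply {
            // A sensitive health detail attached as an ordinary event parameter.
            putString("ovulation_status", if (isOvulating) "ovulating" else "not_ovulating")
        }
        // This single call ships the event name, its parameters, and device-level
        // identifiers to Facebook's servers, whether or not the user has a
        // Facebook account.
        logger.logEvent("cycle_entry_saved", params)
    }
}

// Typical wiring inside an Activity or Application class:
//   val reporter = CycleEntryReporter(AppEventsLogger.newLogger(context))
//   reporter.reportEntry(isOvulating = true)
```

The point of the sketch is that nothing in this flow asks the user for consent or even informs them: the data leaves the device the moment logEvent is called, which is exactly the dynamic the reporting above describes.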
Here’s an update on the brain-to-computer interface technology that Facebook is working on. First, recall how the initial use for the technology that Facebook has been touting thus far has simply been using your brain for rapid typing. It always seemed like a rather limited application for a technology that’s basically reading your mind.
Now Mark Zuckerberg is giving us a hint at one of the more ambitious applications of this technology: Augmented Reality (AR). AR technology isn’t new. Google Glass was an earlier version of AR technology, and Oculus, the virtual reality headset company owned by Facebook, has made it clear that AR is an area they are planning on getting into. But it sounds like Facebook has big plans for using the brain-to-computer interface with AR technology. This was revealed during a talk Zuckerberg gave at Harvard last month during a two-hour interview with Harvard law school professor Jonathan Zittrain. According to Zuckerberg, the vision is to allow people to use their thoughts to navigate through augmented realities. This will presumably work in tandem with AR headsets.
So as we should expect, Facebook’s early plans for brain-to-computer interfaces aren’t limited to people typing with their minds at a computer. They are plans for incorporating the interface into the kind of technology that people can wear everywhere, like AR glasses:
“All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.”
Yep, everything was going well at the Zuckerberg event until he started talking about his vision for the future. A future of augmented reality that you navigate with your thoughts using Facebook’s brain-to-computer interface technology. It might seem creepy, but Facebook is clearly betting on it not being too creepy to prevent people from using it:
What about potential abuses like violating the constitutional right to remain silent? Zuckerberg assured us that only people who choose to use the technology would actually use it, so we shouldn’t worry about abuse, a rather worrying response in part because of how typical it is:
But at least it’s augmented reality that will be working with some sort of AR headset and the technology isn’t actually injecting augmented info into your brain. That would be a whole new level of creepy.
And according to the following article, a neuroscientist at Northwestern University, Dr. Moran Cerf, is working on exactly that kind of technology and predicts it will be available to the public in as little as five years. Cerf is working on some sort of chip that would be connected to the internet, read your thoughts, go to Wikipedia or some website to get an answer to your questions, and return the answer directly to your brain. Yep, internet-connected brain chips. He estimates that such technology could give people IQs of 200.
So will people have to go through brain surgery to get this new technology? Not necessarily. Cerf is asking the question “Can you eat something that will actually get to your brain? Can you eat things in parts that will assemble inside your head?” Yep, internet-connected brain chips that you eat. So not only will you not need brain surgery to get the chip...in theory, you might not even know you ate one.
Also note that it’s unclear if this brain chip can read your thoughts like Facebook’s brain-to-computer interface or if it’s only for feeding you information from the internet. In other words, since Cerf’s vision for this chip requires the ability to read thoughts first in order to go on the internet, find answers and report them back, it’s possible that this is the kind of computer-to-brain technology that is intended to work with the kind of brain-to-computer mind reading technology Facebook is working on. And that’s particularly relevant because Cerf tells us that he’s collaborating with ‘Silicon Valley big wigs’ that he’d rather not name:
“In as little as five years, super smart people could be walking down the street; men and women who’ve paid to increase their intelligence.”
In just five years, you’ll be walking down the street, wondering about something, and your brain chip will go access Wikipedia, find the answer, and somehow deliver it to you. And you won’t even have to have gone through brain surgery. You’ll just eat something that will somehow insert the chip in your brain:
That’s the promise. Or, rather, the hype. It’s hard to imagine this all being ready in five years. It’s also worth noting that if the only thing this chip does is conduct internet queries it’s hard to see how this will effectively raise people’s IQs to 200. After all, people damn near have their brains connected to Wikipedia already with smartphones and there doesn’t appear to have been a smartphone-induced IQ boost. But who knows. Once you have the technology to rapidly feed information back and forth between the brain and a computer there could be all sorts of IQ-boosting technologies that could be developed. At a minimum, it could allow for some very fancy augmented reality technology.
So some sort of computer-to-brain interface technology appears to be on the horizon. And if Cerf’s chip ends up being technologically feasible it’s going to have Silicon Valley big wigs behind it. We just don’t know which big wigs because he won’t tell us:
So some Silicon Valley big wigs are working on computer-to-brain interface technology that can potentially be fed to people. And they want to keep their involvement in the development of this technology a secret. That’s super ominous, right?
Remember how the right-wing outrage machine created an uproar in 2016 over allegations that Facebook’s trending news was censoring conservative stories? And remember how Facebook responded by firing all the human editors and replacing them with an algorithm that turned the trending news section into a distributor of right-wing ‘fake news’ misinformation? And remember how Facebook announced a new set of news feed changes in January of 2018, then a couple of months later conservatives were again complaining that it was biased against them, so Facebook hired former Republican Senator Jon Kyl and the Heritage Foundation to do an audit of the company to determine whether or not Facebook had a political bias?
Well, it looks like we’re due for a round of fake outrage designed to make social media companies more compliant to right-wing disinformation campaigns. This time, it’s President Trump leading the way on the faux outrage, complaining that “Something’s happening with those groups of folks that are running Facebook and Google and Twitter and I do think we have to get to the bottom of it”:
“Trump and other conservatives have increasingly argued that companies like Google, Facebook and Twitter have an institutional bias that favors liberals. Trump tweeted Tuesday morning that the tech giants were “sooo on the side of the Radical Left Democrats.””
Yep, the social media giants are apparently “sooo on the side of the Radical Left Democrats.” Trump is convinced of this because he feels that “something has to be going on” and “we have to get to the bottom of it”. He’s also sure that Twitter is “different than it used to be” and “we have to do something” because it’s “big, big discrimination”:
And these comments by Trump come a day after Republican congressman Devin Nunes sued Twitter for “shadow-banning” conservative voices. Nunes also sued a handful of Twitter users who had been particularly critical of him:
It’s worth noting that Twitter did admit to sort of inadvertently “shadow-banning” some prominent conservatives in June of last year, including Donald Trump, Jr. The company explained that it changed the algorithm that determines which names show up in the auto-populated drop-down search box on Twitter in order to reduce the exposure of accounts found to engage in troll-like behavior, and this had the effect of downgrading the accounts of a number of right-wing figures. Because of course that’s what would happen if you implement an algorithm to reduce the exposure of accounts engaging in troll-like behavior. Also, a couple of days after the reports on this, Twitter claimed it ‘fixed’ the problem, so prominent Republicans engaging in troll-like behavior will once again show up in the auto-populated search drop-down box.
But Devin Nunes appears to feel so harmed by Twitter that he’s suing it for $250 million anyway. And as the following column notes, while the lawsuit is a joke on legal grounds and stands no chance of victory, it does serve an important purpose. And it’s the same purpose we’ve seen over and over: intimidating the tech companies into giving conservatives preferential treatment and giving them a green light to turn these platforms into disinformation machines.
But Nunes’s decision to sue some individuals who were very critical of him on Twitter also serves another purpose, one we saw when Peter Thiel managed to sue Gawker into oblivion: sending out the general threat that if you publicly criticize wealthy right-wingers, they will sue you and cost you large amounts of money in legal fees whether they have a legal case or not:
“As tempting as it is to simply mock the suit, it also has to be said that it is part of something more disturbing: the rising use of legal actions, especially by right-wing forces, to shut down political opponents. As Susan Hennessey, a legal scholar at the Brookings Institute, noted, the suit “is a politician attempting to abuse the judicial process in order to scare people out of criticizing him by proving that he can cost them a lot in legal fees.””
This form of right-wing intimidation of the media — intimidation that rises to the level of ‘we will financially destroy you if you criticize us’ — is exactly what we saw Peter Thiel unleash when he revenge-bankrolled a lawsuit that drove Gawker into bankruptcy:
So it’s going to be interesting to see if Nunes’s lawsuit furthers this trend or ends up being a complete joke. But given that one metric of success is simply costing the defendants a lot of money it really could end up being quite successful. We’ll see.
And with all that in mind, here’s a review of the impact of the changes Facebook made to its news feed algorithm last year. Surprise! It turns out Fox News stories lead in terms of engagement on Facebook, where comments, shares, and user ‘reactions’ (like a smiley face or angry face reaction) to a story are used as the engagement metric. And if you filter the responses to only ‘angry’ reactions, Fox News dominates the rest of the pack, with Breitbart at #2 and officialbenshapiro at #3 (CNN is #4). So more people appear to be seeing Fox News stories than stories from any other outlet on the platform, and it’s making them angry:
“But the data is in, and it shows Fox News rules the platform in terms of engagement, with “angry” reactions to its posts leading the way.”
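To make that engagement metric concrete, here is a minimal sketch in Python of how the ranking described above could be tallied. All the page names, per-post counts, and reaction categories in it are hypothetical, made up purely for illustration; the article itself doesn’t publish its underlying data or code.

```python
# Minimal sketch of the engagement metric described above: "engagement" is
# treated as comments + shares + all reactions per post, with a second
# ranking that keeps only the 'angry' reactions. All numbers are made up.
from collections import defaultdict

posts = [  # hypothetical per-post interaction counts, keyed by outlet page
    {"page": "FoxNews",   "comments": 1200, "shares": 800, "reactions": {"like": 5000, "angry": 2500}},
    {"page": "Breitbart", "comments": 400,  "shares": 300, "reactions": {"like": 1500, "angry": 1200}},
    {"page": "CNN",       "comments": 900,  "shares": 500, "reactions": {"like": 3000, "angry": 700}},
]

total_engagement = defaultdict(int)
angry_only = defaultdict(int)

for post in posts:
    reactions = sum(post["reactions"].values())
    total_engagement[post["page"]] += post["comments"] + post["shares"] + reactions
    angry_only[post["page"]] += post["reactions"].get("angry", 0)

# Rank outlets by overall engagement, then by 'angry' reactions alone.
print(sorted(total_engagement.items(), key=lambda kv: kv[1], reverse=True))
print(sorted(angry_only.items(), key=lambda kv: kv[1], reverse=True))
```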
Facebook’s news feed algorithm sure loves serving up Fox News stories. Especially the kinds of stories that make people angry:
So as President Trump and Rep Nunes continue waging their social media intimidation campaign it’s going to be worth keeping in mind the wild success these intimidation campaigns have already had. This is a tactic that clearly works.
And in related news, Trump just threatened to open a federal investigation into Saturday Night Live for making too much fun of him...
Oh look, another Facebook data debacle: Facebook just admitted that it’s been storing hundreds of millions of passwords in plain-text log files, which is a huge security ‘no no’ for a company like Facebook. Normally, passwords are supposed to be stored as a hash (where the password is converted to a long string of random-seeming text). This password-to-hash mapping approach allows companies like Facebook to check that the password you input matches your account password without having to directly store the password. Only the hash is stored. And that basic security rule hasn’t been followed for up to 600 million Facebook accounts. As a result, the plaintext passwords that people have been using for Facebook have potentially been readable by Facebook employees for years. This has apparently been the case since 2012 and was discovered in January 2019 by a team of engineers who were reviewing some code and noticed this ‘bug’.
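For a sense of what “storing only the hash” means in practice, here is a minimal sketch in Python of the salted hash-and-verify pattern described above. This is purely illustrative and is not Facebook’s actual implementation; production systems typically use dedicated password-hashing schemes such as bcrypt, scrypt, or Argon2.

```python
# Minimal sketch of salted password hashing: only (salt, digest) is stored,
# never the password itself. Illustrative only; not Facebook's real code.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only these two values need to be stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-derive the hash from the submitted password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The point of the pattern is that the raw password never needs to be written to disk at all; if application log statements capture passwords before they’re hashed, as apparently happened here, that protection is bypassed entirely.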
It sounds like the users of Facebook Lite — a version of Facebook for people with poor internet connections — were particularly hard hit. The way Facebook describes it, hundreds of millions of Facebook Lite users will be getting an email about this, along with tens of millions of regular Facebook users and even tens of thousands of Instagram users (Facebook owns Instagram).
It’s unclear why Facebook didn’t report this sooner, but it sounds like it was only reported in the first place after an anonymous senior Facebook employee told KrebsOnSecurity — the blog for security expert Brian Krebs — about this. So for all we know Facebook had no intention of telling people at all, which would be particularly egregious if true because people often reuse passwords across different websites and so storing this information in a manner that is readable to thousands of Facebook employees represents a very real security threat for sites across the internet for people that reuse passwords (which is unfortunately a lot of people).
Is there any evidence of Facebook employees actually abusing this information? At this point Facebook is assuring us that it has seen no evidence of anyone intentionally trying to read the password data. But as we’re going to see, around 20,000 Facebook employees have had access to these logs. More alarmingly, Facebook admits that around 2,000 engineers and software developers have conducted around 9 million queries for data elements that contained the passwords. But we are assured by Facebook that there’s nothing to worry about:
“Facebook said it will notify “hundreds of millions of Facebook Lite users,” a lighter version of Facebook for users where internet speeds are slow and bandwidth is expensive, and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.”
So the bug caused the passwords of hundreds of millions of Facebook Lite users, but only tens of millions of regular Facebook users and tens of thousands of Instagram users, to get logged in plain text. Was that the result of a single bug or separate bugs for Facebook and Instagram? Are these even bugs created by an innocent coding mistake, or did someone go out of their way to write code that would log plain-text passwords?
At this point we have no idea because Facebook isn’t saying how the bug came to be. Nor is the company saying how it is that they arrived at the conclusion that there were no employees abusing their access to this data:
And yet we learn from Krebs that this bug has existed since 2012 and that some 2,000 engineers and developers have accessed those text logs. We also learn from Krebs that Facebook learned about this bug months ago and didn’t say anything:
So that’s pretty bad. But it gets worse. Because if you read the initial Krebs report, it sounds like an anonymous Facebook executive is the source for this story. In other words, Facebook probably had no intention of telling the public about this. In addition, while Facebook is acknowledging that 2,000 employees actually accessed the log files, according to the Krebs report there were actually 20,000 employees who could have accessed them. So we have to hope Facebook isn’t low-balling that 2,000 estimate. Beyond that, Krebs reports that those 2,000 employees who did access those log files made around nine million internal queries for data elements that contained plain-text user passwords. And despite all that, Facebook is assuring us that no password changes are necessary:
“Facebook is probing a series of security failures in which employees built applications that logged unencrypted password data for Facebook users and stored it in plain text on internal company servers. That’s according to a senior Facebook employee who is familiar with the investigation and who spoke on condition of anonymity because they were not authorized to speak to the press.”
An anonymous senior Facebook employee leaking to Krebs. That appears to be the only reason this story has gone public.
And according to this anonymous employee, those logs were searchable by more than 20,000 Facebook employees. And the 2,000 engineers and developers who definitely did access the files made around 9 million queries of them:
And yet Facebook is telling us that no password resets are required because no abuses have been found. Isn’t that reassuring:
So it sure looks like we have another case of a Facebook privacy scandal that Facebook had no intention of telling anyone about.
The whole episode also raises another interesting question about Facebook and Google and all the other social media giants that have become treasure troves of personal information: just how many spy agencies out there are trying to get their spies embedded at Facebook (or Google, or Twitter, etc.) precisely to exploit these kinds of internal security lapses? Because, again, keep in mind that if people use the same password for Facebook that they use for other websites, their accounts at those other websites are also potentially at risk. So people could have effectively had their passwords for Facebook and GMail and who knows what else compromised by this. Hundreds of millions of people. That’s part of why it’s so irresponsible to tell people no password resets are necessary. The appropriate response would be to tell people that not only should they reset their Facebook password but they also need to reset the passwords for any other sites that use the same password (preferably to something other than their reset Facebook password). Or, better yet, #DeleteFacebook.
In possibly related news, two top Facebook executives, including chief product officer Chris Cox, announced a few days ago that they’re leaving the company. It would be rather interesting if Cox was the anonymous senior employee who was the Krebs source for this story. Although we should probably hope that’s not the case, because that would mean there’s one less senior figure working at Facebook who is willing to go to the press about these kinds of things, and there’s clearly a shortage of such people at this point.
Here’s a pair of articles to keep in mind regarding the role social media will play in the 2020 US election cycle and the questions over whether or not we’re going to see them reprise their roles as the key propagators of right-wing disinformation: President Trump did an interview with CNBC this morning where the issue of the EU’s lawsuits against US tech giants like Google and Facebook came up. The answer Trump gave is the kind of answer that could ensure those companies go as easy as possible on Trump and the Republicans when it comes to platform violations: Trump replied that it was improper for the EU to be suing these companies because the US should be doing it instead, and that he agrees with the EU that the monopoly concerns with these companies are valid:
““Every week you see them going after Facebook and Apple and all of these companies … The European Union is suing them all of the time,” Trump said. “Well, we should be doing this. They’re our companies. So, [the EU is] actually attacking our companies, but we should be doing what they’re doing. They think there’s a monopoly, but I’m not sure that they think that. They just think this is easy money.””
“Well, we should be doing this. They’re our companies.” That certainly had to get Silicon Valley’s attention. Especially this part:
So Trump is now openly talking about breaking up the US tech giants over monopoly concerns. GREAT! A monopoly inquiry is long overdue. Of course, it’s not actually going to be very great if it turns out Trump is just making these threats in order to extract more favorable treatment from these companies in the upcoming 2020 election cycle. And as the following article makes clear, that’s obviously what Trump was doing during this CNBC interview, because he went on to complain that the tech giants were actually discriminating against him in the 2016 election and colluding with the Democrats. Of course, as Trump’s digital campaign manager Brad Parscale has described, the tech giants were absolutely instrumental to the success of the Trump campaign, and companies like Facebook actually embedded employees in the Trump campaign to help the Trump team maximize its use of the platform. And Google-owned YouTube has basically become a dream recruitment tool for the ‘Alt Right’ Trumpian base. So the idea that the tech giants are somehow discriminating against Trump is laughable. It’s true that there have been tepid moves by these platforms to project the image that they won’t tolerate far right extremism, with YouTube pledging to ban white supremacist videos last week. The extent to which this was just a public relations stunt by YouTube remains to be seen, but removing overt neo-Nazi content isn’t going to address most of the right-wing disinformation on the platforms anyway, since so much of that content cloaks the extremism in dog whistles. But as we should expect, the right-wing meme that the tech giants are run by a bunch of liberals and out to silence conservative voices is getting pushed heavily right now, and President Trump just promoted that meme again in the CNBC interview as part of his threat over anti-trust inquiries:
““Well I can tell you they discriminate against me,” Trump said. “You know, people talk about collusion. The real collusion is between the Democrats and these companies. ‘Cause they were so against me during my election run. Everybody said, ‘If you don’t have them, you can’t win.’ Well, I won. And I’ll win again.””
Yes, according to right-wing fantasy world, the tech giants were actually all against Trump in 2016 and not his secret weapon. That’s become one of the fictional ‘facts’ being promoted as part of this right-wing meme. A meme about conservatives getting ‘shadow banned’ by tech companies. And whenever Alex Jones gets banned from a platform it’s now seen as part of this anti-conservative conspiracy:
Recall how Fox News was promoting this meme recently, when Laura Ingraham’s prime time Fox News show tried to present figures like Alex Jones, Milo Yiannopoulos, Laura Loomer, and neo-Nazi Paul Nehlen as having been banned from Facebook because of anti-conservative bias (and not because they kept breaking the rules of the platform). This meme is now a central component of right-wing grievance politics and basically just an update to the long-standing ‘liberal media’ meme that helped fuel the rise of right-wing talk radio and Fox News. It’s exactly the kind of ‘working the ref’ meme that is designed to bully the media into giving right-wingers easier treatment. That’s what makes monopoly threats by Trump so disturbing. He’s now basically telling these tech giants, ‘go easy on right-wingers or I’ll break you up,’ heading into a 2020 election cycle where all indications are that disinformation is going to play a bigger role than ever. So the president basically warned all of these tech companies that any new tools they’ve created for dealing with disinformation being spread on their platforms during the 2020 election cycle had better not work too well, at least not if it’s right-wing disinformation.
Keep in mind that there’s been little indication that these platforms were seriously going to do anything about disinformation anyway, so it’s unlikely that Trump’s threat will make this bad situation worse.
So that’s a preview for the role disinformation is going to play in the 2020 elections: Trump is preemptively running a disinformation campaign in order to pressure the tech giants into not cracking down on the planned right-wing disinformation campaigns the tech giants weren’t seriously planning on cracking down on in the first place.
So remember the absurdist ‘civil rights audit’ that Facebook pledged to do last year? This was the audit conducted by retired GOP-Senator Jon Kyl to address the frequent claims of anti-conservative bias perpetually leveled against Facebook by Republican politicians and right-wing pundits. It’s a core element of the right-wing’s ‘working the refs’ strategy for getting a more compliant media. In this case, the audit involved interviewing 133 conservative lawmakers and interest groups about whether they think Facebook is biased against conservatives.
Well, Facebook is finally releasing the results of that audit. And while the audit doesn’t find any systemic bias, it did acknowledge some conservative frustrations, like a longer approval process for submitting ads to Facebook and the fear that the slowed ad approval process might disadvantage right-wing campaigns following the wild successes the right-wing had in 2016 using social media political ads. Amusingly, on the same day Facebook released this audit it also announced the return of human editors for curating Facebook’s news feeds. Recall how it was 2016 claims by a Facebook employee that the news feed editors were biased against conservatives (when they were really just biased against disinformation coming disproportionately from right-wing sources) that led to Facebook switching to an algorithm without human oversight for generating news feeds, which, in turn, turned the news feeds into right-wing disinformation outlets during the 2016 campaign and was vital to the Trump campaign’s success. So the human news feed editors are apparently back, which will no doubt anger the right-wing. Although recall how Facebook hired Tucker Bounds, John McCain’s former adviser and spokesperson, to be Facebook’s communications director focused on the News Feed back in January of 2017. In other words, yeah, there are going to be human editors overseeing the news feeds again, but it’s probably going to be a former Republican operative in charge of those human editors. It’s a reminder that Facebook is going to find a way to make sure its platform is a potent right-wing propaganda tool one way or another. The claims of anti-conservative discrimination are just propaganda designed to allow Facebook to be a more effective right-wing propaganda outlet:
“On one hand, there’s no evidence of systematic bias against conservatives or any other mainstream political group on Facebook or other platforms. On the other hand, there are endless anecdotes about the lawmaker whose ad purchase was not approved, or who did not appear in search results, or whatever. Stack enough anecdotes on top of one another and you’ve got something that looks a lot like data — certainly enough to convene a bad-faith congressional hearing about platform bias, which Republicans have done repeatedly now.”
Sure, there’s no actual evidence of an anti-conservative bias. But there are 133 anonymous right-wing operatives who feel differently. That’s the basis for this audit. And despite the lengths Jon Kyl’s team went to in describing the various feelings of bias felt by these 133 anonymous right-wing operatives, he’s still being accused of waging a cover-up on Facebook’s behalf by the right-wing media. Because you can’t stop ‘working the refs’:
And on the same day of the release of this report, Facebook announces the return of human editors for the news feed:
So it looks like we’re probably in store for a new round of allegations of anti-conservative bias at Facebook just in time for 2020, which will presumably include a new round of allegations of anti-conservative bias held by the human news feed editors. With that in mind, it’s worth noting that Facebook has expanded its approach to misinformation-detection since 2016, when it last had human news feed curation. For example, Facebook has now teamed up with Poynter’s International Fact-Checking Network (IFCN) to find unbiased organizations that Facebook can outsource the responsibility of fact-checking to. In December of 2016, Facebook announced that it was partnering with ABC News, Snopes, PolitiFact, FactCheck.org, and the AP (all approved by IFCN) to help it identify misinformation on the platform. All non-partisan organizations, albeit the kinds of organizations the right-wing media routinely labels as ‘left-wing mainstream media’ outlets despite the lack of any meaningful left-wing bias. Then, in December of 2017, Facebook announced it was adding the right-wing Weekly Standard to its list of fact-checkers, which soon resulted in left-wing articles getting flagged as disinformation for spurious reasons. Note there was no left-wing site chosen at this point. But the Weekly Standard went out of business, so in April of this year, Facebook announced it was adding Check Your Fact to its list of fact-checking organizations. Who is behind Check Your Fact? The Daily Caller! This is almost like hiring Breitbart to do your fact-checking.
According to the following article, it was Joel Kaplan, the former White House aide to George W. Bush who now serves as Facebook’s global policy chief and is the company’s “protector against allegations of political bias,” who has been pushing to get Check Your Fact added to the list of Facebook’s fact-checkers. This was a rather contentious decision within Facebook’s boardroom, but Mark Zuckerberg apparently generally backed Kaplan’s push.
And that tells us how this new round of human-curated news feeds is going to go: the humans doing the curating are probably going to have their judgement curated by right-wing misinformation outlets like the Daily Caller:
“Last week, Facebook announced that it’s partnering with Check Your Fact — a subsidiary of the right-wing Daily Caller, a site known for its ties to white nationalists — as one of six third-party organizations it currently works with to fact-check content for American users. The partnership has already come under intense criticism from climate journalists (among others) who are concerned that the Daily Caller’s editorial stance on issues like climate change, which is uncontroversial among scientists but isn’t treated as such on right-wing media, will spread even more misinformation Facebook.”
The Daily Caller — a cesspool of white nationalist propaganda — is now a fact-checking sugar daddy for one of the biggest sources of news on the planet. This is the state of the media in 2019. It’s also a reminder that, while Donald Trump is widely recognized as the figure that ‘captured’ the heart and soul of the Republican Party in recent years, the real figure that accomplished this was Alex Jones. That’s why ensuring Facebook is safe for far right disinformation is so important to the party. Alex Jones’s message is the Republican Party’s unofficial zeitgeist at this point. Trump has just been riding Jones’s wave, a wave that’s been building for years.
Oh, but it gets worse. Of course: it turns out the Charles Koch Foundation accounted for 83 percent of the Daily Caller News Foundation’s revenues in 2016, and the Daily Caller News Foundation employs some of Check Your Fact’s fact-checkers. So this is more of a Daily Caller/Koch joint operation. But Facebook explains this decision by asserting that “we do believe in having a diverse set of fact-checking partners.” And yet there aren’t any actual left-wing organizations hired to do similar work:
And it’s been none other than former White House aide to George W. Bush, Joel Kaplan, who has been pushing to give the Daily Caller this kind of oversight over the platform’s content. Kaplan is apparently Facebook’s “protector against allegations of political bias.” And while some of Facebook’s executives recognized that the Daily Caller is a serial peddler of misinformation, Mark Zuckerberg reportedly took Kaplan’s side during these debates:
Yep, Check Your Fact wasn’t even initially transparent with the IFCN about its funding sources and instead hid the fact that it’s financed by the Koch-funded Daily Caller News Foundation. That’s the kind of organization this is. And that’s why the inevitable right-wing claims of bias that we’re undoubtedly going to hear in the 2020 election will be such a bad joke.
In related news, Facebook recently announced that it’s banning pro-Trump ads from The Epoch Times. Recall the recent reports about how The Epoch Times, funded by Falun Gong devotees, has become the second biggest buyer of pro-Trump Facebook ads in the world (after only the Trump campaign itself) and has become a central player in generating all sorts of wild far right conspiracy theories like ‘QAnon’. So was The Epoch Times banned for aggressively pushing all sorts of misinformation? Nope, The Epoch Times was banned from buying Facebook ads for not being upfront about its funding sources. That was it.
This next article shows how Facebook promised to ban white nationalist content from its platform in March 2019. It was not until then that Facebook acknowledged that white nationalism “cannot be meaningfully separated from white supremacy and organized hate groups” and banned it. Facebook does not ban Holocaust denial, but does work to reduce the spread of such content by limiting the distribution of posts and preventing Holocaust-denying groups and pages from appearing in algorithmic recommendations. However, a Guardian analysis found longstanding Facebook pages for VDare, a white nationalist website focused on opposition to immigration; the Affirmative Right, a rebranding of Richard Spencer’s blog Alternative Right, which helped launch the “alt-right” movement; and American Free Press, a newsletter founded by the white supremacist Willis Carto, in addition to multiple pages associated with Red Ice TV. Also operating openly on the platform are two Holocaust denial organizations, the Committee for Open Debate on the Holocaust and the Institute for Historical Review. The Guardian undertook its review of white nationalist outlets on Facebook amid a debate over the company’s decision to include Breitbart News in Facebook News, a new section of its mobile app dedicated to “high quality” journalism. Critics of Breitbart News object to its inclusion in what Zuckerberg has described as a “trusted source” of information on two fronts: its repeated publication of partisan misinformation and conspiracy theories, and its promotion of extreme right-wing views. Steve Bannon called Breitbart “the platform for the alt-right” in 2016. In 2017, BuzzFeed News reported on emails and documents showing how a former Breitbart editor had worked directly with a white nationalist and a neo-Nazi to write and edit an article about the “alt-right” movement. The SPLC and numerous news organizations have reported on a cache of emails between the senior Trump adviser Stephen Miller and the former Breitbart writer Katie McHugh showing how Miller pushed for coverage and inclusion of white nationalist ideas in the publication. The article provides an analogy: just because the KKK produced their own newspapers didn’t mean those papers qualified as news. Breitbart is a political organ; what it “was trying to do was give white supremacist politics a veneer of objectivity.”
The Guardian, Julia Carrie Wong, Thu 21 Nov 2019 06.00 EST
White nationalists are openly operating on Facebook. The company won’t act
Guardian analysis finds VDare and Red Ice TV among several outlets that are still on the platform despite Facebook’s promised ban
On 7 November, Lana Lokteff, an American white nationalist, introduced a “thought criminal and political prisoner and friend” as a featured guest on her internet talk show, Red Ice TV.
For about 90 minutes, Lokteff and her guest – Greg Johnson, a prominent white nationalist and editor-in-chief of the white nationalist publisher Counter-Currents – discussed Johnson’s recent arrest in Norway amid authorities’ concerns about his past expression of “respect” for the far-right mass murderer Anders Breivik. In 2012, Johnson wrote that he was angered by Breivik’s crimes because he feared they would harm the cause of white nationalism but had discovered a “strange new respect” for him during his trial; Breivik’s murder of 77 people has been cited as an inspiration by the suspected Christchurch killer, the man who murdered the British MP Jo Cox, and a US coast guard officer accused of plotting a white nationalist terror attack.
Just a few weeks earlier, Red Ice TV had suffered a serious setback when it was permanently banned from YouTube for repeated violations of its policy against hate speech. But Red Ice TV still had a home on Facebook, allowing the channel’s 90,000 followers to stream the discussion on Facebook Watch – the platform Mark Zuckerberg launched as a place “to share an experience and bring people together who care about the same things”.
The conversation wasn’t a unique occurrence. Facebook promised to ban white nationalist content from its platform in March 2019, reversing a years-long policy to tolerate the ideology. But Red Ice TV is just one of several white nationalist outlets that remain active on the platform today.
A Guardian analysis found longstanding Facebook pages for VDare, a white nationalist website focused on opposition to immigration; the Affirmative Right, a rebranding of Richard Spencer’s blog Alternative Right, which helped launch the “alt-right” movement; and American Free Press, a newsletter founded by the white supremacist Willis Carto, in addition to multiple pages associated with Red Ice TV. Also operating openly on the platform are two Holocaust denial organizations, the Committee for Open Debate on the Holocaust and the Institute for Historical Review.
“There’s no question that every single one of these groups is a white nationalist group,” said Heidi Beirich, the director of the Southern Poverty Law Center’s (SPLC) Intelligence Project, after reviewing the Guardian’s findings. “It’s not even up for debate. There’s really no excuse for not removing this material.”
White nationalists support the establishment of whites-only nation states, both by excluding new non-white immigrants and, in some cases, by expelling or killing non-white citizens and residents. Many contemporary proponents of white nationalism fixate on conspiracy theories about demographic change and consider racial or ethnic diversity to be acts of “genocide” against the white race.
Facebook declined to take action against any of the pages identified by the Guardian. A company spokesperson said: “We are investigating to determine whether any of these groups violate our policies against organized hate. We regularly review organizations against our policy and any that violate will be banned permanently.”
The spokesperson also said that Facebook does not ban Holocaust denial, but does work to reduce the spread of such content by limiting the distribution of posts and preventing Holocaust-denying groups and pages from appearing in algorithmic recommendations. Such limitations are being applied to the two Holocaust denial groups identified by the Guardian, the spokesperson said.
The Guardian undertook a review of white nationalist outlets on Facebook amid a debate over the company’s decision to include Breitbart News in Facebook News, a new section of its mobile app dedicated to “high quality” journalism. Facebook has faced significant pressure to reduce the distribution of misinformation on its platform. Critics of Breitbart News object to its inclusion in what Zuckerberg has described as a “trusted source” of information on two fronts: its repeated publication of partisan misinformation and conspiracy theories – and its promotion of extreme rightwing views.
A growing body of evidence shows the influence of white nationalism on Breitbart’s politics. Breitbart’s former executive chairman Steve Bannon called the site “the platform for the alt-right” in 2016. In 2017, BuzzFeed News reported on emails and documents showing how a former Breitbart editor had worked directly with a white nationalist and a neo-Nazi to write and edit an article about the “alt-right” movement.
This month, the SPLC and numerous news organizations have reported on a cache of emails between the senior Trump adviser Stephen Miller and the former Breitbart writer Katie McHugh showing how Miller pushed for coverage and inclusion of white nationalist ideas in the publication. The emails show Miller directing McHugh to read links from VDare and another white nationalist publication, American Renaissance, among other sources. In one case, reported by NBC News, Breitbart ran an anti-immigration op-ed submitted by Miller under the byline “Breitbart News”.
A Breitbart spokeswoman, Elizabeth Moore, said that the outlet “is not now nor has it ever been a platform for the alt-right”. Moore also said McHugh was “a troubled individual” who had been fired for a number of reasons “including lying”.
“Breitbart is the funnel through which VDare’s ideas get out to the public,” said Beirich. “It’s basically a conduit of conspiracy theory and racism into the conservative movement … We don’t list them as a hate group, but to consider them a trusted news source is pandering at best.”
Drawing the line between politics and news
Facebook executives have responded defensively to criticism of Breitbart News’s inclusion in the Facebook News tab, arguing that the company should not pick ideological sides.
“Part of having this be a trusted source is that it needs to have a diversity of … views in there,” Zuckerberg said at an event in New York in response to a question about Breitbart’s inclusion. Campbell Brown, Facebook’s head of news partnerships, wrote in a lengthy Facebook post that she believed Facebook should “include content from ideological publishers on both the left and the right”. Adam Mosseri, the head of Instagram and a longtime Facebook executive, questioned on Twitter whether the company’s critics “really want a platform of our scale to make decisions to exclude news organizations based on their ideology”. In response to a question from the Guardian, Mosseri acknowledged that Facebook does ban the ideology of white nationalism, then added: “The tricky bit is, and this is always the case, where exactly to draw the line.”
One of the challenges for Facebook is that white nationalist and white supremacist groups adopt the trappings of news outlets or publications to disseminate their views, said Joan Donovan, the director of the Technology and Social Change Research Project at Harvard and an expert on media manipulation.
Red Ice TV is “a group that styles themselves as a news organization when they are primarily a political organization, and the politics are staunchly white supremacist”, Donovan said. “We have seen this happen in the past where organizations like the KKK have produced their own newspapers … It doesn’t mean that it qualifies as news.”
Many people argue that Breitbart is more of a “political front” than a news operation, she added. “When Steve Bannon left Breitbart in order to work much more concretely with campaigns, you could see that Breitbart was a political organ before anything else. Really what they were trying to do was give white supremacist politics a veneer of objectivity.”
Donovan said she expects platform companies will reassess their treatment of Breitbart following the release of the Miller emails. She also called for Facebook to take a more “holistic” approach to combating US domestic terrorism, as it does with foreign terrorist groups.
A Facebook spokesperson noted that Facebook News is still in a test phase and that Facebook is not paying Breitbart News for its inclusion in the program. The spokesperson said the company would continue to listen to feedback from news publishers.
A history of tolerance for hate
Facebook has long asserted that “hate speech has no space on Facebook”, whether it comes from a news outlet or not.
But the $566bn company has consistently allowed a variety of hate groups to use its platform to spread their message, even when alerted to their presence by the media or advocacy groups. In July 2017, in response to queries from the Guardian, Facebook said that more than 160 pages and groups identified as hate groups by SPLC did not violate its community standards. Those groups included:
American Renaissance, a white supremacist website and magazine;
The Council of Conservative Citizens, a white nationalist organization referenced in the manifesto written by Dylann Roof before he murdered nine people in a black church;
The Occidental Observer, an online publication described by the Anti-Defamation League as the “primary voice for antisemitism from far-right intellectuals”;
the Traditionalist Worker party, a neo-Nazi group that had already been involved in multiple violent incidents; and
Counter-Currents, the white nationalist publishing imprint run by the white nationalist Greg Johnson, the recent guest on Red Ice TV.
Three weeks later, following the deadly Unite the Right rally in Charlottesville, Facebook announced a crackdown on violent threats and removed pages associated with the Traditionalist Worker party, Counter-Currents, and the neo-Nazi organization Gallows Tree Wotansvolk. Many of the rest remained.
A year later, a Guardian review found that many of the groups and individuals involved in the Charlottesville event were back on Facebook, including the neo-Confederate League of the South, Patriot Front and Jason Kessler, who organized Unite the Right. Facebook took those pages down following inquiries from the Guardian, but declined to take action against the page of David Duke, the notorious white supremacist and former Grand Wizard of the Ku Klux Klan.
In May 2018, Vice News’s Motherboard reported on internal Facebook training documents that showed the company was distinguishing between white supremacy and white nationalism – and explicitly allowing white nationalism.
In July 2018, Zuckerberg defended the motivations of people who engage in Holocaust denial during an interview, saying that he did not “think that they’re intentionally getting it wrong”. Following widespread criticism, he retracted his remarks.
It was not until March 2019 that Facebook acknowledged that white nationalism “cannot be meaningfully separated from white supremacy and organized hate groups” and banned it.
Beirich expressed deep frustration with Facebook’s track record.
“We have consulted with Facebook many, many times,” Beirich added. “We have sent them our list of hate groups. It’s not like they’re not aware, and I always get the sense that there is good faith desire [to take action], and yet over and over again [hate groups] keep popping up. It’s just not possible for civil rights groups like SPLC to play the role of flagging this stuff for Facebook. It’s a company that makes $42bn a year and I have a staff of 45.”
https://www.theguardian.com/technology/2019/nov/21/facebook-white-nationalists-ban-vdare-red-ice
Remember the story from earlier this year about Facebook outsourcing its ‘fact checking’ operations to organizations like the Koch-financed far right Daily Caller News Foundation? Well, here’s the flip side of stories like that: Facebook just lost its last fact-checking organization in the Netherlands, the Dutch newspaper NU.nl. Why did the newspaper leave the program? Because Facebook forced NU.nl to reverse its ruling that the claims in a far right Dutch ad were unsubstantiated, in keeping with Facebook’s new policy of not fact-checking politicians. The group had labeled as unsubstantiated an ad by a far right politician claiming that 10 percent of Romania’s land is owned by non-Europeans, but Facebook intervened and forced a reversal of that ruling. So NU.nl quit the fact-checking program because it wasn’t allowed to check the facts of society’s biggest and loudest liars:
““What is the point of fighting fake news if you are not allowed to tackle politicians?” asked NU.nl’s editor-in-chief Gert-Jaap Hoekman in a blog post announcing the decision. “Let one thing be clear: we stand behind the content of our fact checks.””
What is the point of fighting fake news if you are not allowed to tackle politicians? That’s a pretty valid question for a fact checker. Especially in an era of the rise of the far right when trollish political gas-lighting has become the norm. At some point, being a fact checker with those kinds of constraints effectively turns these fact checking organizations into facilitators of these lies.
In related news, check out the recent addition to Facebook’s “trusted” news feed: Breitbart News:
“Facebook News is partnering with a variety of regional newspapers and some major national partners, including USA Today and The Wall Street Journal. But as The New York Times and Nieman Lab report, its “trusted” sources also include Breitbart, a far-right site whose co-founder Steve Bannon once described it as a platform for the white nationalist “alt-right.” Breitbart has been criticized for repeated inaccurate and incendiary reporting, often at the expense of immigrants and people of color. Last year, Wikipedia declared it an unreliable source for citations, alongside the British tabloid Daily Mail and the left-wing site Occupy Democrats.”
It’s not just a news feed. It’s a “trusted news” feed. That’s how Mark Zuckerberg envisions the Facebook News feature is supposed to work. And yet when asked why Breitbart News was invited into this “trusted” collection of news sources, Zuckerberg explains that in order for the Facebook News feed to be trusted it needs to draw from a wide variety of sources across the ideological spectrum. So in order for Facebook News to be trusted, it needs to include ideological sources from far right ideologies that thrive on warping the truth and creating fictional explanations of how the world works:
So as we can see, Facebook faces some challenges with its new Facebook News and fact checking services. Enormous challenges that are the same underlying challenge: the chronic deception at the foundation of far right worldviews and the enormous opportunity social media creates for profitably spreading those lies. And as we can also see, Facebook is, true to form, failing immensely at overcoming those challenges. Along with failing the enormous meta-challenge of overcoming Facebook’s insatiable corporate greed, also true to form.
There are a lot of questions swirling around the historic impeachment vote taking place in the House of Representatives today. But one thing is abundantly clear at this point: There’s going to be A LOT of political lying in the 2020 election. That’s literally what the impeachment is all about. It’s literally an impeachment over a scheme to extort the Ukrainian government into ginning up false criminal charges against the politician the Trump team saw as their likeliest 2020 opponent. You almost can’t come up with a bigger red flag about upcoming electoral lies than this impeachment.
And that’s part of what makes Facebook’s decision to explicitly allow politicians to run deceptive ads on Facebook so disturbing. The president is literally being impeached over a scheme to create a giant lie against his 2020 political opponent. The Trump team isn’t just planning on standard political exaggerations or mischaracterizations. The Trump reelection campaign is rooted in creating and exploiting fake criminal charges against his presumed opponent, and Facebook has already made clear to the Trump campaign, and any other campaign, that they can lie as much as they want and Facebook will gladly run their ads. This is despite Google and Twitter taking a very different approach and banning political ads altogether, and despite Facebook’s own employees issuing open letters decrying Facebook’s ad policy. Keep in mind that Facebook does continue to fact-check political ads issued by non-politicians and will remove ads from non-politicians it deems to be deceptive. Only ads from politicians are being given this lie loophole.
Given that both Facebook and lying are key components of the Trump reelection strategy, perhaps it won’t come as a surprise to learn that it’s reportedly none other than Trump’s biggest backer at Facebook, Peter Thiel, who has been internally lobbying Mark Zuckerberg to keep Facebook’s policy of allowing deceptive political ads:
“Thiel has reportedly urged Facebook to stick by a controversial policy first announced in September exempting political ads from being fact-checked, according to The Wall Street Journal. Though some directors and executives encouraged Facebook to crack down on unreliable information or ban political advertisements altogether, Thiel has reportedly urged Zuckerberg not to bow to public pressure.”
Surprise! Thiel came to the rescue of the GOP’s lies. It’s a sign of how much influence he continues to have at Facebook despite selling most of his original shares. His influence apparently has more to do with his personal sway over Mark Zuckerberg:
So Mark Zuckerberg values the insights and advice of one of the world’s most powerful fascists. That sounds about right for a Facebook story.
And as the following article points out, if giving the Trump campaign a license to openly lie seems like a recipe for disaster heading into 2020, don’t forget that it’s not hard for someone to technically become a politician. All they have to do is run for office. So if a group has a lot of money to spend on Facebook ads, and a lot of lies they want to push with those ads, all that group will need to do is field a candidate for office. At least in theory.
That theory was tested by Democrats shortly after Facebook announced its political ads policy, when Democratic politicians started intentionally running obviously fake ads on Facebook to see what the company would do. A left-leaning political action committee, the Really Online Lefty League, also decided to test the new policy with an ad claiming Republican Senator Lindsey Graham was a supporter of the Green New Deal. Facebook responded that the ad was going to be taken down because it came from a political action committee, and not an actual politician. So one of the group’s members, Adriel Hampton, decided to run for governor of California. Facebook refused to exempt Hampton’s fake ads, saying, “This person has made clear he registered as a candidate to get around our policies, so his content, including ads, will continue to be eligible for third-party fact-checking.” So it sounds like the only thing preventing this plan from working is the fact that Hampton made it clear he only registered as a candidate to exploit Facebook’s fact-checking loophole:
“So the group found a workaround: One of the PAC members, Adriel Hampton, filed with the Federal Election Commission to run for California governor. Now a politician, as the logic of Facebook’s policies would go, he can run as many political ads as he wants.”
Just turn yourself into a politician and you can openly run as many lying ads as you want on Facebook. It’s that easy. In theory. In this case, though, Facebook still restricted the lying ads, but only because Adriel Hampton made it clear he wasn’t seriously running and was only doing this to test Facebook’s policies. So Facebook is unwilling to say whether a politician’s ads contain lies, but it is willing to say whether or not a politician is a real politician:
Finally, recall that Renée DiResta happens to be one of the figures who appears to have been involved with the New Knowledge project that created fake ‘Russian bot’ networks operating on Twitter and Facebook in the 2017 Alabama special Senate race, ostensibly to test how people respond to disinformation bot networks. So her expertise in media and misinformation includes real-world experience in running an actual disinformation network. And that disinformation network wasn’t running ads. It was bots just promoting memes and links:
It’s a reminder that even if Facebook bans lying ads from politicians, the platform is still going to be a heavy promoter of misinformation on a massive scale.
So we’ll see if there’s a flood of third-party candidates who don’t seem to be serious about running for office and only serious about spreading disinformation on Facebook. Fake third-party candidates who presumably won’t openly declare that they’re doing it just to exploit Facebook’s lie loophole so Facebook doesn’t have to ban them.
It’s also worth noting that this gimmick can work the other way around: if the Trump campaign is running a bunch of blatantly lying ads, the Democrats could take the content of such an ad, repackage it in a new ad, and have a left-leaning political action committee that’s subject to Facebook’s fact-checking rules buy a very small audience for the deceptive ad to see if Facebook bans it at that point. Of course, if the ad was indeed banned, at that point the main recourse for the Democrats would be to buy a bunch of Facebook ads talking about how Facebook verified the ad is a bunch of lies. That could easily happen, which is a reminder that Facebook’s policies aren’t just set up to help Republicans lie their way into office. They’re also set up to cynically sell more ads. Including ads highlighting the lies in other ads.
How many times is Steve Bannon allowed to call for the murder of government officials before Facebook suspends his account? That was the question Senator Richard Blumenthal asked Mark Zuckerberg during a Senate Judiciary hearing on Tuesday, in reference to Bannon’s recent call for the beheading of Anthony Fauci, a move that got Bannon banned from Twitter, but not Facebook.
So what was Zuckerberg’s answer? Well, it sounds like if Steve Bannon calls for the murder of government officials, the posts calling for murder will be taken down, but that won’t automatically result in the banning of Bannon’s account. Account bans are made on more of a case-by-case basis. So the rules are that if you call for the murder of government officials, your calls for murder might eventually be taken down, but you will probably still be allowed to continue posting on Facebook. At least that’s the case for Steve Bannon:
““Having a content violation does not automatically mean your account gets taken down and the number of strikes varies depending on the type of offense, so if people are posting terrorist content or child exploitation content then the first time that they do it, then we will take down their account,” Zuckerberg continued.”
As Mark Zuckerberg clarified in his answer, when Steve Bannon called for the beheading of Anthony Fauci, that wasn’t the kind of content violation that gets you auto-banned. It’s not like terrorist content or child exploitation content. It’s merely Trump’s former top advisor calling for the beheading of a government official:
It’s the kind of intentionally vague rules system that raises the question of whether anyone can call for the killing of government officials on Facebook without getting banned, or whether that’s a privilege reserved for former government officials like Bannon.
But as the following report from back in August about leaked internal Facebook documents makes clear, one of the major factors in Facebook’s internal system for determining what kind of punishment people should receive for violating Facebook’s rules is whether or not they’re a conservative personality who might create a public relations headache for the company. And it’s a rule Facebook’s senior leadership makes sure is enforced:
“The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook’s fact-checking could go public and fuel allegations that the social network was biased against conservatives.”
As the leaked memos make clear, when Facebook’s “misinformation teams” make a decision, it might be made under direct oversight from Facebook’s leadership. And as the memos also make clear, Facebook’s leadership has one primary concern: not pissing off conservatives and avoiding accusations of an anti-conservative bias:
And that’s why we can infer that the ultimate reason Steve Bannon didn’t have his Facebook account banned after he called for the beheading of Anthony Fauci is that Mark Zuckerberg didn’t want Bannon banned. How many times is Steve Bannon allowed to call for the murder of government officials before Facebook suspends his account? It’s up to the whims of Mark Zuckerberg.
In related news, Steve Bannon is now calling for President Trump to launch an investigation of Fauci. No new death threats from Bannon so far. But when there are, Facebook will be sure to explain why those death threats are very bad and should be removed but also not a bannable offense. Well, ok, Facebook won’t actually explain why this is the case. Mark clearly doesn’t have to explain himself to anyone. Including senators.
Now that alleged censorship of conservative voices by ‘Big Tech’ is one of the standard fake right-wing grievances poised to be amplified even more in coming years — especially if President Trump forms a right-wing media network to rival Fox News — here’s a pair of articles that give us a preview of the Big Lie media landscape we should expect to dominate after companies are successfully intimidated into further limiting their already limited censorship of right-wing disinformation. A Big Lie media landscape that doubles as an extremist recruiting campaign.
First, here’s an article about a recent study that examined the relative reach of left-wing and right-wing voices on US social media platforms. As everyone should expect, social media is almost completely dominated by right-wing voices, with right-wing figures getting audiences that dwarf their left-wing counterparts. Right-wing figures like Ben Shapiro, who also happens to be one of the biggest complainers about Big Tech’s bias against conservatives. As Shapiro wrote on Twitter on October 15, “What we are watching — the militarization of social media on behalf of Democrats, and the overt suppression of material damaging to Democrats to the cheering of the press — is one of the single most dangerous political moments I have ever seen.” As the Politico article points out, Shapiro’s Facebook page had roughly 33 million social media interactions during the month of October, compared to only 19 million for Joe Biden’s Facebook page. So the guy with a greater social media presence than the Democratic candidate was posting on social media about how we’re watching the militarization of social media on behalf of Democrats.
Now, we should note that Shapiro was specifically making that post in response to social media pulling posts promoting the NY Post’s highly dubiously sourced story about Hunter Biden’s laptop(s). It was a story with so little authentication that even Fox News turned it down (so it was instead laundered through the Murdoch-owned NY Post). But these claims of anti-conservative bias have been coming from Shapiro for years, despite his DailyWire being one of the most shared sites on Facebook. It points towards one of the underlying reasons the conservative myth of the ‘Big Tech anti-conservative bias’ isn’t going away any time soon: the conservative domination of social media allows for quite a bit of high-profile complaining about a supposed anti-conservative social media bias:
“Right-wing social media influencers, conservative media outlets and other GOP supporters dominate online discussions around two of the election’s hottest issues, the Black Lives Matter movement and voter fraud, according to the review of Facebook posts, Instagram feeds, Twitter messages and conversations on two popular message boards. And their lead isn’t close.”
It’s not even close. The most popular right-wing social media content gets shared roughly ten times as much as the most popular left-wing posts. And this includes posts about voter fraud claims. Right-wing sites really do effectively have the power to digitally drown out other types of content, making today’s social media ecosystem the perfect Big Lie machine. Not only are right-wing Big Lies heavily promoted, largely with impunity, but there’s an entire crusade about ‘Big Tech’s bias against conservatives’ that, itself, relies on that very same right-wing domination of social media. It’s beyond Orwellian:
Perhaps the most topical example of this extreme imbalance is the myth of massive left-wing mail-in voter fraud. It started as a NY Post story last August claiming that Democrats were planning on stealing the election through mass voter fraud, and within days that narrative completely dominated how the story was covered. Denials of the accusation were just background noise:
And when social media companies made the decision to pull social media posts that were promoting the highly suspect ‘October Surprise’-ish NY Post story about Hunter Biden’s laptops, we have Ben Shapiro, one of the biggest voices on social media, calling it the “militarization of social media on behalf of Democrats”. Shapiro’s Facebook page got more shares than Joe Biden’s in October, and he’s complaining about the militarization of social media on behalf of the Democrats:
So what we’re obviously looking at here is a deliberate intimidation campaign intended to deter social media companies from enforcing even the limited and tepid restrictions they already have on right-wing disinformation. An intimidation campaign that’s largely waged on social media and largely relies on right-wing domination of social media to amplify the intimidation. Again, it’s beyond Orwellian.
With all that in mind, here’s a peek at what we can expect should the right-wing succeed in removing the already restrained censorship of right-wing content: the 4chan-ization of social media. Because as the following Vice article makes clear, once you have moderators who are committed to interpreting right-wing content in the most generous light possible and who are intent on finding reasons not to remove anything but the most extreme content, it’s only a matter of time before sites are effectively turned into neo-Nazi recruiting zones, where the constant exposure to extremist memes and images effectively desensitizes audiences while building an echo-chamber for Big Lie propagation. And it only takes a relative handful of active players to create this environment. In the case of 4chan, a site that started as a relatively progressive forum for anime, it was the actions of a single lead moderator, known as “RapeApe”, who was given the power to hire and fire other moderators (known as “janitors” on the site) after the site was sold to a new owner in 2015. Under RapeApe’s rule, the “/pol/” politics forum on the site became overrun with far right posters who soon started “raiding” other forums on the site, filling those forums with “sieg heils” and other far right content. Eventually “/pol/” and “RapeApe” won, and 4chan has effectively become one of the leading far right meme factories on the internet. It’s a reminder that the end result of the right-wing propaganda campaign to intimidate social media companies into allowing any and all right-wing content is to inevitably turn the entire internet into 4chan:
“Because of 4chan’s often wildly offensive content, many assume that the site is completely unmoderated. But 4chan has a corps of volunteers, called “janitors,” “mods,” or “jannies,” whose job it is—theoretically—to make sure that content on the site abides by the rules. (4chan draws a distinction between more senior “moderators,” who are responsible for all boards, and “janitors,” who patrol one or two; we refer to them interchangeably because janitors also moderate discussion.) The janitors we spoke to and a major trove of leaked chat logs from the janitors’ private communications channel tell the story of RapeApe’s rise from junior janny to someone who could decide what kind of content was allowed on the site and where, shaping 4chan into the hateful, radicalizing online community it’s known for today.”
Once “RapeApe” had the power to fire the other mods, he basically single-handedly turned the site into the internet’s home for far right radicalization. Well, he didn’t single-handedly do it. All of the people creating the far right content played a role. But they couldn’t have pulled it off without RapeApe having their back. The site started off, after all, as a fairly progressive anime site. They started the /pol/ forum literally as a means of siphoning the racists off from the rest of the site:
But thanks to RapeApe’s oversight — or enforced lack of oversight — /pol/ ultimately won out and its politics now define the site. And yet, if we go by the internal “janitor” chat logs, RapeApe didn’t come across as an overt neo-Nazi. He came across as a typical right-winger with a focus on guns, gays, and the alleged censorship of conservatives:
And RapeApe managed to allow this far right takeover of the site despite its rules against posting racist content. How did he manage this? By setting a standard so absurd that almost all racist content could be excused: content only got banned if the poster appeared to have racist intent when posting it. It’s the perfect excuse in the age of ‘jokey’ far right memes. You weren’t advocating for another Holocaust. You were just joking about it, so it’s allowed:
As a consequence of these moderation standards, 4Chan is now arguably the biggest neo-Nazi recruiting ground on the internet:
We can’t say we weren’t warned. 4Chan isn’t just an apocryphal over-the-top story about the perils of unrestricted far right propaganda. It’s a very real over-the-top story about the perils of unrestricted far right propaganda, which is why it’s a warning of what’s to come for the rest of the internet should the right-wing media and Republicans manage to successfully create a standard where ‘anything goes’ for right-wing content.
But on the plus side, at least the situation probably can’t get too much worse. After all, it’s not as if right-wing content doesn’t already dominate social media. And it’s not as if social media giants haven’t already become neo-Nazi recruiting grounds. And it’s not as if right-wing threats of political violence aren’t tolerated, as Mark Zuckerberg made clear during his congressional hearing last week when he refused to ban Steve Bannon from Facebook despite Bannon’s seemingly sincere public calls for the beheading of Christopher Wray and Anthony Fauci. The current propaganda about right-wing censorship isn’t really about intimidating Big Tech into relaxing existing standards so much as it’s about intimidating Big Tech into maintaining the currently relaxed standards while their platforms continue to act as far right radicalization platforms.
Still, it’s possible that the QAnon movement, for example, could be given free rein to post whatever it wants wherever it wants across social media, and that would be a real gain for the far right. QAnon is, after all, basically a rehashed Protocols of the Elders of Zion. Similarly, President Trump’s disinformation-filled tweets might no longer get those disinformation warnings. There really are plenty of areas where the limited forms of content moderation that currently keep out the worst far right content could be pulled, allowing a whole new flood of garbage to spew across major social media platforms. In other words, while Facebook might be 50% 4chan at this point, that other 50% that it’s holding back really is the most awful content. And if the right-wing succeeds in its intimidation campaign, the full 4chan-ization of the internet will be just a matter of time.
In related news, a right-wing outlet is now so upset with Fox News host Tucker Carlson over his calls for the Trump campaign to show actual evidence of widespread voter fraud that they’ve decided to #PizzaGate Carlson. Like, literally tie him to the whole #PizzaGate garbage conspiracy theory from 2016. It’s actually kind of incredible to see given the crucial role Carlson plays in the mainstreaming of far right content. But he clearly hit a nerve and now he has to pay in the form of being #PizzaGated by his fellow far right media brethren. We better hope no social media outlets do anything to hinder the spread of this allegation and show their anti-conservative bias.
With all of the focus on the role far right communication apps like Parler played in planning and coordinating the January 6 insurrection, here’s an interesting piece in BuzzFeed that looks at the role Mark Zuckerberg has been directly playing for years in protecting the purveyors of right-wing disinformation from Facebook’s own internal rule enforcers. And as we’ll see, Facebook’s own employees tasked with enforcing those rules are now coming forward claiming that it was Mark Zuckerberg’s previous actions that effectively facilitated the insurrectionists’ real-time use of Facebook to carry out the insurrection. Specifically, it was the direct interventions by Zuckerberg to protect figures like Alex Jones and Ben Shapiro from Facebook’s rules that injected so much uncertainty into the enforcement process when it came to conservative users — Zuckerberg was reportedly terrified of right-wing campaigns accusing Facebook of ‘shadow-banning’ conservatives — that Facebook’s employees found themselves effectively forced to allow the insurrectionists free rein on the platform.
But it’s not just Zuckerberg in Facebook’s leadership who has been spearheading the efforts to carve out a special rule exemption for conservatives. Joel Kaplan, the former White House aide to George W. Bush who now serves as Facebook’s global policy chief, has also been acting as the internal guardian of right-wing disinformation on the platform. Recall that Kaplan arranged to have the far right Daily Caller outlet added to Facebook’s list of ‘fact-checkers’. Hiring The Onion would have been more responsible.
We’re told that, in the weeks prior to the election, there was so much misinformation undermining trust in the integrity of the vote spreading across Facebook that executives decided the site would lean on the News Ecosystem Quality (NEQ) score, an internal rating the company gives publishers based on assessments of their journalism, to determine which articles show up in people’s news feeds. Implementing this NEQ-weighted ranking did so much to improve the quality of the news being pushed to users’ news feeds that the vice president responsible for developing the NEQ system pushed to have it continued indefinitely. Then Joel Kaplan intervened and the feature was removed. It was only in the days after the insurrection that Facebook restored the NEQ news feed feature.
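To make the mechanism concrete, here is a minimal sketch of what ranking a news feed by a publisher-quality signal like NEQ could look like. Everything in it (the field names, the weights, the blending formula) is a hypothetical illustration, not Facebook’s actual implementation:

```python
# Hypothetical sketch of quality-weighted feed ranking.
# Field names, weights, and the blending formula are illustrative
# assumptions, not Facebook's actual NEQ system.
from dataclasses import dataclass

@dataclass
class FeedItem:
    headline: str
    engagement: float     # predicted clicks/shares/comments, 0..1
    publisher_neq: float  # publisher quality score, 0..1

def rank_feed(items, quality_weight=0.0):
    """Blend predicted engagement with publisher quality.

    quality_weight=0 reproduces pure engagement ranking; raising it
    demotes low-quality publishers even when their posts are highly
    engaging.
    """
    def score(item):
        return ((1 - quality_weight) * item.engagement
                + quality_weight * item.engagement * item.publisher_neq)
    return sorted(items, key=score, reverse=True)

feed = [
    FeedItem("Viral outrage post", engagement=0.9, publisher_neq=0.2),
    FeedItem("Well-sourced report", engagement=0.6, publisher_neq=0.9),
]

print([item.headline for item in rank_feed(feed)])                      # engagement alone wins
print([item.headline for item in rank_feed(feed, quality_weight=0.7)])  # quality weighting flips the order
```

The point of the sketch is simply that dialing a single knob like this up or down changes what millions of people see, which is why who controls that knob matters.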
It sounds like at least six Facebook employees have resigned since the November election with farewell posts that called out Facebook’s leadership for failing to heed the company’s own experts on misinformation and hate speech. And four of those departing employees cited the fact that Joel Kaplan is simultaneously head of the public policy team — which oversees lobbying and government relations — and the content policy team that sets and enforces the platform’s rules.
So between Zuckerberg and Kaplan, Facebook’s own employees tasked with enforcing the platform’s rules are finding themselves unable to enforce those rules against conservatives, culminating in Facebook being used as a key platform by the insurrectionists. And this situation has, in turn, reportedly led to some severe morale issues in the rules-enforcement department, hence the whistleblowing we’re now hearing:
“Zuckerberg’s “more nuanced policy” set off a cascading effect, the two former employees said, which delayed the company’s efforts to remove right-wing militant organizations such as the Oath Keepers, which were involved in the Jan. 6 insurrection at the US Capitol. It is also a case study in Facebook’s willingness to change its rules to placate America’s right wing and avoid political backlash.”
A “more nuanced policy”. It’s a darkly amusing way of characterizing Mark Zuckerberg’s demands that Facebook carve out special exemptions for figures like Alex Jones. But Zuckerberg wasn’t the only high-level executive making these demands of Facebook’s employees. Joel Kaplan reportedly guided Zuckerberg through this process of making right-wing misinformation super-spreaders a protected class on the platform:
And despite all of the public and internal blowback Facebook is experiencing as a result of the role it continues to play in spreading disinformation, Zuckerberg reportedly remains stalwart in his support for Kaplan. But Kaplan’s influence over how Facebook implements its internal rules isn’t limited to his relationship with Zuckerberg. Kaplan is literally in charge of both lobbying governments and enforcing the rules. So Facebook basically designed a corporate structure that ensures its rules will be implemented in the manner most politically palatable to key governments:
But we don’t have to merely look at the warped corporate structure to realize there’s a major problem with how Facebook enforces the rules against right-wing figures. We just have to look at the endlessly growing list of examples of Facebook bending over backwards to appease far right disinformation outlets. A list that keeps growing in large part due to the steady stream of demoralized Facebook employees and ex-Facebook employees blowing the whistle. For example, there was the decision to continue with the “Groups You Should Join” feature in August of 2020 after it was determined that the feature was driving growing political polarization. It was Kaplan’s public policy team that blocked ending the feature until after the election, over concerns that doing so beforehand “would have created thrash in the political ecosystem” during the election. Yes, Facebook deferred a policy change that would have reduced political polarization until after the election over fears of a political backlash:
But the “Groups You Should Join” feature wasn’t the only feature Kaplan’s group decided to keep during this time. The “In Feed Recommendations” feature was also kept, despite the fact that it was pushing right-wing outlets even though the feature wasn’t supposed to be pushing political content at all. Once again, it was fears of conservative accusations of ‘shadow-banning’ that apparently drove these decisions. And what’s more remarkable is that we are hearing Facebook’s employees explicitly stated these fears as the reasons for not implementing these changes. It’s one thing to informally make these decisions based on fears of ‘shadow-banning’ charges and come up with a different formal excuse. But it’s another level of capitulation if Facebook’s own internal memos cited those fears as the explicit reason to keep these policies in place. Facebook effectively ‘shadow-unbanned’ the far right:
Finally, we have the example of Kaplan’s team actively thwarting the implementation of the News Ecosystem Quality (NEQ) metric that could have kept the worst disinformation sources out of people’s news feeds. It was only after the insurrection that Facebook allowed the changes to take place:
But it wasn’t always Kaplan enforcing this protection racket. When it came to Alex Jones, it was Zuckerberg himself who stepped in to prevent Jones’s deplatforming. Why? Because Zuckerberg didn’t seem to actually see Jones as a hate figure. So an entirely new rule system had to be carved out to allow Jones to stay on the platform, an action that employees directly attribute to the extreme hesitancy to enforce the rules in 2020:
It’s the kind of anecdote about Zuckerberg that raises a question rarely asked about the guy: so what does he actually believe? Like, does he have some sort of political orientation? If so, what is it? Because it’s long seemed like Zuckerberg simply had no discernible moral core beyond caring about making money and amassing power. And protecting Alex Jones has a clear commercial motive for Facebook. But when we learn about his seemingly warm feelings towards Alex Jones, we have to ask: is Zuckerberg straight up red-pilled? Because it’s not exactly a huge leap to go from ‘assh*le who only cares about money and power’ to ‘far right ideologue aligned with Alex Jones’. It’s arguably not a leap at all.
So is the owner of the greatest disseminator of far right propaganda in history also a consumer of that propaganda? It would explain a lot. Not that being an assh*le who only cares about money and power wouldn’t also explain a lot. Still, given that Facebook has made itself into the premier global purveyor of right-wing disinformation under Zuckerberg’s rules, the question of whether or not Zuckerberg himself is actually a far right nut job is a pretty important one. Especially now that Facebook has transitioned from being the disinformation purveyor’s platform-of-choice to the insurrectionist’s platform-of-choice:
“Whilst the data doesn’t show definitively what app was the most popular amongst rioters, it does strongly indicate Facebook was the rioters’ preferred platform. Previously, Forbes had reported on cases where Facebook users had publicly posted their intention to attend the riots. One included the image of a bullet with the caption, “By Bullet or Ballot, Restoration of the Republic is Coming.” The man who posted the image was later arrested after posting images of himself at the Capitol on January 6, according to investigators. In other cases, the FBI found Facebook users had livestreamed their attack on the building. As the Washington Post previously reported, the #StopTheSteal hashtag was seen across Facebook in the days around January 6, with 128,000 users talking about it, according to data provided by Eric Feinberg, a vice president with the Coalition for a Safer Web.”
Facebook: the insurrectionists’ preferred platform. The numbers don’t lie.
Now, on the one hand, it is true that Facebook probably has the largest footprint in the DOJ filings because it’s the biggest social media platform that’s most widely used. But on the other hand, it’s also the case that Facebook has remained the most widely used social media platform for the right-wing precisely because of the steps taken by Facebook executives like Zuckerberg and Kaplan to ensure the platform doesn’t crack down too hard on the Nazis, fascists, and anyone else with an internet connection. Or at least anyone else with an internet connection espousing right-wing disinformation.
Following up on recent reports about Mark Zuckerberg’s and Joel Kaplan’s interference with the enforcement of Facebook’s rules, which allowed right-wing figures like Ben Shapiro to keep getting pushed on unsuspecting users by Facebook’s In Feed Recommendations (IFR) algorithm despite a ban on IFR political content, here’s a report about a far more alarming example of Facebook’s ‘recommended’ content pushing users towards radicalism:
The final witness in the prosecution of the Michigan COVID kidnapping plotters — the interstate militia plot to kidnap Michigan’s Governor Whitmer, hold her on trial, and execute her — testified on Friday during the case’s preliminary exam. The confidential FBI informant went only by “Dan” during the testimony. “Dan” became an FBI informant after he became aware of the plot and agreed to cooperate with law enforcement.
So how did “Dan”, a self-described libertarian, become part of this plot? That’s where it gets scandalous for Facebook, in a scandalously typical manner: Facebook’s algorithm appears to have served up a suggestion to Dan that he join a Facebook group called the “Wolverine Watchmen”. When Dan clicked on the suggested page, a few questions popped up for him to answer. After answering the questions in a presumably satisfactory manner, Dan was admitted into the group and told to download an encrypted messaging app called Wire. The app prohibits screenshots and periodically deletes all messages, so it’s basically designed for sensitive group communications.
After joining the group, Dan began what is described as a journey from lockdown protests at the Michigan state Capitol to rural training exercises with members of the group who expressed a desire to hurt and kill law enforcement and politicians. In an echo of the numerous reports of far right ‘Boogaloo’ members joining the anti-police-brutality protests over the summer in the hopes of instigating more mayhem and violence, Dan reports attending a Black Lives Matter protest in Detroit, with the group going there envisioning a possible gunfight with police if pepper spray was used on protesters.
So all it required for Dan to go from random libertarian to domestic-terrorist plotter was a Facebook group suggestion. That was it. Facebook basically recruited ‘Dan’ into this terrorist group. It raises the obvious and alarming question of how many other ‘Dans’ are out there. Especially ‘Dans’ who don’t decide to go to the FBI and just remain part of the plot. How many other ‘Dans’ did Facebook’s algorithms attempt to recruit into a far right domestic terror group last year? 10? 1,000? 10,000? We have no idea. We just know that at least one person, ‘Dan’, was recruited into a domestic terror plot by Facebook’s ‘suggested groups’ algorithm, and ‘Dan’ probably wasn’t the only one:
“Dan described learning of the group — known as the Wolverine Watchmen — through a Facebook algorithm that he believed made the suggestion based on his interactions with other Facebook pages that support the Second Amendment and firearms training.”
Facebook just can’t help itself. It simply must connect the world, including the world of terrorists. How many other people living in Michigan who interacted with Second Amendment and firearms training pages got the same suggestion? It’s almost surprising the Michigan coup plot wasn’t bigger in light of this revelation.
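For a sense of how little machinery it takes for an interaction-based ‘suggested groups’ feature to nudge a gun hobbyist toward a militia group, here is a minimal, purely illustrative sketch. The toy data, names, and scoring logic are assumptions made for the example, not Facebook’s actual recommendation system:

```python
# Hypothetical sketch of an interest-overlap "suggested groups" recommender.
# The data, names, and scoring are illustrative only; this is not
# Facebook's actual algorithm.
from collections import Counter

# Which users have interacted with which pages or groups (toy data).
interactions = {
    "dan":    {"2nd Amendment News", "Firearms Training Tips"},
    "user_a": {"2nd Amendment News", "Firearms Training Tips", "Wolverine Watchmen"},
    "user_b": {"Firearms Training Tips", "Wolverine Watchmen"},
}

def suggest_groups(user, interactions, top_n=3):
    """Recommend pages/groups popular among users with overlapping interests."""
    my_pages = interactions[user]
    votes = Counter()
    for other, pages in interactions.items():
        if other == user:
            continue
        overlap = len(my_pages & pages)   # how similar the other user is
        for page in pages - my_pages:     # pages this user hasn't joined yet
            votes[page] += overlap        # weight suggestions by similarity
    return [page for page, _ in votes.most_common(top_n)]

# A user who only followed gun-hobbyist pages gets nudged toward the militia group.
print(suggest_groups("dan", interactions))  # ['Wolverine Watchmen']
```

Any recommender that rewards overlap with other users’ memberships will behave this way whenever an extremist group is well represented among the overlapping users; the disturbing part is how ordinary the mechanism is.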
Then there’s the question of just how prevalent were these kinds of militia groups at the various police brutality protests across the summer of 2020. We just keep learning about these far right infiltrators:
But perhaps the biggest question raised by this disturbing story is just how often other terror plots have been orchestrated in this manner over Facebook over the past year or so as the ‘Boogaloo’ movement transitioned into the Trumpian ‘Stop the Steal’ post-election insurrection. Was the “Wolverine Watchmen” group the only Facebook domestic terror group casually using a simple questionnaire to filter recruits? Because it really is quite remarkable how easy it was for ‘Dan’ to join this group. Facebook served up the suggestion to join the group, and upon clicking the suggested link you get a few questions. Answer the questions in a predictably ‘correct’ manner and you’re in? That’s it? Is this typical for far right militia groups? Because if it is typical, there are probably A LOT more groups like this out there. One of the natural barriers to domestic terror plots is the fact that you have to get two or more people who are crazy enough to attempt such a plot to already know each other before the plotting starts, and this model of casual internet recruitment breaks that barrier. You know, kind of like the ISIS recruitment model. Except in this case, the plotters got together to meet in person. ISIS typically recruits, radicalizes, and gives orders from afar. So what took place with this Michigan story is like a wildly more successful version of the ISIS recruitment model. A wildly more successful version that wouldn’t have been possible without Facebook.
Of all the disturbing questions raised by the January 6 Capitol insurrection, one of the most disturbing is whether or not any of the high-level orchestrators of the event — from Roger Stone to then-President Trump — will face any legal repercussions over the roles they played in making it happen. But perhaps an even more disturbing question is whether any of these leading figures will face repercussions after the next militia uprising they instigate. Because it’s hard to imagine Jan 6 was a one-off event, especially if the people who led it remain free to lead another one. The Office of the Director of National Intelligence issued a report a couple of weeks ago identifying “militia violent extremists” as being among the “most lethal” public safety threats, after all. A threat that persists long after the insurrection. Anyone who played a leadership role in that insurrection and continues to defend it has basically been playing a leadership role in all of the yet-to-come militia violence that authorities are now warning us about.
But, of course, the leadership roles in the lead up to the insurrection shouldn’t be limited to political figures like Trump or Stone. What about Facebook’s role in instigating the January 6 insurrection? As we’ve seen, the platform has been consistently behind the curve in addressing long-standing complaints that it’s allowing itself to operate as a radicalization tool. It’s a pattern exemplified by the story of how one of the members of the Michigan militia plot to kidnap and execute Governor Whitmer was effectively recruited into the plot via Facebook’s algorithmic suggestions.
So what sort of leadership role is Facebook playing today in the militia movement’s ongoing recruitment and radicalization campaigns? Well, according to a new report by BuzzFeed and the Tech Transparency Project (TTP), Facebook continues to lead the way as the premier militia recruitment and radicalization platform. According to their report, as of March 18, Facebook hosted more than 200 militia pages and groups and at least 140 included the word “militia” in their name.
But is Facebook still prompting users to join militia groups, as was reported in the Michigan militia kidnapping case? Yep! For example, when BuzzFeed visited the “East Kentucky Malitia” page, Facebook suggested visiting the pages of the Fairfax County Militia and the KY Mountain Rangers. The KY Mountain Rangers page, in turn, led to the page for the Texas Freedom Force, one of the groups currently under investigation for the role its members played in the insurrection. Other militias are continuing to use Facebook for organizing events, like the DFW Beacon Unit recently posting about holding a training session. Facebook is even automatically creating pages for militias that don’t already have one, based on the content people are sharing. Even militias that aren’t using Facebook to recruit are probably getting recruits thanks to the platform. This is all still happening. Because of course it is. This is Facebook. It would be weird if they weren’t somehow promoting the far right:
“More than 200 militia pages and groups were on Facebook as of March 18, according to a new report published Wednesday by the Tech Transparency Project (TTP), a nonprofit watchdog organization, and additional research by BuzzFeed News. Of them, at least 140 included the word “militia” in their name.”
These aren’t exactly stealth militias. But Facebook is happy to host them. Months after the Jan 6 insurrection. It’s clearly an important market for the company. So important that Facebook’s algorithm continues to create pages for militias that don’t yet have them:
And then there’s the continued algorithmic “suggestions” that push users to militia pages, presumably whenever they visit a page also frequented by militia members:
So as we can see, Facebook is living up to its mission statement of connecting the world. Like connecting people to militias. Even after Facebook promised it would stop doing this. It just keeps happening. Almost like the company can’t control itself. It’s like a new form of Facebook addiction just for Facebook. An addiction so powerful that Facebook appears to have been largely unable to do anything about its promotion of militia groups months after BuzzFeed and TTP issued basically the same report back in October. Yes, five months before BuzzFeed and TTP issued this report, and two and a half months before the Jan 6 insurrection, they issued basically the same report, detailing how Facebook was continuing to host militia pages despite pledges to remove them from the site. Militia pages were still there, some newly created, and still getting recruits over Facebook. Oh, and still able to purchase ads for their militias. Targeted ads that utilize Facebook’s micro-targeting algorithms to ensure that the people most likely to join the militia are the ones who see them:
“Despite efforts by Facebook to ban right-wing militant organizations, a new report published Monday has found that some of those groups continue to organize and run pages on the social network. Facebook also continues to profit from ads placed by extremists despite an announcement earlier this year that said it would ban all ads that “praise, support or represent militarized social movements.””
Facebook just can’t kick the habit. Selling ads for extremists is what it does. Remarkably affordable ads with a shocking reach: for $100 you can potentially send your pro-militia message to 500,000 to 1 million people. And not just random people. Targeted people selected by Facebook’s algorithms to be the most likely to react to that ad. Radicalization made affordable:
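As an aside, a quick back-of-the-envelope calculation using only the figures cited above shows how cheap that amplification is on a per-impression basis:

```python
# Implied cost per thousand people reached, using only the figures cited
# in the report: $100 reaching an estimated 500,000 to 1,000,000 people.
budget = 100
reach_low, reach_high = 500_000, 1_000_000

cpm_high = budget / (reach_low / 1000)   # $0.20 per thousand at the low end of reach
cpm_low = budget / (reach_high / 1000)   # $0.10 per thousand at the high end of reach

print(f"Implied cost: ${cpm_low:.2f} to ${cpm_high:.2f} per 1,000 people reached")
```

In other words, by these figures, reaching a thousand algorithmically selected prospects costs a dime or two.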
Again, this BuzzFeed/TTP report was released back in October, two and a half months before the insurrection. Five months later, we get a new report and nothing changed. Well, except the insurrection happened, inspiring these groups to even greater ambitions at the same time mainstream conservatives became even more radicalized under a wave of propaganda telling them the election was stolen and the insurrection was either justified or didn’t actually happen. So that’s new.
Here’s a pair of articles about the seemingly ever-growing influence and power of Peter Thiel and the role he’s playing in shaping the Republican Party. The first article is a report about the interest Thiel has taken in certain Republican primary races. Specifically, Thiel is backing primary challengers to the dwindling number of Republicans who have voiced opposition to the role Donald Trump played in fomenting the January 6 Capitol insurrection. As the article notes, this comes a month after Thiel reportedly met with Trump privately for over an hour at Trump’s Bedminster golf club. So Thiel is fully on board with the GOP’s MAGA purge and is now financing it. He donated the maximum-allowed $5,800 to Harriet Hageman, who is running to unseat Liz Cheney in Wyoming, and also donated to a challenger to Rep. Jaime Herrera Beutler, who, like Cheney, voted for Trump’s impeachment in January.
It’s the kind of report that raises a question we probably should have been asking all along: what role did Thiel play in fomenting the Capitol insurrection? After all, not only does Thiel hold the kind of political philosophy that would have no objection to seizing power via a political insurrection, it’s hard to think of something that’s more on brand for Thiel. This is the anti-democracy oligarch, after all. The guy’s life is basically a slow-motion coup against society. It’s almost inconceivable that Thiel wouldn’t have fully endorsed Trump succeeding with a coup attempt. The only thing that could plausibly give Thiel pause about whether or not to back such an act is if he thought it didn’t stand a chance of working.
But even then, those fears of failure would really only be a barrier to Thiel openly backing the insurrection. What about secret support? Just imagine how much relevant information a company like Palantir, for example, could have developed in relation to the insurrection. Or how about Thiel’s influence at Facebook? As we’ve seen, Facebook played a crucial role in facilitating the insurrection by ensuring the ‘stolen election’ disinformation was allowed to flow freely. So we really have to ask: what role did Thiel play in those decisions by Facebook? And that brings us to a recent excerpt from a new biography of Thiel by Max Chafkin that describes the incredible influence Thiel has over Zuckerberg and, therefore, all of Facebook. As Chafkin notes, when Thiel made his initial $500,000 investment in Facebook, he did it under the condition that Facebook be reorganized to make Zuckerberg a kind of corporate dictator. And as Chafkin details, in one instance after another Zuckerberg has demonstrated a remarkable loyalty to Thiel and views him as a political ally. So for all of the justifiable heat Zuckerberg and Sheryl Sandberg have taken over Facebook’s pro-insurrection role, we really should be asking just what Thiel was doing to promote the insurrection leading up to January 6. And since Thiel is clearly still on Team Trump and actively purging the GOP of anti-insurrectionists, we have to also ask what steps Thiel is currently taking to ensure the next insurrection works:
“Thiel, one of the most sought-after GOP donors, has emerged as a financial force behind the effort to unseat Trump critics.”
Remember all those reports from July 2020 about Thiel giving up on Trump? He clearly had a change of heart. But given all the events that transpired since then, the question we really should be asking is whether or not those announcements were actually part of a plan by Thiel to hide his support for Trump. Don’t forget that it was already looking like the Trump team might effectively try to cancel the 2020 election over pandemic concerns by that point in time. The writing was already on the wall that the 2020 election wasn’t going to end well. So we have to ask: did Thiel foresee the insurrection, or something as extreme as a canceled election, and consciously distance himself from Trump in anticipation of that? Because as Thiel is making abundantly clear now, he’s totally cool with Trump and everything that Trumpism is about these days. And Trumpism is basically about the ‘stolen election’ Big Lie these days. That’s it. So Thiel is clearly fully on board with the ‘Stolen Election’ narrative, enough so to finance the movement to ensure that’s the core plank of the GOP going forward:
So if Thiel is an insurrectionist, we have to ask: if Peter Thiel had indeed wanted to assist the coup attempt, what could he have done given his incredible access to government information and influence over Facebook? These are the kinds of questions the teams investigating the insurrection really should be asking. Because as Max Chafkin’s biography makes clear, supporting political insurrection is about as ‘on brand’ an action as we could possibly expect from Thiel given his lifetime of embracing an ideology of cheating to get what you want:
” Anyone who has followed Thiel’s career will find much to recognize in the Route 17 encounter. The reflexive contrarianism, the unearned confidence, the impossibly favorable outcome — they feel familiar, both in Thiel himself and the companies he helped create. Today, of course, that scrawny chess nerd is the billionaire co-founder of PayPal and Palantir and arguably the greatest venture capitalist of his generation, with a sideline as patron of such far-right causes as the 2016 candidacy of Donald Trump. Thiel (who did not comment for this article, which is adapted from my new biography, The Contrarian) is perhaps the most important influence in the world’s most influential industry. Other Silicon Valley personas may be better known to the general public, including Jeff Bezos, Elon Musk, and even a few who don’t regularly launch rockets into space. But Thiel is the Valley’s true idol — the single person whom tech’s young aspirants and millennial moguls most seek to flatter and to emulate, the cult leader of the cult of disruption.”
Peter Thiel’s ethos is the heart and soul driving contemporary Silicon Valley. Take a moment and digest that. The guy who wrote a book articulating his Ayn Rand-ian philosophy, which views company founders as godlike and holds that monarchies are more efficient than democracies, is arguably the most influential person in Silicon Valley. A whole generation of tech entrepreneurs have adopted his philosophy:
And note how Thiel was apparently a defender of South Africa’s apartheid system, asserting “it works”, back when he was a student at Stanford. Recall that Thiel spent time living in South Africa growing up. This wasn’t just a casual embrace of apartheid. Also note that the former student who recounted Thiel sharing these views was an African American woman. Thiel was willing to defend apartheid to a black student:
Then we get to the story of Thiel’s role in the founding of PayPal. A story that appears to involve Thiel’s team stealing the underlying idea of making PayPal an internet-based currency from Elon Musk’s rival X.com, which was working on digital currencies at the same time. Keep in mind that, of all of Thiel’s various tech-related ventures, it was really only PayPal where one could make a case that Thiel himself provided some sort of technical innovation, as opposed to just being the guy providing the financing for the venture. And even in this case, Thiel’s original vision was far less revolutionary — having PayPal transactions take place only via Palm Pilots directly communicating with each other — and he ended up stealing the real innovation from Musk’s company. It underscores how Thiel’s primary genuine innovation is limited to the moral ‘innovations’ he kept coming up with to get ahead. In other words, he innovated selfish rule-breaking and conniving. That’s his grand contribution to humanity. Way to go:
Then, after PayPal and X.com merge — eliminating the competition — Thiel leaves the merged company, but is later installed back into power after his sycophants stage a corporate coup and replace Musk with Thiel. Thiel then proceeds to attempt to funnel PayPal’s funds into his private hedge fund, Thiel Capital. Another moral ‘innovation’:
There’s even a term for Thiel’s form of growing a company by breaking any ethical code that gets in your way: “growth hacking”. It’s widely credited to Thiel and celebrated across the industry. From this perspective, the Capitol insurrection was really just a form of political growth hacking:
And then there’s the fascinating slew of questions about just how much influence Thiel holds over Mark Zuckerberg. The fact that Thiel arranged for Zuckerberg to effectively be an absolute corporate dictator as a condition of Thiel’s initial investment in Facebook is a hint. But it’s the behavior of Zuckerberg in the years since that’s the biggest hint. There is simply no denying that Zuckerberg acts as if he takes his orders from Thiel. And that’s why we have to ask the question: was Thiel directing Zuckerberg to ensure Facebook remained a pro-insurrection platform throughout the post-election period so the Trump team could reliably use it to push the ‘stolen election’ narrative? All available circumstantial evidence is pointing in that direction:
Should we expect investigators to ever seriously look into Thiel’s possible role in the insurrection? Of course not. The guy is effectively untouchable and arguably the most powerful person in Washington DC. He owns a company that’s effectively the privatized NSA, after all. He’s probably blackmailed half of Congress by now. And it’s Thiel’s apparent untouchability that makes the question of his insurrectionist role all the more crucial. Donald Trump is going to die one of these years. Steve Bannon could end up in jail over his refusal to cooperate with investigators. But Thiel is untouchable, as he has been seemingly his entire life. And at this point it’s hard to see who is better positioned to control the future of the GOP than Peter Thiel. The guy positioned to be the post-Trump shadow-leader of the GOP for the next generation appears to have figured out how to back an insurrection while staying far enough back in the shadows to avoid facing any repercussions. What happens when arguably the most powerful person in the country can foment insurrections without it even being noticed? The US has apparently decided to find out.
Of all of the warped, self-projecting grievances that define the contemporary right-wing, perhaps the most obviously absurd is the grievance about social media suppression of right-wing voices. Beyond the fact that these grievances are typically expressed on the social media platforms themselves, there’s the simple reality that social media platforms keep getting caught in scandals centered on policies of actively turning a blind eye towards right-wing abuses. Facebook was literally the primary January 6 Capitol insurrection recruitment tool, after all. So with that ongoing farcical narrative in mind, here’s a story about Silicon Valley actually engaging in exactly the kind of behavior described in that narrative. Well, almost exactly: just days before Nicaragua’s upcoming November 7 elections, Facebook and Instagram purged thousands of accounts. All of them left-wing accounts, including the accounts of elected government officials. A nationwide purge of the Left is taking place in Nicaragua, compliments of the very same Silicon Valley giants routinely accused of silencing conservative voices. Because of course that’s what’s happening.
The purge included media outlets and some of the most widely followed personalities in the country. What was the basis for all this? Allegations these accounts were fake and being operated by government troll farms. The only problem: this was demonstrably not true, and real people have a way of validating their existence. But when all of these very real people flooded onto Twitter to post videos of their very real selves, Twitter proceeded to delete those accounts too. That’s the nature of this op. It’s a profoundly bad faith op being executed in real-time and ignored by virtually the entire world. In other words, it’s succeeding. Silicon Valley silenced Nicaragua’s left and barely anyone noticed. Mission accomplished.
Oh, and it just happens to be the case that the figures inside Facebook who compiled and promoted the bogus report have ties to the US national security state. The author of the report and leader of Facebook’s “Threat Intelligence Team”, Ben Nimmo, is a former NATO press officer and consultant for the Integrity Initiative (a real life troll farm). Nimmo also served as head of investigations for Graphika, a DARPA-backed initiative set up with funding from the Pentagon’s Minerva Initiative. The head of security policy at Facebook, Nathaniel Gleicher, also promoted the report. Gleicher was previously director for cybersecurity policy at the White House National Security Council. David Agranovich, Facebook’s “director of threat disruption” who also shared Nimmo’s report, served as director of intelligence for the White House National Security Council. It raises questions about the national security backgrounds of whoever was making these decisions at Twitter.
So we finally have an example of social media selectively targeting users of a specific political ideology. The legends are both true and the opposite of the right-wing narrative, because that’s how strategic projection/trolling works:
“The purges exclusively targeted supporters of the socialist, anti-imperialist Sandinista Front party. Zero right-wing opposition supporters in Nicaragua were impacted.”
It’s not subtle. Facebook, Instagram, and Twitter took explicitly partisan steps to silence the Left in Nicaragua. The whole left across society, from everyday activists to elected officials, including many of the most followed personalities in the country. A giant coordinated effort carried out under the false pretense that these were fake accounts. First Facebook and Instagram carried out the mass purge under the pretense that they were fake accounts. And when these real people went onto Twitter to prove they were real, Twitter proceeded to ban them too:
Facebook even published a report on November 1 explaining the purge as being in response to “coordinated inauthentic behavior”. A report obviously made in bad faith. So we have a coordinated Silicon Valley move against alleged bad faith activity made in bad faith:
So it should come as no surprise to learn that this coordinated bad faith action by these Silicon Valley giants just happens to align with the US’s long-standing policy of squashing Central American leftist movements. In other words, we’re watching the latest US op targeting Nicaragua’s Left in action. An op executed by Facebook executives who just happen to have previously worked in national security jobs with the US government:
Finally, note this ominous warning from these activists: what Silicon Valley is doing is making Nicaragua safe for the execution of a right-wing coup. The voices who may have been capable of informing the world about what’s happening have been preemptively silenced. It’s part of the reason this story goes far beyond Nicaragua. We’re watching what could be interpreted as pre-coup digital prep work:
So at this point it sounds like we shouldn’t be entirely surprised to hear about a new right-wing coup attempt in Nicaragua in the coming weeks. But we should maybe be a little surprised if we hear about it from any of the dissenting voices in the country, who are of course all government trolls anyway.
When are people in the United States going to realize that these goddamned social media websites are nothing but tools of the national security state?
Hell, the Google search engine is literally the end result of DARPA scientist William Hermann Godel’s and Saigon Embassy Minister Edward Geary Lansdale’s computer-based “Project AGILE” high-value target identification “Subproject V” program for the “hunter-killer” teams that operated within the Civil Operations and Rural Development Support-run Provincial Reconnaissance Unit, which was later renamed “ARPANET”.
I mean, that was the original use of the “ARPANET” in Vietnam.
Period.
Maj. Gen. Edward Geary Lansdale used it as a computer program to strategically target and assassinate thousands of innocent human beings by building electronic “meta-data” dossiers on them (exactly what modern day social media does).
And Department of the Navy Deputy Director of the Office of Special Operations William Hermann Godel, apparently used the same computer network to traffic tons of narcotics out of Indochina on Civil Air Transport aircraft & Sea Supply Corp. ships.
It can be argued that the Internet as we know it started out in Godel and Lansdale’s “Combat Development and Test Center and the Vietcong Motivation and Morale Project”, which conducted some of the most heinous and vile torture and interrogation experiments against Indochinese civilians, a joint CIA — United States military operation that was later known as the Phoenix Program, which was overseen by Robert William “Blowtorch Bob” Komer and William Egan Colby...
I have often considered that the “look-alikes” that surrounded Lee Harvey Oswald and James Earl Ray may have been selected in a “Project Agile” related subproject computer server.
Potentially through the Bundesnachrichtendienst’s ZR/OARLOCK computer server program?
Purportedly, William H. Godel also oversaw all Civil Air Transport operations in South Vietnam for the better part of half of the 1960’s.
Considering David William Ferrie was an aide to the National Commander of the Civil Air Patrol by personal order of USAF Brig. Gen. Stephen Davenport McElroy (himself commander of the Ground Electronics Engineering-Installation Agency with Headquarters at Griffiss Air Force Base, N.Y. in 1964), I personally find the idea more than a slight possibility...
...but I digress.
@Robert Ward Montenegro–
Yes, indeed. The whole damned internet is an “op” and always was.
As discussed in the “Surveillance Valley” series of For the Record programs, not only is the internet itself, and social media in particular, an “op” but the so-called privacy advocates, including St. Edward [Snowden] and St. Julian [Assange] are a key part of the vacuum cleaner operation.
This is the domestic Phoenix Program made manifest.
It is interesting, in particular, to contemplate the Cambridge Analytica affair.
https://spitfirelist.com/for-the-record/ftr-1077-surveillance-valley-part-3-cambridge-analytica-democracy-and-counterinsurgency/
In his speech to the Industry Club of Dusseldorf, Hitler equated democracy with Communism, which went over very well.
The purpose of Project Agile, and the Internet, is “Counter Insurgency.”
If you are going to do that in a Hitlerian context, you have to know what people are thinking and doing.
Democracy=Communism is the dominant equation.
We’re doomed.
Thanks for your continued input and dedication.
Best,
Dave
We got another update on the issue of Facebook’s tolerance and embrace of Spanish-language right-wing disinformation. Recall how Spanish-language media in the US was getting overwhelmed with Q‑Anon-style far right memes in 2020, which arguably swung the state of Florida towards Trump. Well, the Los Angeles Times has a new piece out describing the internal Facebook efforts in the final weeks of the 2020 campaign to deal with the deluge of Spanish-language misinformation on its platform. Although it’s not so much describing an effort to combat this disinformation as an internal effort to justify the lack of action.
As the article describes, activist groups were finding that misinformation flagged and taken down in English was remaining up on Facebook, and only slowly being taken down, when it showed up in Spanish. So what was the company’s response? According to a 2020 product risk assessment, Spanish-language misinformation detection remains “very low-performance,” and yet the suggested response wasn’t to provide more resources towards combating this misinformation. No, it was to “Just keep trying to improve. Addition of resources will not help.” As the article notes, another employee replied to this suggestion by pointing out that “My understanding is we have 1 part time [software engineer] dedicated on [Instagram] detection right now.” So Facebook’s implied internal response to the deluge of Spanish-language disinformation on its Instagram platform during the 2020 election cycle was to ask that one part-time software engineer to try harder.
Keep in mind the recent context of this report: Facebook’s decision this month to take down nearly everyone in Nicaragua associated with the left-wing Sandinista government. Including prominent private supporters of the government. A nationwide purge of the left. That just happened like two weeks ago, right before the elections that the US is now declaring a fraud.
Now juxtapose Facebook’s actions in Nicaragua with its behavior in Honduras. As we’ve seen, Facebook was essentially tolerating the inauthentic use of its platform by the right-wing Honduran government to carry out misinformation campaigns, including encouraging people to join migrant caravans while pretending to be prominent left-wing organizers on Facebook. Yes, the Honduran government was literally caught hyping migrant caravans to the US by faking the Facebook profiles of real migrant activists, and Facebook basically did nothing about it other than protect the identity of the perpetrator. As we’re going to see in an update on that story from back in April from Sophie Zhang, Zhang informed the company about the Honduran government’s activities in August of 2018, but the company dragged its feet on doing anything about it for 11 months.
Oh, and here’s the best/worst part of this story: according to Facebook’s own metrics, the resources it puts towards combating Spanish-language disinformation are eclipsed only by the resources it puts into combating English-language disinformation. Spanish-language content gets the second-largest chunk of Facebook’s anti-disinformation resources. So while these stories describe an utter nightmare of disinformation having taken hold in the Spanish-language communities on these platforms, the situation described here is actually pretty good, relatively speaking, by Facebook’s internal standards:
“Yet for all the concern from within — and criticism from outside — Spanish is a relatively well-supported language — by Facebook standards.”
Yep, that nightmare of a report on Facebook’s near complete lack of disinformation management for Spanish-language content was actually a feel-good story for Facebook, relatively speaking. Spanish has the second highest level of support inside the company. The situation is even worse for other languages:
This is paired with the observations of activist groups that when misinformation showed up in both English and Spanish, only the English content was flagged and removed. Facebook wasn’t able, or willing, to remove Spanish-language content even after it had already determined that content to be misinformation:
And then there’s the 2020 internal Facebook report that explicitly stated “Addition of resources will not help” after noting that Spanish-language misinformation detection remains “very low-performance”. A conclusion other Facebook employees understandably took issue with, observing that a single part time software engineer was dedicated to Spanish-language targeted misinformation on Instagram. A single part time employee. But no additional resources are needed:
And yet it’s hard to ignore the underlying conclusion that the cynical anonymous Facebook employee who concluded that an “Addition of resources will not help” was ultimately speaking for Facebook’s management and reflecting the company’s policy today. A policy towards misinformation that’s apparently, “Well, we tried! Nothing more we can do but try harder!”
And in case it wasn’t clear that it’s specifically right-wing misinformation, and not just generic misinformation, that is inundating these Spanish-language Facebook-owned platforms, here’s an update from back in April on the story of Facebook’s willing toleration of the right-wing government of Honduras using Facebook to simultaneously promote itself while fomenting disinformation campaigns against its left-wing opponents and activists. As we’ve already seen, there is ample evidence that the Honduran government was literally waging a secret campaign to encourage people to join the migrant caravans heading to the US in 2017, with pro-government cable TV leading the messaging campaign. But as we also saw, inauthentic Facebook activity was heavily used to amplify the Honduran government’s disinformation message. And yet, when pressed with evidence of this inauthentic activity by government actors who were pretending to be migrant activists promoting the caravans, Facebook refused to identify the bad actors, citing privacy concerns.
That’s all part of the context of the update we got on Facebook’s bad behavior in Honduras back in April. The update came via Facebook whistleblower Sophie Zhang, who had the job of combating misinformation at the company. It was Zhang who uncovered a coordinated disinformation campaign by the Honduran government in August of 2018: 90% of all the ‘fake engagement’ identified in Honduras was associated with the Honduran government. Despite this, the company dragged its feet and took over a year before actually taking any action against this inauthentic behavior, in July of 2019. What was the internal rationale for this foot-dragging? A need to prioritize influence operations targeting the US and Western Europe and to focus on the bad behavior of Russia and Iran. Yep. So at least part of the internal reasoning inside Facebook for why it didn’t need to prioritize Spanish-language misinformation was that it was literally deemed a lower priority:
“Although the activity violated Facebook’s policy against “coordinated inauthentic behavior” – the kind of deceptive campaigning used by a Russian influence operation during the 2016 US election – Facebook dragged its feet for nearly a year before taking the campaign down in July 2019.”
It took Facebook 11 months to stop the overwhelmingly obvious inauthentic behavior of the Honduran government. Compare that to Facebook’s recent takedown of large swathes of Nicaragua’s left-wing society based on unfounded fears of government involvement in their online activities. Why the 11-month delay? Well, it’s just a “bummer”, but Honduras simply wasn’t a priority. The US, Western Europe, and the activities of Russia and Iran are the priorities:
It’s also important to take in this context that this right-wing Honduran government was, at the time, seen as a key US ally in the region:
All of which underscores how Facebook really is operating as a tool of the national security state. It’s an aspect of this whole scandal that highlights the cynical absurdity of US conservatives complaining about Facebook censorship: Facebook is effectively acting as a tool of the US national security state and is constantly finding excuses to promote right-wing disinformation. It’s a reminder that if we really want to get to the bottom of why Facebook is constantly coddling the far right, it requires asking much larger questions about the US national security state’s decades-long coddling of the far right globally. Rather difficult questions.
Why are the two GOP Senate candidates with the closest ties to Peter Thiel jointly pushing a new narrative about Mark Zuckerberg stealing the election for Joe Biden? That’s the question raised by the following story about how Arizona Senate candidate Blake Masters and Ohio Senate candidate JD Vance are both close Thiel associates, both backed by $10 million, and both aggressively promoting the latest ‘stolen election’ conservative narrative. A narrative where Mark Zuckerberg himself stole the election for Biden. Not Facebook, just Zuckerberg and his wife.
So how did Mark Zuckerberg and his wife steal the election for Biden? Through a pro-democracy foundation the pair set up in 2012: the Center for Tech and Civic Life (CTCL), which played a last-minute emergency role in 2020 assisting localities in raising the money and resources needed to run an election during an unprecedented pandemic. As the following Yahoo News piece describes, while the federal government provided $400 million in emergency assistance to localities, that figure was far less than what experts said was needed, and Republicans were blocking additional resources. This is where the CTCL stepped in, providing another $400 million in grants to localities. One study found the money was spent on “increased pay for poll workers, expanded early voting sites and extra equipment to more quickly process millions of mailed ballots.”
So how did this $400 million in emergency grants steal the election for Biden? Well, according to conservative ‘analyses’, the money was disproportionately given to urban counties, which benefited Democrats. Now, the complaint that the group gave more to urban than rural areas is demonstrably absurd. Of course it would and should give more, based on population density alone. But others also point to the CTCL giving grants to Democratic-leaning urban counties that Biden won without giving to Republican-leaning urban counties that Trump won. CTCL replied that it gave grants to all counties that requested them.
So it appears that someone noticed that the CTCL ended up giving disproportionately to Democratic-leaning urban counties vs Republican-leaning urban counties and decided to concoct a ‘Mark Zuckerberg stole the election’ narrative around this. A narrative that conveniently ignores the extensive evidence of the role Facebook played as a key tool for the Republicans and the far right, and also conveniently ignores how the GOP was systematically refusing additional funds to help localities run elections during the pandemic.
And this narrative, which is simultaneously convenient for Facebook but inconvenient for Mark Zuckerberg (and therefore kind of inconvenient for Facebook too), is being heavily promoted by the two Senate candidates with the closest ties to Peter Thiel. What’s going on here? Is this pure theatrics? Don’t forget the secret White House dinner in October of 2019 arranged by Thiel, where Zuckerberg and the White House came to some sort of secret agreement to go easy on conservative sites. Theatrical arrangements between the GOP and Facebook are to be expected.
And yet, this is a highly inconvenient narrative for Mark Zuckerberg personally, being pushed by his apparent mentor. Why is this happening? Is this really theatrics? Or are we getting a better idea of the power moves behind why Mark Zuckerberg finds Peter Thiel to be so indispensable:
“Masters and J.D. Vance, a Republican running for Senate in Ohio, are seeking together to repackage Trump’s deception in a new narrative. Both are backed by $10 million from Thiel, co-founder of PayPal and data mining company Palantir Technologies.”
The two GOP Senate candidates championing this ‘Mark Zuckerberg stole the election for Biden’ narrative just happen to be the two candidates heavily backed by Thiel, the guy who is widely seen as Zuckerberg’s de facto mentor. What’s going on here? Is this purely theatrics? Or an example of how Thiel keeps Zuckerberg in check? Note how Vance and Masters jointly published an op-ed pushing this narrative. It’s like they want the world to know this narrative was a Thiel-financed production:
Crucially, it’s not a narrative that involves electoral shenanigans playing out on Facebook. No, the platform itself has nothing to do with this conspiracy theory, conveniently for both Zuckerberg and Thiel. Instead, it’s a conspiracy theory focused solely on the actions of Zuckerberg’s non-profit, the Center for Tech and Civic Life (CTCL), which was used to channel the $400 million in emergency donations Zuckerberg and Chan made for the purpose of helping localities run elections. One study found the money was spent on “increased pay for poll workers, expanded early voting sites and extra equipment to more quickly process millions of mailed ballots.” As the article notes, this money was given after the federal government itself allocated around $400 million for emergency spending, but Republicans blocked more funds despite experts saying much more was needed. So the Zuckerberg/Chan $400 million was almost like a private matching fund for the federal government’s $400 million. Arguably a very necessary matching fund, given that the GOP was blocking anything more. And according to this narrative, that $400 million was spent in an unbalanced manner that helped Democrats more than Republicans. That’s the big ‘Facebook stole the election for Biden’ conspiracy theory: the observation that the non-partisan Zuckerberg-financed election-assistance activities ended up helping Democrats relatively more than Republicans because they helped cities more than rural areas. A narrative that has absolutely nothing to do with Facebook itself. Again, it’s a remarkably convenient narrative for Thiel, Facebook, and Zuckerberg. At least Zuckerberg doesn’t have to defend himself against more accusations about Facebook directly manipulating people. For Zuckerberg, this must be a refreshing non-Facebook-related accusation, if still an annoying one:
Also note how this narrative about Zuckerberg and the CTCL swinging the election is emanating from a right-wing group that’s promising to put out more ‘research’ on this topic. Research based on the work of William Doyle, who has been arguing that Zuckerberg’s initiative “significantly increased Biden’s vote margin in key swing states.” And yet they’re simultaneously distancing themselves from the wild claims of election fraud made by figures like Mike Lindell and Sidney Powell. So this looks like it could be a next-generation ‘the election was stolen’ narrative. In other words, there’s going to be a lot more put out around this narrative:
Finally, note the CTCL’s response to these accusations: it gave grants to any counties that requested them. So if there was a partisan pattern in how CTCL distributed its grants, it was due to a partisan refusal by conservative-led counties to accept them:
So what’s actually happening here? Thiel and Zuckerberg are reportedly quite close. It’s a demonstrable fact given how long both have remained at Facebook despite all the controversy. And yet the two Thiel protégés running for the Senate are jointly championing this narrative.
Is this pure theatrics? There was that now-notorious Thiel-arranged secret dinner at the Trump White House in October 2019 where Zuckerberg reportedly agreed to take a hands-off approach to conservative content. It’s not like theatrical arrangements between Zuckerberg, Thiel, and the GOP are unprecedented. And it’s hard to overlook how this narrative conveniently ignores the role Facebook itself played in the election.
And yet, as convenient as this narrative is for Facebook and Thiel, it’s still kind of a giant pain in the ass for Zuckerberg. It’s a narrative that casts him as a central villain in the theft of the election for Biden. It’s hard to imagine he’s just chuckling about it all. So, again, what’s going on here?
And that brings us to the following interview with Thiel biographer Max Chafkin, who was asked directly whether or not Zuckerberg should fear Thiel. As Chafkin sees it, while Zuckerberg is powerful enough to fire Thiel from Facebook, he’s unlikely to do so. In part because he values Thiel’s advice. And in part because he doesn’t want the giant headache that would come after he fires him. So there’s an implied understanding that firing Thiel from Facebook would have very real repercussions:
“MC: I think Zuckerberg could fire Thiel. I mean, Mark Zuckerberg is a formidable guy. He’s worth a lot of money. He could afford a war with Peter Thiel, and he could afford the backlash. But I think there’s a question about whether he’d want to, because right now, the reason Thiel is able to get away with what he’s able to get away with, with respect to both serving on the board and being this public critic, has to do with the fact that there would be a price to pay if Mark Zuckerberg fired him, and the price would be it would be a huge freaking story.”
Zuckerberg could fire Thiel. He is wealthy and powerful in his own right, after all. But there would be a price to pay. A price in the form of a big public story about his split with Thiel. A price that has obvious implications when it comes to Zuckerberg’s relationship with the conservative movement. It’s part of what’s so ironic and absurd about this whole situation: Zuckerberg apparently keeps Thiel around, in part, as a kind of shield against even more attacks from the right wing.
And then there’s the one known instance of Zuckerberg seemingly trying to fire Thiel, in 2017. Keep in mind this would have been when Thiel’s public toxicity was probably at its highest, following his open closeness with the new Trump administration. Zuckerberg reportedly asked Thiel if he thought he should resign, Thiel said no, and Zuckerberg didn’t fire him. It tells us something about the nature of their relationship. Zuckerberg fears Thiel too much to fire him:
Now here’s a piece with a few more details on that 2017 attempted firing of Thiel, from Max Chafkin’s biography of Thiel. According to Chafkin, the whole incident took place after the NY Times published a leaked email from Facebook board member Reed Hastings telling Thiel that his endorsement of Trump reflected poorly on Facebook. Zuckerberg then asked Thiel to step down. “ ‘I will not quit,’ he told Zuckerberg. ‘You’ll have to fire me.’ ” He was not fired, obviously. So it wasn’t simply that Zuckerberg asked Thiel if he felt he should resign. Zuckerberg asked Thiel to resign, Thiel refused, and won:
““ ‘I will not quit,’ he told Zuckerberg. ‘You’ll have to fire me.’ ” He did nothing when Thiel refused.”
Zuckerberg tried to fire him. Tepidly, by asking for a resignation. And that was as far as he was willing to go. Why? Is he reliant on Thiel? Or scared of him? Well, based on what we’ve heard, it’s probably a bit of both: he relies on Thiel, in particular when it comes to Facebook’s relationship with the conservative movement, which is precisely why he’s so terrified of Thiel. Without Thiel’s protection, the GOP would be even more publicly oppositional towards Facebook:
Don’t forget that Thiel’s potential leverage over Zuckerberg isn’t limited to his role as a far-right beastmaster who can hold the wolves at bay. Thiel’s role as the co-founder of Palantir probably gives him all sorts of leverage over both Facebook and Zuckerberg personally that we can barely begin to meaningfully speculate about.
And then there’s the fact that the two have known each other and operated in the same social circles for years. Straight-up blackmail could even be a possibility. Is Thiel blackmailing Zuckerberg?
Another possibility is that Thiel has gotten wind of Zuckerberg planning to finally fire him for real, and the whole Vance/Masters public relations ploy is Thiel’s warning to Zuckerberg. Might that be what we’re looking at here? Who knows, but whether or not Mark Zuckerberg truly is the most powerful person at Facebook, he doesn’t behave as if he believes that himself.
It’s the end of an era. It was an awful era. But at least it’s over. Not that we have any reason to believe the general awfulness of the era is going to recede: Peter Thiel is leaving the board of Facebook.
The particular reasons for Thiel’s departure aren’t entirely clear. Or rather, they’re ominously vague. We’re told he’s leaving in order to focus on the 2022 mid-term elections, which Thiel reportedly views as crucial to changing the direction of the country. But the focus isn’t just on getting Republicans elected to office. Thiel is trying to ensure it’s the pro-MAGA candidates who ultimately win, with 3 of the 12 House candidates he’s backing running primary challenges to Republicans who voted in favor of impeaching Trump over the Jan 6 Capitol insurrection. So Thiel is now basically pro-insurrection, and trying to ensure the GOP remains the party of insurrection and becomes even more pro-insurrection going forward. In that sense, he’s not incorrect. 2022 really is crucial to changing the direction of the country. It’s going to be the greatest opportunity fascists like Thiel have ever had to ultimately drive a stake through the heart of the dying husk of American democracy, with a pro-insurrection GOP poised to retake control of the House and potentially engage in criminal prosecutions of the Democrats who dared investigate the insurrection.
So is Thiel truly leaving the Facebook board just to focus on the 2022 mid-terms? It doesn’t really add up. It’s not like he hasn’t been doing exactly that for years. So what’s new? Why now? Is there a new scandal involving secret deals between Facebook and the GOP, like the 2019 secret dinner party at the White House? Does it have to do with Thiel’s investments in Boldend, a hacking firm offering products that can hack Facebook-owned WhatsApp? Or is Thiel perhaps planning on using Facebook’s propaganda power to mobilize GOP voters in a manner that will be so scandalous the company needs to preemptively part ways? That’s what makes this announcement so ominously vague. The expressed reason — spending more time on helping the GOP win elections — doesn’t really make sense. Thiel is far more helpful to the GOP’s election efforts when he’s sitting on the board of Facebook. So why leave?
There’s another somewhat amusing possible motive: two of the Senate candidates backed by Thiel, and to whom he is particularly close, are Blake Masters and JD Vance. Masters and Vance have both made bashing Facebook and ‘Big Tech’ signature campaign themes, making their close ties to Thiel an obvious complication. The two candidates even accuse Zuckerberg of stealing the 2020 election for Joe Biden. Beyond that, Vance is investing in ‘Alt Right’-friendly social media platforms of his own, like Rumble. So is Thiel’s departure a purely cosmetic move to allow Republicans to disingenuously complain about ‘Big Tech censorship’ more effectively?
Note that we aren’t told Thiel sold off all his remaining shares in the company. We’re just told he’s leaving the board. There are also no reports of any new divide between Thiel and Mark Zuckerberg. So it’s not like this is an announcement that Thiel is no longer acting as Zuckerberg’s confidante and mentor. It’s really just an announcement about a curious public relations move by Thiel. A curious, ominous public relations move:
“Mr. Thiel, 54, wants to focus on influencing November’s midterm elections, said a person with knowledge of Mr. Thiel’s thinking who declined to be identified. Mr. Thiel sees the midterms as crucial to changing the direction of the country, this person said, and he is backing candidates who support the agenda of former president Donald J. Trump.”
Thiel clearly has big plans for the 2022 election. The question is whether or not this move is directly related to those big plans. After all, it’s not like Thiel hasn’t been a GOP sugar daddy for years. Making large donations to repugnant candidates is what he has long done. Sure, it sounds like Thiel has increased his contributions to GOP candidates this year, with two Senate candidates — Blake Masters and JD Vance — having notoriously close ties to Thiel. But, again, it’s entirely unclear what’s changed from before. Why the big shakeup now?
So the whole ‘stepping away to focus on the GOP’ excuse doesn’t entirely add up. But that doesn’t mean Thiel’s ties to the GOP aren’t a factor. Recall the now-notorious secret 2019 dinner party Thiel and Zuckerberg had at the Trump White House, where they allegedly hammered out a deal to ensure Facebook took it easy on the GOP’s strategy of relying on a cyclone of disinformation. Was there another secret Facebook-GOP dinner party that we have yet to learn about? Could Thiel’s departure be in anticipation of a yet-to-be-disclosed new secret Facebook-GOP arrangement?
There was a new report about the scummy behavior of Facebook (Meta) that adds a new angle to the many questions about Facebook’s deep, and rather secretive, ties to the Republican Party. Questions that include the details of the apparent secret agreement that was hammered out between Mark Zuckerberg, Peter Thiel, and then-President Donald Trump during a secret 2019 dinner party at the White House, where Zuckerberg allegedly promised to go easy on right-wing disinformation in the 2020 campaign. The new report also tangentially relates to the stories about Peter Thiel and Steve Bannon pushing to foment a kind of ‘yellow peril’ in the US government and Silicon Valley about the dangers of Chinese tech firms in order to damage a direct rival (Google):
Newly leaked emails reveal that Facebook has been using a GOP-affiliated public relations firm, Targeted Victory, to secretly push alarmist stories about the dangers TikTok poses to US children. The propaganda campaign included pushing stories into the local media markets in the congressional districts of key members of Congress, apparently with some success. Oh, and it turns out that some of the toxic viral memes that were allegedly being promoted to children on TikTok weren’t found on TikTok at all but instead originated on Facebook. It was that sleazy.
So how long has Facebook been hiring GOP PR firms to secretly conduct smear campaigns on its rivals? That’s unclear, but in the case of Targeted Victory, we are told that its relationship with Facebook goes back to 2016. Yep, Facebook was already using this GOP firm for secret sleaze back in 2016, which is another wrinkle in that whole sordid story. Although this particular anti-TikTok campaign appears to be ongoing, with some of the leaked emails being sent in February of this year.
And Targeted Victory isn’t the only GOP-affiliated PR firm used by Facebook. In 2018, Facebook hired another GOP-affiliated firm, Definers Public Affairs, to attack critics and other tech companies, including Apple and Google, during the Cambridge Analytica scandal. It’s that broader secret relationship between Facebook (Meta) and the GOP that’s the larger story. And based on these leaked emails, it appears that secret relationship has been deeper and sleazier than previously appreciated:
“The emails, which have not been previously reported, show the extent to which Meta and its partners will use opposition-research tactics on the Chinese-owned, multibillion-dollar rival that has become one of the most downloaded apps in the world, often outranking even Meta’s popular Facebook and Instagram apps. In an internal report last year leaked by the whistleblower Frances Haugen, Facebook researchers said teens were spending “2–3X more time” on TikTok than Instagram, and that Facebook’s popularity among young people had plummeted.”
It’s all quite a coincidence that the one social media app that is more popular than Facebook and Instagram is the target of a secret smear campaign. A smear campaign that included attributing toxic viral memes that originated on Facebook to TikTok. The sleaze abounds:
And while some of these incriminating leaked emails are from February of 2022 — right around the time Facebook was forced to announce its first loss in users in its 18-year history — this relationship between Facebook and Targeted Victory has been ongoing since 2016. In other words, this GOP PR firm has presumably done quite a few other sleazy public relations campaigns for Facebook that we just haven’t learned about yet:
And as the article hints at, this story is merely one example of how Facebook works with GOP-connected public relations firms to carry out dirty-tricks campaigns. How many other GOP-connected firms has Facebook been secretly working with for dirty PR? Who knows, but we can be pretty confident there are plenty more stories of this nature given that Meta outspends all but six of the US’s biggest lobbying groups:
So are we going to learn that Facebook has now dropped Targeted Victory, just as it dropped Definers Public Affairs in 2018 after its work was exposed? We’ll see. Maybe. Either way, we probably won’t see any reports about the new GOP-affiliated PR firm that gets hired to replace them. At least not until the next round of reports with more revelations about this enduring ‘secret’ Meta-GOP relationship.
But it also all raises an intriguing question about this arrangement, where Facebook is hiring GOP-affiliated firms to carry out deceptive attacks on its rivals: Doesn’t this effectively give the GOP leverage over Facebook/Meta? After all, it’s clearly an embarrassment when these kinds of stories come out. An embarrassment pretty much just for Meta. It’s not like the GOP PR firm cares. So given that we still don’t know the exact nature of the deal worked out between Mark Zuckerberg, Peter Thiel, and Donald Trump at that secret 2019 dinner at the White House, it’s going to be worth keeping in mind that Facebook has apparently decided to use the GOP for its secret sleaze projects. A decision that presumably gave the GOP quite a few ‘favors’ it could ask for in return.