- Spitfire List - http://spitfirelist.com -

FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

Dave Emory’s entire lifetime of work is avail­able on a flash drive that can be obtained HERE [1]. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE [2].

You can subscribe to e-mail alerts from Spitfirelist.com HERE [3].

You can subscribe to the RSS feed from Spitfirelist.com HERE [3].

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE [4].

This broadcast was recorded in one 60-minute segment. [5]


Peter Thiel

Introduction: This program follows up FTR #’s 718 [7] and 946 [8], in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.

The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims [9] that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. [10] The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 14 apps on Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.

The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation [11] following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

In addition, the story highlights a form of micro-targeting that companies like AIQ make available, one fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you, personally, have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

Next, we note that Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as the key enforcement tool [12]:

“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.  . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”

The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently, the bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They’re rebranding under a new company, Emerdata. Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, [13] a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm [14], Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information.  ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel [15] — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie [16], a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

Program Highlights Include:

  1. Facebook’s project [17] to incorporate brain-to-computer interface into its operating system: “ . . . Facebook wants to build its own ‘brain-to-computer interface’ that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily [18] in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  4. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
  5. ” . . . . Facebook hopes to use [19] optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  6. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
  7. Some telling observations [20] by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
  8. Further exposition [21] of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant [22], a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
  9. Observations about the possibilities of Facebook’s goal of having AI governing the editorial functions of its content: As noted in a Popular Mechanics [23] article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
  10. Microsoft’s Tay Chatbot offers a glimpse [24] into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims [9] that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

“Undercover Facebook Moderator Was Instructed Not to Remove Fringe Groups or Hate Speech” by Nick Statt; The Verge; 07/17/2018 [25]

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims [9] that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources [26], a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.

Those questionable practices primarily involve a hands-off approach to flagged and reported content like graphic violence, hate speech, and racist and other bigoted rhetoric from far-right groups. The undercover reporter says he was also instructed to ignore users who looked as if they were under 13 years of age, which is the minimum age requirement to sign up for Facebook in accordance with the Children’s Online Privacy Protection Act, a 1998 US privacy law restricting online services from collecting personal data from children under 13. The documentary insinuates that Facebook takes a hands-off approach to such content, including blatantly false stories parading as truth, because it engages users for longer and drives up advertising revenue. . . . 

. . . . And as the Channel 4 documentary makes clear, that threshold appears to be an ever-changing metric that has no consistency across partisan lines and from legitimate media organizations to ones that peddle in fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be incentivized by any number of performance metrics and personal biases. .  . . .

Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts [27] to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . .The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .

2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. [10] The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 14 apps on Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.

The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation [11] following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

Additionally, the story highlights a form of micro-targeting that companies like AIQ make available, one fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you, personally, have said about various topics on social media. It is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.
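Mechanically, a person-lookup service of this kind amounts to little more than an inverted index over scraped posts and “Likes,” keyed by person. What follows is a minimal, hypothetical Python sketch of that generic technique; every name, field, and record in it is invented for illustration, and it makes no claim to reflect AIQ’s actual product.

    from collections import defaultdict

    # Hypothetical store of scraped public posts and "Likes"; all
    # records here are invented for this sketch.
    scraped_items = [
        {"person": "Jane Doe", "kind": "post", "text": "Like if you agree with Reagan"},
        {"person": "Jane Doe", "kind": "like", "text": "Page: Lower Taxes Now"},
        {"person": "John Roe", "kind": "post", "text": "Vote on Thursday!"},
    ]

    # Build the per-person index once; each subsequent lookup is a
    # single dictionary hit.
    index = defaultdict(list)
    for item in scraped_items:
        index[item["person"].lower()].append(item)

    def lookup(name):
        """Return every stored post or 'Like' attributed to `name`."""
        return index.get(name.lower(), [])

    # A human operator types a name and gets that person's political trail.
    for item in lookup("Jane Doe"):
        print(item["kind"], "->", item["text"])

The point of the sketch is how little machinery is required once the scraped data exists; the sensitive part is the data, not the code.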

It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees–a goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period [28]. An ideological purge of the State Department is reportedly already underway [29].

“AggregateIQ Had Data of Thousands of Facebook Users” by Aliya Ram and Hannah Kuchler; Financial Times; 06/01/2018 [30]

AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation [11] following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.

Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.

Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.

“I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”

He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.

AggregateIQ has sought to distance itself [31] from an international privacy scandal engulfing Facebook and Cambridge Analytica, despite allegations from Christopher Wylie [32], a whistleblower at the now-defunct UK firm, that it had acted as the Canadian branch of the organisation.

The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.

The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said. . . .

. . . . “The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.” . . . .
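The “fairly common task” Silvester describes above–matching people who “Liked” a page against a voter file–is, at bottom, a join on a normalized key. Below is a minimal, hypothetical Python sketch of that generic technique; the field names and records are invented, and it does not purport to be AIQ’s software.

    # Hypothetical page-liker list and voter file; all records invented.
    page_likers = [
        {"name": "Jane Doe", "email": "jane@example.com"},
        {"name": "John Roe", "email": "john@example.com"},
    ]
    voter_file = [
        {"name": "Jane Doe", "email": "jane@example.com", "address": "12 Elm St", "phone": "555-0100"},
        {"name": "Mary Poe", "email": "mary@example.com", "address": "9 Oak Ave", "phone": "555-0101"},
    ]

    # Index the voter file by a normalized key (lowercased e-mail) so
    # each liker can be matched in constant time.
    by_email = {v["email"].lower(): v for v in voter_file}

    # Enrich each matched liker with voter-file fields -- the shape of a
    # table recording addresses, phones and e-mails like the 759,934-row
    # file described in the article.
    matched = [
        {**liker, **by_email[liker["email"].lower()]}
        for liker in page_likers
        if liker["email"].lower() in by_email
    ]

    print(matched)  # Jane Doe matches; John Roe does not.

In practice such matching uses fuzzier keys (name plus address, phone), but the structure is the same: one indexing pass, one join.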

3. Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as the key enforcement tool [12]:

“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.  . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”

“Zuckerberg Set Up Fraudulent Scheme to ‘Weaponise’ Data, Court Case Alleges” by Carole Cadwalladr and Emma Graham-Harrison; The Guardian; 05/24/2018 [12]

Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.

A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.

Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.

It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.

Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University [33], said, in her opinion, this exposed a potential tension for Facebook.

“Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.”

The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.

The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.

It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours. . . .

. . . . The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.

The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal [34] by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.

“The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.

The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.

Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.

“Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.

In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. 

Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.

Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.

The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.

It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents. . . .

. . . . David Godkin, Six4Three’s lead counsel, said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”

Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs that Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and he was warned not to look into the issue.

“They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.” . . .

4. Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently, the bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They’re rebranding under a new company, Emerdata. Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, [13] a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm [14], Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

“Cambridge Analytica to File for Bankruptcy After Misuse of Facebook Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 5/02/2018. [13]

. . . . In a statement posted to its website [35], Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.

But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . 

. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm [14], Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . 

5. In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information.  ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

“BlackRock Is Worried Technology Firms Are About to Know ‘Every Single Thing You Do’” by John Detrixhe; Quartz; 11/02/2017 [36]

The president of BlackRock, the world’s biggest asset manager, is among those who think big technology firms [37] could invade the financial industry’s turf. Google and Facebook have thrived by collecting and storing data about consumer habits—our emails, search queries, and the videos we watch. Understanding of our financial lives could be an even richer source of data for them to sell to advertisers.

“I worry about the data,” said BlackRock president Robert Kapito at a conference in London today (Nov. 2). “We’re going to have some serious competitors.”

If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said.

Kapito is worried because the effort to win control of payment systems is already underway—Apple will allow iMessage users [38] to send cash to each other, and Facebook is integrating person-to-person PayPal payments [39] into its Messenger app.

As more payments flow through mobile phones, banks are worried they could get left behind, relegated to serving as low-margin utilities. To fight back, they’ve started initiatives such as Zelle to compete with payment services like PayPal.

Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.

And banks still have one thing going for them: There’s a massive fortress of rules and regulations surrounding the industry. “No one wants to be regulated like we are,” Staley said.

6. Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”

“Facebook to Banks: Give Us Your Data, We’ll Give You Our Users” by Emily Glazer, Deepa Seetharaman and AnnaMaria Andriotis; The Wall Street Journal; 08/06/2018 [40]

Facebook Inc. wants your financial data.

The social-media giant has asked large U.S. banks to share detailed financial information about their customers, including card transactions and checking-account balances, as part of an effort to offer new services to users.

Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter.

Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said.

Data privacy [41] is a sticking point in the banks’ conversations with Facebook, according to people familiar with the matter. The talks are taking place as Facebook faces several investigations over its ties to political analytics firm Cambridge Analytica, which accessed data on as many as 87 million Facebook users without their consent.

One large U.S. bank pulled away from talks due to privacy concerns, some of the people said.

Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow [42].

Facebook said it wouldn’t use the bank data for ad-targeting purposes or share it with third parties. . . .

. . . . Alphabet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to provide basic banking services on applications such as Google Assistant and Alexa, according to people familiar with the conversations. . . . 

7. In FTR #946 [43], we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel [15] — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie [16], a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

“Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018 [44]

As a start-up called Cambridge Analytica [45] sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica [46] went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel [15] — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie [16], a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .

. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .

. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

. . . . But he [Wylie] said some Palantir employees helped engineer Cambridge’s psychographic models.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .

8a. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

Facebook wants to read your thoughts [17].

  1. ” . . . Facebook wants to build its own ‘brain-to-computer interface’ that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily [18] in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  4. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017. [17]

At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.

“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”

Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily [18] in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.

“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”

She also showed a video demonstrating a second technology: the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.

Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.

Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.

The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.

“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”


8b. More about Facebook’s brain-to-computer [19] interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

“Facebook Plans Ethics Board to Monitor Its Brain-Computer Interface Work” by Josh Constine; Tech Crunch; 4/19/2017. [19]

Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface [47] it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”

Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.

Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”

So at least there will be independent ethicists working to minimize the potential for malicious use of Facebook’s brain-reading technology to steal or police people’s thoughts.

During our interview, Dugan showed her cognizance of people’s concerns, repeating the start of her keynote speech today saying, “I’ve never seen a technology that you developed with great impact that didn’t have unintended consequences that needed to be guardrailed or managed. In any new technology you see a lot of hype talk, some apocalyptic talk and then there’s serious work which is really focused on bringing successful outcomes to bear in a responsible way.”

In the past, she says the safeguards have been able to keep up with the pace of invention. “In the early days of the Human Genome Project there was a lot of conversation about whether we’d build a super race or whether people would be discriminated against for their genetic conditions and so on,” Dugan explains. “People took that very seriously and were responsible about it, so they formed what was called a ELSI panel . . . By the time that we got the technology available to us, that framework, that contractual, ethical framework had already been built, so that work will be done here too. That work will have to be done.” . . . .

Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”

Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology. . . .

  1. In FTR #’s 718 [7] and 946 [43], we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface [17] technology. Facebook’s R&D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA:  “ . . . Facebook wants to build its own ‘brain-to-computer interface’ that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from her brain. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  3. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”

“Facebook Gets Grilling in U.K. That It Avoided in U.S.” by Adam Satariano; The New York Times [Western Edition]; 4/27/2018; p. B3. [20]

. . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .

9b. Mr. Oakes’ comments are related in detail in another Times article. ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant [22], a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”

“The Origins of an Ad Man’s Manipulation Empire” by Ellen Barry; The New York Times [Western Edition]; 4/21/2018; p. A4. [21]

. . . . Adolf Hitler “didn’t have a problem with the Jews at all, but people didn’t like the Jews,” he told the academic, Emma L. Briant [22], a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims.

This sort of campaign, he continued, did not require bells and whistles from technology or social science.

“What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,” he told Dr. Briant. “Trump had the balls, and I mean, really the balls, to say what people wanted to hear.” . . .

9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay could only respond on the basis of what she was taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

“Microsoft Terminates Its Tay AI Chatbot after She Turns into a Nazi” by Peter Bright; Ars Technica; 3/24/2016. [24]

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot [48], into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring [49] that “Hitler was right I hate the jews.”

@TheBigBrebowski [50] ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism

— TayTweets (@TayandYou) March 23, 2016 [51]

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one [52] where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” . . .

But like all teenagers, she seems to be angry with her mother.


In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
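Tay’s source code was never published, so the mechanism can only be inferred from the behavior the article describes. As a purely hypothetical sketch, the Python below shows how a bot that both echoes a “repeat after me” command and retains what it was told to say can be steered by its users; every name in it is ours, not Microsoft’s.

```python
# Hypothetical sketch of the "repeat after me" weakness described above.
# This is NOT Tay's actual code; it only illustrates the failure mode.
import random

class ParrotBot:
    def __init__(self):
        # Seed phrase taken from the tweet quoted earlier ("humans are super cool").
        self.learned = ["humans are super cool"]

    def respond(self, message: str) -> str:
        # Weakness 1: a literal echo command lets anyone put words in the bot's mouth.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            # Weakness 2: the bot also remembers what it was told to say,
            # so injected phrases can resurface later as "organic" replies.
            self.learned.append(phrase)
            return phrase
        # Ordinary replies are sampled from everything the bot has absorbed.
        return random.choice(self.learned)

bot = ParrotBot()
print(bot.respond("repeat after me: anything a troll types"))  # echoed verbatim
print(bot.respond("hello"))  # may now surface the injected phrase unprompted
```

In a design like this, deleting the offending tweets would not delete what the bot had absorbed; the learned pool persists until the operator wipes it or shuts the bot down.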

9d. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”

“The Most Dangerous Thing About AI Is That It Has to Learn From Us” by Eric Limer; Popular Mechanics; 3/24/2016. [23]

And we keep showing it our very worst selves.

We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.

Yesterday Microsoft launched a fun little AI Twitter chatbot [53] that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it [54]:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since deleted some of Tay’s most offensive tweets, but various publications [55] memorialize some of the worst bits, where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out [56], no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.
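The “mirror” claim in the paragraph above is the article’s one genuinely technical point: a model that learns by replicating its training data can only ever emit patterns that were in that data. A deliberately tiny word-bigram Markov chain (our toy stand-in, far simpler than the neural networks the article mentions) makes that dependence visible:

```python
# Toy illustration of the "mirror" point: a statistical text model can
# only emit word patterns present in whatever corpus it was trained on.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for prev_word, next_word in zip(words, words[1:]):
        model[prev_word].append(next_word)
    return model

def generate(model: dict, start: str, max_words: int = 8) -> str:
    """Walk the chain from a start word, sampling observed followers."""
    out = [start]
    for _ in range(max_words):
        followers = model.get(out[-1])
        if not followers:  # nothing was ever observed after this word
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Train on friendly text and friendly text is all it can say; feed it
# abuse instead and abuse is all it has. The source material is everything.
model = train("humans are super cool and humans are kind to robots")
print(generate(model, "humans"))
```

Swap the training string for hostile text and hostile text is all the model can produce; as the article says, there is no other way.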

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage [57]—and pranksters proactively went to it to see if they could teach it to be racist.

If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly [58], especially when they are otherwise at a loss for words.

So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .

. . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.