Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This broadcast was recorded in one, 60-minute segment.

Peter Thiel

Introduction: This program follows up FTR #’s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.

The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profile software development, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to be developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and it’s registered to AIQ.

The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

In addition, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

Next, we note that Facebook is being sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.  . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”

The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, the new firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information.  ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

Program Highlights Include:

  1. Facebook’s project to incorporate brain-to-computer interface into its operating system: “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  4. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
  5. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  6. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”
  7. Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
  8. Further exposition of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”
  9. Observations about the possibilities of Facebook’s goal of having AI governing the editorial functions of its content: As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
  10. Microsoft’s Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

“Undercover Facebook Moderator Was Instructed Not to Remove Fringe Groups or Hate Speech” by Nick Statt; The Verge; 07/17/2018

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources, a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.

Those questionable practices primarily involve a hands-off approach to flagged and reported content like graphic violence, hate speech, and racist and other bigoted rhetoric from far-right groups. The undercover reporter says he was also instructed to ignore users who looked as if they were under 13 years of age, which is the minimum age requirement to sign up for Facebook in accordance with the Child Online Protection Act, a 1998 privacy law passed in the US designed to protect young children from exploitation and harmful and violent content on the internet. The documentary insinuates that Facebook takes a hands-off approach to such content, including blatantly false stories parading as truth, because it engages users for longer and drives up advertising revenue. . . . 

. . . . And as the Channel 4 documentary makes clear, that threshold appears to be an ever-changing metric that has no consistency across partisan lines and from legitimate media organizations to ones that peddle in fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be incentivized by any number of performance metrics and personal biases. . . .

Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . .The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .

2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profile software development, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to be developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and it’s registered to AIQ.

The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. It is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

It’s also worth noting that this service would be perfect for accomplishing the right wing’s long-standing goal of purging the federal government of liberal employees, a goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. An ideological purge of the State Department is reportedly already underway.

“AggregateIQ Had Data of Thousands of Facebook Users” by Aliya Ram and Hannah Kuchler; Financial Times; 06/01/2018

AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.

Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.

Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.

“I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”

He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.

AggregateIQ has sought to distance itself from an international privacy scandal engulfing Facebook and Cambridge Analytica, despite allegations from Christopher Wylie, a whistleblower at the now-defunct UK firm, that it had acted as the Canadian branch of the organisation.

The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.

The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said. . . .

. . . . “The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.” . . . .

3. Facebook is being sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.  . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”

“Zuckerberg Set Up Fraudulent Scheme to ‘Weaponise’ Data, Court Case Alleges” by Carole Cadwalladr and Emma Graham-Harrison; The Guardian; 05/24/2018

Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.

A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.

Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.

It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.

Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University, said, in her opinion, this exposed a potential tension for Facebook.

“Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.”

The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.

The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.

It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours. . . .

. . . . The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.

The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.

“The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.

The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.

Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.

“Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.

In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. 

Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.

Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.

The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.

It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents. . . .

. . . . David Godkin, Six4Three’s lead counsel said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”

Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He told MPs that Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and that he was warned not to look into the issue.

“They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.” . . .

4. Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

“Cambridge Analytica to File for Bankruptcy After Misuse of Facebook Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 5/02/2018.

. . . . In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.

But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . 

. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . 

5. In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information.  ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

“BlackRock Is Worried Technology Firms Are About to Know ‘Every Single Thing You Do’” by John Detrixhe; Quartz; 11/02/2017

The president of BlackRock, the world’s biggest asset manager, is among those who think big technology firms could invade the financial industry’s turf. Google and Facebook have thrived by collecting and storing data about consumer habits—our emails, search queries, and the videos we watch. Understanding of our financial lives could be an even richer source of data for them to sell to advertisers.

“I worry about the data,” said BlackRock president Robert Kapito at a conference in London today (Nov. 2). “We’re going to have some serious competitors.”

If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said.

Kapito is worried because the effort to win control of payment systems is already underway—Apple will allow iMessage users to send cash to each other, and Facebook is integrating person-to-person PayPal payments into its Messenger app.

As more payments flow through mobile phones, banks are worried they could get left behind, relegated to serving as low-margin utilities. To fight back, they’ve started initiatives such as Zelle to compete with payment services like PayPal.

Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.

And banks still have one thing going for them: There’s a massive fortress of rules and regulations surrounding the industry. “No one wants to be regulated like we are,” Staley said.

6. Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be solely for offering new services on Facebook messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service.  This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”

“Facebook to Banks: Give Us Your Data, We’ll Give You Our Users” by Emily Glazer, Deepa Seetharaman and AnnaMaria Andriotis; The Wall Street Journal; 08/06/2018

Facebook Inc. wants your financial data.

The social-media giant has asked large U.S. banks to share detailed financial information about their customers, including card transactions and checking-account balances, as part of an effort to offer new services to users.

Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter.

Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said.

Data privacy is a sticking point in the banks’ conversations with Facebook, according to people familiar with the matter. The talks are taking place as Facebook faces several investigations over its ties to political analytics firm Cambridge Analytica, which accessed data on as many as 87 million Facebook users without their consent.

One large U.S. bank pulled away from talks due to privacy concerns, some of the people said.

Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow.

Facebook said it wouldn’t use the bank data for ad-targeting purposes or share it with third parties. . . .

. . . . Alphabet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to provide basic banking services on applications such as Google Assistant and Alexa, according to people familiar with the conversations. . . . 

7. In FTR #946, we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

“Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 03/27/2018

As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .

. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .

. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

. . . . But he [Wylie] said some Palantir employees helped engineer Cambridge’s psychographic models.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .

8a. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

Facebook wants to read your thoughts.

  1. ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”

“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017.

At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.

“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”

Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.

“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”

She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.

Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.

Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.

The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.

“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”


8b. More about Facebook’s brain-to-computer interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

“Facebook Plans Ethics Board to Monitor Its Brain-Computer Interface Work” by Josh Constine; Tech Crunch; 4/19/2017.

Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”

Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.

Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”

So at least there will be independent ethicists working to minimize the potential for malicious use of Facebook’s brain-reading technology to steal or police people’s thoughts.

During our interview, Dugan showed her cognizance of people’s concerns, repeating the start of her keynote speech today saying, “I’ve never seen a technology that you developed with great impact that didn’t have unintended consequences that needed to be guardrailed or managed. In any new technology you see a lot of hype talk, some apocalyptic talk and then there’s serious work which is really focused on bringing successful outcomes to bear in a responsible way.”

In the past, she says the safeguards have been able to keep up with the pace of invention. “In the early days of the Human Genome Project there was a lot of conversation about whether we’d build a super race or whether people would be discriminated against for their genetic conditions and so on,” Dugan explains. “People took that very seriously and were responsible about it, so they formed what was called a ELSI panel . . . By the time that we got the technology available to us, that framework, that contractual, ethical framework had already been built, so that work will be done here too. That work will have to be done.” . . . .

Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”

Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology. . . .


9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”

“Facebook Gets Grilling in U.K. That It Avoided in U.S.” by Adam Satariano; The New York Times [Western Edition]; 4/27/2018; p. B3.

. . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .

9b. Mr. Oakes’ comments are related in detail in another Times article. ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”

“The Origins of an Ad Man’s Manipulation Empire” by Ellen Barry; The New York Times [Western Edition]; 4/21/2018; p. A4.

. . . . Adolf Hitler “didn’t have a problem with the Jews at all, but people didn’t like the Jews,” he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims.

This sort of campaign, he continued, did not require bells and whistles from technology or social science.

“What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,” he told Dr. Briant. “Trump had the balls, and I mean, really the balls, to say what people wanted to hear.” . . .

9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” . . .

But like all teenagers, she seems to be angry with her mother.

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .

9d. As noted in a Popular Mechanics article: “ . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”

And we keep showing it our very worst selves.

We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.

Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the Holocaust, came out in support of genocide, and went all kinds of racist.

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.

If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.

So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .

. . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.
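The article’s “mirror” point–that a text-predicting model can only stitch together what it was fed–can be sketched with a toy word-level bigram model. This is a drastic simplification of the neural networks the article describes, and the training corpus and function names here are purely illustrative:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record, for each word, every word that followed it in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=6, seed=0):
    """Walk the table: each step samples a next word actually seen in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Whatever the model "says" is assembled entirely from its training data:
friendly = train_bigram_model("humans are super cool and humans are great")
print(generate(friendly, "humans", seed=42))
```

Feed such a model troll text and it parrots troll text; the code has no notion of meaning, only of which words it has seen follow which.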

 

Discussion

2 comments for “FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)”

  1. Oh look, Facebook actually banned someone for posting neo-Nazi content on their platform. But there’s a catch: They banned Ukrainian activist Eduard Dolinsky for 30 days because he was posting examples of antisemitic graffiti. Dolinsky is the director of the Ukrainian Jewish Committee. According to Dolinsky, his far right opponents have a history of reporting his posts to Facebook in order to get him suspended. And this time it worked. Dolinsky appealed the ban, but to no avail.

    So that happened. But first let’s take a quick look at an article from back in April that highlights how absurd this action was. The article is about a Ukrainian school teacher in Lviv, Marjana Batjuk, who posted birthday greetings to Adolf Hitler on her Facebook page on April 20 (Hitler’s birthday). She also taught her students the Nazi salute and even took some of her students to meet far right activists who had participated in a march wearing the uniform of the 14th Waffen Grenadier Division of the SS.

    Batjuk, who is a member of Svoboda, later claimed her Facebook account was hacked, but a news organization found that she has a history of posting Nazi imagery on social media networks. And there’s no mention in this report of Batjuk getting banned from Facebook:

    Jewish Telegraph Agency

    Ukrainian teacher allegedly praises Hitler, performs Nazi salute with students

    By Cnaan Liphshiz
    April 23, 2018 4:22pm

    (JTA) — A public school teacher in Ukraine allegedly posted birthday greetings to Adolf Hitler on Facebook and taught her students the Nazi salute.

    Marjana Batjuk, who teaches at a school in Lviv and also is a councilwoman, posted her greeting on April 20, the Nazi leader’s birthday, Eduard Dolinsky, director of the Ukrainian Jewish Committee, told JTA. He called the incident a “scandal.”

    She also took some of her students to meet far-right activists who over the weekend marched on the city’s streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

    Displaying Nazi imagery is illegal in Ukraine, but Dolinsky said law enforcement authorities allowed the activists to parade on main streets.

    Batjuk had the activists explain about their replica weapons, which they paraded ahead of a larger event in honor of the 1st Galician unit planned for next week in Lviv.

    The events honoring the 1st Galician SS unit in Lviv are not organized by municipal authorities.

    Batjuk, 28, a member of the far-right Svoboda party, called Hitler “a great man” and quoted from his book “Mein Kampf” in her Facebook post, Dolinsky said. She later claimed that her Facebook account was hacked and deleted the post, but the Strana news site found that she had a history of posting Nazi imagery on social networks.

    She also posted pictures of children she said were her students performing the Nazi salute with her.

    Education Ministry officials have started a disciplinary review of her conduct, the KP news site reported.

    Separately, in the town of Poltava, in eastern Ukraine, Dolinsky said a swastika and the words “heil Hitler” were spray-painted Friday on a monument for victims of the Holocaust. The vandals, who have not been identified, also wrote “Death to the kikes.”

    In Odessa, a large graffiti reading “Jews into the sea” was written on the beachfront wall of a hotel.

    “The common factor between all of these incidents is government inaction, which ensures they will continue happening,” Dolinsky said.
    ———-

    “Ukrainian teacher allegedly praises Hitler, performs Nazi salute with students” by Cnaan Liphshiz; Jewish Telegraph Agency; 04/23/2018

    “Marjana Batjuk, who teaches at a school in Lviv and also is a councilwoman, posted her greeting on April 20, the Nazi leader’s birthday, Eduard Dolinsky, director of the Ukrainian Jewish Committee, told JTA. He called the incident a “scandal.””

    She’s not just a teacher. She’s also a councilwoman. A teacher and councilwoman who likes to post positive things about Hitler on her Facebook page. And it was Eduard Dolinsky who was talking to the international media about this.

    But Batjuk doesn’t just post pro-Nazi things on her Facebook page. She also takes her students to meet the far right activists:


    She also took some of her students to meet far-right activists who over the weekend marched on the city’s streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

    Displaying Nazi imagery is illegal in Ukraine, but Dolinsky said law enforcement authorities allowed the activists to parade on main streets.

    Batjuk had the activists explain about their replica weapons, which they paraded ahead of a larger event in honor of the 1st Galician unit planned for next week in Lviv.

    The events honoring the 1st Galician SS unit in Lviv are not organized by municipal authorities.

    Batjuk later claimed that her Facebook page was hacked, and yet a media organization was able to find plenty of previous examples of similar posts on social media:


    Batjuk, 28, a member of the far-right Svoboda party, called Hitler “a great man” and quoted from his book “Mein Kampf” in her Facebook post, Dolinsky said. She later claimed that her Facebook account was hacked and deleted the post, but the Strana news site found that she had a history of posting Nazi imagery on social networks.

    She also posted pictures of children she said were her students performing the Nazi salute with her.

    And if you look at that Strana news summary of her social media posts, a number of them are clearly Facebook posts. So if the Strana news organization was able to find these old posts, that’s a pretty clear indication Facebook wasn’t removing them.

    That was back in April. Flash forward to today and we find a sudden willingness to ban people for posting Nazi content…except it’s Eduard Dolinsky getting banned for making people aware of the pro-Nazi graffiti that has become rampant in Ukraine:

    The Jerusalem Post

    Jewish activist: Facebook banned me for posting antisemitic graffiti
    “I use my Facebook account for distributing information about antisemitic incidents, hate speech and hate crimes in Ukraine,” said the Ukrainian Jewish activist.

    By Seth J. Frantzman
    August 21, 2018 16:39

    Eduard Dolinksy, a prominent Ukrainian Jewish activist, was banned from posting on Facebook Monday night for a post about antisemitic graffiti in Odessa.

    Dolinsky, the director of the Ukrainian Jewish Committee, said he was blocked by the social media giant for posting a photo. “I had posted the photo which says in Ukrainian ‘kill the yid’ about a month ago,” he says. “I use my Facebook account for distributing information about antisemitic incidents and hate speech and hate crimes in Ukraine.”

    Now Dolinsky’s account has disabled him from posting for thirty days, which means media, law enforcement and the local community who rely on his social media posts will receive no updates.

    Dolinsky tweeted Monday that his account had been blocked and sent The Jerusalem Post a screenshot of the image he posted which shows a badly drawn swastika and Ukrainian writing. “You recently posted something that violates Facebook policies, so you’re temporarily blocked from using this feature,” Facebook informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from getting blocked again, please make sure you’ve read and understand Facebook’s Community Standards.”

    Dolinksy says that he has been targeted in the past by nationalists and anti-semites who oppose his work. Facebook has banned him temporarily in the past also, but never for thirty days. “The last time I was blocked, the media also reported this and I felt some relief.

    It was as if they stopped banning me. But now I don’t know – and this has again happened. They are banning the one who is trying to fight antisemitism. They are banning me for the very thing I do.”

    Based on Dolinsky’s work the police have opened criminal files against perpetrators of antisemitic crimes, in Odessa and other places.

    He says that some locals are trying to silence him because he is critical of the way Ukraine has commemorated historical nationalist figures, “which is actually denying the Holocaust and trying to whitewash the actions of nationalists during the Second World War.”

    Dolinksy has been widely quoted, and his work, including posts on Facebook, has been referenced by media in the past. “These incidents are happening and these crimes and the police should react.

    The society also. But their goal is to cut me off.”

    Ironically, the activist opposing antisemitism is being targeted by antisemites who label the antisemitic examples he reveals as hate speech. “They are specifically complaining to Facebook for the content, and they are complaining that I am violating the rules of Facebook and spreading hate speech. So Facebook, as I understand [it, doesn’t] look at this; they are banning me and blocking me and deleting these posts.”

    He says he tried to appeal the ban but has not been successful.

    “I use my Facebook exclusively for this, so this is my working tool as director of Ukrainian Jewish Committee.”

    Facebook has been under scrutiny recently for who it bans and why. In July founder Mark Zuckerberg made controversial remarks appearing to accept Holocaust denial on the site. “I find it offensive, but at the end of the day, I don’t believe our platform should take that down because I think there are things that different people get wrong. I don’t think they’re doing it intentionally.” In late July, Facebook banned US conspiracy theorist Alex Jones for bullying and hate speech.

    In a similar incident to Dolinsky, Iranian secular activist Armin Navabi was banned from Facebook for thirty days for posting the death threats that he receives. “This is ridiculous. My account is blocked for 30 days because I post the death threats I’m getting? I’m not the one making the threat!” he tweeted.

    ———

    “Jewish activist: Facebook banned me for posting antisemitic graffiti” by Seth J. Frantzman; The Jerusalem Post; 08/21/2018

    “Dolinsky, the director of the Ukrainian Jewish Committee, said he was blocked by the social media giant for posting a photo. “I had posted the photo which says in Ukrainian ‘kill the yid’ about a month ago,” he says. “I use my Facebook account for distributing information about antisemitic incidents and hate speech and hate crimes in Ukraine.”

    The director of the Ukrainian Jewish Committee gets banned for posting antisemitic content. That’s some world-class trolling by Facebook.

    And while it’s only a 30 day ban, that’s 30 days where Ukraine’s media and law enforcement won’t be getting Dolinsky’s updates. So it’s not just a morally absurd banning, it’s also actually going to be promoting pro-Nazi graffiti in Ukraine by silencing one of the key figures covering it:


    Now Dolinsky’s account has disabled him from posting for thirty days, which means media, law enforcement and the local community who rely on his social media posts will receive no updates.

    Dolinsky tweeted Monday that his account had been blocked and sent The Jerusalem Post a screenshot of the image he posted which shows a badly drawn swastika and Ukrainian writing. “You recently posted something that violates Facebook policies, so you’re temporarily blocked from using this feature,” Facebook informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from getting blocked again, please make sure you’ve read and understand Facebook’s Community Standards.”

    And this isn’t the first time Dolinsky has been banned from Facebook for posting this kind of content. But it’s the longest he’s been banned. And the fact that this isn’t the first time he’s been banned suggests this isn’t just an ‘oops!’ genuine mistake:


    Dolinksy says that he has been targeted in the past by nationalists and anti-semites who oppose his work. Facebook has banned him temporarily in the past also, but never for thirty days. “The last time I was blocked, the media also reported this and I felt some relief.

    It was as if they stopped banning me. But now I don’t know – and this has again happened. They are banning the one who is trying to fight antisemitism. They are banning me for the very thing I do.”

    Based on Dolinsky’s work the police have opened criminal files against perpetrators of antisemitic crimes, in Odessa and other places.

    Dolinsky also notes that he has people trying to silence him precisely because of the job he does highlighting Ukraine’s official embrace of Nazi collaborating historical figures:


    He says that some locals are trying to silence him because he is critical of the way Ukraine has commemorated historical nationalist figures, “which is actually denying the Holocaust and trying to whitewash the actions of nationalists during the Second World War.”

    Dolinksy has been widely quoted, and his work, including posts on Facebook, has been referenced by media in the past. “These incidents are happening and these crimes and the police should react.

    The society also. But their goal is to cut me off.”

    Ironically, the activist opposing antisemitism is being targeted by antisemites who label the antisemitic examples he reveals as hate speech. “They are specifically complaining to Facebook for the content, and they are complaining that I am violating the rules of Facebook and spreading hate speech. So Facebook, as I understand [it, doesn’t] look at this; they are banning me and blocking me and deleting these posts.”

    So we likely have a situation where antisemites successfully got Dolinsky silenced, with Facebook ‘playing dumb’ the whole time. And as a consequence Ukraine is facing a month without Dolinsky’s reports. Except it’s not even clear that Dolinsky is going to be allowed to clarify the situation and continue posting updates of Nazi graffiti after this month-long ban is up. Because he says he’s been trying to appeal the ban, but with no success:


    He says he tried to appeal the ban but has not been successful.

    “I use my Facebook exclusively for this, so this is my working tool as director of Ukrainian Jewish Committee.”

    Given Dolinsky’s powerful criticisms of Ukraine’s embrace and historic whitewashing of the far right, it would be interesting to learn whether the decision to ban Dolinsky originally came from the Atlantic Council, one of the main organizations to which Facebook has outsourced its troll-hunting duties.

    So for all we know, Dolinsky is effectively going to be banned permanently from using Facebook to make Ukraine and the rest of the world aware of the epidemic of pro-Nazi antisemitic graffiti in Ukraine. Maybe if he sets up a pro-Nazi Facebook persona he’ll be allowed to keep doing his work.

    Posted by Pterrafractyl | August 23, 2018, 12:49 pm
  2. It looks like we’re in for another round of right-wing complaints about Big Tech political bias designed to pressure companies into pushing right-wing content onto users. Recall how complaints about Facebook suppressing conservatives in the Facebook News Feed resulted in a change in policy in 2016 that unleashed a flood of far right disinformation on the platform. This time, it’s Google’s turn to face the right-wing faux-outrage machine and it’s President Trump leading it:

    Trump just accused Google of biasing the search results in its search engine to give negative stories about him. Apparently he googled himself and didn’t like the results. His tweet came after a Fox Business report on Monday evening that made the claim that 96 percent of Google News results for “Trump” came from the “national left-wing media.” The report was based on some ‘analysis’ by right-wing media outlet PJ Media.

    Later, during a press conference, Trump declared that Google, Facebook, and Twitter “are treading on very, very troubled territory,” and his economic advisor Larry Kudlow told the press that the issue is being investigated by the White House. And as Facebook already demonstrated, while it seems highly unlikely that the Trump administration will actually take some sort of government action to force Google to promote positive stories about Trump, it’s not like loudly complaining can’t get the job done:

    Bloomberg

    Trump Warns Tech Giants to ‘Be Careful,’ Claiming They Rig Searches

    By Kathleen Hunter and Ben Brody
    August 28, 2018, 4:58 AM CDT Updated on August 28, 2018, 2:17 PM CDT

    * President tweets conservative media being blocked by Google
    * Company denies any political agenda in its search results

    President Donald Trump warned Alphabet Inc.’s Google, Facebook Inc. and Twitter Inc. “better be careful” after he accused the search engine earlier in the day of rigging results to give preference to negative news stories about him.

    Trump told reporters in the Oval Office Tuesday that the three technology companies “are treading on very, very troubled territory,” as he added his voice to a growing chorus of conservatives who claim internet companies favor liberal viewpoints.

    “This is a very serious situation-will be addressed!” Trump said in a tweet earlier Tuesday. The President’s comments came the morning after a Fox Business TV segment that said Google favored liberal news outlets in search results about Trump. Trump provided no substantiation for his claim.

    “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Illegal.”

    The allegation, dismissed by online search experts, follows the president’s Aug. 24 claim that social media “giants” are “silencing millions of people.” Such accusations — along with assertions that the news media and Special Counsel Robert Mueller’s Russia meddling probe are biased against him — have been a chief Trump talking point meant to appeal to the president’s base.

    Google issued a statement saying its searches are designed to give users relevant answers.

    “Search is not used to set a political agenda and we don’t bias our results toward any political ideology,” the statement said. “Every year, we issue hundreds of improvements to our algorithms to ensure they surface high-quality content in response to users’ queries. We continually work to improve Google Search and we never rank search results to manipulate political sentiment.”

    Yonatan Zunger, an engineer who worked at Google for almost a decade, went further. “Users can verify that his claim is specious by simply reading a wide range of news sources themselves,” he said. “The ‘bias’ is that the news is all bad for him, for which he has only himself to blame.”

    Google’s news search software doesn’t work the way the president says it does, according to Mark Irvine, senior data scientist at WordStream, a company that helps firms get websites and other online content to show up higher in search results. The Google News system gives weight to how many times a story has been linked to, as well as to how prominently the terms people are searching for show up in the stories, Irvine said.

    “The Google search algorithm is a fairly agnostic and apathetic algorithm towards what people’s political feelings are,” he said.

    “Their job is essentially to model the world as it is,” said Pete Meyers, a marketing scientist at Moz, which builds tools to help companies improve how they show up in search results. “If enough people are linking to a site and talking about a site, they’re going to show that site.”

    Trump’s concern is that search results about him appear negative, but that’s because the majority of stories about him are negative, Meyers said. “He woke up and watched his particular flavor and what Google had didn’t match that.”

    Complaints that social-media services censor conservatives have increased as companies such as Facebook Inc. and Twitter Inc. try to curb the reach of conspiracy theorists, disinformation campaigns, foreign political meddling and abusive posters.

    Google News rankings have sometimes highlighted unconfirmed and erroneous reports in the early minutes of tragedies when there’s little information to fill its search results. After the Oct. 1, 2017, Las Vegas shooting, for instance, several accounts seemed to coordinate an effort to smear a man misidentified as the shooter with false claims about his political ties.

    Google has since tightened requirements for inclusion in news rankings, blocking outlets that “conceal their country of origin” and relying more on authoritative sources, although the moves have led to charges of censorship from less established outlets. Google currently says it ranks news based on “freshness” and “diversity” of the stories. Trump-favored outlets such as Fox News routinely appear in results.

    Google’s search results have been the focus of complaints for more than a decade. The criticism has become more political as the power and reach of online services has increased in recent years.

    Eric Schmidt, Alphabet’s former chairman, supported Hillary Clinton against Trump during the last election. There have been unsubstantiated claims the company buried negative search results about her during the 2016 election. Scores of Google employees entered government to work under President Barack Obama.

    White House economic adviser Larry Kudlow, responding to a question about the tweets, said that the administration is going to do “investigations and analysis” into the issue but stressed they’re “just looking into it.”

    Trump’s comment followed a report on Fox Business on Monday evening that said 96 percent of Google News results for “Trump” came from the “national left-wing media.” The segment cited the conservative PJ Media site, which said its analysis suggested “a pattern of bias against right-leaning content.”

    The PJ Media analysis “is in no way scientific,” said Joshua New, a senior policy analyst with the Center for Data Innovation.

    “This frequency of appearance in an arbitrary search at one time is in no way indicating a bias or a slant,” New said. His non-partisan policy group is affiliated with the Information Technology and Innovation Foundation, which in turn has executives from Silicon Valley companies, including Google, on its board of directors.

    Services such as Google or Facebook “have a business incentive not to lower the ranking of a certain publication because of news bias. Because that lowers the value as a news platform,” New said.

    News search rankings use factors including “use timeliness, accuracy, the popularity of a story, a users’ personal search history, their location, quality of content, a website’s reputation — a huge amount of different factors,” New said.

    Google is not the first tech stalwart to receive criticism from Trump. He has alleged Amazon.com Inc. has a sweetheart deal with the U.S. Postal Service and slammed founder Jeff Bezos’s ownership of what Trump calls “the Amazon Washington Post.”

    Google is due to face lawmakers at a hearing on Russian election meddling on Sept. 5. The company intended to send Senior Vice President for Global Affairs Kent Walker to testify, but the panel’s chairman, Senator Richard Burr, who wanted Chief Executive Officer Sundar Pichai, has rejected Walker.

    Despite Trump’s comments, it’s unclear what he or Congress could do to influence how internet companies distribute online news. The industry treasures an exemption from liability for the content users post. Some top members of Congress have suggested limiting the protection as a response to alleged bias and other misdeeds, although there have been few moves to do so since Congress curbed the shield for some cases of sex trafficking earlier in the year.

    The government has little ability to dictate to publishers and online curators what news to present despite the president’s occasional threats to use the power of the government to curb coverage he dislikes and his tendency to complain that news about him is overly negative.

    Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn’t apply to the internet. Neither has resulted in a serious policy push.

    ———-

    “Trump Warns Tech Giants to ‘Be Careful,’ Claiming They Rig Searches” by Kathleen Hunter and Ben Brody; Bloomberg; 08/28/2018

    “Trump told reporters in the Oval Office Tuesday that the three technology companies “are treading on very, very troubled territory,” as he added his voice to a growing chorus of conservatives who claim internet companies favor liberal viewpoints.”

    The Trumpian warning shots have been fired: feed the public positive news about Trump, or else…


    “This is a very serious situation-will be addressed!” Trump said in a tweet earlier Tuesday. The President’s comments came the morning after a Fox Business TV segment that said Google favored liberal news outlets in search results about Trump. Trump provided no substantiation for his claim.

    “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Illegal.”

    The allegation, dismissed by online search experts, follows the president’s Aug. 24 claim that social media “giants” are “silencing millions of people.” Such accusations — along with assertions that the news media and Special Counsel Robert Mueller’s Russia meddling probe are biased against him — have been a chief Trump talking point meant to appeal to the president’s base.

    “Republican/Conservative & Fair Media is shut out. Illegal.”

    And he literally charged Google with illegality over allegedly shutting out “Republican/Conservative & Fair Media.” Which is, of course, an absurd charge for anyone familiar with Google’s news portal. But that was part of what made the tweet so potentially threatening to these companies since it implied there was a role the government should be playing to correct this perceived law-breaking.

    At the same time, it’s unclear what, legally speaking, Trump could actually do. But that didn’t stop him from issuing such threats, as he’s done in the past:


    Despite Trump’s comments, it’s unclear what he or Congress could do to influence how internet companies distribute online news. The industry treasures an exemption from liability for the content users post. Some top members of Congress have suggested limiting the protection as a response to alleged bias and other misdeeds, although there have been few moves to do so since Congress curbed the shield for some cases of sex trafficking earlier in the year.

    The government has little ability to dictate to publishers and online curators what news to present despite the president’s occasional threats to use the power of the government to curb coverage he dislikes and his tendency to complain that news about him is overly negative.

    Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn’t apply to the internet. Neither has resulted in a serious policy push.

    Ironically, when Trump muses about reinstating long-ended rules requiring equal time for opposing views (the “Fairness Doctrine” overturned by Reagan in 1987), he’s musing about doing something that would effectively destroy the right-wing media model, a model that is predicated on feeding the audience exclusively right-wing content. As many have noted, the demise of the Fairness Doctrine – which led to the explosion of right-wing talk radio hosts like Rush Limbaugh – probably played a big role in intellectually neutering the American public, paving the way for someone like Trump to eventually come along.

    And yet, as unhinged as this latest threat may be, the administration is actually going to do “investigations and analysis” into the issue, according to Larry Kudlow:


    White House economic adviser Larry Kudlow, responding to a question about the tweets, said that the administration is going to do “investigations and analysis” into the issue but stressed they’re “just looking into it.”

    And as we should expect, this all appears to have been triggered by a Fox Business piece on Monday night that covered a ‘study’ done by PJ Media (a right-wing media outlet) that found 96 percent of Google News results for “Trump” come from the “national left-wing media”:


    Trump’s comment followed a report on Fox Business on Monday evening that said 96 percent of Google News results for “Trump” came from the “national left-wing media.” The segment cited the conservative PJ Media site, which said its analysis suggested “a pattern of bias against right-leaning content.”

    The PJ Media analysis “is in no way scientific,” said Joshua New, a senior policy analyst with the Center for Data Innovation.

    “This frequency of appearance in an arbitrary search at one time is in no way indicating a bias or a slant,” New said. His non-partisan policy group is affiliated with the Information Technology and Innovation Foundation, which in turn has executives from Silicon Valley companies, including Google, on its board of directors.

    Services such as Google or Facebook “have a business incentive not to lower the ranking of a certain publication because of news bias. Because that lowers the value as a news platform,” New said.

    News search rankings use factors including “use timeliness, accuracy, the popularity of a story, a users’ personal search history, their location, quality of content, a website’s reputation — a huge amount of different factors,” New said.

    Putting aside the general questions of the scientific veracity of this PJ Media ‘study’, it’s kind of amusing to realize that it was a study conducted specifically on a search for “Trump” on Google News. And if you had to choose a single topic that is going to inevitably have an abundance of negative news written about it, that would be the topic of “Trump”. In other words, if you were to actually conduct a real study that attempts to assess the political bias of Google News’s search results, you almost couldn’t have picked a worse search term to test that theory on than “Trump”.

    Google, not surprisingly, denies these charges. But it’s the people who work for companies dedicated to improving how their clients show up in search results who give the most convincing responses, since their businesses literally depend on understanding Google’s algorithms:


    Google’s news search software doesn’t work the way the president says it does, according to Mark Irvine, senior data scientist at WordStream, a company that helps firms get websites and other online content to show up higher in search results. The Google News system gives weight to how many times a story has been linked to, as well as to how prominently the terms people are searching for show up in the stories, Irvine said.

    “The Google search algorithm is a fairly agnostic and apathetic algorithm towards what people’s political feelings are,” he said.

    “Their job is essentially to model the world as it is,” said Pete Meyers, a marketing scientist at Moz, which builds tools to help companies improve how they show up in search results. “If enough people are linking to a site and talking about a site, they’re going to show that site.”

    Trump’s concern is that search results about him appear negative, but that’s because the majority of stories about him are negative, Meyers said. “He woke up and watched his particular flavor and what Google had didn’t match that.”

    All that said, it’s not like the black-box nature of the algorithms behind things like Google’s search engine isn’t a legitimate topic of public interest. And that’s part of why these farcical tweets are so dangerous: the Big Tech giants like Google, Facebook, and Twitter know that it’s not impossible that they’ll be subject to algorithmic regulation someday. And they’re going to want to push that day off for as long as possible. So when Trump makes these kinds of complaints, it’s not at all inconceivable that he’s going to get the response from these companies that he wants as they attempt to placate him. It’s also highly likely that if these companies do decide to placate him, they’re not going to publicly announce it. Instead they’ll just start rigging their algorithms to serve up more pro-Trump content, and more right-wing content in general.

    Also keep in mind that, despite the reputation of Silicon Valley as being run by a bunch of liberals, the reality is Silicon Valley has a strong right-wing libertarian faction, and there’s going to be no shortage of people at these companies that would love to inject a right-wing bias into their services. Trump’s stunt gives that right-wing faction of Silicon Valley leadership an excuse to do exactly that from a business standpoint.

    So if you use Google News to see what the latest news is on “Trump” and you suddenly find that it’s mostly good news, keep in mind that that’s actually really, really bad news, because it means this stunt worked.

    Posted by Pterrafractyl | August 28, 2018, 3:55 pm
