- Spitfire List - http://spitfirelist.com -

The Cambridge Analytica Microcosm in Our Panoptic Macrocosm

Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook. Which would be a well-deserved unfriending after the scandalous revelations in a recent series of articles centered around the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with the far-right goals and questionable actions of the firm.

And it turns out those questionable actions by Cambridge Analytica involve a far larger and more scandalous Facebook policy brought to light by another whistle-blower, Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012.

So here’s a rough breakdown of what’s been learned so far:

According to Christopher Wylie, Cambridge Analytica was “harvesting” massive amounts of data off of Facebook from people who did not give their permission, by utilizing a Facebook loophole. This “friends permissions” loophole allowed app developers to scrape information not just from the Facebook profiles of the people who agreed to use their apps but also from their friends’ profiles. In other words, if your Facebook friend downloaded Cambridge Analytica’s app, Cambridge Analytica was allowed to grab private information from your Facebook profile without your permission. And you would never know it.

So how many profiles was Cambridge Analytica allowed to “harvest” utilizing this “friends permission” feature? About 50 million, and only a tiny fraction (~270,000) of those 50 million people actually agreed to use Cambridge Analytica’s app. The rest were all their friends. So Facebook literally allowed the connectivity of Facebook users to be used against them.

Keep in mind that this isn’t a new revelation. There were reports last year about how Cambridge Analytica paid ~100,000 people a dollar or two (via Amazon’s Mechanical Turk micro-task platform) to take an online survey. But the only way they could be paid was to download an app that gave Cambridge Analytica access to the profiles of all their Facebook friends, eventually yielding ~30 million “harvested” profiles [1]. Although according to these new reports that number is closer to 50 million profiles.

Before that, there was also a report from December of 2015 about Cambridge Analytica’s building of “psychographic profiles” for the Ted Cruz campaign. And that report also included the fact that this involved Facebook data harvested largely without users’ permission [2].

So the fact that Cambridge Analytica was secretly harvesting private Facebook user data without users’ permission isn’t the big revelation here. What’s new is the revelation that what Cambridge Analytica did was integral to Facebook’s business model for years, and remarkably widespread.

This is where Sandy Parakilas comes into the picture. According to Parakilas, this profile-scraping loophole that Cambridge Analytica was exploiting with its app was routinely exploited by possibly hundreds of thousands of other app developers for years. Yep. It turns out that Facebook had an arrangement going back to 2007 where the company would get a 30 percent cut of the money app developers made off their Facebook apps, and in exchange these developers were given the ability to scrape the profiles of not just the people who used their apps but also their friends. In other words, Facebook was essentially selling the private information of its users to app developers. Secretly. Well, except it wasn’t a secret to all those app developers. That’s also part of this scandal.

This “friends permission” feature started getting phased out around 2012, although it turns out Cambridge Analytica was one of the very last apps allowed to use it up into 2014.

Facebook has tried to defend itself by asserting that it was only making this available for things like academic research and that Cambridge Analytica was therefore misusing that data. And academic research was in fact the cover story Cambridge Analytica used. Cambridge Analytica actually set up a shell company, Global Science Research (GSR), that was run by a Cambridge University professor, Aleksandr Kogan, and claimed to be purely interested in using that Facebook data for academic research. The collected data was then sent off to Cambridge Analytica. But according to Parakilas, Facebook was allowing developers to utilize this “friends permissions” feature for reasons as vague as “improving user experiences”. Parakilas saw plenty of apps harvesting this data for commercial purposes. Even worse, both Parakilas and Wylie paint a picture of Facebook releasing this data and then doing almost nothing to ensure that it was not misused.

So we’ve learned that Facebook was allowing app developers to “harvest” private data on Facebook users without their permission from 2007 to 2014, and now we get to perhaps the most chilling part: According to Parakilas, this data is almost certainly floating around in the black market. It was so easy to set up an app and start collecting this kind of data that anyone with basic app-creation skills could start trawling Facebook for data. And a majority of Facebook users probably had their profiles secretly “harvested” during this period. If true, that means there’s likely a massive black market of Facebook user profiles just floating around out there, and Facebook has done little to nothing to address this.

Parakilas, whose job it was to police data breaches by third-party software developers from 2011 to 2012, understandably grew quite concerned over the risks to user data inherent in this business model. So what did Facebook’s leadership do when he raised these concerns? They essentially took a “do you really want to know how this data is being used?” attitude and actively discouraged him from investigating how this data might be abused. Intentionally not knowing about abuses was another part of the business model. Cracking down on “rogue developers” was very rare, and the approval of Facebook CEO Mark Zuckerberg himself was required to get an app kicked off the platform.

Facebook has been publicly denying allegations like this for years. It was the public denials that led Parakilas to come forward.

And it gets worse. It turns out that Aleksandr Kogan, the University of Cambridge academic who ended up teaming up with Cambridge Analytica and built the app that harvested the data, had a remarkably close working relationship with Facebook. So close that Kogan actually co-authored an academic study published in 2015 with Facebook employees. In addition, one of Kogan’s partners in the data harvesting, Joseph Chancellor, was also an author on the study and went on to join Facebook a few months after it was published.

It also looks like Steve Bannon was overseeing this entire process, although he claims to know nothing.

Oh, and Palantir, the private intelligence firm with deep ties to the US national security state owned by far right Facebook board member Peter Thiel, appears to have had an informal relationship with Cambridge Analytica this whole time, with Palantir employees reportedly traveling to Cambridge Analytica’s office to help build the psychological profiles. And this state of affairs is an extension of how the internet has been used from its very conception a half century ago.

And that’s all part of why the Great Unfriending of Facebook really is long overdue. It’s one really big reason to delete your Facebook account, composed of many, many, many small egregious reasons.

So let’s start taking a look at those many small reasons to delete your Facebook account with a look at a New York Times story about Christopher Wylie and his account of the origins of Cambridge Analytica and the crucial role Facebook “harvesting” played in providing the company with the data it needed to carry out the goals of its chief financiers: waging the kind of ‘culture war’ the billionaire far-right Mercer family and Steve Bannon wanted to wage [3]:

The New York Times

How Trump Consultants Exploited the Facebook Data of Millions

by Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr;
03/17/2018

As the upstart voter-profiling company Cambridge Analytica [4] prepared to wade into the 2014 American midterm elections, it had a problem.

The firm had secured a $15 million investment from Robert Mercer [5], the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.

So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.

Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”

“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”

Details of Cambridge’s acquisition [6] and use of Facebook data [2] have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate [7] about the merits of the firm’s so-called psychographic modeling techniques.

But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.

Cambridge paid to acquire the personal information through an outside researcher who, Facebook says, claimed to be collecting it for academic purposes.

During a week of inquiries from The Times, Facebook downplayed the scope of the leak and questioned whether any of the data still remained out of its control. But on Friday, the company posted a statement [8] expressing alarm and promising to take action.

“This was a scam — and a fraud,” Paul Grewal, a vice president and deputy general counsel at the social network, said in a statement to The Times earlier on Friday. He added that the company was suspending Cambridge Analytica, Mr. Wylie and the researcher, Aleksandr Kogan, a Russian-American academic, from Facebook. “We will take whatever steps are required to see that the data in question is deleted once and for all — and take action against all offending parties,” Mr. Grewal said.

Alexander Nix, the chief executive of Cambridge Analytica [9], and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.

In Britain, Cambridge Analytica is facing intertwined investigations by Parliament and government regulators into allegations that it performed illegal work on the “Brexit” campaign. The country has strict privacy laws, and its information commissioner announced on Saturday [10] that she was looking into whether the Facebook data was “illegally acquired and used.”

In the United States, Mr. Mercer’s daughter, Rebekah, a board member, Mr. Bannon and Mr. Nix received warnings from their lawyer that it was illegal to employ foreigners in political campaigns, according to company documents and former employees.

Congressional investigators have questioned Mr. Nix about the company’s role in the Trump campaign. And the Justice Department’s special counsel, Robert S. Mueller III, has demanded [11] the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.

While the substance of Mr. Mueller’s interest is a closely guarded secret, documents viewed by The Times indicate that the firm’s British affiliate claims to have worked in Russia and Ukraine. And the WikiLeaks founder, Julian Assange, disclosed in October [12] that Mr. Nix had reached out to him during the campaign in hopes of obtaining private emails belonging to Mr. Trump’s Democratic opponent, Hillary Clinton.

The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.” Only a tiny fraction of the users had agreed to release their information to a third party.

“Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”

Still, he added, “it’s a serious abuse of our rules.”

Reading Voters’ Minds

The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.

Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.

The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.

Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth [13].

Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.

Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.

The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.

Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.

But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.

Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.

When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”

He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept [6] — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.

“We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.”

Mr. Nix tells a different story. Appearing before a parliamentary committee last month, he described Dr. Kogan’s contributions as “fruitless.”

An International Effort

Just as Dr. Kogan’s efforts were getting underway, Mr. Mercer agreed to invest $15 million in a joint venture with SCL’s elections division. The partners devised a convoluted corporate structure, forming a new American company, owned almost entirely by Mr. Mercer, with a license to the psychographics platform developed by Mr. Wylie’s team, according to company documents. Mr. Bannon, who became a board member and investor, chose the name: Cambridge Analytica.

The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European.

But in July 2014, an American election lawyer advising the company, Laurence Levy, warned that the arrangement could violate laws limiting the involvement of foreign nationals in American elections.

In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, the lawyer, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.”

In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican.

Cambridge Analytica, in its statement to The Times, said that all “personnel in strategic roles were U.S. nationals or green card holders.” Mr. Nix “never had any strategic or operational role” in an American election campaign, the company said.

Whether the company’s American ventures violated election laws would depend on foreign employees’ roles in each campaign, and on whether their work counted as strategic advice under Federal Election Commission rules.

Cambridge Analytica appears to have exhibited a similar pattern in the 2016 election cycle, when the company worked for the campaigns of Mr. Cruz and then Mr. Trump. While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees.

Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.

Cambridge executives have offered conflicting accounts about the use of psychographic data on the campaign. Mr. Nix has said that the firm’s profiles helped shape Mr. Trump’s strategy — statements disputed by other campaign officials — but also that Cambridge did not have enough time to comprehensively model Trump voters.

In a BBC interview last December, Mr. Nix said that the Trump efforts drew on “legacy psychographics” built for the Cruz campaign.

After the Leak

By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.

Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”

Near the end of that year, a report in The Guardian revealed [2] that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.

Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”

Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”

But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.

While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.

Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations.

All the scrutiny appears to have damaged Cambridge Analytica’s political business. No American campaigns or “super PACs” have yet reported paying the company for work in the 2018 midterms, and it is unclear whether Cambridge will be asked to join Mr. Trump’s re-election campaign.

In the meantime, Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” he puts it [14]. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.

———-

“How Trump Consultants Exploited the Facebook Data of Millions” by Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr; The New York Times; 03/17/2018 [3]

“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”

Cambridge Analytica was supposed to be the arsenal of weapons to fight the culture war Cambridge Analytica’s leadership wanted to wage. But that arsenal couldn’t be built without data on what makes us ‘tick’. That’s where Facebook profile harvesting came in:

The firm had secured a $15 million investment from Robert Mercer [5], the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.

So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.

Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”

And the acquisition of these 50 million Facebook profiles was never acknowledged by Facebook, until now. And most or perhaps all of that data is still in the hands of Cambridge Analytica:


But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.

And Facebook isn’t alone in suddenly discovering that its data was “harvested” by Cambridge Analytica. Cambridge Analytica itself wouldn’t admit this either. Until now. Now Cambridge Analytica admits it did indeed obtain Facebook’s data. But the company blames it all on Aleksandr Kogan, the Cambridge University academic who ran the front company that paid people to take the psychological profile surveys, for violating Facebook’s data usage rules. It also claims it deleted all the “harvested” information two years ago, as soon as it learned there was a problem. That’s Cambridge Analytica’s new story and it’s sticking to it. For now:


Alexander Nix, the chief executive of Cambridge Analytica [9], and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.

But Christopher Wylie has a very different recollection of events. In 2013, Wylie was a 24-year-old political operative with ties to veterans of President Obama’s campaigns, who was interested in using psychological traits to affect voters’ behavior. He even had a team of psychologists and data scientists, some of them affiliated with Cambridge University (where Aleksandr Kogan was also working at the time). And that expertise in psychological profiling for political purposes is why Mr. Nix recruited Wylie and his team.

Then Nix had a chance meeting with Steve Bannon and Robert Mercer. Mercer showed interest in the company because he believed it could make him a Republican kingmaker, while Bannon was focused on the possibility of using personality profiling to shift America’s culture and rewire its politics. The Mercers ended up investing $1.5 million in a pilot project: polling voters and testing psychographic messaging in Virginia’s 2013 gubernatorial race:


The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.

Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.

The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.

Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth [13].

Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.

Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.

So the pilot project proceeded, but there was a problem: Wylie’s team simply did not have the data it needed. They only had the kind of data traditional analytics firms had: voting records and consumer purchase histories. And acquiring the kind of data that could give insight into voters’ neuroticism and other psychological traits could be very expensive:


The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.

Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.

But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.

And that’s where Aleksandr Kogan enters the picture: First, Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of setup he needed. Researchers there claimed to have developed techniques for mapping personality traits based on what people “liked” on Facebook. Better yet, this team already had an app that paid users small sums to take a personality quiz, an app that would scrape private information from their Facebook profiles and from their friends’ Facebook profiles. In other words, Cambridge University’s Psychometrics Centre was already employing exactly the same kind of “harvesting” model Kogan and Cambridge Analytica eventually used.

But there was a problem for Wylie and his team: Cambridge University’s Psychometrics Centre declined to work with them:


Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
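
At its core, the technique described above, predicting personality traits from what people have “liked,” amounts to a regression over a users-by-likes matrix: each page like carries a learned weight toward a trait score. Here is a minimal sketch of the idea (the pages, weights, and trait below are entirely hypothetical; the real models were fit against millions of personality-quiz results):

```python
# Illustrative sketch of likes-based trait scoring. The page names and weights
# below are invented for illustration; in practice such weights came from
# regressing personality-quiz scores on a large users-by-likes matrix.

# Hypothetical learned weights for a single trait (say, extraversion)
weights = {
    "Skydiving": 0.9,
    "House Parties": 0.7,
    "Chess": -0.4,
    "Poetry": -0.2,
}

def trait_score(user_likes):
    """Sum the learned weights of the pages a user has liked; pages the
    model has never seen contribute nothing."""
    return sum(weights.get(page, 0.0) for page in user_likes)

print(trait_score(["Skydiving", "Chess", "Gardening"]))  # roughly 0.5
```

With enough likes and enough traits, scores like these are what get matched against voter files to build a psychographic profile.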

But it wasn’t a particularly big problem because Wylie found another Cambridge University psychology professor who was familiar with the techniques and willing to do the job: Aleksandr Kogan. So Kogan built his own psychological profile app and began harvesting data for Cambridge Analytica in June 2014. Kogan was even allowed to keep the harvested data for his own research according to his contract with Cambridge Analytica. According to Facebook, the only thing Kogan told them and told the users of his app in the fine print was that he was collecting information for academic purposes. Although Facebook didn’t appear to have ever attempted to verify that claim:


When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”

In the end, Kogan’s app managed to “harvest” 50 million Facebook profiles based on a mere 270,000 people actually signing up for Kogan’s app. So for each person who signed up for the app there were ~185 other people who had their profiles sent to Kogan too.
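
The arithmetic behind that ratio is straightforward. A quick back-of-the-envelope check (illustrative only; it ignores overlap between friend lists and simply uses the figures reported above):

```python
# Back-of-the-envelope check of the reported "friends permission" amplification.
# The figures are the ones reported above; overlap between friend lists is ignored.

consenting_users = 270_000       # people who actually installed Kogan's app
harvested_profiles = 50_000_000  # total profiles reportedly obtained

# Average profiles harvested per consenting app user (the user plus friends)
amplification = harvested_profiles / consenting_users
print(round(amplification))  # ~185
```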

And 30 million of those profiles contained information like places of residence that allowed them to match that Facebook profile with other records (presumably non-Facebook records) and build psychographic profiles, implying that those 30 million records were mapped to real life people:


He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept [6] — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.

So this harvesting starts in mid-2014, but by early 2015, Wylie and more than half his original team leave the firm to start a rival firm, although it sounds like concerns over the far-right causes they were working for were also behind their departure:


By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.

Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”

Finally, this whole scandal goes public. Well, at least partially: At the end of 2015, the Guardian reported on the Facebook profile collection scheme Cambridge Analytica was running for the Ted Cruz campaign. Facebook never publicly acknowledged the truth of this report, but it did publicly state that it was “carefully investigating this situation.” Facebook also sent a letter to Cambridge Analytica demanding that it destroy the data…except the letter wasn’t sent until August of 2016.


Near the end of that year, a report in The Guardian revealed [2] that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.

Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”

Facebook now claims that “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But, of course, this was a lie. The New York Times was shown sets of the raw data.

And even more disturbing, a former Cambridge Analytica employee claims he recently saw hundreds of gigabytes on Cambridge Analytica’s servers. Unencrypted. Which means that data could potentially be grabbed by any Cambridge Analytica employee with access to that server:


Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”

But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.

While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.

So, to summarize the key points from this New York Times article:

1. In 2013, Cambridge Analytica is formed when Alexander Nix, then a salesman for the small elections division at SCL Group, recruits Christopher Wylie and a team of psychologists to help develop a “political data” unit at the company, with an eye on the 2014 US mid-terms.

2. By chance, Nix and Wylie meet Steve Bannon and Robert Mercer, who are quickly sold on the idea of psychographic profiling for political purposes. Bannon was intrigued by the idea of using this data to wage the “culture war.” Mercer agrees to invest $1.5 million in a pilot project involving the Virginia gubernatorial race. Their success is limited as Wylie soon discovers that they don’t have the data they really need to carry out their psychographic profiling project. But Robert Mercer remained committed to the project.

3. Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of data they were seeking. Data that was being collected via an app administered through Facebook, where people were paid small amounts of money to take a survey, and in exchange Cambridge University’s Psychometrics Centre was allowed to scrape their Facebook profile as well as the profiles of all their Facebook friends.

4. Cambridge University’s Psychometrics Centre rejected Wylie’s offer to work with them, but another Cambridge University psychology professor, Aleksandr Kogan, was willing to do so. Kogan proceeded to start a company (as a front for Cambridge Analytica) and develop his own app, getting ~270,000 people to download it and give their permission for their profiles to be collected. But using the “friends permission” feature, Kogan’s app ended up collecting ~50 million Facebook profiles in all, the rest coming from the friends of those 270,000 people. ~30 million of those profiles were matched to US voters.

5. By early 2015, Wylie and his left-leaning team members leave Cambridge Analytica and form their own company, apparently due to concerns over the far right goals of the firm.

6. Cambridge Analytica goes on to work for the Ted Cruz campaign. In late 2015, it’s reported that Cambridge Analytica’s work for Cruz involved using Facebook data from people who didn’t give permission for it. Facebook issues a vague statement about how it’s going to investigate.

7. In August 2016, Facebook sends a letter to Cambridge Analytica asserting that the data was obtained and used without permission and must be deleted immediately. The New York Times was just shown copies of exactly that data to write this article. Hundreds of gigabytes of data that is completely outside Facebook’s control.

8. Cambridge Analytica CEO (now former CEO) Alexander Nix told lawmakers that the firm didn’t possess any Facebook data. So he was clearly lying.

9. Finally, a former Cambridge Analytica employee showed the New York Times hundreds of gigabytes of Facebook data. And it was unencrypted, so anyone with access to it could make a copy and give it to whoever they want.

And that’s what we learned from just the New York Times’s version of this story. The Observer, the Guardian’s sister paper, was also talking with Christopher Wylie and other Cambridge Analytica whistle-blowers. And while it largely covers the same story as the New York Times report, the Observer article contains some additional details.
1. For starters, the following article notes that Facebook’s “platform policy” allowed the collection of friends’ data only to improve user experience in the app, and barred it from being sold on or used for advertising. That’s important to note because the stated use of the data grabbed by Aleksandr Kogan’s app was for research purposes. But “improving user experience in the app” is a far more generic justification for grabbing that data than academic research. And that hints at something we’re going to see below from a Facebook whistle-blower: all sorts of app developers were grabbing this kind of data through the ‘friends’ loophole for reasons that had absolutely nothing to do with academic purposes, and Facebook deemed this fine.

2. Facebook didn’t formally suspend Cambridge Analytica and Aleksandr Kogan from the platform until one day before the Observer article was published, more than two years after the initial reports in late 2015 about Cambridge Analytica misusing Facebook data for the Ted Cruz campaign. So if Facebook felt that Cambridge Analytica and Aleksandr Kogan were improperly obtaining and misusing its data, it sure tried hard not to let on until the very last moment.

3. Simon Milner, Facebook’s UK policy director, when asked by UK MPs if Cambridge Analytica had Facebook data, said: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.” Which, again, as we’re going to see, was a total lie according to a Facebook whistle-blower, because Facebook was routinely providing exactly the kind of data Kogan’s app collected to thousands of developers.

4. Aleksandr Kogan had a license from Facebook to collect profile data, but for research purposes, so when he used the data for commercial purposes he was violating his agreement, according to the article. Also, Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps. And as we’re going to see in subsequent articles, it does indeed look like Kogan is correct and he was very open about using the data from the Cambridge Analytica app for commercial purposes and Facebook had no problem with this.

5. In addition to being a Cambridge University professor, Aleksandr Kogan has links to a Russian university and took Russian grants for research. This will undoubtedly raise speculation about the possibility that Kogan’s data was handed over to the Kremlin and used in the social-media influencing campaign carried out by the Kremlin-linked Internet Research Agency. If so, it’s still important to keep in mind that, based on what we’re going to see from Facebook whistle-blower Sandy Parakilas, the Kremlin could have easily set up all sorts of Facebook apps for collecting this kind of data because apparently anyone could do it as long as the data was for “improving the user experience”. That’s how obscene this situation is. Kogan was not at all needed to provide this data to the Kremlin because it was so easy for anyone to obtain. In other words, we should assume all sorts of governments have this kind of data.

6. The legal letter sent by Facebook to Cambridge Analytica in August 2016 demanding that it delete the data was sent just days before it was officially announced that Steve Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him. That sure makes it seem like Facebook knew about Bannon’s involvement with Cambridge Analytica and knew that Bannon was about to become Trump’s campaign manager and bring Cambridge Analytica into the campaign.

7. Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.”

So as we can see, like the proverbial onion, the more layers you peel back on the story Cambridge Analytica and Facebook have been peddling about how this data was obtained and used, the more acrid and malodorous it gets. With a distinct tinge of BS [15]:

The Guardian

Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach

Whistleblower describes how firm linked to former Trump adviser Steve Bannon compiled user data to target American voters

Carole Cadwalladr and Emma Graham-Harrison

Sat 17 Mar 2018 18.03 EDT

The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in one of the tech giant’s biggest ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box.

A whistleblower has revealed to the Observer how Cambridge Analytica – a company owned by the hedge fund billionaire Robert Mercer, and headed at the time by Trump’s key adviser Steve Bannon – used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.

Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.

The New York Times is reporting [3] that copies of the data harvested for Cambridge Analytica could still be found online; its reporting team had viewed some of the raw data.

The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians [16] by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.

Cambridge Analytica and Facebook are one focus of an inquiry into data and politics by the British Information Commissioner’s Office. Separately, the Electoral Commission is also investigating what role Cambridge Analytica played in the EU referendum.

On Friday, four days after the Observer sought comment for this story, but more than two years after the data breach was first reported, Facebook announced [8] that it was suspending Cambridge Analytica and Kogan from the platform, pending further information over misuse of data. Separately, Facebook’s external lawyers warned the Observer it was making “false and defamatory” allegations, and reserved Facebook’s legal position.

The revelations provoked widespread outrage. The Massachusetts Attorney General Maura Healey announced that the state would be launching an investigation. “Residents deserve answers immediately from Facebook and Cambridge Analytica,” she said on Twitter.

The Democratic senator Mark Warner said the harvesting of data on such a vast scale for political targeting underlined the need for Congress to improve controls. He has proposed an Honest Ads Act to regulate online political advertising the same way as television, radio and print. “This story is more evidence that the online political advertising market is essentially the Wild West. Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, it’s clear that, left unregulated, this market will continue to be prone to deception and lacking in transparency,” he said.

Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news: that the company did not have or use private Facebook data.

Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”

Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”

Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.

At the time of the data breach, Wylie was a Cambridge Analytica employee, but Facebook described him as working for Eunoia Technologies, a firm he set up on his own after leaving his former employer in late 2014.

The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.

That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.

“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.

Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.

“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”

Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”

A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.

Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.

Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.

Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.

The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.

The algorithm and database together made a powerful political tool [17]. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.

“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.

At the time, more than 50 million profiles represented around a third of active North American Facebook users, and nearly a quarter of potential US voters. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.”

Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.

GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.

SCL Elections, an affiliate, worked with Facebook over the period to ensure it was satisfied no terms had been “knowingly breached” and provided a signed statement that all data and derivatives had been deleted, he said. Cambridge Analytica also said none of the data was used in the 2016 presidential election.

Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquires to Nix.

———-

“Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach” by Carole Cadwalladr and Emma Graham-Harrison; The Guardian; 03/17/2018 [15]

“Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.””

Exploiting everyone’s inner demons. Yeah, that sounds like something Steve Bannon [18] and Robert Mercer [19] would be interested in. And it explains why Facebook data would have been potentially so useful for exploiting those demons. Recall that the original non-Facebook data that Christopher Wylie and the initial Cambridge Analytica team were working with in 2013 and 2014 wasn’t seen as effective. It didn’t have that inner-demon-influencing granularity. And then they discovered the Facebook data available through this app loophole and the effort was taken to a different level. Remember when Facebook ran that controversial experiment on users where it tried to manipulate their emotions by altering their news feeds [20]? It sounds like that’s what Cambridge Analytica was basically trying to do using Facebook ads instead of the news feed, but perhaps in a more microtargeted way.

And that’s all because Facebook’s “platform policy” allowed for the collection of friends’ data to “improve user experience in the app” with the non-enforced request that the data not be sold on or used for advertising:


The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians [16] by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.

Just imagine how many app developers were using this during the 2007-2014 period when Facebook had a “platform policy” that allowed the capture of friends’ data “to improve user experience in the app”. It wasn’t just Cambridge Analytica that took advantage of this. That’s a big part of the story here.

And yet when Simon Milner, Facebook’s UK policy director, was asked if Cambridge Analytica had Facebook data, he said, “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”:


Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news: that the company did not have or use private Facebook data.

Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”

Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”

And note how the article appears to say the dossier of evidence Wylie supplied included “emails, invoices, contracts and bank transfers that reveal more than 50 million profiles.” It’s not clear if that’s a reference to emails, invoices, contracts and bank transfers involved in setting up Cambridge Analytica’s operation, or to emails, invoices, contracts and bank transfers from Facebook users. But if they came from users, that would be wildly scandalous:


Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.

So it will be interesting to see if that point of ambiguity is ever clarified somewhere. Because wow would that be scandalous if emails, invoices, contracts and bank transfers of Facebook users were released through this “platform policy”.

Either way, it looks unambiguously awful for Facebook. Especially now that we learn that the letter Facebook’s lawyers sent in August of 2016 demanding the destruction of the harvested data was suspiciously timed just days before Steve Bannon, a founder and officer of Cambridge Analytica, became Trump’s campaign manager and brought the company into the Trump campaign:


The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.

That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.

“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.

And the only thing Facebook did to confirm that the Facebook data wasn’t misused, according to Christopher Wylie, was to ask that a box be checked on a form:


Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.

“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”

And, again, Facebook denied its data was passed along to Cambridge Analytica when questioned by both the US Congress and UK Parliament:


Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”

A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.

And note how Facebook now admits Aleksandr Kogan did indeed get the data legally; it just wasn’t used properly. That’s why Facebook is saying it shouldn’t be called a “data breach”: the data was obtained through proper channels, so there was nothing to breach:


Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.

Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.

But Aleksandr Kogan isn’t simply arguing that he did nothing wrong when he obtained that Facebook data via his app. Kogan also argues that he had a “close working relationship” with Facebook, which had granted him permission for his apps, and that everything he did with the data was legal. Kogan’s story is quite notable because, as we’ll see below, there is evidence that it is the closest to the truth of all the stories we’re hearing: Facebook was totally fine with Kogan’s apps obtaining the private data of millions of Facebook users’ friends. And Facebook was perfectly fine with how that data was used, or was at least consciously trying not to know how it might be misused. That’s the picture that’s going to emerge, so keep it in mind when Kogan asserts that he had a “close working relationship” with Facebook. Based on the available evidence, he probably did:


Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.

Kogan maintains everything he did was legal, and guess what? It probably was legal. That’s part of the scandal here.

And regarding that testimony by Cambridge Analytica’s now-former CEO Alexander Nix that the company never worked with Facebook data, note how the Observer got to see a copy of the contract Cambridge Analytica entered into with Kogan’s GSR, and the contract was entirely premised on harvesting and processing Facebook data. Which, again, hints at the likelihood that they thought what they were doing at the time (2014) was completely legal. They talked about it in the contract:


The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.

“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.

Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.

GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.

““The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.”

A contract to create a ‘gold standard’ of 2 million Facebook accounts that are ‘matched’ to real life voters for the use of “understanding personality from Facebook profile information.” That was the actual contract Kogan had with Cambridge Analytica. All for the purpose of developing a system that would allow Cambridge Analytica to infer your inner demons from your Facebook profile and then manipulate them.

So it’s worth noting how the app permissions setup Facebook allowed from 2007-2014, letting app developers collect the Facebook profile information of the people who used their apps and of their friends, created this amazing arrangement where app developers could generate a ‘gold standard’ from the people using their apps and a test set from all their friends. If the goal was getting people to encourage their friends to download an app, that would have been a very useful data set. But it would of course also have been an incredibly useful data set for anyone who wanted to collect the profile information of Facebook users. Because, again, as we’re going to see, a Facebook whistle-blower is claiming that Facebook user profile information was routinely handed out to app developers.

So if an app developer wanted to experiment on, say, how to use that available Facebook profile information to manipulate people, getting a ‘gold standard’ of people to take a psychological profile survey would be an important step in carrying out that experiment. Because those people who take your psychological survey form the data set you can use to train your algorithms that take Facebook profile information as the input and create psychological profile data as the output.

And that’s what Aleksandr Kogan’s app was doing: grabbing psychological information from the survey while simultaneously grabbing the Facebook profile data of the test-takers, along with the Facebook profile data of all their friends. Kogan’s ‘gold standard’ training set was the people who actually used his app and handed over a bunch of personality information from the survey, and the test set would have been the tens of millions of friends whose data was also collected. Since the goal of Cambridge Analytica was to infer personality characteristics from people’s Facebook profiles, pairing the personality surveys from the ~270,000 people who took the app survey with their Facebook profiles allowed Cambridge Analytica to train its algorithms that guessed at personality characteristics from Facebook profile information. Then they had the profile information of the remaining ~50 million people to apply those algorithms to.
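To make the train-then-apply pattern described above concrete, here is a minimal sketch of the general technique: fit a model mapping profile features to a surveyed trait on the people who took the survey, then apply it to people who never did. Everything here is invented for illustration (the feature vectors, the trait, the sample sizes); it is not Cambridge Analytica's actual pipeline, just the textbook shape of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down stand-ins: 270 "survey-takers" with 10 profile features
# (think page likes, group memberships, etc.), and 5,000 "friends"
# whose profiles were harvested but who never took the survey.
n_train, n_apply, n_features = 270, 5000, 10

profiles_train = rng.normal(size=(n_train, n_features))  # survey-takers' profiles
true_weights = rng.normal(size=n_features)
survey_scores = profiles_train @ true_weights            # trait measured by the survey

# Fit a linear map from profile features to the surveyed trait
# (the "gold standard" step: survey answers supply the labels).
weights, *_ = np.linalg.lstsq(profiles_train, survey_scores, rcond=None)

# Apply the fitted model to the friends, who supplied no survey answers:
# their trait scores are now inferred purely from profile data.
profiles_friends = rng.normal(size=(n_apply, n_features))
predicted_scores = profiles_friends @ weights
```

The asymmetry is the point: only a small labeled group is ever needed, and the model then scales to everyone whose profile features are available.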

Recall how Trump’s 2016 campaign digital director, Brad Parscale, curiously downplayed the utility of Cambridge Analytica’s data during interviews where he was bragging about how the campaign was using Facebook’s ad micro-targeting features to run “A/B testing on steroids” on micro-targeted audiences, i.e. strategically exposing micro-targeted Facebook audiences to sets of ads that differed in some specific way designed to explore a particular psychological dimension of that micro-audience [21]. So it’s worth noting that the “A/B testing on steroids” Brad Parscale referred to was probably focused on the ~30 million of the ~50 million people whose harvested Facebook profiles could be matched back to real people. Those 30 million Facebook users that Cambridge Analytica had Facebook profile data on were the test set. And the algorithms designed to guess the psychological makeup of people from their Facebook profiles, refined on the training set of ~270,000 Facebook users who took the psychological surveys, were likely unleashed on that test set of ~30 million people.
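For readers unfamiliar with the mechanics, a minimal sketch of what any A/B ad test requires: splitting an audience deterministically into variant groups so each person keeps seeing the same variant. The function and identifiers below are invented for illustration; this is the generic technique, not Parscale's or Facebook's actual implementation.

```python
import hashlib
from collections import Counter

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into ad variant A or B.

    Hashing (experiment, user) means the split is stable across repeated
    ad serves and roughly 50/50 over a large audience.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Simulate bucketing a 10,000-person micro-targeted segment.
counts = Counter(
    assign_variant(f"user{i}", "wall-messaging-test") for i in range(10000)
)
```

The “on steroids” part is simply running many such experiments at once, each on a narrowly targeted segment, and comparing engagement between the variants.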

So when we find out that the Cambridge Analytica contract with Aleksandr Kogan’s GSR company included language like building a “gold standard”, keep in mind that this implied that there was a lot of testing to do after the algorithmic refinements based on that gold standard. And the ~30-50 million profiles they collected from the friends of the ~270,000 people who downloaded Kogan’s app made for quite a test set.

Also keep in mind that the denials that Cambridge Analytica worked with Facebook data by former CEO Alexander Nix aren’t the only laughable denials by Cambridge Analytica’s officers. Any denials by Steve Bannon and his lawyers that he knew about Cambridge Analytica’s use of Facebook profile data should also be seen as laughable, starting with the claim from Bannon’s lawyers that he knows nothing about what Wylie and others are alleging:


Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.

Steve Bannon: the Boss Who Knows Nothing (Or So He Says)

Steve Bannon “knows nothing about the claims being asserted.” LOL! Yeah, well, not according to Christopher Wylie, who, in the following article, makes some rather significant claims about the role of Steve Bannon in all this. According to Wylie:

1. Steve Bannon was the person overseeing the acquisition of Facebook data by Cambridge Analytica. As Wylie put it, “We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss.” Now, when Wylie says Bannon was Nix’s boss, note that Bannon served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016. And Nix was CEO during this period. So technically Nix was the boss. But it sounds like Bannon was effectively the boss, according to Wylie.

2. Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the Facebook data. But Wylie does say that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And Bannon “approved the data-collection scheme we were proposing”. So if Bannon and Mercer didn’t know the details of how the purchase of massive amounts of Facebook data took place, that would be pretty remarkable. Remarkably uncurious, given that acquiring this data was at the core of what the company was doing and they approved the data-collection scheme. A scheme that involved having Aleksandr Kogan set up a separate company. That was the “scheme” Bannon and Mercer would have had to approve, so if they didn’t realize they were acquiring this Facebook data using the “friends permission” feature Facebook made available to app developers, that would have been a significant oversight.

The article goes on to include a few more fun facts, like…

3. Cambridge Analytica was doing focus group tests on voters in 2014 and identified many of the same underlying emotional sentiments in voters that formed the core message behind Donald Trump’s campaign. In focus groups for the 2014 midterms, the firm found that voters responded to calls for building a wall with Mexico, “draining the swamp” in Washington DC, and to thinly veiled forms of racism toward African Americans called “race realism”. The firm also tested voter attitudes towards Russian President Vladimir Putin and discovered that a lot of Americans really like the idea of a really strong authoritarian leader. Again, this was all discovered before Trump even jumped into the race.

4. The Trump campaign rejected early overtures to hire Cambridge Analytica, which suggests that Trump was actually the top choice of the Mercers and Bannon, ahead of Ted Cruz.

5. Cambridge Analytica CEO Alexander Nix was caught by Channel 4 News in the UK boasting about the secrecy of his firm, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

So based on these allegations, Steve Bannon was closely involved in approving the various schemes to acquire Facebook data and was probably using self-destructing emails in the process [22]:

The Washington Post

Bannon oversaw Cambridge Analytica’s collection of Facebook data, according to former employee

By Craig Timberg, Karla Adam and Michael Kranish
March 20, 2018 at 7:53 PM

LONDON — Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.

The 2014 effort was part of a high-tech form of voter persuasion touted by the company, which under Bannon identified and tested the power of anti-establishment messages that later would emerge as central themes in President Trump’s campaign speeches, according to Chris Wylie, who left the company at the end of that year.

Among the messages tested were “drain the swamp” and “deep state,” he said.

Cambridge Analytica, which worked for Trump’s 2016 campaign, is now facing questions about alleged unethical practices, including charges that the firm improperly handled the data of tens of millions of Facebook users. On Tuesday, the company’s board announced that it was suspending [23] its chief executive, Alexander Nix, after British television released secret recordings that appeared to show him talking about entrapping political opponents.

More than three years before he served as Trump’s chief political strategist, Bannon helped launch Cambridge Analytica with the financial backing of the wealthy Mercer family as part of a broader effort to create a populist power base [24]. Earlier this year, the Mercers cut ties [25] with Bannon after he was quoted making incendiary comments about Trump and his family.

In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”

Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.

Bannon received more than $125,000 in consulting fees from Cambridge Analytica in 2016 and owned “membership units” in the company worth between $1 million and $5 million, according to his financial disclosure.

It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.

Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.

Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.

Bannon “approved the data-collection scheme we were proposing,” Wylie said.

The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.

“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”

The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also tested views of Russian President Vladimir Putin.

“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”

The controversy over Cambridge Analytica’s data collection erupted in recent days amid news reports that an app created by a Cambridge University psychologist, Aleksandr Kogan, accessed extensive personal data of 50 million Facebook users. The app, called thisisyourdigitallife, was downloaded by 270,000 users. Facebook’s policy, which has since changed, allowed Kogan to also collect data —including names, home towns, religious affiliations and likes — on all of the Facebook “friends” of those users. Kogan shared that data with Cambridge Analytica for its growing database on American voters.

Facebook on Friday banned the parent company of Cambridge Analytica, Kogan and Wylie for improperly sharing that data.

The Federal Trade Commission has opened an investigation [26] into Facebook to determine whether the social media platform violated a 2011 consent decree governing its privacy policies when it allowed the data collection. And Wylie plans to testify [27] to Democrats on the House Intelligence Committee as part of their investigation of Russian interference in the election, including possible ties to the Trump campaign.

Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”

Nix could not be reached for comment.

Cambridge Analytica was set up as a U.S. affiliate of British-based SCL Group, which had a wide range of governmental clients globally, in addition to its political work.

Wylie said that Bannon and Nix first met in 2013, the same year that Wylie — a young data whiz with some political experience in Britain and Canada — was working for SCL Group. Bannon and Wylie met soon after and hit it off in conversations about culture, elections and how to spread ideas using technology.

Bannon, Wylie, Nix, Rebekah Mercer and Robert Mercer met in Rebekah Mercer’s Manhattan apartment in the fall of 2013, striking a deal in which Robert Mercer would fund the creation of Cambridge Analytica with $10 million, with the hope of shaping the congressional elections a year later, according to Wylie. Robert Mercer, in particular, seemed transfixed by the group’s plans to harness and analyze data, he recalled.

The Mercers were keen to create a U.S.-based business to avoid bad optics and violating U.S. campaign finance rules, Wylie said. “They wanted to create an American brand,” he said.

The young company struggled to quickly deliver on its promises, Wylie said. Widely available information from commercial data brokers provided people’s names, addresses, shopping habits and more, but failed to distinguish on more fine-grained matters of personality that might affect political views.

Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”

After Cruz faded, the Mercers switched their allegiance to Trump and pitched their services to Trump’s digital director, Brad Parscale. The company’s hiring was approved by Trump’s son-in-law, Jared Kushner, who was informally helping to manage the campaign with a focus on digital strategy.

Kushner said in an interview [28] with Forbes magazine that the campaign “found that Facebook and digital targeting were the most effective ways to reach the audiences. …We brought in Cambridge Analytica.” Kushner said he “built” a data hub for the campaign “which nobody knew about, until towards the end.”

Kushner’s spokesman and lawyer both declined to comment Tuesday.

Two weeks before Election Day, Nix told a Post reporter [29] at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”

The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.

Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.

Cambridge Analytica’s parent company, SCL Group, has an ongoing contract with the State Department’s Global Engagement Center. The company was paid almost $500,000 to interview people overseas to understand the mind-set of Islamist militants as part of an effort to counter their online propaganda and block recruits.

Heather Nauert, the acting undersecretary for public diplomacy, said Tuesday that the contract was signed in November 2016, under the Obama administration, and has not expired yet. In public records, the contract is dated in February 2017, and the reason for the discrepancy was not clear. Nauert said that the State Department had signed other contracts with SCL Group in the past.

———-

“Bannon oversaw Cambridge Analytica’s collection of Facebook data, according to former employee” by Craig Timberg, Karla Adam and Michael Kranish; The Washington Post; 03/20/2018 [22]

“Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.”

Steve Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data. That’s what Christopher Wylie claims, and given Bannon’s role as vice president of the company it’s not, on its face, an outlandish claim. And Bannon apparently approved the spending of nearly $1 million to acquire that Facebook data in 2014. Because, according to Wylie, Alexander Nix didn’t actually have permission to spend that kind of money without approval. Bannon, on the other hand, did have permission to make those kinds of expenditure approvals. That’s how high up Bannon was at that company, even though he was technically the vice president while Nix was the CEO:


In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”

Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss…Alexander Nix didn’t have the authority to spend that much money without approval.””

And while Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the data, Wylie does assert that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And, generally speaking, if Bannon was approving $1 million expenditures on acquiring Facebook data, he probably sat in on at least one meeting where they described how they were planning on actually getting the data by spending that money. Don’t forget the scheme involved paying individuals small amounts of money to take the psychological survey on Kogan’s app, so at a minimum you would expect Bannon to know how these apps were going to result in the gathering of Facebook profile information:


It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.

Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.

Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.

Bannon “approved the data-collection scheme we were proposing,” Wylie said.

What’s Bannon hiding by claiming ignorance? Well, that’s a good question after Britain’s Channel 4 News aired a video Tuesday in which Nix was highlighting his firm’s secrecy, including the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing”:


Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”

Self-destructing emails. That’s not suspicious or anything.

And note how Cambridge Analytica was apparently already homing in on a very ‘Trumpian’ message in 2014, long before Trump was on the radar:


The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.

“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”

The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also tested views of Russian President Vladimir Putin.

“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”

Intriguingly, given these early Trumpian findings in their 2014 voter research, it appears that the Trump campaign turned down early overtures to hire Cambridge Analytica, which suggests that Trump really was the top preference for Bannon and the Mercers, not Ted Cruz:


Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”

And as the article reminds us, the Trump campaign has completely denied EVER using Cambridge Analytica’s data [21]. Brad Parscale, Trump’s digital director, claimed he got all the data they were working with from the Republican National Committee:


Two weeks before Election Day, Nix told a Post reporter [29] at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”

The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.

Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.

And that denial by Parscale raises an obvious question: when Parscale claims they only used data from the RNC, it’s clearly very possible that he’s just straight up lying. But it’s also possible that he’s lying while technically telling the truth. Because if Cambridge Analytica gave its data to the RNC, it’s possible the Trump campaign acquired the Cambridge Analytica data from the RNC at that point, giving the campaign a degree of deniability about the use of such scandalously acquired data if the story ever became public. Like now.

Don’t forget that data of this nature would have been potentially useful for EVERY 2016 race, not just the presidential campaign. So if Bannon and Mercer were intent on helping Republicans win across the board, handing that data over to the RNC would have just made sense.

Also don’t forget that the New York Times was shown unencrypted copies of the Facebook data collected by Cambridge Analytica. If the New York Times saw this data, odds are the RNC has too. And who knows who else.

Facebook’s Sandy Parakilas Blows an “Utterly Horrifying” Whistle

It all raises the question of whether or not the Republican National Committee possesses all that Cambridge Analytica/Facebook data right now. And that brings us to perhaps the most scandalous article of all that we’re going to look at. It’s about Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, who is now blowing the whistle on exactly the kind of “friends permission” loophole Cambridge Analytica exploited. And as the following article makes horrifically clear:

1. It’s not just Cambridge Analytica or the RNC that might possess this treasure trove of personal information. It’s the entire data brokerage industry that probably has their hands on this data. Along with anyone who has picked it up through the black market.

2. It was relatively easy to write an app that could exploit this “friends permissions” feature and start trawling Facebook for profile data on app users and their friends. Anyone with basic app coding skills could do it.

3. Parakilas estimates that perhaps hundreds of thousands of developers likely exploited exactly the same ‘for research purposes only’ loophole exploited by Cambridge Analytica. And Facebook had no way of tracking how this data was used by developers once it left Facebook’s servers.

4. Parakilas suspects that this amount of data will inevitably end up on the black market, meaning there is probably a massive amount of personally identifiable Facebook data just floating around for the entire marketing industry and anyone else (like the GOP) to data mine.

5. Parakilas knew of many commercial apps that were using the same “friends permission” feature to grab Facebook profile data and use it for commercial purposes.

6. Facebook’s policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings. That appears to be part of the legal protection Facebook employed when it had this policy: don’t complain, it’s in the fine print.

7. Perhaps most scandalous of all, Facebook took a 30% cut of payments made through apps in exchange for giving these app developers access to Facebook user data. Yep, Facebook was effectively selling user data, but by structuring the sale of this data as a 30% share of the payments made through the app Facebook also created an incentive to help developers maximize the profits they made through the app. So Facebook literally set up a system that incentivized itself to help app developers make as much money as possible off of the user data they were handing over.

8. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users. So as of 2010, ~1 in 10 Facebook apps were using this loophole to grab information about both the users of the app and their friends.

9. While Cambridge Analytica was far from alone in exploiting this loophole, it was actually one of the very last firms given permission to do so. Which means that particular data set collected by Cambridge Analytica could be uniquely valuable simply by being larger and containing more recent data than most other data sets of this nature.

10. When Parakilas brought up these concerns to Facebook’s executives and suggested the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach. One Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.

11. Shortly after arriving at the company’s Silicon Valley headquarters, Parakilas was told that any decision to ban an app required the personal approval of Mark Zuckerberg. Although the policy was later relaxed to make it easier to deal with rogue developers. That said, rogue developers were rarely dealt with.

12. When Facebook eventually phased out this “friends permissions” policy for app developers, it was likely done out of concerns over the commercial value of all this data they were handing out. Executives were apparently concerned that competitors were going to use this data to build their own social networks.

So, as we can see, the entire saga of Cambridge Analytica’s scandalous acquisition of private Facebook profiles on ~50 million Americans is something Facebook made routine for developers of all sorts from 2007-2014, which means this is far from a ‘Cambridge Analytica’ story. It’s a Facebook story about a massive problem Facebook created for itself (for its own profits) [30]:
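To make the mechanics of that “friends permissions” loophole concrete, here is a minimal sketch of the kind of Graph API request an app could construct once a single user authorized it. This is an illustration, not working code against Facebook’s current API: version 1.0 of the Graph API, the last to honor the friends_* permissions, was retired in 2015, and the token and field names below are placeholders.

```python
import urllib.parse

# Graph API v1.0 endpoint (retired in 2015); illustrative only.
GRAPH_V1 = "https://graph.facebook.com/v1.0"

def friends_data_url(access_token, fields=("likes", "birthday", "location")):
    """Build the kind of single request an app could issue once ONE user
    authorized it under the old friends_* permissions: the response covered
    that user's friends, none of whom had consented themselves."""
    query = urllib.parse.urlencode({
        "fields": ",".join(fields),
        "access_token": access_token,
    })
    return "%s/me/friends?%s" % (GRAPH_V1, query)

# A placeholder token, just to show the shape of the request.
url = friends_data_url("ILLUSTRATIVE_TOKEN")
print(url)
```

The point is the asymmetry the article describes: one consenting user, one request, and profile fields for every one of that user’s friends come back.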

The Guardian

‘Utterly horrifying’: ex-Facebook insider says covert data harvesting was routine

Sandy Parakilas says numerous companies deployed these techniques – likely affecting hundreds of millions of users – and that Facebook looked the other way

Paul Lewis in San Francisco
Tue 20 Mar 2018 07.46 EDT

Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to Cambridge Analytica, according to a new whistleblower.

Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.

“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.

Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.

Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.

“It has been painful watching,” he said, “because I know that they could have prevented it.”

Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”

Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.

He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.

He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”

Facebook did not respond to a request for comment on the information supplied by Parakilas, but directed the Guardian to a November 2017 blogpost [31] in which the company defended its data sharing practices, which it said had “significantly improved” over the last five years.

“While it’s fair to criticise how we enforced our developer policies more than five years ago, it’s untrue to suggest we didn’t or don’t care about privacy,” that statement said. “The facts tell a different story.”

‘A majority of Facebook users’

Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.

That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.

The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.

Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.

Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.

Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.

Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.

Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”

Facebook announced on Monday that it had hired a digital forensics firm to conduct an audit of Cambridge Analytica. The decision comes more than two years after Facebook was made aware of the reported data breach.

During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.

While the previous policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings, Parakilas said he believed the policy was problematic.

“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”

It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.

The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.

While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.

“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps [32], concluded that around 11% of third-party developers requested data belonging to friends of users.

If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.

The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.

Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.

He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.

“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”

‘They treated it like a PR exercise’

Parakilas said he lobbied internally at Facebook for “a more rigorous approach” to enforcing data protection, but was offered little support. His warnings included a PowerPoint presentation he said he delivered to senior executives in mid-2012 “that included a map of the vulnerabilities for user data on Facebook’s platform”.

“I included the protective measures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do malicious things with the data,” he said. “On the list of bad actors I included foreign state actors and data brokers.”

Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”

That changed, Parakilas said, when he heard the congressional testimony given by Facebook lawyers to Senate and House investigators in late 2017 about Russia’s attempt to sway the presidential election. “They treated it like a PR exercise,” he said. “They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.”

It was at that point that Parakilas decided to go public with his concerns, writing an opinion article in the New York Times [33] that said Facebook could not be trusted to regulate itself. Since then, Parakilas has become an adviser to the Center for Humane Technology [34], which is run by Tristan Harris, a former Google employee turned whistleblower on the industry.

———-

“‘Utterly horrifying’: ex-Facebook insider says covert data harvesting was routine” by Paul Lewis; The Guardian; 03/20/2018 [30]

“Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.”

The platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012: That’s who is making these claims. In other words, Sandy Parakilas is indeed someone who should be intimately familiar with Facebook’s policies of handing user data over to app developers because it was his job to ensure that data wasn’t breached.

And as Parakilas makes clear, he wasn’t actually able to do his job. Once the data left Facebook’s servers after being handed over to app developers, Facebook had no idea what developers were doing with it and apparently no interest in learning:


“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.

Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.

Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.

“It has been painful watching,” he said, “because I know that they could have prevented it.”

Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”

And this complete lack of oversight by Facebook led Parakilas to assume there was “something of a black market” for that Facebook data. But when he raised these concerns with fellow executives he was warned not to look. Not knowing how this data was being used was ironically part of Facebook’s legal strategy, it seems:


Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.

He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.

He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”

“They felt that it was better not to know. I found that utterly shocking and horrifying.”

Well, at least one person at Facebook was utterly shocked and horrified by the “better not to know” policy towards handing personal private information over to developers. And that one person, Parakilas, left the company and is now a whistle-blower.

And one of the things that made Parakilas particularly concerned that this practice was widespread among apps was the fact that it was so easy to create apps that could then just be released onto Facebook to trawl for profile data from users and their unwitting friends:


The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.

And while rogue app developers were at times dealt with, it was exceedingly rare: Parakilas did not witness a single audit of a developer’s systems during his time there.

Even more alarming is that Facebook was apparently quite keen on encouraging app developers to grab this Facebook profile data, using it as an incentive to encourage even more app development. Apps were seen as so important to Facebook that Mark Zuckerberg himself had to give his personal approval to ban an app. And while that policy was later relaxed to no longer require Zuckerberg’s approval, it doesn’t sound like that change actually resulted in more apps getting banned:


Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.

Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”

During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.

So how many Facebook users likely had their private profile information harvested via this ‘fine print’ feature that allowed app developers to scrape the profiles of app users and their friends? According to Parakilas, probably a majority of Facebook users. So that black market of Facebook profiles probably includes a majority of Facebook users. But even more amazing is that Facebook handed out this personal user information to app developers in exchange for a 30% share of the money they made through the app. Facebook was basically directly selling private user data to developers, which is a big reason why Parakilas’s estimate that a majority of Facebook users were impacted is likely true. Especially if, as Parakilas hints, the number of developers grabbing user profile information via these apps might be in the hundreds of thousands. That’s a lot of developers potentially feeding into that black market:


‘A majority of Facebook users’

Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.

That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.

The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.

Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.

Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.

Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.

During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.

“Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.”

And that, right there, is perhaps the biggest scandal here: Facebook just handed user data away in exchange for revenue streams from app developers. And this was a key element of its business model during the 2007-2014 period. “Read the fine print” in the terms of service was the excuse they used:


“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”

It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.

And this is all why Aleksandr Kogan’s assertions that he had a close working relationship with Facebook and did nothing technically wrong actually do seem to be backed up by Parakilas’s whistle-blowing. Both because it’s hard to see what Kogan did that wasn’t part of Facebook’s business model, and because it’s hard to ignore that Kogan’s GSR shell company was one of the very last apps permitted to exploit the “friends permission” loophole. That sure does suggest that Kogan really did have a “close working relationship” with Facebook. So close he got seemingly favored treatment, especially compared to the seemingly vast number of apps that were using this “friends permissions” feature: 1 in 10 Facebook apps, according to a 2010 study:


The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.

While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.

“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps [32], concluded that around 11% of third-party developers requested data belonging to friends of users.

If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.

““Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps [32], concluded that around 11% of third-party developers requested data belonging to friends of users.”

As of 2010, around 11 percent of app developers requested data belonging to friends of users. Keep that in mind when Facebook claims that Aleksandr Kogan improperly obtained data from the friends of the people who downloaded Kogan’s app.
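The amplification built into that loophole is worth spelling out. A quick back-of-envelope check, using only the figures already quoted in this piece (270,000 consenting app users yielding roughly 50 million profiles, and 11% of the 1,800 apps analyzed in the 2010 study):

```python
# Back-of-envelope arithmetic on the figures quoted above.
app_users = 270_000       # people who actually installed Kogan's quiz app
profiles = 50_000_000     # profiles ultimately harvested
amplification = profiles / app_users
print(round(amplification))   # roughly 185 friend profiles per consenting user

apps_studied = 1_800      # apps analyzed in the 2010 study
friend_share = 0.11       # share requesting friends' data
print(round(apps_studied * friend_share))  # roughly 198 of those 1,800 apps
```

In other words, every person who consented effectively delivered the profiles of nearly two hundred friends who never did, which is why a 270,000-user app could balloon into a 50-million-profile data set.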

So what made Facebook eventually end this “friends permissions” policy in mid-2014? While Parakilas had already left the company by then, he does recall conversations with executives who were nervous about competitors building their own social networks from all the data Facebook was giving away:


Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.

He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.

“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”

That’s how much data Facebook was handing out to encourage new app development: so much data that they were concerned about creating competitors.

Finally, it’s important to note that the picture painted by Parakilas only goes up to the end of 2012, when he left in frustration. So we don’t actually have testimony from Facebook insiders who, like Parakilas, were involved with app data breaches during the period when Cambridge Analytica was engaged in its mass data collection scheme:


Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”

Now, it seems like a safe bet that the problem only got worse after Parakilas left given how the Cambridge Analytica situation played out, but we don’t know yet just how bad it was by that point.

Aleksandr Kogan: Facebook’s Close Friend (Until He Belatedly Wasn’t)

So, factoring in what we just saw with Parakilas’s claims about the extent to which Facebook was handing out private Facebook profile data – the internal profile that Facebook builds up about you – to app developers for widespread commercial applications, let’s take a look at some of the claims Aleksandr Kogan has made about his relationship with Facebook. Because while Kogan makes some extraordinary claims, they are also consistent with Parakilas’s claims, although in some cases Kogan’s description actually goes much further than Parakilas’s.

For instance, according to the following Observer article …

1. In an email to colleagues at the University of Cambridge, Aleksandr Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions.

2. Kogan also claims in that email that the contract his GSR company signed with Facebook in 2014 made it absolutely clear the data was going to be used for commercial applications and that app users were granting Kogan’s company the right to license or resell the data. “We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.” So Kogan says he made it clear to Facebook and to users that the app was for commercial purposes and that the data might be resold, which sounds like the kind of situation Sandy Parakilas said he witnessed, except even more open (and which should be easily verifiable if the app code still exists).

3. Facebook didn’t actually kick Kogan off of its platform until March 16th of this year, just days before this story broke. Which is consistent with Kogan’s claims that he had a good working relationship with Facebook.

4. When Kogan founded Global Science Research (GSR) in May 2014, he co-founded it with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.

5. Facebook provided Kogan’s University of Cambridge lab with the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level”: 57 billion Facebook relationships in all. The data was anonymized and aggregated, so it didn’t include details on individual Facebook friendships, just aggregate “friend” counts at the national level. The data was used to publish a study in Personality and Individual Differences in 2015, and two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. But it’s still a sign that Kogan is indeed being honest when he says he had a close working relationship with Facebook. It’s also a reminder that if Facebook’s claim that it was handing out data for “research purposes” only were true, it would have handed out anonymized, aggregated data like it did in this situation with Kogan.

6. That study co-authored by Kogan’s team and Facebook didn’t just use the anonymized aggregated friendship data. The study also used non-anonymized Facebook data collected through Facebook apps using exactly the same techniques Kogan’s app for Cambridge Analytica used. This study was published in August of 2015. Again, it was a study co-authored by Facebook. GSR co-founder Joseph Chancellor left GSR a month later and joined Facebook as a user experience researcher in November 2015. Recall that it was a month after that, December 2015, when we saw the first news reports of Ted Cruz’s campaign using Facebook data. Also recall that Facebook responded to that December 2015 report by saying it would look into the matter. Facebook finally sent Cambridge Analytica a letter in August of 2016, days before Steve Bannon became Trump’s campaign manager, asking that Cambridge Analytica delete the data. So the fact that Facebook co-authored a paper with Kogan and Chancellor in August of 2015, and that Chancellor joined Facebook in November 2015, is a pretty significant bit of context for evaluating Facebook’s behavior. Because Facebook didn’t just know it had worked closely with Kogan. It also knew it had co-authored an academic paper using data gathered with the same technique Cambridge Analytica was charged with using.

7. Kogan does challenge one of the claims by Christopher Wylie. Specifically, Wylie claimed that Facebook became alarmed over the volume of data Kogan’s app was scooping up (50 million profiles) but Kogan assuaged those concerns by saying it was all for research. Kogan says this is a fabrication and Facebook never actually contacted him expressing alarm.

So, according to Aleksandr Kogan, Facebook really did have an exceptionally close relationship with Kogan and Facebook really was totally on board with what Kogan and Cambridge Analytica were doing [35]:

The Guardian

Facebook gave data about 57bn friendships to academic
Volume of data suggests trusted partnership with Aleksandr Kogan, says analyst

Julia Carrie Wong and Paul Lewis in San Francisco
Thu 22 Mar 2018 10.56 EDT
Last modified on Sat 24 Mar 2018 22.56 EDT

Before Facebook suspended Aleksandr Kogan [8] from its platform for the data harvesting “scam [3]” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.

Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships [36] published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.

A University of Cambridge press release [37] on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.

“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”

Facebook downplayed the significance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was literally numbers – numbers of how many friendships were made between pairs of countries – ie x number of friendships made between the US and UK,” Facebook spokeswoman Christine Chen said by email. “There was no personally identifiable information included in this data.”

Facebook’s relationship with Kogan has since soured.

“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported [2] that the data had been obtained by Cambridge Analytica.

“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.

On Friday 16 March, in anticipation of the Observer [38]’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans [15], Facebook suspended Kogan from the platform, issued a statement [8] saying that he “lied” to the company, and characterised [3] his activities as “a scam – and a fraud”.

On Tuesday, Facebook went further, saying [39] in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement [40] on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.

But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting [2] activities – for it to inform the users whose personal information was improperly shared.

And Kogan has offered a defence of his actions in an interview [41] with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.

The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.

Between June and August of that year, GSR paid approximately 270,000 individuals to use a Facebook questionnaire app that harvested data from their own Facebook profiles, as well as from their friends, resulting in a dataset of more than 50 million users. The data was subsequently given to Cambridge Analytica, in what Facebook has said was a violation of Kogan’s agreement to use the data solely for academic purposes.

In his email to colleagues at Cambridge, Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions. CNN first reported on the Cambridge email. Kogan did not respond to the Guardian’s request for comment on this article.

“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”

Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.

“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.

Whistleblower Christopher Wylie told the Observer [42] that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and … we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”

The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.

For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.

The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.

———-

“Facebook gave data about 57bn friendships to academic” by Julia Carrie Wong and Paul Lewis; The Guardian; 03/22/2018 [35]

“Before Facebook suspended Aleksandr Kogan [8] from its platform for the data harvesting “scam [3]” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.

An anonymized, aggregate dataset of 57bn Facebook friendships sure makes it a lot easier to take Kogan at his word when he claims a close working relationship with Facebook.

Now, keep in mind that the anonymized data was aggregated at the national level, so it’s not as if Facebook gave Kogan a list of 57 billion Facebook friendships. And when you think about it, that aggregated anonymized data is far less sensitive than the personal Facebook profile data Kogan and other app developers were routinely grabbing during this period. It’s the fact that Facebook gave this data to Kogan in the first place that lends credence to his claims.

But the biggest factor lending credence to Kogan’s claims is the fact that Facebook co-authored a study with Kogan and others at the University of Cambridge using that anonymized aggregated data. Two Facebook employees were named as co-authors of the study. That is definitely a sign of a close working relationship:


Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships [36] published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.

A University of Cambridge press release [37] on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.

“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”

Even more damning for Facebook is that the research co-authored by Kogan, Facebook, and other researchers didn’t just include the anonymized aggregated data. It also included a second dataset of non-anonymized data that was harvested in exactly the same way Kogan’s GSR app worked. And while Facebook apparently wasn’t involved in that part of the study, that’s beside the point. Facebook clearly knew about it if it co-authored the study:


The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.

For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.

The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.

But, alas, Kogan’s relationship with Facebook has since soured, with Facebook now acting as if Kogan had totally violated its trust. And yet it’s hard to ignore the fact that Kogan wasn’t formally kicked off Facebook’s platform until March 16th of this year, just a few days before all these stories about Kogan and Facebook were about to go public:


Facebook’s relationship with Kogan has since soured.

“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported [2] that the data had been obtained by Cambridge Analytica.

“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.

On Friday 16 March, in anticipation of the Observer [38]’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans [15], Facebook suspended Kogan from the platform, issued a statement [8] saying that he “lied” to the company, and characterised [3] his activities as “a scam – and a fraud”.

On Tuesday, Facebook went further, saying [39] in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement [40] on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.

““The entire company is outraged we were deceived.” And on Wednesday, in his first public statement [40] on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.”

Mark Zuckerberg is complaining about a “breach of trust.” LOL!

And yet Facebook has yet to explain the nature of its relationship with Kogan or why it was that they didn’t kick him off the platform until only recently. But Kogan has an explanation: He’s a scapegoat and he wasn’t doing anything Facebook didn’t know he was doing. And when you notice that Kogan’s co-founder of GSR, Joseph Chancellor, is now a Facebook employee, it’s hard not to take his claims seriously:


But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting [2] activities – for it to inform the users whose personal information was improperly shared.

And Kogan has offered a defence of his actions in an interview [41] with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.

The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.

But if Kogan’s claims are to be taken seriously, we have a pretty serious scandal on our hands. Because Kogan claims that not only did he make it clear to Facebook and his app users that the data they were collecting was for commercial use – with no mention of academic or research purposes of the University of Cambridge – but he also claims that he made it clear the data GSR was collecting could be licensed and resold. And Facebook at no point raised any concerns at all about any of this:


“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”

Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.

“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.

Now, it’s worth noting that the casual acceptance of the commercial use of the data collected over these Facebook apps, and the potential licensing and reselling of that data, is actually a far more serious situation than the one Sandy Parakilas described during his time at Facebook. Recall that, according to Parakilas, all app developers had to tell Facebook was that they were going to use the profile data on app users and their friends to ‘improve the user experience.’ Commercial apps were fine from Facebook’s perspective. But Parakilas didn’t describe a situation where app developers openly made it clear they might license or resell the data. So Kogan’s claim that it was clear his app had commercial applications and might involve reselling the data is even more egregious than the situation Parakilas described. But don’t forget that Parakilas left Facebook in late 2012 and Kogan’s app would have been approved in 2014, so it’s entirely possible Facebook’s policies got even more permissive after Parakilas left.

And it’s worth noting how Kogan’s claims differ from Christopher Wylie’s. Wylie asserts that Facebook grew alarmed by the volume of data GSR’s app was pulling from Facebook users and Kogan assured them it was for research purposes. Whereas Kogan says Facebook never expressed any alarm at all:


Whistleblower Christopher Wylie told the Observer [42] that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and … we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”

So as we can see, when it comes to Facebook’s “friends permissions” data sharing policy, its arrangement with Aleksandr Kogan was probably one of the more responsible ones it engaged in because, hey, at least Kogan’s work was ostensibly for research purposes and involved at least some anonymized data.

Cambridge Analytica’s Informal Friend: Palantir

And as we can also see, the more we learn about this situation, the harder it gets to dismiss Kogan’s claims that Facebook is making him a scapegoat in order to cover up not just the relationship Facebook had with Kogan but the fact that what Kogan was doing was routine for app developers for years.

But as the following New York Times article makes clear, Facebook’s relationship with Aleksandr Kogan isn’t the only working relationship Facebook needs to worry about that might lead back to Cambridge Analytica. Because it turns out there’s another Facebook connection to Cambridge Analytica and it’s potentially far, far more scandalous than Facebook’s relationship with Kogan: It turns out Palantir might be the originator of the idea to create Kogan’s app for the purpose of collecting psychological profiles. That’s right, according to documents the New York Times has seen, Palantir, the private intelligence firm with a close relationship with the US national security state, was in talks with Cambridge Analytica from 2013-2014 about psychologically profiling voters and it was an employee of Palantir who raised the idea of creating that app in the first place.

And this is of course wildly scandalous if true, because Palantir was founded by Facebook board member Peter Thiel, who also happens to be a far right political activist and a close ally of President Trump.

But it gets worse. And weirder. Because it sounds like one of the people encouraging SCL (Cambridge Analytica’s parent company) to work with Palantir was none other than Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt.

Keep in mind that this isn’t the first time we’ve heard about Palantir’s ties to Cambridge Analytica and Sophie Schmidt’s role in this. It was reported by the Observer last May [43]. According to that May 2017 article in the Observer, Schmidt was passing through London in June of 2013 when she decided to call up her former boss at SCL and recommend that they contact Palantir. Also of interest: if you look at the current version of that Observer article, all mention of Sophie Schmidt has been removed and there’s a note that the article is the subject of legal complaints on behalf of Cambridge Analytica LLC and SCL Elections Limited [17]. But in the original article [44] she’s mentioned quite extensively. It would appear that someone is very upset about the Sophie Schmidt angle to this story.

So the Palantir/Sophie Schmidt side of this story isn’t new. But we’re learning a lot more about that relationship now. For instance:

1. In early 2013, Cambridge Analytica CEO Alexander Nix, an SCL director at the time, and a Palantir executive discussed working together on election campaigns.

2. An SCL employee wrote to a colleague in a June 2013 email that Schmidt was pushing them to work with Palantir: “Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?”

3. According to Christopher Wylie’s testimony to lawmakers, “There were Palantir staff who would come into the office and work on the data…And we would go and meet with Palantir staff at Palantir.” Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014.

4. The Palantir employee who floated the idea of creating the app ultimately built by Aleksandr Kogan is Alfredas Chmieliauskas. Chmieliauskas works on business development for Palantir, according to his LinkedIn page.

5. Palantir and Cambridge Analytica never formally started working together. A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. Wylie acknowledges that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge Analytica’s psychographic models. In other words, while there was never a formal relationship, there was a pretty significant informal one.

6. Mr. Chmieliauskas was in communication with Wylie’s team in 2014, during the period when Cambridge Analytica was initially trying to convince the University of Cambridge team to work with them. Recall that Cambridge Analytica initially discovered that the University of Cambridge team had exactly the kind of data they were interested in, collected via a Facebook app, but the negotiations ultimately failed, and it was then that Cambridge Analytica found Aleksandr Kogan, who agreed to create his own app. Well, according to this report, it was Chmieliauskas who initially suggested that Cambridge Analytica create its own version of the University of Cambridge team’s app as leverage in those negotiations. In essence, Chmieliauskas wanted Cambridge Analytica to show the University of Cambridge team that it could collect the information itself, presumably to drive a harder bargain. And when those negotiations failed, Cambridge Analytica did indeed create its own app after teaming up with Kogan.

7. Palantir asserts that Chmieliauskas was acting in his own capacity when he continued communicating with Wylie and made the suggestion to create their own app. Palantir initially told the New York Times that it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Palantir later revised this, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.

And, again, do not forget that Palantir is owned by Peter Thiel, the far right [45] billionaire [46] who was an early investor in Facebook and remains one of Facebook’s board members to this day [47]. He was also a Trump delegate in 2016 [48] and was in discussions with the Trump administration to lead the powerful President’s Intelligence Advisory Board, although he ultimately turned that offer down [49]. Oh, and he’s an advocate of the Dark Enlightenment [50].

Basically, Peter Thiel was a member of the ‘Alt Right’ before that term was ever coined. And he’s a very powerful influence at Facebook. So learning that Palantir and Cambridge Analytica were in discussions about working together on election projects in 2013 and 2014, that a Palantir employee was advising Cambridge Analytica during the negotiations with the University of Cambridge team, and that Palantir employees helped engineer Cambridge Analytica’s psychographic models based on Facebook data is the kind of revelation that just might qualify as the most scandalous in this entire mess [51]:

“Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018 [51]

As a start-up called Cambridge Analytica [3] sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica [52] went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel [53] — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie [54], a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday.

The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook.

The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.

“We learned today that an employee, in 2013-2014, engaged in an entirely personal capacity with people associated with Cambridge Analytica,” the company said. “We are looking into this and will take the appropriate action.”

The company said it was continuing to investigate but knew of no other employees who took part in the effort. Mr. Wylie told lawmakers that multiple Palantir employees played a role.

Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.

In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.

A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.

In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.

“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.

———-

“Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018 [51]

“The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel [53] — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

Yep, a Facebook board member’s private intelligence firm was working closely with Cambridge Analytica as it developed its psychological profiling technology. It’s quite a revelation. The kind of explosive revelation that had Palantir first denying that there was any relationship at all, followed by an acknowledgment/denial that, yes, a Palantir employee, Alfredas Chmieliauskas, was indeed working with Cambridge Analytica, but not on behalf of Palantir:


It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.

Adding to the scandalous nature of it all is that Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt, suddenly appeared in June of 2013 to urge her old bosses at SCL to link up with Palantir:


Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.

But this June 2013 proposal by Sophie Schmidt wasn’t what started Cambridge Analytica’s relationship with Palantir. That reportedly began in early 2013, when Alexander Nix and a Palantir executive discussed working together on election campaigns:


In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.

So Sophie Schmidt swooped in to promote Palantir to Cambridge Analytica months after the negotiations began. It raises the question of who encouraged her to do that.

Palantir now admits these negotiations happened, but claims that it chose not to work with Cambridge Analytica because its executives “wanted to steer clear of election work.” And the emails indicate that Palantir did indeed formally turn down the idea: Nix and Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. And yet, according to Christopher Wylie, some Palantir employees helped engineer Cambridge’s psychographic models. That suggests Palantir turned down a formal relationship in favor of an informal one:


A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.

In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

“There were Palantir staff who would come into the office and work on the data…And we would go and meet with Palantir staff at Palantir.”

That sure sounds like a relationship! Formal or not.

And that informal relationship continued during the period in 2014 when Cambridge Analytica was in negotiations with the University of Cambridge’s Psychometrics Centre:


In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.

And it was during those negotiations, in May of 2014, that Chmieliauskas first proposed the idea of simply replicating what the University of Cambridge’s Psychometrics Centre was doing as leverage in the negotiations. When those negotiations ultimately failed, Cambridge Analytica found another Cambridge University psychologist, Aleksandr Kogan, to build the app for them:


“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.

And that’s what we know so far about the relationship between Cambridge Analytica and Palantir. It raises a number of questions, like whether or not this informal relationship continued well after Cambridge Analytica started harvesting all that Facebook information. Let’s look at eight key points about what we know of Palantir’s involvement so far:

1. Palantir employees helped build the psychographic profiles.

2. Mr. Chmieliauskas was in contact with Wylie at least as late as May of 2014 as Cambridge Analytica was negotiating with the University of Cambridge’s Psychometrics Centre.

3. We don’t know when this informal relationship between Palantir and Cambridge Analytica ended.

4. We don’t know if the informal relationship between Palantir and Cambridge Analytica – which largely appears to center around Mr. Chmieliauskas – really was largely Chmieliauskas’s initiative alone after Palantir initially rejected a formal relationship (it’s possible) or if Chmieliauskas was directed to pursue this relationship informally but on behalf of Palantir to maintain deniability in the case of awkward situations like the present one (also very possible, and savvy given the current situation).

5. We don’t know if the Palantir employees who helped build those psychographic profiles were working with the data Cambridge Analytica harvested from Facebook or with the earlier, inadequate data sets that didn’t include the Facebook data. If the Palantir employees helped build the psychographic profiles based on the Facebook data, that implies this informal relationship went on a lot longer than May of 2014, since that’s when the data first started getting collected via Kogan’s app. How long? We don’t yet know.

6. Neither do we know how much of this data ultimately fell into the hands of Palantir. As Wylie described it, “There were Palantir staff who would come into the office and work on the data…And we would go and meet with Palantir staff at Palantir.” So did those Palantir employees who were working on “the data” take any of that data back to Palantir?

7. For that matter, given that Peter Thiel sits on the board of Facebook, and given how freely Facebook hands out this kind of data, we have to ask whether Palantir already has direct access to exactly the kind of data Cambridge Analytica was harvesting. Did Palantir even need Cambridge Analytica’s data? Perhaps Palantir was already using apps of its own to harvest this kind of data? We don’t know. At the same time, don’t forget that even if Palantir had ready access to the same Facebook profile data gathered by Kogan’s app, it’s still possible Palantir would have had an interest in the company purely to see how the data was analyzed and learn from that. In other words, for Peter Thiel’s Palantir the interest in Cambridge Analytica may have been more about the algorithms than the data. Don’t forget that if anyone is the real power behind the throne at Facebook, it’s probably Thiel.

8. What on earth is going on with Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt, pushing Cambridge Analytica to work with Palantir in June of 2013, months after Cambridge Analytica and Palantir began talking with each other? That seems potentially significant.

Those are just some of the questions raised by Palantir’s ambiguously ominous relationship with Cambridge Analytica. But don’t forget that it’s not just Palantir about which we need to ask these kinds of questions. For instance, what about Steve Bannon’s Breitbart? Does Breitbart, home of the neo-Nazi ‘Alt Right’, also have access to all that harvested Cambridge Analytica data? Not just the raw Facebook data but also the processed psychological profiles of 50 million Americans that Cambridge Analytica generated. Does Breitbart have the processed profiles too? And what about the Republican Party? And all the other entities out there who gained access to this Facebook profile data? Just how many different entities around the globe possess that Cambridge Analytica data set?

It’s Not Just Cambridge Analytica. Or Facebook. Or Google. It’s Society.

Of course, as we saw with Sandy Parakilas’s whistle-blower claims, when it comes to the question of who might possess Facebook profile data harvested during the 2007-2014 period when Facebook had its “friends permissions” policy, the list of suspects includes potentially hundreds of thousands of developers and anyone who has purchased this information on the black market.

Don’t forget one of the other amazing aspects of this whole situation: if hundreds of thousands of developers were using this feature to scrape user profiles, that means this really was an open secret. Lots and lots of people were doing this. For years. So, like many scandals, perhaps the most scandalous part of it is that we’re learning about something we should have known all along, and many of us did know all along. It’s not like it’s a secret that people are being surveilled in detail in the internet age and that this data is being stored and aggregated in public and private databases and put up for sale. We’ve collectively known this all along. At least on some level.

And yet this surveillance is so pervasive that it’s almost never thought about on a moment by moment basis at an individual level. When people browse the web they presumably aren’t thinking about the volume of tracking cookies and other personal information slurped up as a result of that mouse click. Nor are they thinking about how that click contributes to the numerous personal profiles of them floating around the commercial data brokerage marketplace. So in a more fundamental sense we don’t actually know we’re being surveilled because we’re not thinking about it.

It’s one example of how humans aren’t wired to naturally think about the macro forces impacting their lives in day-to-day decisions, which was fine when we were cavemen but becomes a problematic instinct when we’re literally mastering the laws of physics and shaping our world and environment. From physics and nature to history and contemporary trends, the vast majority of humanity spends very little time studying these topics. That’s completely understandable given the lack of time or resources to do so, but that understandable instinct creates a world perfectly set up for abuse by surveillance states, both public and private, which makes it less understandable and much more problematic.

So, in the interest of gaining perspective on how we got to this point, where Facebook emerged as an ever-growing Panopticon just a few short years after its conception, let’s take a look at one last article. It’s by investigative journalist Yasha Levine, who recently published the must-read book Surveillance Valley: The Secret Military History of the Internet [55]. It’s a book filled with vital historical fun facts about the internet. Fun facts like…

1. How the internet began as a system built for national security purposes, with a focus on military hardware and command-and-control communications in general. But there was also a focus on building a system that could collect, store, process, and distribute the massive volumes of information used to wage the Vietnam War. Beyond that, these early computer networks also acted as a collection and sharing system for dealing with domestic national security concerns (concerns that centered around tracking anti-war protesters, civil rights activists, etc.). That’s what the internet started out as: a system for storing data about people and conflict for US national security purposes.

2. Building databases of profiles on people (foreign and domestic) was one of the very first goals of these internet predecessors. In fact, one of the key visionaries behind the development of the internet, Ithiel de Sola Pool, both helped shape the early internet as a surveillance and counterinsurgency technology and pioneered data-driven election campaigns. He even started a private firm to do this: Simulmatics. Pool’s vision was a world where the surveillance state acted as a benign master that kept the peace by using superior knowledge to nudge people in the ‘right’ direction.

3. This vision of vast databases of personal profiles was largely a secret at first, but it didn’t remain that way. There was actually quite a bit of public paranoia in the US about these internet predecessors, especially within the anti-Vietnam War activist communities. Flash forward a couple of decades and that paranoia has faded almost entirely…until scandals like the current one erupt and we temporarily grow concerned.

4. What Cambridge Analytica is accused of doing is what data giants like Facebook and Google do every day and have been doing for years. And it’s not just the giants. Smaller firms are scooping up vast amounts of information too…it’s just not as vast as what the giants are collecting. Even cute apps, like the wildly popular Angry Birds, have been found to collect all sorts of data about users.

5. While it’s great that public attention is being directed at the kind of sleazy, manipulative activities Cambridge Analytica was engaging in, deceptively wielding real power over real unwitting people, it is a wild mischaracterization to act like Cambridge Analytica was exerting mass mind-control over the masses using internet marketing voodoo. What Cambridge Analytica, or any of the other sleazy manipulators, did was indeed influential, but it needs to be viewed in the context of a political state of affairs in which massive numbers of Americans, including Trump voters, really have been collectively failed by the American power establishment for decades. The collapse of the American middle class and the rise of the plutocracy created the kind of macro environment in which a carnival barker like Donald Trump could use firms like Cambridge Analytica to ‘nudge’ people toward voting for him. In other words, focusing on Cambridge Analytica’s manipulation of people’s psychological profiles without recognizing the massive political failures of the last several decades in America – the mass socioeconomic failures of the American embrace of ‘Reaganomics’ and right-wing economic gospel, coupled with the American Left’s failure to effectively repudiate these doctrines – is profoundly ahistorical. The story of the rise of firms like Facebook, Google, and Cambridge Analytica implicitly includes the story of that entire history of political and socioeconomic failures tied to the rise of the American right wing over the last several decades. We are making a massive mistake if we forget that. Cambridge Analytica wouldn’t have been nearly as effective in nudging people toward voting for someone like Trump if so many people weren’t already so ready to burn the current system down.

These are the kinds of historical chapters that can’t be left out of any analysis of Cambridge Analytica. Because Cambridge Analytica isn’t the exception. It’s an exceptionally sleazy example of the rules we’ve been playing by for a while, whether we realized it or not [56]:

The Baffler

The Cambridge Analytica Con

Yasha Levine,
March 21, 2018

“The man with the proper imagination is able to conceive of any commodity in such a way that it becomes an object of emotion to him and to those to whom he imparts his picture, and hence creates desire rather than a mere feeling of ought.”

Walter Dill Scott, Influencing Men in Business: Psychology of Argument and Suggestion (1911)

This week, Cambridge Analytica, the British election data outfit funded by billionaire Robert Mercer and linked to Steven Bannon and President Donald Trump, blew up the news cycle. The charge, as reported by twin exposés in the New York Times [3] and the Guardian [42], is that the firm inappropriately accessed Facebook profile information belonging to 50 million people and then used that data to construct a powerful internet-based psychological influence weapon. This newfangled construct was then used to brainwash-carpet-bomb the American electorate, shredding our democracy and turning people into pliable zombie supporters of Donald Trump.

In the words of a pink-haired Cambridge Analytica data-warrior-turned-whistleblower, the company served as a digital armory that turned “Likes” into weapons and produced “Steve Bannon’s psychological warfare mindfuck tool.”

Scary, right? Makes me wonder if I’m still not under Cambridge Analytica’s influence right now.

Naturally, there are also rumors of a nefarious Russian connection. And apparently there’s more dirt coming. Channel 4 News in Britain just published an investigation showing top Cambridge Analytica execs bragging to an undercover reporter that their team uses high-tech psychometric voodoo to win elections for clients all over the world, but also dabbles in traditional meatspace techniques as well: bribes, kompromat, blackmail, Ukrainian escort honeypots—you know, the works.

It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet —and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.

But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society [57], going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.

Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass [58]. It’s Lockheed Martin [59]. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.

All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.

What do these companies know about us, their users? Well, just about everything.

Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.

On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.

It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games [60] like Angry Birds, developed by Finland’s Rovio Entertainment, that’s been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data [61] on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of services for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble [60] that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Mobile Windows like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smart phone market and dominates the mobile for-profit surveillance business [62].

These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.

Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.

The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”
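The lookalike-expansion idea described above – seed the platform with a list of known supporters, then use the social graph to surface similar prospective voters – can be illustrated with a toy sketch. All names, the graph, and the scoring rule here are hypothetical illustrations of the general technique, not Facebook’s actual algorithm:

```python
# Toy sketch of social-graph "lookalike" targeting: rank non-supporters
# by how many of their friends are already on the uploaded supporter list.
# Hypothetical data and scoring; not Facebook's actual system.
from collections import Counter

def rank_lookalikes(supporters, friendships):
    """Return (person, supporter-friend count) pairs, highest count first."""
    supporters = set(supporters)
    scores = Counter()
    for a, b in friendships:  # undirected friendship pairs
        if a in supporters and b not in supporters:
            scores[b] += 1
        if b in supporters and a not in supporters:
            scores[a] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical uploaded supporter list and friend graph.
supporters = ["alice", "bob"]
friendships = [("alice", "carol"), ("bob", "carol"), ("bob", "dave")]
print(rank_lookalikes(supporters, friendships))
# carol (two supporter friends) ranks above dave (one), so carol would
# be the first new target for the ad wave.
```

The real systems obviously score on far richer signals than friend counts, but the structure is the same: a seed audience plus a social graph yields a ranked list of fresh targets.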

Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies [63], and bots [64].

Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star [65] is NGP VAN [66]. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver targeted voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pinpoint accuracy based on their beliefs and views.

Here’s how The National Journal’s Andrew Rice described i360 in 2015:

Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.

Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”

In 2008, the hip young BlackBerry-toting Barack Obama was the first major-party candidate on the national scene to truly leverage the power of internet-targeted agitprop. With help from Facebook cofounder Chris Hughes, who built and ran Obama’s internet campaign division, the first Obama campaign built an innovative micro-targeting initiative to raise huge amounts of money in small chunks directly from Obama’s supporters and sell his message with a hitherto unprecedented laser-guided precision in the general election campaign.

Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted [67], Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion [68]”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained [69] the company’s lead scientist on this study.

On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.

So our present-day freakout over Cambridge Analytica needs to be put in the broader historical context of our decades-long complacency over Silicon Valley’s business model. The fact is that companies like Facebook and Google are the real malicious actors here—they are vital public communications systems that run on profiling and manipulation for private profit without any regulation or democratic oversight from the societies in which they operate. But, hey, let’s blame Cambridge Analytica. Or better yet, take a cue from the Times and blame the Russians [9] along with Cambridge Analytica.

***

There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.

The truth is that the internet has never been about egalitarianism or democracy.

The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting [70].

The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.

Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal [71], where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”

In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project [72] at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
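To make the mechanics concrete, here is a toy sketch in Python of that kind of data-point-driven voter simulation. Every attribute, weight, and voter below is invented for illustration; the real Simulmatics models carved the electorate into 480 voter types and were, of course, far more elaborate:

```python
# Toy sketch of a Pool-style voter simulation. All data points and
# message weights are hypothetical, purely to illustrate the idea of
# scoring a message against a voter's attributes.
from dataclasses import dataclass

@dataclass
class Voter:
    religion: str
    region: str
    income: str

# A "message" is modeled as weights over voter attributes;
# a positive weight means a favorable predicted response.
MESSAGE_WEIGHTS = {
    ("religion", "catholic"): 0.4,
    ("region", "northeast"): 0.2,
    ("income", "low"): 0.1,
    ("income", "high"): -0.2,
}

def predicted_response(voter: Voter) -> float:
    """Sum the weights of every attribute this voter matches."""
    return sum(
        w for (attr, value), w in MESSAGE_WEIGHTS.items()
        if getattr(voter, attr) == value
    )

electorate = [
    Voter("catholic", "northeast", "low"),
    Voter("protestant", "south", "high"),
]
scores = [predicted_response(v) for v in electorate]
```

Run over a modeled electorate, scores like these tell a campaign which segments a message plays well with, so the message can be tuned before it ever airs.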

These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.

Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.

At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”

With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.

This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.

Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.

***

Simulmatics and its first-generation imitators are now ancient history—dating back to the long-ago time when computers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The internet surrounds us, engulfing and monitoring everything we do. We are tracked and watched and profiled every minute of every day by countless companies—from giant platform monopolies like Facebook and Google to boutique data-driven election firms like i360 and Cambridge Analytica.

Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.

Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.

***

And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.

Cambridge Analytica has been one of the lesser bogeymen [73] used to explain Trump’s victory for quite a while [74], going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica’s Facebook heist, was skeptically questioning the company’s technology and its role [7] in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company’s overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.

Yet now, with Robert Mueller’s Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn’t just a far-fetched marketing ploy [75]—it’s a real and present danger to a virtuous info-media status quo. And it’s most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.

It’s a comforting idea for our political elite, but it’s not true. Alexander Nix, Cambridge Analytica’s well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix’s business plan is but an updated version of Ithiel de Sola Pool’s vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we’re kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue’s gallery of bad political actors, but to finger the real culprits behind Donald Trump’s takeover of America, the self-appointed watchdogs of our country’s imperiled political virtue had best take a long and sobering look in the mirror.

———-

“The Cambridge Analytica Con” by Yasha Levine; The Baffler; 03/21/2018 [56]

“It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet —and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.”

Yes indeed, it is great to see that this topic is finally getting the attention it has long deserved. But it’s not great to see the topic limited to Cambridge Analytica and Facebook. As Levine puts it, “We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries.” Societies in general are held together by overt and covert influence, and America has gotten very, very good at it over the last half century. The story of Cambridge Analytica, and the larger story of Sandy Parakilas’s whistle-blowing about mass data collection, can’t really be understood outside that historical context:


But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society [57], going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.

And the first step in putting the Cambridge Analytica story in proper perspective is recognizing that what it is accused of doing – grabbing personal data and building profiles for the purpose of influencing voters – is done every day by entities like Facebook and Google. It’s a regular part of our lives. And you don’t even need to use Facebook or Google to become part of this vast commercial surveillance system. You just need to communicate with someone who does use those platforms:


Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass [58]. It’s Lockheed Martin [59]. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.

All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.

What do these companies know about us, their users? Well, just about everything.

Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.
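A crude sketch of the kind of keyword-driven profiling those patents describe might look like the following Python. The categories and patterns here are invented for illustration only; the actual systems parse meaning, identities, and attachments, not just keywords:

```python
# Hypothetical sketch of rule-based interest profiling from email text.
# The categories and regex patterns are made up for illustration.
import re

PROFILE_RULES = {
    "finance": re.compile(r"\b(bank statement|mortgage|loan)\b", re.I),
    "health": re.compile(r"\b(lab results|prescription|clinic)\b", re.I),
    "parenting": re.compile(r"\b(daycare|stroller|pediatrician)\b", re.I),
}

def profile_email(body: str) -> set[str]:
    """Return the interest categories whose patterns match the text."""
    return {topic for topic, pat in PROFILE_RULES.items() if pat.search(body)}

interests = profile_email(
    "Attached are my lab results; also, did the bank statement arrive?"
)
```

Even this trivial version shows the asymmetry: the sender never opted in to anything, yet one message is enough to tag them with sensitive categories.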

On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.

The next step in contextualizing this is recognizing that Facebook and Google are merely the biggest fish in an ocean of data brokerage markets with many smaller inhabitants trying to do the same thing. This is part of what makes Facebook’s handing over of profile data to app developers so scandalous. Facebook clearly knew there was a voracious market for this information, and it made a lot of money selling into it:


It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games [60] like Angry Birds, developed by Finland’s Rovio Entertainment, which has been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data [61] on its players, including ethnicity, marital status, and sexual orientation—with options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of service for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble [60] that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple’s iPhone or Microsoft’s Windows Mobile like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smartphone market and dominates the mobile for-profit surveillance business [62].

These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.

And when it comes to political campaigns, digital giants like Facebook and Google already have special election units set up to give campaigns privileged access so they can influence voters even more effectively. The stories about the Trump campaign’s use of Facebook “embeds” to run a massive advertising operation of “A/B testing on steroids,” systematically experimenting with voter ad responses [21], are part of the larger story of how these giants have already made the manipulation of voters big business:


Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.

The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”
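The seed-list-plus-social-graph approach described above can be sketched in a few lines of Python. The names, friend graph, and ranking rule are all invented for illustration; real “lookalike” systems draw on far richer signals than raw friend counts:

```python
# Toy sketch of expanding a campaign's uploaded supporter list through a
# social graph: rank non-supporters by how many known supporters they're
# connected to. All users and edges here are hypothetical.
from collections import Counter

seed_supporters = {"alice", "bob"}  # the campaign's uploaded list
friend_graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave": {"alice", "eve"},
}

def lookalike_targets(seeds, graph):
    """Count, for each non-seed user, how many seeds they're friends with."""
    counts = Counter()
    for seed in seeds:
        for friend in graph.get(seed, ()):
            if friend not in seeds:
                counts[friend] += 1
    return counts.most_common()  # best prospects first

targets = lookalike_targets(seed_supporters, friend_graph)
```

The output ranks “carol” above “dave,” since she is connected to more known supporters: exactly the kind of prospect list that then gets hit with the next wave of ads.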

Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies [63], and bots [64].

Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star [65] is NGP VAN [66]. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver targeted voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pinpoint accuracy based on their beliefs and views.

Here’s how The National Journal’s Andrew Rice described i360 in 2015:

Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.

Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”

And offering campaigns special services to manipulate voters isn’t just big business. It’s a largely unregulated business. If Facebook decides to covertly manipulate you by altering its news feed algorithm to show you more news articles from your conservative-leaning friends (or your liberal-leaning friends), that’s totally legal. Because, again, subtly manipulating people is as American as apple pie:


Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted [67], Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion [68]”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained [69] the company’s lead scientist on this study.

On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.
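A toy illustration of how such an imperceptible feed tweak might work, with all posts, scores, and the boost factor invented for the example:

```python
# Hypothetical sketch of a hidden ranking tweak: a multiplier quietly
# boosts posts from friends with one political leaning. The posts,
# base scores, and boost factor are all made up for illustration.
posts = [
    {"author": "ann", "base_score": 1.0, "leaning": "candidate_x"},
    {"author": "ben", "base_score": 1.2, "leaning": "candidate_y"},
    {"author": "cal", "base_score": 0.9, "leaning": "candidate_x"},
]

def rank_feed(posts, boosted_leaning=None, boost=1.5):
    """Sort posts by score, optionally multiplying one leaning's scores."""
    def score(post):
        s = post["base_score"]
        if post["leaning"] == boosted_leaning:
            s *= boost
        return s
    return sorted(posts, key=score, reverse=True)

neutral = [p["author"] for p in rank_feed(posts)]
tilted = [p["author"] for p in rank_feed(posts, boosted_leaning="candidate_x")]
```

The user sees the same friends and the same posts in both cases; only the ordering changes, which is precisely why a tweak like this would be invisible from the outside.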

And this contemporary state of affairs didn’t emerge spontaneously. As Levine covers in Surveillance Valley, this is what the internet – back when it was the ARPANET military network – was all about from its very conception:


There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.

The truth is that the internet has never been about egalitarianism or democracy.

The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting [70].

The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.

“The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting [70].”

And one of the key figures behind this early ARPANET version of the internet, Ithiel de Sola Pool, got his start in this area in the 1950s, working at the Hoover Institution at Stanford University to understand the nature and causes of left-wing revolutions and distill them down to a mathematical formula. Pool, a virulent anti-Communist, also worked for JFK’s 1960 campaign and went on to start a private company, Simulmatics, offering services in modeling and manipulating human behavior based on large data sets about people:


Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal [71], where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”

In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project [72] at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.

These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.

Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.
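The core Simulmatics idea described above — reduce each voter to a vector of data points, then score how a message "lands" with the simulated electorate — can be sketched in a few lines of Python. The three traits and all the numbers below are stand-ins for Pool's 480 data points; everything here is invented for illustration, not a reconstruction of the actual Simulmatics models.

```python
# Hypothetical sketch of the Simulmatics approach: voters as trait vectors,
# messages as emphasis vectors, and a crude score for how a message lands.

def simulate_response(voters, message_weights):
    """Average dot-product of voter traits and message emphasis: a toy
    stand-in for running a message past a simulated electorate."""
    scores = [
        sum(t * w for t, w in zip(traits, message_weights))
        for traits in voters
    ]
    return sum(scores) / len(scores)

# Each voter: (religiosity, racial_conservatism, economic_anxiety), in [0, 1].
# Pool's real models used ~480 such data points per voter.
electorate = [(0.9, 0.2, 0.5), (0.1, 0.8, 0.9), (0.4, 0.5, 0.2)]

# Pre-test two message "emphases" and keep whichever scores higher.
msg_religion = (1.0, 0.0, 0.0)   # leans on religious themes
msg_economy = (0.0, 0.0, 1.0)    # leans on economic anxiety
best = max([msg_religion, msg_economy],
           key=lambda m: simulate_response(electorate, m))
assert best == msg_economy
```

Crude as it is, this is the same basic loop as any modern micro-targeting pipeline: model the audience, simulate the message, fine-tune, repeat.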

And part of what drove Pool was a utopian belief that computers and massive amounts of data could be used to run society harmoniously. Left-wing revolutions were problems to be managed with Big Data. That’s pretty important historical context when thinking about the role Cambridge Analytica played in electing Donald Trump:


At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence analysts to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”

And guess what: the American public wasn’t enamored with Pool’s vision of a world managed by computing technology and Big Data models of society. When people learned in the ’60s and ’70s about these early versions of the internet, inspired by visions of a computer-managed world, they got scared:


With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.

This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.

Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just as they would a car or a bar of soap.

But that fear somehow disappeared in subsequent decades, only to be replaced with a faith in our benevolent techno-elite. And a faith that this mass public/private surveillance system is actually an empowering tool that will lead to a limitless future. And that is perhaps the biggest scandal here: The public didn’t just forget to keep an eye on the powerful. The public forgot to keep an eye on the people whose power is derived from keeping an eye on the public. We built a surveillance state at the same time we fell into a fog of civic and historical amnesia. And that has coincided with the rise of a plutocracy, the dominance of right-wing anti-government economic doctrines, and the larger failure of the American political and economic elites to deliver a society that actually works for average people. To put it another way, the rise of the modern surveillance state is one element of a massive, decades-long process of collectively ‘dropping the ball’. We screwed up massively, and Facebook and Google are just two of the consequences. And yet we still don’t view the Trump phenomenon within the context of that massive collective screw-up, which means we’re still screwing up massively:


Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.

Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.

***

And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.

So the biggest story here isn’t that Cambridge Analytica was engaged in a mass manipulation campaign. And the biggest story isn’t even that Cambridge Analytica was engaged in a cutting-edge commercial mass manipulation campaign. Because both of those stories are eclipsed by the story that even if Cambridge Analytica really was running a cutting-edge commercial campaign, it probably wasn’t nearly as cutting-edge as what Facebook and Google and the other data giants routinely engage in. And this situation has been building for decades, within the context of the much larger scandal of the rise of an oligarchy that more or less runs America by and for powerful interests. Powerful interests that are overwhelmingly dedicated to right-wing elitist doctrines that view the public as a resource to be controlled and exploited for private profit.

It’s all a reminder that, as with so many incredibly complex issues, creating very high quality government is the only feasible answer. A high quality government managed by a self-aware public. Some sort of ‘surveillance state’ is almost an inevitability as long as we have ubiquitous surveillance technology. Even the array of ‘crypto’ tools touted in recent years has consistently proven to be vulnerable, which isn’t necessarily a bad thing, since ubiquitous crypto-technology comes with its own suite of mega-collective headaches [76]. National security and personal data insecurity really are intertwined in both mutually inclusive and exclusive ways. It’s not as if the national security hawk argument that “you can’t be free if you’re dead from [insert war, terror, or the random chaos a national security state is supposed to deal with]” isn’t valid. But fears of Big Brother are also valid, as our present situation amply demonstrates. The path isn’t clear, which is why a national security state with a significant private sector component and access to ample intimate details is likely for the foreseeable future whether you like it or not. People err on the side of immediate safety. So we better have very high quality government. Especially high quality regulations for the private sector components of that national security state.

And while digital giants like Google and Facebook will inevitably have access to troves of personal data in order to offer the kinds of services people need, there’s no reason we can’t regulate them heavily so they don’t become personal data repositories for sale. Which is what they are now.

What do we do about services that people use to run their lives which, by definition, necessitate the collection of private data by a third party? How do we deal with these challenges? Well, again, it starts with being aware of them and actually trying to collectively grapple with them so some sort of general consensus can be arrived at. And that’s all why we need to recognize that it is imperative that the public surveils the surveillance state, along with surveilling the rest of the world going on around us too. A self-aware surveillance state comprised of a self-aware populace that knows what’s going on with its surveillance state and the world. In other words, part of the solution to ‘Big Data Big Brother’ really is a society of ‘Little Brothers and Sisters’ who are collectively very informed about what is going on in the world and politically capable of effecting changes to that surveillance state – and the rest of government or the private sector – when necessary change is identified. In other other words, the one ‘utopian’ solution we can’t afford to give up on is the utopia of a well-functioning democracy populated by a well-informed citizenry. A citizenry well-armed with relevant facts and wisdom (and an extensive understanding of the history and technique of fascism and other authoritarian movements). Because a clueless society will be an abusively surveilled society.

But the fact that this Cambridge Analytica scandal is a surprise, and is being covered largely in isolation from this broader historic and contemporary context, is a reminder that we are nowhere near that democratic ideal of a well-informed citizenry. Well, guess what would be a really valuable tool for surveilling the surveillance state and the rest of the world around us and becoming that well-informed citizenry: the internet! Specifically, we really do need to read and digest growing amounts of information to make sense of an increasingly complex world. But the internet is just the start. The goal needs to be the kind of functional, self-aware democracy where situations like the current one don’t develop in a fog of collective amnesia and can be pro-actively managed. To put it another way, we need an inverse of Ithiel de Sola Pool’s vision of a world in which benevolent elites use computers and Big Data to manage the rabble and ward off political revolutions. Instead, we need a political revolution of the rabble fueled by the knowledge of our history and world that the internet makes widely accessible. And one of the key goals of that political revolution needs to be to create a world where the knowledge the internet makes widely available is used to rein in our elites and build a world that works for everyone.

And yes, that implicitly implies a left-wing revolution, since left-wing democratic movements are the only kind that have everyone in mind. And yes, this implies an economic revolution that systematically frees up time for virtually everyone, so people actually have the time to inform themselves. Economic security and time security. We need to build a world that provides both to everyone.

So when we ask ourselves how we should respond to the growing Cambridge Analytica/Facebook scandal, don’t forget that one of the key lessons the story of Cambridge Analytica teaches us is that there is an immense amount of knowledge about ourselves – our history and contemporary context – that we needed to learn and didn’t. And that includes envisioning what a functional democratic society and economy that works for everyone would look like, and building it. Yes, the internet could be very helpful in that process; just don’t forget about everything else that will be required to build that functional democracy [77].