Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook. Which would be a well-deserved unfriending after the scandalous revelations in a recent series of articles centered on the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with the far-right goals and questionable actions of the firm.
And it turns out those questionable actions by Cambridge Analytica are part of a far larger and more scandalous Facebook policy brought to light by another whistle-blower, Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012.
So here’s a rough breakdown of what’s been learned so far:
According to Christopher Wylie, Cambridge Analytica was “harvesting” massive amounts of data from Facebook users who never gave their permission, by exploiting a Facebook loophole. This “friends permissions” loophole allowed app developers to scrape information not just from the Facebook profiles of the people who agreed to use their apps but from their friends’ profiles too. In other words, if your Facebook friend downloaded Cambridge Analytica’s app, Cambridge Analytica was allowed to grab private information from your Facebook profile without your permission. And you would never know it.
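To make the mechanics of that loophole concrete, here’s a minimal, hypothetical sketch (in Python) of the kind of flow the old Graph API “friends permissions” enabled. The endpoint paths, field names, and scopes below are illustrative assumptions based on public descriptions of the pre-2015 v1.0 API, not a reconstruction of Cambridge Analytica’s actual code:

```python
# Hypothetical sketch of the pre-2015 Graph API v1.0 "friends permissions"
# flow. Endpoints, fields, and scopes are illustrative assumptions; the
# friends_* permissions were deprecated in 2014 and removed in 2015.
import requests

GRAPH = "https://graph.facebook.com/v1.0"
TOKEN = "USER_ACCESS_TOKEN"  # token from ONE consenting app user

def get(path, **params):
    """Helper: GET a Graph API path with the app user's token."""
    params["access_token"] = TOKEN
    resp = requests.get(f"{GRAPH}/{path}", params=params)
    resp.raise_for_status()
    return resp.json()

# The one consenting user installs the app and grants scopes such as
# friends_likes and friends_location, then the app lists their friends...
friends = get("me/friends")["data"]

# ...and pulls profile fields for every friend, none of whom installed
# the app or was ever asked for consent.
harvested = {
    friend["id"]: get(friend["id"], fields="name,likes,location")
    for friend in friends
}
print(f"1 consenting user -> {len(harvested)} harvested profiles")
```

The asymmetry in that last step is the whole scandal in miniature: one consent, hundreds of scraped profiles.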
So how many profiles was Cambridge Analytica allowed to “harvest” using this “friends permissions” feature? About 50 million, and only a tiny fraction (~270,000) of those 50 million people actually agreed to use Cambridge Analytica’s app. The rest were all their friends. So Facebook effectively turned the connectivity of its own users against them.
Keep in mind that this isn’t a new revelation. There were reports last year about how Cambridge Analytica paid ~100,000 people a dollar or two (via Amazon’s Mechanical Turk micro-task platform) to take an online survey. But the only way they could be paid was to download an app that gave Cambridge Analytica access to the profiles of all their Facebook friends, eventually yielding ~30 million “harvested” profiles. Although according to these new reports that number is closer to 50 million profiles.
Before that, there was also a report from December of 2015 about Cambridge Analytica’s building of “psychographic profiles” for the Ted Cruz campaign. And that report also noted that this involved Facebook data harvested largely without users’ permission.
So the fact that Cambridge Analytica was secretly harvesting private Facebook user data without permission isn’t the big revelation here. What’s new is the revelation that what Cambridge Analytica did was integral to Facebook’s business model for years and remarkably widespread.
This is where Sandy Parakilas comes into the picture. According to Parakilas, the profile-scraping loophole that Cambridge Analytica was exploiting with its app was routinely exploited by possibly hundreds of thousands of other app developers for years. Yep. It turns out that Facebook had an arrangement going back to 2007 whereby the company would take a 30 percent cut of the money app developers made off their Facebook apps, and in exchange those developers were given the ability to scrape the profiles of not just the people who used their apps but their friends as well. In other words, Facebook was essentially selling the private information of its users to app developers. Secretly. Well, except it wasn’t a secret to all those app developers. That’s also part of this scandal.
This “friends permissions” feature started getting phased out around 2012, although it turns out Cambridge Analytica was one of the very last apps allowed to use it, up into 2014.
Facebook has tried to defend itself by asserting that it was only making this data available for things like academic research and that Cambridge Analytica was therefore misusing the data. And academic research was in fact the cover story Cambridge Analytica used. Cambridge Analytica actually set up a shell company, Global Science Research (GSR), that was run by a Cambridge University professor, Aleksandr Kogan, and claimed to be purely interested in using that Facebook data for academic research. The collected data was then sent off to Cambridge Analytica. But according to Parakilas, Facebook was allowing developers to utilize this “friends permissions” feature for reasons as vague as “improving user experiences”, and he saw plenty of apps harvesting this data for commercial purposes. Even worse, both Parakilas and Wylie paint a picture of Facebook releasing this data and then doing almost nothing to ensure it wasn’t misused.
So we’ve learned that Facebook was allowing app developers to “harvest” private data on Facebook users without their permission from 2007–2014, and now we get to perhaps the most chilling part: according to Parakilas, this data is almost certainly floating around on the black market. It was so easy to set up an app and start collecting this kind of data that anyone with basic app-development skills could start trawling Facebook for data. And a majority of Facebook users probably had their profiles secretly “harvested” during this period. If true, that means there’s likely a massive black market of Facebook user profiles just floating around out there, and Facebook has done little to nothing to address it.
Parakilas, whose job it was to police data breaches by third-party software developers from 2011–2012, understandably grew quite concerned over the risks to user data inherent in this business model. So what did Facebook’s leadership do when he raised these concerns? They essentially responded with a “do you really want to know how this data is being used?” attitude and actively discouraged him from investigating how the data might be abused. Intentionally not knowing about abuses was another part of the business model. Crackdowns on “rogue developers” were very rare, and the approval of Facebook CEO Mark Zuckerberg himself was required to get an app kicked off the platform.
Facebook has been publicly denying allegations like this for years. It was the public denials that led Parakilas to come forward.
And it gets worse. It turns out that Aleksandr Kogan, the University of Cambridge academic who ended up teaming up with Cambridge Analytica and built the app that harvested the data, had a remarkably close working relationship with Facebook. So close that Kogan actually co-authored an academic study published in 2015 with Facebook employees. In addition, one of Kogan’s partners in the data harvesting, Joseph Chancellor, was also an author on the study and went on to join Facebook a few months after it was published.
It also looks like Steve Bannon was overseeing this entire process, although he claims to know nothing.
Oh, and Palantir, the private intelligence firm with deep ties to the US national security state, owned by far-right Facebook board member Peter Thiel, appears to have had an informal relationship with Cambridge Analytica this whole time, with Palantir employees reportedly traveling to Cambridge Analytica’s office to help build the psychological profiles. And this state of affairs is an extension of how the internet has been used from its very conception a half century ago.
And that’s all part of why the Great Unfriending of Facebook really is long overdue. It’s one really big reason to delete your Facebook account, made up of many, many small egregious reasons.
So let’s start taking a look at those many small reasons to delete your Facebook account, beginning with a New York Times story about Christopher Wylie and his account of the origins of Cambridge Analytica and the crucial role Facebook “harvesting” played in providing the company with the data it needed to carry out the goals of its chief financiers: waging the kind of ‘culture war’ the billionaire far-right Mercer family and Steve Bannon wanted to wage:
The New York Times
How Trump Consultants Exploited the Facebook Data of Millions
by Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr;
03/17/2018

As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem.
The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.
Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”
“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”
Details of Cambridge’s acquisition and use of Facebook data have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate about the merits of the firm’s so-called psychographic modeling techniques.
But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
Cambridge paid to acquire the personal information through an outside researcher who, Facebook says, claimed to be collecting it for academic purposes.
During a week of inquiries from The Times, Facebook downplayed the scope of the leak and questioned whether any of the data still remained out of its control. But on Friday, the company posted a statement expressing alarm and promising to take action.
“This was a scam — and a fraud,” Paul Grewal, a vice president and deputy general counsel at the social network, said in a statement to The Times earlier on Friday. He added that the company was suspending Cambridge Analytica, Mr. Wylie and the researcher, Aleksandr Kogan, a Russian-American academic, from Facebook. “We will take whatever steps are required to see that the data in question is deleted once and for all — and take action against all offending parties,” Mr. Grewal said.
Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.
In Britain, Cambridge Analytica is facing intertwined investigations by Parliament and government regulators into allegations that it performed illegal work on the “Brexit” campaign. The country has strict privacy laws, and its information commissioner announced on Saturday that she was looking into whether the Facebook data was “illegally acquired and used.”
In the United States, Mr. Mercer’s daughter, Rebekah, a board member, Mr. Bannon and Mr. Nix received warnings from their lawyer that it was illegal to employ foreigners in political campaigns, according to company documents and former employees.
Congressional investigators have questioned Mr. Nix about the company’s role in the Trump campaign. And the Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.
While the substance of Mr. Mueller’s interest is a closely guarded secret, documents viewed by The Times indicate that the firm’s British affiliate claims to have worked in Russia and Ukraine. And the WikiLeaks founder, Julian Assange, disclosed in October that Mr. Nix had reached out to him during the campaign in hopes of obtaining private emails belonging to Mr. Trump’s Democratic opponent, Hillary Clinton.
The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.” Only a tiny fraction of the users had agreed to release their information to a third party.
“Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”
Still, he added, “it’s a serious abuse of our rules.”
Reading Voters’ Minds
The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.
Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.
The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.
Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.
Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.
The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.
...
Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.
But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.
Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.
All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”
He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.
Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.
“We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.”
Mr. Nix tells a different story. Appearing before a parliamentary committee last month, he described Dr. Kogan’s contributions as “fruitless.”
An International Effort
Just as Dr. Kogan’s efforts were getting underway, Mr. Mercer agreed to invest $15 million in a joint venture with SCL’s elections division. The partners devised a convoluted corporate structure, forming a new American company, owned almost entirely by Mr. Mercer, with a license to the psychographics platform developed by Mr. Wylie’s team, according to company documents. Mr. Bannon, who became a board member and investor, chose the name: Cambridge Analytica.
The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European.
But in July 2014, an American election lawyer advising the company, Laurence Levy, warned that the arrangement could violate laws limiting the involvement of foreign nationals in American elections.
In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, the lawyer, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.”
In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican.
Cambridge Analytica, in its statement to The Times, said that all “personnel in strategic roles were U.S. nationals or green card holders.” Mr. Nix “never had any strategic or operational role” in an American election campaign, the company said.
Whether the company’s American ventures violated election laws would depend on foreign employees’ roles in each campaign, and on whether their work counted as strategic advice under Federal Election Commission rules.
Cambridge Analytica appears to have exhibited a similar pattern in the 2016 election cycle, when the company worked for the campaigns of Mr. Cruz and then Mr. Trump. While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees.
Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.
Cambridge executives have offered conflicting accounts about the use of psychographic data on the campaign. Mr. Nix has said that the firm’s profiles helped shape Mr. Trump’s strategy — statements disputed by other campaign officials — but also that Cambridge did not have enough time to comprehensively model Trump voters.
In a BBC interview last December, Mr. Nix said that the Trump efforts drew on “legacy psychographics” built for the Cruz campaign.
After the Leak
By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.
Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”
Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.
Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”
Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”
But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.
While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.
Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations.
All the scrutiny appears to have damaged Cambridge Analytica’s political business. No American campaigns or “super PACs” have yet reported paying the company for work in the 2018 midterms, and it is unclear whether Cambridge will be asked to join Mr. Trump’s re-election campaign.
In the meantime, Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” he puts it. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.
———-
“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”
Cambridge Analytica was supposed to be the arsenal of weapons for the culture war its leadership and financiers wanted to wage. But that arsenal couldn’t be built without data on what makes us ‘tick’. That’s where Facebook profile harvesting came in:
The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.
Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”
...
And the acquisition of these 50 million Facebook profiles had never been acknowledged by Facebook, until now. And most or perhaps all of that data is still in the hands of Cambridge Analytica:
...
But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
...
And Facebook isn’t alone in suddenly discovering that its data was “harvested” by Cambridge Analytica. Cambridge Analytica itself wouldn’t admit this either. Until now. Now Cambridge Analytica admits it did indeed obtain Facebook data. But the company blames it all on Aleksandr Kogan, the Cambridge University academic who ran the front company that paid people to take the psychological profile surveys, for violating Facebook’s data-usage rules. It also claims it deleted all the “harvested” information two years ago, as soon as it learned there was a problem. That’s Cambridge Analytica’s new story and it’s sticking to it. For now:
...
Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.
...
But Christopher Wylie has a very different recollection of events. In 2013, Wylie was a 24-year-old political operative, with ties to veterans of President Obama’s campaigns, who was interested in using psychological traits to affect voters’ behavior. He even had a team of psychologists and data scientists, some of them affiliated with Cambridge University (where Aleksandr Kogan was also working at the time). And that expertise in psychological profiling for political purposes is why Mr. Nix recruited Wylie and his team.
Then Nix had a chance meeting with Steve Bannon and Robert Mercer. Mercer showed interest in the company because he believed it could make him a Republican kingmaker, while Bannon was focused on the possibility of using personality profiling to shift America’s culture and rewire its politics. Mercer ended up financing a $1.5 million pilot project: polling voters and testing psychographic messaging in Virginia’s 2013 gubernatorial race:
...
The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.

Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.
The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.
Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.
Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.
...
So the pilot project proceeded, but there was a problem: Wylie’s team simply did not have the data it needed. They only had the kind of data traditional analytics firms used: voting records and consumer purchase histories. And getting the kind of data they wanted, data that could give insight into voters’ neuroticisms and other psychological traits, could be very expensive:
...
The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.

...
Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.
But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.
...
And that’s where Aleksandr Kogan enters the picture: First, Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of setup he needed. Researchers there claimed to have developed techniques for mapping personality traits based on what people “liked” on Facebook. Better yet, this team already had an app that paid users small sums to take a personality quiz and that would then scrape private information from their Facebook profiles and from their friends’ Facebook profiles. In other words, Cambridge University’s Psychometrics Centre was already employing exactly the same kind of “harvesting” model Kogan and Cambridge Analytica eventually ended up using.
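As a technical aside, the core of the Psychometrics Centre’s published approach (predicting personality traits from Facebook likes) is easy to sketch: treat each quiz taker as a sparse vector of page likes, compress it, and regress their quiz-derived trait scores against it. Here’s a minimal illustration of that general technique with synthetic data; the component counts, model choices, and the synthetic “openness” score are assumptions for illustration, not the Centre’s actual pipeline:

```python
# Minimal sketch of "personality from likes": a sparse user x page-like
# matrix, dimensionality reduction, then linear regression onto a trait
# score (e.g., openness from a Big Five quiz). All data here is synthetic.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 5000

# Sparse likes matrix: rows = quiz takers, columns = Facebook pages.
likes = sparse_random(n_users, n_pages, density=0.01, random_state=0)

# Synthetic "openness" score loosely driven by a hidden subset of pages.
true_w = np.zeros(n_pages)
true_w[rng.choice(n_pages, 50, replace=False)] = rng.normal(size=50)
openness = likes @ true_w + rng.normal(scale=0.5, size=n_users)

# Compress the sparse likes into dense components, then fit a regressor.
svd = TruncatedSVD(n_components=100, random_state=0)
X = svd.fit_transform(likes)
X_tr, X_te, y_tr, y_te = train_test_split(X, openness, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"R^2 on held-out users: {model.score(X_te, y_te):.2f}")
```

Once a model like this is trained on the minority who took the quiz, it can be pointed at the scraped likes of everyone else, which is what made the friends’ profiles so valuable.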
But there was a problem for Wylie and his team: Cambridge University’s Psychometrics Centre declined to work with them:
...
Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
...
But it wasn’t a particularly big problem, because Wylie found another Cambridge University psychology professor who was familiar with the techniques and willing to do the job: Aleksandr Kogan. So Kogan built his own psychological-profiling app and began harvesting data for Cambridge Analytica in June 2014. Kogan was even allowed to keep the harvested data for his own research, according to his contract with Cambridge Analytica. According to Facebook, the only thing Kogan told the company, and told the users of his app in the fine print, was that he was collecting information for academic purposes. And Facebook doesn’t appear to have ever attempted to verify that claim:
...
When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”
...
In the end, Kogan’s app managed to “harvest” 50 million Facebook profiles based on a mere 270,000 people actually signing up for it. So for each person who signed up for the app, roughly 185 other people had their profiles sent to Kogan too.
And 30 million of those profiles contained information, like places of residence, that allowed Cambridge Analytica to match a Facebook profile with other records (presumably non-Facebook records) and build psychographic profiles, implying that those 30 million records were mapped to real-life people:
...
He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.
...
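For a sense of why “places of residence” was the crucial field, here’s a purely illustrative sketch of the sort of record linkage described above: joining a harvested profile to a voter-file row on a normalized name-plus-location key. The field names, sample records, and exact-match rule are invented for illustration; real matching pipelines (and presumably Cambridge Analytica’s) use much fuzzier logic:

```python
# Illustrative record linkage: match harvested profiles to a voter file
# on a normalized (name, city, state) key. Real pipelines use fuzzier
# matching (addresses, birthdates, phonetic keys); this is the bare idea.
def key(name: str, city: str, state: str) -> tuple:
    return (name.strip().lower(), city.strip().lower(), state.strip().upper())

harvested_profiles = [  # synthetic stand-ins for scraped Facebook data
    {"name": "Jane Doe", "city": "Richmond", "state": "va", "likes": 312},
]
voter_file = [  # synthetic stand-in for a state voter roll
    {"name": "JANE DOE", "city": "richmond", "state": "VA", "voter_id": "VA0001"},
]

# Index the voter file by the normalized key for O(1) lookups.
voters_by_key = {key(v["name"], v["city"], v["state"]): v for v in voter_file}

# Join: any profile whose key appears in the voter file gets a voter_id.
matched = [
    {**p, "voter_id": voters_by_key[k]["voter_id"]}
    for p in harvested_profiles
    if (k := key(p["name"], p["city"], p["state"])) in voters_by_key
]
print(matched)  # profiles now tied to real-world voter records
```

A profile that can be tied to a voter-file row is no longer an anonymous data point; it’s a targetable person, which is why the 30 million matchable profiles mattered more than the raw 50 million.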
So this harvesting started in mid-2014, but by early 2015 Wylie and more than half his original team had left the firm to start a rival firm, although it sounds like concerns over the far-right cause they were working for were also behind their departure:
...
By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.

Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”
...
Finally, the whole scandal went public. Well, at least partially: at the end of 2015, the Guardian reported on the Facebook profile collection scheme Cambridge Analytica was running for the Ted Cruz campaign. Facebook didn’t publicly acknowledge the truth of this report, but it did publicly state that it was “carefully investigating this situation.” Facebook also sent a letter to Cambridge Analytica demanding that it destroy the data...except the letter wasn’t sent until August of 2016.
...
Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.

Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”
...
Facebook now claims that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But, of course, those certifications were false. The New York Times was shown sets of the raw data.
And even more disturbing, a former Cambridge Analytica employee claims he recently saw hundreds of gigabytes of this data on Cambridge Analytica’s servers. Unencrypted. Which means the data could potentially be grabbed by any Cambridge Analytica employee with access to those servers:
...
Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”

But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.
While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.
...
So, to summarize the key points from this New York Times article:
1. In 2013, Cambridge Analytica is formed when Alexander Nix, then a salesman leading the small elections division at SCL Group, recruits Christopher Wylie and a team of psychologists to help develop a “political data” unit at the company, with an eye on the 2014 US midterms.
2. By chance, Nix and Wylie meet Steve Bannon and Robert Mercer, who are quickly sold on the idea of psychographic profiling for political purposes. Bannon is intrigued by the idea of using this data to wage a “culture war.” Mercer agrees to invest $1.5 million in a pilot project involving the Virginia gubernatorial race. Their success is limited, as Wylie soon discovers that they don’t have the data they really need to carry out their psychographic profiling project. But Robert Mercer remains committed to the project.
3. Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of data they were seeking: data collected via an app administered through Facebook, where people were paid small amounts of money to take a survey and, in exchange, the Psychometrics Centre was allowed to scrape their Facebook profiles as well as the profiles of all their Facebook friends.
4. Cambridge University’s Psychometrics Centre rejected Wylie’s offer to work together, but another Cambridge University psychology professor was willing to do so: Aleksandr Kogan. Kogan proceeded to start a company (as a front for Cambridge Analytica) and develop his own app, getting ~270,000 people to download it and give permission for their profiles to be collected. But using the “friends permissions” feature, Kogan’s app ended up collecting ~50 million Facebook profiles from the friends of those 270,000 people. ~30 million of those profiles were matched to US voters.
5. By early 2015, Wylie and his left-leaning team members leave Cambridge Analytica and form their own company, apparently due to concerns over the far-right goals of the firm.
6. Cambridge Analytica goes on to work for the Ted Cruz campaign. In late 2015, it’s reported that Cambridge Analytica’s work for Cruz involved Facebook data from people who didn’t give their permission. Facebook issues a vague statement about how it’s going to investigate.
7. In August 2016, Facebook sends a letter to Cambridge Analytica asserting that the data was obtained and used without permission and must be deleted immediately. The New York Times was just shown copies of exactly that data to write this article. Hundreds of gigabytes of data that is completely outside Facebook’s control.
8. Cambridge Analytica CEO (now former CEO) Alexander Nix told lawmakers that the firm didn’t possess any Facebook data. So he was clearly lying.
9. Finally, a former Cambridge Analytica employee says he recently saw hundreds of gigabytes of Facebook data sitting on Cambridge Analytica’s servers. And it was unencrypted, so anyone with access to it could make a copy and give it to whomever they wanted.
And that’s what we learned from just the New York Times’s version of this story. The Observer (the Guardian’s Sunday sister paper) was also talking with Christopher Wylie and other Cambridge Analytica whistle-blowers. And while its article largely covers the same story as the New York Times report, it contains some additional details.
1. For starters, the following article notes that Facebook’s “platform policy” allowed collection of friends’ data only to improve the user experience in the app, and barred that data from being sold on or used for advertising. That’s important to note because the stated use of the data grabbed by Aleksandr Kogan’s app was for research purposes. But “improving user experience in the app” is a far more generic reason for grabbing that data than academic research. And that hints at something we’re going to see below from a Facebook whistle-blower: all sorts of app developers were grabbing this kind of data using the “friends” loophole for reasons that had absolutely nothing to do with academic purposes, and this was deemed fine by Facebook.
2. Facebook didn’t formally suspend Cambridge Analytica and Aleksandr Kogan from the platform until one day before the Observer article was published, more than two years after the initial reports in late 2015 about Cambridge Analytica misusing Facebook data for the Ted Cruz campaign. So if Facebook felt that Cambridge Analytica and Aleksandr Kogan were improperly obtaining and misusing its data, it sure tried hard not to let on until the very last moment.
3. Simon Milner, Facebook’s UK policy director, when asked by UK MPs if Cambridge Analytica had Facebook data, said: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.” Which, again, as we’re going to see, was a total lie according to a Facebook whistle-blower, because Facebook was routinely providing exactly the kind of data Kogan’s app was collecting to thousands of developers.
4. Aleksandr Kogan had a license from Facebook to collect profile data, but only for research purposes, so when he used the data for commercial purposes he was violating his agreement, according to the article. Kogan, however, maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps. And as we’re going to see in subsequent articles, it does indeed look like Kogan is correct: he was very open about using the data from the Cambridge Analytica app for commercial purposes, and Facebook had no problem with this.
5. In addition to being a Cambridge University professor, Aleksandr Kogan has links to a Russian university and took Russian grants for research. This will undoubtedly raise speculation about the possibility that Kogan’s data was handed over to the Kremlin and used in the social-media influence campaign carried out by the Kremlin-linked Internet Research Agency. If so, it’s still important to keep in mind that, based on what we’re going to see from Facebook whistle-blower Sandy Parakilas, the Kremlin could easily have set up all sorts of Facebook apps of its own to collect this kind of data, because apparently anyone could do it as long as the data was for “improving the user experience”. That’s how obscene this situation is. Kogan wasn’t needed to provide this data to the Kremlin, because it was so easy for anyone to obtain. In other words, we should assume all sorts of governments have this kind of data.
6. The legal letter Facebook sent to Cambridge Analytica in August 2016 demanding that it delete the data was sent just days before it was officially announced that Steve Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him. That sure makes it seem like Facebook knew about Bannon’s involvement with Cambridge Analytica and the fact that Bannon was about to become Trump’s campaign manager and bring Cambridge Analytica into the campaign.
7. Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.”
So as we can see, like the proverbial onion, the more layers you peel back on the story Cambridge Analytica and Facebook have been peddling about how this data was obtained and used, the more acrid and malodorous it gets. With a distinct tinge of BS:
The Guardian
Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach
Whistleblower describes how firm linked to former Trump adviser Steve Bannon compiled user data to target American voters
Carole Cadwalladr and Emma Graham-Harrison
Sat 17 Mar 2018 18.03 EDT
The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in one of the tech giant’s biggest ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box.
A whistleblower has revealed to the Observer how Cambridge Analytica – a company owned by the hedge fund billionaire Robert Mercer, and headed at the time by Trump’s key adviser Steve Bannon – used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.
Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”
Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.
The New York Times is reporting that copies of the data harvested for Cambridge Analytica could still be found online; its reporting team had viewed some of the raw data.
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.
However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.
Cambridge Analytica and Facebook are one focus of an inquiry into data and politics by the British Information Commissioner’s Office. Separately, the Electoral Commission is also investigating what role Cambridge Analytica played in the EU referendum.
...
On Friday, four days after the Observer sought comment for this story, but more than two years after the data breach was first reported, Facebook announced that it was suspending Cambridge Analytica and Kogan from the platform, pending further information over misuse of data. Separately, Facebook’s external lawyers warned the Observer it was making “false and defamatory” allegations, and reserved Facebook’s legal position.
The revelations provoked widespread outrage. The Massachusetts Attorney General Maura Healey announced that the state would be launching an investigation. “Residents deserve answers immediately from Facebook and Cambridge Analytica,” she said on Twitter.
The Democratic senator Mark Warner said the harvesting of data on such a vast scale for political targeting underlined the need for Congress to improve controls. He has proposed an Honest Ads Act to regulate online political advertising the same way as television, radio and print. “This story is more evidence that the online political advertising market is essentially the Wild West. Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, it’s clear that, left unregulated, this market will continue to be prone to deception and lacking in transparency,” he said.
Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.
Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”
Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”
Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.
At the time of the data breach, Wylie was a Cambridge Analytica employee, but Facebook described him as working for Eunoia Technologies, a firm he set up on his own after leaving his former employer in late 2014.
The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.
That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.
“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.
Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.
“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”
Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.
“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”
A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.
Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.
Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.
Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.
The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.
The algorithm and database together made a powerful political tool. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.
“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.
At the time, more than 50 million profiles represented around a third of active North American Facebook users, and nearly a quarter of potential US voters. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.”
Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.
GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.
SCL Elections, an affiliate, worked with Facebook over the period to ensure it was satisfied no terms had been “knowingly breached” and provided a signed statement that all data and derivatives had been deleted, he said. Cambridge Analytica also said none of the data was used in the 2016 presidential election.
Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.
———-
“Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.””
Exploiting everyone’s inner demons. Yeah, that sounds like something Steve Bannon and Robert Mercer would be interested in. And it explains why Facebook data would have been potentially so useful for exploiting those demons. Recall that the original non-Facebook data that Christopher Wylie and the initial Cambridge Analytica team were working with in 2013 and 2014 wasn’t seen as effective. It didn’t have that inner-demon-influencing granularity. And then they discovered the Facebook data available through this app loophole and it took things to a different level. Remember when Facebook ran that controversial experiment on users where it tried to manipulate their emotions by altering their news feeds? It sounds like that’s what Cambridge Analytica was basically trying to do using Facebook ads instead of the news feed, but perhaps in a more micro-targeted way.
And that’s all because Facebook’s “platform policy” allowed for the collection of friends’ data to “improve user experience in the app” with the non-enforced request that the data not be sold on or used for advertising:
...
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.
However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.
...
Just imagine how many app developers were using this over the 2007–2014 period when Facebook had this “platform policy” allowing the capture of friends’ data “to improve user experience in the app”. It wasn’t just Cambridge Analytica that took advantage of it. That’s a big part of the story here.
And yet when Simon Milner, Facebook’s UK policy director, was asked if Cambridge Analytica had Facebook data, he said, “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”:
...
Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.
Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”
Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”
...
And note how the article appears to say the data Cambridge Analytica collected on Facebook users included “emails, invoices, contracts and bank transfers that reveal more than 50 million profiles.” It’s not clear if that’s a reference to emails, invoices, contracts and bank transfers involved in setting up Cambridge Analytica’s operation or to emails, invoices, contracts and bank transfers belonging to Facebook users, but if it came from users that would be wildly scandalous:
...
Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.
...
So it will be interesting to see if that point of ambiguity is ever clarified. Because wow, would it be scandalous if the emails, invoices, contracts and bank transfers of Facebook users had been released through this “platform policy”.
Either way, it looks unambiguously awful for Facebook. Especially now that we learn that the destroy-the-data letter Facebook’s lawyers sent in August of 2016 suspiciously went out just days before Steve Bannon, a founder and officer of Cambridge Analytica, became Trump’s campaign manager and brought the company into the Trump campaign:
...
The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.
That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.
“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.
...
And the only thing Facebook did to confirm that the Facebook data wasn’t misused, according to Christopher Wylie, was to ask that a box be checked on a form:
...
Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.
“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”
...
And, again, Facebook denied its data had been passed along to Cambridge Analytica when questioned by both the US Congress and UK Parliament:
...
Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.
“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”
A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.
...
And note how Facebook now admits Aleksandr Kogan did indeed get the data legally; it just wasn’t used properly. That’s why Facebook insists this shouldn’t be called a “data breach”: because the data was obtained through proper channels, nothing, in Facebook’s view, was actually breached:
...
Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.
Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.
...
But Aleksandr Kogan isn’t simply arguing that he did nothing wrong when he obtained that Facebook data via his app. Kogan also argues that he had a “close working relationship” with Facebook, which had granted him permission for his apps, and that everything he did with the data was legal. Kogan’s story is quite notable because, again, as we’ll see below, there is evidence that his story is the closest to the truth of all the stories we’re hearing: Facebook was totally fine with Kogan’s apps obtaining the private data of millions of Facebook users’ friends. And Facebook was perfectly fine with how that data was used, or was at least consciously trying not to know how it might be misused. That’s the picture that’s going to emerge, so keep it in mind when Kogan asserts that he had a “close working relationship” with Facebook. Based on the available evidence, he probably did:
...
Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.
...
Kogan maintains everything he did was legal, and guess what? It probably was legal. That’s part of the scandal here.
And regarding those testimonies by Cambridge Analytica’s now-former CEO Alexander Nix that the company never had or used Facebook data, note how the Observer got to see a copy of the contract Cambridge Analytica entered into with Kogan’s GSR, and the contract was entirely premised on harvesting and processing the Facebook data. Which, again, hints at the likelihood that they thought what they were doing at the time (2014) was completely legal. They spelled it out in the contract:
...
The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.
...
“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.
...
Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.
GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.
...
““The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.”
A contract to create a ‘gold standard’ of 2 million Facebook accounts ‘matched’ to real-life voters, all in the service of “understanding personality from Facebook profile information.” That was the actual contract Kogan had with Cambridge Analytica. All for the purpose of developing a system that would allow Cambridge Analytica to infer your inner demons from your Facebook profile and then manipulate them.
So it’s worth noting how the app permissions setup Facebook allowed from 2007–2014, letting app developers collect the Facebook profile information of both the people who used their apps and their friends, created this amazing arrangement where a developer could generate a ‘gold standard’ training set from the people using an app and a test set from all their friends. If the goal was getting people to encourage their friends to download an app, that would have been a very useful data set. But it would of course also have been an incredibly useful data set for anyone who simply wanted to collect the profile information of Facebook users. Because, again, as we’re going to see, a Facebook whistle-blower is claiming that Facebook user profile information was routinely handed out to app developers.
So if an app developer wanted to experiment on, say, how to use that available Facebook profile information to manipulate people, getting a ‘gold standard’ of people to take a psychological profile survey would be an important step in carrying out that experiment. Because those people who take your psychological survey form the data set you can use to train your algorithms that take Facebook profile information as the input and create psychological profile data as the output.
And that’s what Aleksandr Kogan’s app was doing: grabbing psychological information from the survey while simultaneously grabbing the Facebook profile data of the test-takers, along with the Facebook profile data of all their friends. Kogan’s ‘gold standard’ training set was the people who actually used his app and handed over a bunch of personality information via the survey, and the test set would have been the tens of millions of friends whose data was also collected. Since the goal of Cambridge Analytica was to infer personality characteristics from people’s Facebook profiles, pairing the personality surveys of the ~270,000 people who took the app survey with their Facebook profiles allowed Cambridge Analytica to train its algorithms to guess personality characteristics from Facebook profile information. Then it could apply those algorithms to the profiles of the rest of the ~50 million people.
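To make the mechanics concrete, here’s a minimal sketch of that train-then-score pipeline. Everything in it is hypothetical: synthetic “profiles” with page likes as binary features stand in for the harvested data, a single survey-derived trait score stands in for the psychological profiles, and an off-the-shelf ridge regression stands in for whatever model Cambridge Analytica actually used, which has never been published.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the ~270,000 survey-takers and ~50 million
# friends, scaled down so the sketch runs instantly.
n_train, n_test, n_pages = 2_700, 50_000, 500

# Binary "page like" matrices: 1 if a profile liked a given page.
X_train = rng.integers(0, 2, size=(n_train, n_pages))
X_test = rng.integers(0, 2, size=(n_test, n_pages))

# The "gold standard": survey-takers also supplied a personality score
# (say, one Big Five trait). Here it's synthesized from a hidden weight
# vector plus noise, purely for illustration.
true_w = rng.normal(size=n_pages)
y_train = X_train @ true_w + rng.normal(scale=5.0, size=n_train)

# Fit on the labelled survey-takers (profiles + survey answers)...
model = Ridge(alpha=1.0).fit(X_train, y_train)

# ...then score the friends, who never took any survey at all.
predicted_traits = model.predict(X_test)
print(predicted_traits[:5])
```

The “gold standard” language in the contract maps onto exactly this split: a small labelled set (app users who took the survey) to fit the model, and a vastly larger unlabelled set (their friends) to score.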
Recall how Trump’s 2016 campaign digital director, Brad Parscale, curiously downplayed the utility of Cambridge Analytica’s data during interviews where he was bragging about how they were using Facebook’s ad micro-targeting features to run “A/B testing on steroids” on micro-targeted audiences, i.e. strategically exposing micro-targeted Facebook audiences to sets of ads that differed in some specific way designed to probe a particular psychological dimension of that micro-audience. So it’s worth noting that the “A/B testing on steroids” Parscale referred to was probably focused on the ~30 million of the ~50 million people whose harvested Facebook profiles could be matched back to real people. Those ~30 million Facebook users were the test set. And the algorithms designed to guess people’s psychological makeup from their Facebook profiles, refined on the training set of ~270,000 users who took the psychological surveys, were likely unleashed on that test set of ~30 million people.
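And “A/B testing on steroids” is, at bottom, ordinary A/B testing run at scale across many micro-audiences at once. Here’s a minimal sketch of the statistic behind a single such comparison, with every number invented for illustration:

```python
import math

def ab_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: did ad variant B outperform variant A?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: one micro-audience shown two ad variants that differ
# in a single emotional appeal.
z = ab_z_test(clicks_a=120, n_a=10_000, clicks_b=168, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

Run that comparison across thousands of micro-audiences and ad variants simultaneously and you get the “on steroids” part.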
So when we find out that the Cambridge Analytica contract with Aleksandr Kogan’s GSR company included language like building a “gold standard”, keep in mind that this implied that there was a lot of testing to do after the algorithmic refinements based on that gold standard. And the ~30–50 million profiles they collected from the friends of the ~270,000 people who downloaded Kogan’s app made for quite a test set.
Also keep in mind that former CEO Alexander Nix’s denials that Cambridge Analytica worked with Facebook data aren’t the only laughable denials from Cambridge Analytica’s officers. Any denials by Steve Bannon and his lawyers that he knew about Cambridge Analytica’s use of Facebook profile data should also be seen as laughable, starting with the claim from Bannon’s lawyer that he knows nothing about what Wylie and others are alleging:
...
Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.
Steve Bannon: the Boss Who Knows Nothing (Or So He Says)
Steve Bannon “knows nothing about the claims being asserted.” LOL! Yeah, well, not according to Christopher Wylie, who, in the following article, makes some rather significant claims about the role Steve Bannon played in all this. According to Wylie:
1. Steve Bannon was the person overseeing the acquisition of Facebook data by Cambridge Analytica. As Wylie put it, “We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss.” Now, when Wylie says Bannon was Nix’s boss, note that Bannon served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016. And Nix was CEO during this period. So technically Nix was the boss. But it sounds like Bannon was effectively the boss, according to Wylie.
2. Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the Facebook data. But Wylie does say that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed, and that Bannon “approved the data-collection scheme we were proposing”. So if Bannon and Mercer didn’t know the details of how the purchase of massive amounts of Facebook data took place, that would be pretty remarkable. Remarkably uncurious, given that acquiring this data was at the core of what the company was doing and they approved the data-collection scheme. A scheme that involved having Aleksandr Kogan set up a separate company. That was the “scheme” Bannon and Mercer would have had to approve, so if they didn’t realize they were acquiring this Facebook data through the “friends permissions” feature Facebook made available to app developers, that would have been a significant oversight.
The article goes on to include a few more fun facts, like...
3. Cambridge Analytica was doing focus group tests on voters in 2014 and identified many of the same underlying emotional sentiments in voters that later formed the core message of Donald Trump’s campaign. In focus groups for the 2014 midterms, the firm found that voters responded to calls for building a wall with Mexico, “draining the swamp” in Washington DC, and to thinly veiled forms of racism toward African Americans called “race realism”. The firm also tested voter attitudes toward Russian President Vladimir Putin and discovered that a lot of Americans really like the idea of a really strong authoritarian leader. Again, this was all discovered before Trump even jumped into the race.
4. The Trump campaign rejected early overtures to hire Cambridge Analytica, which suggests that Trump was actually the top choice of the Mercers and Bannon, ahead of Ted Cruz.
5. Cambridge Analytica CEO Alexander Nix was caught by Channel 4 News in the UK boasting about the secrecy of his firm, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”
So based on these allegations, Steve Bannon was closely involved in approving the various schemes to acquire Facebook data, and was probably using self-destructing emails in the process:
The Washington Post
Bannon oversaw Cambridge Analytica’s collection of Facebook data, according to former employee
By Craig Timberg, Karla Adam and Michael Kranish
March 20, 2018 at 7:53 PM
LONDON — Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.
The 2014 effort was part of a high-tech form of voter persuasion touted by the company, which under Bannon identified and tested the power of anti-establishment messages that later would emerge as central themes in President Trump’s campaign speeches, according to Chris Wylie, who left the company at the end of that year.
Among the messages tested were “drain the swamp” and “deep state,” he said.
Cambridge Analytica, which worked for Trump’s 2016 campaign, is now facing questions about alleged unethical practices, including charges that the firm improperly handled the data of tens of millions of Facebook users. On Tuesday, the company’s board announced that it was suspending its chief executive, Alexander Nix, after British television released secret recordings that appeared to show him talking about entrapping political opponents.
More than three years before he served as Trump’s chief political strategist, Bannon helped launch Cambridge Analytica with the financial backing of the wealthy Mercer family as part of a broader effort to create a populist power base. Earlier this year, the Mercers cut ties with Bannon after he was quoted making incendiary comments about Trump and his family.
In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.
“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”
Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.
Bannon received more than $125,000 in consulting fees from Cambridge Analytica in 2016 and owned “membership units” in the company worth between $1 million and $5 million, according to his financial disclosure.
...
It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.
Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.
Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.
Bannon “approved the data-collection scheme we were proposing,” Wylie said.
...
The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.
“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”
The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.
In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.
The firm also tested views of Russian President Vladimir Putin.
“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”
The controversy over Cambridge Analytica’s data collection erupted in recent days amid news reports that an app created by a Cambridge University psychologist, Aleksandr Kogan, accessed extensive personal data of 50 million Facebook users. The app, called thisisyourdigitallife, was downloaded by 270,000 users. Facebook’s policy, which has since changed, allowed Kogan to also collect data —including names, home towns, religious affiliations and likes — on all of the Facebook “friends” of those users. Kogan shared that data with Cambridge Analytica for its growing database on American voters.
Facebook on Friday banned the parent company of Cambridge Analytica, Kogan and Wylie for improperly sharing that data.
The Federal Trade Commission has opened an investigation into Facebook to determine whether the social media platform violated a 2011 consent decree governing its privacy policies when it allowed the data collection. And Wylie plans to testify to Democrats on the House Intelligence Committee as part of their investigation of Russian interference in the election, including possible ties to the Trump campaign.
Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”
The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”
Nix could not be reached for comment.
Cambridge Analytica was set up as a U.S. affiliate of British-based SCL Group, which had a wide range of governmental clients globally, in addition to its political work.
Wylie said that Bannon and Nix first met in 2013, the same year that Wylie — a young data whiz with some political experience in Britain and Canada — was working for SCL Group. Bannon and Wylie met soon after and hit it off in conversations about culture, elections and how to spread ideas using technology.
Bannon, Wylie, Nix, Rebekah Mercer and Robert Mercer met in Rebekah Mercer’s Manhattan apartment in the fall of 2013, striking a deal in which Robert Mercer would fund the creation of Cambridge Analytica with $10 million, with the hope of shaping the congressional elections a year later, according to Wylie. Robert Mercer, in particular, seemed transfixed by the group’s plans to harness and analyze data, he recalled.
The Mercers were keen to create a U.S.-based business to avoid bad optics and violating U.S. campaign finance rules, Wylie said. “They wanted to create an American brand,” he said.
The young company struggled to quickly deliver on its promises, Wylie said. Widely available information from commercial data brokers provided people’s names, addresses, shopping habits and more, but failed to distinguish on more fine-grained matters of personality that might affect political views.
Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”
After Cruz faded, the Mercers switched their allegiance to Trump and pitched their services to Trump’s digital director, Brad Parscale. The company’s hiring was approved by Trump’s son-in-law, Jared Kushner, who was informally helping to manage the campaign with a focus on digital strategy.
Kushner said in an interview with Forbes magazine that the campaign “found that Facebook and digital targeting were the most effective ways to reach the audiences. ...We brought in Cambridge Analytica.” Kushner said he “built” a data hub for the campaign “which nobody knew about, until towards the end.”
Kushner’s spokesman and lawyer both declined to comment Tuesday.
Two weeks before Election Day, Nix told a Post reporter at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”
The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.
Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.
Cambridge Analytica’s parent company, SCL Group, has an ongoing contract with the State Department’s Global Engagement Center. The company was paid almost $500,000 to interview people overseas to understand the mind-set of Islamist militants as part of an effort to counter their online propaganda and block recruits.
Heather Nauert, the acting undersecretary for public diplomacy, said Tuesday that the contract was signed in November 2016, under the Obama administration, and has not expired yet. In public records, the contract is dated in February 2017, and the reason for the discrepancy was not clear. Nauert said that the State Department had signed other contracts with SCL Group in the past.
———-
“Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.”
Steve Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data. That’s what Christopher Wylie claims, and given Bannon’s role as vice president of the company it’s not, on its face, an outlandish claim. And Bannon apparently approved the spending of nearly $1 million to acquire that Facebook data in 2014. Because, according to Wylie, Alexander Nix didn’t actually have permission to spend that kind of money without approval. Bannon, on the other hand, did have permission to make those kinds of expenditure approvals. That’s how high up Bannon was at that company, even though he was technically the vice president while Nix was the CEO:
...
In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.
“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”
Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.
...
““We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss...Alexander Nix didn’t have the authority to spend that much money without approval.””
And while Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the data, Wylie does assert that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And, generally speaking, if Bannon was approving $1 million expenditures on acquiring Facebook data, he probably sat in on at least one meeting where someone described how they planned to actually get the data by spending that money. Don’t forget the scheme involved paying individuals small amounts of money to take the psychological survey on Kogan’s app, so at a minimum you would expect Bannon to know how these apps were going to result in the gathering of Facebook profile information:
...
It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.
Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.
Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.
Bannon “approved the data-collection scheme we were proposing,” Wylie said.
...
What’s Bannon hiding by claiming ignorance? Well, that’s a good question after Britain’s Channel 4 News aired a video Tuesday in which Nix was highlighting his firm’s secrecy, including the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing”:
...
Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”
The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”
...
Self-destructing emails. That’s not suspicious or anything.
And note how Cambridge Analytica was apparently already homing in on a very ‘Trumpian’ message in 2014, long before Trump was on the radar:
...
The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.
“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”
The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.
In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.
The firm also tested views of Russian President Vladimir Putin.
“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”
...
Intriguingly, given these early Trumpian findings in their 2014 voter research, it appears that the Trump campaign turned down early overtures to hire Cambridge Analytica, which suggests that Trump really was the top preference for Bannon and the Mercers, not Ted Cruz:
...
Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”
...
And as the article reminds us, the Trump campaign has completely denied EVER using Cambridge Analytica’s data. Brad Parscale, Trump’s digital director, claimed he got much of the data they were working with from the Republican National Committee:
...
Two weeks before Election Day, Nix told a Post reporter at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”
The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.
Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.
...
And that denial by Parscale raises an obvious question: when Parscale claims they only used data from the RNC, it’s entirely possible that he’s just straight-up lying. But it’s also possible that he’s lying while technically telling the truth. Because if Cambridge Analytica gave its data to the RNC, the Trump campaign could have acquired the Cambridge Analytica data from the RNC at that point, giving the campaign a degree of deniability about the use of such scandalously acquired data if the story ever became public. Like now.
Don’t forget that data of this nature would have been potentially useful for EVERY 2016 race, not just the presidential campaign. So if Bannon and Mercer were intent on helping Republicans win across the board, handing that data over to the RNC would have just made sense.
Also don’t forget that the New York Times was shown unencrypted copies of the Facebook data collected by Cambridge Analytica. If the New York Times saw this data, odds are the RNC has too. And who knows who else.
Facebook’s Sandy Parakilas Blows an “Utterly Horrifying” Whistle
It all raises the question of whether the Republican National Committee possesses all that Cambridge Analytica/Facebook data right now. And that brings us to perhaps the most scandalous article of all that we’re going to look at. It’s about Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, who is now a whistle-blower about exactly the kind of “friends permissions” loophole Cambridge Analytica exploited. And as the following article makes horrifically clear:
1. It’s not just Cambridge Analytica or the RNC that might possess this treasure trove of personal information. The entire data brokerage industry probably has its hands on this data, along with anyone who has picked it up through the black market.
2. It was relatively easy to write an app that could exploit this “friends permissions” feature and start trawling Facebook for the profile data of app users and their friends. Anyone with basic app coding skills could do it (see the sketch after this list).
3. Parakilas estimates that tens or maybe even hundreds of thousands of developers likely exploited the same friends-permissions loophole that Cambridge Analytica’s data supplier exploited. And Facebook had no way of tracking how this data was used by developers once it left Facebook’s servers.
4. Parakilas always assumed that data of this sort would inevitably end up on a black market, meaning there is probably a massive amount of personally identifiable Facebook data just floating around for the entire marketing industry, and anyone else (like the GOP), to data mine.
5. Parakilas knew of many commercial apps that were using the same “friends permissions” feature to grab Facebook profile data and use it for commercial purposes.
6. Facebook’s policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings. That appears to be part of the legal protection Facebook employed when it had this policy: don’t complain, it’s in the fine print.
7. Perhaps most scandalous of all, Facebook took a 30% cut of payments made through apps in exchange for giving these app developers access to Facebook user data. Yep, Facebook was effectively selling user data, but by structuring the sale of this data as a 30% share of the payments made through the app Facebook also created an incentive to help developers maximize the profits they made through the app. So Facebook literally set up a system that incentivized itself to help app developers make as much money as possible off of the user data they were handing over.
8. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users. So as of 2010, roughly 1 in 10 Facebook apps was using this loophole to grab information about both the users of the app and their friends.
9. While Cambridge Analytica was far from alone in exploiting this loophole, it was actually one of the very last firms permitted to do so. Which means that particular data set collected by Cambridge Analytica could be uniquely valuable simply by being larger and containing more recent data than most other data sets of this nature.
10. When Parakilas brought up these concerns to Facebook’s executives and suggested the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach. One Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”
11. Shortly after arriving at the company’s Silicon Valley headquarters, Parakilas was told that any decision to ban an app required the personal approval of Mark Zuckerberg. The policy was later relaxed to make it easier to deal with rogue developers. That said, rogue developers were rarely dealt with.
12. When Facebook eventually phased out this “friends permissions” policy for app developers, it was likely done out of concerns over the commercial value of all this data they were handing out. Executives were apparently concerned that competitors were going to use this data to build their own social networks.
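To get a sense of how low the technical bar was (point 2 above), here’s a rough reconstruction of the request pattern the pre-2014 Graph API allowed, based on public descriptions of the long-deprecated “friends permissions”. The access token and field list are illustrative placeholders, and none of this works against today’s API:

```python
import requests

# Illustrative only: a user OAuth token granted with extended permissions
# such as friends_likes / friends_hometown under the pre-2014 platform.
ACCESS_TOKEN = "user-token-goes-here"  # hypothetical placeholder
GRAPH = "https://graph.facebook.com"

# One call for the app user's own profile...
me = requests.get(f"{GRAPH}/me",
                  params={"access_token": ACCESS_TOKEN,
                          "fields": "id,name,hometown,likes"}).json()

# ...and one paginated call that pulled the same fields for every friend,
# none of whom ever saw a consent dialog.
friends = requests.get(f"{GRAPH}/me/friends",
                       params={"access_token": ACCESS_TOKEN,
                               "fields": "id,name,hometown,likes"}).json()

for friend in friends.get("data", []):
    print(friend.get("id"), friend.get("name"))
```

The point is the second call: one authenticated, paginated request and an app could walk away with profile fields for every friend of every user who clicked “allow”.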
So, as we can see, the entire saga of Cambridge Analytica’s scandalous acquisition of private Facebook profiles on ~50 million Americans is something Facebook made routine for developers of all sorts from 2007–2014, which means this is far from a ‘Cambridge Analytica’ story. It’s a Facebook story about a massive problem Facebook created for itself (for its own profits):
The Guardian
‘Utterly horrifying’: ex-Facebook insider says covert data harvesting was routine
Sandy Parakilas says numerous companies deployed these techniques – likely affecting hundreds of millions of users – and that Facebook looked the other way
Paul Lewis in San Francisco
Tue 20 Mar 2018 07.46 EDT
Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to Cambridge Analytica, according to a new whistleblower.
Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.
“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.
Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.
Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.
“It has been painful watching,” he said, “because I know that they could have prevented it.”
Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”
Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.
He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.
He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”
...
Facebook did not respond to a request for comment on the information supplied by Parakilas, but directed the Guardian to a November 2017 blogpost in which the company defended its data sharing practices, which it said had “significantly improved” over the last five years.
“While it’s fair to criticise how we enforced our developer policies more than five years ago, it’s untrue to suggest we didn’t or don’t care about privacy,” that statement said. “The facts tell a different story.”
‘A majority of Facebook users’
Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.
That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.
The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.
Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.
Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.
Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.
Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.
Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”
Facebook announced on Monday that it had hired a digital forensics firm to conduct an audit of Cambridge Analytica. The decision comes more than two years after Facebook was made aware of the reported data breach.
During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.
While the previous policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings, Parakilas said he believed the policy was problematic.
“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”
It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.
The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.
While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.
“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.
If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.
The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.
Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.
He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.
“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”
‘They treated it like a PR exercise’
Parakilas said he lobbied internally at Facebook for “a more rigorous approach” to enforcing data protection, but was offered little support. His warnings included a PowerPoint presentation he said he delivered to senior executives in mid-2012 “that included a map of the vulnerabilities for user data on Facebook’s platform”.
“I included the protective measures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do malicious things with the data,” he said. “On the list of bad actors I included foreign state actors and data brokers.”
Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”
That changed, Parakilas said, when he heard the congressional testimony given by Facebook lawyers to Senate and House investigators in late 2017 about Russia’s attempt to sway the presidential election. “They treated it like a PR exercise,” he said. “They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.”
It was at that point that Parakilas decided to go public with his concerns, writing an opinion article in the New York Times that said Facebook could not be trusted to regulate itself. Since then, Parakilas has become an adviser to the Center for Humane Technology, which is run by Tristan Harris, a former Google employee turned whistleblower on the industry.
———-
“Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.”
The platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012: That’s who is making these claims. In other words, Sandy Parakilas is indeed someone who should be intimately familiar with Facebook’s policies of handing user data over to app developers because it was his job to ensure that data wasn’t breached.
And as Parakilas makes clear, he wasn’t actually able to do his job. Once the data was handed over to app developers and left Facebook’s servers, Facebook had no idea what developers were doing with it, and apparently no interest in finding out:
...
“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.
Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.
Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.
“It has been painful watching,” he said, “because I know that they could have prevented it.”
Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”
...
And this complete lack of oversight by Facebook led Parakilas to assume there was “something of a black market” for that Facebook data. But when he raised these concerns with fellow executives he was warned not to look. Not knowing how this data was being used was, ironically, part of Facebook’s legal strategy, it seems:
...
Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.
He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.
He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”
...
“They felt that it was better not to know. I found that utterly shocking and horrifying.”
Well, at least one person at Facebook was utterly shocked and horrified by the “better not to know” policy toward handing personal private information over to developers. And that person, Parakilas, left the company and is now a whistle-blower.
And one of the things that made Parakilas particularly concerned that this was widespread among apps was the fact that it was so easy to create apps that could then just be released onto Facebook to trawl for profile data from users and their unwitting friends (a rough sketch of what that looked like in practice follows the excerpt below):
...
The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.
...
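To make the “relatively basic coding skills” point concrete, here’s a minimal, purely illustrative sketch of roughly how an app could pull friends’ data under the pre-2014 Graph API v1.0 once a single user authorized it with the old friends_* extended permissions. The endpoint shape follows the documented v1.0 pattern, but the specific field list and token value are assumptions for illustration, not a reconstruction of any actual app:

```python
import requests

# Illustrative sketch of the deprecated, pre-2014 Graph API v1.0 behavior.
# The specific field names below are assumptions for illustration only.
# After ONE user authorized an app requesting the old "friends_*" extended
# permissions, the app's token could read profile fields belonging to that
# user's friends, none of whom had installed or approved the app.

ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # token granted by the one consenting user
GRAPH = "https://graph.facebook.com"

resp = requests.get(
    f"{GRAPH}/me/friends",
    params={
        "fields": "id,name,location,likes",  # assumed field list
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()

# Every record returned here describes someone who never used the app.
for friend in resp.json().get("data", []):
    print(friend["id"], friend.get("name"))
```

That was the whole “app,” more or less: one authenticated request per consenting user, fanned out across that user’s entire friend list.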
And while rogue app developers were at times dealt with, it was exceedingly rare: Parakilas didn’t witness a single audit of a developer’s systems during his time there.
Even more alarming is that Facebook was apparently quite keen on encouraging app developers to grab this Facebook profile data as an incentive to encourage even more app development. Apps were seen as so important to Facebook that Mark Zuckerberg himself had to give his personal approval to ban an app. And while that policy was later relaxed to no longer require Zuckerberg’s approval, it doesn’t sound like that policy change actually resulted in more apps getting banned:
...
Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.
Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”
During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.
...
So how many Facebook users likely had their private profile information harvested via this ‘fine print’ feature that allowed app developers to scrape the profiles of app users and their friends? According to Parakilas, probably a majority of Facebook users. So that black market of Facebook profiles probably includes a majority of Facebook users. But even more amazing is that Facebook handed out this personal user information to app developers in exchange for a 30 percent share of the money they made through their apps. Facebook was basically directly selling private user data to developers, which is a big reason why Parakilas’s estimate that a majority of Facebook users were impacted is likely true. Especially if, as Parakilas hints, the number of developers grabbing user profile information via these apps might be in the hundreds of thousands. That’s a lot of developers potentially feeding into that black market:
...
‘A majority of Facebook users’
Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.
That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.
The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.
Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.
Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.
Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.
...
“Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.”
And that, right there, is perhaps the biggest scandal here: Facebook just handed user data away in exchange for revenue streams from app developers. And this was a key element of its business model during the 2007–2014 period. “Read the fine print” in the terms of service was the excuse it used:
...
“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”
It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.
...
And this is all why Aleksandr Kogan’s assertions that he had a close working relationship with Facebook and did nothing technically wrong actually do seem to be backed up by Parakilas’s whistle-blowing. Partly because it’s hard to see what Kogan did that wasn’t part of Facebook’s business model, and partly because it’s hard to ignore that Kogan’s GSR shell company was one of the very last apps permitted to exploit the “friends permission” loophole. That sure does suggest Kogan really did have a “close working relationship” with Facebook. So close, in fact, that he got seemingly favored treatment compared to the vast number of apps that were apparently using this feature: 1 in 10 Facebook apps, according to a 2010 study:
...
The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.
While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.
“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.
If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.
...
““Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.”
As of 2010, around 11 percent of app developers requested data belonging to friends of users. Keep that in mind when Facebook claims that Aleksandr Kogan improperly obtained data from the friends of the people who downloaded Kogan’s app.
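(To put rough numbers on that extrapolation: Facebook reportedly hosted more than 550,000 active apps around 2010, a commonly cited figure that’s assumed here for illustration. Eleven percent of that would be roughly 60,000 apps requesting friends’ data, which is how you get from a 1,800-app sample to “tens of thousands of apps, if not more.”)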
So what made Facebook eventually end this “friends permissions” policy in mid-2014? While Parakilas had already left the company by then, he does recall conversations with executives who were nervous about competitors building their own social networks from all the data Facebook was giving away:
...
Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.
He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.
“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”
...
That’s how much data Facebook was handing out to encourage new app development: so much data that they were concerned about creating competitors.
Finally, it’s important to note that the picture painted by Parakilas only goes up to the end of 2012, when he left in frustration. So we don’t actually have testimony from Facebook insiders like Parakilas, people involved with policing app data breaches, covering the period when Cambridge Analytica was engaged in its mass data collection scheme:
...
Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”
...
Now, it seems like a safe bet that the problem only got worse after Parakilas left, given how the Cambridge Analytica situation played out, but we don’t yet know just how bad it got by that point.
Aleksandr Kogan: Facebook’s Close Friend (Until He Belatedly Wasn’t)
So, factoring in what we just saw with Parakilas’s claims about the extent to which Facebook was handing out private Facebook profile data — the internal profile that Facebook builds up about you — to app developers for widespread commercial applications, let’s take a look at some of the claims Aleksandr Kogan has made about his relationship with Facebook. Because while Kogan makes some extraordinary claims, they are also consistent with Parakilas’s claims, although in some cases Kogan’s description actually goes much further than Parakilas’s.
For instance, here’s what we learn from the Guardian article below:
1. In an email to colleagues at the University of Cambridge, Aleksandr Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions.
2. Kogan also claims in that email that the contract his GSR company signed with Facebook in 2014 made it absolutely clear the data was going to be used for commercial applications and that app users were granting Kogan’s company the right to license or resell the data. “We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.” So Kogan says he made it clear to both Facebook and users that the app was for commercial purposes and that the data might be resold, which sounds like the kind of situation Sandy Parakilas said he witnessed, except even more open (and which should be easily verifiable if the app code still exists).
3. Facebook didn’t actually kick Kogan off of its platform until March 16th of this year, just days before this story broke. Which is consistent with Kogan’s claims that he had a good working relationship with Facebook.
4. Kogan founded Global Science Research (GSR) in May 2014 with another Cambridge researcher, Joseph Chancellor, as co-founder. Chancellor is currently employed by Facebook.
5. Facebook provided Kogan’s University of Cambridge lab with a dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level”: 57 billion Facebook relationships in all. The data was anonymized and aggregated, so it didn’t literally include details on individual Facebook friendships; it was instead aggregate “friend” counts at a national level. The data was used to publish a study in Personality and Individual Differences in 2015, and two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. But it’s still a sign that Kogan is indeed being honest when he says he had a close working relationship with Facebook. It’s also a reminder that if Facebook’s claim that it was only handing out data for “research purposes” were true, it would have handed out anonymized, aggregated data like it did in this situation with Kogan.
6. That study co-authored by Kogan’s team and Facebook didn’t just use the anonymized, aggregated friendship data. The study also used non-anonymized Facebook data collected through Facebook apps using exactly the same techniques Kogan’s app for Cambridge Analytica used. This study was published in August of 2015. Again, it was a study co-authored by Facebook. GSR co-founder Joseph Chancellor left GSR a month later and joined Facebook as a user experience researcher in November 2015. Recall that it was a month after that, December 2015, when we saw the first news reports of Ted Cruz’s campaign using Facebook data. Also recall that Facebook responded to that December 2015 report by saying it would look into the matter. Facebook finally sent Cambridge Analytica a letter in August of 2016, days before Steve Bannon became Trump’s campaign manager, asking that Cambridge Analytica delete the data. So the fact that Facebook co-authored a paper with Kogan and Chancellor in August of 2015, and that Chancellor joined Facebook that November, is a pretty significant bit of context for evaluating Facebook’s behavior. Facebook didn’t just know it had worked closely with Kogan. It also knew it had just co-authored an academic paper using data gathered with the same technique Cambridge Analytica was charged with using.
7. Kogan does challenge one of the claims by Christopher Wylie. Specifically, Wylie claimed that Facebook became alarmed over the volume of data Kogan’s app was scooping up (50 million profiles) but Kogan assuaged those concerns by saying it was all for research. Kogan says this is a fabrication and Facebook never actually contacted him expressing alarm.
So, according to Aleksandr Kogan, Facebook really did have an exceptionally close relationship with Kogan and Facebook really was totally on board with what Kogan and Cambridge Analytica were doing:
The Guardian
Facebook gave data about 57bn friendships to academic
Volume of data suggests trusted partnership with Aleksandr Kogan, says analyst
Julia Carrie Wong and Paul Lewis in San Francisco
Thu 22 Mar 2018 10.56 EDT
Last modified on Sat 24 Mar 2018 22.56 EDT
Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.
Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.
A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.
“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”
Facebook downplayed the significance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was literally numbers – numbers of how many friendships were made between pairs of countries – ie x number of friendships made between the US and UK,” Facebook spokeswoman Christine Chen said by email. “There was no personally identifiable information included in this data.”
Facebook’s relationship with Kogan has since soured.
“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported that the data had been obtained by Cambridge Analytica.
“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.
On Friday 16 March, in anticipation of the Observer’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans, Facebook suspended Kogan from the platform, issued a statement saying that he “lied” to the company, and characterised his activities as “a scam – and a fraud”.
On Tuesday, Facebook went further, saying in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.
But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting activities – for it to inform the users whose personal information was improperly shared.
And Kogan has offered a defence of his actions in an interview with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.
The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.
Between June and August of that year, GSR paid approximately 270,000 individuals to use a Facebook questionnaire app that harvested data from their own Facebook profiles, as well as from their friends, resulting in a dataset of more than 50 million users. The data was subsequently given to Cambridge Analytica, in what Facebook has said was a violation of Kogan’s agreement to use the data solely for academic purposes.
In his email to colleagues at Cambridge, Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions. CNN first reported on the Cambridge email. Kogan did not respond to the Guardian’s request for comment on this article.
“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”
Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.
“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.
Whistleblower Christopher Wylie told the Observer that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”
In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and ... we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”
The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.
For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.
The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.
...
———-
“Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.”
An anonymized, aggregate dataset of 57bn Facebook friendships sure makes it a lot easier to take Kogan at his word when he claims a close working relationship with Facebook.
Now, keep in mind that the anonymized data was aggregated at the national level, so it’s not as if Facebook gave Kogan a list of 57 billion individual Facebook friendships. And when you think about it, that aggregated, anonymized data is far less sensitive than the personal Facebook profile data Kogan and other app developers were routinely grabbing during this period. It’s the fact that Facebook gave this data to Kogan in the first place that lends credence to his claims.
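To make that distinction concrete, here’s a minimal, hypothetical sketch (the numbers and field names are invented) of the difference between the national-aggregate dataset Facebook gave Kogan’s lab and the individual-level profile data that “friends permission” apps were scraping:

```python
# Hypothetical illustration; all values below are invented.

# National-aggregate data, like the 57bn-friendship dataset: counts of
# friendships between country pairs, with no individuals identifiable.
aggregate_friendships = {
    ("US", "UK"): 98_000_000,
    ("US", "CA"): 72_000_000,
}

# Individual-level data, like what "friends permission" apps collected:
# a named person tied to their profile attributes.
scraped_profile = {
    "id": "1234567890",
    "name": "Jane Doe",
    "location": "Austin, TX",
    "likes": ["Some Band", "Some TV Show"],
}

# The aggregate table can't be traced back to anyone; the profile record
# identifies a specific person who never consented.
print(aggregate_friendships[("US", "UK")], scraped_profile["name"])
```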
But the biggest factor lending credence to Kogan’s claims is the fact that Facebook co-authored a study with Kogan and others at the University of Cambridge using that anonymized, aggregated data. Two Facebook employees were named as co-authors of the study. That is definitely a sign of a close working relationship:
...
Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.
A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.
“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”
...
Even more damning for Facebook is that the research co-authored by Kogan, Facebook, and other researchers didn’t just include the anonymized, aggregated data. It also included a second dataset of non-anonymized data that was harvested in exactly the same way Kogan’s GSR app worked. And while Facebook apparently wasn’t involved in that part of the study, that’s beside the point. Facebook clearly knew about it if it co-authored the study:
...
The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.
For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.
The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.
...
But, alas, Kogan’s relationship with Facebook has since soured, with Facebook now acting as if Kogan had totally violated its trust. And yet it’s hard to ignore the fact that Kogan wasn’t formally kicked off Facebook’s platform until March 16th of this year, just a few days before all these stories about Kogan and Facebook were about to go public:
...
Facebook’s relationship with Kogan has since soured.
“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported that the data had been obtained by Cambridge Analytica.
“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.
On Friday 16 March, in anticipation of the Observer’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans, Facebook suspended Kogan from the platform, issued a statement saying that he “lied” to the company, and characterised his activities as “a scam – and a fraud”.
On Tuesday, Facebook went further, saying in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.
...
““The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.”
Mark Zuckerberg is complaining about a “breach of trust.” LOL!
And yet Facebook has yet to explain the nature of its relationship with Kogan, or why it didn’t kick him off the platform until only recently. But Kogan has an explanation: he’s a scapegoat, and he wasn’t doing anything Facebook didn’t know he was doing. And when you notice that Kogan’s GSR co-founder, Joseph Chancellor, is now a Facebook employee, it’s hard not to take his claims seriously:
...
But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting activities – for it to inform the users whose personal information was improperly shared.
And Kogan has offered a defence of his actions in an interview with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.
The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.
...
But if Kogan’s claims are to be taken seriously, we have a pretty serious scandal on our hands. Because Kogan claims that not only did he make it clear to Facebook and his app users that the data they were collecting was for commercial use — with no mention of academic or research purposes of the University of Cambridge — but he also claims that he made it clear the data GSR was collecting could be licensed and resold. And Facebook at no point raised any concerns at all about any of this:
...
“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”
Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.
“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.
...
Now, it’s worth noting that this casual acceptance of the commercial use of data collected through Facebook apps, including its potential licensing and resale, is actually a far more serious situation than the one Sandy Parakilas described during his time at Facebook. Recall that, according to Parakilas, all app developers had to tell Facebook was that they were going to use the profile data on app users and their friends to ‘improve the user experience.’ Commercial apps were fine from Facebook’s perspective. But Parakilas didn’t describe a situation where app developers openly made it clear they might license or resell the data. So Kogan’s claim that it was clear his app had commercial applications and might involve reselling the data describes something even more egregious than the situation Parakilas witnessed. But don’t forget that Parakilas left Facebook in late 2012, and Kogan’s app would have been approved in 2014, so it’s entirely possible Facebook’s policies got even more egregious after Parakilas left.
And it’s worth noting how Kogan’s claims differ from Christopher Wylie’s. Wylie asserts that Facebook grew alarmed by the volume of data GSR’s app was pulling from Facebook users and Kogan assured them it was for research purposes. Whereas Kogan says Facebook never expressed any alarm at all:
...
Whistleblower Christopher Wylie told the Observer that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”
In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and ... we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”
...
So as we can see, when it comes to Facebook’s “friends permissions” data sharing policy, its arrangement with Aleksandr Kogan was probably one of the more responsible ones it engaged in because, hey, at least Kogan’s work was ostensibly for research purposes and involved at least some anonymized data.
Cambridge Analytica’s Informal Friend: Palantir
And as we can also see, the more we learn about this situation, the harder it gets to dismiss Kogan’s claims that Facebook is making him a scapegoat in order to cover up not just the relationship Facebook had with Kogan but the fact that what Kogan was doing was routine for app developers for years.
But as the following New York Times article makes clear, Facebook’s relationship with Aleksandr Kogan isn’t the only working relationship that might lead back to Cambridge Analytica. There’s another Facebook connection to Cambridge Analytica, and it’s potentially far, far more scandalous than Facebook’s relationship with Kogan: Palantir might be the originator of the idea to create Kogan’s app for the purpose of collecting psychological profiles. That’s right, according to documents the New York Times has seen, Palantir, the private intelligence firm with a close relationship with the US national security state, was in talks with Cambridge Analytica from 2013 to 2014 about psychologically profiling voters, and it was an employee of Palantir who raised the idea of creating that app in the first place.
And this is of course wildly scandalous if true, because Palantir was founded by Facebook board member Peter Thiel, who also happens to be a far right political activist and a close ally of President Trump.
But it gets worse. And weirder. Because it sounds like one of the people encouraging SCL (Cambridge Analytica’s parent company) to work with Palantir was none other than Sophie Schmidt, daughter of Eric Schmidt, then Google’s executive chairman.
Keep in mind that this isn’t the first time we’ve heard about Palantir’s ties to Cambridge Analytica and Sophie Schmidt’s role in this. It was reported by the Observer last May. According to that May 2017 article in the Observer, Schmidt was passing through London in June of 2013 when she decided to call up her former boss at SCL and recommend that they contact Palantir. Also of interest: if you look at the current version of that Observer article, all mention of Sophie Schmidt has been removed and there’s a note that the article is the subject of legal complaints on behalf of Cambridge Analytica LLC and SCL Elections Limited. But in the original article she’s mentioned quite extensively. It would appear that someone is very upset about the Sophie Schmidt angle to this story.
So the Palantir/Sophie Schmidt side of this story isn’t new. But we’re learning a lot more about that relationship now. For instance:
1. In early 2013, Cambridge Analytica CEO Alexander Nix, an SCL director at the time, and a Palantir executive discussed working together on election campaigns.
2. An SCL employee wrote to a colleague in a June 2013 email that Schmidt was pushing them to work with Palantir: “Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?”
3. According to Christopher Wylie’s testimony to lawmakers, “There were Palantir staff who would come into the office and work on the data...And we would go and meet with Palantir staff at Palantir.” Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014.
4. The Palantir employee who floated the idea of creating the app ultimately built by Aleksandr Kogan is Alfredas Chmieliauskas. Chmieliauskas works on business development for Palantir, according to his LinkedIn page.
5. Palantir and Cambridge Analytica never formally started working together. A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. Wylie acknowledges that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge Analytica’s psychographic models. In other words, while there was never a formal relationship, there was a pretty significant informal one.
6. Mr. Chmieliauskas was in communication with Wylie’s team in 2014, during the period when Cambridge Analytica was initially trying to convince the University of Cambridge team to work with them. Recall that Cambridge Analytica initially discovered that the University of Cambridge team had exactly the kind of data it was interested in, collected via a Facebook app, but the negotiations ultimately failed, and it was then that Cambridge Analytica found Aleksandr Kogan, who agreed to create his own app. Well, according to this report, it was Chmieliauskas who initially suggested that the firm create its own version of the University of Cambridge team’s app as leverage in those negotiations. In essence, Chmieliauskas wanted Cambridge Analytica to show the University of Cambridge team that it could collect the information itself, presumably to drive a harder bargain. And when those negotiations failed, Cambridge Analytica did indeed create its own app after teaming up with Kogan.
7. Palantir asserts that Chmieliauskas was acting in his own capacity when he continued communicating with Wylie and made the suggestion to create their own app. Palantir initially told the New York Times that it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Palantir later revised this, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
And, again, do not forget that Palantir is owned by Peter Thiel, the far right billionaire who was an early investor in Facebook and remains one of its board members to this day. He was also a Trump delegate in 2016 and was in discussions with the Trump administration to lead the powerful President’s Intelligence Advisory Board, although he ultimately turned that offer down. Oh, and he’s an advocate of the Dark Enlightenment.
Basically, Peter Thiel was a member of the ‘Alt Right’ before that term was ever coined. And he’s a very powerful influence at Facebook. So learning that Palantir and Cambridge Analytica were in discussions to work together on election projects in 2013 and 2014, that a Palantir employee was advising Cambridge Analytica during the negotiations with the University of Cambridge team, and that Palantir employees helped engineer Cambridge Analytica’s psychographic models based on Facebook data is the kind of revelation that just might qualify as the most scandalous in this entire mess:
As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.
It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.
The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.
“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday.
...
The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook.
The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
“We learned today that an employee, in 2013–2014, engaged in an entirely personal capacity with people associated with Cambridge Analytica,” the company said. “We are looking into this and will take the appropriate action.”
The company said it was continuing to investigate but knew of no other employees who took part in the effort. Mr. Wylie told lawmakers that multiple Palantir employees played a role.
Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.
In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.
A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.
In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.
“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”
Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.
———-
“The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.”
Yep, a Facebook board member’s private intelligence firm was working closely with Cambridge Analytica as it developed its psychological profiling technology. It’s quite a revelation. The kind of explosive revelation that had Palantir first denying there was any relationship at all, followed by an acknowledgment/denial that, yes, a Palantir employee, Alfredas Chmieliauskas, was indeed working with Cambridge Analytica, but not on behalf of Palantir:
...
It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
...
The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
...
Adding to the scandalous nature of it all is that Eric Schmidt’s daughter suddenly appeared in June of 2013 to promote a relationship with Palantir to her old boss at SCL:
...
Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.
...
But this June 2013 proposal by Sophie Schmidt wasn’t what started Cambridge Analytica’s relationship with Palantir. Because that reportedly started in early 2013, when Alexander Nix and a Palantir executive discussed working together on election campaigns:
...
In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.
...
So Sophie Schmidt swooped in to promote Palantir to Cambridge Analytica months after the negotiations began. It raises the question of who encouraged her to do that.
Palantir now admits these discussions happened, but claims that it chose not to work with Cambridge Analytica because its executives “wanted to steer clear of election work.” And emails do indicate that Palantir formally turned the idea down: Nix and Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. And yet, according to Christopher Wylie, some Palantir employees helped engineer Cambridge Analytica’s psychographic models. Which suggests Palantir turned down a formal relationship in favor of an informal one:
...
A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.
In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
...
“There were Palantir staff who would come into the office and work on the data...And we would go and meet with Palantir staff at Palantir.”
That sure sounds like a relationship! Formal or not.
And that informal relationship continued during the period in 2014 when Cambridge Analytica was in negotiations with the University of Cambridge Psychometrics Centre:
...
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.
...
And it was during those negotiations, in May of 2014, when Chmieliauskas first proposed the idea of just replicating what the University of Cambridge Psychometrics Centre was doing for leverage in the negotiations. When those negotiations ultimately failed, Cambridge Analytica found another Cambridge University psychologist, Aleksandr Kogan, to build the app for them:
...
“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”
Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.
...
And that’s what we know so far about the relationship between Cambridge Analytica and Palantir. Which raises a number of questions, like whether or not this informal relationship continued well after Cambridge Analytica started harvesting all that Facebook information. Let’s look at the key facts and open questions about Palantir’s involvement so far:
1. Palantir employees helped build the psychographic profiles.
2. Mr. Chmieliauskas was in contact with Wylie at least as late as May of 2014 as Cambridge Analytica was negotiating with the University of Cambridge’s Psychometrics Centre.
3. We don’t know when this informal relationship between Palantir and Cambridge Analytica ended.
4. We don’t know if the informal relationship between Palantir and Cambridge Analytica — which largely appears to center around Mr. Chmieliauskas — really was largely Chmieliauskas’s initiative alone after Palantir initially rejected a formal relationship (it’s possible) or if Chmieliauskas was directed to pursue this relationship informally but on behalf of Palantir to maintain deniability in the case of awkward situations like the present one (also very possible, and savvy given the current situation).
5. We don’t know whether the Palantir employees who helped build those psychographic profiles were working with the data Cambridge Analytica harvested from Facebook or with the earlier, inadequate data sets that didn’t include the Facebook data. Because if the Palantir employees helped build the psychographic profiles based on the Facebook data, that implies this informal relationship went on a lot longer than May of 2014, since that’s when the data first started getting collected via Kogan’s app. How long? We don’t yet know.
6. Neither do we know how much of this data ultimately fell into the hands of Palantir. As Wylie described it, “There were Palantir staff who would come into the office and work on the data...And we would go and meet with Palantir staff at Palantir.” So did those Palantir employees who were working on “the data” take any of that data back to Palantir?
7. For that matter, given that Peter Thiel sits on the board of Facebook, and given how freely Facebook hands out this kind of data, we have to ask whether Palantir already has direct access to exactly the kind of data Cambridge Analytica was harvesting. Did Palantir even need Cambridge Analytica’s data? Perhaps Palantir was already using apps of its own to harvest this kind of data? We don’t know. At the same time, don’t forget that even if Palantir had ready access to the same Facebook profile data gathered by Kogan’s app, it’s still possible Palantir would have had an interest in the company purely to see how the data was analyzed and to learn from that. In other words, for Peter Thiel’s Palantir, the interest in Cambridge Analytica may have been more about the algorithms than the data. Don’t forget that if anyone is the real power behind the throne at Facebook, it’s probably Thiel.
8. What on earth is going on with Sophie Schmidt, daughter of Google CEO Eric Schmidt, pushing Cambridge Analytica to work with Palantir in June of 2013, months after Cambridge Analytica and Palantir began talking with each other? That seems potentially significant.
Those are just some of the questions raised about Palantir’s ambiguously ominous relationship with Cambridge Analytica. But don’t forget that it’s not just Palantir we need to ask these kinds of questions about. For instance, what about Steve Bannon’s Breitbart? Does Breitbart, home of the neo-Nazi ‘Alt Right’, also have access to all that harvested Cambridge Analytica data? Not just the raw Facebook data but also the processed psychological profile data on 50 million Americans that Cambridge Analytica generated. Does Breitbart have the processed profiles too? And what about the Republican Party? And all the other entities out there that gained access to this Facebook profile data? Just how many different entities around the globe possess that Cambridge Analytica data set?
It’s Not Just Cambridge Analytica. Or Facebook. Or Google. It’s Society.
Of course, as we saw with Sandy Parakilas’s whistle-blower claims, when it comes to the question of who might possess Facebook profile data harvested during the 2007–2014 period when Facebook had its “friends permissions” policy, the list of suspects includes potentially hundreds of thousands of developers and anyone who has purchased this information on the black market.
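To make the mechanics of that loophole concrete, here is a minimal sketch of what “friends permissions” harvesting looked like from an app developer’s side. This is a loose, illustrative reconstruction of the long-removed v1.0-era Graph API; the endpoint shapes and field names are assumptions for the sake of the sketch, not verified calls:

```python
import requests

GRAPH = "https://graph.facebook.com/v1.0"  # removed in 2015; illustrative only

def harvest(app_user_token):
    """Sketch of v1.0-era 'friends permissions' harvesting: one consenting
    app user's token unlocks profile fields for all of their friends."""
    fields = {"fields": "id,name,likes", "access_token": app_user_token}

    # The one person who actually agreed to use the app.
    profiles = [requests.get(f"{GRAPH}/me", params=fields).json()]

    # Their friends -- none of whom installed the app or agreed to anything.
    friends = requests.get(f"{GRAPH}/me/friends",
                           params={"access_token": app_user_token}).json()
    for friend in friends.get("data", []):
        profiles.append(
            requests.get(f"{GRAPH}/{friend['id']}", params=fields).json())
    return profiles
```

Each consenting user multiplied the haul by the size of their friends list, which is how a few hundred thousand app installs could balloon into tens of millions of harvested profiles.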
Don’t forget one of the other amazing aspects of this whole situation: if hundreds of thousands of developers were using this feature to scrape user profiles, that means this really was an open secret. Lots and lots of people were doing this. For years. So, like many scandals, perhaps the most scandalous part of it is that we’re learning about something we should have known all along, and that many of us did know all along. It’s not like it’s a secret that people are being surveilled in detail in the internet age and that this data is being stored and aggregated in public and private databases and put up for sale. We’ve collectively known this all along. At least on some level.
And yet this surveillance is so pervasive that it’s almost never thought about on a moment-by-moment basis at an individual level. When people browse the web, they presumably aren’t thinking about the volume of tracking cookies and other personal information slurped up as a result of each mouse click. Nor are they thinking about how that click feeds the numerous personal profiles of them floating around the commercial data brokerage marketplace. So in a more fundamental sense we don’t actually know we’re being surveilled, because we’re not thinking about it.
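And the machinery doing that slurping is mundane. Here’s a toy sketch of how a tracking “pixel” works, offered as illustration only (it assumes nothing about any real ad network, and modern browsers increasingly block third-party cookies): a cookie plus the Referer header is all it takes to stitch one browser’s clicks across every site that embeds the pixel.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

# A tiny transparent GIF: the embedded <img> renders as nothing visible.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class TrackingPixel(BaseHTTPRequestHandler):
    """Every page embedding <img src="//tracker.example/px.gif"> reports
    its visitors here, tagged with a browser-wide cookie id."""
    def do_GET(self):
        uid = self.headers.get("Cookie") or f"uid={uuid.uuid4().hex}"
        page = self.headers.get("Referer", "unknown page")
        print(uid, "was just reading", page)  # the profile accumulates here
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Set-Cookie", uid)   # same id on every other site
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), TrackingPixel).serve_forever()
```

Every embedding site feeds the same log, and the person clicking sees nothing but the page they asked for.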
It’s one example of how humans aren’t wired to naturally think about the macro forces impacting their lives in day-to-day decisions, which was fine when we were cavemen but becomes a problematic instinct when we’re literally mastering the laws of physics and shaping our world and environment. From physics and nature to history and contemporary trends, the vast majority of humanity spends very little time studying these topics. Which is completely understandable given the lack of time or resources to do so, but that understandable instinct creates a world perfectly set up for abuse by surveillance states, both public and private, which makes it less understandable and much more problematic.
So, in the interest of gaining perspective on how we got to this point, where Facebook emerged as an ever-growing Panopticon just a few short years after its conception, let’s take a look at one last article. It’s an article by investigative journalist Yasha Levine, who recently published the must-read book Surveillance Valley: The Secret Military History of the Internet. It’s a book filled with vital historical fun facts about the internet. Fun facts like...
1. How the internet began as a system built for national security purposes, with a focus on military hardware and command-and-control communications in general. But there was also a focus on building a system that could collect, store, process, and distribute the massive volumes of information used to wage the Vietnam War. Beyond that, these early computer networks also acted as a collection and sharing system for dealing with domestic national security concerns (concerns that centered around tracking anti-war protesters, civil rights activists, etc.). That’s what the internet started out as: a system for storing data about people and conflict for US national security purposes.
2. Building databases of profiles on people (foreign and domestic) was one of the very first goals of these internet predecessors. In fact, one of the key visionaries behind the development of the internet, Ithiel de Sola Pool, both helped shape the development of the early internet as a surveillance and counterinsurgency technology and pioneered data-driven election campaigns. He even started a private firm to do this: Simulmatics. Pool’s vision was a world where the surveillance state acted as a benign master that kept the peace by using superior knowledge to nudge people in the ‘right’ direction.
3. This vision of vast databases of personal profiles was largely a secret at first, but it didn’t remain that way. And there was actually quite a bit of public paranoia in the US about these internet predecessors, especially within the anti-Vietnam War activist communities. Flash forward a couple of decades and that paranoia has faded almost entirely...until scandals like the current one erupt and we temporarily grow concerned.
4. What Cambridge Analytica is accused of doing is what data giants like Facebook and Google do every day and have been doing for years. And it’s not just the giants. Smaller firms are scooping up vast amounts of information too...it’s just not as vast as what the giants are collecting. Even cute apps, like the wildly popular Angry Birds, have been found to collect all sorts of data about users.
5. While it’s great that public attention is being directed at the kind of sleazy, manipulative activities Cambridge Analytica was engaging in, deceptively wielding real power over real unwitting people, it is a wild mischaracterization to act like Cambridge Analytica was exerting mass mind-control over the masses using internet marketing voodoo. What Cambridge Analytica, or any of the other sleazy manipulators, did was indeed influential, but it needs to be viewed in the context of a political state of affairs where massive numbers of Americans, including Trump voters, really have been collectively failed by the American power establishment for decades. The collapse of the American middle class and the rise of the plutocracy created the kind of macro environment where a carnival barker like Donald Trump could use firms like Cambridge Analytica to ‘nudge’ people in the direction of voting for him. In other words, the focus on Cambridge Analytica’s manipulation of people’s psychological profiles absent any recognition of the massive political failures of the last several decades in America — the mass socioeconomic failures of the American embrace of ‘Reaganomics’ and right-wing economic gospel, coupled with the American Left’s failure to effectively repudiate these doctrines — is profoundly ahistorical. The story of the rise of the power of firms like Facebook, Google, and Cambridge Analytica implicitly includes that entire history of political and socioeconomic failures, tied to the failure to effectively respond to the rise of the American right-wing over the last several decades. And we are making a massive mistake if we forget that. Cambridge Analytica wouldn’t have been nearly as effective in nudging people towards voting for someone like Trump if so many people weren’t already so ready to burn the current system down.
These are the kinds of historical chapters that can’t be left out of any analysis of Cambridge Analytica. Because Cambridge Analytica isn’t the exception. It’s an exceptionally sleazy example of the rules we’ve been playing by for a while, whether we realized it or not:
The Baffler
The Cambridge Analytica Con
Yasha Levine,
March 21, 2018

“The man with the proper imagination is able to conceive of any commodity in such a way that it becomes an object of emotion to him and to those to whom he imparts his picture, and hence creates desire rather than a mere feeling of ought.”
—Walter Dill Scott, Influencing Men in Business: Psychology of Argument and Suggestion (1911)
This week, Cambridge Analytica, the British election data outfit funded by billionaire Robert Mercer and linked to Steven Bannon and President Donald Trump, blew up the news cycle. The charge, as reported by twin exposés in the New York Times and the Guardian, is that the firm inappropriately accessed Facebook profile information belonging to 50 million people and then used that data to construct a powerful internet-based psychological influence weapon. This newfangled construct was then used to brainwash-carpet-bomb the American electorate, shredding our democracy and turning people into pliable zombie supporters of Donald Trump.
In the words of a pink-haired Cambridge Analytica data-warrior-turned-whistleblower, the company served as a digital armory that turned “Likes” into weapons and produced “Steve Bannon’s psychological warfare mindfuck tool.”
Scary, right? Makes me wonder if I’m still not under Cambridge Analytica’s influence right now.
Naturally, there are also rumors of a nefarious Russian connection. And apparently there’s more dirt coming. Channel 4 News in Britain just published an investigation showing top Cambridge Analytica execs bragging to an undercover reporter that their team uses high-tech psychometric voodoo to win elections for clients all over the world, but also dabbles in traditional meatspace techniques as well: bribes, kompromat, blackmail, Ukrainian escort honeypots—you know, the works.
It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet —and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.
But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.
So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.
Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.
Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.
All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.
What do these companies know about us, their users? Well, just about everything.
Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.
On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.
It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment, that’s been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of service for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.
In launching Android, Google made a gamble that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Mobile Windows like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smart phone market and dominates the mobile for-profit surveillance business.
These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.
Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.
The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.
The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”
Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.
Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver target voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pin-point accuracy based on their beliefs and views.
Here’s how The National Journal’s Andrew Rice described i360 in 2015:
Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.
Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”
In 2008, the hip young Blackberry-toting Barack Obama was the first major-party candidate on the national scene to truly leverage the power of internet-targeted agitprop. With help from Facebook cofounder Chris Hughes, who built and ran Obama’s internet campaign division, the first Obama campaign built an innovative micro-targeting initiative to raise huge amounts of money in small chunks directly from Obama’s supporters and sell his message with a hitherto unprecedented laser-guided precision in the general election campaign.
...
Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.
What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.
On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.
So our present-day freakout over Cambridge Analytica needs to be put in the broader historical context of our decades-long complacency over Silicon Valley’s business model. The fact is that companies like Facebook and Google are the real malicious actors here—they are vital public communications systems that run on profiling and manipulation for private profit without any regulation or democratic oversight from the societies in which they operate. But, hey, let’s blame Cambridge Analytica. Or better yet, take a cue from the Times and blame the Russians along with Cambridge Analytica.
***
There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.
This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.
The truth is that the internet has never been about egalitarianism or democracy.
The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.
The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.
Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”
In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.
Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.
He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.
Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’s work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.
At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”
With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.
This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.
Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.
A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship or political platforms but simply sold candidates just like they would a car or a bar of soap.
***
Simulmatics and its first-generation imitations are now ancient history—dating from the long-ago time when computers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The internet surrounds us, engulfing and monitoring everything we do. We are tracked and watched and profiled every minute of every day by countless companies—from giant platform monopolies like Facebook and Google to boutique data-driven election firms like i360 and Cambridge Analytica.
Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.
Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.
***
And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to status quo, when everything was good.
Cambridge Analytica has been one of the lesser bogeymen used to explain Trump’s victory for quite a while, going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica’s Facebook heist, was skeptically questioning the company’s technology and its role in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company’s overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.
Yet now, with Robert Mueller’s Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn’t just a far-fetched marketing ploy—it’s a real and present danger to a virtuous info-media status quo. And it’s most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.
It’s a comforting idea for our political elite, but it’s not true. Alexander Nix, Cambridge Analytica’s well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix’s business plan is but an updated version of Ithiel de Sola Pool’s vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we’re kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue’s gallery of bad political actors, but to finger the real culprits behind Donald Trump’s takeover of America, the self-appointed watchdogs of our country’s imperiled political virtue had best take a long and sobering look in the mirror.
———-
“The Cambridge Analytica Con” by Yasha Levine; The Baffler; 03/21/2018
“It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet —and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.”
Yes indeed, it is great to see that this topic is finally getting the attention it has long deserved. But it’s not great to see the topic limited to Cambridge Analytica and Facebook. As Levine puts it, “We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries.” Societies in general are held together via overt and covert influence, but we’ve gotten really, really good at that over the last half century in America and the story of Cambridge Analytica, and the larger story of Sandy Parakilas’s whistle-blowing about mass data collection, can’t really be understood outside that historical context:
...
But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.
...
And the first step in putting the Cambridge Analytica story in proper perspective is recognizing that what it is accused of doing — grabbing personal data and building profiles for the purpose of influencing voters — is done every day by entities like Facebook and Google. It’s a regular part of our lives. And you don’t even need to use Facebook or Google to become part of this vast commercial surveillance system. You just need to communicate with someone who does use those platforms:
...
Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.
All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.
What do these companies know about us, their users? Well, just about everything.
Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.
On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.
...
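Nothing in that patent language requires exotic technology, either. Here’s a deliberately naive sketch of attribute extraction from email text; every keyword cue below is invented for illustration, and real profilers use trained models over far richer signals:

```python
import re

# Invented-for-illustration cues, standing in for trained classifiers.
SIGNALS = {
    "parent":         r"\b(daycare|PTA|my kids?)\b",
    "homeowner":      r"\b(mortgage|property tax|HOA)\b",
    "job_seeker":     r"\b(resume|interview|cover letter)\b",
    "health_concern": r"\b(lab results|prescription|diagnosis)\b",
}

def profile_from_email(text):
    """Tag an email body with inferred attributes. Note the asymmetry:
    the *sender's* profile grows even if only the recipient uses the
    service -- which is exactly how shadow profiles accumulate."""
    return {label for label, pattern in SIGNALS.items()
            if re.search(pattern, text, re.IGNORECASE)}

print(profile_from_email(
    "Attached are the lab results; also, can you resend the mortgage doc?"))
# -> {'health_concern', 'homeowner'}  (set order may vary)
```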
The next step in contextualizing this is recognizing that Facebook and Google are merely the biggest fish in an ocean of data brokerage markets with many smaller inhabitants trying to do the same thing. This is part of what makes Facebook’s handing over of profile data to app developers so scandalous...Facebook clearly knew there was a voracious market for this information and made a lot of money selling into that market:
...
It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment, that’s been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of service for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Mobile Windows like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smart phone market and dominates the mobile for-profit surveillance business.
These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.
...
And when it comes to political campaigns, the digital giants like Facebook and Google already have special election units set up to give privileged access to political campaigns so they can influence voters even more effectively. The stories about the Trump campaign’s use of Facebook “embeds” to run a massive advertising campaign of “A/B testing on steroids,” systematically experimenting on voter ad responses, are part of that larger story of how these giants have already made the manipulation of voters big business:
...
Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.
The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”
Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.
Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver target voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pin-point accuracy based on their beliefs and views.
Here’s how The National Journal’s Andrew Rice described i360 in 2015:
Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.
Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”
...
And offering campaigns special services to manipulate voters isn’t just big business. It’s a largely unregulated business. If Facebook decides to covertly manipulate you by altering its newsfeed algorithms so that you see more news articles from your conservative-leaning friends (or liberal-leaning friends), that’s totally legal. Because, again, subtly manipulating people is as American as apple pie:
...
Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.
On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.
...
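To see how small such a “data shift” can be, consider a toy ranking function. This is not Facebook’s actual feed code, just a generic engagement-scored feed with one hidden coefficient added for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_leaning: str   # e.g. "candidate_A", "candidate_B", "none"
    engagement: float     # whatever organic score the feed already computes

def rank_feed(posts, boost_leaning=None, boost=1.05):
    """Order posts by score. A 5% nudge for one side is imperceptible
    per-post but systematically shifts what the user reads."""
    def score(p):
        s = p.engagement
        if boost_leaning and p.author_leaning == boost_leaning:
            s *= boost
        return s
    return sorted(posts, key=score, reverse=True)

feed = [Post("candidate_A", 0.97), Post("candidate_B", 1.00),
        Post("none", 0.99)]
print([p.author_leaning for p in rank_feed(feed)])
# -> ['candidate_B', 'none', 'candidate_A']   (organic order)
print([p.author_leaning for p in rank_feed(feed, "candidate_A")])
# -> ['candidate_A', 'candidate_B', 'none']   (0.97 * 1.05 tops the feed)
```

No individual post looks out of place; only the aggregate exposure changes, which is part of what makes this kind of influence so hard to detect from the outside.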
And this contemporary state of affairs didn’t emerge spontaneously. As Levine covers in Surveillance Valley, this is what the internet — back when it was the ARPANET military network — was all about from its very conception:
...
There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.
The truth is that the internet has never been about egalitarianism or democracy.
The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.
The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.
...
“The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.”
And one of the key figures behind this early ARPANET version of the internet, Ithiel de Sola Pool, got his start in this area in the 1950s, working at the Hoover Institution at Stanford University to understand the nature and causes of left-wing revolutions and distill them down to a mathematical formula. Pool, a virulent anti-Communist, also worked for JFK’s 1960 campaign and went on to start a private company, Simulmatics, offering services in modeling and manipulating human behavior based on large data sets on people:
...
Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”

In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.
Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in 1950s working on project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.
He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.
Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.
...
And part of what drove Pool was a utopian belief that computers and massive amounts of data could be used to run society harmoniously. Left-wing revolutions were problems to be managed with Big Data. That’s pretty important historical context when thinking about the role Cambridge Analytica played in electing Donald Trump:
...
At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”
...
And guess what: the American public wasn’t enamored with Pool’s vision of a world managed by computing technology and Big Data models of society. When the public learned about these early versions of the internet, inspired by visions of a computer-managed world, in the ’60s and ’70s, people got scared:
...
With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society. This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.
Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.
A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.
...
But that fear somehow disappeared in subsequent decades, only to be replaced with a faith in our benevolent techno-elite. And a faith that this mass public/private surveillance system is actually an empowering tool that will lead to a limitless future. And that is perhaps the biggest scandal here: The public didn’t just forget to keep an eye on the powerful. The public forgot to keep an eye on the people whose power is derived from keeping an eye on the public. We built a surveillance state at the same time we fell into a fog of civic and historical amnesia. And that has coincided with the rise of a plutocracy, the dominance of right-wing anti-government economic doctrines, and the larger failure of the American political and economic elites to deliver a society that actually works for average people. To put it another way, the rise of the modern surveillance state is one element of a massive, decades-long process of collectively ‘dropping the ball’. We screwed up massively, and Facebook and Google are just two of the consequences. And yet we still don’t view the Trump phenomenon within the context of that massive collective screw up, which means we’re still screwing up massively:
...
Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote. Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.
***
And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.
...
So the biggest story here isn’t that Cambridge Analytica was engaged in a mass manipulation campaign. And the biggest story isn’t even that Cambridge Analytica was engaged in a cutting-edge commercial mass manipulation campaign. Because both of those stories are eclipsed by the story that even if Cambridge Analytica really was engaged in a commercial cutting-edge campaign, it probably wasn’t nearly as cutting-edge as what Facebook and Google and the other data giants routinely engage in. And this situation has been building for decades, within the context of the much larger scandal of the rise of an oligarchy that more or less runs America by and for powerful interests. Powerful interests that are overwhelmingly dedicated to right-wing elitist doctrines that view the public as a resource to be controlled and exploited for private profit.
It’s all a reminder that, like so many incredibly complex issues, creating very high quality government is the only feasible answer. A high quality government managed by a self-aware public. Some sort of ‘surveillance state’ is almost an inevitability as long as we have ubiquitous surveillance technology. Even the array of ‘crypto’ tools touted in recent years have consistently proven to be vulnerable, which isn’t necessarily a bad thing since ubiquitous crypto-technology comes with its own suite of mega-collective headaches. National security and personal data insecurity really are intertwined in both mutually inclusive and exclusive ways. It’s not as if the national security hawks’ argument that “you can’t be free if you’re dead from [insert war, terror, or the random chaos a national security state is supposed to deal with]” is invalid. But fears of Big Brother are also valid, as our present situation amply demonstrates. The path isn’t clear, which is why a national security state with a significant private sector component and access to ample intimate details is likely for the foreseeable future whether you like it or not. People err on the side of immediate safety. So we better have very high quality government. Especially high quality regulations for the private sector components of that national security state.
And while digital giants like Google and Facebook will inevitably have access to troves of personal data that they need to offer the kinds of services people need, there’s no reason they can’t be regulated heavily so they don’t become personal data repositories for sale. Which is what they are now.
What do we do about services that people use to run their lives which, by definition, necessitate the collection of private data by a third-party? How do we deal with these challenges? Well, again, it starts with being aware of them and actually trying to collectively grapple with them so some sort of general consensus can be arrived at. And that’s all why we need to recognize that it is imperative that the public surveil the surveillance state along with surveilling the rest of the world going on around us too. A self-aware surveillance state comprised of a self-aware populace who know what’s going on with their surveillance state and the world. In other words, part of the solution to ‘Big Data Big Brother’ really is a society of ‘Little Brothers and Sisters’ who are collectively very informed about what is going on in the world and politically capable of effecting changes to that surveillance state — and the rest of government or the private sector — when necessary change is identified. In other other words, the one ‘utopian’ solution we can’t afford to give up on is the utopia of a well-functioning democracy populated by a well-informed citizenry. A citizenry armed with relevant facts and wisdom (and an extensive understanding of the history and techniques of fascism and other authoritarian movements). Because a clueless society will be an abusively surveilled society.
But the fact that this Cambridge Analytica scandal is a surprise and is being covered largely in isolation from this broader historical and contemporary context is a reminder that we are nowhere near that democratic ideal of a well-informed citizenry. Well, guess what would be a really valuable tool for surveilling the surveillance state and the rest of the world around us and becoming that well-informed citizenry: the internet! Specifically, we really do need to read and digest growing amounts of information to make sense of an increasingly complex world. But the internet is just the start. The goal needs to be the kind of functional, self-aware democracy where situations like the current one don’t develop in a fog of collective amnesia and can be pro-actively managed. To put it another way, we need an inverse of Ithiel de Sola Pool’s vision of a world where benevolent elites use computers and Big Data to manage the rabble and ward off political revolutions. Instead, we need a political revolution of the rabble fueled by the knowledge of our history and world that the internet makes widely accessible. And one of the key goals of that political revolution needs to be to create a world where the knowledge the internet makes widely available is used to rein in our elites and build a world that works for everyone.
And yes, that implicitly implies a left-wing revolution, since left-wing democratic movements are the only kind that have everyone in mind. And yes, this implies an economic revolution that systematically frees up time for virtually everyone so people actually have the time to inform themselves. Economic security and time security. We need to build a world that provides both to everyone.
So when we ask ourselves how we should respond to the growing Cambridge Analytica/Facebook scandal, don’t forget that one of the key lessons the story of Cambridge Analytica teaches us is that there is an immense amount of knowledge about ourselves — our history and contemporary context — that we needed to learn and didn’t. And that includes envisioning what a functional democratic society and economy that works for everyone would look like and building it. Yes, the internet could be very helpful in that process, just don’t forget about everything else that will be required to build that functional democracy.
Here’s a good example of how many of the problems with Facebook are facilitated by the privacy problems in the rest of the tech sector: A number of Facebook users discovered a rather creepy privacy violation by Facebook. It turns out that Facebook was collecting metadata about the calls and texts people were sending from their smartphones via the Facebook app and Google’s Android operating system.
And it also turns out that Facebook used a number of sleazy excuses to “get permission” to collect this data. First, Facebook had users agree to giving such data away by hiding it in obtuse language in the user agreement. Second, the default setting for the Facebook app was to give this data away. Users could turn off this data sharing, but it was never obvious it was on.
Third, it was based on exploiting how Android’s user permissions system encourages people to share vast amounts of data without realizing it. This is where this becomes a Google scandal too. If you had an Android phone, the Facebook app would try to get permission to access your phone contact information. This was ostensibly to be used for Facebook’s friend recommendation algorithms. If you granted permission to read contacts during the Facebook app’s installation on older versions of Android — before version 4.1 (Jelly Bean) — giving an app permission to read contact information also granted it permission to read call and message logs by default. So this was just an egregious privacy design by Google, and Facebook egregiously exploited it (surprise!).
And when this loose permissions system was fixed in later versions of Android, Facebook continued to use a loophole to keep grabbing the call and text metadata. The permission structure was changed in the Android API in version 16 (the API level corresponding to Android 4.1). But Android applications could bypass this change if they were written against earlier versions of the API, so Facebook’s app could continue to gain access to call and SMS data by specifying an earlier Android SDK version. In other words, upgrading the Android operating system didn’t guarantee that upgrades to user data privacy rules would actually take effect for the apps you already had installed. Which, again, is egregious. But that’s what Google’s Android operating system allowed, and Facebook totally exploited it until Google finally closed the loophole in October of 2017.
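To make it concrete just how much data was in play here, here’s a minimal sketch of what reading ‘call metadata’ looks like on Android, using the platform’s documented CallLog content provider. To be clear, this is a generic illustration of what any app holding the call-log permission could do (a permission that, before API level 16, came bundled with the contacts permission, and that apps declaring an older target SDK could keep exercising on newer phones). It is not Facebook’s actual code:

```kotlin
import android.content.Context
import android.provider.CallLog

// One row of call metadata: the other party's number, the call direction,
// when it happened, and how long it lasted.
data class CallRecord(val number: String, val type: Int, val date: Long, val duration: Long)

// Any app holding the call-log permission can read the entire system call
// log through this standard content-provider query. Before Android 4.1
// (API 16), the contacts permission alone was enough to reach this data.
fun readCallMetadata(context: Context): List<CallRecord> {
    val records = mutableListOf<CallRecord>()
    context.contentResolver.query(
        CallLog.Calls.CONTENT_URI,
        arrayOf(
            CallLog.Calls.NUMBER, CallLog.Calls.TYPE,
            CallLog.Calls.DATE, CallLog.Calls.DURATION
        ),
        null, null, null
    )?.use { cursor ->
        val number = cursor.getColumnIndexOrThrow(CallLog.Calls.NUMBER)
        val type = cursor.getColumnIndexOrThrow(CallLog.Calls.TYPE)
        val date = cursor.getColumnIndexOrThrow(CallLog.Calls.DATE)
        val duration = cursor.getColumnIndexOrThrow(CallLog.Calls.DURATION)
        while (cursor.moveToNext()) {
            records.add(
                CallRecord(
                    number = cursor.getString(number) ?: "",  // other party's number
                    type = cursor.getInt(type),               // incoming/outgoing/missed
                    date = cursor.getLong(date),              // timestamp, epoch millis
                    duration = cursor.getLong(duration)       // call length in seconds
                )
            )
        }
    }
    return records
}
```

And notice that every record names the number on the other end of the call, which is why this kind of collection sweeps in people who never installed the app at all. We’ll come back to that point below.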
Note that Apple’s iOS phones didn’t have this issue with the Facebook app because the iOS operating system simply does not give apps access to that kind of information. So the permissions Google is giving out are bad even compared to its major competitor in the smartphone operating system space.
It’s also quite analogous to what Facebook was doing with the “friends permissions” giveaway of Facebook profile information to app developers. In both cases we have a major platform with a giant privacy-violating loophole built into it that developers knew about but the public wasn’t really aware it was signing up for. That’s become much of the modern internet giant business model, and as we can see it’s a model that feeds on itself. Google and Facebook feed information to each other, indicating that the Big Data giants have determined that it’s more profitable to share their data on all of us than to keep it locked and proprietary.
Recall how Facebook whistle-blower Sandy Parakilas said he remembered Facebook executives getting concerned that they were giving so much of their information on people away to app developers that competitors would be able to create their own social networks. That’s how much data Facebook was giving away. And now we learn that Google’s operating system made an egregious amount of data available to app developers — like metadata on calls and texts — if people gave an app “contact” permissions.
And so we can see that Facebook and Google aren’t just in the ad space. They’re in the data brokerage space too. They’ve clearly determined that maximizing profits just might require handing over the kind of data people assumed these data giants carefully guarded. Instead, they’ve been carefully and steadily handing that data out. Presumably because it’s more profitable:
“To understand what Facebook is defending requires a lot of explanation—and that’s the heart of the problem.”
It’s a key insight: It really is a reflection of the heart of the problem that simply understanding what Facebook is defending requires a lot of explanation. When Facebook started collecting people’s call and text metadata via its app, it was exploiting the fact that Google’s Android system allowed it to do that in the first place when users gave “contact” permissions to an app (most people probably didn’t assume that giving an app contact permissions was also giving away call and text metadata). And then, after Google changed the Android app permissions system and separated the permissions for contact information from the permissions for call and text metadata, Facebook relied on a loophole Google provided where apps that were already installed could continue collecting that data. And none of this was ever made clear to the millions of people using the Facebook app on their Android phones because it was hidden in the dense text of user agreements that no one reads. The convolutedness of the act obscures the act.
And keep in mind that Facebook is claiming that it merely wanted this call and text metadata for its friend recommendation algorithm. Which is, of course, absurd. That data was obviously going to go into the pool of data Facebook is compiling on everyone.
“To put all of that into plain English, Google’s Android OS has its own privacy issues, and coupled with Facebook’s apps, it could’ve made it possible for Facebook users to opt-into the company’s surveillance program without realizing it.”
Facebook and Google working together to share more of what they know about us with each other. That’s basically what happened. It was a team effort.
And as the article notes, when Facebook claims that this was all fine because it was an opt-in option, it ignores the fact that the app used to make it very unclear that opting out was an option at all. The opt-out option was hidden in the settings, and opting in was the default setting people had selected when they installed the app. And it was like that as recently as 2016:
And it’s also all an example of how the ostensibly helpful reasons for collecting this personalized data (like making the friend recommendation algorithms better, in this case) are used as an excuse to engage in the personal information equivalent of a smash and grab ransacking:
“However, Facebook has turned a convenience into an excuse for grabbing more information that it can combine with everything else to make a perfect psychological and social profile of you, the user. And it has demonstrated that it can’t be trusted to keep that data to itself.”
While Facebook may not have perfect psychological and social profiles of everyone, they probably have the best or nearly the best, with Google possibly knowing more about people. And it’s hard to imagine that this call and text metadata isn’t potentially pretty valuable information for putting together those personal profiles on everyone. So it’s worth noting that this is potentially the same kind of profile data that Facebook gave out to Cambridge Analytica and thousands of other app developers. In other words, this call and text metadata slurping scandal is potentially also part of the Cambridge Analytica scandal, in the sense that the insights Facebook gained from the call and text metadata could have shown up in those profiles Facebook was handing out to app developers like Cambridge Analytica.
Which is a reminder that this new scandal of Google’s Android OS giving Facebook this call and text metadata probably involves a lot more than just Facebook collecting this kind of data. Who knows how many other app developers whose apps requested “contact” permissions also went ahead and grabbed all the call and text metadata?
Also don’t forget that this call and text metadata includes data about the people on the other side of those calls and texts. So Facebook was grabbing data on more people than just the app users. And any other Android developers were potentially grabbing that data too. It’s another parallel with the Facebook “friends permission” loophole exploited by Cambridge Analytica and other Facebook app developers: you don’t have to download these privacy violating apps to be impacted. Simply communicating with someone who does have the privacy violating app will get your privacy violated too.
So as we can see, Facebook doesn’t just have a scandal involving giving private data away. It also has a scandal involving collecting private data too. A scandal that potentially any other Android app developer might also be involved in. Which means there’s probably a black market for this kind of data too. Because Google, like Facebook, apparently couldn’t resist making itself a data-broker. And now all this data is potentially floating around out there. It was a wildly irresponsible act on Google’s part to make that kind of data available under the “contacts” permissions in the Android operating system, but that’s how much of a priority Google made data collection when it designed that system. Presumably to encourage more app developers to make Android apps. Access to our data is literally part of the incentive structure. It’s really quite stunning. And quite analogous to what Facebook is in trouble for with Cambridge Analytica.
But at least those Facebook friend recommendation algorithms are probably very well powered, so there’s that.
We should probably get ready for a lot more stories like this: Facebook just issued a flurry of new updates to its data-sharing policies. Some of these changes include new restrictions on the data made available to app developers while other changes are focused on clarifying the user agreements that disclose what data is taken.
And there’s a new estimate from Facebook on the number of Facebook profiles grabbed by Cambridge Analytica’s app. It’s gone from 50 million to 87 million profiles:
“Facebook is facing its worst privacy scandal in years following allegations that a Trump-affiliated data mining firm, Cambridge Analytica, used ill-gotten data from millions of users to try to influence elections. The company said Wednesday that as many as 87 million people might have had their data accessed — an increase from the 50 million disclosed in published reports.”
50 million to now 87 million. It’s quite a jump. How high might it get when this is all over? We’ll see.
And beyond that update, Facebook also updated their data-collection disclosure policies. Now they’re actually mentioning things like the grabbing of call and text data off of your smartphone, which they apparently didn’t feel the need to tell people about before:
And note how Facebook’s update on how local privacy laws could affect its handling of “sensitive” data implies that the absence of those local laws means the same “sensitive” data isn’t going to be handled in a sensitive manner. So if you were hoping the big new EU data privacy rules were going to impact Facebook’s policies outside the EU, nope:
And that’s just some of the updates Facebook issued today. And while a number of these updates are pretty notable, perhaps the most notable part of this flurry of updates is that they’re updates that actually increase privacy protections, which is not how these updates have normally gone for Facebook in the past.
And now let’s take a look at one of the other disclosures Facebook made today: Remember how Facebook whistle-blower Sandy Parakilas speculated that a majority of Facebook users probably had their Facebook profile information scraped by app developers using exactly the same technique Cambridge Analytica used? Well, it looks like Facebook has very belatedly arrived at the same conclusion:
“Facebook said Wednesday that most of its 2 billion users likely have had their public profiles scraped by outsiders without the users’ explicit permission, dramatically raising the stakes in a privacy controversy that has dogged the company for weeks, spurred investigations in the United States and Europe, and sent the company’s stock price tumbling.”
So a billion or so people probably had their Facebook profile data sucked away by app developers. Facebook apparently just discovered this. And while it’s laughable to imagine that Facebook just suddenly discovered this now, recall how Sandy Parakilas also said executives had an “it’s best not to know” attitude about how this data was used by third-parties, so it’s possible that Facebook technically didn’t officially know this until now because they officially never looked before:
““Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped,” the company wrote in its blog post.”
LOL! They just discovered this and knew nothing about how their massive sharing of profile information with app developers might lead to a massive release of profile data. That’s their story and they’re sticking to it. For now.
And notice how it’s just casually acknowledged that “Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015,” and Facebook is announcing all these new restrictions on the data app developers, or even data brokers, can access. And yet Facebook is acting like this is all some sort of revelation:
And note how Cambridge Analytica whistle-blower Christopher Wylie has already tweeted out that the new 87 million estimate might not be high enough:
“Could be more tbh.” It’s a rather ominous tweet considering the context.
And don’t forget that the original count of people using the Cambridge Analytica app, ~270,000, hasn’t been updated. That’s still just 270,000 people. So this scandal is providing us a sense of just how many people were likely getting their profile information grabbed by app developers using the “Friends Permission” feature. When it was 50 million people in total, that came out to roughly 185 friends (50,000,000 / 270,000) getting their profiles grabbed for each person who actually downloaded the app. But if it’s 87 million people, that makes it ~322 friends (87,000,000 / 270,000) for each Cambridge Analytica app user on average.
Along those lines, it’s worth noting that the average number of friends Facebook users have is 338, while the median number of friends is 200, according to a 2014 Pew Research poll. So if that 87 million number keeps climbing, and therefore the assumed number of friends per user of the Cambridge Analytica app keeps climbing too, at some point we’re going to start getting into suspicious territory and have to ask whether the users of that app were unusually popular or whether Cambridge Analytica was getting data from more than just that app.
After all, for all we know Cambridge Analytica may have simply purchased a bunch of data on the Facebook profile black market, something else Sandy Parakilas warned about. So how high might that 87 million number get if Cambridge Analytica was also buying this information from other app developers? Who knows, although at this point “a billion profiles” can no longer be ruled out, thanks to Facebook’s very belated update today.
And the hits keep coming: Here’s an article with some more information on the disclosure Facebook made on Wednesday that “malicious actors” may have been using a couple of ‘features’ Facebook provides to scrape public profile information from Facebook accounts and associate that information with email addresses and phone numbers. This is separate from the data collection technique used by the Cambridge Analytica app, and thousands of other app developers, to grab the private profile information of app users and their friends.
One technique used by these “malicious actors” was to simply feed phone numbers and email addresses into a Facebook “search” box that would return the Facebook profile associated with that email or phone number. All the public information on that profile could then be collected and associated with that email/phone data. Users had the option of turning off the ability for others to find their profile using this method, but it was turned on by default and apparently few people turned it off.
The second technique involved an account recovery tool Facebook provided, which returned names, profile pictures and links to the public profiles themselves for anyone pretending to be a Facebook user who forgot how to access their account.
And according to Facebook, this was being done by actors obtaining email addresses and phone numbers on people on the Dark Web and then setting up scripts to automate this process for large numbers of emails and phone numbers, “with few Facebook users likely escaping the scam.” In other words, almost every Facebook user probably had their email and phone number associated with their Facebook account via this method. Also keep in mind that you don’t need to go to the Dark Web to buy lists of email addresses and phone numbers, so placing an emphasis on the “Dark Web” as the source of this information is likely part of Facebook’s ongoing attempt to ensure that this scandal doesn’t turn into an educational experience for the public on how widespread the data brokerage industry really is and how much information on people is legally commercially available. In other words, these “malicious actors” were probably operators in the commercial data brokerage market in many cases.
And as the article notes, pairing email and phone number information with the kind of information people made publicly available on their profiles is exactly the kind of information that identity thieves want to obtain as a starting point for stealing your identity.
The article also includes more information on just what kind of private profile information app developers like Cambridge Analytica were allowed to grab. Because it’s important to note that we don’t have clarity yet on what exactly app developers were allowed to grab from Facebook profiles. We’ve heard vague descriptions of what was available to the app developers, like Facebook’s ‘profile’ of you (presumably, what they’ve learned or inferred about you) and the list of what you “liked”. But it hasn’t been clear if app developers also had access to literally all of your private Facebook posts. Well, based on the following article, it does indeed sound like app developers potentially had access to literally all of your private Facebook posts. And a lot of that data is probably available on the Dark Web and other black markets too at this point because why not? Facebook made it available and it’s valuable, so why wouldn’t we expect it to be available for sale?
And the article makes one more stunning revelation regarding the permissions app developers had to scrape this private information: Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group.
So while Facebook hasn’t yet admitted that it made almost all the private information on people’s Facebook profiles available to identity thieves and any other bad actors for years with little to no oversight, and that this data is probably floating around on the Dark Web for sale, it is getting much closer to admitting this given its latest round of admissions:
“But the abuse of Facebook’s search tools — now disabled — happened far more broadly and over the course of several years, with few Facebook users likely escaping the scam, company officials acknowledged.”
Few Facebook users likely escaped the “scam” of using a feature Facebook turned on by default, which was an obvious, massive privacy violation. A “scam” that was also far less of a privacy violation than what Facebook made available to app developers, but still a scam that likely impacted almost all Facebook users. And the more information people made available on their public profiles, the more these “scammers” could collect about them:
And then there was Facebook’s account recovery function that Facebook also made easy to exploit:
And, again, while this kind of information wasn’t necessarily as extensive as the private information Facebook made available to app developers, it was still a very valuable starter kit for identity theft:
And, of course, that ‘identity theft starter kit’ data — phone numbers and emails associated with real names and other publicly available information — could potentially be combined with the private information made available to app developers. Information that apparently included “people’s relationship status, calendar events, private Facebook posts, and much more data”:
So if “people’s relationship status, calendar events, private Facebook posts, and much more data” was made available to app developers, it raises the question: what wasn’t made available?
It’s all a reminder that there is indeed a “malicious actor” who took possession of all your private data and its name is Facebook.
Here’s a series of articles that serve as a reminder that Facebook isn’t just an ever-growing vault of personal data profiles on almost everyone (albeit a very leaky data vault). It’s also a medium through which non-Facebook ever-growing vaults of personal data, in particular those of data brokerage giants like Acxiom, can be merged with Facebook’s vault, ostensibly for the purpose of making Facebook’s targeted ads even more targeted.
This third-party sharing is done through Facebook’s “Partner Categories” program: Facebook advertisers have the option of filtering their Facebook ad targeting based on, for instance, the group of people who purchased cereal, using data from Acxiom’s consumer spending database. As such, the data broker giants that are potentially Facebook’s biggest competitors become Facebook’s biggest partners.
Not surprisingly, merging Facebook’s extensive personal data profiles with the already very extensive personal data profiles held by the data brokerage industry raises a number of privacy concerns. Privacy concerns that are hitting a peak in the wake of the Cambridge Analytica scandal. So, also not surprisingly, Facebook just announced the end of the Partner Categories program over the next six months as part of its post-Cambridge Analytica public relations campaign:
“More specifically, Facebook says it will stop using data from third-party data aggregators — companies like Experian and Acxiom — to help supplement its own data set for ad targeting.”
As we can see, Facebook isn’t just promising to cut off the personal data leaking out of its platforms to address privacy concerns. It’s also promising to cut off some of the data flowing into its platforms. Data from the data brokerage giants flowing into Facebook in exchange for some of the ad money when that data results in a sale:
And while the public explanation for this move is that this is being done to address privacy concerns, there’s also the suspicion that Facebook is willing to make this move simply because Facebook doesn’t necessarily need this third-party data to make its ads more effective. So while cutting out this data-brokerage data is a potential loss for Facebook, that loss might be outweighed by the growing headache of privacy concerns for Facebook that comes from directly incorporating third-party data into its ad algorithms when it can’t control whether or not these third-party data brokerages obtained their own data sets in an ethical manner. In other words, the headache isn’t worth the extra profit this data-sharing arrangement yields:
So is it the case that Facebook is using this Cambridge Analytica scandal as an excuse to cut these data brokers that Facebook doesn’t actually need out of the loop? Well, as the following article notes, it’s not like Facebook doesn’t have the option of buying that data from the data brokers themselves and just incorporating the data into their internal ad targeting models. But Facebook always had that option and still chose to go ahead with this Partner Categories program, so it’s presumably the case that paying outright for that brokerage data is more expensive than setting up the Partner Categories program and giving the brokerages a cut of the ad sales.
As the following article also notes, advertisers will still be able to get that data brokerage information for the purpose of further targeting Facebook users. How so? Because notice the second data set in the above article that Facebook uses for targeting ads: data sets from the advertisers themselves. Like lists of email addresses of the people they want to target. It’s the same Custom Audiences tool that was used extensively by the Trump campaign for its “A/B testing on steroids” psychological profiling techniques. So there’s nothing stopping advertisers from getting that list of email addresses from a data broker and then feeding that into Facebook, effectively leaving the same arrangement in place but in a less direct manner. But it’s less convenient and presumably less profitable if advertisers have to do this themselves. It’s a reminder that partnering means more profits in the business Facebook is in.
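As a rough sketch of how low-friction that indirect route is: audience-matching tools like Custom Audiences work on hashed identifiers, so an advertiser holding a broker-supplied email list just normalizes and hashes the addresses before handing them over, and the platform matches the digests against hashes of its own users’ emails. The list below is hypothetical, and SHA-256 with trim-and-lowercase normalization is the commonly documented convention for this kind of matching, not a claim about any specific upload code:

```kotlin
import java.security.MessageDigest

// Normalize an email the way audience-matching systems typically expect
// (trimmed, lowercased) and hex-encode its SHA-256 digest.
fun normalizeAndHash(email: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(email.trim().lowercase().toByteArray())
        .joinToString("") { "%02x".format(it) }

fun main() {
    // A hypothetical list an advertiser might have bought from a data broker.
    val brokerList = listOf("jane.doe@example.com", " JOHN@EXAMPLE.COM")

    // These digests are what get uploaded; the platform computes the same
    // digests for its own users' emails and matches on equality.
    brokerList.map(::normalizeAndHash).forEach(::println)
}
```

The broker never has to touch Facebook directly, and Facebook never has to formally ‘partner’ with anyone. Matching hashed identifiers is enough to quietly reconstruct the old arrangement.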
Finally, as digital privacy expert Frank Pasquale also points out in the following article, there’s no real reason to assume Facebook is actually going to stand by this pledge to shut down the Partner Categories program over the next six months. It might just quietly start it up again in some other form or simply reverse this decision after the public’s attention shifts away.
So while there are valid questions as to why Facebook is making this policy change, there are unfortunately also valid questions over whether or not this policy change will make any difference and whether or not Facebook will even make this policy change at all:
“Facebook said late Wednesday that it would stop data brokers from helping advertisers target people with ads, severing one of the key methods marketers used to link users’ Facebook data about their friends and lifestyle with their offline data about their families, finances and health.”
Yep, one of the key methods marketers used to link Facebook data with all the offline data that these data brokerages were able to collect just might get severed. It’s potentially a big deal for Facebook and the advertising industry. Or potentially not. That’s part of what makes this such a fascinating move by Facebook: It’s potentially quite significant and potentially inconsequential:
And note how this cooperation with these brokerages was only growing during the same period that Facebook cut off the “friends permissions” privacy loophole exploited by Cambridge Analytica’s app and thousands of other apps in 2015. It’s a reminder that even when Facebook is getting better in some ways, it’s probably getting worse in others:
And while some of the data gathered by the data brokerages inevitably overlaps with what Facebook also gathers on people, there are quite a few categories of ‘offline’ data these data brokers systematically gather that Facebook can’t gather without seeming super extra creepy. Data brokers gather data from places like voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, and car dealership records. Imagine if Facebook directly gathered that kind of offline information about everyone instead of just buying it from the brokerages or setting up arrangements like the Partner Categories program. Imagine how incredibly creepy it would be if Facebook had an ‘offline data collection’ division. It’s a reminder that Facebook and the data brokers really are engaged in an ‘online’/‘offline’ data gathering and aggregation joint effort. “Partner Categories” is an appropriate name because it’s a real partnership, and it’s important to both parties because it would be a bigger PR nightmare if Facebook had to collect all this offline data itself:
And, of course, the Custom Audiences tool that lets advertisers feed in lists of things like email address to target specific audiences — used extensively by the 2016 Trump campaign — might make the decision to end the Partner Categories program moot:
And as Frank Pasquale points out, we also don’t know enough about what Facebook knows about us to know how much of an impact ending the Partner Categories program will have on the privacy violations involved in Facebook’s whole business model. It’s entirely possible this change will make fusing data broker data with Facebook data less convenient and less profitable, but still just as privacy violating, both because the present day setup can be replicated indirectly (by Facebook advertisers coordinating with the data brokers separately) and because Facebook might know almost everything the data brokers know just from its own data collection methods. In other words, this could be largely cosmetic. And, as Pasquale also pointed out, Facebook might just change its mind and not end the program once public attention wanes:
So is this announced policy change going to happen? Will it matter if it does? It’s a pretty significant question, and not one that’s easy to answer given that Facebook’s algorithms are largely a black box.
That said, Josh Marshall might have a significant data point for us with regards to how important the current third-party data sharing arrangement with the data brokerage giants really is to the performance of Facebook’s ad targeting: starting in early March, advertisers started noticing a significant drop-off in the targeting quality of Facebook’s ads. Facebook’s ad targeting quality just got worse for some reason. And this was early March, which is before the Cambridge Analytica story hit in mid-March but possibly after Facebook knew the Cambridge Analytica story was coming. So the timing of this observation is interesting, and Marshall has a hunch: Facebook was already experimenting with how its internal advertising algorithm would operate without direct access to the data brokerages, and potentially without access to a lot of other data sources, in anticipation of the new EU regulations and new regulations from the US Congress. In other words, Facebook already saw the writing on the wall before the recent wave of Cambridge Analytica revelations went public and has already started the shift to an in-house ad targeting algorithm. And it shows.
Now, it’s possible that Josh Marshall could be correct that Facebook has already started implementing an internal-only ad targeting algorithm and that it’s noticeably worse now, but that it will get better in the long run as Facebook improves its third-party-limited algorithm and the advertisers and brokers adapt to a new, less direct data-sharing arrangement. Maybe everyone will adapt and get up to par. Time will tell.
But if not, and if the loss of these data sharing arrangements makes Facebook’s ads less effective in the long run — maybe because it’s much more efficient to directly funnel the broker data and a whole bunch of other third-party data into Facebook, and the indirect methods can’t replicate this arrangement — then it’s worth noting that this downgrade in Facebook’s ad targeting quality would reflect a real form of privacy enhancement and generally should be cheered. It’s also a statement on the public utility of the overall data brokerage industry, which is dedicated to collecting, aggregating, and selling personal data profiles. There’s a lot of negative utility in this industry, and this wave of Facebook scandals is just one facet of it. So if Marshall’s guess is correct and this observable drop-off in Facebook ad quality reflects a decision by Facebook to preemptively take third-party data out of its ad targeting algorithms in anticipation of the new EU data privacy laws and future congressional action in the US, let’s hope that drop-off is sustained for our privacy’s sake:
“For more than a year, Facebook has faced a rolling public relations debacle. Part of this is the American public’s shifting attitudes toward Big Tech and platforms in general. But the driving problem has been the way the platform was tied up with and perhaps implicated in Russia’s attempt to influence the 2016 presidential election. Users’ trust in the platform has been shaken, politicians are threatening scrutiny and possible regulation, and there’s even a campaign to get people to delete their Facebook accounts. All of this is widely known and we hear more about it every day. But most users, most people in tech and also Wall Street (which is the source of Facebook’s gargantuan valuation) don’t yet get the full picture. We know about Facebook’s reputational crisis. But people aren’t fully internalizing that the current crisis poses a potentially dire threat to Facebook’s core business model, its core advertising business.”
As Josh Marshall points out, if Facebook really does have to turn off the third-party data spigot, the question of what this will actually do to the quality of its ad targeting is a massive question. The importance of the direct third-party data sharing arrangement is one of the big questions swirling around Facebook for both Facebook’s investors (from a price per share standpoint) and the public (from a public privacy standpoint). The fact that the EU’s new data privacy rules are hitting Facebook in Europe right when the Cambridge Analytica scandal starts playing out in the US and threatens to snowball into a larger scandal about Facebook’s business model in general just makes it a bigger question for Facebook.
And it’s a crisis for Facebook that will be numerically reflected in one key measure pointed out by Marshall: the number of advertisements that need to be shown to trigger a sale on Facebook compared to other platforms. It’s a 5-to-1 ratio for Facebook vs. a 30-to-1 ratio for other digital platforms and 100-to-1 for traditional ads. Facebook really is much better at targeting its ads than even its digital peers. So when Facebook gets worse at targeting its ads, that does amount to real privacy gains, because it’s one of the biggest and most cutting-edge ad targeting platforms around. This is why Facebook is worth over $450 billion:
“If old-fashioned advertising shows my advertisement to 100 people for every actual buyer and other digital platforms show it to 30 people and Facebook shows it to 5 people, Facebook’s ads are just worth a lot more”
And that’s why this is a pretty big story if there’s a real drop in Facebook’s ad targeting quality. Facebook is wildly ahead of almost all of its competition. Only Google and governments are going to compete with what Facebook knows about us all. So if Facebook effectively knows less about us, as reflected in the drop in ad targeting observed starting in early March, that reflects a real de facto increase in public privacy. And it’s also a big story from a business standpoint because it’s not just about Facebook, it’s also about the entire data brokerage industry. There’s a large part of the modern US economy potentially tied into this Facebook scandal. A scandal that now extends beyond the Cambridge Analytica app situation and has led to Facebook declaring the phaseout of its Partner Categories program. Is this ushering in a sea change in the data brokerage industry? If so, that’s big.
Facebook was going to have a sea change in how it did business in the EU thanks to the new data privacy laws, but it’s this Cambridge Analytica scandal that appears to be driving the likelihood of a sea change in the US market too. And that’s part of why it’s notable if Facebook really did start rejiggering its algorithms without that third-party data in early March, potentially in anticipation of this flurry of bad press, and then the ad targeting suddenly got worse. Because if it turns out that the loss of the third-party data makes Facebook’s ad targeting worse, we should note that. And ask ourselves whether or not making Facebook even worse at targeting ads would be desirable from a public privacy perspective. The more Facebook sucks at ads, the better Facebook is for everyone from a privacy perspective. It’s one of the fundamental contradictions of Facebook’s business model that this Cambridge Analytica scandal risks exposing to the public:
And as Josh Marshall points out, the impact of the loss of this third-party data on Facebook’s ad targeting algorithms is largely speculative, because we know so little about what Facebook knows about us without those third-party data sources. Facebook is a black box:
But we might get an answer to the question of whether or not Facebook needs that third-party data to achieve the ad targeting proficiency it has today, because of those new EU regulations and the real possibility of some sort of congressional action as a result of the Cambridge Analytica scandal. And that, of course, is why Josh Marshall suspects that what we’re seeing in the reported drop in Facebook’s ad targeting is Facebook already preparing for coming regulation:
And if Josh Marshall’s hunch is correct and Facebook really did start rejiggering its ad targeting algorithms in anticipation of coming congressional regulation — which points towards an anticipation by Facebook of a very negative public response to the yet-to-be-released Cambridge Analytica story — we have to wonder just how many other privacy violating schemes Facebook has been up to with other third-parties beyond the data brokerage giants like Acxiom or Experian. What other classes of third-party providers might Facebook be incorporating into its algorithms?
Well, here’s a chilling example of the kind of third-party data-sharing partnership Facebook might be interested in: hospital record metadata. Like what diseases people have, the medications they’re on, and when they visited the hospital. From several major hospitals, including Stanford Medical School’s.
Facebook says it would be for research purposes only by the medical community, but Facebook would have been able to deanonymize the data. And it’s kind of obscene, because Facebook says the plan for protecting everyone’s privacy is to use “hashing” — where patients would be assigned an anonymous number based on a mathematical algorithm that takes something like the patient’s name and turns it into a seemingly random number — and that only the medical research community will have access to the anonymized data, so no one’s privacy is at risk. But using hashing to match the Facebook data set and the hospital data set means Facebook can match up the hospital data with its Facebook users. Facebook is trying to get deanonymizable patient health data from hospitals. It’s a disturbing example of the kind of third-party data that Facebook is interested in.
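Here’s a minimal sketch of why that “hashing” pitch fails against Facebook specifically, using hypothetical names and records, with SHA-256 standing in for whatever hash function the real proposal would use. Any party that already holds the underlying identifiers can simply recompute the hashes and join the “anonymous” records back to real people:

```kotlin
import java.security.MessageDigest

// Turn an identifier into a hex-encoded SHA-256 digest: the "seemingly
// random number" that hashing schemes use as an anonymous join key.
fun hashId(id: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(id.trim().lowercase().toByteArray())
        .joinToString("") { "%02x".format(it) }

fun main() {
    // Hospital side: shares records keyed only by hash(patient name).
    // To the medical researchers, these keys really are opaque.
    val hospitalRecords = mapOf(
        hashId("Jane Doe") to "heart disease, 2 medications, 3 hospital trips"
    )

    // Facebook side: it already knows its users' names, so it can compute
    // the exact same hashes and re-attach the "anonymous" medical records.
    val facebookProfiles = mapOf(
        "Jane Doe" to "age 50, married, 3 kids"
    )
    for ((name, profile) in facebookProfiles) {
        hospitalRecords[hashId(name)]?.let { medical ->
            println("Re-identified $name: $profile + $medical")
        }
    }
}
```

The hashed keys only look random to researchers who don’t hold the original names. Facebook holds the original names. That’s the whole problem.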
And there’s no real reason to believe Facebook wouldn’t wildly abuse the data, probably turning the patients of those hospitals into focus groups for algorithmic testing, with their medical records used to pitch ads. Which will probably freak those people out. Facebook + hospital data = yikes.
And this plan was being pursued last month. The Cambridge Analytica scandal disrupted active talks. The plan was “put on pause” by Facebook last week in response to the Cambridge Analytica outrage. Still, that’s just “on pause”. So it sounds like the plan is still “on” and we should expect a continued push into the medical record space by Facebook.
Facebook’s pitch was to combine health system data on patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with Facebook’s data on the person (such as: user is age 50, married with 3 kids, English isn’t a primary language, actively engages with the community by sending a lot of messages). The research project would then try to use this combined information to improve patient care in some way, with an initial focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.
In other words, Facebook was setting up a research project dedicated to developing hospital decision-making support that utilizes Facebook’s pool of personalized data on people. Which is a path to plugging Facebook into the hospital system. Yikes:
“Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.”
Patient data from hospitals. It’s Facebook’s brave new third-party data frontier. Currently under the auspices of medical research, but it’s research for the purpose of showing Facebook’s utility in medical decision-support, which is to say research to demonstrate the utility of sharing patient information with Facebook. That was the general pitch Facebook was making to several major US hospitals, including Stanford. And it’s a plan that, according to Facebook, was being pursued last month and has merely been “put on pause” in the wake of the Cambridge Analytica scandal:
The way Facebook pitched it, the anonymized data from Facebook and the anonymized data from the hospitals would be combined and used for medical community research (research into Facebook as a patient care decision-support partner):
But what Facebook doesn’t acknowledge in that pitch is that the technique it’s proposing for anonymizing the data only anonymizes it to everyone except the hospital and Facebook. Facebook can easily deanonymize the hospital data once it gets its hands on it. The medical researchers aren’t the privacy threat. The data actually is anonymized for them, because they don’t know the patients or the Facebook profiles. They’re just hashed ids. But Facebook sure as hell is a privacy threat, because it’s Facebook with its hands on the deanonymizable data:
And note how the issue of patient consent didn’t come up in these early discussions, suggesting that Facebook is trying to work out a situation where people don’t know their patient record data was handed over to Facebook:
And, of course, it was Facebook’s mad science “Building 8” R&D group that was behind this proposal. The same group behind projects like mind-reading brain-to-computer interface technology (so Facebook can literally data mine your brain activity). And the same R&D group that was until recently led by former DARPA chief Regina Dugan, who left last year with a cryptic message about stepping away to be “purposeful about what’s next, thoughtful about new ways to contribute in times of disruption.” This is next-generation Facebook stuff:
It’s a reminder that Facebook’s R&D teams are probably working on all sorts of new ways to tap into third-party sources. Hospitals are merely one particularly data-rich example of the problem.
And if Facebook really does cut third-party data brokers out of its algorithms, let’s not forget that Facebook is probably going to use that as an excuse and imperative to reach out to all sorts of niche third-party data providers for direct access. Like hospitals. Don’t forget that the above plan was merely “put on pause”. They want to do more stuff like this going forward. And why not, if they can get hospitals to give this kind of data out. And any other kind of institution they can convince to hand over our data. This is how Facebook can go “offline”: with direct data sharing services, like patient care decision-making support services, one sector of institutions at a time. Hospitals are just one example.
So given that Facebook faces potential congressional action and new regulations, it’s going to be important to keep in mind that those regulations will have to cover more than just the data brokerage giants like Experian. Because Facebook is interested in what you tell your doctor too. And presumably lots of other ‘services’ where it fuses its data about you with another data source for combined decision-making support. The more Facebook promises to cut out third-party data, the more Facebook is going to try to directly collect “offline” data by fusing itself with other facets of our lives. It’s really quite disturbing.
And who knows who else in the data brokerage industry might try to follow Facebook’s lead. Will Google also want to get into the patient care decision-support market? Third-party data-brokerage decision-making support could potentially be applied to a lot more than just the medical sector. It’s a creepy new profit frontier.
Beyond that, how else might Facebook attempt to replace the “offline” third-party data it’s pledging to phase out over the next six months? We’ll see, but we can be sure that Facebook is working on something.
Here’s a reminder that the proposal to combine Facebook data with patient hospital data — ostensibly for patient care decision-support purposes but also likely so Facebook can get its hands on patient medical record information — isn’t the only project Facebook has put ‘on pause’ (but not canceled) in the wake of the Cambridge Analytica scandal. For example, there’s a new hardware product for your home that Facebook is planning on rolling out later this year.
It’s a “smart speaker” like the kind Amazon and Google already have on sale. A smart speaker that will sit in your home, listen to everything, answer questions, and schedule things. Potentially with cameras. Your personal home assistant. That’s the market Facebook is getting into later this year. But thanks to the public relations nightmare Facebook is experiencing at the moment, the announcement of this new smart speaker at its developers conference in May has been cancelled. It sounds like the rollout is still planned for this fall, though. So that smart speaker is a useful reminder to the US public and regulators of the future direction Facebook is planning on heading: in-home “offline” data collection using internet-connected smart devices:
“Facebook Inc. has decided not to unveil new home products at its major developer conference in May, in part because the public is currently so outraged about the social network’s data-privacy practices, according to people familiar with the matter.”
Yeah, it’s understandable that public outrage over years of deceptive and systemic mass privacy violations might complicate the roll out of your new in-home “smart speakers” which will be listening to everything happening in your home and sending that information back to Facebook. A pause on that grand unveiling does seem prudent.
And yet Facebook still plans to actually launch its new smart speakers later this year:
And that planned roll out of these smart speakers later this year is just one element of Facebook’s plan to “become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers”:
“The devices are part of Facebook’s plan to become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers.”
Yep, Facebook has all sorts of plans to become more intimately involved with your everyday life. Using artificial intelligence. And smart speakers. And no privacy concerns, of course.
And in fairness, this move to sell consumer devices that monitor you for the purpose of offering useful services with the data they collect (and for selling you ads and profiling you) is merely following in the footsteps of companies like Google and Amazon with their wildly popular smart speakers. As the following article notes, a recent Gallup poll found that 22 percent of Americans use “home personal assistants” like Google Home or Amazon Echo. That is a huge percentage of the American public that’s already handing out exactly the kind of data Facebook is trying to collect with its new smart speaker.
And as the following article also notes, if the creepy patents Google and Amazon have already filed are any indication of what we can expect from Facebook, we should expect Facebook to work on things like incorporating the smart speakers into smart home AI systems for monitoring children, with whisper-detection capabilities and the ability to issue verbal commands at the kids. The smart home would replace the television as the technological parent of today’s kids, and one of the mega-corporations selling this technology would get audio and visual access to your home. Yes, the existing Google and Amazon patents would incorporate visual data too, since these smart speakers tend to have cameras.
And one patent involved a scenario where the camera on a smart speaker recognizes a t‑shirt on the floor, recognizes a picture of Will Smith on the shirt, ties that to a database of that person’s browsing history to see if they looked up Will Smith content online, and then serves up targeted ads if it finds a Will Smith hit. That’s a real patent from Google, and that’s the kind of Orwellian patent race that Facebook is quietly getting ready to join later this year:
“While the ad riffed on what Alexa can say to users, the more intriguing question may be what she and other digital assistants can hear — especially as more people bring smart speakers into their homes.”
It’s one of the conundrums of the smart speaker business model: it’s obvious these smart speaker manufacturers would love to just collect all the information they can about what people are saying and doing, but they need to maintain the pretense of not doing that in order to get people to buy their devices. So it’s no surprise that Google and Amazon routinely make it clear that their devices are only recording information after they’ve been activated by the users. But as these patents make clear, there are all sorts of home life surveillance applications that these companies have in mind. Like the smart home child monitoring system, with whisper detection capabilities and mischief-detecting AI capabilities:
“One application details how audio monitoring could help detect that a child is engaging in “mischief” at home by first using speech patterns and pitch to identify a child’s presence, one filing said. A device could then try to sense movement while listening for whispers or silence, and even program a smart speaker to “provide a verbal warning.””
Listening for the mischievous whispers of children and issuing a verbal warning. Those are the kinds of capabilities companies like Google, Amazon, and now Facebook are going to be investing in. And it will probably be very popular, because a smart home system that literally watches the kids would be a very handy tool for parents. But it’s going to come at the cost of opening up our homes to monitoring by one of these data giants. And that’s insane, right?
Another patent noted how the smart speakers could detect medical conditions from your voice, like coughing, sneezing, and breathing rate. And that’s just an example of the kind of personal data these devices are clearly capable of gathering, and they’re only going to get better at it:
“The same application outlines how a device could “recognize a T‑shirt on a floor of the user’s closet” bearing Will Smith’s face and combine that with a browser history that shows searches for Mr. Smith “to provide a movie recommendation that displays, ‘You seem to like Will Smith. His new movie is playing in a theater near you.’””
The smart speaker camera is going to interface things it sees in your home with your browser history. For ad targeting. That’s a patent.
It’s why Consumer Watchdog’s Jamie Court’s warnings that these consumer home devices are really just home life spyware should be heeded. Because it’s pretty obvious that the plan is to turn these things into home activity monitoring devices. And with 22 percent of Americans saying they use a “home personal assistant” in a recent Gallup poll, that really does make the coming era of smart device home monitoring a public privacy nightmare:
Of course, both Google and Amazon assure us that their devices are only recording audio after they’re triggered. And it’s only being used to improve the user experience and make it more personalized:
And while Google assures us those voice recordings will only be used to personalize the experience, Google’s user agreement includes the possibility of sending transcripts of what people say to third-party service providers. And it “generally” won’t send audio samples to those third-party providers. It’s an example of how little audio and visual snippets of people’s home life are becoming the new “mouse click” of consumer data collected and sold in exchange for a digital service:
And it’s not like these patents are necessarily future privacy nightmares. They’re potentially present privacy nightmares if it’s the case that these devices are actually just collecting data all the time in secret. And in a number of documented cases that’s been exactly what happened, including a murder case partially solved by an Amazon Echo with a propensity to start recording randomly:
And that’s all why better consumer regulation in this area really is called for, because there’s no way consumers can realistically navigate this technological landscape:
And that’s one of the big questions that really should be asked in the wake of the Cambridge Analytica scandal: does the US need something like a Food and Drug Administration for data privacy? Something far more substantial than the regulatory infrastructure that exists today, dedicated to ensuring transparency of data collection practices? It seems like the answer is obviously yes. And if the Cambridge Analytica scandal isn’t evidence enough, those Orwellian patents should suffice.
And as the Cambridge Analytica scandal also reminds us, we can either wait for the data abuses to happen and only belatedly deal with the problem or we can deal with it proactively. And dealing with it proactively realistically involves something like an FDA for data privacy.
But as we also just saw with those creepy patents, especially the child monitoring/scolding patent, consumers have much more than data privacy concerns with the world of smart devices Google and Facebook and Amazon have in mind. That future is going to involve devices that are literally raising the kids. Move over television, it’s parenting brought to you by smart home AIs and Silicon Valley.
And let’s also not forget one of the other lessons that we can take from the Cambridge Analytica scandal: the data collected by these smart devices isn’t just going to be collected by Google and Facebook and Amazon. Some of that data is going to be collected by all the third-party app developers too. Home life, brought to you by Google/Facebook/Amazon. That’s going to be a thing.
At the same time it’s undeniable that there will be very positive applications for this kind of technology. And that’s why it’s such a shame companies with the track record of Facebook and Google and Amazon are the ones leading this kind of technological revolution: like much technology, the consumer home smart device technology is heavily reliant on trust in the manufacturer and trust that the manufacturer won’t screw things up and turn their device into a privacy nightmare. That’s not the kind of situation where you want Google, Facebook, and Amazon leading the way.
So that’s all something to keep in mind when Facebook doesn’t talk about its upcoming smart speakers at its annual developers conference next month.
Here’s a fascinating angle to the Cambridge Analytica scandal that involves an Eastern Ukrainian politician with pro-EU leanings and ties to Yulia Tymoshenko and the Azov Battalion:
It turns out Cambridge Analytica outsourced the production of its “Ripon” psychological profiling software to a separate company, AggregateIQ (AIQ). AIQ was founded by Cambridge Analytica co-founder/whistle-blower Christopher Wylie, so it’s basically a subsidiary of Cambridge Analytica. But they were technically separate companies and it turns out that AIQ could end up playing a big role in an investigation into whether or not UK election laws were violated by the “Vote Leave” camp during the lead up to the Brexit vote.
It looks like the “Vote Leave” camp basically spent more than it legally could, secretly, using AIQ as the vehicle for doing so. Here’s how it worked: there was the official “leave” political campaign, but there were also third-party pro-leave campaigns. One of those was Leave.EU. In 2016, Robert Mercer offered Leave.EU the services of Cambridge Analytica for free, and Leave.EU relied on Cambridge Analytica’s services for its voter influence campaign.
The official Vote Leave campaign, on the other hand, relied on AIQ for its data analytics services. Vote Leave eventually paid AIQ roughly 40 percent of its £7 million campaign budget. Here’s where the illegality came in: Vote Leave also ended up gathering more cash than British law legally allowed it to spend. Vote Leave could legally donate that cash to other campaigns, but it couldn’t then coordinate with those campaigns. And that’s exactly what it looks like Vote Leave did. About a week before the EU referendum, Vote Leave inexplicably donated £625,000 to Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. Grimes then immediately gave a substantial amount of the cash he received to AIQ. Vote Leave also donated £100,000 to another Leave campaign called Veterans for Britain, which then paid AIQ precisely that amount. So Vote Leave was basically using these small ‘leave’ groups as campaign money laundering vehicles, with AIQ as the final destination of the money.
That’s all why AIQ is now the focus of British investigators. AIQ’s role in this came to light in part from thousands of pages of code that was discovered by a cybersecurity researcher at UpGuard on the web page of a developer named Ali Yassine who worked for SCL Group. Within the code are notes that show SCL had requested that code be turned over by AIQ’s lead developer, Koji Hamid Pourseyed.
AIQ’s contract with SCL stipulates that SCL is the sole owner of “Ripon”, Cambridge Analytica’s campaign platform. The documents also include an internal wiki where AIQ developers discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” It’s a reminder that the story of Cambridge Analytica isn’t just a story about the Trump campaign or the Brexit vote. It’s also about the Republican Party’s political analytics in general.
Also included in the discovered AIQ files were notes related to active projects for Cruz, Abbott, and a Ukrainian oligarch, Sergei Taruta.
So who is Sergei Taruta? Well, he’s a Ukrainian billionaire and co-founder of the Industrial Union of Donbass, one of the largest companies in Ukraine. He was appointed governor of the Donetsk Oblast in Eastern Ukraine by Petro Poroshenko in March of 2014 before being fired in October of 2014.
Taruta went on to get elected to parliament, where he remains today. He recently co-founded the “Osnova” political party, which describes itself as populist and a promoter of “liberal conservatism” (presumably “liberal” in the libertarian sense). It’s suspected by some that Rinat Akhmetov, Ukraine’s wealthiest oligarch and another Eastern Ukrainian who straddles the line between backing the Kiev government and maintaining friendly ties with the pro-Russian segments of Eastern Ukraine, is also one of the party backers. Akhmetov was a significant backer of Yanukovych’s Party of Regions and is a dominant figure in the Opposition Bloc today. It was Akhmetov who initially hired Paul Manafort back in 2005 to act as a political consultant.
It’s reportedly pretty clear from the politicians who have already declared they are going to join it that Taruta’s Osnova party is designed to splinter away ex-supporters of Viktor Yanukovych’s Party of Regions. And yet as a politician Taruta is characterized as having never really tried to cozy up to the pro-Russian side, and he has a history of supporting pro-EU politicians. In 2006 he supported Viktor Yuschenko over Viktor Yanukovych. In 2010 he backed Yulia Tymoshenko over Viktor Yanukovych.
So Taruta is a pro-EU Eastern Ukrainian politician, which is notable because he’s not the only pro-EU Eastern Ukrainian politician to be involved with entities and figures in the #TrumpRussia orbit. Don’t forget about Andreii Artemenko, the Ukrainian politician who was involved with that ‘peace plan’ proposal with Michael Cohen and Felix Sater — a proposal that may have been part of a broader offer made to Russia over Ukraine as well as Syria and Iran — and how Artemenko was a pro-EU member of the far right “Radical Party” and also has ties to Right Sector. Artemenko headed up the Kiev department of Yulia Tymoshenko’s Batkivshchyna party back in 2006 and served in a coalition headed by Tymoshenko.
Also recall that the figure who appears to have arranged the initial contact between Andreii Artemenko and Michael Cohen and Felix Sater was Alexander Oronov, the father-in-law of Michael Cohen’s brother. And Oronov himself co-owned an ethanol plant with Viktor Topolov, another Ukrainian oligarch who was Viktor Yuschenko’s coal minister and who became an assassination target of Semion Mogilevych’s mafia organization. One of Topolov’s partners who was also targeted by Mogilevych, Slava Konstantinovsky, ended up forming and joining one of the “volunteer battalions” fighting the separatists in the East.
So now we learn that AIQ (so, basically Cambridge Analytica) is doing some sort of work for Sergei Taruta, putting another Eastern Ukrainian oligarch politician with pro-EU leanings in the orbit of this #TrumpRussia scandal.
So what kind of work did AIQ do for Taruta? That’s unclear, though it seems reasonable to assume it’s work involving Taruta’s new party in Ukraine and its attempts to splinter off former Party of Regions voters.
But as we’re also going to see, Sergei Taruta has been doing some lobbying work in Washington DC. Rather curious lobbying work: it turns out Taruta was at the center of a bizarre ‘congressional hearing’ that took place in the US Capitol last September. This hearing focused on corruption allegations Taruta has been promoting for over a year against the National Bank of Ukraine, the country’s central bank.
There were two Ukrainian television stations covering the event and pretending it was a real congressional hearing. Former CIA director James Woolsey, who was briefly part of the Trump campaign, was also at the event, along with former Republican House member Connie Mack, who is now a lobbyist. Mack was basically pretending to speak on behalf of the US Congress, expressing outrage over Taruta’s corruption allegations for the Ukrainian television audiences while declaring his resolve to investigate it all. Rep. Ron Estes, a freshman Republican, booked the room in the US Capitol for Mack and the lobbying firm. Estes’s office later said it won’t happen again.
And there’s another twist to this strange attack on the National Bank of Ukraine: according to Vox Ukraine, a number of the criticisms Taruta brings against the bank are based on distortions and half-truths. In other words, it doesn’t appear to be a genuine anti-corruption campaign. So what is Taruta’s motivation? Well, it’s notable that his criticism of the National Bank of Ukraine extends back to the actions of its previous chair, Valeriya Gontareva (Hontareva). Gontareva was appointed chairman of the bank in June of 2014, and one of her first big moves was the government takeover of Ukraine’s biggest commercial bank, Privatbank. Privatbank was co-founded by Ihor Kolomoisky, another Eastern Ukrainian oligarch.
Ihor Kolomoisky was appointed governor of the Eastern oblast of Dnipropetrovsk at the same time Taruta was appointed governor of Donetsk. Kolomoisky has been supporting the Kiev government in the civil war by financing a number of the volunteer battalions, including directly creating the large private Dnipro Battalion. As we’ll see, both Kolomoisky and Taruta reportedly supported the neo-Nazi Azov Battalion, according to a 2015 Reuters report. In other words, Kolomoisky is an Eastern Ukrainian oligarch with ties to the far right, kind of like Andreii Artemenko.
Kolomoisky wasn’t happy about the takeover of Privatbank. When Gontareva presided over the bank’s nationalization, its accounts were missing more than $5 billion in large part because the bank lent so much money to people with connections to Kolomoisky. After the bank takeover, Gontareva received numerous threats. On April 10, 2017, she announced at a press conference that she was resigning from her post.
So it looks like Sergei Taruta might be waging an international PR battle against the National Bank of Ukraine as a counter-move on behalf of Ihor Kolomoisky and the Privatbank investors.
And then there’s the person who actually organized this fake congressional hearing. A little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt. Motkin is the founder and president of StrategEast, a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.”
That’s all some new context to factor into the analysis of Cambridge Analytica and the forces it was working for: one of its clients is a pro-EU Eastern Ukrainian oligarch who just set up a political party designed to appeal to former Yanukovych supporters.
Ok, so first, let’s look at the story of Cambridge Analytica and AggregateIQ (AIQ), the Cambridge Analytica offshoot that both developed the GOP’s “Ripon” analytics software and acted as the analytics firm for the Vote Leave campaign. And the work AIQ was doing for Vote Leave was apparently so valuable that Vote Leave secretly laundered almost a million pounds through two smaller ‘leave’ groups in order to get that money to AIQ and secretly exceed the legal spending caps. And that’s why the discovery of thousands of AIQ documents by a cybersecurity firm is so politically significant in the UK right now. But as those documents also reveal, AIQ was doing work for other clients: Texas Governor Greg Abbott, Texas Senator Ted Cruz, and Ukrainian oligarch Sergei Taruta:
“A little-known Canadian data firm ensnared by an international investigation into alleged wrongdoing during the Brexit campaign created an election software platform marketed by Cambridge Analytica, according to a batch of internal files obtained exclusively by Gizmodo.”
As we can see, AIQ was the under-the-radar SCL subsidiary that actually created “Ripon”, the political modeling software Cambridge Analytica was offering to clients. Cambridge Analytica co-founder/whistle-blower Christopher Wylie helped SCL found the firm. AIQ co-founder Jeff Silvester admits that Wylie was involved with AIQ landing its first big contract but asserts that Wylie was never closely involved with the company. And Silvester also admits that the company had a contract with SCL in 2014, though he says it hasn’t worked with SCL since 2016. So AIQ is officially acting like it’s not really an SCL offshoot at this point:
And based on AIQ’s contract with SCL, we have a better idea of when exactly AIQ’s work with SCL ended in 2016: the code found by UpGuard was uploaded to the code-repository website GitHub in August of 2016, suggesting that was the point when the code was effectively handed off from AIQ to SCL. And August of 2016, it’s important to recall, is the same month that Steve Bannon, a Cambridge Analytica company officer — and “the boss” according to Wylie — went to work running the Trump campaign. So you have to wonder if that’s a coincidence or a reflection of concerns over this SCL/Cambridge Analytica/AIQ nexus getting some unwanted attention:
And in those discovered AIQ documents are notes on projects AIQ was doing for Cruz, Abbott and Taruta. Along with notes on a project for the GOP called The Database of Truth:
AIQ is making the GOP a “Database of Truth”. Great.
And that sounds like a separate system from Ripon. The Database of Truth appears to focus on the kind of data found in data brokerages — state voter files, consumer data, third-party data providers, etc. — whereas the Ripon software appeared to be specifically focused on the kind of psychological profiling Cambridge Analytica was specializing in:
And just as we’ve heard from the Trump campaign, with its assertions that the Cambridge Analytica software wasn’t actually very useful, the Cruz campaign is also calling the Ripon software just “vaporware”. Denials of the effectiveness of Cambridge Analytica’s psychological profiling methods have been one of the across-the-board assertions we’ve seen from the people involved with this story:
And while everyone involved with Cambridge Analytica has been claiming it’s largely useless, it’s hard to ignore the Brexit scandal that involved Vote Leave using two outside groups to launder almost a million pounds to AIQ for AIQ’s analytics services in excess of the legal spending caps. That’s quite a vote of confidence by Vote Leave:
As we can see, AIQ is an important entity in terms of understanding the broader scope of the kind of work and clients this SCL/Cambridge Analytica/Bannon/Mercer political influence project was undertaking. AIQ is critical for understanding the extent of the role this influence network played in the Brexit vote but also important for showing the other kinds of clients this network was taking on. Like Sergei Taruta.
Now let’s take a closer look at Taruta with this Ukrainian Week profile from October about the creation of Taruta’s new Osnova political party. Many suspect that Rinat Akhmetov of the Opposition Bloc is behind Taruta’s new party. There is no evidence of that yet, but the party so far appears designed to appeal to former Party of Regions voters, many of whom are now Opposition Bloc voters, and Akhmetov is a major Opposition Bloc backer. So questions about Akhmetov’s involvement remain open, but it’s clear that Osnova is trying to appeal to Akhmetov’s political constituency.
As the article also notes, Taruta has a history of supporting pro-EU politicians, including Viktor Yuschenko and Yulia Tymoshenko. And he’s never cozied up to the pro-Russian groups.
But Taruta does have one very notable Kremlin connection: in 2010, 50%+2 shares of Taruta’s industrial conglomerate, Industrial Union of Donbas (IUD), were bought up by Russia’s Vneshekonombank, the foreign trade bank. It is 100% state-owned, and Russian Premier Dmitry Medvedev is the chair of its supervisory board. So Taruta does have a notable direct business tie with the Russian government. But as the article notes, there are no indications Taruta or his new party are taking Russian money. And based on his political history it would be surprising if he was taking Kremlin money, because he’s clearly part of the pro-European branch of Ukraine’s politics.
So we have AIQ doing some sort of work for Sergei (Serhiy) Taruta. Is that work data analytics for Osnova? We don’t know. But it probably involves Taruta’s campaign against the National Bank of Ukraine, because Taruta is clearly very interested in waging that political fight. So interested that he staged a fake congressional hearing at the US Capitol that was broadcast on two Ukrainian television channels and sent the message that the US Congress was going to investigate Taruta’s claims about corruption at Ukraine’s central bank. So it’s possible AIQ was involved in that kind of political work too. Especially given what we know about Cambridge Analytica and SCL and their reliance on psychological warfare methods to change public opinion. A fake congressional hearing, made possible with the help of a Republican congressman, Rep. Estes, who scheduled the room at the US Capitol, seems like exactly the kind of stunt we should expect the Cambridge Analytica people to advise.
The question of what exactly AIQ has been doing for Taruta would be a pretty big question on its own, given the scandal and mystery swirling around Cambridge Analytica and SCL. The fake congressional hearing makes it a much weirder big question about the ultimate goals and agenda of the people behind Cambridge Analytica:
“The Osnova site states that the party’s ideology is based on the principles of liberal conservatism. In Ukrainian politics, however, these words typically mean very little. What kind of conservatism are we talking about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s other populists, all of whom count on a fairly undemanding electoral base. In some ways, he resembles Serhiy Tihipko, who tried over and over again to enter politics as a “new face,” although he had been in politics since his days in the Dnipropetrovsk Oblast Komsomol Executive.”
A party based on the principles of liberal conservatism. So a vague party for a vague cause. That seems like an appropriate fit for Sergei Taruta, an intriguingly vague figure. But a notable figure from Donetsk, the heartland of the separatists, because he never played up to the pro-Russian parties and movements and was consistently a supporter of the pro-Kiev forces. That included supporting Viktor Yuschenko in 2006 and Yulia Tymoshenko in 2010:
And Taruta’s pro-Kiev orientation is no doubt a big reason he was appointed governor of Donetsk in March of 2014 following the post-Maidan collapse of the Yanukovych government. But he didn’t last long, leaving the post in October of 2014, which was partly attributed to his limited support for the volunteer militias when compared to the appointed governor of the neighboring Dnipro oblast, Ihor Kolomoisky (note that, as we’ll see in a following article, both Taruta and Kolomoisky reportedly supported the Azov Battalion):
After leaving the governorship, he got elected to the parliament. And now he has a new party, Osnova, which is characterized as clearly designed to pick up the electorate of the now-defunct Party of Regions:
And while the translation is somewhat garbled here, it appears that there is speculation that Rinat Akhmetov, a top oligarch and one of the primary backers of the “Opposition Bloc”, may be behind Taruta’s Osnova initiative. But there’s no evidence of this and if true it would put Osnova in competition for Akhmetov’s Opposition Bloc voters. Also, people close to Akhmetov aren’t found in Osnova’s leadership:
But while Taruta is clearly a pro-Kiev/pro-EU kind of Ukrainian politician, he does have one notable tie to the Kremlin: a majority stake in his industrial conglomerate was sold to a Russian state-owned bank in 2010:
And beyond building his mysterious new Osnova party, Taruta is also busy lobbying the US about his pet project of outing alleged corruption at Ukraine’s central bank. Or at least he’s busy making it look like he’s lobbying the US about this. And he’s willing to go to enormous lengths to create those appearances, like the September 25, 2017 fake congressional hearing in the US Capitol where an ex-congressman, Connie Mack, pretended to express congressional outrage over Taruta’s allegations and an ex-CIA chief, James Woolsey, gave words of support for the ‘anti-corruption drive’. And this was all televised in Ukraine and treated like a real US political event:
So now let’s take a look at a report on this bizarre fake event written by the one American reporter who was invited to attend. As the article notes, the event was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” No current members of Congress were there. Instead, it was a private panel discussion hosted by former Rep. Connie Mack IV (R‑FL) and Matt Keelen, a veteran political fundraiser and operative. It was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter. Mack was wearing his old congressional pin on his lapel.
Much of the event was spent criticizing Ukraine’s former central banker Valeriya Hontareva (Gontareva). The “HONTAREVA report” is the product of Taruta, and he has been out promoting it since late 2016. According to VoxCheck, a Ukrainian fact checking website, “the data [in the report], though mostly correct, are manipulated in almost all occasions.” VoxCheck also notes that the report has split Ukrainian politicians.
James Woolsey, the former CIA director and former Trump campaign adviser, was also at the event and briefly spoke. Woolsey talked about how “sweet” Russia was in the early years after the fall of the Berlin Wall and the need to find a way to make Russia “sweet” like that again.
One Senate aide described Woolsey’s appearance there as a strange, strange event and an “inter-oligarch dispute”: “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for a inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.” And that seems like a good way to summarize this: a strange, strange event that’s one element of a broad inter-oligarch dispute. A dispute that’s giving us some insights into the kind of figures in Ukraine Cambridge Analytica and AIQ want to work for:
“The HONTAREVA report is the product of Sergiy Taruta, and he has been out flogging it for nearly a year. VoxCheck, a Ukrainian fact checking website, analyzed Taruta’s report in late 2016 and says of the report: “VoxCheck has checked most of the facts from the Taruta’s brochure and has discovered that the data, though mostly correct, are manipulated in almost all occasions.””
The fake congressional hearing is a sign of how much Taruta wants to publicize his report on the corruption at Ukraine’s central bank. But it’s also a sign that Taruta’s primary audience with this fake hearing was Ukrainians. And Taruta and his NewsOne Ukrainian media partners were more than happy to maintain the pretense that this was a real congressional event for that Ukrainian audience. It was a private event hoax designed to look like a public event:
Adding to the bizarreness was the speech by former CIA director James Woolsey about what sweethearts Russia was after the fall of the Berlin wall and the need to return to that point:
And that’s all why one Senate aide found it a strange, strange event to see a former CIA director show up at a hoax gathering that’s part of a larger inter-oligarch dispute:
So let’s now take a closer look at that inter-oligarch dispute to get a better sense of who Taruta is aligned with in Ukraine. And in this case he’s clearly aligned with Ihor Kolomoisky, co-founder of the nationalized Privatbank.
As the article also notes, when Taruta was selling the majority stake in the industrial conglomerate he co-founded, Industrial Union of Donbass, in 2010, he was a close ally of Yulia Tymoshenko. And according to leaked cables, Tymoshenko wanted him to keep the sale a secret over fears that she would be attacked for selling out Ukraine. It’s another indication of Taruta’s political pedigree.
The article also has an explanation from James Woolsey on why he attended that event: he was duped. He agreed to show up in the audience and then was asked on the spot to make some remarks. That’s the line he’s going with.
And the article identifies the person who has come forward to claim responsibility for arranging the event: Anatoly Motkin, a one-time aide to a Georgian oligarch. Motkin founded the StrategEast consulting firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.” Motkin claims that he decided to fund the event because Taruta brought the allegations about Gontareva to his attention.
So that gives us a few more data points about Taruta: he was close to Tymoshenko, he’s doing Ihor Kolomoisky’s bidding in waging this fight against the nationalization of Privatbank, and the person who actually set up the event runs a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:
“Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in the sale of one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.””
That’s a key observation: Taruta was seen as a close Tymoshenko ally.
But he’s also a Kolomoisky ally, since this inter-oligarch dispute is Kolomoisky’s dispute and Taruta is fighting Kolomoisky’s fight:
But what about James Woolsey? What’s his excuse for fighting Kolomoisky’s fight? He was tricked. That was his excuse:
And what about Rep. Estes, the congressman who made this official room available for the stunt? Well, he assures us that it won’t happen again. It’s sort of an explanation:
And note the two Ukrainian media companies that covered this. There was ChannelOne, which is owned by 1+1 Media, Ihor Kolomoisky’s media group. And also UkraNews, which belongs to Dmitry Firtash:
And recall what we saw in the above Ukraine Week piece about the makeup of the Opposition Bloc and the unproven speculation that Rinat Akhmetov could be behind Osnova: “One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed”. That sure sounds like Firtash represents a faction of the Opposition Bloc that would like to see Poroshenko go (recall that Andreii Artemenko’s peace plan proposal involved the collapse of the Poroshenko government under a wave of scandal revelations, with Artemenko providing the scandal evidence). So it’s notable that we have Firtash’s news channel promoting Taruta’s fake congressional hearing along with Kolomoisky’s ChannelOne.
And look who has come forward as the event organizer: Anatoly Motkin, a one-time aide to a Georgian oligarch:
And when we look at how Motkin’s lobbying firm describes itself, it’s “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:
“Mr. Motkin has devoted much of his career to assisting the processes of Westernization in post-Soviet states through the launching of a variety of media, political and business initiatives aimed to drive social awareness and connect communities. He has successfully invested in multiple technology startups, such as one of the most popular messaging apps and the ridesharing service app Juno, which was recently acquired by on-demand ride service Gett.”
And the involvement of someone like Motkin in arranging the theatrics of what amounts to an inter-oligarch dispute over Ihor Kolomoisky’s nationalized bank points to one of the key observations in this situation: it appears to be a fight between different factions of pro-Western Ukrainian oligarchs. And Sergei Taruta appears to be squarely in the camp of the faction that doesn’t support the separatists but also doesn’t support Poroshenko. As we’ve seen, Taruta has historic ties to Yulia Tymoshenko’s power base, but he also appears to be working with fellow East Ukrainian oligarch Ihor Kolomoisky.
So, finally, let’s note something important about Taruta and Kolomoisky from this 2015 report by Joshua Cohen, who has done a lot of good reporting about the neo-Nazi threat in Ukraine. It’s a report that would explain some of the animosity between Kolomoisky and the Poroshenko government: it describes the use of privately financed militias that are, in effect, private armies controlled by their Ukrainian oligarch financiers, with Ihor Kolomoisky being one of the biggest militia financiers. And this actually led to Kolomoisky’s firing in 2015, after Kolomoisky sent one of his private armies to seize control of the headquarters of the state-owned oil company, UkrTransNafta, after Kiev fired the company’s chief executive officer, who happened to be a Kolomoisky ally. That cost Kolomoisky his post as governor of Dnipro. So that, in addition to the Privatbank nationalization, is no doubt part of why Kolomoisky might not be super enthusiastic about the Poroshenko government.
Given the ongoing tensions between the neo-Nazi groups in Ukraine and the Kiev government, and the ongoing threats from groups like the Azov Battalion to ‘march on Kiev’ and take over, it’s noteworthy that one of their biggest financial backers, Ihor Kolomoisky, has so much animosity towards the Poroshenko government. And in our look at Sergei Taruta it’s also pretty noteworthy that, as the article notes, both Kolomoisky and Taruta were partially financing the neo-Nazi Azov Battalion:
“Ukraine’s President Petro Poroshenko has made clear his intention to rein in Ukraine’s volunteer warriors. Days after Kolomoisky’s soldiers appeared at UkrTransNafta, he said that he would not tolerate oligarchs with “pocket armies” and then fired Kolomoisky from his perch as the governor of Dnipropetrovsk.”
Yep, it was the use of a private army to seize state assets in a business dispute that got Ihor Kolomoisky fired as governor of the Dnipro Oblast in 2015. And that was just one example of how these neo-Nazi militias pose a threat to Ukrainian society. There’s also the obvious risk that they act on their own orders and try to seize control.
But the greatest threat these neo-Nazi militias pose clearly involves working in coordination with a team of Ukrainian oligarchs. And that’s part of what makes an understanding of the opaque Ukrainian oligarchic fault lines so important, because there’s always the chance that these inter-oligarch disputes will result in these private armies getting used for a coup or something along those lines.
And that’s a big part of why it’s notable that Taruta and Kolomoisky have a history of financing groups like the Azov Battalion:
And that’s also why it’s so notable if a company like AIQ is offering political services to someone like Taruta: Because Taruta appears to be allied with the pro-Western faction of Ukrainian oligarchs who want to replace their current Ukrainian government with their own faction. Much like Andreii Artemenko and his ‘peace plan’ proposal, which also appeared to be a plan from a pro-Western-anti-Poroshenko faction of Ukrainian oligarchs.
In other words, the story about Sergei Taruta and the bizarre fake congressional hearing appears to be one element of a much larger, very real inter-oligarch dispute involving some very powerful oligarchs. And Cambridge Analytica/AIQ/SCL appears to be working for one of those sides: the side currently out of power and trying to reverse that situation.
So you know that creepy feeling you get when you Google something and ads creepily related to what you just browsed start following you around on the internet? Rejoice! At least, rejoice if you enjoy that creepy feeling. Because you’ll get to experience that creepy feeling watching broadcast TV too with the next generation of televisions and the ATSC 3.0 broadcast format technology that was just offered to the American public for the first time on KFPH UniMás 35 in Phoenix, Arizona, with more market rollouts planned soon.
So how is the ATSC 3.0 broadcast format for television going to allow creepily personalized ads to follow you on television too? The new format basically combines over-the-air TV with internet streaming. So part of what you’ll see on the screen will be content sent over the internet which will obviously be personalized. And that’s going to include ads.
But it won’t just be delivering personalized content. The technology will also allow for tracking of user behavior. And there are no privacy standards at all. That will be up to the individual broadcasters, who will each design their own app to deliver the personalized content. Which obviously means there are going to be lots of broadcasters tracking your television viewing habits, recreating the kind of nightmare privacy situation we’ve already seen with platforms like Facebook and their app developers. The ATSC 3.0 broadcast format is like a new giant platform that everyone in the US will share, but with no privacy standards for the app developers, which might be even worse than Facebook.
So that’s coming with the next generation of televisions. As one might imagine given the fact that this new technology threatens to turn the tv into the next consumer privacy nightmare, this technology was a major focus of several tech demonstrations at the recent National Association of Broadcasters (NAB) conference in Las Vegas. And as one might also imagine, the industry hasn’t had much to say about the privacy aspect of this privacy nightmare it’s about to unleash:
“Broadcasters haven’t talked much about the advertising aspect, and they’ve said even less about the potential privacy implications, but it was a major focus of several tech demonstrations at the National Association of Broadcasters (NAB) conference in Las Vegas this week.”
Mum’s the word on the potential privacy implications for American television viewers. Potential privacy implications that could be coming to a media market near you soon:
And while the broadcasting industry may not want to talk about potential privacy violations, they sure are excited to talk about collecting viewer data for the purpose of serving up personalized ads:
And in this new app-based model for personalized broadcast television, each broadcaster develops their own apps, meaning there are going to be a lot of different apps/broadcasters potentially tracking what you do with those next-generation TVs:
Although it’s worth noting that the demonstration apps shown to the author of that TechHive article weren’t capable of tracking what you do in other apps. So each broadcaster would, in theory, only get to see what you do with their app and not other broadcasters’ apps. But, of course, a lot of broadcasters are going to own multiple channels in a market. Or they might just decide to share the data with each other:
Also keep in mind that there are still significant potential privacy violations even if apps can’t read the activity of other apps. For instance, if an app is capable of simply detecting when you turn the TV off or on, that gives away information about your day-to-day living schedule. It’s one of the generic privacy violations that come with the “internet of things”.
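As a toy illustration of that point — the event format here is invented, since actual ATSC 3.0 app telemetry isn’t public — even a bare stream of power on/off timestamps is enough to profile a household’s routine:

```python
from datetime import datetime
from collections import Counter

# Hypothetical telemetry: nothing but power events and timestamps.
events = [
    ("2018-04-02 06:45", "on"), ("2018-04-02 08:10", "off"),
    ("2018-04-02 19:30", "on"), ("2018-04-02 23:55", "off"),
    ("2018-04-03 06:50", "on"), ("2018-04-03 08:05", "off"),
]

# Count the hours at which the set gets switched on: a crude but
# effective profile of when someone is home and awake.
wake_hours = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    for ts, state in events if state == "on"
)
print(wake_hours.most_common())  # e.g. [(6, 2), (19, 1)]
```

A few days of nothing but on/off events reveals when you wake up, when you get home, and when you go to bed. That’s the floor of what this kind of tracking can infer, not the ceiling.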
And then there are the possible privacy violations that come with next-generation televisions with built-in microphones. Imagine how many apps will ask for permission to listen to everything you say in order to better personalize the service. Remember those stories about the CIA hacking into Samsung smart TVs with built-in microphones? That’s probably going to be the standard app behavior, if people allow it.
And, finally, the article notes that this means the nightmare of micro-targeted personalized political ads is coming to broadcast television:
Yep, just wait for Cambridge Analytica-style personalized psychological profiling of you, a profile that incorporates all the information already gathered about you from all the existing sources of information about you — Facebook, Google, data broker giants like Acxiom — and combines that with the knowledge on you obtained through your smart television, and get ready for the next-generation onslaught of the full-spectrum of personalized political ads designed to inflame you and polarize the country. The “A/B testing on steroids” advertising experiments employed by the Trump team on social media is coming to television.
It’ll be a golden age for television commercial actors because they’re going to have to shoot all the different customized versions of the same commercials used to micro-target the audience’s psychological profiles.
Of course, there is going to be the one option for next-generation television owners for avoiding the data privacy nightmare of personalized tv: unplug it from the internet and just watch tv the soon-to-be-old-fashioned way:
And that points towards one of the glaring problems with this situation: the only options American television consumers are going to have are either navigating a data privacy nightmare landscape, where each app can have its own privacy standards and there are almost no rules, or unplugging their smart TVs from the internet and forgoing the internet-based services. And that’s because spying on consumers in exchange for services and enhanced profits is the fundamental model of the internet, and this new data privacy nightmare landscape for smart TVs is merely the logical extension of that model. It’s a fundamental problem with the future of television ads and with the internet-of-things in general: mass commercial spying is just assumed in America. It’s the model for the internet in America. There is no alternative. And that model is coming to broadcast television, since commercial mass spying is clearly enshrined in the new ATSC 3.0 broadcast format. It’s a format that lets each app developer make up their own privacy standards. A ‘prepare-for-the-worst-hope-for-the-best’ model that literally prepares the way for the worst case scenario for consumer privacy and then just hopes it won’t be abused. Like the internet.
And in the case of this next-generation internet-connected television, there isn’t even the possibility for competition that at least exists with Facebook, where a competitor could theoretically emerge. There’s only one national broadcast format for smart TVs, and for nations that use the ATSC 3.0 standard it’s going to let each app maker make up their own privacy rules. Note that the ATSC 3.0 standard doesn’t just apply to the US. It was created by the Advanced Television Systems Committee, which is shared by the US, Canada, Mexico, South Korea, and Honduras. So this is a multinational, government-approved television standard, which means competition isn’t going to fix it. This is as good as the privacy standards are going to get for North American and South Korean internet-connected TV consumers: it’s up to the app developers, i.e. no privacy standards.
And no standards on the exploitation of all the data collected on us to deliver highly persuasive micro-targeted ad campaigns. Cambridge Analytica-style micro-targeted psychological operations for TV. That’s coming to all elections.
So just FYI, your next smart television is going to be very persuasive.
This was more or less inevitable: it sounds like the ’87 million’ figure — the number of Facebook profiles that had their data scraped by Cambridge Analytica — is set to be raised again. Recall that it was initially a 50 million figure before Cambridge Analytica whistle-blower Christopher Wylie raised the estimate to 87 million, while hinting that the figure could be more.
Also recall that the 87 million figure, ostensibly derived from the 270,000 people who downloaded the Cambridge Analytica Facebook app and their many friends, corresponded to ~322 friends for each app user on average, which is very close to the 338 friends the average Facebook user had in 2014. In other words, the 87 million figure is roughly what we should expect if you start off with 270,000 app users and scrape the profile information of each of their ~338 friends on average. So if that 87 million figure were to rise significantly, it would raise the question of where else Cambridge Analytica got its data.
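The sanity-check arithmetic is simple enough to reproduce:

```python
app_users = 270_000            # people who installed the app
scraped_profiles = 87_000_000  # Facebook's revised estimate

# Implied average friends-per-user if the 87M came solely from
# the app users' friend networks:
print(scraped_profiles / app_users)  # ~322, vs. ~338 average friends in 2014
```

So the friends-of-app-users mechanism fully accounts for 87 million, and anything substantially above that number needs a different source.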
Well, we have a new Cambridge Analytica whistle-blower, Brittany Kaiser, who worked full-time for SCL, Cambridge Analytica’s parent company, as director of business development between February 2015 and January of 2018. And according to Kaiser, the number of affected users is indeed “much greater” than 87 million. And Kaiser has a possible explanation for how Cambridge Analytica got data on all these additional users: it had more than one app scraping Facebook profile data.
And the way Kaiser puts it, it sounds like there were quite a few different apps used by Cambridge Analytica. Including one she calls the “sex compass quiz”. So, yes, the Trump team was apparently exploring the sexual predilections of the American electorate.
Additionally, Kaiser makes references to Cambridge Analytica’s “partners”. As she puts it, “I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login–for example, the ‘sex compass’ quiz.” So is that reference to Cambridge Analytica’s “partners” a reference to SCL or Aleksandr Kogan’s Global Science Research (GSR) company? Or were there other third-party firms that are also feeding information into Cambridge Analytica? The Republican National Committee, perhaps?
Along those lines, Kaiser makes another remarkable claim: that the office culture was like the “Wild West” and that personal data was “being scraped, resold and modeled willy-nilly.” So Kaiser is asserting that Cambridge Analytica resold the data too? It sure sounds like it.
These are the kinds of questions raised by Brittany Kaiser’s new claims. Along with the open question of exactly how many people Cambridge Analytica was collecting this kind of Facebook data on. We know it’s “much greater” than 87 million, according to Kaiser, but we have no idea how much greater it is:
“Kaiser claimed that the office culture was like the “Wild West” and alleged that citizens’ data was “being scraped, resold and modeled willy-nilly.””
That’s right, Cambridge Analytica wasn’t just scraping Facebook users’ data. It was apparently reselling it too. These are the claims of Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January of this year, made during her testimony to a UK parliamentary committee:
And according to Kaiser, the additional apps used by Cambridge Analytica include a “sex compass” quiz.
And keep in mind that the use of this ‘sex compass’ quiz app is probably pretty similar to how Aleksandr Kogan’s psychological profiling app worked: you use the data collected on the people taking the quiz as the “training set” to develop algorithms for inferring Facebook users’ sexual preferences from their Facebook profile data. And then Cambridge Analytica uses those algorithms to make educated guesses about the ‘sexual compass’ of all the other Facebook users they have profile data on. We don’t know for certain that this is what Cambridge Analytica did with the ‘sex compass’ app, but it’s probably what they did, because that is the business they are in.
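To be clear, we don’t have Cambridge Analytica’s actual code, but the ‘train on the quiz-takers, predict on everyone else’ approach described above is just standard supervised learning. Here’s a minimal sketch of that structure using synthetic stand-in data (every number and feature below is invented for illustration):

```python
# Minimal sketch of the "quiz as training set" approach described above.
# Synthetic data stands in for real profiles; this is not CA's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_quiz_takers, n_other_users, n_page_likes = 1_000, 10_000, 50

# Rows = users, columns = binary "liked this page" features.
X_quiz = rng.integers(0, 2, size=(n_quiz_takers, n_page_likes))
# The quiz answers supply the labels (here: a made-up binary trait).
y_quiz = rng.integers(0, 2, size=n_quiz_takers)

model = LogisticRegression(max_iter=1000).fit(X_quiz, y_quiz)

# The scraped-but-never-consented users: same features, no quiz answers.
X_everyone_else = rng.integers(0, 2, size=(n_other_users, n_page_likes))
inferred = model.predict_proba(X_everyone_else)[:, 1]
print(f"Inferred trait scores for {n_other_users:,} users who never took the quiz")
```

The point of the sketch is the asymmetry: only the quiz-takers supply labels, but the resulting model gets applied to every scraped profile with the same ‘Likes’ features, quiz or no quiz.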
And it’s the use of all these additional apps that Kaiser saw Cambridge Analytica employ that appears to be the basis for her conclusion that the number of Facebook profiles scraped by Cambridge Analytica is “much greater than 87 million”. And she also asserts, quite reasonably, that Cambridge Analytica wasn’t the only entity engaged in this kind of activity:
So how much higher is that 87 million figure going to go? Well, there’s one other highly significant number we should keep in mind when trying to understand what kind of data Cambridge Analytica acquired: The company claimed to have up to 5,000 data points on 220 million Americans. Also keep in mind that 220 million is greater than the total number of Facebook users in the US (~214 million in 2018).
So if we’re wondering how high that 87 million figure might go, the answer might be something along the lines of “almost all the Facebook users in the US in 2014–2015”. Whatever that number happens to be is probably the answer.
Here’s a set of articles on one of the figures who co-founded both Cambridge Analytica and its parent company SCL Group: Nigel Oakes.
While Cambridge Analytica’s former CEO Alexander Nix has received much of the attention directed at Cambridge Analytica, especially following the shocking hidden-camera footage of Nix talking to an undercover reporter he thought was a client, the story of Cambridge Analytica ultimately leads to Oakes, according to multiple sources.
So who is Nigel Oakes? Well, as the following article notes, Oakes got his start in the business of influencing people in the field of “marketing aromatics,” or the use of smells to make consumers spend more money. He also dated Lady Helen Windsor when he was younger, which made him a somewhat publicly known person in the UK.
In 1993, Oakes co-founded Strategic Communication Laboratories, the predecessor to SCL Group. In 2005, he co-founded SCL Group which, at the time, made headlines when it billed itself at a global arms fair in London as the first private company to provide psychological warfare services. Oakes said he was confident that psyops could shorten military conflicts. As he put it, “We used to be in the business of mind bending for political purposes, but now we are in the business of saving lives.”
SCL sold the same psychological warfare products in the US. Services included manipulation of elections and “perception management,” or the intentional spread of fake news. And the US State Department remains a client and confirmed that it retains SCL Group on a contract to “provide research and analytical support in connection with our mission to counter terrorist propaganda and disinformation overseas.”
So Nigel Oakes has quite an interesting history. A history that he unwittingly encapsulated with a now-notorious quote he gave in 1992:
“We use the same techniques as Aristotle and Hitler...We appeal to people on an emotional level to get them to agree on a functional level.”:
““Anyone right now that is focusing on the problems with Cambridge Analytica should be backtracking to the source, which is Nigel Oakes,” said Sam Woolley, research director of the Digital Intelligence Lab at the Silicon Valley-based Institute for the Future.”
Nigel Oakes is seen as “the source” of Cambridge Analytica. And Cambridge Analytica is seen as merely “the tip of the iceberg of Nigel Oakes’ empire of psyops and information ops around the world”:
And that’s how British journalist Carole Cadwalladr, who has done extensive reporting on Cambridge Analytica over the last year, also sees it: the questions about Cambridge Analytica lead to Oakes:
And it’s no surprise that Cambridge Analytica questions lead to Oakes. He helped co-found it, along with co-founding SCL Group in 2005 and Strategic Communication Laboratories in 1993:
And Oakes has been pitching SCL Group as a private psychological warfare service provider for years. So if we’re exploring how Cambridge Analytica got into the business of the manipulation of the masses, the fact that SCL has been providing those services to the US and UK governments for years is a pretty big factor in that story. When Cambridge Analytica was formed in 2013, its team was already quite experienced in these kinds of matters:
And as the hidden-camera footage of Alexander Nix showed the world, those mass manipulation services include dirty tricks. Like sending Ukrainian sex workers to an opponent’s house to sabotage him. It’s an indicator of the amoral character of the people behind Cambridge Analytica and its SCL Group parent:
And that amorality is perfectly encapsulated in a now-notorious 1992 quote from Oakes, where he favorably compares his work in psychological manipulation with the techniques employed by Hitler:
And that 1992 quote wasn’t the only ‘we use the same techniques as Hitler!’ quote Oakes has made over the years. As the following article notes, Oakes made the same admission last year in reference to the techniques employed by Cambridge Analytica for the Trump campaign:
““Hitler, got to be very careful about saying so, must never probably say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a problem with the Jews at all, but the people didn’t like the Jews,” Oakes said. “So if the people… He could just use them to say… So he just leverage an artificial enemy. Well that’s exactly what Trump did. He leveraged a Muslim- I mean, you know, it’s- It was a real enemy. ISIS is a real, but how big a threat is ISIS really to America? Really, I mean, we are still talking about 9/11, well 9/11 is a long time ago.””
And that’s Nigel Oakes in his own words: he saw Trump’s systematic fear mongering about virtually all Muslims as more or less the same cynical technique employed by Hitler.
And when you look at the full quote provided to the UK parliament it sounds even worse because he’s framing the use of these demonization techniques as simply a way to fire up “your group” (your target base of supporters) by demonizing a different group that you don’t expect to vote for your candidate:
“And often, as you rightly say, it’s the things that resonate, sometimes to attack the other group and know that you are going to lose them is going to reinforce and resonate your group.”
Attacking “the other group and know that you are going to lose” in order to “reinforce and resonate your group.” That’s how Nigel Oakes matter-of-factly framed the use of the same kinds of mass manipulation techniques designed to generate an emotional appeal in a target political demographic. An emotional appeal that happens to be based on demonizing a group of people that your target demographic already generally dislikes. In other words, find the existing areas of hatred and inflame them.
And offering services that will strategically inflame those passions is something Nigel Oakes has been openly offering clients for decades. And that’s all part of why Nigel Oakes is described as the real force behind Cambridge Analytica.
At the same time, let’s not forget the previous reports about Cambridge Analytica whistle-blower Christopher Wylie and Wylie’s characterization of Steve Bannon as Alexander Nix’s real boss at Cambridge Analytica, despite Bannon technically serving as the company’s vice president and secretary. So while Nigel Oakes is clearly a critically important figure behind Cambridge Analytica, the question of who was really in charge of the Cambridge Analytica operation for the Trump team is still an open one. Although it was likely more of a Hitler-inspired group effort.
Here’s an ominous article about Palantir (as if there weren’t already plenty of ominous articles about Palantir) that highlights both the challenges the company faces in selling its surveillance services and its plans for overcoming those challenges: It turns out the services Palantir offers its clients are pretty labor intensive, often requiring a large number of on-site Palantir employees. One notable example is JP Morgan, which hired Palantir to monitor the bank’s employees for the purpose of detecting miscreant behavior. This service involved as many as 120 “forward-deployed engineers” from Palantir working at JP Morgan, each one costing the bank as much as $3,000 a day. So from a price standpoint that’s obviously going to be an issue, even for a financial giant like JP Morgan. Although at JP Morgan it sounds like the bigger issue was that the executives learned their own emails and activity were potentially caught up in Palantir’s data dragnet too. But the overall cost of these “forward-deployed engineer” Palantir contractors is reportedly an issue for a number of other corporate clients that recently dropped Palantir, including Hershey Co., Coca-Cola, Nasdaq, American Express, and Home Depot.
So how is Palantir planning on addressing the labor-intensive nature of its services to attract more clients? Automation, of course. And that’s already part of the new product Palantir is offering clients, called Foundry, which is already in use by Airbus SE and Merck KGaA. In other words, the automation of Palantir’s corporate surveillance services is almost here, and that means a lot more corporate clients are probably going to be hiring Palantir. So, yeah, that’s rather ominous.
The article also includes a few more Palantir fun facts. For instance, while there are 2,000 engineers at the company, the Privacy and Civil Liberties Team only consists of 10 people.
A second fun fact is about Peter Thiel. Apparently he’s planning to move to Los Angeles and start up a right-wing media empire. Oh goodie.
The article also contains a couple of fun facts relating to the questions about Palantir and Cambridge Analytica raised by the revelation that a Palantir employee was working with Cambridge Analytica to develop its psychological profiling algorithms: First, Palantir claims that the company turned down offers to work with Cambridge Analytica and that its employee, Alfredas Chmieliauskas, was acting purely on his own. As the following article notes, that’s the same explanation Palantir gave when it was caught planning an orchestrated disinformation campaign against Wikileaks and Anonymous. So the “lone employee” explanation appears to be a Palantir favorite.
Additionally, the article notes that Palantir doesn’t advertise its services and instead relies purely on word of mouth. And that’s interesting in relation to the mystery of how it was that Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, just happened to stop by Cambridge Analytica’s London headquarters in mid-2013 to push the idea that the company should start working with Palantir. Now, it’s important to recall that part of what made Sophie Schmidt’s seemingly random visit in mid-2013 so curious is that Cambridge Analytica and Palantir had already started talking in early 2013. Still, it’s noteworthy that if Palantir only relies on word-of-mouth referrals, Sophie Schmidt appeared to provide exactly that kind of referral, seemingly randomly and spontaneously.
So that’s some of the new information we learn about Palantir in the following article. New information that’s all ominous, of course:
“High above the Hudson River in downtown Jersey City, a former U.S. Secret Service agent named Peter Cavicchia III ran special ops for JPMorgan Chase & Co. His insider threat group—most large financial institutions have one—used computer algorithms to monitor the bank’s employees, ostensibly to protect against perfidious traders and other miscreants.”
Insider threat services. That appears to be one of the primary services Palantir is trying to offer to corporate clients. It’s the kind of service that gives Palantir access to almost everything employees are doing in a company and basically turns it into a Big Brother-for-hire entity. And when JP Morgan hired Palantir to provide these services, they ended up dropping the service after the executives learned that it was too Big Brother-ish and was watching over the executives too:
And this project at JP Morgan was basically the test lab for a new service Palantir is trying to offer the financial sector: Metropolis:
And through this JP Morgan test bed for Metropolis, Peter Cavicchia’s insider threat group was given access to “a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use”. Along with a team of Palantir engineers to help him use that data. This is the business model Palantir was trying to test so it could sell it to other banks: using Palantir to give bank employees unprecedented access to the bank’s internal data (which, of course, means Palantir likely has access to that data too):
But Palantir’s test bed at JP Morgan ultimately turned into a failed experiment when JP Morgan’s leadership learned that Cavicchia had apparently used his unprecedented access to internal documents to spy on JP Morgan executives who were investigating a leak to the New York Times. The leak appeared to have come from an executive who had just left the company, Frank Bisignano, who also happened to be Cavicchia’s patron at the company before he left. And the leak investigation appeared to show that Cavicchia accessed executive emails about the leak and passed them along to Bisignano. In other words, JP Morgan learned that the guy they made their corporate Big Brother had abused that power (shocker):
Thus ended Palantir’s test run of Metropolis, highlighting the fact that the extensive manpower associated with Palantir’s services isn’t the only factor that might keep corporate clients away. The way Palantir’s services create individuals with unprecedented access to the internal documents of a company might also drive clients away. After all, threat assessment groups are intended to mitigate risk. Not exacerbate it.
But the cost of all those on-site Palantir engineers is still an obstacle to wider adoption of Palantir’s services. As the article notes, roughly half of Palantir’s 2,000 engineers are working on client sites:
And that’s what Palantir’s newest product, Foundry, is designed to address. By increasingly automating the corporate surveillance process:
“Deeper adoption of Foundry in the commercial market is crucial to Palantir’s hopes of a big payday.”
And that appears to be the direction Palantir is heading: automated corporate surveillance which will allow the company to offer its services cheaper and to more clients. So if Palantir succeeds we just might see A LOT more companies hiring Palantir’s services, which means A LOT more employees are going to have Palantir’s software watching and analyzing their every keystroke and email. It really is pretty ominous. Especially given the fact that the company’s Privacy and Civil Liberties Team consists of a whole 10 people:
So that’s an overview of the current status of Palantir’s Big Brother-for-hire services: they’ve hit some obstacles, but if they can succeed in overcoming those obstacles Palantir could become the go-to corporate surveillance firm. It’s more than a little ominous.
And then there are the fun facts from this article that relate to the questions about Palantir’s ties to Cambridge Analytica: First, just as Palantir claimed that its employee found to be working with Cambridge Analytica, Alfredas Chmieliauskas, was acting on his own, that’s the same excuse Palantir gave when it was caught pitching a project to the US Chamber of Commerce to run a secret campaign to spy on and sabotage the Chamber’s critics: it was just a lone employee:
Finally, there’s the interesting fact that Palantir’s executives boast of not employing a single salesperson, relying instead on word of mouth:
And Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, provided exactly that in June of 2013: a word-of-mouth endorsement of Palantir. So did Sophie Schmidt make this word-of-mouth pitch independently and coincidentally? It remains an unanswered question, but it’s hard to ignore that Schmidt’s pitch matches exactly how Palantir markets itself.
So we’ll see what happens with Palantir and its drive to use automated corporate surveillance to cut costs and sell its Big Brother-for-hire services to even more large employers. But it does seem like just a matter of time before Palantir succeeds in cutting those costs, which means “word of mouth” isn’t just going to be Palantir’s approach to marketing. Word of mouth is also going to be the only way employees in the future will be able to say something to each other without Palantir knowing about it.
Here’s an update on how Facebook is planning to address the new scrutiny it’s receiving from the US Congress as the Cambridge Analytica scandal continues to play out: Facebook’s head of policy in the United States, Erin Egan, was just replaced. It’s a notable position, politically speaking, because it’s based in Washington DC, so Facebook basically just replaced one of its top DC lobbyists.
So who replaced Egan? Kevin Martin, Facebook’s vice president of mobile and global access policy. Oh, and Martin is also a former Republican chairman of the Federal Communications Commission. Surprise!
Martin will report to vice president of global public policy, Joel Kaplan. Oh, and Martin and Kaplan worked together in the George W. Bush White House and on Bush’s 2000 presidential campaign. Surprise again! There’s a distinct ‘K Street’ feel to it all.
Facebook is spinning this by emphasizing that Egan will remain chief privacy officer. The company is acting like they made this move in order to have someone with Egan’s credentials focused on rebuilding trust and not so they can replace her with a Republican.
And that appears to be Facebook’s strategy for dealing with Congress: tasking Republicans to lobby their fellow Republicans:
“Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for lobbying and government relations as head of policy for the last two years. She will be replaced by Kevin Martin on an interim basis, the company said. Mr. Martin has been Facebook’s vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission.”
When you’re a company as big as Facebook, that’s who you bring in to lead your lobbying effort: the former Republican chairman of the FCC.
And this means two Republicans will be in charge of Facebook’s Washington offices (which are pretty much there to lobby):
But the way Facebook would prefer us to look at it, this was really all about freeing up Erin Egan to work on rebuilding trust over privacy concerns:
And this move is happening at the same time Facebook is staring at a new EU data privacy regime, the GDPR:
And those new EU GDPR rules don’t just potentially impact how Facebook handles its European users going forward. They potentially impact the policies governing all of Facebook’s users outside of the US.
Why? Because Facebook’s customers outside the US and Canada are handled by Facebook’s operations in Ireland and are therefore under EU rules. That’s just how Facebook decided to structure itself internationally (largely due to Ireland’s status as a corporate tax haven).
So does this mean Facebook’s US users will be operating in a data privacy regulatory environment managed by the GOP while almost everyone else in the world operates under the EU’s new rules? Nope, because Facebook just moved its international operations out of Ireland and back to its US headquarters in California. And that means the rules Facebook is lobbying for in DC will apply to all Facebook users globally outside the EU:
“Facebook members outside the United States and Canada, whether they know it or not, are currently governed by terms of service agreed with the company’s international headquarters in Ireland.”
Yep, for Facebook and quite a few other major internet companies with international headquarters in Ireland, it’s the EU’s rules that determine the rules for most of their global customer base. But not anymore for Facebook:
And that move from Ireland to California will impact the ~1.5 billion users Facebook has outside of the US, Canada, and EU:
But Facebook wants to assure everyone that this move will have no meaningful impact on anyone’s privacy because it’s committed to having ALL of its users globally follow the same rules as laid out by the EU’s new GDPR. At least ‘in spirit’. That’s right, Facebook is telling the world that it’s going to implement the GDPR globally at the same time it moves its operations out of the EU. That’s not suspicious or anything:
So why did Facebook make the move if it’s pledging to implement the GDPR ‘in spirit’ for everyone? Well, according to Facebook, it’s “because EU law requires specific language.” That’s not dubious or anything:
And, of course, Facebook isn’t the only multinational internet firm looking to move out of Ireland. Microsoft’s LinkedIn is making the same move, under a similarly laughable pretense:
“We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data”
Yeah, LinkedIn is making the move so users won’t be confused about whether or not the US or EU LinkedIn entity was responsible for their personal data. LOL! We’ll no doubt get similarly laughable explanations from all the other multinational firms making similar moves.
Also don’t forget that these moves mean the US’s data privacy rules are going to be even more important for the internet giants, because now those rules are going to apply to users everywhere but the EU. And that means the lobbying of US lawmakers and regulators is going to be even more important going forward. The more companies that relocate to the US to escape the EU’s GDPR for their international customer base, the greater the incentives for undermining US data privacy laws. In other words, it’s a really great time to be a Republican data privacy lobbyist.
Here’s a pair of stories that relate to both Cambridge Analytica and the bizarre collection of stories related to the ‘Seychelles backchannel’ #TrumpRussia story (like George Nader’s participation in the ‘backchannel’ or Nader’s hiring of GOP money man Elliott Broidy to lobby on behalf of the UAE and Saudis). And the connecting element is none other than Erik Prince:
So long Cambridge Analytica! Yep, Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.
Is this truly the end of Cambridge Analytica? Of course not. They’re just rebranding under a new company, Emerdata. It’s kind of like when Blackwater renamed itself Xe, and then Academi. And intriguingly, Cambridge Analytica’s transformation into Emerdata introduces another association with Blackwater: Emerdata’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince:
“In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.”
So Cambridge Analytica is going away and the SCL Group is getting out of the elections business. At least on the surface. But there’s still an open question of who is going to retain the rights to all the information held by Cambridge Analytica, including all those psychographic voter profiles that are presumably worth quite a bit of money:
And that question over who is going to own the rights to all that data is particularly relevant given that executives at Cambridge Analytica and SCL Group and the Mercers recently formed a new company: Emerdata. And look who happens to be one of Emerdata’s directors: Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince:
“Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group.”
LOL! Yeah, the possibility for a “Blackwater-style rebranding” is looking more like a reality at this point. Although we’ll see how many clients this new company gets.
And that brings us to the following piece. It’s a fascinating piece that summarizes all of the various things we’ve learned about Erik Prince, the #TrumpRussia investigation, and the UAE. And as the article notes, at the same time Emerdata was being formed in 2017 (August 11, 2017, was the incorporation date), the UAE was already paying SCL to run a social media campaign for the UAE against Qatar as part of the UAE’s #BoycottQatar campaign. And as the article also notes, if you look at the name “Emerdata”, it sure sounds like a shortened version of “Emerati-Data”.
So given the presence of Erik Prince’s business partner on the board of directors of Emerdata, and given Prince’s extensive ties to the UAE, we have to ask whether or not Cambridge Analytica is about to become the new plaything of the UAE:
“In 2017 as Cambridge Analytica executives created Emerdata, they were also working on behalf of the UAE through SCL Social, which had a $330,000 contract to run a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. One of the Emerdata directors may have ties to the UAE and the company name, coincidentally, sounds like a play on Emirati-Data…Emerdata.”
Emirati-Data = Emerdata. Is that the play on words we’re seeing in this name? It does sound like a reasonable inference. Especially given Erik Prince’s close association with both Emerdata’s board of directors and the UAE:
So let’s take a closer look at Prince’s ties to the UAE and his partners in Hong Kong: He moves to the UAE in 2010, and gets hired by Sheikh Mohamed bin Zayed al-Nahyan to build a fighting force in 2011. In 2012, while still living in the UAE, Prince creates the Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises:
Then, in 2014, Prince gets named as Chairman of DVN Holdings, controlled by Hong Kong businessman Johnson Ko Chun-shun (who sits on the board of Emerdata) and Chinese state-owned Citic Group:
Then there’s all the shenanigans involving the Seychelles ‘backchannel’ (that inexplicably involves the UAE) and GOP money-man Elliott Broidy:
Then Emerdata gets formed in August of 2017. The next month, Steve Bannon and Alexander Nix attend the CLSA Investors’ Forum in Hong Kong, which is run by Citic Group, the majority owner of Prince’s Frontier Services Group:
Then in October of 2017, we have a continuation of Elliott Broidy’s lobbying of the Trump administration on behalf of the UAE at the same time the SCL Group gets hired to implement a social media campaign for the UAE against Qatar:
Finally, in early 2018 we find Emerdata adding Alexander Nix, Johnson Chun Shun Ko (Prince’s partner at Frontier Services Group), Cheng Peng, Ahmad Al Khatib, Rebekah Mercer, and Jennifer Mercer to the board of directors:
So it sure looks a lot like the new incarnation of Cambridge Analytica is basically going to be applying Cambridge Analytica’s psychological warfare methods on behalf of the UAE, among others. The Chinese investors will also presumably be interested in these kinds of services. And anyone else who might want to hire a psychological warfare service provider run by a bunch of far right luminaries.
Oh look at that: Remember how Aleksandr Kogan, the University of Cambridge professor who built the app used by Cambridge Analytica, claimed that what he was doing was rather typical? Well, Facebook’s audit of the thousands of apps used on its platform appears to be proving Kogan right. Facebook just announced that it has already found and suspended 200 apps that appear to be misusing user data.
Facebook won’t say which apps were suspended, how many users were involved, or what the red flags were that triggered the suspension, so we’re largely left in the dark in terms of the scope of the problem.
But there is one particular problem app that’s been revealed, although it wasn’t revealed by Facebook. It’s the myPersonality app, which was also developed by Cambridge University professors at the Cambridge Psychometrics Center. Recall how Cambridge Analytica ended up working with Aleksandr Kogan only after first being rebuffed by the Cambridge Psychometrics Center. And as we’re going to see in the second article below, Kogan actually worked on the myPersonality app until 2014 (when he went to work for Cambridge Analytica). So the one app of the 200 recently suspended apps that we get to know about at this point is an app Kogan helped develop. And the other 199 apps remain a mystery for now:
“Facebook declined to provide more detail on which apps were suspended, how many people had used them or what red flags had led them to suspect those apps of misuse.”
Did you happen to use one of the 200 suspended apps? Who knows, although Facebook says it will notify people of the names of suspended apps eventually. No timeline for that disclosure is given:
And, again, this is exactly what Kogan warned us about:
And note how Facebook is specifically saying it’s reviewing “tens of thousands of apps that could have accessed or collected large amounts of users’ personal information before the site’s more restrictive data rules for third-party developers took effect in 2015”. In other words, Facebook isn’t reviewing all of its apps. Only those that existed before the policy change that stopped apps from exploiting the “friends permission” feature that let app developers scrape the information of Facebook users and their friends. So it sounds like this review process isn’t looking for data privacy abuses under the current set of rules. Just abuses under the old set of rules:
And that apparent focus on abuses from the old “friends permission” rules suggests that current data use problems might go undetected. And the one app we’ve learned about, the myPersonality app, is a perfect example of the kind of app that would have been violating Facebook’s current data privacy rules. Because as people recently learned, the Facebook data gathered by the app was available online for the purpose of sharing with other researchers, but it was so poorly secured that anyone could have potentially accessed it:
But it gets worse. Because as the following New Scientist article that revealed the myPersonality app’s privacy issues points out, the data on some 6 million Facebook users was anonymized, but with such a shoddy anonymization scheme that someone could have easily deanonymized the data in an automated fashion. And access to this database was potentially available to anyone for the past four years. So almost anyone could have grabbed this anonymized data on 6 million Facebook users and deanonymized it with relative ease.
And putting aside the possible unofficial access of this data, the people and institutions that got official access are also concerning: More than 280 people from nearly 150 institutions accessed this database, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo. Yep, researchers at Facebook were apparently accessing this database of poorly anonymized data.
So it should come as no surprise that, just as Aleksandr Kogan defended himself by asserting that lots of other apps did the same thing as his Cambridge Analytica app and that Facebook was well aware of how his app was being used, we’re getting the exact same defense from the team behind myPersonality:
“Academics at the University of Cambridge distributed the data from the personality quiz app myPersonality to hundreds of researchers via a website with insufficient security provisions, which led to it being left vulnerable to access for four years. Gaining access illicitly was relatively easy.”
Yep, an online database of highly sensitive Facebook + psychological profile data was made accessible to hundreds of researchers. But it was also potentially accessible to anyone due to poor security. For four years.
And those that were given official access to the data included companies like Microsoft, Google, Yahoo, and Facebook:
While the Facebook researchers could plausibly claim that they had no idea the server hosting this data had insufficient security, it would be a lot harder for them to claim they had no idea the anonymization scheme was highly inadequate:
And the only thing the myPersonality team appeared to do to anonymize the data was replace names with a number. THAT’S IT! And when that’s the only anonymization step employed in a data set with large amounts of data on each individual, including status updates, it’s going to be trivial to automate the deanonymization of these people, especially for companies like Google, Yahoo, Microsoft and Facebook:
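If it’s not obvious why that’s trivial, here’s a toy illustration of the linkage step: any verbatim field left in the ‘anonymized’ record, like a status update, works as a join key against publicly scraped posts (all records below are invented for illustration):

```python
# Toy illustration of why swapping names for ID numbers is not anonymization:
# any verbatim field left behind (like a status update) acts as a join key.
anonymized = {
    1001: {"status": "Just ran my first marathon in Des Moines!"},
    1002: {"status": "My cat knocked the router off the shelf again"},
}
public_posts = [  # e.g., scraped from publicly visible profiles
    {"name": "Alice Example", "status": "Just ran my first marathon in Des Moines!"},
    {"name": "Bob Example",   "status": "My cat knocked the router off the shelf again"},
]

# Build a status -> name lookup from the public data, then join.
lookup = {p["status"]: p["name"] for p in public_posts}
for user_id, record in anonymized.items():
    name = lookup.get(record["status"])
    if name:
        print(f"ID {user_id} re-identified as {name}")
```

And that’s with a single join key. With hundreds of data points per person, the matching only gets easier.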
Not surprisingly, two of the academics in charge of this project were part of a spin-off company that sold tools for targeting ads based on personality types. So it wasn’t just commercial companies like Google and Yahoo who got access to this data. The whole enterprise appeared to be commercial in nature:
And, of course, Aleksandr Kogan was part of this project before he went to work for Cambridge Analytica:
And note how Facebook only suspended this app on April 7th of this year, four years after Facebook ended the notorious “friends permission” feature that’s received most of the attention in the Cambridge Analytica scandal. It’s a big reminder that data privacy abuses via Facebook apps aren’t limited to that “friends permissions” feature. It’s an ongoing problem, which is why it’s troubling to hear that Facebook is only looking into the tens of thousands of apps that may have abused its pre-2015 data use policies:
But beyond the troubling half-assed anonymization scheme, there’s the issue of all this data being inadvertently made available to the world due to the user credentials for the database getting uploaded into some code on GitHub, an online coding repository:
It’s important to keep in mind that the accidental release of those credentials by some students is probably the most understandable aspect of this data privacy nightmare. It’s the equivalent of writing a bug in code: a common careless accident. Everything else associated with this data privacy nightmare is far less understandable, because it wasn’t a mistake. It was by design.
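For readers who don’t write code, the students’ mistake looks something like the first pattern below: a working credential committed right alongside the source, so it travels to any repository the code gets pushed to. The second pattern is the standard precaution. (All names and values here are invented.)

```python
import os

# The common mistake: a working credential hard-coded in the source.
# Push this file to a public GitHub repo and the database is open to anyone.
DB_USER = "mypersonality_admin"   # invented example values
DB_PASSWORD = "hunter2"

# The standard precaution: keep secrets out of the source entirely
# and load them from the environment at runtime.
db_user = os.environ.get("DB_USER")
db_password = os.environ.get("DB_PASSWORD")
```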
And as we should expect at this point, the designers of the myPersonality app are expressing dismay at Facebook’s dismay. After all, Facebook has long been aware of the project and even held meetings with the team as far back as 2011:
And don’t forget, Facebook researchers were among the users of this data. So Facebook was obviously pretty familiar with the app.
And in the end, we’ll likely never know who accessed the data and what they did with it. It’s just the tip of the iceberg:
And note one of the other chilling implications of this story: Recall how the ~270,000 users of the Cambridge Analytica app resulted in Cambridge Analytica harvesting data on ~87 million people using the “friends permissions” option. Well, if this myPersonality app has been operating for 9 years, that means it also had access to the “friends permissions” option, and for much longer than the Cambridge Analytica app. And 6 million people apparently downloaded this app! So how many of those 6 million people were using this app in the pre-2015 period when the “friends permission” option was still available, and how many friends of those 6 million people had their profiles harvested too?
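The same friends-multiplier arithmetic from the 87 million estimate gives a rough sense of the ceiling here, with one big caveat: friend networks overlap heavily, so the raw product only bounds the exposure, capped by Facebook’s actual user base at the time (the MAU figure below is from Facebook’s public earnings reports, not from this article):

```python
# Rough ceiling on myPersonality's potential "friends permission" exposure,
# using the same multiplier logic as the Cambridge Analytica estimate above.
quiz_takers = 6_000_000
avg_friends_2014 = 338

raw_product = quiz_takers * avg_friends_2014
print(f"Naive product: {raw_product:,}")  # ~2 billion, which overshoots...

# ...because friend lists overlap heavily, and the true number of distinct
# harvested profiles is capped by the platform's user base at the time.
facebook_mau_2014 = 1_390_000_000  # approximate monthly active users, 2014
print(f"Upper bound: {min(raw_product, facebook_mau_2014):,}")
```

Even with heavy overlap, there’s plenty of room for the real number of harvested profiles to dwarf the 6 million direct users.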
So it’s entirely possible the people at myPersonality grabbed information on far more than the 6 million people who used their app, and we have no idea what they did with the data. What we know now is just the tip of the iceberg of this story.
And this story of myPersonality is just covering one of the 200 apps that Facebook just suspended. In other words, this iceberg of a story is just the tip of a much, much larger iceberg.
Here’s a story about an explosive new lawsuit against Facebook that could end up being a major headache for the company, and Mark Zuckerberg in particular: The lawsuit is being brought by Six4Three, a former app developer startup. Six4Three claims that, in 2012, Facebook’s advertising business model was facing a crisis due to the rapid adoption of smartphones and the fact that Facebook’s ads were primarily focused on desktops. Facing a large drop in revenue, Facebook allegedly forced developers to buy expensive ads on the new, underused Facebook mobile service or risk having their access to the data at the core of their business cut off.
The way Six4Three describes it, Facebook first got developers to build their business models around access to that data, and then engaged in what amounts to a shakedown of those developers, threatening to take that access away unless expensive mobile ads were purchased.
But beyond that, Six4Three alleges that Facebook incentivized developers to create apps for its system by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. Don’t forget that this Facebook friends data (accessed via the “friends permission” feature) is the information at the heart of the Cambridge Analytica scandal.
So Facebook was apparently offering long-term access to “friends permission” data back in 2012 as a means of incentivizing developers to create apps at the same time it was threatening to cut off developer access to this data unless they purchased expensive mobile ads. And then, of course, that “friends permission” feature was wound down in 2015, which was undoubtedly a good thing for the privacy of Facebook users. But as we can see, the developers weren’t so happy about it, in part because they were apparently told by Facebook to expect long-term access to that data. Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook.
It’s worth noting that Six4Three developed an app called Pikinis that searched through the photos of your friends for pictures of them in swimwear. So losing access to friends data more or less broke Six4Three’s app.
Beyond that, Six4Three also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. This is also noteworthy with respect to the Cambridge Analytica scandal since it appeared to be the case that Aleksandr Kogan’s psychological profiling app was allowed to access the “friends permission” feature later than other apps. In other words, the Cambridge Analytica app did actually appear to get preferential treatment from Facebook.
But Six4Three’s allegations go further, and suggest that Facebook’s executives would observe which apps were the most successful and plotted to either extract money from them, co-opt them or destroy them using the threat of cutting off access to the user data as leverage.
So, basically, Facebook is getting sued by this app developer for acting like the mafia, with access to all that user data as the key enforcement tool:
“A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.”
It was Mark Zuckerberg who personally led this shakedown operation, according to the lawsuit. So what’s the evidence? Well, that appears to be in the form of thousands of currently redacted internal emails. It’s unclear how those emails were obtained:
Note this isn’t a new lawsuit by Six4Three. They first filed a case in 2015, shortly after Facebook removed developers’ access to the “friends permission” data feature, through which app developers could grab extensive information from ALL the Facebook friends of the users who downloaded their apps. And when you look at how the Six4Three app works, it’s pretty clear why they would have been very upset about losing access to the friends data: their “Pikinis” app is based on scanning your friends’ pictures for shots of them in swimwear:
And it’s a rather fascinating lawsuit by Six4Three because it’s basically complaining that Facebook suddenly threatened to remove access to this personal data after previously implying that developers would have long-term access to it, and then used that power to extort developers. And in order to make that case, Six4Three also asserts that Facebook was well aware of the privacy implications of its data sharing policies, because access to that data was both the carrot and the stick for developers. So this case, if proven, would utterly destroy Facebook’s portrayal of itself as a victim of Cambridge Analytica’s misuse of its data:
And the initial motive for all this was Facebook’s realization in 2012 that it failed to anticipate the speed of consumer adoption of smartphones and effectively damaged its lucrative advertising business, which was focused on desktop ads:
So Facebook responded to this sudden threat to its core business in multiple scandalous ways, according to the lawsuit. First, Facebook began forcing app developers to buy expensive mobile ads on its new, underused mobile service, or risk having their access to the data at the core of their business cut off. It’s an example of how important selling access to user data to third parties was to Facebook’s business model:
But beyond that, Six4Three alleges that Facebook was simultaneously trying to entice developers to make apps for its systems by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. So the “friends permission” feature that Facebook phased out in 2014–2015 was apparently being peddled to developers as a long-term feature back in 2012:
And, according to Six4Three, once a business became hooked on Facebook’s user data, Facebook would then look for particularly lucrative apps and try to find ways to extract more money out of them. And that would apparently include threatening to cut off access to that user data to either force companies out of business or coerce app owners into selling at below market prices. Up to 40,000 companies were potentially defrauded in this way and it was Facebook’s senior executives who personally devised and managed the scheme, including Zuckerberg:
Not surprisingly, Sandy Parakilas, the former Facebook platform operations manager turned whistle-blower who previously revealed that Facebook executives were consciously negligent about how user data was used (or abused), views this lawsuit and the revelations contained in those emails as a “bombshell” that more or less backs up what he’s been saying all along:
So was Mark Zuckerberg effectively acting like the top mobster in a shakedown scheme targeting app developers? A scheme where Facebook selectively threatened to rescind access to its core data in order to extort ad buys from developers, buy apps at below-market prices, or straight up drive app developers out of business? We’ll see, but this is going to be a lawsuit to keep an eye on.
“That’s a nice app you got there...it would be a shame if something happened to your access to user data...”
Here’s a fascinating twist to the already fascinating story of Psy Group, the Israeli-owned private intelligence firm that was apparently pushed on the Trump team during the August 3, 2016, Trump Tower meeting. That’s the newly discovered meeting where Erik Prince and George Nader met with Donald Trump, Jr. and Stephen Miller to inform the Trump team that the crown princes of Saudi Arabia and the UAE were “eager” to help Trump win the election. And Psy Group, an Israeli private intelligence firm that offers many of the same psychological warfare services as Cambridge Analytica, presented a pitch at that meeting for a social media manipulation campaign involving thousands of fake accounts. And this meeting happened a couple weeks before Steve Bannon replaced Paul Manafort and brought Cambridge Analytica into prominence in the Trump team’s electoral machinations.
So here’s the new twist to this Psy Group/Cambridge Analytica story: now we learn that Psy Group formed a business alliance with Cambridge Analytica after Trump’s victory to try to win U.S. government work. This alliance reportedly came after Cambridge Analytica and Psy Group signed a mutual non-disclosure agreement.
Intriguingly, the agreement was signed on December 14, 2016, according to documents seen by Bloomberg. And December 14, 2016, just happens to be one day before the Crown Prince of the UAE secretly traveled to the US — against diplomatic protocol — and met with the Trump transition team at Trump Tower (including Michael Flynn, Jared Kushner, and Steve Bannon) to help arrange the eventual meeting in the Seychelles between Erik Prince, George Nader, and Kirill Dmitriev.
So you have to wonder if the signing of that non-disclosure agreement was part of all the scheming associated with the Seychelles. Don’t forget that the Seychelles meeting appears to center around what amounts to a lucrative offer to Russia to realign itself away from the governments of Iran and Syria, which implicitly suggests plans for ongoing regime change operations in Syria and a major new regime change operation in Iran. And based on what we know about the services offered by both Psy Group and Cambridge Analytica — psychological warfare services designed to change the attitudes of entire nations — the two firms sound like exactly the kinds of companies that might have been major contractors for those planned regime change operations.
Granted, there would have been no shortage of potential US government contracts Cambridge Analytica and Psy Group would have been mutually interested in pursuing that have nothing to do with the Seychelles scheme. But the timing sure is interesting given the heavy overlap of characters involved.
And while the non-disclosure documents don’t indicate precisely which government contracts the two companies were initially planning on jointly bidding on (which makes sense if they were initially planning on working on something involving a Seychelles/regime-change scheme), there is some information on one of the contracts they did end up jointly bidding on, which happened to focus on psychological warfare services in the Middle East. Specifically, they made a joint proposal to the State Department’s Global Engagement Center for a project focused on disrupting the recruitment and radicalization of ISIS members. It sounds like the proposal focused heavily on creating fake online personas, so it’s basically a different application of the same fake-persona services Psy Group and Cambridge Analytica offer in the political arena.
And it turns out the State Department’s Global Engagement Center did indeed sign a contract with Cambridge Analytica’s parent company, SCL Group, last year. Additionally, one of the contracts Psy Group and Cambridge Analytica jointly submitted to the US State Department also included SCL. Although it’s unclear if that contract involved Cambridge Analytica, because it didn’t include provisions for subcontractors, and it didn’t involve social media and was focused on in-person interviews. So while we don’t know how successful Cambridge Analytica and Psy Group were in their mutual hunt for government contracts, SCL was successful. And if SCL was getting lots of other contracts, who knows how many of them also involved Cambridge Analytica and/or Psy Group.
We’re also learning that Psy Group appears to have shut itself down in February of 2018, shortly after George Nader was interviewed by Robert Mueller’s grand jury. But it doesn’t appear to be a real shutdown, and it sounds like Psy Group has quietly reopened under the new name “WhiteKnight”. Let’s not forget that Cambridge Analytica appears to have already done the same thing, shutting down only to quietly reopen as “Emerdata”. So for all we know there’s already a new WhiteKnight/Emerdata non-disclosure agreement in place for the purpose of further joint bidding on government contracts. But as the following story makes clear, one thing we do know for sure at this point is that if Cambridge Analytica and/or Psy Group end up getting government contracts, they’re going to go to great lengths to hide it:
“Special Counsel Robert Mueller’s team has asked about flows of money into the Cyprus bank account of a company that specialized in social-media manipulation and whose founder reportedly met with Donald Trump Jr. in August 2016, according to a person familiar with the investigation.”
So the Mueller probe is looking into money-flows of Psy Group’s Cyprus bank account, along with the activities of George Nader (who pitched Psy Group to the Trump team in August 2016) and this interest from Mueller appears to have led to the sudden shutdown of the company a few months ago:
Although the sudden shutdown of Psy Group appears to really be a secret rebranding. Psy Group is apparently now WhiteKnight, a rebranding the company has been working on for a while, it seems, since WhiteKnight was hired by Nader to do a post-election analysis of the role social media played in the 2016 election:
Just imagine how fascinating WhiteKnight’s post-election analysis of the role social media played must be, since it was basically conducted by Psy Group, a social media manipulation firm that either executed much of the most egregious (and effective) social media manipulation itself or worked directly with the worst perpetrators, like Cambridge Analytica. There are probably quite a few insights in that report that wouldn’t be available to other firms.
So what kinds of secrets is Psy Group hoping to keep hidden with its shutdown/rebranding move? Well, some of those secrets presumably involve the alliance Psy Group created with Cambridge Analytica shortly after Trump’s victory, culminating in the December 14, 2016, mutual non-disclosure agreement (signed one day before the Trump Tower meeting with the crown prince of the UAE to set up the Seychelles meeting). And note how the proposal Psy Group and Cambridge Analytica pitched to conduct “messaging/influence operations in well over a dozen languages and dialects” was also submitted with Cambridge Analytica’s parent company, SCL. So Psy Group’s alliance with Cambridge Analytica was probably really an alliance with Cambridge Analytica’s parent company too:
Another point to keep in mind regarding the timing of that December 14, 2016, mutual non-disclosure agreement: the Seychelles meeting appears to be a giant pitch designed to realign Russia, indicating the UAE was clearly very interested in exploiting Trump’s victory in a big way. They were ‘cashing in’, metaphorically. So it seems reasonable to suspect that Psy Group, which is closely affiliated with the UAE’s crown prince, would also be quite interested in literally ‘cashing in’ in a very big way too during that December 2016 transition period. In other words, while we don’t know what Psy Group and Cambridge Analytica decided to not disclose with their non-disclosure agreement, we can be pretty sure it was extremely ambitious at the time.
But at this point, the only proposals for US government contracts that we do know about were for an anti-ISIS social media operation for the US State Department’s Global Engagement Center:
And one contract we do know about at this point that was awarded to this network of companies was actually awarded to Cambridge Analytica’s parent company, SCL:
So there’s one government contract that SCL won following Trump’s election, though Psy Group/Cambridge Analytica may or may not have been involved with it.
And that’s all we know at this point about the work Psy Group may or may not have done for the US government following Trump’s victory. Except we also know that Psy Group and Cambridge Analytica weren’t competing, so whatever contract Psy Group got, Cambridge Analytica may have received too. And that indicates, at a minimum, a willingness for these two companies to work VERY closely together. So close they risk revealing internal secrets to each other. Don’t forget, Psy Group and Cambridge Analytica are ostensibly competitors offering similar services to the same types of clients. And shortly after the election they were willing to sign an agreement to jointly compete for contracts that they would work on together. One of the massive questions looming over this whole story is whether or not Psy Group and Cambridge Analytica — two direct competitors — were not just on the same team but actually working closely together during the 2016 election to help elect Trump. And thanks to these recent revelations we now know Psy Group and Cambridge Analytica were at least willing to work extremely closely with each other immediately after the election on a variety of different government contracts. That seems like a relevant clue in this whole mess.
Oh look, a new scary Cambridge Analytica operation was just discovered. Or rather, it’s a scary new story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which also played a key role in the pro-Brexit campaign and later assisted the West-leaning East Ukraine politician Sergei Taruta. It’s like these companies can’t go a week without a new scary story. Which is extra scary.
For scary starters, the article notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available on Facebook that appear to have been developed by AIQ. So if Facebook really was trying to kick Cambridge Analytica off its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and is registered to AIQ.
Another part of what makes the following article scary is that it’s a reminder that you don’t necessarily need to have downloaded a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of that data was scraped from public Facebook posts.
Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look at what you personally have said about various topics on social media. A service where someone can type you into a search engine and AIQ’s product will serve up a list of the various political posts and politically-relevant “Likes” you’ve made. That’s what AIQ was offering, and the newly discovered database contained the information for exactly that.
In this case, the Financial Times has somehow gotten its hands on a bunch of Facebook-related data held internally by AIQ. It turns out that AIQ stored a list of 759,934 Facebook users in a table that included home addresses, phone numbers and email addresses for some profiles. Additionally, the files contain those people’s political Facebook posts and likes. It all appears to be part of a software package AIQ was developing for a client that would allow them to search the political posts and “Likes” people made on Facebook. A personal political browser that could give a far more detailed peek into someone’s politics than traditionally available information like political donation records and party affiliation.
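And functionally, there’s nothing exotic about such a ‘personal political browser’: once a table like that exists, the search service is little more than a filtered query. Here’s a minimal sketch of the idea (the schema and rows are invented; this is not AIQ’s actual software):

```python
# Minimal sketch of a "personal political post" lookup over scraped data.
# Schema and rows are invented; the point is how little engineering this
# takes once a table like the one described above exists.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE posts (
        name TEXT, email TEXT, phone TEXT, post TEXT, sentiment TEXT
    )
""")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?, ?, ?)",
    [
        ("Jane Voter", "jane@example.com", "555-0101",
         "Liked: Repeal the widget tax", "positive"),
        ("Jane Voter", "jane@example.com", "555-0101",
         "Candidate X is a disaster", "negative"),
    ],
)

# The "search engine": type in a person, get their political footprint.
for row in conn.execute(
    "SELECT post, sentiment FROM posts WHERE name = ?", ("Jane Voter",)
):
    print(row)
```

In other words, the hard part is acquiring and labeling the data, not building the search on top of it.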
Also keep in mind that we already know Cambridge Analytica collected large amounts of information on 87 million Facebook accounts. So the 759,934 number should not be seen as the total number of people AIQ has similar such files on. It could just be a particular batch selected by that client. A batch of 759,934 people a client just happens to want to make personalized political searches on.
It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees. A goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. And an ideological purge of the State Department is reportedly already underway. So it will be interesting to learn if this AIQ service is being used for such purposes.
It’s unclear if the data in these files was collected through a Facebook app developed by AIQ — in which case the people in the file at least had to click the “I accept” part of installing the app — or if the data was collected simply from scraping publicly available Facebook posts. Again, it’s a reminder that pretty much ANYTHING you do on a publicly accessible Facebook post, even a ‘Like’, is probably getting collected by someone, aggregated, and resold. Including, perhaps, by AggregateIQ:
““The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.””
As security researcher Chris Vickery put it, the whole is greater than the sum of its parts when you look at the synergistic way the various tools developed by companies like Cambridge Analytica and AIQ work together. Synergy in the service of creating a mass manipulation service with personalized micro-targeting capabilities.
And that synergistic mass manipulation is part of why it’s disturbing to hear that Vickery just discovered 13 AIQ apps still available on Facebook, after Cambridge Analytica was supposedly banned and after all the bad publicity this scandal caused Facebook. The fact that there are still Cambridge Analytica-affiliated apps suggests Facebook either really, really, really likes Cambridge Analytica or is just really, really bad at app oversight:
“However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.”
“AIQ Johnny Scraper”. They weren’t even hiding it. But at least the Johnny Scraper app sounds relatively innocuous.
The personal political post search engine service, on the other hand, sounds far from innocuous. A database on 759,934 Facebook users created by AIQ software that tracked which users liked a particular page or were posting positive or negative comments. In other words, software that interprets what people write about politics on Facebook and aggregates that data into a search engine for clients. You have to wonder how sophisticated that automated interpretation software is at this point. Whatever the answer, AIQ’s text interpretation software is only going to get more sophisticated. That’s a given.
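As a rough illustration of what that interpretation step might involve at its crudest, here’s a toy keyword-based classifier. This is purely an assumption for illustration, almost certainly far cruder than whatever AIQ actually runs:

```python
# Toy illustration (an assumption, not AIQ's method): tag a post as
# political or not, and as positive or negative, before it goes into
# the searchable database.
POLITICAL_TERMS = {"election", "candidate", "brexit", "vote", "party"}
POSITIVE_TERMS = {"support", "love", "proud", "great"}
NEGATIVE_TERMS = {"oppose", "hate", "corrupt", "terrible"}

def classify(post: str) -> dict:
    words = set(post.lower().split())
    return {
        "political": bool(words & POLITICAL_TERMS),
        "sentiment": ("positive" if words & POSITIVE_TERMS
                      else "negative" if words & NEGATIVE_TERMS
                      else "neutral"),
    }

print(classify("Proud to vote for this candidate"))
# {'political': True, 'sentiment': 'positive'}
```

Even something this crude, applied to hundreds of thousands of accounts, produces a searchable political profile of each person.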
Someday that software will probably be able to write its own synopsis of a person better than a human could. Who knows when that kind of software will arrive, but someday it will, and companies like AIQ will be there to exploit it if that’s legal. That’s also a given.
And this 759,934 person database of political Likes and written political comments was what AIQ compiled for just one client.
And for all we know, AIQ’s database could have been curated from publicly available posts rather than from AIQ app users, highlighting how anything publicly done on Facebook, even a Like, is going to be collected by someone and probably sold.
You are what you Like in this commercial space. And we’re all in this commercial space to some extent. There really is a commercially available profile of you. It’s just distributed between the many different data brokers offering slices of it.
Another key dynamic in all this is that Facebook’s business model appears to be a combination of exploiting the vast information monopoly it possesses and an opposing model of effectively selling off little chunks of that data by making it available to app developers. There’s an obvious tension between exploiting your data monopoly and selling it off, but that appears to be the most profitable path forward. And that’s probably the business model AIQ was offering with the data it collected from Facebook: analyze the Facebook data gathered through apps and public scraping, categorize it (political or non-political comments, and whether they’re positive or negative), and then sell slices of that vast internally curated content to clients.
Aggregate as much data as possible. Analyze it. And offer pieces of that curated data pile to clients. That appears to be a business model of choice in this commercial big data arena, which is why we should assume AIQ and Cambridge Analytica were offering similar services and shouldn’t assume this particular database of 759,934 Facebook accounts is the only one of its kind. Especially given the 87 million profiles already scraped.
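Continuing the hypothetical sketch from above, the “offer pieces of the pile” step could be as simple as filtering the curated records on whatever criteria a client pays for. The record fields here are illustrative assumptions:

```python
# Curated records of the kind the hypothetical classifier above might emit.
records = [
    {"name": "Jane Doe", "political": True,  "sentiment": "positive", "topic": "brexit"},
    {"name": "John Roe", "political": True,  "sentiment": "negative", "topic": "election"},
    {"name": "Ann Poe",  "political": False, "sentiment": "neutral",  "topic": None},
]

def slice_for_client(data, **criteria):
    """Return only the records matching a client's purchase criteria."""
    return [r for r in data if all(r.get(k) == v for k, v in criteria.items())]

# e.g. a client buying everyone who posted negatively about the election:
print(slice_for_client(records, political=True, sentiment="negative"))
```

The vendor keeps the whole pile; each client only ever sees the slice they paid for, which is part of what preserves deniability.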
And this is a business model that’s going to apply to far more than just Facebook content. The whole spectrum of information collected on everyone is going to be part of this commercial space. And that’s part of what’s so scary: the data that gets fed into these independent Big Data repositories like the AIQ/Cambridge Analytica database is increasingly going to be the curated data provided by other Big Data providers in the same business. Everyone is collecting and analyzing the curated data everyone else regurgitates out. Just as Cambridge Analytica and AIQ offer a slew of separate interoperable services with a ‘whole is greater than the sum’ synergistic quality, the entire Big Data industry is going to have a similar quality. It’s a competitive yet cooperative division of labor. Cambridge Analytica and AIQ are just the extra scary members of a synergistic industry-wide team effort in the service of maximizing the profits to be made from selling everyone’s data.
It’s that time again. Time to learn how the Cambridge Analytica/Facebook scandal just got worse. So what’s the new low? Well, it turns out Facebook hasn’t just been sharing egregious amounts of Facebook user data with app developers. Device makers, like Apple and Samsung, have also been given similar access to user data. At least 60 device makers are known thus far.
Except, of course, it’s worse: these device makers have actually been given EVEN MORE data than Facebook app developers received. For example, Facebook allowed the device makers access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. And some device makers could access personal information from users’ friends who thought they had turned off any sharing. So the “friends permissions” option that allowed Cambridge Analytica’s app to collect data on 87 million Facebook users even though just ~300,000 people used the app remained available to device manufacturers even after Facebook phased it out for app developers in 2014–2015.
Beyond that, the New York Times examined the kind of information gathered from a BlackBerry device owned by one of its reporters and found that it wasn’t just collecting identifying information on all of the reporter’s friends. It was also grabbing identifying information on those friends’ friends. That single BlackBerry was able to retrieve identifying information on nearly 295,000 people!
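Some back-of-the-envelope arithmetic shows why a single device reaches numbers like that: reach grows multiplicatively with each hop out into the friend graph. The friend counts below are illustrative assumptions chosen to land near the reported figure, not the reporter’s actual numbers:

```python
# Why "friends of friends" access explodes: one connected account,
# two hops out into the friend graph.
seed_friends = 550            # friends of the one connected account (assumed)
avg_friends_per_friend = 536  # average friend count of each friend (assumed)

one_hop = seed_friends
two_hops = seed_friends * avg_friends_per_friend  # ignores overlap between friend lists

print(f"one hop:  ~{one_hop:,} profiles")
print(f"two hops: ~{two_hops:,} profiles")  # ~294,800, roughly the scale the NYT found
```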
Facebook justifies all this by arguing that the device makers are basically an extension of Facebook. The company also asserts that there were strict agreements on how the data could be used. But the main loophole it cites is that Facebook viewed its hardware partners as “service providers,” like a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. By categorizing these device makers as service providers, Facebook is able to get around a 2011 consent decree it signed with the US Federal Trade Commission over previous privacy violations. According to that consent decree, Facebook does not need to seek additional permission to share friend data with service providers.
So it’s not just Cambridge Analytica and the thousands of app developers who have been scooping up mountains of Facebook user data without people realizing it. The device makers have been doing it too. More so. Much, much more so:
“Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.”
At least 60 device makers are sitting on A LOT of Facebook data. Note how NONE of them acknowledged this before this report came out, even as the Cambridge Analytica scandal was unfolding. It’s one of those quiet lessons in how the world unfortunately works.
And these 60+ device makers were able to access the data of users’ friends without their consent, even when those friends had changed their privacy settings to bar any sharing:
“Most of the partnerships remain in effect, though Facebook began winding them down in April.”
Yep, these data sharing partnerships largely remain in effect and didn’t end in 2014–2015 when the app developers lost access to this kind of data. It’s only now, as the Cambridge Analytica scandal unfolds, that these partnerships are being wound down.
This was all done despite a 2011 consent decree that barred Facebook from overriding users’ privacy settings without first getting explicit consent. Facebook simply categorized the device makers as “service providers,” exploiting a “service provider” loophole in the decree.
It’s also worth recalling that Facebook made similar excuses for allowing app developers to grab users’ friends data, claiming that the data was solely going to be used for “improving user experiences.” Which makes Facebook’s explanation of how the device maker data sharing program was very different from the app developer data sharing program rather amusing, because according to Facebook, the device partners can use Facebook data only to provide versions of “the Facebook experience” (which implicitly admits that app developers were using that data for a lot more than just improving user experiences):
““These partnerships work very differently from the way in which app developers use our platform,” said Ime Archibong, a Facebook vice president. Unlike developers that provide games and services to Facebook users, the device partners can use Facebook data only to provide versions of “the Facebook experience,” the officials said.” LOL!
Of course, it’s basically impossible for Facebook to know what device makers were doing with this data because, just like the app developers, these device manufacturers had the option of keeping the Facebook data on their own servers.
And this data privacy nightmare apparently all started in 2007, when Facebook began building private APIs for device makers.
So what kind of data were device manufacturers actually collecting? Well, it’s unclear if all device makers got the same level of access. But BlackBerry, for example, could access 50 types of information on users and their friends. Information like Facebook users’ relationship status, religion, political leaning and upcoming events.
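For a sense of what that could look like in practice, here’s a hypothetical record limited to just the field types named in the reporting. The field names and values are assumptions; the reporting only says there were roughly 50 such information types in total:

```python
# Hypothetical shape of one friend record, restricted to the named fields.
friend_record = {
    "relationship_status": "married",
    "religion": "unspecified",
    "political_leaning": "liberal",
    "upcoming_events": ["Town hall fundraiser, June 12"],
    # ...plus roughly 45 more information types, per the reporting
}
print(friend_record)
```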
And as the New York Times discovered after testing a reporter’s BlackBerry device, BlackBerry was able to grab information on friends of friends, allowing the one device they tested to collect identifying information on nearly 295,000 Facebook users.
And this information was collected and sent to the “BlackBerry Hub” immediately after the reporter connected the device to his Facebook account.
Not surprisingly, Facebook whistle-blower Sandy Parakilas, who left the company in 2012, recalls this data sharing arrangement triggering discussions within Facebook as early as 2012. So Facebook has had internal concerns about this kind of data sharing for the past six years. Concerns that were apparently ignored.
Also keep in mind that the main concern Sandy Parakilas recalls hearing Facebook executives express over the app developer data sharing back in 2012 was that these developers were collecting so much information that they were going to be able to create their own social networks. As Parakilas put it, “They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people...They were worried that they were going to build their own social networks.”
Well, the major device makers have undoubtedly been gathering far more information than major app developers, especially when you factor in the “friends of friends” option and the fact that they’ve apparently had access to this kind of data up until now. And that means these device makers must already possess remarkably detailed social networks of their own at this point.
So when you hear Facebook executives characterizing these device manufacturers as “extensions of Facebook”...
...it’s probably the most honest thing Facebook has said about this entire scandal.
Here’s an angle to the Facebook data privacy scandal that has received surprisingly little attention, because when it comes to privacy violations this just might be the worst one we’ve seen: it turns out one of the types of data Facebook gave app developers permission to access was the contents of users’ private Inboxes.
Yep, it’s not just your Facebook ‘profile’ of data points Facebook has collected on you. Or all the things you ‘liked’. App developers apparently could also gain access to the private messages you received. And much like the ‘friends permission’ option Cambridge Analytica exploited to get profile information on all the friends of app users without those friends’ permission, this ability to access the contents of your inbox is obviously a privacy violation of the people sending you those messages.
The one positive aspect of this whole story is that at least app developers had to let users know they were granting access to their inbox. So users presumably had to agree somehow. And Facebook states that users had to explicitly give permission for this. So at least this wasn’t a default app permission.
But when asked about the language used in this notification, Facebook had no response. So while we can’t assume that everyone who used Facebook apps was knowingly giving developers access to their private inbox messages, we also have no idea how many people were tricked into it with deceptive language in the permissions notifications.
Of course, one of the big questions is whether or not this inbox permissions feature got exploited by Cambridge Analytica. Yes, and that’s actually how we learned of its existence: when Facebook started sending out notifications to users who may have been impacted by the Cambridge Analytica data collection (which impacted 87 million users) via the “This Is Your Digital Life” app created by Aleksandr Kogan, it sent a notification informing people that they may have had their personal messages collected.
So Facebook casually informed users that only a “small number of people” who used the Cambridge Analytica “This Is Your Digital Life” app may have given access to “messages from you”. Did they actually give developers access to messages from you? That’s left a mystery.
And notice that the language in that Facebook notification says user posts were also made available to developers. That’s one of the things that’s never been entirely clear in the reporting on this topic: were developers given access to the actual private posts people make? The language of the notification is ambiguous as to whether apps could access private posts or only public posts, but given the way everything else has played out in this story, it seems highly likely that private posts were included.
The inbox permission was phased out in 2014 along with the “friends permission” option and many of the other permissions Facebook used to grant app developers. There was a one-year grace period for developers to adjust to the new rules that took effect in April of 2015. But as the article notes, developers actually retained access to the inbox permission until October 6 of 2015. That’s well into the 2016 US election cycle, which raises the fascinating possibility that this ‘feature’ could actually have been used to spy on the US political campaigns. Or the UK Brexit campaign. Or any other political campaign around the world at that time. Or anything else of importance across the world from 2010 to 2015, when these mailbox reading options were available to app developers.
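For context on what that permission looked like in practice, here’s a minimal sketch of how an app might have read message threads under Facebook’s long-since-removed Graph API v1.0. The endpoint and scope names reflect my understanding of that old API and should be treated as assumptions:

```python
import requests

# Hypothetical token from a user who granted the since-removed
# "read_mailbox" scope when installing the app.
ACCESS_TOKEN = "user-token-with-read_mailbox-scope"

resp = requests.get(
    "https://graph.facebook.com/v1.0/me/inbox",  # old v1.0 inbox endpoint (assumption)
    params={"access_token": ACCESS_TOKEN},
)

# Each returned thread included messages written by the user's
# correspondents -- people who never installed the app themselves.
for thread in resp.json().get("data", []):
    print(thread)
```

The key point is the asymmetry: one person’s “I accept” click exposed the words of everyone who ever messaged them.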
And that’s what makes it so amazing that this particular story wasn’t bigger: back in April, Facebook acknowledged that it gave almost anyone the potential capacity to spy on private Facebook messages, and users had almost no idea this was going on. That seems like a pretty massive scandal: