Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook. It would be a well-deserved unfriending after the scandalous revelations in a recent series of articles centered on the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with its far-right goals and questionable actions.
And it turns out those questionable actions by Cambridge Analytica involve a far larger and more scandalous Facebook policy, brought to light by another whistle-blower: Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012.
So here’s a rough breakdown of what’s been learned so far:
According to Christopher Wylie, Cambridge Analytica was “harvesting” massive amounts of data off of Facebook from people who did not give their permission by exploiting a Facebook loophole. This “friends permissions” loophole allowed app developers to scrape information not just from the Facebook profiles of the people who agreed to use their apps but from their friends’ profiles too. In other words, if a Facebook friend of yours downloaded Cambridge Analytica’s app, Cambridge Analytica was allowed to grab private information from your Facebook profile without your permission. And you would never know it.
So how many profiles was Cambridge Analytica able to “harvest” using this “friends permission” feature? About 50 million, and only a tiny fraction (~270,000) of those 50 million people actually agreed to use Cambridge Analytica’s app. The rest were all their friends. So Facebook literally allowed the connectivity of its users to be used against them.
Keep in mind that this isn’t a new revelation. There were reports last year about how Cambridge Analytica paid ~100,000 people a dollar or two (via Amazon’s Mechanical Turk micro-task platform) to take an online survey. But the only way they could be paid was to download an app that gave Cambridge Analytica access to the profiles of all their Facebook friends, eventually yielding ~30 million “harvested” profiles. According to these new reports, though, that number is closer to 50 million profiles.
Before that, there was also a report from December 2015 about Cambridge Analytica’s building of “psychographic profiles” for the Ted Cruz campaign. And that report also included the fact that this involved Facebook data harvested largely without users’ permission.
So the fact that Cambridge Analytica was secretly harvesting private Facebook user data without permission isn’t the big revelation here. What’s new is the revelation that what Cambridge Analytica did was integral to Facebook’s business model for years, and very widespread.
This is where Sandy Parakilas comes into the picture. According to Parakilas, the profile-scraping loophole that Cambridge Analytica was exploiting with its app was routinely exploited by possibly hundreds of thousands of other app developers for years. Yep. It turns out that Facebook had an arrangement going back to 2007 in which the company took a 30 percent cut of the money app developers made off their Facebook apps, and in exchange those developers were given the ability to scrape the profiles of not just the people who used their apps but also their friends. In other words, Facebook was essentially selling the private information of its users to app developers. Secretly. Well, except it wasn’t a secret to all those app developers. That’s also part of this scandal.
This “friends permission” feature started getting phased out around 2012, although it turns out Cambridge Analytica was one of the very last apps allowed to use it, well into 2014.
Facebook has tried to defend itself by asserting that it only made this data available for things like academic research and that Cambridge Analytica was therefore misusing it. And academic research was in fact the cover story Cambridge Analytica used. Cambridge Analytica actually set up a shell company, Global Science Research (GSR), that was run by a Cambridge University professor, Aleksandr Kogan, and claimed to be purely interested in using the Facebook data for academic research. The collected data was then sent off to Cambridge Analytica. But according to Parakilas, Facebook allowed developers to use this “friends permissions” feature for reasons as vague as “improving user experiences”, and he saw plenty of apps harvesting this data for commercial purposes. Even worse, both Parakilas and Wylie paint a picture of Facebook releasing this data and then doing almost nothing to ensure that it wasn’t misused.
So we’ve learned that Facebook allowed app developers to “harvest” private data on Facebook users without their permission from 2007–2014, and now we get to perhaps the most chilling part: according to Parakilas, this data is almost certainly floating around in the black market. It was so easy to set up an app and start collecting this kind of data that anyone with basic app-development skills could start trawling Facebook for data. And a majority of Facebook users probably had their profiles secretly “harvested” during this period. If true, that means there’s likely a massive black market of Facebook user profiles floating around out there, and Facebook has done little to nothing to address it.
Parakilas, whose job it was to police data breaches by third-party software developers from 2011–2012, understandably grew quite concerned over the risks to user data inherent in this business model. So what did Facebook’s leadership do when he raised these concerns? They essentially took a “do you really want to know how this data is being used?” attitude and actively discouraged him from investigating how the data might be abused. Intentionally not knowing about abuses was another part of the business model. Crackdowns on “rogue developers” were very rare, and the approval of Facebook CEO Mark Zuckerberg himself was required to get an app kicked off the platform.
Facebook has been publicly denying allegations like this for years. It was the public denials that led Parakilas to come forward.
And it gets worse. It turns out that Aleksandr Kogan, the University of Cambridge academic who ended up teaming up with Cambridge Analytica and built the app that harvested the data, had a remarkably close working relationship with Facebook. So close that Kogan actually co-authored an academic study published in 2015 with Facebook employees. In addition, one of Kogan’s partners in the data harvesting, Joseph Chancellor, was also an author on the study and went on to join Facebook a few months after it was published.
It also looks like Steve Bannon was overseeing this entire process, although he claims to know nothing.
Oh, and Palantir, the private intelligence firm with deep ties to the US national security state owned by far right Facebook board member Peter Thiel, appears to have had an informal relationship with Cambridge Analytica this whole time, with Palantir employees reportedly traveling to Cambridge Analytica’s office to help build the psychological profiles. And this state of affairs is an extension of how the internet has been used from its very conception a half century ago.
And that’s all part of why the Great Unfriending of Facebook really is long overdue. It’s one really big reason to delete your Facebook account, comprised of many, many small egregious reasons.
So let’s start taking a look at those many small reasons to delete your Facebook account with a New York Times story about Christopher Wylie, his account of the origins of Cambridge Analytica, and the crucial role Facebook “harvesting” played in providing the company with the data it needed to carry out its chief financiers’ goal: waging the kind of ‘culture war’ the billionaire far-right Mercer family and Steve Bannon wanted to wage:
The New York Times
How Trump Consultants Exploited the Facebook Data of Millions
by Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr;
03/17/2018

As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem.
The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.
Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”
“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”
Details of Cambridge’s acquisition and use of Facebook data have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate about the merits of the firm’s so-called psychographic modeling techniques.
But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
Cambridge paid to acquire the personal information through an outside researcher who, Facebook says, claimed to be collecting it for academic purposes.
During a week of inquiries from The Times, Facebook downplayed the scope of the leak and questioned whether any of the data still remained out of its control. But on Friday, the company posted a statement expressing alarm and promising to take action.
“This was a scam — and a fraud,” Paul Grewal, a vice president and deputy general counsel at the social network, said in a statement to The Times earlier on Friday. He added that the company was suspending Cambridge Analytica, Mr. Wylie and the researcher, Aleksandr Kogan, a Russian-American academic, from Facebook. “We will take whatever steps are required to see that the data in question is deleted once and for all — and take action against all offending parties,” Mr. Grewal said.
Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.
In Britain, Cambridge Analytica is facing intertwined investigations by Parliament and government regulators into allegations that it performed illegal work on the “Brexit” campaign. The country has strict privacy laws, and its information commissioner announced on Saturday that she was looking into whether the Facebook data was “illegally acquired and used.”
In the United States, Mr. Mercer’s daughter, Rebekah, a board member, Mr. Bannon and Mr. Nix received warnings from their lawyer that it was illegal to employ foreigners in political campaigns, according to company documents and former employees.
Congressional investigators have questioned Mr. Nix about the company’s role in the Trump campaign. And the Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.
While the substance of Mr. Mueller’s interest is a closely guarded secret, documents viewed by The Times indicate that the firm’s British affiliate claims to have worked in Russia and Ukraine. And the WikiLeaks founder, Julian Assange, disclosed in October that Mr. Nix had reached out to him during the campaign in hopes of obtaining private emails belonging to Mr. Trump’s Democratic opponent, Hillary Clinton.
The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.” Only a tiny fraction of the users had agreed to release their information to a third party.
“Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”
Still, he added, “it’s a serious abuse of our rules.”
Reading Voters’ Minds
The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.
Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.
The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.
Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.
Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.
The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.
...
Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.
But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.
Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.
All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”
He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.
Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.
“We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.”
Mr. Nix tells a different story. Appearing before a parliamentary committee last month, he described Dr. Kogan’s contributions as “fruitless.”
An International Effort
Just as Dr. Kogan’s efforts were getting underway, Mr. Mercer agreed to invest $15 million in a joint venture with SCL’s elections division. The partners devised a convoluted corporate structure, forming a new American company, owned almost entirely by Mr. Mercer, with a license to the psychographics platform developed by Mr. Wylie’s team, according to company documents. Mr. Bannon, who became a board member and investor, chose the name: Cambridge Analytica.
The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European.
But in July 2014, an American election lawyer advising the company, Laurence Levy, warned that the arrangement could violate laws limiting the involvement of foreign nationals in American elections.
In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, the lawyer, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.”
In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican.
Cambridge Analytica, in its statement to The Times, said that all “personnel in strategic roles were U.S. nationals or green card holders.” Mr. Nix “never had any strategic or operational role” in an American election campaign, the company said.
Whether the company’s American ventures violated election laws would depend on foreign employees’ roles in each campaign, and on whether their work counted as strategic advice under Federal Election Commission rules.
Cambridge Analytica appears to have exhibited a similar pattern in the 2016 election cycle, when the company worked for the campaigns of Mr. Cruz and then Mr. Trump. While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees.
Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.
Cambridge executives have offered conflicting accounts about the use of psychographic data on the campaign. Mr. Nix has said that the firm’s profiles helped shape Mr. Trump’s strategy — statements disputed by other campaign officials — but also that Cambridge did not have enough time to comprehensively model Trump voters.
In a BBC interview last December, Mr. Nix said that the Trump efforts drew on “legacy psychographics” built for the Cruz campaign.
After the Leak
By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.
Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”
Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.
Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”
Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”
But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.
While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.
Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations.
All the scrutiny appears to have damaged Cambridge Analytica’s political business. No American campaigns or “super PACs” have yet reported paying the company for work in the 2018 midterms, and it is unclear whether Cambridge will be asked to join Mr. Trump’s re-election campaign.
In the meantime, Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” he puts it. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.
———-
“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”
Cambridge Analytica was supposed to be the arsenal of weapons to fight the culture war Cambridge Analytica’s leadership wanted to wage. But that arsenal couldn’t be built without data on what makes us ‘tick’. That’s where Facebook profile harvesting came in:
The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.
Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”
...
And the acquisition of these 50 million Facebook profiles had never been acknowledged by Facebook, until now. And most or perhaps all of that data is still in the hands of Cambridge Analytica:
...
But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
...
And Facebook isn’t alone in suddenly discovering that its data was “harvested” by Cambridge Analytica. Cambridge Analytica itself wouldn’t admit this either. Until now. Now Cambridge Analytica admits it did indeed obtain Facebook’s data. But the company blames it all on Aleksandr Kogan, the Cambridge University academic who ran the front company that paid people to take the psychological-profile surveys, for violating Facebook’s data usage rules. It also claims it deleted all the “harvested” information two years ago, as soon as it learned there was a problem. That’s Cambridge Analytica’s new story and it’s sticking to it. For now:
...
Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.
...
But Christopher Wylie has a very different recollection of events. In 2013, Wylie was a 24-year-old political operative with ties to veterans of President Obama’s campaigns who was interested in using psychological traits to affect voters’ behavior. He even had a team of psychologists and data scientists, some of them affiliated with Cambridge University (where Aleksandr Kogan was also working at the time). And that expertise in psychological profiling for political purposes is why Mr. Nix recruited Wylie and his team.
Then Nix had a chance meeting with Steve Bannon and Robert Mercer. Mercer showed interest in the company because he believed it could make him a Republican kingmaker, while Bannon was focused on the possibility of using personality profiling to shift America’s culture and rewire its politics. Mercer ended up helping finance a $1.5 million pilot project: polling voters and testing psychographic messaging in Virginia’s 2013 gubernatorial race:
...
The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.

Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.
The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.
Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.
Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.
...
So the pilot project proceeded, but there was a problem: Wylie’s team simply did not have the data it needed. They only had the kind of data traditional analytics firms used: voting records and consumer purchase histories. And getting the kind of data they wanted to gain insight into voters’ neuroticisms and psychological traits could be very expensive:
...
The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win....
Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.
But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.
...
And that’s where Aleksandr Kogan enters the picture: First, Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of setup he needed. Researchers there claimed to have developed techniques for mapping personality traits based on what people “liked” on Facebook. Better yet, this team was already paying users small sums to take a personality quiz and download an app that would scrape private information from their Facebook profiles and from their friends’ Facebook profiles. In other words, Cambridge University’s Psychometrics Centre was already employing exactly the same kind of “harvesting” model that Kogan and Cambridge Analytica eventually adopted.
But there was a problem for Wylie and his team: Cambridge University’s Psychometrics Centre declined to work with them:
...
Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
...
But it wasn’t a particularly big problem, because Wylie found another Cambridge University psychology professor who was familiar with the techniques and willing to do the job: Aleksandr Kogan. So Kogan built his own psychological profiling app and began harvesting data for Cambridge Analytica in June 2014. Kogan was even allowed to keep the harvested data for his own research, according to his contract with Cambridge Analytica. According to Facebook, the only thing Kogan disclosed to the company, and to the users of his app in the fine print, was that he was collecting information for academic purposes. And Facebook never appears to have attempted to verify that claim:
...
When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”
...
In the end, Kogan’s app managed to “harvest” 50 million Facebook profiles based on a mere 270,000 people actually signing up for Kogan’s app. So for each person who signed up for the app there were ~185 other people who had their profiles sent to Kogan too.
And 30 million of those profiles contained information, like places of residence, that allowed the firm to match those Facebook profiles with other records (presumably non-Facebook records) and build psychographic profiles, implying that those 30 million records were mapped to real-life people:
...
He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.
...
So this harvesting starts in mid-2014, but by early 2015, Wylie and more than half his original team leave the firm to start a rival firm, although it sounds like concerns over the far right causes they were working for were also behind their departure:
...
By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.

Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”
...
Finally, this whole scandal goes public. Well, at least partially: At the end of 2015, the Guardian reported on the Facebook profile collection scheme Cambridge Analytica was running for the Ted Cruz campaign. Facebook didn’t publicly acknowledge the truth of this report, but it did publicly state that it was “carefully investigating this situation.” Facebook also sent a letter to Cambridge Analytica demanding that it destroy this data...except the letter wasn’t sent until August of 2016.
...
Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.

Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”
...
Facebook now claims that “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But, of course, this was a lie. The New York Times was shown sets of the raw data.
And even more disturbing, a former Cambridge Analytica employee claims he recently saw hundreds of gigabytes of that data on Cambridge Analytica’s servers. Unencrypted. Which means the data could potentially be grabbed by any Cambridge Analytica employee with access to those servers:
...
Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”

But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.
While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.
...
So, to summarize the key points from this New York Times article:
1. In 2013, Cambridge Analytica is formed when Alexander Nix, then a salesman for the small elections division at SCL Group, recruits Christopher Wylie and a team of psychologists to help develop a “political data” unit at the company, with an eye on the 2014 US midterms.
2. By chance, Nix and Wylie meet Steve Bannon and Robert Mercer, who are quickly sold on the idea of psychographic profiling for political purposes. Bannon was intrigued by the idea of using this data to wage the “culture war.” Mercer agrees to invest $1.5 million in a pilot project involving the 2013 Virginia gubernatorial race. Their success is limited, as Wylie soon discovers that they don’t have the data they really need to carry out their psychographic profiling project. But Robert Mercer remains committed to the project.
3. Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of data they were seeking: data that was being collected via an app administered through Facebook, where people were paid small amounts of money to take a survey, and in exchange the Psychometrics Centre was allowed to scrape their Facebook profile as well as the profiles of all their Facebook friends.
4. Cambridge University’s Psychometrics Centre rejected Wylie’s offer to work together, but there was another Cambridge University psychology professor who was willing to do so: Aleksandr Kogan. Kogan proceeded to start a company (as a front for Cambridge Analytica) and develop his own app, getting ~270,000 people to download it and give their permission for their profiles to be collected. But using the “friends permission” feature, Kogan’s app ended up collecting roughly 50 million Facebook profiles in total from the friends of those 270,000 people. ~30 million of those profiles were matched to US voters.
5. By early 2015, Wylie and his left-leaning team members leave Cambridge Analytica and form their own company, apparently due to concerns over the far right goals of the firm.
6. Cambridge Analytica goes on to work for the Ted Cruz campaign. In late 2015, it’s reported that Cambridge Analytica’s work for Cruz involved using Facebook data from people who hadn’t given permission for it. Facebook issues a vague statement about how it’s going to investigate.
7. In August 2016, Facebook sends a letter to Cambridge Analytica asserting that the data was obtained and used without permission and must be deleted immediately. The New York Times was just shown copies of exactly that data to write this article. Hundreds of gigabytes of data that is completely outside Facebook’s control.
8. Cambridge Analytica CEO (now former CEO) Alexander Nix told lawmakers that the firm didn’t possess any Facebook data. So he was clearly lying.
9. Finally, a former Cambridge Analytica employee showed the New York Times hundreds of gigabytes of Facebook data. And it was unencrypted, so anyone with access to it could make a copy and give it to whoever they want.
And that’s what we learned from just the New York Times’s version of this story. The Guardian’s Observer was also talking with Christopher Wylie and other Cambridge Analytica whistle-blowers. And while it largely covers the same story as the New York Times report, the Observer article contains some additional details.
1. For starters, the following article notes that Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app and barred it from being sold on or used for advertising. That’s important to note because the stated use of the data grabbed by Aleksandr Kogan’s app was for research purposes. But “improving user experience in the app” is a far more generic reason for grabbing that data than academic research. And that hints at something we’re going to see below from a Facebook whistle-blower: all sorts of app developers were grabbing this kind of data using the ‘friends’ loophole for reasons that had absolutely nothing to do with academic purposes, and Facebook deemed this fine.
2. Facebook didn’t formally suspend Cambridge Analytica and Aleksandr Kogan from the platform until one day before the Observer article was published, more than two years after the initial reports in late 2015 about Cambridge Analytica misusing Facebook data for the Ted Cruz campaign. So if Facebook felt that Cambridge Analytica and Aleksandr Kogan were improperly obtaining and misusing its data, it sure tried hard not to let on until the very last moment.
3. Simon Milner, Facebook’s UK policy director, when asked by UK MPs if Cambridge Analytica had Facebook data, said: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.” Which, again, as we’re going to see, was a total lie according to a Facebook whistle-blower, because Facebook was routinely providing exactly the kind of data Kogan’s app was collecting to thousands of developers.
4. Aleksandr Kogan had a license from Facebook to collect profile data, but only for research purposes, so when he used the data for commercial purposes he was violating his agreement, according to the article. Also, Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps. And as we’re going to see in subsequent articles, it does indeed look like Kogan is correct: he was very open about using the data from the Cambridge Analytica app for commercial purposes, and Facebook had no problem with it.
5. In addition to being a Cambridge University professor, Aleksandr Kogan has links to a Russian university and took Russian grants for research. This will undoubtedly raise speculation about the possibility that Kogan’s data was handed over to the Kremlin and used in the social-media influencing campaign carried out by the Kremlin-linked Internet Research Agency. If so, it’s still important to keep in mind that, based on what we’re going to see from Facebook whistle-blower Sandy Parakilas, the Kremlin could have easily set up all sorts of Facebook apps for collecting this kind of data because apparently anyone could do it as long as the data was for “improving the user experience”. That’s how obscene this situation is. Kogan was not at all needed to provide this data to the Kremlin because it was so easy for anyone to obtain. In other words, we should assume all sorts of governments have this kind of data.
6. The legal letter Facebook sent to Cambridge Analytica in August 2016 demanding that it delete the data was sent just days before it was officially announced that Steve Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him. That sure makes it seem like Facebook knew about Bannon’s involvement with Cambridge Analytica and the fact that Bannon was going to become Trump’s campaign manager and bring Cambridge Analytica into the campaign.
7. Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.”
So as we can see, like the proverbial onion, the more layers you peel back on the story Cambridge Analytica and Facebook have been peddling about how this data was obtained and used, the more acrid and malodorous it gets. With a distinct tinge of BS:
The Guardian
Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach
Whistleblower describes how firm linked to former Trump adviser Steve Bannon compiled user data to target American voters
Carole Cadwalladr and Emma Graham-Harrison
Sat 17 Mar 2018 18.03 EDT
The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in one of the tech giant’s biggest ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box.
A whistleblower has revealed to the Observer how Cambridge Analytica – a company owned by the hedge fund billionaire Robert Mercer, and headed at the time by Trump’s key adviser Steve Bannon – used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.
Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”
Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.
The New York Times is reporting that copies of the data harvested for Cambridge Analytica could still be found online; its reporting team had viewed some of the raw data.
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.
However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.
Cambridge Analytica and Facebook are one focus of an inquiry into data and politics by the British Information Commissioner’s Office. Separately, the Electoral Commission is also investigating what role Cambridge Analytica played in the EU referendum.
...
On Friday, four days after the Observer sought comment for this story, but more than two years after the data breach was first reported, Facebook announced that it was suspending Cambridge Analytica and Kogan from the platform, pending further information over misuse of data. Separately, Facebook’s external lawyers warned the Observer it was making “false and defamatory” allegations, and reserved Facebook’s legal position.
The revelations provoked widespread outrage. The Massachusetts Attorney General Maura Healey announced that the state would be launching an investigation. “Residents deserve answers immediately from Facebook and Cambridge Analytica,” she said on Twitter.
The Democratic senator Mark Warner said the harvesting of data on such a vast scale for political targeting underlined the need for Congress to improve controls. He has proposed an Honest Ads Act to regulate online political advertising the same way as television, radio and print. “This story is more evidence that the online political advertising market is essentially the Wild West. Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, it’s clear that, left unregulated, this market will continue to be prone to deception and lacking in transparency,” he said.
Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.
Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”
Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”
Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.
At the time of the data breach, Wylie was a Cambridge Analytica employee, but Facebook described him as working for Eunoia Technologies, a firm he set up on his own after leaving his former employer in late 2014.
The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.
That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.
“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.
Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.
“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”
Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.
“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”
A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.
Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.
Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.
Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.
The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.
The algorithm and database together made a powerful political tool. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.
“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.
At the time, more than 50 million profiles represented around a third of active North American Facebook users, and nearly a quarter of potential US voters. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.”
Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.
GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.
SCL Elections, an affiliate, worked with Facebook over the period to ensure it was satisfied no terms had been “knowingly breached” and provided a signed statement that all data and derivatives had been deleted, he said. Cambridge Analytica also said none of the data was used in the 2016 presidential election.
Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquires to Nix.
———-
“Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.””
Exploiting everyone’s inner demons. Yeah, that sounds like something Steve Bannon and Robert Mercer would be interested in. And it explains why Facebook data would have been potentially so useful for exploiting those demons. Recall that the original non-Facebook data that Christopher Wylie and the initial Cambridge Analytica team were working with in 2013 and 2014 wasn’t seen as effective. It didn’t have that inner-demon-influencing granularity. And then they discovered the Facebook data available through this app loophole and took things to a different level. Remember when Facebook ran that controversial experiment where it tried to manipulate users’ emotions by altering their news feeds? It sounds like that’s what Cambridge Analytica was basically trying to do, using Facebook ads instead of the news feed, but perhaps in a more microtargeted way.
And that’s all because Facebook’s “platform policy” allowed for the collection of friends’ data to “improve user experience in the app” with the non-enforced request that the data not be sold on or used for advertising:
...
The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.
...
Just imagine how many app developers were using this over the 2007–2014 period when Facebook had this “platform policy” that allowed the capture of friends’ data “to improve user experience in the app”. It wasn’t just Cambridge Analytica that took advantage of this. That’s a big part of the story here.
And yet when Simon Milner, Facebook’s UK policy director, was asked if Cambridge Analytica had Facebook data, he said, “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”:
...
Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”
Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”
...
And note how the article appears to say the data Cambridge Analytica collected on Facebook users included “emails, invoices, contracts and bank transfers that reveal more than 50 million profiles.” It’s not clear if that’s a reference to emails, invoices, contracts and bank transfers involved in setting up Cambridge Analytica, or to emails, invoices, contracts and bank transfers from Facebook users, but if they came from users that would be wildly scandalous:
...
Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.
...
So it will be interesting to see if that point of ambiguity is ever clarified somewhere. Because wow would that be scandalous if emails, invoices, contracts and bank transfers of Facebook users were released through this “platform policy”.
Either way, it looks unambiguously awful for Facebook. Especially now that we learn that the letter Facebook’s lawyers sent in August of 2016 demanding the destruction of the data was suspiciously sent just days before Steve Bannon, a founder and officer of Cambridge Analytica, became Trump’s campaign manager and brought the company into the Trump campaign:
...
The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.

That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.
“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.
...
And the only thing Facebook did to confirm that the Facebook data wasn’t misused, according to Christopher Wylie, was to ask that a box be checked on a form:
...
Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.

“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”
...
And, again, Facebook denied its data was passed along to Cambridge Analytica when questioned by both the US Congress and UK Parliament:
...
Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”
A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.
...
And note how Facebook now admits Aleksandr Kogan did indeed get the data legally. It just wasn’t used properly. That’s why Facebook is saying it shouldn’t be called a “data breach”: the data was obtained through proper channels in the first place:
...
Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.

Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.
...
But Aleksandr Kogan isn’t simply arguing that he did nothing wrong when he obtained that Facebook data via his app. Kogan also argues that he had a “close working relationship” with Facebook, which had granted him permission for his apps, and that everything he did with the data was legal. Kogan’s story is quite notable because, as we’ll see below, there is evidence that his story is the closest to the truth of all the stories we’re hearing: that Facebook was totally fine with Kogan’s apps obtaining the private data of millions of Facebook users’ friends. And Facebook was perfectly fine with how that data was used, or was at least consciously trying not to know how the data might be misused. That’s the picture that’s going to emerge, so keep it in mind when Kogan asserts that he had a “close working relationship” with Facebook. Based on the available evidence, he probably did:
...
Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.
...
Kogan maintains everything he did was legal, and guess what? It probably was legal. That’s part of the scandal here.
And regarding that testimony by Cambridge Analytica’s now-former CEO Alexander Nix that the company never worked with Facebook data, note how the Observer got to see a copy of the contract Cambridge Analytica entered into with Kogan’s GSR, and the contract was entirely premised on harvesting and processing the Facebook data. Which, again, hints at the likelihood that they thought what they were doing at the time (2014) was completely legal. They talked about it in the contract:
...
The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour....
“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.
...
Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.
GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.
...
“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.
A contract to create a ‘gold standard’ of 2 million Facebook accounts that are ‘matched’ to real life voters for the use of “understanding personality from Facebook profile information.” That was the actual contract Kogan had with Cambridge Analytica. All for the purpose of developing a system that would allow Cambridge Analytica to infer your inner demons from your Facebook profile and then manipulate them.
So it’s worth noting how the app permissions setup Facebook allowed from 2007–2014, letting app developers collect the Facebook profile information of both the people who used their apps and those people’s friends, created this amazing arrangement where app developers could generate a ‘gold standard’ training set from the people using their apps and a test set from all their friends. If the goal was getting people to encourage their friends to download an app, that would have been a very useful data set. But it would of course also have been an incredibly useful data set for anyone who wanted to collect the profile information of Facebook users. Because, again, as we’re going to see, a Facebook whistle-blower is claiming that Facebook user profile information was routinely handed out to app developers.
So if an app developer wanted to experiment on, say, how to use that available Facebook profile information to manipulate people, getting a ‘gold standard’ of people to take a psychological profile survey would be an important step in carrying out that experiment. Because those people who take your psychological survey form the data set you can use to train your algorithms that take Facebook profile information as the input and create psychological profile data as the output.
And that’s what Aleksandr Kogan’s app was doing: grabbing psychological information from the survey while simultaneously grabbing the Facebook profile data from the test-takers, along with the Facebook profile data of all their friends. Kogan’s ‘gold standard’ training set was the people who actually used his app and handed over a bunch of personality information from the survey and the test set would have been the tens of millions of friends whose data was also collected. Since the goal of Cambridge Analytica was to infer personality characteristics from people’s Facebook profiles, pairing the personality surveys from the ~270,000 people who took the app survey to their Facebook profiles allowed Cambridge Analytica to train their algorithms that guessed at personality characteristics from the Facebook profile information. Then they had all the rest of the profile information on the rest of the ~50 million people to apply those algorithms.
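To make that training-set/test-set distinction concrete, here is a minimal, purely illustrative Python sketch. Everything in it (the page names, the trait scores, the simple per-page averaging “model”) is invented for demonstration; it is not Cambridge Analytica’s actual algorithm, just the general shape of the approach: fit a predictor on the surveyed “gold standard” users, then apply it to the harvested profiles of friends who never took any survey.

```python
# Hypothetical sketch: learn to predict a personality trait from profile
# "likes" using surveyed users, then score unsurveyed friends' profiles.
# All names and numbers below are invented for illustration.

def train_trait_model(gold_standard):
    """gold_standard: list of (liked_pages, trait_score) pairs from people
    who took the survey. Learns an average trait score per liked page."""
    totals, counts = {}, {}
    for likes, score in gold_standard:
        for page in likes:
            totals[page] = totals.get(page, 0.0) + score
            counts[page] = counts.get(page, 0) + 1
    return {page: totals[page] / counts[page] for page in totals}

def predict_trait(model, likes, default=0.5):
    """Score an unseen profile by averaging the scores of pages it likes."""
    known = [model[p] for p in likes if p in model]
    return sum(known) / len(known) if known else default

# In reality ~270,000 survey takers; three toy examples here.
gold = [
    ({"page_a", "page_b"}, 0.9),
    ({"page_a"}, 0.7),
    ({"page_c"}, 0.2),
]
model = train_trait_model(gold)

# A "friend" profile harvested without any survey: predicted from likes alone.
friend_likes = {"page_a", "page_c"}
score = predict_trait(model, friend_likes)
```

The point of the sketch is the asymmetry: the survey answers are only needed once, from the small labeled group, after which the model runs on likes alone, which is exactly why the tens of millions of friends’ profiles were valuable without any survey attached.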
Recall how Trump’s 2016 campaign digital director, Brad Parscale, curiously downplayed the utility of Cambridge Analytica’s data during interviews where he was bragging about how they were using Facebook’s ad micro-targeting features to run “A/B testing on steroids” on micro-targeted audiences, i.e. strategically exposing micro-targeted Facebook audiences to sets of ads that differed in some specific way designed to explore a particular psychological dimension of that micro-audience. So it’s worth noting that the “A/B testing on steroids” Brad Parscale referred to was probably focused on the ~30 million of the ~50 million people whose harvested Facebook profiles could be matched back to real people. Those 30 million Facebook users that Cambridge Analytica had Facebook profile data on were the test set. And the algorithms designed to guess the psychological makeup of people from their Facebook profiles, refined on the training set of ~270,000 Facebook users who took the psychological surveys, were likely unleashed on that test set of ~30 million people.
So when we find out that the Cambridge Analytica contract with Aleksandr Kogan’s GSR company included language like building a “gold standard”, keep in mind that this implied that there was a lot of testing to do after the algorithmic refinements based on that gold standard. And the ~30–50 million profiles they collected from the friends of the ~270,000 people who downloaded Kogan’s app made for quite a test set.
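For what “A/B testing” means mechanically, here is a bare-bones illustrative sketch. The audience, the variants, and the response rule are all invented for demonstration; this is not the Trump campaign’s actual tooling, just the generic experiment structure: randomly split a micro-targeted audience across ad variants and compare response rates.

```python
# Illustrative-only A/B test harness: assign each user in a micro-targeted
# audience one ad variant at random, then compare per-variant response rates.
import random

def run_ab_test(audience, respond, variants=("A", "B"), seed=0):
    """respond(user, variant) -> bool says whether a user clicks that ad.
    Returns the observed click rate for each variant that was shown."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shown = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}
    for user in audience:
        v = rng.choice(variants)
        shown[v] += 1
        if respond(user, v):
            clicks[v] += 1
    return {v: clicks[v] / shown[v] for v in variants if shown[v]}

# A made-up audience of 1,000 users and a made-up response rule:
# only variant "A" ever gets clicks, and only from even-numbered users.
def simulated_response(user, variant):
    return variant == "A" and user % 2 == 0

rates = run_ab_test(range(1000), simulated_response)
```

Running many such experiments across many micro-audiences, with variants differing along one psychological dimension at a time, is presumably what “A/B testing on steroids” amounted to.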
Also keep in mind that the denials by former CEO Alexander Nix that Cambridge Analytica worked with Facebook data aren’t the only laughable denials from Cambridge Analytica’s officers. Any denials by Steve Bannon and his lawyers that he knew about Cambridge Analytica’s use of Facebook profile data should also be seen as laughable, starting with the denials from Bannon’s lawyers that he knows nothing about what Wylie and others are claiming:
...
Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.
Steve Bannon: the Boss Who Knows Nothing (Or So He Says)
Steve Bannon “knows nothing about the claims being asserted.” LOL! Yeah, well, not according to Christopher Wylie, who, in the following article, makes some rather significant claims about the role of Steve Bannon in all this. According to Wylie:
1. Steve Bannon was the person overseeing the acquisition of Facebook data by Cambridge Analytica. As Wylie put it, “We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss.” Now, when Wylie says Bannon was Nix’s boss, note that Bannon served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016. And Nix was CEO during this period. So technically Nix was the boss. But it sounds like Bannon was effectively the boss, according to Wylie.
2. Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the Facebook data. But Wylie does say that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And Bannon “approved the data-collection scheme we were proposing”. So if Bannon and Mercer didn’t know the details of how the purchase of massive amounts of Facebook data took place, that would be pretty remarkable. Remarkably uncurious, given that acquiring this data was at the core of what the company was doing and they approved of the data-collection scheme. A scheme that involved having Aleksandr Kogan set up a separate company. That was the “scheme” Bannon and Mercer would have had to approve, so if they didn’t realize they were acquiring this Facebook data using the “friends permissions” feature Facebook made available to app developers, that would have been a significant oversight.
The article goes on to include a few more fun facts, like...
3. Cambridge Analytica was doing focus group tests on voters in 2014 and identified many of the same underlying emotional sentiments in voters that formed the core message behind Donald Trump’s campaign. In focus groups for the 2014 midterms, the firm found that voters responded to calls for building a wall with Mexico, “draining the swamp” in Washington DC, and to thinly veiled forms of racism toward African Americans called “race realism”. The firm also tested voter attitudes towards Russian President Vladimir Putin and discovered that a lot of Americans really like the idea of a really strong authoritarian leader. Again, this was all discovered before Trump even jumped into the race.
4. The Trump campaign rejected early overtures to hire Cambridge Analytica, which suggests that Trump was actually the top choice of the Mercers and Bannon, ahead of Ted Cruz.
5. Cambridge Analytica CEO Alexander Nix was caught by Channel 4 News in the UK boasting about the secrecy of his firm, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”
So based on these allegations, Steve Bannon was closely involved in approving the various schemes to acquire Facebook data, and was probably using self-destructing emails in the process:
The Washington Post
Bannon oversaw Cambridge Analytica’s collection of Facebook data, according to former employee
By Craig Timberg, Karla Adam and Michael Kranish
March 20, 2018 at 7:53 PM

LONDON — Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.
The 2014 effort was part of a high-tech form of voter persuasion touted by the company, which under Bannon identified and tested the power of anti-establishment messages that later would emerge as central themes in President Trump’s campaign speeches, according to Chris Wylie, who left the company at the end of that year.
Among the messages tested were “drain the swamp” and “deep state,” he said.
Cambridge Analytica, which worked for Trump’s 2016 campaign, is now facing questions about alleged unethical practices, including charges that the firm improperly handled the data of tens of millions of Facebook users. On Tuesday, the company’s board announced that it was suspending its chief executive, Alexander Nix, after British television released secret recordings that appeared to show him talking about entrapping political opponents.
More than three years before he served as Trump’s chief political strategist, Bannon helped launch Cambridge Analytica with the financial backing of the wealthy Mercer family as part of a broader effort to create a populist power base. Earlier this year, the Mercers cut ties with Bannon after he was quoted making incendiary comments about Trump and his family.
In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.
“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”
Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.
Bannon received more than $125,000 in consulting fees from Cambridge Analytica in 2016 and owned “membership units” in the company worth between $1 million and $5 million, according to his financial disclosure.
...
It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.
Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.
Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.
Bannon “approved the data-collection scheme we were proposing,” Wylie said.
...
The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.
“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”
The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.
In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.
The firm also tested views of Russian President Vladimir Putin.
“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”
The controversy over Cambridge Analytica’s data collection erupted in recent days amid news reports that an app created by a Cambridge University psychologist, Aleksandr Kogan, accessed extensive personal data of 50 million Facebook users. The app, called thisisyourdigitallife, was downloaded by 270,000 users. Facebook’s policy, which has since changed, allowed Kogan to also collect data —including names, home towns, religious affiliations and likes — on all of the Facebook “friends” of those users. Kogan shared that data with Cambridge Analytica for its growing database on American voters.
Facebook on Friday banned the parent company of Cambridge Analytica, Kogan and Wylie for improperly sharing that data.
The Federal Trade Commission has opened an investigation into Facebook to determine whether the social media platform violated a 2011 consent decree governing its privacy policies when it allowed the data collection. And Wylie plans to testify to Democrats on the House Intelligence Committee as part of their investigation of Russian interference in the election, including possible ties to the Trump campaign.
Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”
The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”
Nix could not be reached for comment.
Cambridge Analytica was set up as a U.S. affiliate of British-based SCL Group, which had a wide range of governmental clients globally, in addition to its political work.
Wylie said that Bannon and Nix first met in 2013, the same year that Wylie — a young data whiz with some political experience in Britain and Canada — was working for SCL Group. Bannon and Wylie met soon after and hit it off in conversations about culture, elections and how to spread ideas using technology.
Bannon, Wylie, Nix, Rebekah Mercer and Robert Mercer met in Rebekah Mercer’s Manhattan apartment in the fall of 2013, striking a deal in which Robert Mercer would fund the creation of Cambridge Analytica with $10 million, with the hope of shaping the congressional elections a year later, according to Wylie. Robert Mercer, in particular, seemed transfixed by the group’s plans to harness and analyze data, he recalled.
The Mercers were keen to create a U.S.-based business to avoid bad optics and violating U.S. campaign finance rules, Wylie said. “They wanted to create an American brand,” he said.
The young company struggled to quickly deliver on its promises, Wylie said. Widely available information from commercial data brokers provided people’s names, addresses, shopping habits and more, but failed to distinguish on more fine-grained matters of personality that might affect political views.
Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”
After Cruz faded, the Mercers switched their allegiance to Trump and pitched their services to Trump’s digital director, Brad Parscale. The company’s hiring was approved by Trump’s son-in-law, Jared Kushner, who was informally helping to manage the campaign with a focus on digital strategy.
Kushner said in an interview with Forbes magazine that the campaign “found that Facebook and digital targeting were the most effective ways to reach the audiences. ...We brought in Cambridge Analytica.” Kushner said he “built” a data hub for the campaign “which nobody knew about, until towards the end.”
Kushner’s spokesman and lawyer both declined to comment Tuesday.
Two weeks before Election Day, Nix told a Post reporter at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”
The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.
Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.
Cambridge Analytica’s parent company, SCL Group, has an ongoing contract with the State Department’s Global Engagement Center. The company was paid almost $500,000 to interview people overseas to understand the mind-set of Islamist militants as part of an effort to counter their online propaganda and block recruits.
Heather Nauert, the acting undersecretary for public diplomacy, said Tuesday that the contract was signed in November 2016, under the Obama administration, and has not expired yet. In public records, the contract is dated in February 2017, and the reason for the discrepancy was not clear. Nauert said that the State Department had signed other contracts with SCL Group in the past.
———-
“Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.”
Steve Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data. That’s what Christopher Wylie claims, and given Bannon’s role as vice president of the company it’s not, on its face, an outlandish claim. And Bannon apparently approved the spending of nearly $1 million to acquire that Facebook data in 2014. Because, according to Wylie, Alexander Nix didn’t actually have permission to spend that kind of money without approval. Bannon, on the other hand, did have permission to make those kinds of expenditure approvals. That’s how high up Bannon was at that company, even though he was technically the vice president while Nix was the CEO:
...
In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”
Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.
...
“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss...Alexander Nix didn’t have the authority to spend that much money without approval.”
And while Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the data, Wylie does assert that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And, generally speaking, if Bannon was approving $1 million expenditures on acquiring Facebook data, he probably sat in on at least one meeting where they described how they were planning on actually getting the data by spending that money. Don’t forget the scheme involved paying individuals small amounts of money to take the psychological survey on Kogan’s app, so at a minimum you would expect Bannon to know how these apps were going to result in the gathering of Facebook profile information:
...
It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.

Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.
Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.
Bannon “approved the data-collection scheme we were proposing,” Wylie said.
...
What’s Bannon hiding by claiming ignorance? Well, that’s a good question after Britain’s Channel 4 News aired a video Tuesday in which Nix was highlighting his firm’s secrecy, including the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing”:
...
Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”
...
Self-destructing emails. That’s not suspicious or anything.
And note how Cambridge Analytica was apparently already homing in on a very ‘Trumpian’ message in 2014, long before Trump was on the radar:
...
The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.

“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”
The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.
In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.
The firm also tested views of Russian President Vladimir Putin.
“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”
...
Intriguingly, given these early Trumpian findings in their 2014 voter research, it appears that the Trump campaign turned down early overtures to hire Cambridge Analytica, which suggests that Trump really was the top preference for Bannon and the Mercers, not Ted Cruz:
...
Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”
...
And as the article reminds us, the Trump campaign has completely denied EVER using Cambridge Analytica’s data. Brad Parscale, Trump’s digital director, claimed he got all the data they were working with from the Republican National Committee:
...
Two weeks before Election Day, Nix told a Post reporter at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”

The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.
Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.
...
And that denial by Parscale raises an obvious question: when Parscale claims they only used data from the RNC, it’s clearly possible that he’s just straight up lying. But it’s also possible that he’s lying while technically telling the truth. Because if Cambridge Analytica gave its data to the RNC, the Trump campaign could have acquired the Cambridge Analytica data from the RNC at that point, giving the campaign a degree of deniability about the use of such scandalously acquired data if the story ever became public. Like now.
Don’t forget that data of this nature would have been potentially useful for EVERY 2016 race, not just the presidential campaign. So if Bannon and Mercer were intent on helping Republicans win across the board, handing that data over to the RNC would have just made sense.
Also don’t forget that the New York Times was shown unencrypted copies of the Facebook data collected by Cambridge Analytica. If the New York Times saw this data, odds are the RNC has too. And who knows who else.
Facebook’s Sandy Parakilas Blows an “Utterly Horrifying” Whistle
It all raises the question of whether the Republican National Committee possesses all that Cambridge Analytica/Facebook data right now. And that brings us to perhaps the most scandalous article of all that we’re going to look at. It’s about Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, who is now blowing the whistle on exactly the kind of “friends permission” loophole Cambridge Analytica exploited. And as the following article makes horrifically clear:
1. It’s not just Cambridge Analytica or the RNC that might possess this treasure trove of personal information. The entire data brokerage industry probably has its hands on this data, along with anyone who has picked it up through the black market.
2. It was relatively easy to write an app that could exploit this “friends permissions” feature and start trawling Facebook for the profile data of app users and their friends. Anyone with basic app coding skills could do it.
3. Parakilas estimates that tens or perhaps even hundreds of thousands of developers likely exploited the same “friends permissions” loophole that Cambridge Analytica exploited. And Facebook had no way of tracking how this data was used by developers once it left Facebook’s servers.
4. Parakilas suspects that this data inevitably ended up on the black market, meaning there is probably a massive amount of personally identifiable Facebook data just floating around for the entire marketing industry and anyone else (like the GOP) to data mine.
5. Parakilas knew of many commercial apps that were using the same “friends permission” feature to grab Facebook profile data and use it for commercial purposes.
6. Facebook’s policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings. That appears to be part of the legal protection Facebook employed when it had this policy: don’t complain, it’s in the fine print.
7. Perhaps most scandalous of all, Facebook took a 30% cut of payments made through apps in exchange for giving these app developers access to Facebook user data. Yep, Facebook was effectively selling user data, but by structuring the sale of this data as a 30% share of the payments made through the app Facebook also created an incentive to help developers maximize the profits they made through the app. So Facebook literally set up a system that incentivized itself to help app developers make as much money as possible off of the user data they were handing over.
8. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users. So as of 2010, roughly 1 in 10 Facebook apps were using this loophole to grab information about both the users of the app and their friends.
9. While Cambridge Analytica was far from alone in exploiting this loophole, it was actually one of the very last firms given permission to do so. Which means the particular data set collected by Cambridge Analytica could be uniquely valuable simply by being larger and containing more recent data than most other data sets of this nature.
10. When Parakilas brought these concerns to Facebook’s executives and suggested the company should proactively “audit developers directly and see what’s going on with the data,” he was discouraged from the approach. One Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening.”
11. Shortly after arriving at the company’s Silicon Valley headquarters, Parakilas was told that any decision to ban an app required the personal approval of Mark Zuckerberg. That policy was later relaxed to make it easier to deal with rogue developers, but rogue developers were rarely dealt with regardless.
12. When Facebook eventually phased out this “friends permissions” policy for app developers, it was likely done out of concerns over the commercial value of all this data they were handing out. Executives were apparently concerned that competitors were going to use this data to build their own social networks.
So, as we can see, the entire saga of Cambridge Analytica’s scandalous acquisition of private Facebook profiles on ~50 million Americans is something Facebook made routine for developers of all sorts from 2007–2014, which means this is far from a ‘Cambridge Analytica’ story. It’s a Facebook story about a massive problem Facebook created for itself (for its own profits):
The Guardian
‘Utterly horrifying’: ex-Facebook insider says covert data harvesting was routine
Sandy Parakilas says numerous companies deployed these techniques – likely affecting hundreds of millions of users – and that Facebook looked the other way
Paul Lewis in San Francisco
Tue 20 Mar 2018 07.46 EDT

Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to Cambridge Analytica, according to a new whistleblower.
Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.
“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.
Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.
Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.
“It has been painful watching,” he said, “because I know that they could have prevented it.”
Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”
Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.
He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.
He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”
...
Facebook did not respond to a request for comment on the information supplied by Parakilas, but directed the Guardian to a November 2017 blogpost in which the company defended its data sharing practices, which it said had “significantly improved” over the last five years.
“While it’s fair to criticise how we enforced our developer policies more than five years ago, it’s untrue to suggest we didn’t or don’t care about privacy,” that statement said. “The facts tell a different story.”
‘A majority of Facebook users’
Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.
That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.
The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.
Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.
Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.
Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.
Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.
Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”
Facebook announced on Monday that it had hired a digital forensics firm to conduct an audit of Cambridge Analytica. The decision comes more than two years after Facebook was made aware of the reported data breach.
During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.
While the previous policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings, Parakilas said he believed the policy was problematic.
“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”
It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.
The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.
While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.
“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.
If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.
The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.
Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.
He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.
“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”
‘They treated it like a PR exercise’
Parakilas said he lobbied internally at Facebook for “a more rigorous approach” to enforcing data protection, but was offered little support. His warnings included a PowerPoint presentation he said he delivered to senior executives in mid-2012 “that included a map of the vulnerabilities for user data on Facebook’s platform”.
“I included the protective measures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do malicious things with the data,” he said. “On the list of bad actors I included foreign state actors and data brokers.”
Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”
That changed, Parakilas said, when he heard the congressional testimony given by Facebook lawyers to Senate and House investigators in late 2017 about Russia’s attempt to sway the presidential election. “They treated it like a PR exercise,” he said. “They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.”
It was at that point that Parakilas decided to go public with his concerns, writing an opinion article in the New York Times that said Facebook could not be trusted to regulate itself. Since then, Parakilas has become an adviser to the Center for Humane Technology, which is run by Tristan Harris, a former Google employee turned whistleblower on the industry.
———-
“Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.”
The platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012: That’s who is making these claims. In other words, Sandy Parakilas is indeed someone who should be intimately familiar with Facebook’s policies of handing user data over to app developers because it was his job to ensure that data wasn’t breached.
And as Parakilas makes clear, he wasn’t actually able to do his job. Once the data was handed over to app developers and left Facebook’s servers, Facebook had no idea what developers were doing with it and apparently no interest in learning:
...
“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.

Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.
Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.
“It has been painful watching,” he said, “because I know that they could have prevented it.”
Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”
...
And this complete lack of oversight by Facebook led Parakilas to assume there was “something of a black market” for that Facebook data. But when he raised these concerns with fellow executives he was warned not to look. Not knowing how this data was being used was, ironically, part of Facebook’s legal strategy, it seems:
...
Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.

He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.
He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”
...
“They felt that it was better not to know. I found that utterly shocking and horrifying.”
Well, at least one person at Facebook was utterly shocked and horrified by the “better not to know” policy toward handing private personal information over to developers. And that one person, Parakilas, left the company and is now a whistle-blower.
And one of the things that made Parakilas particularly concerned that this practice was widespread among apps was the fact that it was so easy to create apps that could then just be released onto Facebook to trawl for profile data from users and their unwitting friends:
...
The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.
...
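To make concrete how low that bar was: under Facebook’s old Graph API v1.0, which supported the extended “friends_*” permissions until they were retired around 2014, requesting profile fields for every friend of an app user came down to a single authenticated HTTP call. The sketch below is purely illustrative and written against that long-dead API; the field names and token are assumptions for illustration, and none of this works against Facebook’s current API.

```python
# Illustrative sketch ONLY: approximates the kind of request a Graph API
# v1.0-era app could issue once a user granted it "friends_*" permissions.
# Endpoint/fields are historical/assumed; this does not work today.
from urllib.parse import urlencode

GRAPH_V1 = "https://graph.facebook.com/v1.0"  # long-deprecated API root

def build_friends_query(access_token: str,
                        fields=("id", "name", "likes", "location")) -> str:
    """Build the URL an app might have used to request profile fields
    for ALL of a user's friends in one (paginated) call."""
    query = urlencode({"fields": ",".join(fields),
                       "access_token": access_token})
    return f"{GRAPH_V1}/me/friends?{query}"

url = build_friends_query("HYPOTHETICAL_TOKEN")
print(url)
```

The point is not the specifics of the call but the scale of the asymmetry: one GET request per consenting user was enough to sweep in that user’s entire friend list, none of whom ever interacted with the app.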
And while rogue app developers were at times dealt with, it was exceedingly rare; Parakilas did not witness a single audit of a developer’s systems during his time there.
Even more alarming, Facebook was apparently quite keen on offering app developers access to this Facebook profile data as an incentive to encourage even more app development. Apps were seen as so important to Facebook that Mark Zuckerberg himself had to give his personal approval to ban an app. And while that policy was later relaxed to no longer require Zuckerberg’s approval, it doesn’t sound like the change actually resulted in more apps getting banned:
...
Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.

Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”
During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.
...
So how many Facebook users likely had their private profile information scraped via this ‘fine print’ feature that allowed app developers to mine the profiles of app users and their friends? According to Parakilas, probably a majority of Facebook users. So that black market of Facebook profiles probably includes a majority of Facebook users. But even more amazing is that Facebook handed out this personal user information to app developers in exchange for a 30% share of the money they made through the app. Facebook was basically selling private user data directly to developers, which is a big reason why Parakilas’s estimate that a majority of Facebook users were impacted is likely true. Especially if, as Parakilas suggests, the number of developers grabbing user profile information via these apps might be in the hundreds of thousands. That’s a lot of developers potentially feeding into that black market:
...
‘A majority of Facebook users’

Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.
That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.
The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.
Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.
Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.
Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.
...
During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.
...
“Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.”
And that, right there, is perhaps the biggest scandal here: Facebook just handed user data away in exchange for revenue streams from app developers. And this was a key element of its business model during the 2007–2014 period. “Read the fine print” in the terms of service was the excuse they used:
...
“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”

It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.
...
And this is all why Aleksandr Kogan’s assertions that he had a close working relationship with Facebook and did nothing technically wrong actually do seem to be backed up by Parakilas’s whistle-blowing. Both because it’s hard to see what Kogan did that wasn’t part of Facebook’s business model, and because it’s hard to ignore that Kogan’s GSR shell company was one of the very last apps with permission to exploit the “friends permission” loophole. That sure does suggest that Kogan really did have a “close working relationship” with Facebook. So close, in fact, that he got seemingly favored treatment, and that’s compared to the vast number of apps that were apparently using this “friends permissions” feature: 1 in 10 Facebook apps, according to a 2010 study:
...
The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.

While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.
“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.
If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.
...
““Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.”
As of 2010, around 11 percent of app developers requested data belonging to friends of users. Keep that in mind when Facebook claims that Aleksandr Kogan improperly obtained data from the friends of the people who downloaded Kogan’s app.
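The article’s extrapolation is easy to sanity-check with the two figures it quotes: the 2010 study’s ~11% share (from a sample of 1,800 apps) and Parakilas’s guess that the developer population ran to the tens or hundreds of thousands. The candidate population sizes below are assumptions for illustration, not numbers from the reporting:

```python
# Back-of-the-envelope check of the "tens of thousands of apps" claim.
# The 11% share comes from the 2010 study cited in the article; the
# candidate developer-population sizes are assumed for illustration.
SHARE_REQUESTING_FRIEND_DATA = 11  # percent, per the 2010 study

for total_developers in (50_000, 100_000, 500_000):
    # integer arithmetic keeps the estimate exact
    apps = total_developers * SHARE_REQUESTING_FRIEND_DATA // 100
    print(f"{total_developers:>7,} developers -> ~{apps:,} apps requesting friends' data")
```

Even the low-end assumption lands in the thousands of data-harvesting apps, and the middle and high assumptions land squarely in the “tens of thousands, if not more” range Parakilas describes.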
So what made Facebook eventually end this “friends permissions” policy in mid-2014? While Parakilas had already left the company by then, he does recall conversations with executives who were nervous about competitors building their own social networks from all the data Facebook was giving away:
...
Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.

He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.
“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”
...
That’s how much data Facebook was handing out to encourage new app development: so much data that they were concerned about creating competitors.
Finally, it’s important to note that the picture painted by Parakilas only runs through the end of 2012, when he left in frustration. So we don’t actually have testimony from Facebook insiders with a role like Parakilas’s covering the period when Cambridge Analytica was engaged in its mass data collection scheme:
...
Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”
...
Now, it seems like a safe bet that the problem only got worse after Parakilas left, given how the Cambridge Analytica situation played out, but we don’t yet know just how bad it got by that point.
Aleksandr Kogan: Facebook’s Close Friend (Until He Belatedly Wasn’t)
So, factoring in what we just saw with Parakilas’s claims about the extent to which Facebook was handing out private Facebook profile data — the internal profile that Facebook builds up about you — to app developers for widespread commercial applications, let’s take a look at some of the claims Aleksandr Kogan has made about his relationship with Facebook. Because while Kogan makes some extraordinary claims, they are also consistent with Parakilas’s claims, and in some cases Kogan’s description actually goes much further than Parakilas’s.
For instance, according to the following Observer article ...
1. In an email to colleagues at the University of Cambridge, Aleksandr Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions.
2. Kogan also claims in that email that the contract his GSR company signed with Facebook in 2014 made it absolutely clear the data was going to be used for commercial applications and that app users were granting Kogan’s company the right to license or resell the data. “We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.” So Kogan says he made it clear to Facebook and to users that the app was for commercial purposes and that the data might be resold, which sounds like the kind of situation Sandy Parakilas said he witnessed, except even more open (which should be easily verifiable if the app code still exists).
3. Facebook didn’t actually kick Kogan off of its platform until March 16th of this year, just days before this story broke. Which is consistent with Kogan’s claims that he had a good working relationship with Facebook.
4. When Kogan founded Global Science Research (GSR) in May 2014, he co-founded it with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.
5. Facebook provided Kogan’s University of Cambridge lab with the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level”. 57 billion Facebook relationships in all. The data was anonymized and aggregated, so it didn’t literally include details on individual Facebook friendships; it was instead the aggregate “friend” counts at a national level. The data was used to publish a study in Personality and Individual Differences in 2015, and two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. But it’s still a sign that Kogan is indeed being honest when he says he had a close working relationship with Facebook. It’s also a reminder that when Facebook claims it was handing out data for “research purposes” only, if that were true it would have handed out anonymized, aggregated data like it did in this situation with Kogan.
6. That study co-authored by Kogan’s team and Facebook didn’t just use the anonymized aggregated friendship data. The study also used non-anonymized Facebook data collected through Facebook apps using exactly the same techniques Kogan’s app for Cambridge Analytica used. This study was published in August of 2015. Again, it was a study co-authored by Facebook. GSR co-founder Joseph Chancellor left GSR a month later and joined Facebook as a user experience researcher in November 2015. Recall that it was a month after that, December 2015, when we saw the first news reports of Ted Cruz’s campaign using Facebook data. Also recall that Facebook responded to that December 2015 report by saying it would look into the matter. Facebook finally sent Cambridge Analytica a letter in August of 2016, days before Steve Bannon became Trump’s campaign manager, asking that Cambridge Analytica delete the data. So the fact that Facebook co-authored a paper with Kogan and Chancellor in August of 2015, and that Chancellor joined Facebook in November 2015, is a pretty significant bit of context for looking into Facebook’s behavior. Because Facebook didn’t just know it had worked closely with Kogan. It also knew it had co-authored an academic paper using data gathered with the same technique Cambridge Analytica was charged with using.
7. Kogan does challenge one of the claims by Christopher Wylie. Specifically, Wylie claimed that Facebook became alarmed over the volume of data Kogan’s app was scooping up (50 million profiles) but Kogan assuaged those concerns by saying it was all for research. Kogan says this is a fabrication and Facebook never actually contacted him expressing alarm.
So, according to Aleksandr Kogan, Facebook really did have an exceptionally close relationship with Kogan and Facebook really was totally on board with what Kogan and Cambridge Analytica were doing:
The Guardian
Facebook gave data about 57bn friendships to academic
Volume of data suggests trusted partnership with Aleksandr Kogan, says analyst
Julia Carrie Wong and Paul Lewis in San Francisco
Thu 22 Mar 2018 10.56 EDT
Last modified on Sat 24 Mar 2018 22.56 EDT
Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.
Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.
A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.
“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”
Facebook downplayed the significance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was literally numbers – numbers of how many friendships were made between pairs of countries – ie x number of friendships made between the US and UK,” Facebook spokeswoman Christine Chen said by email. “There was no personally identifiable information included in this data.”
Facebook’s relationship with Kogan has since soured.
“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported that the data had been obtained by Cambridge Analytica.
“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.
On Friday 16 March, in anticipation of the Observer’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans, Facebook suspended Kogan from the platform, issued a statement saying that he “lied” to the company, and characterised his activities as “a scam – and a fraud”.
On Tuesday, Facebook went further, saying in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.
But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting activities – for it to inform the users whose personal information was improperly shared.
And Kogan has offered a defence of his actions in an interview with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.
The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.
Between June and August of that year, GSR paid approximately 270,000 individuals to use a Facebook questionnaire app that harvested data from their own Facebook profiles, as well as from their friends, resulting in a dataset of more than 50 million users. The data was subsequently given to Cambridge Analytica, in what Facebook has said was a violation of Kogan’s agreement to use the data solely for academic purposes.
In his email to colleagues at Cambridge, Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions. CNN first reported on the Cambridge email. Kogan did not respond to the Guardian’s request for comment on this article.
“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”
Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.
“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.
Whistleblower Christopher Wylie told the Observer that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”
In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and ... we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”
The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.
For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.
The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.
...
———-
“Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.”
An anonymized, aggregate dataset of 57bn Facebook friendships sure makes it a lot easier to take Kogan at his word when he claims a close working relationship with Facebook.
Now, keep in mind that the aggregate anonymized data was aggregated at the national level, so it’s not as if Facebook gave Kogan a list of 57 billion individual Facebook friendships. And when you think about it, that aggregated anonymized data is far less sensitive than the personal Facebook profile data Kogan and other app developers were routinely grabbing during this period. It’s the fact that Facebook gave this data to Kogan in the first place that lends credence to his claims.
But the biggest factor lending credence to Kogan’s claims is the fact that Facebook co-authored a study with Kogan and others at the University of Cambridge using that anonymized aggregated data. Two Facebook employees were named as co-authors of the study. That is definitely a sign of a close working relationship:
...
Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.
A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.
“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”
...
Even more damning for Facebook is that the research co-authored by Kogan, Facebook, and other researchers didn’t just include the anonymized aggregated data. It also included a second data set of non-anonymized data that was harvested in exactly the same way Kogan’s GSR app worked. And while Facebook apparently wasn’t involved in that part of the study, that’s beside the point. Facebook clearly knew about it if it co-authored the study:
...
The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.
For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.
The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.
...
But, alas, Kogan’s relationship with Facebook has since soured, with Facebook now acting as if Kogan had totally violated its trust. And yet it’s hard to ignore the fact that Kogan wasn’t formally kicked off Facebook’s platform until March 16th of this year, just a few days before all these stories about Kogan and Facebook were about to go public:
...
Facebook’s relationship with Kogan has since soured.
“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported that the data had been obtained by Cambridge Analytica.
“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.
On Friday 16 March, in anticipation of the Observer’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans, Facebook suspended Kogan from the platform, issued a statement saying that he “lied” to the company, and characterised his activities as “a scam – and a fraud”.
On Tuesday, Facebook went further, saying in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.
...
““The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.”
Mark Zuckerberg is complaining about a “breach of trust.” LOL!
And yet Facebook has yet to explain the nature of its relationship with Kogan or why it didn’t kick him off the platform until only recently. But Kogan has an explanation: he’s a scapegoat, and he wasn’t doing anything Facebook didn’t know he was doing. And when you notice that Kogan’s GSR co-founder, Joseph Chancellor, is now a Facebook employee, it’s hard not to take his claims seriously:
...
But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting activities – for it to inform the users whose personal information was improperly shared.
And Kogan has offered a defence of his actions in an interview with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.
The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.
...
But if Kogan’s claims are to be taken seriously, we have a pretty serious scandal on our hands. Because Kogan claims that not only did he make it clear to Facebook and his app users that the data they were collecting was for commercial use — with no mention of academic research or the University of Cambridge — but he also claims he made it clear that the data GSR was collecting could be licensed and resold. And Facebook at no point raised any concerns at all about any of this:
...
“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”
Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.
“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.
...
Now, it’s worth noting that the casual acceptance of the commercial use of the data collected through these Facebook apps, and of the potential licensing and reselling of that data, is actually a far more serious situation than the one Sandy Parakilas described during his time at Facebook. Recall that, according to Parakilas, all app developers had to tell Facebook was that they were going to use the profile data on app users and their friends to ‘improve the user experience.’ From Facebook’s perspective it was fine if they were commercial apps. But Parakilas didn’t describe a situation where app developers openly made it clear they might license or resell the data. So Kogan’s claim that it was clear his app had commercial applications and might involve reselling the data is even more egregious than the situation Parakilas described. But don’t forget that Parakilas left Facebook in late 2012 and Kogan’s app would have been approved in 2014, so it’s entirely possible Facebook’s policies got even more egregious after Parakilas left.
And it’s worth noting how Kogan’s claims differ from Christopher Wylie’s. Wylie asserts that Facebook grew alarmed by the volume of data GSR’s app was pulling from Facebook users and Kogan assured them it was for research purposes. Whereas Kogan says Facebook never expressed any alarm at all:
...
Whistleblower Christopher Wylie told the Observer that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”
In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and ... we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”
...
So as we can see, when it comes to Facebook’s “friends permissions” data sharing policy, its arrangement with Aleksandr Kogan was probably one of the more responsible ones it engaged in because, hey, at least Kogan’s work was ostensibly for research purposes and involved at least some anonymized data.
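To make the scale of that “friends permissions” policy concrete, here’s a toy sketch of the harvesting multiplier. This is not Facebook’s actual API — the function, names, and friend graph are all made up for illustration — but it captures the reported mechanism: installing an app exposed not just your own profile but every friend’s profile too.

```python
# Toy model of the "friends permissions" loophole: an app could read
# the profiles of the users who installed it AND everyone on their
# friend lists. All names and the graph below are hypothetical.

def harvested_profiles(app_users, friend_graph):
    """Return every profile the app could read: the consenting
    installers plus all of their friends."""
    exposed = set(app_users)
    for user in app_users:
        # Friends are swept in without ever agreeing to anything.
        exposed |= friend_graph.get(user, set())
    return exposed

friend_graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "erin"},
    "carol": {"alice"},
}

# Only Alice installs the quiz app, yet four profiles are exposed:
print(sorted(harvested_profiles({"alice"}, friend_graph)))
# → ['alice', 'bob', 'carol', 'dave']
```

The point is just the multiplier: every install drags in an entire friend list, so the reported numbers are arithmetically plausible — roughly 185 friends per installer, on average, turns ~270,000 consenting users into ~50 million harvested profiles.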
Cambridge Analytica’s Informal Friend: Palantir
And as we can also see, the more we learn about this situation, the harder it gets to dismiss Kogan’s claims that Facebook is making him a scapegoat in order to cover up not just the relationship Facebook had with Kogan but also the fact that what Kogan was doing was routine for app developers for years.
But as the following New York Times article makes clear, Facebook’s relationship with Aleksandr Kogan isn’t the only working relationship Facebook needs to worry about that might lead back to Cambridge Analytica. Because it turns out there’s another Facebook connection to Cambridge Analytica and it’s potentially far, far more scandalous than Facebook’s relationship with Kogan: It turns out Palantir might be the originator of the idea to create Kogan’s app for the purpose of collecting psychological profiles. That’s right, according to documents the New York Times has seen, Palantir, the private intelligence firm with a close relationship with the US national security state, was in talks with Cambridge Analytica from 2013–2014 about psychologically profiling voters and it was an employee of Palantir who raised the idea of creating that app in the first place.
And this is of course wildly scandalous if true, because Palantir was co-founded by Peter Thiel, an early Facebook investor and board member who also happens to be a far right political activist and a close ally of President Trump.
But it gets worse. And weirder. Because it sounds like one of the people encouraging SCL (Cambridge Analytica’s parent company) to work with Palantir was none other than Sophie Schmidt, daughter of Eric Schmidt, then Google’s executive chairman.
Keep in mind that this isn’t the first time we’ve heard about Palantir’s ties to Cambridge Analytica and Sophie Schmidt’s role in this. It was reported by the Observer last May. According to that May 2017 article in the Observer, Schmidt was passing through London in June of 2013 when she decided to call up her former boss at SCL and recommend that they contact Palantir. Also of interest is that if you look at the current version of that Observer article, all mention of Sophie Schmidt has been removed and there’s a note that the article is the subject of legal complaints on behalf of Cambridge Analytica LLC and SCL Elections Limited. But in the original article she’s mentioned quite extensively. It would appear that someone is very upset about the Sophie Schmidt angle to this story.
So this Palantir/Sophie Schmidt side of the story isn’t new. But we’re learning a lot more about that relationship now. For instance:
1. In early 2013, Cambridge Analytica CEO Alexander Nix, an SCL director at the time, and a Palantir executive discussed working together on election campaigns.
2. An SCL employee wrote to a colleague in a June 2013 email that Schmidt was pushing them to work with Palantir: “Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?”
3. According to Christopher Wylie’s testimony to lawmakers, “There were Palantir staff who would come into the office and work on the data...And we would go and meet with Palantir staff at Palantir.” Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014.
4. The Palantir employee who floated the idea of creating the app ultimately built by Aleksandr Kogan is Alfredas Chmieliauskas. Chmieliauskas works on business development for Palantir, according to his LinkedIn page.
5. Palantir and Cambridge Analytica never formally started working together. A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. Wylie acknowledges that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge Analytica’s psychographic models. In other words, while there was never a formal relationship, there was a pretty significant informal relationship.
6. Mr. Chmieliauskas was in communication with Wylie’s team in 2014, during the period when Cambridge Analytica was initially trying to convince the University of Cambridge team to work with it. Recall that Cambridge Analytica initially discovered that the University of Cambridge team had exactly the kind of data it was interested in, collected via a Facebook app, but the negotiations ultimately failed, and it was then that Cambridge Analytica found Aleksandr Kogan, who agreed to create his own app. Well, according to this report, it was Chmieliauskas who initially suggested that the firm create its own version of the University of Cambridge team’s app as leverage in those negotiations. In essence, Chmieliauskas wanted Cambridge Analytica to show the University of Cambridge team that it could collect the information itself, presumably to drive a harder bargain. And when those negotiations failed, Cambridge Analytica did indeed create its own app after teaming up with Kogan.
7. Palantir asserts that Chmieliauskas was acting in his own capacity when he continued communicating with Wylie and made the suggestion to create their own app. Palantir initially told the New York Times that it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Palantir later revised this, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
And, again, do not forget that Palantir was co-founded by Peter Thiel, the far right billionaire, early investor in Facebook and one of Facebook’s board members to this day. He was also a Trump delegate in 2016 and was in discussions with the Trump administration about leading the powerful President’s Intelligence Advisory Board, although he ultimately turned that offer down. Oh, and he’s an advocate of the Dark Enlightenment.
Basically, Peter Thiel was a member of the ‘Alt Right’ before that term was ever coined. And he’s a very powerful influence at Facebook. So learning that Palantir and Cambridge Analytica were in discussions to work together on election projects in 2013 and 2014, that a Palantir employee was advising Cambridge Analytica during its negotiations with the University of Cambridge team, and that Palantir employees helped engineer Cambridge Analytica’s psychographic models based on Facebook data is the kind of revelation that just might qualify as the most scandalous in this entire mess:
As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.
It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.
Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.
The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.
“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday.
...
The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook.
The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
“We learned today that an employee, in 2013–2014, engaged in an entirely personal capacity with people associated with Cambridge Analytica,” the company said. “We are looking into this and will take the appropriate action.”
The company said it was continuing to investigate but knew of no other employees who took part in the effort. Mr. Wylie told lawmakers that multiple Palantir employees played a role.
Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.
In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.
A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.
In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.
Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.
“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”
Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.
———-
“The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.”
Yep, a Facebook board member’s private intelligence firm was working closely with Cambridge Analytica as it developed its psychological profiling technology. It’s quite a revelation. The kind of explosive revelation that had Palantir first denying there was any relationship at all, followed by an acknowledgment/denial that, yes, a Palantir employee, Alfredas Chmieliauskas, was indeed working with Cambridge Analytica, but not on behalf of Palantir:
...
It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times....
The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
...
Adding to the scandalous nature of it all, the daughter of Eric Schmidt, then Google’s executive chairman, suddenly appeared in June of 2013 to promote a relationship with Palantir to her former employer, SCL:
...
Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.
...
But this June 2013 proposal by Sophie Schmidt wasn’t what started Cambridge Analytica’s relationship with Palantir. Because that reportedly started in early 2013, when Alexander Nix and a Palantir executive discussed working together on election campaigns:
...
In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.
...
So Sophie Schmidt swooped in to promote Palantir to Cambridge Analytica months after the negotiations began. It raises the question of who encouraged her to do that.
Palantir now admits these negotiations happened, but claims it chose not to work with Cambridge Analytica because its executives “wanted to steer clear of election work.” And emails indicate that Palantir did indeed formally turn down the idea: Nix and Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. And yet, according to Christopher Wylie, some Palantir employees helped engineer Cambridge’s psychographic models. That suggests Palantir turned down a formal relationship in favor of an informal one:
...
A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.

In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.
“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.
...
“There were Palantir staff who would come into the office and work on the data...And we would go and meet with Palantir staff at Palantir.”
That sure sounds like a relationship! Formal or not.
And that informal relationship continued into 2014, during the period when Cambridge Analytica was negotiating with the University of Cambridge’s Psychometrics Centre:
...
In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”
Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.
...
And it was during those negotiations, in May of 2014, that Chmieliauskas first proposed simply replicating what the University of Cambridge Psychometrics Centre was doing in order to gain leverage in the negotiations. When those negotiations ultimately failed, Cambridge Analytica found another Cambridge University psychologist, Aleksandr Kogan, to build the app for them:
...
“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.
...
And that’s what we know so far about the relationship between Cambridge Analytica and Palantir. It raises a number of questions, like whether this informal relationship continued well after Cambridge Analytica started harvesting all that Facebook information. Let’s look at the key facts and open questions about Palantir’s involvement so far:
1. Palantir employees helped build the psychographic profiles.
2. Mr. Chmieliauskas was in contact with Wylie at least as late as May of 2014 as Cambridge Analytica was negotiating with the University of Cambridge’s Psychometrics Centre.
3. We don’t know when this informal relationship between Palantir and Cambridge Analytica ended.
4. We don’t know whether the informal relationship between Palantir and Cambridge Analytica — which largely appears to center on Mr. Chmieliauskas — really was Chmieliauskas’s initiative alone after Palantir initially rejected a formal relationship (it’s possible), or whether Chmieliauskas was directed to pursue the relationship informally on Palantir’s behalf to maintain deniability in awkward situations like the present one (also very possible, and savvy given the current situation).
5. We don’t know whether the Palantir employees who helped build those psychographic profiles were working with the data Cambridge Analytica harvested from Facebook, or with the earlier, inadequate data sets that didn’t include the Facebook data. Because if the Palantir employees helped build the psychographic profiles based on the Facebook data, that implies this informal relationship continued well past May of 2014, since that’s when the data first started being collected via Kogan’s app. How much longer? We don’t yet know.
6. Neither do we know how much of this data ultimately fell into the hands of Palantir. As Wylie described it, “There were Palantir staff who would come into the office and work on the data...And we would go and meet with Palantir staff at Palantir.” So did those Palantir employees who were working on “the data” take any of that data back to Palantir?
7. For that matter, given that Peter Thiel sits on the board of Facebook, and given how freely Facebook handed out this kind of data, we have to ask whether Palantir already has direct access to exactly the kind of data Cambridge Analytica was harvesting. Did Palantir even need Cambridge Analytica’s data? Perhaps Palantir was already using apps of its own to harvest this kind of data? We don’t know. At the same time, don’t forget that even if Palantir had ready access to the same Facebook profile data gathered by Kogan’s app, it’s still possible Palantir would have had an interest in the company purely to see how the data was analyzed and to learn from that. In other words, Peter Thiel’s Palantir may have been more interested in the algorithms than in the data. And don’t forget that if anyone is the real power behind the throne at Facebook, it’s probably Thiel.
8. What on earth is going on with Sophie Schmidt, daughter of then Google executive chairman Eric Schmidt, pushing Cambridge Analytica to work with Palantir in June of 2013, months after Cambridge Analytica and Palantir began talking with each other? That seems potentially significant.
Those are just some of the questions raised about Palantir’s ambiguously ominous relationship with Cambridge Analytica. But don’t forget that Palantir isn’t the only entity we need to ask these kinds of questions about. For instance, what about Steve Bannon’s Breitbart? Does Breitbart, home of the neo-Nazi ‘Alt Right’, also have access to all that harvested Cambridge Analytica data? Not just the raw Facebook data but also the processed psychological profiles of 50 million Americans that Cambridge Analytica generated. Does Breitbart have the processed profiles too? And what about the Republican Party? And all the other entities out there that gained access to this Facebook profile data? Just how many different entities around the globe possess that Cambridge Analytica data set?
It’s Not Just Cambridge Analytica. Or Facebook. Or Google. It’s Society.
Of course, as we saw with Sandy Parakilas’s whistle-blower claims, when it comes to the question of who might possess Facebook profile data harvested during the 2007–2014 period when Facebook had its “friends permissions” policy, the list of suspects potentially includes hundreds of thousands of developers and anyone who has purchased this information on the black market.
Don’t forget one of the other amazing aspects of this whole situation: if hundreds of thousands of developers were using this feature to scrape user profiles, this really was an open secret. Lots and lots of people were doing this. For years. So, like many scandals, perhaps the most scandalous part is that we’re learning about something we should have known all along, and that many of us did know all along. It’s not like it’s a secret that people are being surveilled in detail in the internet age, or that this data is being stored and aggregated in public and private databases and put up for sale. We’ve collectively known this all along. At least on some level.
And yet this surveillance is so pervasive that it’s almost never thought about on a moment by moment basis at an individual level. When people browse the web they presumably aren’t thinking about the volume of tracking cookies and other personal information slurped up as a result of that mouse click. Nor are they thinking about how that click contributes to the numerous personal profiles of them floating around the commercial data brokerage marketplace. So in a more fundamental sense we don’t actually know we’re being surveilled because we’re not thinking about it.
It’s one example of how humans aren’t wired to naturally think about the macro forces impacting their lives in day-to-day decisions, which was fine when we were cavemen but becomes a problematic instinct when we’re literally mastering the laws of physics and shaping our world and environment. From physics and nature to history and contemporary trends, the vast majority of humanity spends very little time studying these topics. That’s completely understandable given the lack of time and resources to do so, but that understandable instinct creates a world perfectly set up for abuse by surveillance states, both public and private, which makes it less understandable and much more problematic.
So, in the interest of gaining perspective on how we got to the point where Facebook emerged as an ever-growing Panopticon just a few short years after its conception, let’s take a look at one last article. It’s by investigative journalist Yasha Levine, who recently published the must-read book Surveillance Valley: The Secret Military History of the Internet. It’s a book filled with vital historical fun facts about the internet. Fun facts like...
1. How the internet began as a system built for national security purposes, with a focus on military hardware and command-and-control communications in general. But there was also a focus on building a system that could collect, store, process, and distribute the massive volumes of information used to wage the Vietnam War. Beyond that, these early computer networks also acted as a collection and sharing system for dealing with domestic national security concerns (concerns that centered on tracking anti-war protesters, civil rights activists, etc.). That’s what the internet started out as: a system for storing data about people and conflict for US national security purposes.
2. Building databases of profiles on people (foreign and domestic) was one of the very first goals of these internet predecessors. In fact, one of the key visionaries behind the development of the internet, Ithiel de Sola Pool, both helped shape the development of the early internet as a surveillance and counterinsurgency technology and pioneered data-driven election campaigns. He even started a private firm to do this: Simulmatics. Pool’s vision was a world where the surveillance state acted as a benign master that kept the peace by using superior knowledge to nudge people in the ‘right’ direction.
3. This vision of vast databases of personal profiles was largely a secret at first, but it didn’t remain that way. There was actually quite a bit of public paranoia in the US about these internet predecessors, especially within the anti-Vietnam-war activist communities. Flash forward a couple of decades and that paranoia has faded almost entirely...until scandals like the current one erupt and we temporarily grow concerned.
4. What Cambridge Analytica is accused of doing is what data giants like Facebook and Google do every day and have been doing for years. And it’s not just the giants. Smaller firms are scooping up vast amounts of information too...it’s just not as vast as what the giants are collecting. Even cute apps, like the wildly popular Angry Birds, have been found to collect all sorts of data about users.
5. While it’s great that public attention is being directed at the kind of sleazy, manipulative activities Cambridge Analytica was engaging in, deceptively wielding real power over real, unwitting people, it is a wild mischaracterization to act as if Cambridge Analytica was exerting mind-control over the masses using internet marketing voodoo. What Cambridge Analytica, or any of the other sleazy manipulators, was doing was indeed influential, but it needs to be viewed in the context of a political state of affairs in which massive numbers of Americans, including Trump voters, really have been collectively failed by the American power establishment for decades. The collapse of the American middle class and the rise of the plutocracy created the kind of macro environment where a carnival barker like Donald Trump could use firms like Cambridge Analytica to ‘nudge’ people in the direction of voting for him. In other words, focusing on Cambridge Analytica’s manipulation of people’s psychological profiles without recognizing the massive political failures of the last several decades in America (the mass socioeconomic failures of the American embrace of ‘Reaganomics’ and right-wing economic gospel, coupled with the American Left’s failure to effectively repudiate these doctrines) is profoundly ahistorical. The story of the rise of the power of firms like Facebook, Google, and Cambridge Analytica implicitly includes that entire history of political and socioeconomic failures tied to the failure to effectively respond to the rise of the American right wing over the last several decades. We are making a massive mistake if we forget that. Cambridge Analytica wouldn’t have been nearly as effective at nudging people toward voting for someone like Trump if so many people weren’t already so ready to burn the current system down.
These are the kinds of historical chapters that can’t be left out of any analysis of Cambridge Analytica. Because Cambridge Analytica isn’t the exception. It’s an exceptionally sleazy example of the rules we’ve been playing by for a while, whether we realized it or not:
The Baffler
The Cambridge Analytica Con
Yasha Levine
March 21, 2018

“The man with the proper imagination is able to conceive of any commodity in such a way that it becomes an object of emotion to him and to those to whom he imparts his picture, and hence creates desire rather than a mere feeling of ought.”
—Walter Dill Scott, Influencing Men in Business: Psychology of Argument and Suggestion (1911)
This week, Cambridge Analytica, the British election data outfit funded by billionaire Robert Mercer and linked to Steven Bannon and President Donald Trump, blew up the news cycle. The charge, as reported by twin exposés in the New York Times and the Guardian, is that the firm inappropriately accessed Facebook profile information belonging to 50 million people and then used that data to construct a powerful internet-based psychological influence weapon. This newfangled construct was then used to brainwash-carpet-bomb the American electorate, shredding our democracy and turning people into pliable zombie supporters of Donald Trump.
In the words of a pink-haired Cambridge Analytica data-warrior-turned-whistleblower, the company served as a digital armory that turned “Likes” into weapons and produced “Steve Bannon’s psychological warfare mindfuck tool.”
Scary, right? Makes me wonder if I’m still not under Cambridge Analytica’s influence right now.
Naturally, there are also rumors of a nefarious Russian connection. And apparently there’s more dirt coming. Channel 4 News in Britain just published an investigation showing top Cambridge Analytica execs bragging to an undercover reporter that their team uses high-tech psychometric voodoo to win elections for clients all over the world, but also dabbles in traditional meatspace techniques as well: bribes, kompromat, blackmail, Ukrainian escort honeypots—you know, the works.
It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet —and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.
But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.
So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.
Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.
Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.
All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.
What do these companies know about us, their users? Well, just about everything.
Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.
On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.
It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment, that’s been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of services for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.
In launching Android, Google made a gamble that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Mobile Windows like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smart phone market and dominates the mobile for-profit surveillance business.
These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.
Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.
The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.
The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”
Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.
Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver target voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pin-point accuracy based on their beliefs and views.
Here’s how The National Journal’s Andrew Rice described i360 in 2015:
Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.
Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”
In 2008, the hip young Blackberry-toting Barack Obama was the first major-party candidate on the national scene to truly leverage the power of internet-targeted agitprop. With help from Facebook cofounder Chris Hughes, who built and ran Obama’s internet campaign division, the first Obama campaign built an innovative micro-targeting initiative to raise huge amounts of money in small chunks directly from Obama’s supporters and sell his message with a hitherto unprecedented laser-guided precision in the general election campaign.
...
Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.
What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.
On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.
So our present-day freakout over Cambridge Analytica needs to be put in the broader historical context of our decades-long complacency over Silicon Valley’s business model. The fact is that companies like Facebook and Google are the real malicious actors here—they are vital public communications systems that run on profiling and manipulation for private profit without any regulation or democratic oversight from the society in which it operates. But, hey, let’s blame Cambridge Analytica. Or better yet, take a cue from the Times and blame the Russians along with Cambridge Analytica.
***
There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.
This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.
The truth is that the internet has never been about egalitarianism or democracy.
The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.
The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.
Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”
In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.
Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.
He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.
Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.
At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”
With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.
This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.
Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.
A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.
***
Simulmatics and its first-generation imitations are now ancient history—dating back to the long-ago time when computers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The internet surrounds us, engulfing and monitoring everything we do. We are tracked and watched and profiled every minute of every day by countless companies—from giant platform monopolies like Facebook and Google to boutique data-driven election firms like i360 and Cambridge Analytica.
Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.
Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.
***
And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.
Cambridge Analytica has been one of the lesser bogeymen used to explain Trump’s victory for quite a while, going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica’s Facebook heist, was skeptically questioning the company’s technology and its role in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company’s overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.
Yet now, with Robert Mueller’s Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn’t just a far-fetched marketing ploy—it’s a real and present danger to a virtuous info-media status quo. And it’s most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.
It’s a comforting idea for our political elite, but it’s not true. Alexander Nix, Cambridge Analytica’s well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix’s business plan is but an updated version of Ithiel de Sola Pool’s vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we’re kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue’s gallery of bad political actors, but to finger the real culprits behind Donald Trump’s takeover of America, the self-appointed watchdogs of our country’s imperiled political virtue had best take a long and sobering look in the mirror.
———-
“The Cambridge Analytica Con” by Yasha Levine; The Baffler; 03/21/2018
“It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet—and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.”
Yes indeed, it is great to see that this topic is finally getting the attention it has long deserved. But it’s not great to see the topic limited to Cambridge Analytica and Facebook. As Levine puts it, “We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries.” Societies in general are held together via overt and covert influence, and America has gotten really, really good at it over the last half century. The story of Cambridge Analytica, and the larger story of Sandy Parakilas’s whistle-blowing about mass data collection, can’t really be understood outside that historical context:
...
But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.
...
And the first step in putting the Cambridge Analytica story in proper perspective is recognizing that what it is accused of doing — grabbing personal data and building profiles for the purpose of influencing voters — is done every day by entities like Facebook and Google. It’s a regular part of our lives. And you don’t even need to use Facebook or Google to become part of this vast commercial surveillance system. You just need to communicate with someone who does use those platforms:
...
Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.
All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.
What do these companies know about us, their users? Well, just about everything.
Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.
On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.
...
The next step in contextualizing this is recognizing that Facebook and Google are merely the biggest fish in an ocean of data brokerage markets with many smaller inhabitants trying to do the same thing. This is part of what makes Facebook’s handing over of profile data to app developers so scandalous... Facebook clearly knew there was a voracious market for this information and made a lot of money selling into that market:
...
It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment and downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of service for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Mobile Windows like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smart phone market and dominates the mobile for-profit surveillance business.
These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.
...
And when it comes to political campaigns, digital giants like Facebook and Google already have special election units set up to give privileged access to political campaigns so they can influence voters even more effectively. The stories about the Trump campaign’s use of Facebook “embeds” to run a massive advertising campaign of “A/B testing on steroids,” systematically experimenting with voter ad responses, are part of that larger story of how these giants have already made the manipulation of voters big business:
...
Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.
The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”
Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.
Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver target voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pin-point accuracy based on their beliefs and views.
Here’s how The National Journal’s Andrew Rice described i360 in 2015:
Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.
Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”
...
And offering special services to help campaigns manipulate voters isn’t just big business. It’s a largely unregulated business. If Facebook decides to covertly manipulate you by altering its newsfeed algorithms to show you more news articles from your conservative-leaning friends (or liberal-leaning friends), that’s totally legal. Because, again, subtly manipulating people is as American as apple pie:
...
Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.
On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.
...
And this contemporary state of affairs didn’t emerge spontaneously. As Levine covers in Surveillance Valley, this is what the internet — back when it was the ARPANET military network — was all about from its very conception:
...
There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.
The truth is that the internet has never been about egalitarianism or democracy.
The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.
The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.
...
“The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting”
And one of the key figures behind this early ARPANET version of the internet, Ithiel de Sola Pool, got his start in this area in the 1950s, working at the Hoover Institution at Stanford University to understand the nature and causes of left-wing revolutions and distill them down to a mathematical formula. Pool, a virulent anti-Communist, also worked for JFK’s 1960 campaign and went on to start a private company, Simulmatics, offering services in modeling and manipulating human behavior based on large data sets on people:
...
Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.” In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.
Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.
He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.
Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.
...
And part of what drove Pool was a utopian belief that computers and massive amounts of data could be used to run society harmoniously. Left-wing revolutions were problems to be managed with Big Data. That’s important historical context when thinking about the role Cambridge Analytica played in electing Donald Trump:
...
At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”
...
And guess what: the American public wasn’t enamored with Pool’s vision of a world managed by computing technology and Big Data models of society. When the public learned in the 1960s and ’70s about these early versions of the internet, inspired by visions of a computer-managed world, it got scared:
...
With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society. This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.
Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.
A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship or political platforms but simply sold candidates just like they would a car or a bar of soap.
...
But that fear somehow disappeared in subsequent decades, only to be replaced with a faith in our benevolent techno-elite. And a faith that this mass public/private surveillance system is actually an empowering tool that will lead to a limitless future. And that is perhaps the biggest scandal here: The public didn’t just forget to keep an eye on the powerful. The public forgot to keep an eye on the people whose power is derived from keeping an eye on the public. We built a surveillance state at the same time we fell into a fog of civic and historical amnesia. And that has coincided with the rise of a plutocracy, the dominance of right-wing anti-government economic doctrines, and the larger failure of the American political and economic elites to deliver a society that actually works for average people. To put it another way, the rise of the modern surveillance state is one element of a massive, decades-long process of collectively ‘dropping the ball’. We screwed up massively, and Facebook and Google are just two of the consequences. And yet we still don’t view the Trump phenomenon within the context of that massive collective screw up, which means we’re still screwing up massively:
...
Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote. Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.
***
And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.
...
So the biggest story here isn’t that Cambridge Analytica was engaged in a mass manipulation campaign. And the biggest story isn’t even that Cambridge Analytica was engaged in a cutting-edge commercial mass manipulation campaign. Because both of those stories are eclipsed by the story that even if Cambridge Analytica really was engaged in a cutting-edge commercial campaign, it probably wasn’t nearly as cutting edge as what Facebook and Google and the other data giants routinely engage in. And this situation has been building for decades, within the context of the much larger scandal of the rise of an oligarchy that more or less runs America by and for powerful interests. Powerful interests that are overwhelmingly dedicated to right-wing elitist doctrines that view the public as a resource to be controlled and exploited for private profit.
It’s all a reminder that, as with so many incredibly complex issues, creating very high quality government is the only feasible answer. A high quality government managed by a self-aware public. Some sort of ‘surveillance state’ is almost an inevitability as long as we have ubiquitous surveillance technology. Even the array of ‘crypto’ tools touted in recent years has consistently proven to be vulnerable, which isn’t necessarily a bad thing since ubiquitous crypto-technology comes with its own suite of mega-collective headaches. National security and personal data insecurity really are intertwined in both mutually inclusive and exclusive ways. It’s not as if the national security hawk argument that “you can’t be free if you’re dead from [insert war, terror, random chaos things a national security state is supposed to deal with]” isn’t valid. But fears of Big Brother are also valid, as our present situation amply demonstrates. The path isn’t clear, which is why a national security state with a significant private sector component and access to ample intimate details is likely for the foreseeable future whether you like it or not. People err on the side of immediate safety. So we better have very high quality government. Especially high quality regulations for the private sector components of that national security state.
And while digital giants like Google and Facebook will inevitably have access to troves of personal data that they need to offer the kinds of services people want, there’s no reason they can’t be regulated heavily so they don’t become personal data repositories for sale. Which is what they are now.
What do we do about services that people use to run their lives which, by definition, necessitate the collection of private data by a third-party? How do we deal with these challenges? Well, again, it starts with being aware of them and actually trying to collectively grapple with them so some sort of general consensus can be arrived at. And that’s all why we need to recognize that it is imperative that the public surveils the surveillance state along with surveilling the rest of the world going on around us too. A self-aware surveillance state comprised of a self-aware populace of people who know what’s going on with their surveillance state and the world. In other words, part of the solution to ‘Big Data Big Brother’ really is a society of ‘Little Brothers and Sisters’ who are collectively very informed about what is going on in the world and politically capable of effecting changes to that surveillance state — and the rest of government or the private sector — when necessary change is identified. In other words, the one ‘utopian’ solution we can’t afford to give up on is the utopia of a well-functioning democracy populated by a well-informed citizenry. A citizenry armed with relevant facts and wisdom (and an extensive understanding of the history and technique of fascism and other authoritarian movements). Because a clueless society will be an abusively surveilled society.
But the fact that this Cambridge Analytica scandal is a surprise and is being covered largely in isolation from this broader historic and contemporary context is a reminder that we are nowhere near that democratic ideal of a well-informed citizenry. Well, guess what would be a really valuable tool for surveilling the surveillance state and the rest of the world around us and becoming that well-informed citizenry: the internet! Specifically, we really do need to read and digest growing amounts of information to make sense of an increasingly complex world. But the internet is just the start. The goal needs to be the kind of functional, self-aware democracy where situations like the current one don’t develop in a fog of collective amnesia and can be pro-actively managed. To put it another way, we need an inverse of Ithiel de Sola Pool’s vision of a world where benevolent elites use computers and Big Data to manage the rabble and ward off political revolutions. Instead, we need a political revolution of the rabble fueled by the knowledge of our history and world that the internet makes widely accessible. And one of the key goals of that political revolution needs to be to create a world where the knowledge the internet makes widely available is used to rein in our elites and build a world that works for everyone.
And yes, that implicitly implies a left-wing revolution, since left-wing democratic movements are the only kind that have everyone in mind. And yes, this implies an economic revolution that systematically frees up time for virtually everyone so people actually have the time to inform themselves. Economic security and time security. We need to build a world that provides both to everyone.
So when we ask ourselves how we should respond to the growing Cambridge Analytica/Facebook scandal, don’t forget that one of the key lessons the story of Cambridge Analytica teaches us is that there is an immense amount of knowledge about ourselves — our history and contemporary context — that we needed to learn and didn’t. And that includes envisioning what a functional democratic society and economy that works for everyone would look like and building it. Yes, the internet could be very helpful in that process, just don’t forget about everything else that will be required to build that functional democracy.
Here’s a good example of how many of the problems with Facebook are facilitated by the many privacy problems with the rest of the tech sector: A number of Facebook users discovered a rather creepy privacy violation by Facebook. It turns out that Facebook was collecting metadata about the calls and texts people were sending from smartphones running the Facebook app on Google’s Android operating system.
And it also turns out that Facebook used a number of sleazy excuses to “get permission” to collect this data. First, Facebook had users agree to giving such data away by hiding the disclosure in obscure language in the user agreement. Second, the default setting for the Facebook app was to give this data away. Users could turn off this data sharing, but it was never obvious it was on.
Third, it exploited how Android’s user permissions system encourages people to share vast amounts of data without realizing it. This is where this becomes a Google scandal too. If you had the Android operating system, the Facebook app would try to get permission to access your phone contact information, ostensibly for Facebook’s friend recommendation algorithms. On older versions of Android — before version 4.1 (Jelly Bean) — granting an app permission to read contact information during installation also granted access to call and message logs by default. So this was just egregious privacy design by Google, and Facebook egregiously exploited it (surprise!).
And when this loose permissions system was fixed in later versions of Android, Facebook continued to use a loophole to keep grabbing the call and text metadata. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so the Facebook app could continue to gain access to call and SMS data by specifying an earlier Android SDK version. In other words, upgrading the Android operating system didn’t guarantee that upgrades to user data privacy rules would actually take effect on the apps you already had installed. Which, again, is egregious. But that’s what Google’s Android operating system allowed, and Facebook totally exploited it until Google finally closed the loophole in October of 2017.
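The permission behavior described above can be sketched as a simplified model. To be clear, this is illustrative Python, not actual Android framework code: the permission names are real Android permissions and API level 16 is the real Jelly Bean cutoff, but the grant logic is a deliberately simplified assumption about the behavior reported here.

```python
# Simplified model of the pre-/post-Jelly Bean permission behavior described
# above. Illustrative only -- not Android framework code.

JELLY_BEAN_API_LEVEL = 16  # Android 4.1, where call/SMS log permissions were split out

def effective_permissions(requested, target_sdk):
    """Return the set of permissions an app effectively receives.

    Apps *targeting* an SDK earlier than API 16 kept the old behavior even
    when running on a newer OS -- the loophole described above.
    """
    granted = set(requested)
    if target_sdk < JELLY_BEAN_API_LEVEL and "READ_CONTACTS" in granted:
        # Before the split, contact access implicitly carried call/SMS logs.
        granted |= {"READ_CALL_LOG", "READ_SMS"}
    return granted

# An app pinned to an old target SDK keeps the broad grant...
legacy_app = effective_permissions({"READ_CONTACTS"}, target_sdk=15)
# ...while an app targeting a modern SDK gets only what it asked for.
modern_app = effective_permissions({"READ_CONTACTS"}, target_sdk=23)
```

The key point the model captures is that the fix was gated on the app's declared target, not the OS version, which is why installed apps could keep collecting the metadata after users upgraded.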
Note that Apple’s iOS phones didn’t have this issue with the Facebook app because iOS simply does not give apps access to that kind of information. So the permissions Google grants are bad even compared to its major competitor in the smartphone operating system space.
It’s also quite analogous to what Facebook was doing with the “friends permissions” giveaway of Facebook profile information to app developers. In both cases, a major platform had a giant privacy-violating loophole built into it that developers knew about but that the public wasn’t really aware it was signing up for. That’s become much of the modern internet giant business model, and as we can see it’s a model that feeds on itself. Google and Facebook feed information to each other, indicating that the Big Data giants have determined that it’s more profitable to share their data on all of us than to keep it locked up and proprietary.
Recall how Facebook whistle-blower Sandy Parakilas said he remembered Facebook executives getting concerned that they were giving so much of their information on people away to app developers that competitors would be able to create their own social networks. That’s how much data Facebook was giving away. And now we learn that Google’s operating system made an egregious amount of data available to app developers — like metadata on calls and texts — if people gave an app “contact” permissions.
And so we can see that Facebook and Google aren’t just in the ad space. They’re in the data brokerage space too. They’ve clearly determined that maximizing profits just might require handing over the kind of data people assumed these data giants carefully guarded. Instead, they’ve been carefully and steadily handing that data out. Presumably because it’s more profitable:
“To understand what Facebook is defending requires a lot of explanation—and that’s the heart of the problem.”
It’s a key insight: it really is the heart of the problem that simply understanding what Facebook is defending requires a lot of explanation. When Facebook started collecting people’s call and text metadata via its app, it was exploiting the fact that Google’s Android system allowed it to do that in the first place when users gave “contact” permissions to an app (most people probably didn’t assume that giving an app contact permission was also giving away call and text metadata). And then, after Google changed the Android app permissions system and separated the permissions for contact information from the permissions for call and text metadata, Facebook relied on a loophole Google provided where apps that were already installed could continue collecting that data. And none of this was ever made clear to the millions of people using the Facebook app on their Android phones because it was hidden in the dense text of user agreements that no one reads. The convolutedness of the act obscures the act.
And keep in mind that Facebook is claiming that it merely wanted this call and text metadata for its friend recommendation algorithm. Which is, of course, absurd. That data was obviously going to go into the pool of data Facebook is compiling on everyone.
“To put all of that into plain English, Google’s Android OS has its own privacy issues, and coupled with Facebook’s apps, it could’ve made it possible for Facebook users to opt-into the company’s surveillance program without realizing it.”
Facebook and Google working together to share more of what they know about us with each other. That’s basically what happened. It was a team effort.
And as the article notes, when Facebook claims that this was all fine because it was an opt-in option, it ignores the fact that the app used to make it very unclear that opting out was an option at all. The opt-out option was hidden in the settings, and opting in was the default setting people had selected when they installed the app. And it was like that as recently as 2016:
And it’s also all an example of how the ostensibly helpful reasons to collect this personalized data (like making the friend recommendation algorithms better, in this case) are used as an excuse to engage in the personal information equivalent of a smash-and-grab ransacking:
“However, Facebook has turned a convenience into an excuse for grabbing more information that it can combine with everything else to make a perfect psychological and social profile of you, the user. And it has demonstrated that it can’t be trusted to keep that data to itself.”
While Facebook may not have perfect psychological and social profiles of everyone, it probably has the best or nearly the best, with Google possibly knowing more about people. And it’s hard to imagine that this call and text metadata isn’t potentially pretty valuable information for putting together those personal profiles on everyone. So it’s worth noting that this is potentially the same kind of profile data that Facebook gave out to Cambridge Analytica and thousands of other app developers. In other words, this call and text metadata slurping scandal is potentially also part of the Cambridge Analytica scandal, in the sense that the insights Facebook gained from the call and text metadata could have shown up in those profiles Facebook was handing out to app developers like Cambridge Analytica.
Which is a reminder that this new scandal of Google’s Android OS giving Facebook this call and text metadata probably involves a lot more than just Facebook collecting this kind of data. Who knows how many other app developers whose apps requested “contact” permissions also went ahead and grabbed all the call and text metadata?
Also don’t forget that this call and text metadata includes data about the people on the other side of those calls and texts. So Facebook was grabbing data on more people than just the app users. And any other Android developers were potentially grabbing that data too. It’s another parallel with the Facebook “friends permission” loophole exploited by Cambridge Analytica and other Facebook app developers: you don’t have to download these privacy violating apps to be impacted. Simply communicating with someone who does have the privacy violating app will get your privacy violated too.
So as we can see, Facebook doesn’t just have a scandal involving giving private data away. It also has a scandal involving collecting private data too. A scandal that potentially any other Android app developer might also be involved in. Which means there’s probably a black market for this kind of data too. Because Google, like Facebook, apparently couldn’t resist making itself a data-broker. And now all this data is potentially floating around out there. It was a wildly irresponsible act on Google’s part to make that kind of data available under the “contacts” permissions in the Android operating system, but that’s how much Google designed that system around making data collection a priority. Presumably to encourage more app developers to make Android apps. Access to our data is literally part of the incentive structure. It’s really quite stunning. And quite analogous to what Facebook is in trouble for with Cambridge Analytica.
But at least those Facebook friend recommendation algorithms are probably very well powered, so there’s that.
We should probably get ready for a lot more stories like this: Facebook just issued a flurry of new updates to its data-sharing policies. Some of these changes include new restrictions on the data made available to app developers while other changes are focused on clarifying the user agreements that disclose what data is taken.
And there’s a new estimate from Facebook on the number of Facebook profiles grabbed by Cambridge Analytica’s app. It’s gone from 50 million to 87 million profiles:
“Facebook is facing its worst privacy scandal in years following allegations that a Trump-affiliated data mining firm, Cambridge Analytica, used ill-gotten data from millions of users to try to influence elections. The company said Wednesday that as many as 87 million people might have had their data accessed — an increase from the 50 million disclosed in published reports.”
50 million to now 87 million. It’s quite a jump. How high might it get when this is all over? We’ll see.
And beyond that update, Facebook also updated their data-collection disclosure policies. Now they’re actually mentioning things like the grabbing of call and text data off of your smartphone, which they apparently didn’t feel the need to tell people about before:
And note how Facebook’s update on how local privacy laws could affect its handling of “sensitive” data implies that the absence of those local laws means the same “sensitive” data isn’t going to be handled in a sensitive manner. So if you were hoping the big new EU data privacy rules were going to impact Facebook’s policies outside the EU, nope:
And that’s just some of the updates Facebook issued today. And while a number of these updates are pretty notable, perhaps the most notable part of this flurry is that these are updates that actually increase privacy protections, which is not how these updates have normally gone for Facebook in the past.
And now let’s take a look at one of the other disclosures Facebook made today: Remember how Facebook whistle-blower Sandy Parakilas speculated that a majority of Facebook users probably had their Facebook profile information scraped by app developers using exactly the same technique Cambridge Analytica used? Well, it looks like Facebook has very belatedly arrived at the same conclusion:
“Facebook said Wednesday that most of its 2 billion users likely have had their public profiles scraped by outsiders without the users’ explicit permission, dramatically raising the stakes in a privacy controversy that has dogged the company for weeks, spurred investigations in the United States and Europe, and sent the company’s stock price tumbling.”
So a billion or so people probably had their Facebook profile data sucked away by app developers. Facebook apparently just discovered this. And while it’s laughable to imagine that Facebook just suddenly discovered this now, recall how Sandy Parakilas also said executives had an “it’s best not to know” attitude about how this data was used by third-parties, so it’s possible that Facebook technically didn’t officially know this until now because they officially never looked before:
““Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped,” the company wrote in its blog post.”
LOL! They just discovered this and knew nothing about how their massive sharing of profile information with app developers might lead to a massive release of profile data. That’s their story and they’re sticking to it. For now.
And notice how it’s just casually acknowledged that “Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015,” while Facebook is announcing all these new restrictions on the data app developers, or even data brokers, can access. And yet Facebook is acting like this is all some sort of revelation:
And note how Cambridge Analytica whistle-blower Christopher Wylie has already tweeted out that the new 87 million estimate might not be high enough:
“Could be more tbh.” It’s a rather ominous tweet considering the context.
And don’t forget that the original count of the number of people who used the Cambridge Analytica app, ~270,000, hasn’t been updated. That’s still just 270,000 people. So this scandal is giving us a sense of just how many people were likely getting their profile information grabbed by app developers using the “Friends Permission” feature. When it was 50 million people in total, that came out to about 185 friends getting their profiles grabbed for each person who actually downloaded the app. But if it’s 87 million people, that makes it ~322 friends for each Cambridge Analytica app user on average.
Along those lines, it’s worth noting that the average number of friends Facebook users have is 338 while the median number of friends is 200, according to a 2014 Pew Research poll. So if that 87 million number keeps climbing, and therefore the assumed number of friends per user of the Cambridge Analytica app keeps climbing too, at some point we’re going to start getting into suspicious territory and have to ask whether the users of that app were unusually popular or whether Cambridge Analytica was getting data from more than just that app.
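Those per-user ratios are just division over the figures reported above, for anyone who wants to check the arithmetic:

```python
# Back-of-the-envelope arithmetic: total harvested profiles divided by the
# ~270,000 people who actually used the Cambridge Analytica app.
APP_USERS = 270_000

def friends_per_app_user(total_profiles, app_users=APP_USERS):
    """Average number of harvested friend profiles per app user."""
    return total_profiles / app_users

ratio_50m = friends_per_app_user(50_000_000)  # ~185 friends per app user
ratio_87m = friends_per_app_user(87_000_000)  # ~322 friends per app user
```

Which is how a climbing total-profiles estimate, against a fixed 270,000 app users, pushes the implied friends-per-user figure toward (and eventually past) Facebook's own reported averages.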
After all, for all we know Cambridge Analytica may have simply purchased a bunch of data on the Facebook profile black market, something else Sandy Parakilas warned about. So how high might that 87 million number get if Cambridge Analytica was also buying this information from other app developers? Who knows, although at this point, “a billion profiles” can no longer be ruled out, thanks to Facebook’s very belated update today.
And the hits keep coming: Here’s an article with some more information on the disclosure Facebook made on Wednesday that “malicious actors” may have been using a couple of ‘features’ Facebook provides to scrape public profile information from Facebook accounts and associate that information with email addresses and phone numbers. This is separate from the data collection technique used by the Cambridge Analytica app, and thousands of other app developers, to grab the private profile information of app users and their friends.
One technique used by these “malicious actors” was to simply feed phone numbers and email addresses into a Facebook “search” box that would return the Facebook profile associated with that email or phone number. All the public information on that profile could then be collected and associated with that email/phone data. Users had the option of turning off the ability for others to find their profile this way, but it was turned on by default and apparently few people turned it off.
The second technique exploited an account recovery tool: Facebook provided the names, profile pictures, and links to the public profiles themselves of anyone pretending to be a Facebook user who had forgotten how to access their account.
And according to Facebook, this was being done by actors obtaining email addresses and phone numbers of people on the Dark Web and then setting up scripts to automate the process for large numbers of emails and phone numbers, “with few Facebook users likely escaping the scam.” In other words, almost every Facebook user probably had their email and phone number associated with their Facebook account via this method. Also keep in mind that you don’t need to go to the Dark Web to buy lists of email addresses and phone numbers, so the emphasis on the “Dark Web” as the source of this information is likely part of Facebook’s ongoing attempt to ensure that this scandal doesn’t turn into an educational experience for the public on how widespread the data brokerage industry really is and how much information on people is legally, commercially available. In other words, these “malicious actors” were probably, in many cases, operators in the commercial data brokerage market.
And as the article notes, pairing email and phone number information with the kind of information people made publicly available on their profiles yields exactly the kind of data set that identity thieves want to obtain as a starting point for stealing your identity.
The article also includes more information on just what kind of private profile information app developers like Cambridge Analytica were allowed to grab. Because it’s important to note that we still don’t have clarity on exactly what app developers were allowed to take from Facebook profiles. We’ve heard vague descriptions of what was available to them, like Facebook’s ‘profile’ of you (presumably, what Facebook has learned or inferred about you) and the list of what you “liked”. But it hasn’t been clear whether app developers also had access to literally all of your private Facebook posts. Well, based on the following article, it does indeed sound like they potentially did. And a lot of that data is probably available on the Dark Web and other black markets by now, because why not? Facebook made it available and it’s valuable, so why wouldn’t we expect it to be for sale?
And the article makes one more stunning revelation regarding the permissions app developers had to scrape this private information: Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group.
So while Facebook hasn’t yet admitted that it made almost all the private information on people’s Facebook profiles available to identity thieves and any other bad actors for years with little to no oversight, and that this data is probably floating around the Dark Web for sale, it is getting much closer to admitting this given its latest round of admissions:
“But the abuse of Facebook’s search tools — now disabled — happened far more broadly and over the course of several years, with few Facebook users likely escaping the scam, company officials acknowledged.”
Few Facebook users likely escaped the “scam” of using a feature that Facebook turned on by default and that was an obvious, massive privacy violation. A “scam” that was also far less of a privacy violation than what Facebook made available to app developers, but one that still likely impacted almost all Facebook users. And the more information people made available on their public profiles, the more these “scammers” could collect about them:
And then there was Facebook’s account recovery function, which Facebook also made easy to exploit:
And, again, while this kind of information wasn’t necessarily as extensive as the private information Facebook made available to app developers, it was still a very valuable starter kit for identity theft:
And, of course, that ‘identity theft starter kit’ data (phone numbers and emails associated with real names and other publicly available information) could potentially be combined with the private information made available to app developers. Information that apparently included “people’s relationship status, calendar events, private Facebook posts, and much more data”:
So if “people’s relationship status, calendar events, private Facebook posts, and much more data” was made available to app developers, it raises the question: what wasn’t made available?
It’s all a reminder that there is indeed a “malicious actor” who took possession of all your private data and its name is Facebook.
Here’s a series of articles that serve as a reminder that Facebook isn’t just an ever-growing vault of personal data profiles on almost everyone (albeit a very leaky vault). It’s also a medium through which other ever-growing vaults of personal data, in particular those of data brokerage giants like Acxiom, can be merged with Facebook’s vault, ostensibly for the purpose of making Facebook’s targeted ads even more targeted.
This third-party sharing is done through Facebook’s “Partner Categories” program: Facebook advertisers have the option of filtering their Facebook ad targeting based on, for instance, the group of people who purchased cereal, using data from Acxiom’s consumer spending database. As such, the data broker giants that are potentially Facebook’s biggest competitors become Facebook’s biggest partners.
Not surprisingly, merging Facebook’s extensive personal data profiles with the already very extensive personal data profiles held by the data brokerage industry raises a number of privacy concerns. Privacy concerns that are hitting a peak in the wake of the Cambridge Analytica scandal. So, also not surprisingly, Facebook just announced the end of the Partner Categories program over the next six months as part of its post-Cambridge Analytica public relations campaign:
“More specifically, Facebook says it will stop using data from third-party data aggregators — companies like Experian and Acxiom — to help supplement its own data set for ad targeting.”
As we can see, Facebook isn’t just promising to cut off the personal data leaking out of its platforms to address privacy concerns. It’s also promising to cut off some of the data flowing into its platforms. Data from the data brokerage giants flowing into Facebook in exchange for some of the ad money when that data results in a sale:
And while the public explanation for this move is that this is being done to address privacy concerns, there’s also the suspicion that Facebook is willing to make this move simply because Facebook doesn’t necessarily need this third-party data to make its ads more effective. So while cutting out this data-brokerage data is a potential loss for Facebook, that loss might be outweighed by the growing headache of privacy concerns for Facebook that comes from directly incorporating third-party data into its ad algorithms when it can’t control whether or not these third-party data brokerages obtained their own data sets in an ethical manner. In other words, the headache isn’t worth the extra profit this data-sharing arrangement yields:
So is it the case that Facebook is using this Cambridge Analytica scandal as an excuse to cut these data brokers that Facebook doesn’t actually need out of the loop? Well, as the following article notes, it’s not like Facebook doesn’t have the option of buying that data from the data brokers themselves and just incorporating the data into their internal ad targeting models. But Facebook always had that option and still chose to go ahead with this Partner Categories program, so it’s presumably the case that paying outright for that brokerage data is more expensive than setting up the Partner Categories program and giving the brokerages a cut of the ad sales.
As the following article also notes, advertisers will still be able to get that data brokerage information for the purpose of further targeting Facebook users. How so? Because notice the second data set in the above article that Facebook uses for targeting ads: data sets from the advertisers themselves. Like lists of email addresses of the people they want to target. It’s the same Custom Audiences tool that was used extensively by the Trump campaign for its “A/B testing on steroids” psychological profiling techniques. So there’s nothing stopping advertisers from getting that list of email addresses from a data broker and then feeding that into Facebook, effectively leaving the same arrangement in place but in a less direct manner. But it’s less convenient and presumably less profitable if advertisers have to do this themselves. It’s a reminder that partnering means more profits in the business Facebook is in.
Finally, as digital privacy expert Frank Pasquale points out in the following article, there’s no real reason to assume Facebook will actually stand by this pledge to shut down the Partner Categories program over the next six months. It might just quietly start it up again in some other form, or simply reverse the decision after the public’s attention shifts away.
So while there are valid questions as to why Facebook is making this policy change, there are unfortunately also valid questions over whether or not this policy change will make any difference and whether or not Facebook will even make this policy change at all:
“Facebook said late Wednesday that it would stop data brokers from helping advertisers target people with ads, severing one of the key methods marketers used to link users’ Facebook data about their friends and lifestyle with their offline data about their families, finances and health.”
Yep, one of the key methods marketers used to link Facebook data with all the offline data that these data brokerages were able to collect just might get severed. It’s potentially a big deal for Facebook and the advertising industry. Or potentially not. That’s part of what makes this such a fascinating move by Facebook: It’s potentially quite significant and potentially inconsequential:
And note how this cooperation with the brokerages only grew during the same period in which Facebook cut off, in 2015, the “friends permissions” privacy loophole exploited by Cambridge Analytica’s app and thousands of other apps. It’s a reminder that even when Facebook is getting better in some ways, it’s probably getting worse in others:
And while some of the data gathered by the data brokerages inevitably overlaps with what Facebook also gathers on people, there are quite a few categories of ‘offline’ data these brokers systematically gather that Facebook can’t gather without seeming super extra creepy. Data brokers pull from places like voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, and car dealership records. Imagine if Facebook directly gathered that kind of offline information about everyone instead of buying it from the brokerages or setting up arrangements like the Partner Categories program. Imagine how incredibly creepy it would be if Facebook had an ‘offline data collection’ division. It’s a reminder that Facebook and the data brokers really are engaged in a joint ‘online’/‘offline’ data gathering and aggregation effort. “Partner Categories” is an appropriate name, because it’s a real partnership, and one that’s important to both parties, since it would be a bigger PR nightmare if Facebook had to collect all this offline data itself:
And, of course, the Custom Audiences tool that lets advertisers feed in lists of things like email addresses to target specific audiences — used extensively by the 2016 Trump campaign — might make the decision to end the Partner Categories program moot:
And as Frank Pasquale points out, we also don’t know enough about what Facebook knows about us to know how much of an impact ending the Partner Categories program will have on the privacy violations built into Facebook’s whole business model. It’s entirely possible this change will make fusing data broker data with Facebook data less convenient and less profitable, yet leave it just as privacy-violating, both because the present-day setup can be replicated indirectly (by Facebook advertisers coordinating with the data brokers separately) and because Facebook might know almost everything the data brokers know just from its own data collection methods. In other words, this could be largely cosmetic. And, as Pasquale also pointed out, Facebook might just change its mind and not end the program once public attention wanes:
So is this announced policy change actually going to happen? Will it matter if it does? It’s a pretty significant question, and not an easy one to answer given that Facebook’s algorithms are largely a black box.
That said, Josh Marshall might have a significant data point for us with regard to how important the current third-party data-sharing arrangement with the data brokerage giants really is to the performance of Facebook’s ad targeting: starting in early March, advertisers noticed a significant drop-off in the targeting quality of Facebook’s ads. Facebook’s ad targeting quality just got worse for some reason. And this was early March, which is before the Cambridge Analytica story hit in mid-March but possibly after Facebook knew the story was coming. So the timing of this observation is interesting, and Marshall has a hunch: Facebook was already experimenting with how its internal advertising algorithm would operate without direct access to the data brokerages, and potentially without access to a lot of other data sources, in anticipation of the new EU regulations and possible new regulations from the US Congress. In other words, Facebook already saw the writing on the wall before the recent wave of Cambridge Analytica revelations went public, had already started the shift to an in-house ad targeting algorithm, and it shows.
Now, it’s possible that Josh Marshall is correct that Facebook has already started implementing an internal-only ad targeting algorithm, and that it’s noticeably worse now but will get better in the long run as Facebook improves its third-party-limited algorithm and the advertisers and brokers adapt to a new, less direct data-sharing arrangement. Maybe everyone will adapt and get back up to par. Time will tell.
But if not, and if the loss of these data-sharing arrangements makes Facebook’s ads less effective in the long run (maybe because it’s much more efficient to directly funnel the broker data, and a whole bunch of other third-party data, into Facebook, and the indirect methods can’t replicate that arrangement), then it’s worth noting that this downgrade in Facebook’s ad targeting quality would reflect a real form of privacy enhancement and should generally be cheered. It’s also a statement on the public utility of the overall data brokerage industry, which is dedicated to collecting, aggregating, and selling personal data profiles. There’s a lot of negative utility in this industry, and this wave of Facebook scandals is just one facet of it. So if Marshall’s guess is correct and the observable drop-off in Facebook ad quality reflects a decision by Facebook to preemptively take third-party data out of its ad targeting algorithms in anticipation of the new EU data privacy laws and future congressional action in the US, let’s hope that drop-off is sustained for our privacy’s sake:
“For more than a year, Facebook has faced a rolling public relations debacle. Part of this is the American public’s shifting attitudes toward Big Tech and platforms in general. But the driving problem has been the way the platform was tied up with and perhaps implicated in Russia’s attempt to influence the 2016 presidential election. Users’ trust in the platform has been shaken, politicians are threatening scrutiny and possible regulation, and there’s even a campaign to get people to delete their Facebook accounts. All of this is widely known and we hear more about it every day. But most users, most people in tech and also Wall Street (which is the source of Facebook’s gargantuan valuation) don’t yet get the full picture. We know about Facebook’s reputational crisis. But people aren’t fully internalizing that the current crisis poses a potentially dire threat to Facebook’s core business model, its core advertising business.”
As Josh Marshall points out, if Facebook really does have to turn off the third-party data spigot, the question of what this will actually do to the quality of its ad targeting is massive. The importance of the direct third-party data-sharing arrangement is one of the big questions swirling around Facebook for both Facebook’s investors (from a price-per-share standpoint) and the public (from a privacy standpoint). The fact that the EU’s new data privacy rules are hitting Facebook in Europe right when the Cambridge Analytica scandal starts playing out in the US and threatens to snowball into a larger scandal about Facebook’s business model in general just makes it a bigger question for Facebook.
And it’s a crisis for Facebook that will be numerically reflected in one key measure pointed out by Marshall: the number of advertisements that need to be shown to trigger a sale on Facebook compared to other platforms. It’s a 5‑to‑1 ratio for Facebook vs a 30‑to‑1 ratio for other digital platforms and 100‑to‑1 for traditional ads. Facebook really is much better at targeting its ads than even its digital peers. So when Facebook gets worse at targeting its ads, that amounts to real privacy gains, because it’s one of the biggest and best cutting-edge ad targeting platforms. This is why Facebook is worth over $450 billion:
“If old-fashioned advertising shows my advertisement to 100 people for every actual buyer and other digital platforms show it to 30 people and Facebook shows it to 5 people, Facebook’s ads are just worth a lot more”
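Those quoted ratios can be restated as implied conversion rates, which makes the value gap concrete (illustrative arithmetic only, using the figures from the quote):

```python
# Ads shown per sale, per the quoted figures, restated as conversion rates
# and as relative value against a Facebook ad impression.
ads_per_sale = {
    "Facebook": 5,
    "other digital platforms": 30,
    "traditional advertising": 100,
}

fb_ratio = ads_per_sale["Facebook"]
for platform, ratio in ads_per_sale.items():
    conversion_pct = 100 / ratio          # share of impressions ending in a sale
    value_multiple = ratio / fb_ratio     # how many of these ads equal one Facebook ad
    print(f"{platform}: ~{conversion_pct:.1f}% conversion, "
          f"{value_multiple:.0f} impressions per Facebook impression")
```

In other words, by these numbers a Facebook ad impression converts at roughly 20%, six times the rate of other digital platforms and twenty times that of traditional advertising, which is the arithmetic behind "just worth a lot more."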
And that’s why this is a pretty big story if there’s a real drop in Facebook’s ad targeting quality. Facebook is wildly ahead of almost all of its competition. Only Google and governments can compete with what Facebook knows about us all. So if Facebook effectively knows less about us, as reflected in the drop in ad targeting observed starting in early March, that reflects a real de facto increase in public privacy. And it’s also a big story from a business standpoint because it’s not just about Facebook; it’s also about the entire data brokerage industry. There’s a large part of the modern US economy potentially tied into this Facebook scandal. A scandal that now extends beyond the Cambridge Analytica app situation and has led to Facebook announcing the phaseout of its Partner Categories program. Is this ushering in a sea change in the data brokerage industry? If so, that’s big.
Facebook was going to have a sea change in how it did business in the EU thanks to the new data privacy laws, but it’s this Cambridge Analytica scandal that appears to be driving the likelihood of a sea change in the US market too. And that’s part of why it’s notable if Facebook really did start rejiggering its algorithms without that third-party data in early March, potentially in anticipation of this flurry of bad press, and then the ad targeting suddenly got worse. Because if it turns out that the loss of the third-party data makes Facebook’s ad targeting worse, we should note that. And ask ourselves whether making Facebook even worse at targeting ads would be desirable from a public privacy perspective. The more Facebook sucks at ads, the better Facebook is for everyone from a privacy perspective. It’s one of the fundamental contradictions of Facebook’s business model that this Cambridge Analytica scandal risks exposing to the public:
And as Josh Marshall points out, the impact of the loss of this third-party data on Facebook’s ad targeting algorithms is largely speculative because we know so little about what Facebook knows about us without that third-party data. Facebook is a black box:
But we might get an answer to the question of whether Facebook needs that third-party data to achieve the ad targeting proficiency it has today, thanks to those new EU regulations and the real possibility of some sort of congressional action as a result of the Cambridge Analytica scandal. And that, of course, is why Josh Marshall suspects that what we’re seeing in the reported drop in Facebook’s ad targeting is Facebook already preparing for coming regulation:
And if Josh Marshall’s hunch is correct and Facebook really did start rejiggering its ad targeting algorithms in anticipation of coming congressional regulation (which points toward Facebook anticipating a very negative public response to the yet-to-be-released Cambridge Analytica story), we have to wonder just how many other privacy-violating schemes Facebook has been up to with other third parties beyond data brokerage giants like Acxiom or Experian. What other classes of third-party providers might Facebook be incorporating into its algorithms?
Well, here’s a chilling example of the kind of third-party data-sharing partnership Facebook might be interested in: hospital record metadata. Like what diseases people have, the medications they’re on, and when they visited the hospital. From several major hospitals, including Stanford Medical School’s.
Facebook says the data would be for research purposes only, by the medical community, but Facebook would have been able to deanonymize it. And it’s kind of obscene, because Facebook says the plan for protecting everyone’s privacy is to use “hashing”: patients would be assigned a seemingly random anonymous identifier generated by running something like the patient’s name through a mathematical algorithm, and only the medical research community would have access to the anonymized data, so supposedly no one’s privacy would be at risk. But using hashing to match the Facebook data set and the hospital data set means Facebook can match up the hospital data with its own users, since anyone who already holds the real names can recompute the same hashes. Facebook is trying to get deanonymizable patient health data from hospitals. It’s a disturbing example of the kind of third-party data Facebook is interested in.
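To see why hashing doesn’t protect patients from Facebook itself, here’s a minimal sketch (the names, record fields, and exact hashing scheme are all hypothetical, since the actual matching protocol wasn’t disclosed): any party that already holds the real identifiers can recompute the same hashes and re-link the “anonymized” records.

```python
import hashlib

def pseudonym(identifier: str) -> str:
    """Turn an identifier into a seemingly random token via a one-way hash."""
    return hashlib.sha256(identifier.lower().encode()).hexdigest()[:12]

# The hospital shares records keyed only by the hashed identifier,
# which looks anonymous to outside researchers.
hospital_records = {
    pseudonym("Jane Doe"): {"condition": "heart disease", "er_visits": 3},
}

# But a platform that already knows its users' real names can hash its
# own user list and match the tokens, re-identifying the patients.
platform_users = ["John Smith", "Jane Doe"]
token_to_name = {pseudonym(name): name for name in platform_users}

for token, record in hospital_records.items():
    name = token_to_name.get(token)
    if name:
        print(f"re-identified: {name} -> {record}")
```

The researchers, who hold neither the patient list nor the user list, see only opaque tokens; the party that holds the names sees everything. That asymmetry is the whole problem with “hashing” as a privacy guarantee here.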
And there’s no real reason to believe Facebook wouldn’t wildly abuse the data, probably turning the patients of those hospitals into focus groups for algorithmic testing that uses their medical records to pitch ads. Which will probably freak those people out. Facebook + hospital data = yikes.
And this plan was being actively pursued just last month. The Cambridge Analytica scandal disrupted the talks, and the plan was “put on pause” by Facebook last week in response to the outrage. Still, that’s just “on pause”. So it sounds like the plan is still “on” and we should expect a continued push into the medical record space by Facebook.
Facebook’s pitch was to combine health system data on patients (such as: the person has heart disease, is age 50, takes 2 medications, and made 3 trips to the hospital this year) with Facebook’s data on the person (such as: the user is age 50, married with 3 kids, English isn’t their primary language, and they actively engage with their community by sending a lot of messages). The research project would then try to use this combined information to improve patient care in some way, with an initial focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.
In other words, Facebook was setting up a research project dedicated to developing hospital decision-support tools that utilize Facebook’s pool of personalized data on people. Which is a path to plugging Facebook into the hospital system. Yikes:
“Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.”
Patient data from hospitals. It’s Facebook’s brave new third-party data frontier. Currently under the auspices of medical research, but it’s research for the purpose of demonstrating Facebook’s utility in medical decision-support, which is to say, research to demonstrate the utility of sharing patient information with Facebook. That was the general pitch Facebook was making to several major US hospitals, including Stanford. And it’s a plan that, according to Facebook, was being pursued last month and has merely been “put on pause” in the wake of the Cambridge Analytica scandal:
The way Facebook pitched it, the anonymized data from Facebook and the anonymized data from the hospitals would be combined and used for medical community research (research into Facebook as a patient care decision-support partner):
But what Facebook doesn’t acknowledge in that pitch is that the technique it’s proposing to anonymize the data only anonymizes it to everyone except the hospital and Facebook. Facebook can easily deanonymize the hospital data if it gets its hands on it. The medical researchers aren’t the privacy threat. The data really is anonymized for them, because they don’t know the patients or the Facebook profiles; they just see hashed IDs. But Facebook sure as hell is a privacy threat, because it’s Facebook with its hands on the deanonymizable data:
And note how the issue of patient consent didn’t come up in these early discussions, suggesting that Facebook is trying to work out an arrangement where people wouldn’t know their patient record data was handed over to Facebook:
And, of course, it was Facebook’s mad science “Building 8” R&D group that was behind this proposal. The same group behind projects like mind-reading human-to-computer interface technology (so Facebook can literally data-mine your brain activity). And the same R&D group that was until recently led by former DARPA chief Regina Dugan, who left last year with a cryptic message about stepping away to be “purposeful about what’s next, thoughtful about new ways to contribute in times of disruption.” This is next-generation Facebook stuff:
It’s a reminder that Facebook’s R&D teams are probably working on all sorts of new ways to tap into data-rich third-party sources. Hospitals are merely one particularly data-rich example of the problem.
And if Facebook really does cut third-party data brokers out of its algorithms, let’s not forget that Facebook will probably use that as an excuse, and an imperative, to reach out to all sorts of niche third-party data providers for direct access. Like hospitals. Don’t forget that the above plan was merely “put on pause”. They want to do more things like this going forward. And why not, if they can get hospitals to give this kind of data out. And any other kind of institution they can convince to hand over our data. This is how Facebook can go “offline”: with direct data-sharing services, like patient care decision-support services, one field of institutions at a time. Hospitals are just one example.
So given that Facebook faces potential congressional action and new regulations, it’s going to be important to keep in mind that those regulations will have to cover more than just the data brokerage giants like Experian. Because Facebook is interested in what you tell your doctor too. And presumably lots of other ‘services’ where it fuses its data about you with another data source for combined decision-support. The more Facebook promises to cut out third-party data, the more it’s going to try to directly collect “offline” data by fusing itself with other facets of our lives. It’s really quite disturbing.
And who knows who else in the data brokerage industry might try to follow Facebook’s lead. Will Google also want to get into the patient care decision-support market? Third-party data-brokerage decision-support could potentially be applied to a lot more than just the medical sector. It’s a creepy new profit frontier.
Beyond that, how else might Facebook attempt to replace the “offline” third-party data it’s pledging to phase out over the next six months? We’ll see, but we can be sure that Facebook is working on something.
Here’s a reminder that the proposal to combine Facebook data with patient hospital data (ostensibly for patient care decision-support purposes, but also likely so Facebook can get its hands on patient medical record information) isn’t the only project Facebook has put ‘on pause’ (but not canceled) in the wake of the Cambridge Analytica scandal. For example, there’s a new hardware product for your home that Facebook is planning on rolling out later this year.
It’s a “smart speaker” like the kind Amazon and Google already have on sale. A smart speaker that will sit in your home, listen to everything, answer questions, and schedule things. Potentially with cameras. Your personal home assistant. That’s the market Facebook is getting into later this year. But thanks to the public relations nightmare Facebook is experiencing at the moment, the announcement of this new smart speaker at its developer conference in May has been cancelled. It sounds like the rollout is still planned for this fall, though. So that smart speaker is a useful reminder to the US public and regulators of the future direction Facebook is planning on heading in: in-home “offline” data collection using internet-connected smart devices:
“Facebook Inc. has decided not to unveil new home products at its major developer conference in May, in part because the public is currently so outraged about the social network’s data-privacy practices, according to people familiar with the matter.”
Yeah, it’s understandable that public outrage over years of deceptive and systematic mass privacy violations might complicate the rollout of your new in-home “smart speakers”, which will be listening to everything happening in your home and sending that information back to Facebook. A pause on that grand unveiling does seem prudent.
And yet Facebook still plans to actually launch its new smart speakers later this year:
And that planned roll out of these smart speakers later this year is just one element of Facebook’s plan to “become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers”:
“The devices are part of Facebook’s plan to become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers.”
Yep, Facebook has all sorts of plans to become more intimately involved with your everyday life. Using artificial intelligence. And smart speakers. And no privacy concerns, of course.
And in fairness, this move to sell consumer devices that monitor you for the purpose of offering useful services with the data they collect (and for selling you ads and profiling you) merely follows in the footsteps of companies like Google and Amazon with their wildly popular smart speakers. As the following article notes, a recent Gallup poll found that 22 percent of Americans use “home personal assistants” like Google Home or Amazon Echo. That is a huge percentage of the American public already handing out exactly the kind of data Facebook is trying to collect with its new smart speaker.
And as the following article also notes, if the creepy patents Google and Amazon have already filed are any indication of what we can expect from Facebook, we should expect Facebook to work on things like incorporating smart speakers into smart home AI systems for monitoring children, complete with whisper detection capabilities and the ability to issue verbal warnings to the kids. The smart home would replace the television as the technological parent of today’s kids, and one of the mega-corporations selling this technology would get audio and visual access to your home. Yes, the existing Google and Amazon patents would incorporate visual data too, since these smart speakers tend to have cameras.
And one patent involves a scenario where the camera on a smart speaker recognizes a t‑shirt on the floor, recognizes a picture of Will Smith on the shirt, ties that to a database of that person’s browsing history to see if they looked up Will Smith content online, and then serves up targeted ads if it finds a Will Smith hit. That’s a real patent from Google, and that’s the kind of Orwellian patent race that Facebook is quietly getting ready to join later this year:
“While the ad riffed on what Alexa can say to users, the more intriguing question may be what she and other digital assistants can hear — especially as more people bring smart speakers into their homes.”
It’s one of the conundrums of the smart speaker business model: it’s obvious these smart speaker manufacturers would love to just collect all the information they can about what people are saying and doing, but they need to maintain the pretense of not doing that in order to get people to buy their devices. So it’s no surprise that Google and Amazon routinely make it clear that their devices are only recording information after they’ve been activated by the users. But as these patents make clear, there are all sorts of home life surveillance applications that these companies have in mind. Like the smart home child monitoring system, with whisper detection capabilities and mischief-detecting AI capabilities:
“One application details how audio monitoring could help detect that a child is engaging in “mischief” at home by first using speech patterns and pitch to identify a child’s presence, one filing said. A device could then try to sense movement while listening for whispers or silence, and even program a smart speaker to “provide a verbal warning.””
Listening for the mischievous whispers of children and issuing a verbal warning. Those are the kinds of capabilities companies like Google, Amazon, and now Facebook are going to be investing in. And it will probably be very popular, because a smart home system that literally watches the kids would be a very handy tool for parents to have. But it’s going to come at the cost of opening up our homes to monitoring by one of these data giants. And that’s insane, right?
Another patent noted how the smart speakers could detect medical conditions from your voice, like detecting coughing, sneezing, and breathing rate. And that’s just an example of the kind of personal data these devices are clearly capable of gathering, and they’re only going to get better at it:
“The same application outlines how a device could “recognize a T‑shirt on a floor of the user’s closet” bearing Will Smith’s face and combine that with a browser history that shows searches for Mr. Smith “to provide a movie recommendation that displays, ‘You seem to like Will Smith. His new movie is playing in a theater near you.’””
The smart speaker camera is going to interface things it sees in your home with your browser history. For ad targeting. That’s a patent.
It’s why the warnings from Consumer Watchdog’s Jamie Court that these consumer home devices are really just home life spyware should be heeded. Because it’s pretty obvious that the plan is to turn these things into home activity monitoring devices. And with 22 percent of Americans saying they use a “home personal assistant” in a recent Gallup poll, that really does make the coming era of smart device home monitoring a public privacy nightmare:
Of course, both Google and Amazon assure us that their devices are only recording audio after they’re triggered. And it’s only being used to improve the user experience and make it more personalized:
And while Google assures us those voice recordings will only be used to personalize the experience, Google’s user agreement includes the possibility of sending transcripts of what people say to third-party service providers. And it “generally” won’t send audio samples to those third-party providers. It’s an example of how little audio and visual snippets of people’s home life are becoming the new “mouse click” of consumer data collected and sold in exchange for a digital service:
And it’s not like these patents are necessarily future privacy nightmares. They’re potentially present privacy nightmares if it’s the case that these devices are actually just collecting data all the time in secret. And in a number of documented cases that’s been exactly what happened, including a murder case partially solved by an Amazon Echo with a propensity to start recording randomly:
And that’s all why better consumer regulation in this area really is called for, because there’s no way consumers can realistically navigate this technological landscape:
And that’s one of the big questions that really should be asked in the wake of the Cambridge Analytica scandal: does the US need something like a Food and Drug Administration for data privacy for devices? Something far more substantial than the regulatory infrastructure that exists today, dedicated to ensuring transparency of data collection practices? It seems like the answer is obviously yes. And if the Cambridge Analytica scandal isn’t evidence enough, those Orwellian patents should suffice.
And as the Cambridge Analytica scandal also reminds us, we can either wait for the data abuses to happen and only belatedly deal with the problem or we can deal with it proactively. And dealing with it proactively realistically involves something like an FDA for data privacy.
But as we also just saw with those creepy patents, especially the child monitoring/scolding patent, consumers have much more than data privacy concerns with the world of smart devices Google and Facebook and Amazon have in mind. That future is going to involve devices that are literally raising the kids. Move over television, it’s parenting brought to you by smart home AIs and Silicon Valley.
And let’s also not forget one of the other lessons that we can take from the Cambridge Analytica scandal: the data collected by these smart devices isn’t just going to be collected by Google and Facebook and Amazon. Some of that data is going to be collected by all the third-party app developers too. Home life, brought to you by Google/Facebook/Amazon. That’s going to be a thing.
At the same time it’s undeniable that there will be very positive applications for this kind of technology. And that’s why it’s such a shame companies with the track record of Facebook and Google and Amazon are the ones leading this kind of technological revolution: like much technology, the consumer home smart device technology is heavily reliant on trust in the manufacturer and trust that the manufacturer won’t screw things up and turn their device into a privacy nightmare. That’s not the kind of situation where you want Google, Facebook, and Amazon leading the way.
So that’s all something to keep in mind when Facebook doesn’t talk about its upcoming smart speakers at its annual developers conference next month.
Here’s a fascinating angle to the Cambridge Analytica scandal that involves an Eastern Ukrainian politician with pro-EU leanings and ties to Yulia Tymoshenko and the Azov Battalion:
It turns out Cambridge Analytica outsourced the production of its “Ripon” psychological profiling software to a separate company, AggregateIQ (AIQ). AIQ was founded with the help of Cambridge Analytica co-founder/whistle-blower Christopher Wylie, so it’s basically a subsidiary of Cambridge Analytica. But they were technically separate companies, and it turns out that AIQ could end up playing a big role in an investigation into whether UK election laws were violated by the “Vote Leave” camp in the lead-up to the Brexit vote.
It looks like the “Vote Leave” camp basically secretly spent more than it legally could, using AIQ as the vehicle for doing so. Here’s how it worked: there was an official “leave” political campaign, but there were also third-party pro-leave campaigns. One of those was Leave.EU. In 2016, Robert Mercer offered Leave.EU the services of Cambridge Analytica for free, and Leave.EU relied on Cambridge Analytica’s services for its voter influence campaign.
The official Vote Leave campaign, on the other hand, relied on AIQ for its data analytics services. Vote Leave eventually paid AIQ roughly 40 percent of its £7 million campaign budget. Here’s where the illegality came in: Vote Leave also ended up gathering more cash than British law legally allowed it to spend. Vote Leave could legally donate that cash to other campaigns, but it couldn’t then coordinate with those campaigns. And that’s exactly what it looks like Vote Leave did. About a week before the EU referendum, Vote Leave inexplicably donated £625,000 to Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. Grimes then immediately gave a substantial amount of the cash he received to AIQ. Vote Leave also donated £100,000 to another Leave campaign called Veterans for Britain, which then paid AIQ precisely that amount. So Vote Leave was basically using these small ‘leave’ groups as campaign money laundering vehicles, with AIQ as the final destination of that money.
That’s all why AIQ is now the focus of British investigators. AIQ’s role in this came to light in part from thousands of pages of code discovered by a cybersecurity researcher at UpGuard on the web page of a developer named Ali Yassine, who worked for SCL Group. Within the code are notes showing that SCL had requested the code be turned over by AIQ’s lead developer, Koji Hamid Pourseyed.
AIQ’s contract with SCL stipulates that SCL is the sole owner of “Ripon”, Cambridge Analytica’s campaign platform. The documents also include an internal wiki where AIQ developers discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” It’s a reminder that the story of Cambridge Analytica isn’t just a story about the Trump campaign or the Brexit vote. It’s also about the Republican Party’s political analytics in general.
Also included in the discovered AIQ files were notes related to active projects for Cruz, Abbott, and a Ukrainian oligarch, Sergei Taruta.
So who is Sergei Taruta? Well, he’s a Ukrainian billionaire and co-founder of the Industrial Union of Donbass, one of the largest companies in Ukraine. He was appointed governor of the Donetsk Oblast in Eastern Ukraine by Petro Poroshenko in March of 2014 before being fired in October of 2014.
Taruta went on to get elected to parliament, where he remains today. He recently co-founded the “Osnova” political party, which describes itself as populist and a promoter of “liberal conservatism” (presumably “liberal” in the libertarian sense). It’s suspected by some that Rinat Akhmetov, Ukraine’s wealthiest oligarch and another Eastern Ukrainian who straddles the line between backing the Kiev government and maintaining friendly ties with the pro-Russian segments of Eastern Ukraine, is also one of the party’s backers. Akhmetov was a significant backer of Yanukovych’s Party of Regions and remains a dominant figure in the Opposition Bloc today. It was Akhmetov who initially hired Paul Manafort back in 2005 to act as a political consultant.
It’s reportedly pretty clear that Taruta’s Osnova party is designed to splinter away ex-supporters of Viktor Yanukovych’s Party of Regions, based on the politicians who have already declared they are going to join it. And yet as a politician Taruta is characterized as having never really tried to cozy up to the pro-Russian side, and he has a history of supporting pro-EU politicians. In 2006 he supported Viktor Yuschenko over Viktor Yanukovych. In 2010 he backed Yulia Tymoshenko over Viktor Yanukovych.
So Taruta is a pro-EU Eastern Ukrainian politician, which is notable because he’s not the only pro-EU Eastern Ukrainian politician to be involved with entities and figures in the #TrumpRussia orbit. Don’t forget about Andreii Artemenko, the Ukrainian politician who was involved in that ‘peace plan’ proposal with Michael Cohen and Felix Sater — a proposal that may have been part of a broader offer made to Russia over Ukraine, Syria, and Iran — and how Artemenko was a pro-EU member of the far right “Radical Party” with ties to Right Sector. Artemenko headed up the Kiev department of Yulia Tymoshenko’s Batkivshchyna party back in 2006 and was serving in a coalition headed by Tymoshenko.
Also recall that the figure who appears to have arranged the initial contact between Andreii Artemenko and Michael Cohen and Felix Sater was Alexander Oronov, the father-in-law of Michael Cohen’s brother. And Oronov himself co-owned an ethanol plant with Viktor Topolov, another Ukrainian oligarch who was Viktor Yuschenko’s coal minister and who became an assassination target of Semion Mogilevych’s mafia organization. One of Topolov’s partners who was also targeted by Mogilevych, Slava Konstantinovsky, ended up forming and joining one of the “volunteer battalions” fighting the separatists in the East.
So now we learn that AIQ (so, basically Cambridge Analytica) is doing some sort of work for Sergei Taruta, putting another Eastern Ukrainian oligarch politician with pro-EU leanings in the orbit of this #TrumpRussia scandal.
So what kind of work did AIQ do for Taruta? That’s unclear. But it seems reasonable to assume that it’s work involving Taruta’s new party in Ukraine and its attempts to splinter off former Party of Regions voters.
But as we’re also going to see, Sergei Taruta has been doing some lobbying work in Washington DC. Rather curious lobbying work: it turns out Taruta was at the center of a bizarre ‘congressional hearing’ that took place in the US Capitol last September. This hearing focused on corruption allegations Taruta has been promoting for over a year against the National Bank of Ukraine, the country’s central bank.
There were two Ukrainian television stations covering the event and pretending it was a real congressional hearing. Former CIA director James Woolsey, who was briefly part of the Trump campaign, was also at the event, along with former Republican House member Connie Mack, who is now a lobbyist. Mack was basically pretending to speak on behalf of the US Congress, expressing outrage over Taruta’s corruption allegations for the Ukrainian television audiences while expressing his resolve to investigate them. Rep. Ron Estes, a freshman Republican, booked the room in the US Capitol for Mack and the lobbying firm. Estes’s office later said it won’t happen again.
And there’s another twist to this strange attack on the National Bank of Ukraine: according to Vox Ukraine, a number of the criticisms Taruta brings against the bank are based on distortions and half-truths. In other words, it doesn’t appear to be a genuine anti-corruption campaign. So what is Taruta’s motivation? Well, it’s notable that his criticism of the National Bank of Ukraine extends back to the actions of its previous chair, Valeriya Gontareva (Hontareva). Gontareva was appointed chairman of the bank in June of 2014. And one of her first big moves was the government takeover of Ukraine’s biggest commercial bank, Privatbank. Privatbank was co-founded by Ihor Kolomoisky, another Eastern Ukrainian oligarch.
Ihor Kolomoisky was appointed governor of the Eastern oblast of Dnipropetrovsk at the same time Taruta was appointed governor of Donetsk. Kolomoisky has been supporting the Kiev government in the civil war by financially supporting a number of the volunteer battalions, including directly creating the large private Dnipro Battalion. As we’ll see, both Kolomoisky and Taruta reportedly supported the neo-Nazi Azov Battalion according to a 2015 Reuters report. In other words, Kolomoisky is an Eastern Ukrainian oligarch with ties to the far right, kind of like Andreii Artemenko.
Kolomoisky wasn’t happy about the takeover of Privatbank. When Gontareva presided over the bank’s nationalization, its accounts were missing more than $5 billion in large part because the bank lent so much money to people with connections to Kolomoisky. After the bank takeover, Gontareva received numerous threats. On April 10, 2017, she announced at a press conference that she was resigning from her post.
So it looks like Sergei Taruta might be waging an international PR battle against the National Bank of Ukraine as part of a counter-move on behalf of Ihor Kolomoisky and the Privatbank investors.
And then there’s the person who actually organized this fake congressional hearing. A little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt. Motkin is the founder and president of StrategEast, a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.”
That’s all some new context to factor into the analysis of Cambridge Analytica and the forces it was working for: one of its clients is a pro-EU Eastern Ukrainian oligarch who just set up a political party designed to appeal to former Yanukovych supporters.
Ok, so first, let’s look at the story of Cambridge Analytica and AggregateIQ (AIQ), the Cambridge Analytica offshoot that was used both to develop the GOP’s “Ripon” analytics software and to act as the analytics firm for the Vote Leave campaign. And the work AIQ was doing for Vote Leave was apparently so valuable that Vote Leave secretly laundered almost a million pounds through two smaller ‘leave’ groups in order to get that money to AIQ and secretly exceed the legal spending caps. And that’s why the discovery of thousands of AIQ documents by a cybersecurity firm is so politically significant in the UK right now. But as those documents also reveal, AIQ was doing work for other clients: Texas Governor Greg Abbott, Texas Senator Ted Cruz, and Ukrainian oligarch Sergei Taruta:
“A little-known Canadian data firm ensnared by an international investigation into alleged wrongdoing during the Brexit campaign created an election software platform marketed by Cambridge Analytica, according to a batch of internal files obtained exclusively by Gizmodo.”
As we can see, AIQ was the under-the-radar SCL subsidiary that actually created “Ripon”, the political modeling software Cambridge Analytica was offering to clients. Cambridge Analytica co-founder/whistle-blower Christopher Wylie helped found the firm. AIQ co-founder Jeff Silvester admits that Wylie was involved in AIQ landing its first big contract but asserts that Wylie was never closely involved with the company. And Silvester also admits that the company had a contract with SCL in 2014 but says it hasn’t worked with SCL since 2016. So AIQ is officially acting like it’s not really an SCL offshoot at this point:
And based on AIQ’s contract with SCL, we have a better idea of when exactly AIQ’s work with SCL ended in 2016: the code found by UpGuard was uploaded to the code-repository website GitHub in August of 2016. That suggests that was the point when the code was effectively handed off from AIQ to SCL. And August of 2016, it’s important to recall, is the same month that Steve Bannon, a Cambridge Analytica company officer — and “the boss” according to Wylie — went to work as campaign manager of the Trump campaign. So you have to wonder if that’s a coincidence or a reflection of concerns over this SCL/Cambridge Analytica/AIQ nexus getting some unwanted attention:
And in those discovered AIQ documents are notes on projects AIQ was doing for Cruz, Abbott and Taruta. Along with notes on a project for the GOP called The Database of Truth:
AIQ is making the GOP a “Database of Truth”. Great.
And that sounds like a separate system from Ripon. The Database of Truth appears to focus on the kind of data found in data brokerages — state voter files, consumer data, third party data providers, etc. — whereas Ripon software appeared to be specifically focused on the kind of psychological profiling Cambridge Analytica was specializing in:
And as we’ve heard from the Trump campaign and its assertions that the Cambridge Analytica software wasn’t actually very useful, the Cruz campaign is also calling this Ripon software just “vaporware”. Denials of the effectiveness of Cambridge Analytica’s psychological profiling methods have been one of the across-the-board assertions we’ve seen from the people involved with this story:
And while everyone involved with Cambridge Analytica has been claiming it’s largely useless, it’s hard to ignore the Brexit scandal that involved Vote Leave using two outside groups to launder almost a million pounds to AIQ for AIQ’s analytics services in excess of the legal spending caps. That’s quite a vote of confidence by Vote Leave:
As we can see, AIQ is an important entity in terms of understanding the broader scope of the kind of work and clients this SCL/Cambridge Analytica/Bannon/Mercer political influence project was undertaking. AIQ is critical for understanding the extent of the role this influence network played in the Brexit vote but also important for showing the other kinds of clients this network was taking on. Like Sergei Taruta.
Now let’s take a closer look at Taruta with this Ukrainian Week profile from October about the creation of Taruta’s new Osnova political party. Many suspect that Rinat Akhmetov of the Opposition Bloc is behind Taruta’s new party. There is no evidence of that yet, but the party so far appears to be designed to appeal to former Party of Regions voters, many of whom are now Opposition Bloc voters, and Akhmetov is a major Opposition Bloc backer. So questions about Akhmetov’s involvement remain open, but it’s clear that Osnova is trying to appeal to Akhmetov’s political constituency.
As the article also notes, Taruta has a history of supporting pro-EU politicians, including Viktor Yuschenko and Yulia Tymoshenko. And he’s never cozied up to the pro-Russian groups.
But Taruta does have one very notable Kremlin connection: in 2010, 50%+2 shares of Taruta’s industrial conglomerate, Industrial Union of Donbass (IUD), were bought up by Russia’s Vnesheconombank, the foreign trade bank. It is 100% state-owned, and Russian Premier Dmitry Medvedev is the chair of its supervisory board. So Taruta does have a notable direct business tie with the Russian government. But as the article notes, there are no indications Taruta or his new party are taking Russian money. And based on his political history, it would be surprising if he was taking Kremlin money, because he’s clearly part of the pro-European branch of Ukraine’s politics.
So we have AIQ doing some sort of work for Sergei (Serhiy) Taruta. Is that work data analytics for Osnova? We don’t know. But it may well involve Taruta’s campaign against the National Bank of Ukraine, because Taruta is clearly very interested in waging that political fight. So interested that he held a fake congressional hearing at the US Capitol that was broadcast on two Ukrainian television channels and sent the message that the US Congress was going to investigate Taruta’s claims about corruption at Ukraine’s central bank. So it’s possible AIQ was involved in that kind of political work too. Especially given what we know about Cambridge Analytica and SCL and their reliance on psychological warfare methods to change public opinion. A fake congressional hearing, made possible with the help of a Republican congressman, Rep. Estes, who scheduled the room at the US Capitol, seems like exactly the kind of stunt we should expect from the Cambridge Analytica people.
The question of what exactly AIQ has been doing for Taruta would be a pretty big question given the scandal and mystery swirling around Cambridge Analytica and SCL. The fake congressional hearing makes it a much weirder question about the ultimate goals and agenda of the people behind Cambridge Analytica:
“The Osnova site states that the party’s ideology is based on the principles of liberal conservatism. In Ukrainian politics, however, these words typically mean very little. What kind of conservatism are we talking about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s other populists, all of whom count on a fairly undemanding electoral base. In some ways, he resembles Serhiy Tihipko, who tried over and over again to enter politics as a “new face,” although he had been in politics since his days in the Dnipropetrovsk Oblast Komsomol Executive.”
A party based on the principles of liberal conservatism. So a vague party for a vague cause. That seems like an appropriate fit for Sergei Taruta, an intriguingly vague figure. But a notable figure from Donetsk, the heartland of the separatists, because he never played up to the pro-Russian parties and movements and was consistently a supporter of the pro-Kiev forces. That included supporting Viktor Yuschenko in 2006 and Yulia Tymoshenko in 2010:
And Taruta’s pro-Kiev orientation is no doubt a big reason he was appointed governor of Donetsk in March of 2014 following the post-Maidan collapse of the Yanukovych government. But he didn’t last long, leaving the post in October of 2014. And that was partly attributed to his limited support for the volunteer militias when compared to the appointed governor of the neighboring Dnipropetrovsk oblast, Ihor Kolomoisky (note that, as we’ll see in a following article, both Taruta and Kolomoisky reportedly supported the Azov Battalion):
After leaving the governorship, he was elected to parliament. And now he has a new party, Osnova, which is characterized as clearly designed to pick up the electorate of the now-defunct Party of Regions:
And while the translation is somewhat garbled here, it appears that there is speculation that Rinat Akhmetov, a top oligarch and one of the primary backers of the “Opposition Bloc”, may be behind Taruta’s Osnova initiative. But there’s no evidence of this and if true it would put Osnova in competition for Akhmetov’s Opposition Bloc voters. Also, people close to Akhmetov aren’t found in Osnova’s leadership:
But while Taruta is clearly a pro-Kiev/pro-EU kind of Ukrainian politician, he does have one notable tie to the Kremlin: a majority stake in his industrial conglomerate was sold to a Russian state-owned bank in 2010:
And beyond building his mysterious new Osnova party, Taruta is also busy lobbying the US about his pet project of outing alleged corruption at Ukraine’s central bank. Or at least he’s busy making it look like he’s lobbying the US about this. And he’s willing to go to enormous lengths to create those appearances, like a September 25, 2017 fake congressional hearing in the US Capitol where an ex-congressman, Connie Mack, pretended to express congressional outrage over Taruta’s allegations and an ex-CIA chief, James Woolsey, gave words of support for the ‘anti-corruption drive’. And this was all televised in Ukraine and treated like a real US political event:
So now let’s take a look at a report on this bizarre fake event written by the one American reporter who was invited to attend. As the article notes, the event was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” No current members of Congress were there. Instead, it was a private panel discussion hosted by former Rep. Connie Mack IV (R‑FL) and Matt Keelen, a veteran political fundraiser and operative. It was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter. Mack was wearing his old congressional pin on his lapel.
Much of the event was spent criticizing Ukraine’s former central banker Valeriya Hontareva (Gontareva). The “HONTAREVA report” is the product of Taruta, and he has been out promoting it since late 2016. According to VoxCheck, a Ukrainian fact checking website, “the data [in the report], though mostly correct, are manipulated in almost all occasions.” VoxCheck also notes that the report has split Ukrainian politicians.
James Woolsey, the former CIA director and former Trump campaign adviser, was also at the event and briefly spoke. Woolsey talked about how “sweet” Russia was in the early years after the fall of the Berlin Wall and the need to find a way to make Russia “sweet” like that again.
One Senate aide described Woolsey’s appearance there as a strange, strange event and an “inter-oligarch dispute”: “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for an inter-oligarch dispute? [Former] CIA directors don’t just go to events and say how much we could get along with the Russians. They don’t do that without a reason.” And that seems like a good way to summarize this: a strange, strange event that’s one element of a broad inter-oligarch dispute. A dispute that’s giving us some insights into the kind of figures in Ukraine Cambridge Analytica and AIQ want to work for:
“The HONTAREVA report is the product of Sergiy Taruta, and he has been out flogging it for nearly a year. VoxCheck, a Ukrainian fact checking website, analyzed Taruta’s report in late 2016 and says of the report: “VoxCheck has checked most of the facts from the Taruta’s brochure and has discovered that the data, though mostly correct, are manipulated in almost all occasions.””
The fake congressional hearing is a sign of how much Taruta wants to publicize his report on the corruption at Ukraine’s central bank. But it’s also a sign that Taruta’s primary audience with this fake hearing was Ukrainians. And Taruta and his NewsOne Ukrainian media partners were more than happy to maintain the pretense that this was a real congressional event for that Ukrainian audience. It was a private event hoax designed to look like a public event:
Adding to the bizarreness was the speech by former CIA director James Woolsey about what sweethearts Russia was after the fall of the Berlin wall and the need to return to that point:
And that’s all why one Senate aide called it a strange, strange event to see a former CIA director show up at a hoax event that’s part of a larger inter-oligarch dispute:
So let’s now take a closer look at that inter-oligarch dispute to get a better sense of who Taruta is aligned with in Ukraine. And in this case he’s clearly aligned with Ihor Kolomoisky, co-founder of the nationalized Privatbank.
As the article also notes, when Taruta sold the majority stake in the industrial conglomerate he co-founded, Industrial Union of Donbass, in 2010, he was a close ally of Yulia Tymoshenko. And according to leaked cables, Tymoshenko wanted him to keep the sale a secret over fears that she would be attacked for selling out Ukraine. It’s another indication of Taruta’s political pedigree.
The article also has an explanation from James Woolsey on why he attended that event: he was duped. He agreed to show up in the audience and then was asked on the spot to make some remarks. That’s the line he’s going with.
And the article identifies the person who has come forward to claim responsibility for arranging the event: Anatoly Motkin, a one-time aide to a Georgian oligarch. Motkin founded the StrategEast consulting firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.” Motkin claims that he decided to fund the event because Taruta brought the allegations about Gontareva to his attention.
So that gives us a few more data points about Taruta: he was close to Tymoshenko, he’s doing Ihor Kolomoisky’s bidding in waging this fight against the nationalization of Privatbank, and the person who actually set up the event runs a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:
“Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in the sale of one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.””
That’s a key observation: Taruta was seen as a close Tymoshenko ally.
But he’s also a Kolomoisky ally, since this inter-oligarch dispute is Kolomoisky’s dispute and Taruta is fighting Kolomoisky’s fight:
But what about James Woolsey? What’s his excuse for fighting Kolomoisky’s fight? He was tricked. That was his excuse:
And what about Rep. Estes, the congressman who made this official room available for the stunt? Well, he assures us that it won’t happen again. It’s sort of an explanation:
And note the two Ukrainian media companies that covered this. There was ChannelOne, which is owned by 1+1 Media, Ihor Kolomoisky’s media group. And also UkraNews, which belongs to Dmitry Firtash:
And recall what we saw in the above Ukraine Week piece about the makeup of the Opposition Bloc and the unproven speculation that Rinat Akhmetov could be behind Osnova: “One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed”. That sure sounds like Firtash represents a faction of the Opposition Bloc that would like to see Poroshenko go (recall that Andreii Artemenko’s peace plan proposal involved the collapse of the Poroshenko government under a wave of scandal revelations. Artemenko would provide the scandal evidence). So it’s notable that we have Firtash’s news channel also promoting Taruta’s fake congressional hearing along with Kolomoisky’s ChannelOne.
And look who has come forward as the event organizer. Anatoly Motkin, a one-time aide to a Georgian oligarch:
And when we look at how Motkin’s lobbying firm describes itself, it’s “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:
“Mr. Motkin has devoted much of his career to assisting the processes of Westernization in post-Soviet states through the launching of a variety of media, political and business initiatives aimed to drive social awareness and connect communities. He has successfully invested in multiple technology startups, such as one of the most popular messaging apps and the ridesharing service app Juno, which was recently acquired by on-demand ride service Gett.”
And the involvement of someone like Motkin in arranging the theatrics of what amounts to an inter-oligarch dispute over Ihor Kolomoisky’s nationalized bank points to one of the key observations in this situation: it appears to be an inter-oligarch fight between different factions of pro-Western Ukrainian oligarchs. And Sergei Taruta appears to be squarely in the camp of the faction that doesn’t support the separatists but also doesn’t support Poroshenko. As we’ve seen, Taruta has historical ties to Yulia Tymoshenko’s power base, but he also appears to be working with fellow East Ukrainian oligarch Ihor Kolomoisky.
So, finally, let’s note something important about Taruta and Kolomoisky from this 2015 report by Joshua Cohen, who has done a lot of good reporting about the risk posed by the neo-Nazi militias in Ukraine. It’s a report that would explain some of the animosity between Kolomoisky and the Poroshenko government: The report describes the use of privately financed militias that are, in effect, private armies controlled by their Ukrainian oligarch financiers, with Ihor Kolomoisky being one of the biggest militia financiers. And this led to Kolomoisky’s firing as governor of Dnipropetrovsk in 2015, after Kolomoisky sent one of his private armies to seize control of the headquarters of the state-owned oil company, UkrTransNafta, when Kiev fired the company’s chief executive officer, who happened to be an ally of Kolomoisky. So that, in addition to the Privatbank nationalization, is no doubt part of why Kolomoisky might not be super enthusiastic about the Poroshenko government.
Given the ongoing tensions between the neo-Nazi groups in Ukraine and the Kiev government, and the ongoing Nazi threats from groups like the Azov Battalion to ‘march on Kiev’ and take over, it’s noteworthy that one of their biggest financial backers, Ihor Kolomoisky, has so much animosity towards the Poroshenko government. And in our look at Sergei Taruta it’s also pretty noteworthy that, as the article notes, both Kolomoisky and Taruta were partially financing the neo-Nazi Azov Battalion:
“Ukraine’s President Petro Poroshenko has made clear his intention to rein in Ukraine’s volunteer warriors. Days after Kolomoisky’s soldiers appeared at UkrTransNafta, he said that he would not tolerate oligarchs with “pocket armies” and then fired Kolomoisky from his perch as the governor of Dnipropetrovsk.”
Yep, it was the private use of a private army to seize state assets in a business dispute that got Ihor Kolomoisky fired as governor of Dnipropetrovsk Oblast in May of 2015. And that was just one example of how these neo-Nazi militias pose a threat to Ukrainian society. There’s also the obvious risk that they act on their own and try to seize control.
But the greatest threat these neo-Nazi militias pose clearly involves working in coordination with a team of Ukrainian oligarchs. And that’s part of what makes an understanding of the opaque Ukrainian oligarchic fault lines so important, because there’s always the chance that these inter-oligarch disputes will result in these private armies getting used for a coup or something along those lines.
And that’s a big part of why it’s notable that both Taruta and Kolomoisky have a history of financing groups like the Azov Battalion:
And that’s also why it’s so notable if a company like AIQ is offering political services to someone like Taruta: Because Taruta appears to be allied with the pro-Western faction of Ukrainian oligarchs who want to replace the current Ukrainian government with their own faction. Much like Andreii Artemenko and his ‘peace plan’ proposal, which also appeared to be a plan from a pro-Western, anti-Poroshenko faction of Ukrainian oligarchs.
In other words, the story about Sergei Taruta and the bizarre fake congressional hearing appears to be one element of a much larger, very real inter-oligarch dispute involving some very powerful oligarchs. And Cambridge Analytica/AIQ/SCL appears to be working for one of those sides, and it’s the side currently out of power and trying to reverse that situation.
So you know that creepy feeling you get when you Google something and ads creepily related to what you just browsed start following you around on the internet? Rejoice! At least, rejoice if you enjoy that creepy feeling. Because you’ll get to experience that creepy feeling watching broadcast tv too with the next generation of televisions and ATSC 3.0 broadcast format technology, which was just offered to the American public for the first time on KFPH UniMás 35 in Phoenix, Arizona, with more market rollouts planned soon.
So how is the ATSC 3.0 broadcast format for television going to allow creepily personalized ads to follow you on television too? The new format basically combines over-the-air TV with internet streaming. So part of what you’ll see on the screen will be content sent over the internet which will obviously be personalized. And that’s going to include ads.
But it won’t just be delivering personalized content. The technology will also allow for tracking of user behavior. And there are no privacy standards at all. That will be up to the individual broadcasters, who will each design their own app to deliver the personalized content. Which obviously means there are going to be lots of broadcasters tracking your television viewing habits, creating the kind of nightmare privacy situation we’ve already seen with platforms like Facebook and its app developers. The ATSC 3.0 broadcast format is like a new giant platform that everyone in the US will share, except there are no privacy standards for the app developers, which might make it even worse than Facebook.
So that’s coming with the next generation of televisions. As one might imagine given the fact that this new technology threatens to turn the tv into the next consumer privacy nightmare, this technology was a major focus of several tech demonstrations at the recent National Association of Broadcasters (NAB) conference in Las Vegas. And as one might also imagine, the industry hasn’t had much to say about the privacy aspect of this privacy nightmare it’s about to unleash:
“Broadcasters haven’t talked much about the advertising aspect, and they’ve said even less about the potential privacy implications, but it was a major focus of several tech demonstrations at the National Association of Broadcasters (NAB) conference in Las Vegas this week.”
Mum’s the word on the potential privacy implications for American television viewers. Potential privacy implications that could be coming to a media market near you soon:
And while the broadcasting industry may not want to talk about potential privacy violations, they sure are excited to talk about collecting viewer data for the purpose of serving up personalized ads:
And in this new app-based model for personalized broadcast television, each broadcaster develops their own apps, meaning there are going to be a lot of different apps/broadcasters potentially tracking what you do with those next-generation TVs:
Although it’s worth noting that the demonstration apps shown to the author of that TechHive article weren’t capable of tracking what you do in other apps. So each broadcaster would, in theory, only get to see what you do with their app and not with other broadcasters’ apps. But, of course, a lot of broadcasters are going to own multiple channels in a market. Or they just might decide to share the data with each other:
Also keep in mind that there are still significant potential privacy violations even if apps can’t read the activity of other apps. For instance, if an app is capable of simply detecting when you turn the tv off or on, that gives information about your day to day living schedule. It’s one of the generic privacy violations that come with the “internet-of-things”.
And then there’s the possible privacy violations that come with next-generation televisions with built-in microphones. Imagine how many apps will ask for permission to listen to everything you say in order to better personalize the service. Remember those stories about the CIA hacking into Samsung Smart TVs with built-in microphones? That’s probably going to be the standard app behavior if people allow it.
And, finally, the article notes that this means the nightmare of micro-targeted personalized political ads is coming to broadcast television:
Yep, just wait for Cambridge Analytica-style personalized psychological profiling of you: a profile that incorporates all the information already gathered about you from all the existing sources (Facebook, Google, data broker giants like Acxiom) and combines it with the knowledge obtained through your smart television. Then get ready for the next-generation onslaught of the full spectrum of personalized political ads designed to inflame you and polarize the country. The “A/B testing on steroids” advertising experiments employed by the Trump team on social media are coming to television.
It’ll be a golden age for television commercial actors because they’re going to have to shoot all the different customized versions of the same commercials used to micro-target the audience’s psychological profiles.
Of course, there is going to be one option for next-generation television owners to avoid the data privacy nightmare of personalized tv: unplug it from the internet and just watch tv the soon-to-be-old-fashioned way:
And that points towards one of the glaring problems with this situation: the only choice American television consumers are going to have is to either navigate a data privacy nightmare landscape, where each app can have its own privacy standards and there are almost no rules, or unplug their smart tvs from the internet and forgo the internet-based services. And that’s because spying on consumers in exchange for services and enhanced profits is the fundamental model of the internet, and this new data privacy nightmare landscape for smart tvs is merely the logical extension of that model. It’s a fundamental problem with the future of television ads and with the internet-of-things in general: mass commercial spying is just assumed in America. It’s the model for the internet in America. There is no alternative. And that model is coming to broadcast television, since that commercial mass spying model is clearly enshrined in the new ATSC 3.0 broadcast format. It’s a format that lets each app developer make up their own privacy standards. A ‘prepare-for-the-worst-hope-for-the-best’ model that literally prepares the way for the worst-case scenario for consumer privacy and then just hopes it won’t be abused. Like the internet.
And in the case of this next-generation internet-connected television, there isn’t even the possibility for competition that we find with Facebook, where at least a competitor could theoretically emerge. There’s only one national broadcast format for smart tvs, and for nations that use the ATSC 3.0 standard it’s going to let each app maker make up their own privacy rules. Note that the ATSC 3.0 standard doesn’t just apply to the US. It was created by the Advanced Television Systems Committee, and the standard is shared by the US, Canada, Mexico, South Korea, and Honduras. So this is a multinational, government-approved television standard, which means there’s no competitive pressure for anything better. This is as good as the privacy standards are going to get for North American and South Korean internet-connected tv consumers: it’s up to the app developers, i.e. no privacy standards.
And no standards on the exploitation of all the data collected on us to deliver highly persuasive micro-targeted ad campaigns. Cambridge Analytica-style micro-targeted psychological operations for tv. That’s coming to all elections.
So just FYI, your next smart television is going to be very persuasive.
This was more or less inevitable: it sounds like the ’87 million’ figure — the number of Facebook profiles that had their data scraped by Cambridge Analytica — is set to be raised again. Recall that it was initially a 50 million figure before Cambridge Analytica whistle-blower Christopher Wylie raised the estimate to 87 million, while hinting that the figure could be more.
Also recall that the 87 million figure, ostensibly derived from the 270,000 people who downloaded the Cambridge Analytica Facebook app and their many friends, corresponds to ~322 friends per app user on average, which is very close to the 338 friends the average Facebook user had in 2014. In other words, the 87 million figure is roughly what we should expect if you start with 270,000 app users and scrape the profile information of each of their ~338 friends on average. So if that 87 million figure were to rise significantly, it would raise the question of where else Cambridge Analytica got its data.
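The back-of-the-envelope arithmetic behind that consistency check can be sketched as follows (the figures are the ones reported above, not new data):

```python
# Sanity check: does 87 million scraped profiles square with 270,000 app
# users whose friends' profiles were also scraped? Figures are from the
# reporting cited above.
app_users = 270_000
scraped_profiles = 87_000_000

implied_friends_per_user = scraped_profiles / app_users
print(round(implied_friends_per_user))  # ~322, close to Facebook's reported
                                        # 2014 average of 338 friends per user
```

So the 87 million figure is internally consistent with a single 270,000-user app; a substantially larger total would require additional apps or data sources.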
Well, we have a new Cambridge Analytica whistle-blower, Brittany Kaiser, who worked full-time for SCL, Cambridge Analytica’s parent company, as director of business development between February 2015 and January of 2018. And according to Kaiser, it is indeed “much greater” than 87 million users. And Kaiser has a possible explanation for how Cambridge Analytica got data on all these additional users: they had more than one app that was scraping Facebook profile data.
And the way Kaiser puts it, it sounds like there were quite a few different apps used by Cambridge Analytica. Including one she calls the “sex compass quiz”. So, yes, the Trump team was apparently exploring the sexual predilections of the American electorate.
Additionally, Kaiser makes references to Cambridge Analytica’s “partners”. As she puts it, “I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login–for example, the ‘sex compass’ quiz.” So is that reference to Cambridge Analytica’s “partners” a reference to SCL or Aleksandr Kogan’s Global Science Research (GSR) company? Or were there other third-party firms that are also feeding information into Cambridge Analytica? The Republican National Committee, perhaps?
Along those lines, Kaiser makes another remarkable claim: that the office culture was like the “Wild West” and that personal data was “being scraped, resold and modeled willy-nilly.” So Kaiser is asserting that Cambridge Analytica resold the data too? It sure sounds like it.
These are the kinds of questions raised by Brittany Kaiser’s new claims. Along with the open question of exactly how many people Cambridge Analytica was collecting this kind of Facebook data on. We know it’s “much greater” than 87 million, according to Kaiser, but we have no idea how much greater it is:
“Kaiser claimed that the office culture was like the “Wild West” and alleged that citizens’ data was “being scraped, resold and modeled willy-nilly.””
That’s right, Cambridge Analytica wasn’t just scraping Facebook users’ data. They were apparently reselling it too. These are the claims of Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January this year, during her testimony to the UK parliament:
And according to Kaiser, the additional apps used by Cambridge Analytica include a “sex compass” quiz.
And keep in mind that the use of this sex quiz app was probably pretty similar to how Aleksandr Kogan’s psychological profiling app worked: you use the data collected on the people taking the quiz as the “training set” to develop algorithms for inferring Facebook users’ sexual preferences from their Facebook profile data. And then Cambridge Analytica uses those algorithms to make educated guesses about the ‘sexual compass’ of all the other Facebook users they have profile data on. We don’t know for certain that this is what Cambridge Analytica did with the ‘sex compass’ app, but it’s probably what they did, because that’s the business they’re in.
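As a rough illustration of that train-then-extrapolate pattern, here's a hypothetical toy sketch. This is not Cambridge Analytica's actual code or model; all feature names and data below are invented for illustration:

```python
# Toy sketch of the pattern: quiz takers supply labeled data
# (profile features -> quiz answer), a simple model is fit to them,
# and that model then scores users who never took the quiz.

def train(labeled):
    """Count how often each profile feature co-occurs with each label."""
    counts = {}
    totals = {True: 0, False: 0}
    for features, label in labeled:
        totals[label] += 1
        for f in features:
            counts[(f, label)] = counts.get((f, label), 0) + 1
    return counts, totals

def score(features, counts, totals):
    """Crude smoothed likelihood ratio: does this profile look more
    like the label=True group of quiz takers?"""
    pos = neg = 1.0
    for f in features:
        pos *= (counts.get((f, True), 0) + 1) / (totals[True] + 2)
        neg *= (counts.get((f, False), 0) + 1) / (totals[False] + 2)
    return pos / (pos + neg)

# Invented "training set" from the small group that took the quiz
# (hypothetical page likes -> hypothetical quiz answer).
quiz_takers = [
    ({"page_a", "page_b"}, True),
    ({"page_a"}, True),
    ({"page_c"}, False),
    ({"page_c", "page_d"}, False),
]
counts, totals = train(quiz_takers)

# Extrapolate to a user who never took the quiz, using only their
# harvested profile features.
print(score({"page_a"}, counts, totals) > 0.5)  # resembles the True group
```

The key point is the asymmetry of scale: only the small quiz-taking group ever interacts with the app, but the resulting model gets applied to everyone whose profile data was harvested.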
And it’s the use of all these additional apps that Kaiser saw Cambridge Analytica employ that appears to be the basis for her conclusion that the number of Facebook profiles scraped by Cambridge Analytica is “much greater than 87 million”. And she also asserts, quite reasonably, that Cambridge Analytica wasn’t the only entity engaged in this kind of activity:
So how much higher is that 87 million figure going to go? Well, there’s one other highly significant number we should keep in mind when trying to understand what kind of data Cambridge Analytica acquired: The company claimed to have up to 5,000 data points on 220 million Americans. Also keep in mind that 220 million is greater than the total number of Facebook users in the US (~214 million in 2018).
So if we’re wondering how high that 87 million figure might go, the answer might be something along the lines of “almost all the Facebook users in the US in 2014–2015”. Whatever that number happens to be is probably the answer.
Here’s a set of articles on one of the figures who co-founded both Cambridge Analytica and its parent company SCL Group: Nigel Oakes.
While Cambridge Analytica’s former-CEO Alexander Nix has received much of the attention directed at Cambridge Analytica, especially following the shocking hidden-camera footage of Nix talking to an undercover reporter he thought was a client, the story of Cambridge Analytica ultimately leads to Oakes according to multiple sources.
So who is Nigel Oakes? Well, as the following article notes, Oakes got his start in the business of influencing people in the field of “marketing aromatics,” or the use of smells to make consumers spend more money. He also dated Lady Helen Windsor when he was younger, which made him a somewhat publicly known person in the UK.
In 1993, Oakes co-founded Strategic Communication Laboratories, the predecessor to SCL Group. In 2005, he co-founded SCL Group which, at the time, made headlines when it billed itself at a global arms fair in London as the first private company to provide psychological warfare services. Oakes said he was confident that psyops could shorten military conflicts. As he put it, “We used to be in the business of mind bending for political purposes, but now we are in the business of saving lives.”
SCL sold the same psychological warfare products in the US. Services included manipulation of elections and “perception management,” or the intentional spread of fake news. The US State Department remains a client, and it confirmed that it retains SCL Group on a contract to “provide research and analytical support in connection with our mission to counter terrorist propaganda and disinformation overseas.”
So Nigel Oakes has quite an interesting history. A history that he unwittingly encapsulated with a now-notorious quote he gave in 1992:
“We use the same techniques as Aristotle and Hitler...We appeal to people on an emotional level to get them to agree on a functional level.”:
““Anyone right now that is focusing on the problems with Cambridge Analytica should be backtracking to the source, which is Nigel Oakes,” said Sam Woolley, research director of the Digital Intelligence Lab at the Silicon Valley-based Institute for the Future.”
Nigel Oakes is seen as “the source” of Cambridge Analytica. And Cambridge Analytica is seen as merely “the tip of the iceberg of Nigel Oakes’ empire of psyops and information ops around the world”:
And that’s how British journalist Carole Cadwalladr, who has done extensive reporting on Cambridge Analytica over the last year, also sees it: the questions about Cambridge Analytica lead to Oakes:
And it’s no surprise that Cambridge Analytica questions lead to Oakes. He helped co-found it, along with co-founding SCL Group in 2005 and Strategic Communication Laboratories in 1993:
And Oakes has been pitching SCL Group as a private psychological warfare service provider for years. So if we’re exploring how Cambridge Analytica got into the business of the manipulation of the masses, the fact that SCL has been providing those services to the US and UK governments for years is a pretty big factor in that story. When Cambridge Analytica was formed in 2013, its team was already quite experienced in these kinds of matters:
And as the hidden-camera footage of Alexander Nix showed the world, those mass manipulation services include dirty tricks. Like sending Ukrainian sex workers to an opponent’s house to sabotage him. It’s an indicator of the amoral character of the people behind Cambridge Analytica and its SCL Group parent:
And that amorality is perfectly encapsulated in a now-notorious 1992 quote from Oakes, where he favorably compares his work in psychological manipulation with the techniques employed by Hitler:
And that 1992 quote wasn’t the only ‘we use the same techniques as Hitler!’ quote Oakes has made over the years. As the following article notes, Oakes made the same admission last year in reference to the techniques employed by Cambridge Analytica for the Trump campaign:
““Hitler, got to be very careful about saying so, must never probably say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a problem with the Jews at all, but the people didn’t like the Jews,” Oakes said. “So if the people… He could just use them to say… So he just leverage an artificial enemy. Well that’s exactly what Trump did. He leveraged a Muslim- I mean, you know, it’s- It was a real enemy. ISIS is a real, but how big a threat is ISIS really to America? Really, I mean, we are still talking about 9/11, well 9/11 is a long time ago.””
And that’s Nigel Oakes in his own words: he saw Trump’s systematic fear mongering about virtually all Muslims as more or less the same cynical technique employed by Hitler.
And when you look at the full quote provided to the UK parliament it sounds even worse because he’s framing the use of these demonization techniques as simply a way to fire up “your group” (your target base of supporters) by demonizing a different group that you don’t expect to vote for your candidate:
“And often, as you rightly say, it’s the things that resonate, sometimes to attack the other group and know that you are going to lose them is going to reinforce and resonate your group.”
Attacking “the other group and know that you are going to lose” in order to “reinforce and resonate your group.” That’s how Nigel Oakes matter-of-factly framed the use of mass manipulation techniques designed to generate an emotional appeal to a target political demographic. An emotional appeal that happens to be based on demonizing a group of people that your target demographic already generally dislikes. In other words, find the existing areas of hatred and inflame them.
And offering services that will strategically inflame those passions is something Nigel Oakes has been openly offering clients for decades. And that’s all part of why Nigel Oakes is described as the real force behind Cambridge Analytica.
At the same time, let’s not forget the previous reports about Cambridge Analytica whistle-blower Christopher Wylie and Wylie’s characterization of Steve Bannon as Alexander Nix’s real boss at Cambridge Analytica, despite Bannon technically serving only as the company’s vice president and secretary. So while Nigel Oakes is clearly a critically important figure behind Cambridge Analytica, the question of who was really in charge of the Cambridge Analytica operation for the Trump team is still open. Although it was likely more of a Hitler-inspired group effort.
Here’s an ominous article about Palantir (as if there aren’t already plenty of ominous articles about Palantir) that highlights both the challenges the company faces in selling its surveillance services and its plans for overcoming those challenges: It turns out the services Palantir offers its clients are pretty labor-intensive, involving a potentially large number of on-site Palantir employees. One notable example is JP Morgan, which hired Palantir to monitor the bank’s employees for the purpose of detecting miscreant behavior. This service involved as many as 120 “forward-deployed engineers” from Palantir working at JP Morgan, each one costing the bank as much as $3,000 a day. So from a price standpoint that’s obviously going to be an issue, even for a financial giant like JP Morgan. Although at JP Morgan it sounds like the bigger issue was that the executives learned their own emails and activity were potentially caught up in Palantir’s data dragnet too. But the overall cost of these “forward-deployed engineer” Palantir contractors is reportedly an issue for a number of other corporate clients that recently dropped Palantir, including Hershey Co., Coca-Cola, Nasdaq, American Express, and Home Depot.
So how is Palantir planning on addressing the labor-intensive nature of its services to attract more clients? Automation, of course. And that’s already part of the new product Palantir is offering clients, called Foundry, which is already in use by Airbus SE and Merck KGaA. In other words, the automation of Palantir’s corporate surveillance services is almost here, and that means a lot more corporate clients are probably going to be hiring Palantir. So, yeah, that’s rather ominous.
The article also includes a few more Palantir fun facts. For instance, while there are 2,000 engineers at the company, the Privacy and Civil Liberties Team only consists of 10 people.
A second fun fact is about Peter Thiel. Apparently he’s planning on moving to Los Angeles and starting up a right-wing media empire. Oh goodie.
The article also contains a couple of fun facts in relation to the questions about Palantir and Cambridge Analytica after the revelation that a Palantir employee was working with Cambridge Analytica to develop its psychological profiling algorithms: First, Palantir claims that the company turned down the offers to work with Cambridge Analytica and that its employee, Alfredas Chmieliauskas, was purely working on his own. As the following article notes, that’s the same explanation Palantir gave when it was caught planning an orchestrated disinformation campaign against Wikileaks and Anonymous. So the “lone employee” explanation for Palantir appears to be a favorite.
Additionally, the article notes that Palantir doesn’t advertise its services and instead relies purely on word-of-mouth. And that’s interesting in relation to the mystery of how Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, just happened to stop by Cambridge Analytica’s London headquarters in mid-2013 to push the idea that the company should start working with Palantir. Now, it’s important to recall that part of what made Sophie Schmidt’s seemingly random visit in mid-2013 so curious is that Cambridge Analytica and Palantir had already started talking in early 2013. Still, it’s noteworthy that Palantir relies only on word-of-mouth referrals and Sophie Schmidt appeared to provide exactly that kind of referral, seemingly randomly and spontaneously.
So that’s some of the new information we learn about Palantir in the following article. New information that’s all ominous, of course:
“High above the Hudson River in downtown Jersey City, a former U.S. Secret Service agent named Peter Cavicchia III ran special ops for JPMorgan Chase & Co. His insider threat group—most large financial institutions have one—used computer algorithms to monitor the bank’s employees, ostensibly to protect against perfidious traders and other miscreants.”
Insider threat services. That appears to be one of the primary services Palantir is trying to offer corporate clients. It’s the kind of service that gives Palantir access to almost everything employees are doing in a company and basically turns it into a Big Brother-for-hire entity. And when JP Morgan hired Palantir to provide these services, they ended up dropping the service after the executives learned that it was too Big Brother-ish and was watching over the executives too:
And this project at JP Morgan was basically the test lab for a new service Palantir is trying to offer the financial sector: Metropolis:
And through this JP Morgan test bed for Metropolis, Peter Cavicchia’s insider threat group was given access to “a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use”, along with a team of Palantir engineers to help him use that data. This is the business model Palantir was trying to test so it could sell to other banks: using Palantir to give bank employees unprecedented access to the bank’s internal data (which, of course, means Palantir likely has access to that data too):
But Palantir’s test bed at JP Morgan ultimately turned into a failed experiment when JP Morgan’s leadership learned that Cavicchia had apparently used his unprecedented access to internal documents to spy on JP Morgan executives who were investigating a leak to the New York Times. The leak appeared to come from an executive who had just left the company, Frank Bisignano, who also happened to be Cavicchia’s patron at the company before he left. And that leak investigation appeared to show that Cavicchia accessed executive emails about the leak and passed them along to Bisignano. In other words, JP Morgan learned that the guy they made their corporate Big Brother abused that power (shocker):
Thus ended Palantir’s test run of Metropolis, highlighting the fact that the extensive manpower associated with Palantir’s services isn’t the only factor that might keep corporate clients away. The way Palantir’s services create individuals with unprecedented access to the internal documents of a company might also drive clients away. After all, threat assessment groups are intended to mitigate risk. Not exacerbate it.
But the cost of all those on-site Palantir engineers is still an obstacle to wider adoption of Palantir’s services. As the article notes, roughly half of Palantir’s 2,000 engineers are working on client sites:
And that’s what Palantir’s newest product, Foundry, is designed to address. By increasingly automating the corporate surveillance process:
“Deeper adoption of Foundry in the commercial market is crucial to Palantir’s hopes of a big payday.”
And that appears to be the direction Palantir is heading: automated corporate surveillance which will allow the company to offer its services more cheaply and to more clients. So if Palantir succeeds we just might see A LOT more companies hiring Palantir’s services, which means A LOT more employees are going to have Palantir’s software watching and analyzing their every keystroke and email. It really is pretty ominous. Especially given the fact that the company’s Privacy and Civil Liberties Team consists of a whole 10 people:
So that’s an overview of the current status of Palantir’s Big Brother-for-hire services: they’ve hit some obstacles, but if they can succeed in overcoming those obstacles Palantir could become the go-to corporate surveillance firm. It’s more than a little ominous.
And then there are two fun facts from this article that relate to the questions of Palantir’s ties to Cambridge Analytica: First, just as Palantir claimed that its employee found to be working with Cambridge Analytica, Alfredas Chmieliauskas, was doing this on his own, that’s the same excuse Palantir gave when it was caught pitching a project to the US Chamber of Commerce to run a secret campaign to spy on and sabotage the Chamber’s critics: it was just a lone employee:
Finally, there’s the interesting fact that Palantir executives boast of not employing a single salesperson and relying solely on word of mouth:
And Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, provided exactly that in June of 2013: a word-of-mouth endorsement of Palantir. So did Sophie Schmidt make this word-of-mouth pitch independently and coincidentally? It remains an unanswered question, but it’s hard to ignore that Schmidt’s pitch matches exactly how Palantir markets itself.
So we’ll see what happens with Palantir and its drive to use automated corporate surveillance to cut costs and sell its Big Brother-for-hire services to even more large employers. But it does seem like just a matter of time before Palantir succeeds in cutting those costs, which means “word of mouth” isn’t just going to be Palantir’s approach to marketing. Word of mouth is also going to be the only way employees in the future will be able to say something to each other without Palantir knowing about it.
Here’s an update on how Facebook is planning to address the scrutiny it’s receiving from the US Congress as the Cambridge Analytica scandal continues to play out: Facebook’s head of policy in the United States, Erin Egan, was just replaced. It’s a notable position, politically speaking, because it’s based in Washington DC, so Facebook basically just replaced one of its top DC lobbyists.
So who replaced Egan? Kevin Martin, Facebook’s vice president of mobile and global access policy. Oh, and Martin was also a former Republican chairman of the Federal Communications Commission. Surprise!
Martin will report to vice president of global public policy, Joel Kaplan. Oh, and Martin and Kaplan worked together in the George W. Bush White House and on Bush’s 2000 presidential campaign. Surprise again! There’s a distinct ‘K Street’ feel to it all.
Facebook is spinning this by emphasizing that Egan will remain chief privacy officer. The company is acting as if it made this move in order to have someone with Egan’s credentials focused on rebuilding trust, and not so it could replace her with a Republican.
And that appears to be Facebook’s strategy for dealing with Congress: tasking Republicans with lobbying their fellow Republicans:
“Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for lobbying and government relations as head of policy for the last two years. She will be replaced by Kevin Martin on an interim basis, the company said. Mr. Martin has been Facebook’s vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission.”
When you’re a company as big as Facebook, that’s who you bring in to lead your lobbying effort: the former Republican chairman of the FCC.
And this means two Republicans will be in charge of Facebook’s Washington offices (which are pretty much there to lobby):
But the way Facebook would prefer us to look at it, this was really all about freeing up Erin Egan to work on rebuilding trust over privacy concerns:
And this move is happening at the same time Facebook is staring at a new EU data privacy regime, the GDPR:
And those new EU GDPR rules don’t just potentially impact how Facebook handles its European users going forward. They potentially impact the policies governing all of Facebook’s users outside of the US.
Why? Because Facebook’s customers outside the US and Canada are handled by Facebook’s operations in Ireland and therefore fall under EU rules. That’s just how Facebook decided to structure itself internationally (largely due to Ireland’s status as a corporate tax haven).
So does this mean Facebook’s US users will be operating in a data privacy regulatory environment managed by the GOP while almost everyone else in the world operates under the EU’s new rules? Nope, because Facebook just moved its international operations out of Ireland and back to its US headquarters in California. And that means the rules Facebook is lobbying for in DC will apply to all Facebook users globally outside the EU:
“Facebook members outside the United States and Canada, whether they know it or not, are currently governed by terms of service agreed with the company’s international headquarters in Ireland.”
Yep, for Facebook and quite a few other major internet companies with international headquarters in Ireland, it’s the EU’s rules that determine the rules for most of their global customer base. But not anymore for Facebook:
And that move from Ireland to California will impact the ~1.5 billion users Facebook has outside of the US, Canada, and EU:
But Facebook wants to assure everyone that this move will have no meaningful impact on anyone’s privacy because it’s committed to having ALL of its users globally follow the same rules as laid out by the EU’s new GDPR. At least ‘in spirit’. That’s right, Facebook is telling the world that it’s going to implement the GDPR globally at the same time it moves its operations out of the EU. That’s not suspicious or anything:
So why did Facebook make the move if it’s pledging to implement the GDPR ‘in spirit’ for everyone? Well, according to Facebook, it’s “because EU law requires specific language.” That’s not dubious or anything:
And, of course, Facebook isn’t the only multinational internet firm looking to move out of Ireland. Microsoft’s LinkedIn is making the same move, under a similarly laughable pretense:
“We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data”
Yeah, LinkedIn is making the move so users won’t be confused about whether or not the US or EU LinkedIn entity was responsible for their personal data. LOL! We’ll no doubt get similarly laughable explanations from all the other multinational firms making similar moves.
Also don’t forget that these moves mean the US’s data privacy rules are going to be even more important for the internet giants, because now those rules are going to apply to users everywhere but the EU. And that means the lobbying of US lawmakers and regulators is going to be even more important going forward. The more companies that relocate to the US to escape the EU’s GDPR for their international customer bases, the greater the incentives for undermining US data privacy laws. In other words, it’s a really great time to be a Republican data privacy lobbyist.
Here’s a pair of stories that relate to both Cambridge Analytica and the bizarre collection of stories related to the ‘Seychelles backchannel’ #TrumpRussia story (like George Nader’s participation in the ‘backchannel’ or Nader’s hiring of GOP money man Elliott Broidy to lobby on behalf of the UAE and Saudis). And the connecting element is none other than Erik Prince:
So long Cambridge Analytica! Yep, Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.
Is this truly the end of Cambridge Analytica? Of course not. They’re just rebranding under a new company, Emerdata. It’s kind of like when Blackwater renamed itself Xe, and then Academi. And intriguingly, Cambridge Analytica’s transformation into Emerdata introduces another association with Blackwater: Emerdata’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince:
“In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.”
So Cambridge Analytica is going away and the SCL Group is getting out of the elections business. At least on the surface. But there’s still an open question of who is going to retain the rights to all the information held by Cambridge Analytica, including all those psychographic voter profiles that are presumably worth quite a bit of money:
And that question over who is going to own the rights to all that data is particularly relevant given that executives at Cambridge Analytica and SCL Group and the Mercers recently formed a new company: Emerdata. And look who happens to be one of Emerdata’s directors: Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince:
“Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group.”
LOL! Yeah, the possibility for a “Blackwater-style rebranding” is looking more like a reality at this point. Although we’ll see how many clients this new company gets.
And that brings us to the following piece. It’s a fascinating piece that summarizes all of the various things we’ve learned about Erik Prince, the #TrumpRussia investigation, and the UAE. And as the article notes, at the same time Emerdata was being formed in 2017 (August 11, 2017, was the incorporation date), the UAE was already paying SCL to run a social media campaign for the UAE against Qatar as part of the UAE’s #BoycottQatar campaign. And as the article also notes, if you look at the name “Emerdata”, it sure sounds like a shortened version of “Emirati-Data”.
So given the presence of Erik Prince’s business partner on the board of directors of Emerdata, and given Prince’s extensive ties to the UAE, we have to ask the question of whether or not Cambridge Analytica is about to become the new plaything of the UAE:
“In 2017 as Cambridge Analytica executives created Emerdata, they were also working on behalf of the UAE through SCL Social, which had a $330,000 contract to run a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. One of the Emerdata directors may have ties to the UAE and the company name, coincidentally, sounds like a play on Emirati-Data…Emerdata.”
Emirati-Data = Emerdata. Is that the play on words we’re seeing in this name? It does sound like a reasonable inference. Especially given Erik Prince’s close association with both Emerdata’s board of directors and the UAE:
So let’s take a closer look at Prince’s ties to the UAE and his partners in Hong Kong: He moves to the UAE in 2010, and gets hired by Sheikh Mohamed bin Zayed al-Nahyan to build a fighting force in 2011. In 2012, while still living in the UAE, Prince creates the Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises:
Then, in 2014, Prince gets named as Chairman of DVN Holdings, controlled by Hong Kong businessman Johnson Ko Chun-shun (who sits on the board of Emerdata) and Chinese state-owned Citic Group:
Then there’s all the shenanigans involving the Seychelles ‘backchannel’ (that inexplicably involves the UAE) and GOP money-man Elliott Broidy:
Then Emerdata gets formed in August of 2017. The next month, Steve Bannon and Alexander Nix attend the CLSA Investors’ Forum in Hong Kong, which is run by Citic Group, the majority owner of Prince’s Frontier Services Group:
Then in October of 2017, we have a continuation of Elliott Broidy’s lobbying of the Trump administration on behalf of the UAE at the same time the SCL Group gets hired to implement a social media campaign for the UAE against Qatar:
Finally, in early 2018 we find Emerdata adding Alexander Nix, Johnson Chun Shun Ko (Prince’s partner at Frontier Services Group), Cheng Peng, Ahmad Al Khatib, Rebekah Mercer, and Jennifer Mercer to the board of directors:
So it sure looks a lot like the new incarnation of Cambridge Analytica is basically going to be applying Cambridge Analytica’s psychological warfare methods on behalf of the UAE, among others. The Chinese investors will also presumably be interested in these kinds of services. And anyone else who might want to hire a psychological warfare service provider run by a bunch of far right luminaries.
Oh look at that: Remember how Aleksandr Kogan, the University of Cambridge professor who built the app used by Cambridge Analytica, claimed that what he was doing was rather typical? Well, Facebook’s audit of the thousands of apps used on its platform appears to be proving Kogan right. Facebook just announced that it has already found and suspended 200 apps that appear to be misusing user data.
Facebook won’t say which apps were suspended, how many users were involved, or what the red flags were that triggered the suspension, so we’re largely left in the dark in terms of the scope of the problem.
But there is one particular problem app that’s been revealed, although it wasn’t revealed by Facebook. It’s the myPersonality app, which was also developed by Cambridge University professors at the Cambridge Psychometrics Center. Recall how Cambridge Analytica ended up working with Aleksandr Kogan only after first being rebuffed by the Cambridge Psychometrics Center. And as we’re going to see in the second article below, Kogan actually worked on the myPersonality app until 2014 (when he went to work for Cambridge Analytica). So the one app of the 200 recently suspended apps that we get to know about at this point is an app Kogan helped develop. And the other 199 apps remain a mystery for now:
“Facebook declined to provide more detail on which apps were suspended, how many people had used them or what red flags had led them to suspect those apps of misuse.”
Did you happen to use one of the 200 suspended apps? Who knows, although Facebook says it will notify people of the names of suspended apps eventually. No timeline for that disclosure is given:
And, again, this is exactly what Kogan warned us about:
And note how Facebook is specifically saying it’s reviewing “tens of thousands of apps that could have accessed or collected large amounts of users’ personal information before the site’s more restrictive data rules for third-party developers took effect in 2015”. In other words, Facebook isn’t reviewing all of its apps, only those that existed before the policy change that stopped apps from exploiting the “friends permission” feature that let app developers scrape the information of Facebook users and their friends. So it sounds like this review process isn’t looking for data privacy abuses under the current set of rules. Just abuses under the old set of rules:
And that apparent focus on abuses from the old “friends permission” rules suggests that current data use problems might go undetected. And the one app we’ve learned about, the myPersonality app, is a perfect example of the kind of app that would have been violating Facebook’s current data privacy rules. Because as people recently learned, the Facebook data gathered by the app was available online for the purpose of sharing with other researchers, but it was so poorly secured that anyone could have potentially accessed it:
But it gets worse. Because as the following New Scientist article that revealed the myPersonality app’s privacy issues points out, the data on some 6 million Facebook users was anonymized, but it was such a shoddy anonymization scheme that someone could have easily deanonymized the data in an automated fashion. And access to this database was potentially available to anyone for the past four years. So almost anyone could have grabbed this anonymized data on 6 million Facebook users and deanonymized it with relative ease.
And putting aside the possible unofficial access of this data, the people and institutions that got official access are also concerning: more than 280 people from nearly 150 institutions accessed this database, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo. Yep, researchers at Facebook were apparently accessing this database of poorly anonymized data.
So it should come as no surprise that, just as Aleksandr Kogan defended himself by asserting that lots of other apps did the same thing as his Cambridge Analytica app and Facebook was well aware of how his app was being used, we’re getting the exact same defense from the team behind myPersonality:
“Academics at the University of Cambridge distributed the data from the personality quiz app myPersonality to hundreds of researchers via a website with insufficient security provisions, which led to it being left vulnerable to access for four years. Gaining access illicitly was relatively easy.”
Yep, an online database of highly sensitive Facebook + psychological profile data was made accessible to hundreds of researchers. But it was also potentially accessible to anyone due to poor security. For four years.
And those that were given official access to the data included companies like Microsoft, Google, Yahoo, and Facebook:
While the Facebook researchers could plausibly claim that they had no idea the server hosting this data had insufficient security, it would be a lot harder for them to claim they had no idea the anonymization scheme was highly inadequate:
And the only thing the myPersonality team appeared to do to anonymize the data was replace names with a number. THAT’S IT! And when that’s the only anonymization step employed in a data set with large amounts of data on each individual, including status updates, it’s going to be trivial to automate the deanonymization of these people, especially for companies like Google, Yahoo, Microsoft and Facebook:
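To see just how weak that kind of “anonymization” is, here’s a purely hypothetical sketch (the names and status updates below are invented for illustration, not drawn from the actual myPersonality dataset): if the only step taken is replacing names with numeric IDs while verbatim status updates remain in the records, re-linking those records to public profiles can be fully automated with a simple join on the status text.

```python
# Hypothetical illustration of re-identification when names are replaced by
# numeric IDs but quasi-identifiers (here, verbatim status updates) remain.

# "Anonymized" dataset: name swapped for an opaque numeric ID.
anonymized = [
    {"id": 1001, "status": "Just finished the NYC marathon!"},
    {"id": 1002, "status": "My cat knocked over the tree again"},
]

# Public data an attacker could scrape: the same text, with names attached.
public_posts = [
    {"name": "Alice Example", "status": "Just finished the NYC marathon!"},
    {"name": "Bob Example", "status": "My cat knocked over the tree again"},
]

# Deanonymize by joining on the verbatim status text.
by_status = {p["status"]: p["name"] for p in public_posts}
reidentified = {r["id"]: by_status.get(r["status"]) for r in anonymized}

print(reidentified)  # each "anonymous" ID is mapped back to a real name
```

In practice a matcher would be fuzzier (partial text, timestamps, friend lists), but the point stands: stripping names without stripping the linkable content isn’t anonymization.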
Not surprisingly, two of the academics in charge of this project were part of a spin-off company that sold tools for targeting ads based on personality types. So it wasn’t just commercial companies like Google and Yahoo who got access to this data. The whole enterprise appeared to be commercial in nature:
And, of course, Aleksandr Kogan was part of this project before he went to work for Cambridge Analytica:
And note how Facebook only suspended this app on April 7th of this year, four years after Facebook ended its notorious “friends permission” feature that’s received most of the attention in the Cambridge Analytica scandal. It’s a big reminder that data privacy abuses via Facebook apps aren’t limited to that “friends permissions” feature. It’s an ongoing problem, which is why it’s troubling to hear that Facebook is only looking into the tens of thousands of apps that may have abused its pre-2015 data use policies:
But beyond the troubling half-assed anonymization scheme, there’s the issue of all this data being inadvertently made available to the world due to the user credentials for the database getting uploaded into some code on GitHub, an online coding repository:
It’s important to keep in mind that the accidental release of those credentials by some students is probably the most understandable aspect of this data privacy nightmare. It’s the equivalent of writing a bug in code: a common careless accident. Everything else associated with this data privacy nightmare is far less understandable because it wasn’t a mistake; it was by design.
And as we should expect at this point, the designers of the myPersonality app are expressing dismay at Facebook’s dismay. After all, Facebook has long been aware of the project and even held meetings with the team as far back as 2011:
And don’t forget, Facebook researchers were among the users of this data. So Facebook was obviously pretty familiar with the app.
And in the end, we’ll likely never know who accessed the data and what they did with it. It’s just the tip of the iceberg:
And note one of the other chilling implications of this story: Recall how the ~270,000 users of the Cambridge Analytica app resulted in Cambridge Analytica harvesting data on ~87 million people using the “friends permissions” option. Well, if the myPersonality app has been operating for 9 years, that means it also had access to the “friends permissions” option, and for much longer than the Cambridge Analytica app. And 6 million people apparently downloaded this app! So how many of those 6 million people were using the app in the pre-2015 period when the “friends permission” option was still available, and how many friends of those 6 million people had their profiles harvested too?
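As a back-of-the-envelope illustration of why that question matters, the figures cited above imply a large per-user multiplier. The extrapolation to myPersonality is purely speculative and only meant to show the scale of the exposure:

```python
# Figures from the reporting above: ~270,000 direct users of the Cambridge
# Analytica app yielded ~87 million harvested profiles via "friends permission".
direct_users = 270_000
harvested_profiles = 87_000_000

# Implied average number of friends' profiles exposed per direct user.
avg_friends_exposed = harvested_profiles / direct_users
print(round(avg_friends_exposed))  # roughly 320 profiles per direct user

# Purely speculative: applying the same multiplier to myPersonality's
# ~6 million users gives a crude sense of the potential pre-2015 reach
# (in reality capped by Facebook's user base and overlapping friend lists).
mypersonality_users = 6_000_000
speculative_reach = mypersonality_users * avg_friends_exposed
print(f"{speculative_reach:,.0f}")
```

The absolute number is not meaningful (friend networks overlap heavily), but it shows why even a fraction of those 6 million users being active pre-2015 could translate into a very large pool of harvested friend profiles.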
So it’s entirely possible the people at myPersonality grabbed information on far more than the 6 million people who used their app, and we have no idea what they did with the data. What we know now is just the tip of the iceberg of this story.
And this story of myPersonality is just covering one of the 200 apps that Facebook just suspended. In other words, this iceberg of a story is just the tip of a much, much larger iceberg.
Here’s a story about an explosive new lawsuit against Facebook that could end up being a major headache for the company, and Mark Zuckerberg in particular: The lawsuit is being brought by Six4Three, a former app developer startup. Six4Three claims that, in 2012, Facebook was facing a large crisis in its advertising business model due to the rapid adoption of smartphones and the fact that Facebook’s ads were primarily focused on desktops. Facing a large drop in revenue, Facebook allegedly forced developers to buy expensive ads on the new, underused Facebook mobile service or risk having their access to the data at the core of their business cut off.
The way Six4Three describes it, Facebook first got developers to build their business models around access to that data, and then engaged in what amounts to a shakedown of those developers, threatening to take that access away unless expensive mobile ads were purchased.
But beyond that, Six4Three alleges that Facebook incentivized developers to create apps for its system by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. Don’t forget that the Facebook friends data (accessed via the “friends permission” feature) is the information at the heart of the Cambridge Analytica scandal.
So Facebook was apparently offering long-term access to “friends permission” data back in 2012 as a means of incentivizing developers to create apps at the same time it was threatening to cut off developer access to this data unless they purchased expensive mobile ads. And then, of course, that “friends permission” feature was wound down in 2015, which was undoubtedly a good thing for the privacy of Facebook users. But as we can see, the developers weren’t so happy about this, in part because they were apparently told by Facebook to expect long-term access to that data. Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook.
It’s worth noting that Six4Three developed an app called Pikinis that searched through the photos of your friends for pictures of them in swimwear. So losing access to friends data more or less broke Six4Three’s app.
Beyond that, Six4Three also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. This is also noteworthy with respect to the Cambridge Analytica scandal since it appeared to be the case that Aleksandr Kogan’s psychological profiling app was allowed to access the “friends permission” feature later than other apps. In other words, the Cambridge Analytica app did actually appear to get preferential treatment from Facebook.
But Six4Three’s allegations go further, and suggest that Facebook’s executives would observe which apps were the most successful and plotted to either extract money from them, co-opt them or destroy them using the threat of cutting off access to the user data as leverage.
So, basically, Facebook is getting sued by this app developer for acting like the mafia and using access to all that user data as its key enforcement tool:
“A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.”
It was Mark Zuckerberg who personally led this shakedown operation, according to the lawsuit. So what’s the evidence? Well, that appears to be in the form of thousands of currently redacted internal emails. It’s unclear how those emails were obtained:
Note this isn’t a new lawsuit by Six4Three. They first filed a case in 2015, shortly after Facebook removed developers’ access to the “friends permission” data feature, which let app developers grab extensive information from ALL the Facebook friends of the users who downloaded their apps. And when you look at how the Six4Three app works, it’s pretty clear why they would have been very upset about losing access to the friends data: their “Pikinis” app is based on scanning your friends’ pictures for shots of them in swimwear:
And it’s a rather fascinating lawsuit by Six4Three because it’s basically complaining about Facebook suddenly threatening to remove access to this personal data after previously implying that developers would have long-term access to it, and using that threat to extort developers. And in order to make that case, Six4Three also asserts that Facebook was well aware of the privacy implications of its data sharing policies, because access to that data was both the carrot and the stick for developers. So this case, if proven, would utterly destroy Facebook’s portrayal of itself as a victim of Cambridge Analytica’s misuse of its data:
And the initial motive for all this was Facebook’s realization in 2012 that it had failed to anticipate the speed of consumer adoption of smartphones, which effectively damaged its lucrative advertising business, focused as it was on desktop ads:
So Facebook responded to this sudden threat to its core business in multiple scandalous ways, according to the lawsuit. First, Facebook began forcing app developers to buy expensive mobile ads on its new, underused mobile service, or risk having their access to the data at the core of their business cut off. It’s an example of how important selling access to that user data to third parties was to Facebook’s business model:
But beyond that, Six4Three alleges that Facebook was simultaneously trying to entice developers to make apps for its systems by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. So the “friends permission” feature for developers that Facebook phased out in 2014–2015 was apparently being peddled to developers as a long-term feature back in 2012:
And, according to Six4Three, once a business became hooked on Facebook’s user data, Facebook would then look for particularly lucrative apps and try to find ways to extract more money out of them. And that would apparently include threatening to cut off access to that user data to either force companies out of business or coerce app owners into selling at below market prices. Up to 40,000 companies were potentially defrauded in this way and it was Facebook’s senior executives who personally devised and managed the scheme, including Zuckerberg:
Not surprisingly, Sandy Parakilas, the former Facebook executive turned whistleblower who previously revealed that Facebook executives were consciously negligent in how user data was used (or abused), views this lawsuit and the revelations contained in those emails as a “bombshell” that more or less backs up what he’s been saying all along:
So was Mark Zuckerberg effectively acting like the top mobster in a shakedown scheme involving app developers? A scheme where Facebook selectively threatened to rescind access to its core data in order to extort ad buys from developers, buy their apps at below-market prices, or straight up drive them out of business? We’ll see, but this is going to be a lawsuit to keep an eye on.
“That’s a nice app you got there...it would be a shame if something happened to your access to user data...”
Here’s a fascinating twist to the already fascinating story of Psy Group, the Israeli-owned private intelligence firm that was apparently pushed on the Trump team during the August 3, 2016, Trump Tower meeting. That’s the newly discovered meeting where Erik Prince and George Nader met with Donald Trump, Jr. and Stephen Miller to inform the Trump team that the crown princes of Saudi Arabia and the UAE were “eager” to help Trump win the election. And Psy Group, an Israeli private intelligence firm that offers many of the same psychological warfare services as Cambridge Analytica, presented a pitch at that meeting for a social media manipulation campaign involving thousands of fake accounts. And this meeting happened a couple weeks before Steve Bannon replaced Paul Manafort and brought Cambridge Analytica into prominence in the Trump team’s electoral machinations.
So here’s the new twist to this Psy Group/Cambridge Analytica story: now we learn that Psy Group formed a business alliance with Cambridge Analytica after Trump’s victory to try to win U.S. government work. This alliance reportedly happened after Cambridge Analytica and Psy Group signed a mutual non-disclosure agreement.
Intriguingly, the agreement was signed on December 14, 2016, according to documents seen by Bloomberg. And December 14th, 2016, just happens to be one day before the Crown Prince of the UAE secretly traveled to the US — against diplomatic protocol — and met with the Trump transition team at Trump Tower (including Michael Flynn, Jared Kushner, and Steve Bannon) to help arrange the eventual meeting in the Seychelles between Erik Prince, George Nader, and Kirill Dmitriev.
So you have to wonder if the signing of that non-disclosure agreement was part of all the scheming associated with the Seychelles. Don’t forget that the Seychelles meeting appears to center around what amounts to a lucrative offer to Russia to realign itself away from the governments of Iran and Syria, which implicitly suggests plans for ongoing regime change operations in Syria and a major new regime change operation in Iran. And based on what we know about the services offered by both Psy Group and Cambridge Analytica — psychological warfare services designed to change the attitudes of entire nations — the two firms sound like exactly the kinds of companies that might have been major contractors for those planned regime change operations.
Granted, there would have been no shortage of potential US government contracts Cambridge Analytica and Psy Group would have been mutually interested in pursuing that have nothing to do with the Seychelles scheme. But the timing sure is interesting given the heavy overlap of characters involved.
And while the non-disclosure documents don’t indicate precisely which government contracts the two companies were initially planning to jointly bid on (which makes sense if they were initially planning on working on something involving a Seychelles/regime-change scheme), there is some information on one of the contracts they did end up jointly bidding on, which happened to focus on psychological warfare services in the Middle East. Specifically, they made a joint proposal to the State Department’s Global Engagement Center for a project focused on disrupting the recruitment and radicalization of ISIS members. It sounds like the proposal focused heavily on creating fake online personas, so it’s basically a different application of the same fake-persona services Psy Group and Cambridge Analytica offer in the political arena.
And it turns out the State Department’s Global Engagement Center did indeed sign a contract with Cambridge Analytica’s parent company, SCL Group, last year. Additionally, one of the contracts Psy Group and Cambridge Analytica jointly submitted to the US State Department also included SCL. Although it’s unclear if that signed contract involved Cambridge Analytica: it didn’t include provisions for subcontractors, and it didn’t involve social media, focusing instead on in-person interviews. So while we don’t know how successful Cambridge Analytica and Psy Group were in their mutual hunt for government contracts, SCL was successful. And if SCL was getting lots of other contracts, who knows how many of them also involved Cambridge Analytica and/or Psy Group.
We’re also learning that Psy Group appears to have shut itself down in February of 2018, shortly after George Nader was interviewed by Robert Mueller’s grand jury. But it doesn’t appear to be a real shutdown; it sounds like Psy Group has quietly reopened under the new name “WhiteKnight”. Let’s not forget that Cambridge Analytica appears to have already done the same thing, shutting down only to quietly reopen as “Emerdata”. So for all we know there’s already a new WhiteKnight/Emerdata non-disclosure agreement in place for the purpose of further joint bidding on government contracts. But as the following story makes clear, one thing we do know for sure at this point is that if Cambridge Analytica and/or Psy Group end up getting government contracts, they’re going to go to great lengths to hide it:
“Special Counsel Robert Mueller’s team has asked about flows of money into the Cyprus bank account of a company that specialized in social-media manipulation and whose founder reportedly met with Donald Trump Jr. in August 2016, according to a person familiar with the investigation.”
So the Mueller probe is looking into money-flows of Psy Group’s Cyprus bank account, along with the activities of George Nader (who pitched Psy Group to the Trump team in August 2016) and this interest from Mueller appears to have led to the sudden shutdown of the company a few months ago:
Although the sudden shutdown of Psy Group appears to really be a secret rebranding. Psy Group is apparently now WhiteKnight, a rebranding the company has been working on for a while, it seems, since WhiteKnight was hired by Nader to do a post-election analysis on the role social media played in the 2016 election:
Just imagine how fascinating WhiteKnight’s post-election analysis on the role social media played must be, since it was basically conducted by Psy Group, a social media manipulation firm that either executed much of the most egregious (and effective) social media manipulation itself or worked directly with the worst perpetrators, like Cambridge Analytica. There are probably quite a few insights in that report that wouldn’t be available to other firms.
So what kinds of secrets is Psy Group hoping to keep hidden with its shutdown/rebranding move? Well, some of those secrets presumably involve the alliance Psy Group created with Cambridge Analytica shortly after Trump’s victory, culminating in the December 14, 2016, mutual non-disclosure agreement (one day before the Trump Tower meeting with the crown prince of the UAE to set up the Seychelles meeting). And note how the contract Psy Group and Cambridge Analytica pitched for “messaging/influence operations in well over a dozen languages and dialects” was also submitted with Cambridge Analytica’s parent company SCL. So Psy Group’s alliance with Cambridge Analytica was probably really an alliance with Cambridge Analytica’s parent company too:
Another point to keep in mind regarding the timing of that December 14, 2016, mutual non-disclosure agreement: the Seychelles meeting appears to be a giant pitch designed to realign Russia, indicating the UAE was clearly very interested in exploiting Trump’s victory in a big way. They were ‘cashing in’, metaphorically. So it seems reasonable to suspect that Psy Group, which is closely affiliated with the UAE’s crown prince, would also be quite interested in literally ‘cashing in’ in a very big way too during that December 2016 transition period. In other words, while we don’t know what Psy Group and Cambridge Analytica decided to not disclose with their non-disclosure agreement, we can be pretty sure it was extremely ambitious at the time.
But at this point, the only proposals for US government contracts that we do know about were for an anti-ISIS social media operation for the US State Department’s Global Engagement Center:
And one contract we do know about at this point that was awarded to this network of companies was actually awarded to Cambridge Analytica’s parent company, SCL:
So there’s one government contract that SCL won following Trump’s election, one that Psy Group and Cambridge Analytica may or may not have been involved with.
And that’s all we know about the work Psy Group may or may not have done for the US government following Trump’s victory at this point. Except we also know that Psy Group and Cambridge Analytica weren’t competing, so whatever contract Psy Group got, Cambridge Analytica may have received too. And that indicates, at a minimum, a willingness for these two companies to work VERY closely together. So close that they risk revealing internal secrets to each other. Don’t forget, Psy Group and Cambridge Analytica are ostensibly competitors offering similar services to the same types of clients. And shortly after the election they were willing to sign an agreement to jointly compete for contracts that they would work on together. Don’t forget that one of the massive questions looming over this whole story is whether or not Psy Group and Cambridge Analytica — two direct competitors — were not just on the same team but actually working closely together during the 2016 election to help elect Trump. And thanks to these recent revelations we now know Psy Group and Cambridge Analytica were at least willing to work extremely closely with each other immediately after the election on a variety of different government contracts. That seems like a relevant clue in this whole mess.
Oh look, a new scary Cambridge Analytica operation was just discovered. Or rather, it’s a scary new story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced development of its “Ripon” psychological profiling software, and which also played a key role in the pro-Brexit campaign and later assisted Sergei Taruta, a West-leaning East Ukraine politician. It’s like these companies can’t go a week without a new scary story. Which is extra scary.
For scary starters, the article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available on Facebook that appear to have been developed by AIQ. So if Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One is even named “AIQ Johnny Scraper” and it’s registered to AIQ.
Another part of what makes the following article scary is that it’s a reminder that you don’t necessarily need to have downloaded a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.
Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look at what you personally have said about various topics on social media. A service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made. That’s what AIQ was offering, and the newly discovered database contained the info for it.
In this case, the Financial Times has somehow gotten its hands on a bunch of Facebook-related data held internally by AIQ. It turns out that AIQ stored a list of 759,934 Facebook users in a table that included home addresses, phone numbers and email addresses for some profiles. Additionally, the files contain those people’s political Facebook posts and likes. It all appears to be part of a software package AIQ was developing for a client that would allow them to search the political posts and “Likes” people made on Facebook. A personal political browser that could give a far more detailed peek into someone’s politics than traditionally available information like political donation records and party affiliation.
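To make the reported service concrete, here’s a minimal sketch of what a client-facing “political post search” over a table like the one described might look like. Everything below is invented for illustration — the row layout, names, and categories are assumptions, not the actual AIQ schema:

```python
# Minimal sketch of a searchable table of per-user political posts and Likes,
# in the spirit of the AIQ database described in the reporting.
# All rows below are invented examples; no real data is used.

from dataclasses import dataclass, field

@dataclass
class ProfileRow:
    name: str
    email: str
    posts: list = field(default_factory=list)   # (topic, sentiment, text)
    likes: list = field(default_factory=list)   # liked page names

TABLE = [
    ProfileRow("Jane Doe", "jane@example.com",
               posts=[("immigration", "negative", "Fed up with the status quo...")],
               likes=["Example Campaign Page"]),
    ProfileRow("John Roe", "john@example.com",
               posts=[("trade", "positive", "Tariffs are working!")],
               likes=["Another Example Page"]),
]

def lookup(name_query: str):
    """Return every stored row (posts and Likes included) for matching users."""
    q = name_query.lower()
    return [row for row in TABLE if q in row.name.lower()]

# A client types a name in and gets that person's political trail back out.
for row in lookup("jane"):
    print(row.name, row.posts, row.likes)
```

The unsettling part isn’t the code, which is trivial; it’s that the hard work of collecting and categorizing 759,934 people’s posts had already been done upstream.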
Also keep in mind that we already know Cambridge Analytica collected large amounts of information on 87 million Facebook accounts. So the 759,934 number should not be seen as the total number of people AIQ has similar such files on. It could just be a particular batch selected by that client. A batch of 759,934 people a client just happens to want to make personalized political searches on.
It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees. A goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. And an ideological purge of the State Department is reportedly already underway. So it will be interesting to learn if this AIQ service is being used for such purposes.
It’s unclear if the data in these files was collected through a Facebook app developed by AIQ — in which case the people in the file at least had to click the “I accept” part of installing the app — or if the data was collected simply from scraping publicly available Facebook posts. Again, it’s a reminder that pretty much ANYTHING you do on a publicly accessible Facebook post, even a ‘Like’, is probably getting collected by someone, aggregated, and resold. Including, perhaps, by AggregateIQ:
““The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.””
As security researcher Chris Vickery put it, the whole is greater than the sum of its parts when you look at the synergistic way the various tools developed by companies like Cambridge Analytica and AIQ work together. Synergy in the service of creating a mass manipulation service with personalized micro-targeting capabilities.
And that synergistic mass manipulation is part of why it’s disturbing to hear that Vickery just discovered 13 AIQ apps still available on Facebook after Cambridge Analytica was declared banned and caused Facebook so much bad publicity. The fact that there are still Cambridge Analytica-affiliated apps suggests Facebook either really, really, really likes Cambridge Analytica or it’s just really, really bad at app oversight:
“However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.”
“AIQ Johnny Scraper”. They weren’t even hiding it. But at least the Johnny Scraper app sounds relatively innocuous.
The personal political post search engine service, on the other hand, sounds far from innocuous. A database on 759,934 Facebook users created by AIQ software that tracked which users liked a particular page or were posting positive or negative comments. So software that interprets what people write about politics on Facebook and aggregates that data into a search engine for clients. You have to wonder how sophisticated that automated interpretation software is at this point. Whatever the answer, AIQ’s text interpretation software is only going to get more sophisticated. That’s a given.
Someday that software will probably be able to write its own synopsis of a person that’s better than a human could do. Who knows when that kind of software will be available but someday it will be and companies like AIQ will be there to exploit it if that’s legal. That’s also a given.
And this 759,934 person database of political Likes and written political comments was what AIQ provided for just one client:
And for all we know, AIQ’s database could have been data curated from publicly available posts and not AIQ app users, highlighting how anything publicly done on Facebook, even a Like, is going to be collected by someone and probably sold:
You are what you Like in this commercial space. And we’re all in this commercial space to some extent. There really is a commercially available profile of you. It’s just distributed between the many different data brokers offering slices of it.
Another key dynamic in all this is that Facebook’s business model appears to combine exploiting the vast information monopoly it possesses with an opposing model of effectively selling off little chunks of that data by making it available to app developers. There’s an obvious tension in exploiting your data monopoly while selling it off, but that appears to be the most profitable path forward. And it’s probably the business model AIQ was pursuing with the data it was collecting from Facebook: analyzing the Facebook data it collected through apps and public data scraping, categorizing it (political or non-political comments, and whether they’re positive or negative), and then selling slices of that vast internal AIQ-curated content to clients.
Aggregate as much data as possible. Analyze it. And offer pieces of that curated data pile to clients. That appears to be a business model of choice in this commercial big data arena, which is why we should assume AIQ and Cambridge Analytica were offering similar services and shouldn’t assume this particular database of 759,934 Facebook accounts is the only one of its nature. Especially given the 87 million profiles they already scraped.
And this is a business model that’s going to apply to far more than just Facebook content. The whole spectrum of information collected on everyone is going to be part of this commercial space. And that’s part of what’s so scary: the data that gets fed into independent Big Data repositories like the AIQ/Cambridge Analytica database is going to increasingly be the curated data provided by other Big Data providers in the same business. Everyone is collecting and analyzing the curated data everyone else is regurgitating out. Just as Cambridge Analytica and AIQ offer a slew of separate interoperable services to clients that have a ‘whole is greater than the sum’ synergistic quality, the entire Big Data industry is going to have a similar quality. It’s a competitive, cooperative division of labor. Cambridge Analytica and AIQ are just the extra scary team members in a synergistic industry-wide team effort in the service of maximizing the profits to be made from exploiting everyone’s data for sale.
It’s that time again. Time to learn how the Cambridge Analytica/Facebook scandal just got worse. So what’s the new low? Well, it turns out Facebook hasn’t just been sharing egregious amounts of Facebook user data with app developers. Device makers, like Apple and Samsung, have also been given similar access to user data. At least 60 device makers known thus far.
Except, of course, it’s worse: these device makers have actually been given EVEN MORE data than Facebook app developers received. For example, Facebook allowed the device makers access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. And some device makers could access personal information from users’ friends who thought they had turned off any sharing. So the “friends permissions” option that allowed Cambridge Analytica’s app to collect data on 87 million Facebook users even though just 300,000 people used the app has remained an option for device manufacturers even after Facebook phased out the friends permissions option for app developers in 2014–2015.
Beyond that, the New York Times examined the kind of information gathered from a BlackBerry device owned by one of its reporters and found that it wasn’t just collecting identifying information on all the reporter’s friends. It was also grabbing identifying information on those friends’ friends. That single BlackBerry was able to retrieve identifying information on nearly 295,000 people!
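The back-of-the-envelope arithmetic here is worth spelling out, because it shows why “friends of friends” access explodes so fast. The sketch below assumes a uniform friend count with no overlap between friend lists (both simplifications); the specific figure of ~540 friends is chosen purely because it lands near the reported number, not because it’s from the article:

```python
# Why friends-of-friends access reaches hundreds of thousands of profiles
# from a single account. Assumes every user has the same number of friends
# and no two friend lists overlap -- an upper bound, not a real graph.

def reach_upper_bound(avg_friends: int) -> int:
    """Profiles reachable within two hops of one account:
    yourself + your friends + each friend's other friends."""
    return 1 + avg_friends + avg_friends * (avg_friends - 1)

# At ~540 friends per account, the two-hop ceiling is already in the
# neighborhood of the ~295,000 profiles one BlackBerry reportedly pulled.
print(reach_upper_bound(540))
```

The quadratic term is the whole story: doubling the average friend count roughly quadruples the number of strangers whose data one device can touch.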
Facebook justifies all this by arguing that the device makers are basically an extension of Facebook. The company also asserts that there were strict agreements on how the data could be used. But the main loophole they cite is that Facebook viewed its hardware partners as “service providers,” like a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. And by categorizing these device makers as service providers Facebook is able to get around a 2011 consent decree Facebook signed with the US Federal Trade Commission over previous privacy violations. According to that consent decree Facebook does not need to seek additional permission to share friend data with service providers.
So it’s not just Cambridge Analytica and the thousands of app developers who have been scooping up mountains of Facebook user data without people realizing it. The device makers have been doing it too. More so. Much, much more so:
“Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.”
At least 60 device makers are sitting on A LOT of Facebook data. Note how NONE of them acknowledged this before this report came out, even as this Cambridge Analytica scandal was unfolding. It’s one of those quiet lessons in how the world unfortunately works.
And these 60+ device makers were able to access data of users’ friends without their consent, even when those friends had changed their privacy settings to bar any sharing:
“Most of the partnerships remain in effect, though Facebook began winding them down in April.”
Yep, these data sharing partnerships largely remain in effect and didn’t end in 2014–2015 when the app developers lost access to this kind of data. It’s only now, as the Cambridge Analytica scandal unfolds, that these partnerships are being ended.
This was all done despite a 2011 consent decree that barred Facebook from overriding users’ privacy settings without first getting explicit consent. Facebook simply categorized the device makers as “service providers”, exploiting a “service provider” loophole in the decree:
It’s also worth recalling that Facebook made similar excuses for allowing app developers to grab user friends’ data, claiming that the data was solely going to be used for “improving user experiences.” Which makes Facebook’s explanation for how the device maker data sharing program was very different from the app developer data sharing program rather amusing because, according to Facebook, the device partners can use Facebook data only to provide versions of “the Facebook experience” (which implicitly admits that app developers were using that data for a lot more than just improving user experiences):
““These partnerships work very differently from the way in which app developers use our platform,” said Ime Archibong, a Facebook vice president. Unlike developers that provide games and services to Facebook users, the device partners can use Facebook data only to provide versions of “the Facebook experience,” the officials said.” LOL!
Of course, it’s basically impossible for Facebook to know what device makers were doing with this data because, just like with app developers, these device manufacturers had the option of keeping this Facebook data on their own servers:
And this data privacy nightmare situation apparently all started in 2007, when Facebook began building private APIs for device makers:
So what kind of data are device manufacturers actually collecting? Well, it’s unclear if all device makers get the same level of access. But BlackBerry, for example, can access 50 types of information on users and their friends. Information like Facebook users’ relationship status, religion, political leaning and upcoming events:
And as the New York Times discovered after testing a reporter’s Blackberry device, Blackberry was able to grab information on friends of friends, allowing the one device they tested to collect identifying information on 295,000 Facebook users:
And this information was collected and sent to the “BlackBerry Hub” immediately after the reporter connected his device to his Facebook account:
Not surprisingly, Facebook whistle-blower Sandy Parakilas, who left the company in 2012, recalls this data sharing arrangement triggering discussions within Facebook as early as 2012. So Facebook has had internal concerns about this kind of data sharing for the past six years. Concerns that were apparently ignored.
Also keep in mind that the main concern Sandy Parakilas recalls hearing Facebook executives express over the app developer data sharing back in 2012 was that these developers were collecting so much information that they were going to be able to create their own social networks. As Parakilas put it, “They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people...They were worried that they were going to build their own social networks.”
Well, the major device makers have undoubtedly been gathering far more information than major app developers, especially when you factor in the “friends of friends” option and the fact that they’ve apparently had access to this kind of data up until now. And that means these device makers must already possess remarkably detailed social networks of their own at this point.
So when you hear Facebook executives characterizing these device manufacturers as “extensions of Facebook”...
...it’s probably the most honest thing Facebook has said about this entire scandal.
Here’s an angle to the Facebook data privacy scandal that has received surprisingly little attention because, when it comes to privacy violations, this just might be the worst one we’ve seen: It turns out one of the types of data that Facebook gave app developers permission to access is the contents of users’ private inboxes.
Yep, it’s not just your Facebook ‘profile’ of data points Facebook has collected on you. Or all the things you ‘liked’. App developers apparently also could gain access to the private messages you received. And much like the ‘friends permission’ option exploited by Cambridge Analytica to get profile information on all of the friends of app users without the permissions of those friends, this ability to access the contents of your inbox is obviously a privacy violation of the people sending you those messages.
The one positive aspect of this whole story is that at least app developers had to let users know that they were giving access to the inbox. So users presumably had to agree somehow. And Facebook states that users had to explicitly give permission for this. So at least this wasn’t a default app permission.
But when asked about the language used in this notification, Facebook had no response. So we can’t assume that all people who used Facebook apps were knowingly giving developers access to their private inbox messages, and we also have no idea how many people were tricked into it with deceptive language in the permissions notifications.
Of course, one of the big questions is whether or not this inbox permissions feature got exploited by Cambridge Analytica. Yes, it was, and that’s actually how we learned about its existence: when Facebook started sending out notifications to users that they may have been impacted by the Cambridge Analytica data collection (which impacted 87 million users) via the “This Is Your Digital Life” app created by Aleksandr Kogan, it sent the following notification that informed people that they may have had their personal messages collected:
So Facebook casually informed users that only a “small number of people” who used the Cambridge Analytica “This Is Your Digital Life” app may have given access to “message from you”. Did they actually give developers access to messages from you? That’s left a mystery.
And notice that the language in that Facebook notification says user posts were also made available to developers. That’s been one of the things that’s never been entirely clear in the reporting on this topic: were developers given access to the actual private posts people make? The language of that notification is ambiguous as to whether apps could access private posts or only public posts, but given the way everything else has played out in this story, it seems highly likely that private posts were included.
The inbox permission was phased out in 2014 along with the “friends permission” option and many of the other permissions Facebook used to grant to app developers. There was a one-year grace period for app developers to adjust to the new rules, which took effect in April of 2015. But as the article notes, developers actually retained access to the inbox permission until October 6 of 2015. And that’s well into the US 2016 election cycle, which raises the fascinating possibility that this ‘feature’ could actually have been used to spy on the US political campaigns. Or the UK Brexit campaign. Or any other political campaign around the world around that time. Or anything else of importance across the world from 2010 to 2015, when these mailbox-reading options were available to app developers.
And that’s what makes it so amazing that this particular story wasn’t bigger: back in April Facebook acknowledged that it gave almost anyone the potential capacity to spy on private Facebook messages and users had almost no idea this was going on. That seems like a pretty massive scandal:
“Facebook has admitted that some apps had access to users’ private messages, thanks to a policy that allowed devs to request mailbox permissions.”
Imagine how much Facebook did not want to admit that: they actually allowed users to give access to their inboxes to random app developers until 2014. If you downloaded a Facebook app with read_mailbox permissions, all your inbox messages were scooped up:
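For a sense of how low the bar was, here’s a sketch of the kind of call an app holding the read_mailbox permission could make against the old, since-removed Graph API. The endpoint path and version string below are from memory of the deprecated v1.0 API and should be treated as an assumption; no request is actually sent, the code just assembles the URL:

```python
# Sketch of the (now-removed) Graph API request an app with the
# read_mailbox permission could issue to pull a user's inbox.
# Endpoint/version are assumed from the deprecated v1.0 API; the token
# is a placeholder, and nothing is sent over the network here.
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v1.0"

def inbox_request_url(access_token: str) -> str:
    # The token had to be granted via a login dialog that requested the
    # read_mailbox scope -- the "explicit consent" Facebook points to.
    return f"{GRAPH}/me/inbox?{urlencode({'access_token': access_token})}"

print(inbox_request_url("PLACEHOLDER_TOKEN"))
```

One consent click by the inbox’s owner, and every correspondent’s messages came along with it; the senders were never asked.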
But the practice actually went on well into 2015 due to the grace period Facebook gave app developers. And the grace period for the inbox permissions went until October 6, 2015, 6 months later than almost all the other permissions that were getting phased out. This app spying feature was given extra time:
But at least people had to give explicit consent, according to Facebook. And they said only a “small number” of people gave that consent. So it suggests not all users of the Cambridge Analytica app gave this permission and it wasn’t turned on by default. Hopefully:
And when asked for an example of the permissions forms users would have had to sign Facebook didn’t give a reply:
Another alarming aspect of this is how Facebook tries to downplay the seriousness by pointing out that the messaging service was less used as a ‘real-time messaging service’ back in 2010–2015 than it is today. That merely highlights how it was inevitably used as a real-time messaging service by some people back then, just not as many as today:
Imagine how content-rich those inboxes used for real-time messaging are for apps collecting that real-time messaging information. Along with all the non-real-time messages people were sending. Like long messages. And very private and secret stuff that people wouldn’t want to give to any random app developer. Apps just had to successfully ask for that kind of data. That was available to app developers from 2010 until October 6, 2015 (keep in mind that developers were getting personal information as far back as 2007; there were just new rules in 2010).
Cambridge Analytica denies ever receiving this messaging data from Aleksandr Kogan’s GSR, the company that actually built the app and collected the data. Which is probably as believable as the rest of Cambridge Analytica’s initial denials:
And as the article grimly reminds us, this issue is about much more than Cambridge Analytica and its parent company SCL. It’s about any random app developer doing this, and about the people who sent messages to app users who turned on the inbox option, never having given their own permission. It’s just a horrible option for Facebook to have given the developer community. How was there not mass spying going on?
And this story is just the latest horrible story against the backdrop of all the other data privacy horrors to emerge from the Cambridge Analytica scandal. Horrors that were rarely exclusive to Cambridge Analytica. Facebook is going with the ‘we’re sorry, we’ll be better’ angle, an angle that gets more and more jaded with each new horror story:
Yep, Facebook has come off pretty poorly in the general reporting on all the data privacy nightmares of past and present. But there was a recent update on Facebook’s big app audit: it can’t actually do it. That’s the update:
“Facebook Inc.’s internal probe into potential misuse of user data is hitting fundamental roadblocks: The company can’t track where much of the data went after it left the platform or figure out where it is now.”
What happened to that mountain of personal data Facebook was doling out to any random app developer? Facebook is still trying to figure out who all the app developers were:
And Facebook hasn’t even gotten access to Cambridge Analytica’s servers to see what Cambridge Analytica may have done with the data:
And there’s no legal necessity for companies to comply with Facebook, so this app data usage audit is probably going to be pretty spotty at best:
And note how it’s apparently difficult for Facebook to simply know what data it gave away, a problem blamed on technical difficulties with how the developer platform was designed, which is absurd if true. If there’s one thing the company should know, it’s what it gave away:
And note the tragicomic juxtaposition of Facebook defining data use ‘wrongdoing’ as including things like storing personally identifiable information, as Kogan’s GSR did for the Cambridge Analytica app. And yet, as Kogan has repeatedly pointed out, this was routine for app developers, and all evidence suggests he is correct.
Despite that, Facebook’s vice president of product partnerships proclaimed that the vast majority—“99.99999999%”— of Facebook developers are good actors and that the firm doesn’t want to unnecessarily alienate them.
So the wrongdoing was routine, and yet “99.99999999%” of app developers did nothing wrong with the data from Facebook’s perspective. It’s the kind of narrative that makes it clear why Facebook doesn’t appear to want to try very hard in conducting this audit:
Finally, there’s the warning from Facebook at the end that the timeline for the audit is “somewhat amorphous”. Which translates as “as slowly as possible” (the other kind of ASAP):
So it sounds like Facebook has no idea what it gave away and not even necessarily who it gave it away to. We just know that data potentially included your inbox or your outbox until October 6, 2015.
All in all, one of the key lessons we can take from all this is that Facebook is determined to learn nothing. A combination of feigned ignorance and systemic ignorance really is Facebook’s best defense at this point.
Another lesson from all this is that Facebook users should probably do a personal audit of their inboxes and outboxes. There are probably quite a few other entities doing that same audit.
Here’s a story about privacy violations that’s surprisingly good news for Facebook. Good news in the sense that it’s potentially really bad news for other tech giants like Google, Microsoft and Yahoo (now owned by Verizon), and more or less ‘evens the score’ on the Big Tech bad news front:
Remember the story about how Facebook was granting app developers potential access to the message inboxes of Facebook users? Well, as we should probably expect, it turns out there are large numbers of apps for Google’s, Microsoft’s, and Yahoo’s free email services that can also provide the developers of those apps full access to read your emails.
Yep, if you signed up for any sort of third-party email app, the people at that app company can potentially read all your emails. And this is the case for Gmail and Yahoo, the two biggest free email providers on the planet.
On the one hand, it makes perfect sense that human app developers could read your emails, since their apps are literally doing that algorithmically and humans have to build, tweak, and maintain those algorithms. But on the other hand, it’s kind of amazing that this is barely known or recognized. As one app developer in the following article describes it, “Some people might consider that to be a dirty secret...It’s kind of reality.”
Reality is a dirty secret. That’s an apt way to put it, because as the article also points out, data-mining companies commonly use free apps and services to hook users into giving up access to their inboxes without making that clear.
And the developers potentially gaining this kind of information range from large corporations all the way down to one-man operations. One of the companies described below, Return Path, partners with 163 different email app developers. In all of those cases, Return Path gets potential access to that email data.
Return Path’s systems are supposed to give marketers a sense of how many people actually read their emails. So it sounds like the company provides useful email apps to users and uses the access it gains to users’ inboxes to see if people actually click on emails sent by Return Path’s marketer clients. Return Path says its software is supposed to strip personal emails out of its systems and focus only on commercial emails, doing this automatically by examining senders’ domain names and searching for specific words, such as “grandma.” But in 2016, Return Path discovered its algorithm was mislabeling many personal emails as commercial, allowing millions of personal messages that should have been deleted to pass through to Return Path’s servers. And in response to this bug, Return Path gave some of its developers access to users’ inboxes so they could hammer out the bugs in the algorithm. So when Return Path’s algorithms turned out to be violating user privacy, humans were given access to user emails. And this is a single anecdote from a single company.
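The kind of domain-and-keyword heuristic described above can be sketched in a few lines. To be clear, everything here is a hypothetical illustration of that general approach, not Return Path’s actual code: the function name `looks_personal`, the domain list, and the keyword list are all made up for the example.

```python
# Toy sketch of a personal-vs-commercial email filter that works the way the
# article describes: check the sender's domain and scan for "personal" words.
# All names and lists here are hypothetical, purely for illustration.

FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}
PERSONAL_KEYWORDS = {"grandma", "birthday", "love"}  # illustrative only

def looks_personal(sender: str, body: str) -> bool:
    """Return True if the email should be treated as personal and discarded."""
    domain = sender.rsplit("@", 1)[-1].lower()
    words = set(body.lower().split())
    return domain in FREE_MAIL_DOMAINS or bool(words & PERSONAL_KEYWORDS)

# The failure mode the article reports is exactly what you'd expect from
# heuristics this crude: a shop called "Grandma's Bakery" gets flagged as
# personal, while a private note sent from a company address slips through.
assert looks_personal("aunt@gmail.com", "see you at the party") is True
assert looks_personal("deals@megastore.com", "50% off everything") is False
```

The point of the sketch is that a filter like this has no understanding of content; it only pattern-matches, which is why fixing its mistakes apparently required humans to read the misclassified emails.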
Google asserts that it vets all of the developers and has clear rules restricting developers’ ability to store user data. But as the article notes, Google doesn’t actually do much to police those policies, much like what we saw with Facebook. The co-founder of an email app for real-estate agents bluntly tells the Wall Street Journal, “I have not seen any evidence of human review” by Google employees. So if you use any third-party email apps, keep in mind that those third parties might have a front-row view of all of your emails. It’s quite a dirty open secret.
So while we haven’t yet reached the nightmare scenario of learning that everyone’s emails around the globe have been hacked and are in the hands of unknown entities, we’re steadily getting there:
“Facebook Inc. for years let outside developers gain access to its users’ data. That practice, which Facebook has said it stopped by 2015, spawned a scandal when the social-media giant this year said it suspected one developer of selling data on tens of millions of users to a research firm with ties to President Donald Trump’s 2016 campaign. The episode led to renewed scrutiny from lawmakers and regulators in the U.S. and Europe over how internet companies protect user information.”
Will the kind of public backlash Facebook is enduring over its shockingly loose data privacy policies for third-party app developers translate into a more general public backlash against third-party apps? We’ll find out. Probably soon. Because if learning that Google’s Gmail, the most popular free email service in the world, has a similar third-party app data privacy policy can’t generate that backlash, probably nothing will:
“Google does little to police those developers, who train their computers—and, in some cases, employees—to read their users’ emails, a Wall Street Journal examination has found.”
That sure sounds a lot like every one of the Facebook scandals we’ve seen of late: the company assures us that it has policies in place to prevent data abuses, but then we learn that those policies aren’t policed.
And it’s not just Google. Apps for Microsoft and Yahoo email services also have these data privacy issues. And as the article notes, data-mining companies commonly use apps to gain access to inboxes without making it clear. It really is a remarkably dirty open secret:
And in some cases there’s a single data-mining firm that partners with a large number of email app developers. Return Path has access to two million people via its network of app developer partners. Its algorithms scan around 100 million emails a day:
Another app developer, Edison Software, personally examined hundreds of user emails to develop new features. It’s a reminder that the development of new features more or less acts as a permanent excuse for developers to have humans reading user emails:
And according to the former chief of one of Return Path’s competitors, letting employees read user emails is “common practice”. As they put it, that common practice is a dirty secret and kind of reality:
And notice how neither Return Path nor Edison Software explicitly tells users that humans may be reading their emails. Instead, both say their user agreements cover the practice. In other words, as far as these companies are concerned, user agreements don’t need to spell out that people are giving a third party access to their emails:
And yet Google claims that it reviews the privacy policies of all apps to make sure they are clear:
“The company checks the domain name of the sender to look for anyone who has a history of abusing Google policies, and reads the privacy policies to make sure they are clear.”
Notice how Google assures us that it’s checking the domains of the emails sent by app developers asking for access to this data, searching for anyone with a history of abusing Google policies. On the one hand, it’s good to hear that they are at least doing that. But if that’s the extent of the ‘vetting’, it’s not much vetting.
Google also claims that it vets all outside developers who are going to get access to this kind of data and only grants access for users who explicitly gave the developer permission to read their emails. So Google is taking the line that users are granting permission for this, despite everything we’re hearing about users not actually granting explicit permission. It’s another parallel with the Facebook scandal:
And note the implicit acknowledgment of trust issues with app developers: Google assures us that businesses have the option of restricting their employees’ Gmail accounts to trusted apps only. While that might have been intended to be reassuring, it sure sounds like Google acknowledging the validity of these trust issues:
So what’s to stop these third-party app developers from taking the data they collect and reselling it? Well, in Google’s case it at least does prohibit app developers from exposing a user’s private data to anyone else “without explicit opt-in consent from that user” and bars app developers from making permanent copies of user data and storing them in a database. But much like what we saw with Facebook and its similar policies, it’s not actually enforced:
And note the interesting timing of Google promoting access to this kind of data for app developers: It apparently started in 2014. Recall that 2014 is the same year Facebook began restricting the user data access it had been granting developers since 2007. So you have to wonder if Google’s looser 2014 policy was actually intended to attract app developers bristling from the new Facebook policies:
“Meanwhile, Google in 2014 started promoting Gmail as a platform for developers to leverage the contents of users’ email to develop apps for such productivity tasks as scheduling meetings. A new Gmail version launched this spring adds a link next to inboxes to a curated menu of 34 add-ons, including one that offers to track users’ outgoing emails to report whether recipients open them.”
Wow, that sure seems like a dirty open secret.
So how many email apps with this kind of data access are available for people today? Google won’t disclose that. Shouldn’t there be a list they provide somewhere? But on Apple and Android app stores the number more than doubled over the past five years, from 142 to 379:
And that number is only going to keep growing, because almost any company can sign up to be one of these developers. Including one-person companies:
And note how concentrated this practice appears to be. We just saw that there were 379 apps with this kind of email access on the Apple and Android (Google) app stores last year. And Return Path, the company with 2 million users that checks to see if you’ve clicked on a client’s marketing emails, has partnerships with 163 apps. That puts the company, which few people have ever heard of, on track to have access to a majority of the people who use email apps. Yikes:
And that’s the latest ‘WTF Big Tech?!’ story. At least Facebook got to be a bystander on this one. By no means an innocent bystander, but still a bystander by virtue of the fact that Facebook doesn’t offer email services. Congrats to Facebook.
Following up on the creepy Wall Street Journal story about email app developers for popular email services like Gmail and Yahoo gaining access to people’s inboxes with minimal disclosure (ostensibly just to develop new features and fix bugs), here’s an article about another reason humans might be interacting with your app data and a whole lot of other data: pseudo-AI.
What’s pseudo-AI? Well, it’s basically an AI service that’s actually humans and AI working together, often with the humans acting as the primary actor. And as the following article makes clear, while pseudo-AI is nothing new and is appropriate for some applications, many companies are offering services that they advertise as fully AI-driven but that are actually pseudo-AI with humans involved. In some cases, companies are using pseudo-AI to fool investors into thinking they’ve already developed an AI-powered product. In other cases, companies claim to be rolling out human-powered ‘AI’ services as part of the initial development of a product, like human-powered ‘chatbot’ services, with the intent of making them fully AI-run later. As one person put it, “It’s essentially prototyping the AI with human beings”, while others perhaps more accurately put it as “fake it till you make it”. And in virtually all of these cases the end-user is left with the impression that they’re purely interacting with a computer.
This is, of course, all highly relevant to the scandal over third-party developer access to personal data because one of the primary assurances companies give to end users is that it’s primarily just AIs that are viewing your personal data and humans only come into the loop to fix technical issues. Which obviously isn’t going to be the case if those AI services are secretly pseudo-AIs:
“It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.”
Yes, it is indeed hard to build a service powered by artificial intelligence. Much harder than just hiring a bunch of humans to fake it, apparently.
That said, it is true that ‘pseudo-AI’ could be useful for developing a new product, allowing companies to learn more about how people will actually use an AI service before making the potentially massive investment in developing the AI:
Or, as was the case with Edison Software (one of the email app developers profiled in the Wall Street Journal article), the ‘pseudo-AI’ might involve periodic human interaction (with personal information hopefully redacted) to develop or improve a feature. In other words, while it’s generally accepted that an AI gets better the more data you feed it, there’s also a less acknowledged requirement in many cases for continual human refinement of the algorithms to make them really work or improve, and that potentially creates a permanent need for humans to be interacting with the data that’s assumed to be handled by AI systems. And that’s going to be true from tech giants like Google down to one-man app development operations. Someone is going to have to be looking over the data if products are going to get better. It’s non-ideal from a privacy standpoint, but that’s reality:
But human software developers periodically looking at user data is just one example of ‘pseudo-AI’, and the far more understandable one from a technical standpoint. The far more egregious version is the ‘AI’ service that is essentially powered by large numbers of poorly paid humans. Like Spinvox, a company that allegedly used AI to convert voicemails into text messages but actually just used cheap overseas labor:
And since faking AI services with cheap labor is actually profitable, it should come as no surprise to learn that companies have already been caught using Amazon’s Mechanical Turk crowdsourced labor service to power their pseudo-AI services. The Mechanical Turk service is perfect for pseudo-AI. It’s also worth recalling that Cambridge Analytica used Amazon’s Mechanical Turk to pay the people who took the psychological survey used to test and calibrate its personalized micro-targeting services for the Trump campaign, highlighting how Mechanical Turk and other crowdsourced ‘micro-task’ labor platforms can be used both to power the AI of pseudo-AI services and to pay individuals to fuel the AI with data-rich test sets from large numbers of people. That’s what Cambridge Analytica did, and it seems like a likely growth area for Amazon:
And, of course, Facebook has already been caught using pseudo-AI for its Messenger service virtual assistant helper-chatbot. Like digital Soylent Green, Facebook’s ‘virtual assistant’ was people:
And as scandalous as this all should be from a data-privacy perspective, it’s also potentially quite scandalous from a business standpoint, because users aren’t the only ones potentially getting scammed. Investors looking to invest in companies with functional AI products are ripe for the picking too. Now that AI with human-like qualities exists, a fascinating new area of investor scams has emerged: using humans to fake cutting-edge AI for potential investors:
And note one of the other potential applications of pseudo-AI: presenting a human-powered service as AI-driven specifically in order to elicit greater honesty from users. Like the Woebot psychological support service that secretly used humans. As research has shown, people are more likely to open up about some kinds of medical issues, like psychological problems that carry a stigma, if they think they are talking to a computer. And that’s merely one example of how maintaining the pretense of something being AI-driven with no human involvement can be advantageous:
And we can’t forget that this is more than just a privacy concern. It’s a labor market concern too, since the whole micro-task labor market is already demonstrably bad for working conditions. And tracking the growth of the pseudo-AI micro-task market is going to be complicated by the fact that 21 Inc (the company dedicated to making money by giving away free things, like toasters, that are hooked up to the internet and mine bitcoins for 21 Inc when plugged in, and potentially spy on people) has already created a micro-tasking platform that pays people in bitcoins (tiny fractions of a bitcoin per task). Payment in bitcoins for micro-tasks is the perfect recipe for a stealth labor market, especially when so much of the labor is being sourced internationally over the internet.
And for those kinds of pseudo-AI services that don’t quite fit into the ‘micro-task’ category and require humans to have something closer to normal employment, there’s also the new contract-employment service being created by Amazon that will undoubtedly prove useful in the pseudo-AI employment realm.
Also keep in mind that the wages these pseudo-AI jobs are going to pay will almost ensure the employees will have to be on welfare, because that’s how the micro-task labor dynamics work. So we have to ask whether the welfare state will effectively end up subsidizing the pseudo-AI industry, making it even more economical to ‘fake it till you make it’. And we have to recall the proposal put forward by a pair of GOP policy wonks to use micro-tasks as the employment market for people facing work requirements for public welfare programs. The plan didn’t just make micro-tasks available for meeting welfare work requirements; it was intended to foster the growth of a large permanent pool of people available for micro-task work (online and offline), with job training programs that would help people move up the economic ladder within that micro-task labor market. So micro-tasks for welfare is already on the GOP menu.
And while the micro-task labor market is global in nature, a lot of pseudo-AI services might be best done by people living in the same country as the unwitting pseudo-AI service user. There’s an interesting trade-off for those users: if they are asking the pseudo-AI about something that requires knowledge of their local area, they are going to have to unwittingly disclose their personal information to someone who lives nearby. So we have to keep in mind that the growth of the pseudo-AI market could very easily spark growing demand from companies for a large, cheap, domestic pseudo-AI micro-task labor force, which, in turn, will make people on welfare a potentially precious commercial commodity if the micro-task labor pool becomes the ‘default’ expectation under new safety-net work requirements. It’s sadly easy to imagine the US powering its fake AI services with people on welfare. With the way things are going in the US that seems almost inevitable. And for other countries too.
So get ready for a future of fake AI powered by the poor and people on welfare. And, yes, this means the humans who will be seeing the personal data you think is being handed over to an AI will be the working poor and people on welfare. Your local waitress making poverty wages, living in public housing and surviving on food stamps, might also be the magic behind the new AI ‘virtual assistant’ chatbot app you disclosed your personal secrets to last week. At least that’s going to be the case if employing poor people as secret AIs maximizes profits. Another gift to the public from the growing ‘gig’ economy.
Another interesting dynamic introduced by the pseudo-AI commercial space is that it’s potentially going to make extremely life-like AI services suspicious. AIs won’t want to seem too real, because that might seem fake. The uncanny valley could be the commercial sweet spot. And as AIs become more and more conversational and rich, there’s going to be more and more demand for humans pretending to be AIs providing rich, nuanced commentary. The Turing test will become fascinating. The humans secretly powering that kind of ‘AI’ will have a fascinatingly tricky line to walk.
So as we can see, if you find yourself interacting with a new ‘virtual assistant’ and it appears to be passing the Turing test with flying colors, you probably shouldn’t be surprised and might not want to get too chatty about the personal stuff.
Uh oh, there’s another Cambridge Analytica offshoot. Auspex. It’s a thing.
Oh, wait, Auspex will have an ethical streak! No worries!
Those are the assurances we’re getting from Ahmad Al Khatib, one of the primary figures behind Auspex. Recall that Al Khatib was the mystery investor in “Emerdata”, the other new offshoot of Cambridge Analytica. Emerdata’s board of directors included Al Khatib along with Steve Bannon, Robert and Rebekah Mercer, Alexander Nix, and Johnson Chun Shun Ko (a business partner of Erik Prince). So Auspex is an ostensibly different Cambridge Analytica offshoot involving Al Khatib.
And Mark Turnbull, the former managing director of CA Political Global, is also part of Auspex.
Recall that Turnbull was one of the figures caught on film in an undercover Channel 4 investigation, in which Turnbull, Alexander Nix (Cambridge Analytica’s CEO at the time), and Alex Taylor (Cambridge Analytica’s chief data officer) met with an undercover investigative journalist posing as a fixer for a client working to get candidates elected in Sri Lanka. It was during that meeting that Nix made a sales pitch that included sending Ukrainian women to entrap a political opponent in a compromising situation and recording it, as an example of the kinds of dirty tricks operations Cambridge Analytica offered. Nix also offered to spread outright lies as a service. When describing the scenario involving Ukrainian women, Nix started the pitch by suggesting they send in Turnbull posing as a wealthy developer looking to exchange campaign finance for land. Turnbull remarked, “I’m a master of disguise.”
So Turnbull is apparently willing to go undercover in their dirty tricks operations. But he wants to assure everyone that Auspex is going to have a strong ethical streak.
In Turnbull’s defense, he only joined Cambridge Analytica and SCL in May 2016, to work on election campaigns. He spent the previous 16 years working at Bell Pottinger, a London public relations firm. Bell Pottinger was sanctioned in 2017 for stoking racial tensions in South Africa on behalf of its clients (the wealthy Gupta family, patrons of president Jacob Zuma), following the release of a report about “state capture” by the Guptas. The firm lobbied for the release of Pinochet in 1998, and its client list includes Belarus’s Alexander Lukashenko and FW de Klerk, when he ran against Nelson Mandela for president of South Africa. It’s generally known as the kind of PR firm dictators and corrupt regimes can turn to for a professional image makeover (it seems like the perfect place for Paul Manafort).
When asked about Cambridge Analytica’s work hacking a political opponent of a Nigerian client, Turnbull claimed he didn’t know about it. When asked if he has any remorse about the various criminal acts Cambridge Analytica is accused of, Turnbull assured us, “I’ve never been in a position where I’ve felt really, genuinely morally compromised.” So the guy who bragged about being a “master of disguise” in the undercover video wants to assure us that he’s never felt genuinely morally compromised at all and that his new company Auspex is going to have a strong ethical streak.
Ahmad Al Khatib, who only technically joined the Cambridge Analytica crew in January of this year when he joined Emerdata’s board of directors, also assures us that he asked all his colleagues about the allegations of abuses by Cambridge Analytica that he read about and they all assured him that there was no wrongdoing and he took their word for it.
So we have “master of disguise” Mark Turnbull, who denies any wrongdoing ever despite being caught on tape recently, and Ahmad Al Khatib, who appears to be incapable of seeing wrongdoing, assuring everyone that Auspex is nothing to worry about and actually on an altruistic mission:
“Would you trust anyone caught up in one of the biggest privacy debacles in the history of the internet with your data? Ahmad Al Khatib, a 29-year-old Egyptian-born entrepreneur who had front-row seats to the public shaming and downfall of Cambridge Analytica, thinks you should. He’s just launched Auspex International with a cadre of ex-Cambridge Analytica folk. It does much the same work, applying data analysis to PR for its clientele, including political campaigns. But Auspex will have an ethical streak, says Al Khatib, who’s giving Forbes the first in-depth interview on his big venture.”
An ethical version of Cambridge Analytica brought to you by the people that brought you Cambridge Analytica. That’s more or less the pitch from Ahmad Al Khatib. And his partner Mark Turnbull, the “master of disguise” who will play roles in dirty tricks operations that involve using Ukrainian women to implicate an opponent and spreading outright lies:
Al Khatib also wants to assure us that Auspex will be an altruistic, interventionist force. It’ll apply big data to PR strategy just like Cambridge Analytica, but it’ll do so without any dirty tactics. In other words, it will be Bizarro Cambridge Analytica:
And that “altruistic, interventionist force” Al Khatib assures us Auspex will be is going to be co-directed by Mark Turnbull. Bizarro Cambridge Analytica is bizarre on multiple levels:
Turnbull joined Cambridge Analytica in 2016 after 18 years at the Bell Pottinger public relations firm, where his work included the post-war Iraq election, followed by antiterror and counter-radicalization projects. Turnbull portrays Auspex as being on an “altruistic mission” focused on creating an informed populace. That sure sounds like a propagandistic way of describing propaganda services:
And both Turnbull and Al Khatib insist there will be minimal danger of privacy violations, because they will only work with client data and will ask that the data owners be informed of how their personal information is used. Ask, not demand. So they’re somehow going to offer Cambridge Analytica-style services that rely on big data from numerous sources, but they’re merely going to ask that individuals be informed about how their data is being used. And they’ll only be using client data. That’s the laughable sales pitch:
And when questions come up about the past crimes and abuses Cambridge Analytica is accused of, both play dumb and innocent. Extremely innocent in Turnbull’s case:
Al Khatib, whose Emerdata partners include many of the people accused of wrongdoing at Cambridge Analytica, assures us he’s been assured by them that nothing untoward was going on:
And back in July, a Medium post noted that Al Khatib is a citizen of the Seychelles and raised questions about whether he was involved in the whole Erik Prince/UAE/Russia/Seychelles mystery meeting. It highlights just how much of a man of mystery he is. Al Khatib is pumping money into both Emerdata and Auspex, but it’s unclear if he’s a front for a coalition of Gulf backers:
So it looks like the world had better get ready for Bizarro Cambridge Analytica and its alleged altruistic mission of creating a more informed populace using data gathered with consent. Unless the “master of disguise” and his ‘see no evil’ partner are lying, in which case the world had better get ready for another Cambridge Analytica. A more aggressively farcical Cambridge Analytica.
Welp, Facebook had another privacy breach. But this time it’s different. Different in a much worse way: Facebook just revealed that 50 million users may have had their accounts taken over by unknown actors over the past year. Oh, and the vulnerability potentially also gave the hackers the ability to log into any apps that used a victim’s Facebook account to log in. Yep. And it’s an actual hack, unlike the rest of the privacy breaches, which were caused by Facebook basically giving the data away to app developers.
How did the hack work? It sounds like it was the result of separate bugs introduced with Facebook’s video uploader in July of 2017. First, Facebook created a “View As” feature that lets you view your Facebook profile as someone else; you can select the specific person you want to view your profile as. One bug caused the video uploader to show up when using the View As feature.
A second bug caused the erroneously displayed video uploader to create an “access token”, which is basically like a cookie stored on your computer that’s used to avoid having to log in again to Facebook or to any apps that use your Facebook login. Except the access tokens created by the video uploader were tokens for the person you were viewing your account as. In other words, if you used the View As feature to view your profile the way the “John Doe2” account sees it, the video uploader bug would create an access token for John Doe2’s account on your computer. And that access token would allow the hackers to log into John Doe2’s Facebook account and potentially any other app online that happens to use John Doe2’s Facebook account for login.
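To see why minting a token for the wrong user amounts to an account takeover, here’s a minimal sketch of how bearer-style access tokens generally work. All names here (`issue_token`, `whoami`, the `john_doe2` user) are hypothetical illustrations, not Facebook’s actual implementation:

```python
# Minimal sketch of bearer tokens: the server maps an opaque string to a
# user, and whoever presents that string IS that user as far as the API is
# concerned. Hypothetical names throughout; not Facebook's real code.

import secrets

TOKENS = {}  # token -> user_id (server-side session store)

def issue_token(user_id):
    """Mint a bearer token so a feature (e.g. an uploader) can act for user_id."""
    token = secrets.token_hex(16)
    TOKENS[token] = user_id
    return token

def whoami(token):
    """Every API call authenticates purely by the token presented."""
    return TOKENS.get(token)

# The bug, in miniature: while rendering a profile "as" john_doe2, the buggy
# uploader mints a token for the viewed-as user instead of the logged-in
# account owner...
buggy_token = issue_token("john_doe2")

# ...and hands it to the account owner's browser. Anyone who grabs that
# token can now act as john_doe2 on Facebook, and on any app that accepts
# Facebook Login, with no password needed.
assert whoami(buggy_token) == "john_doe2"
```

The design point is that a bearer token carries no proof of who is presenting it, which is why leaking one is equivalent to leaking a logged-in session.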
So this is an unusually big deal for Facebook users. But how big a deal it is remains unclear. For instance, while Facebook assures us that no credit card information was obtained by the hackers, they aren’t giving the same assurances about hackers reading your private messages. Facebook is simply saying at this point that they think it’s unlikely that private messages were accessed, which is an answer that implies the hackers potentially could have done so. And if they could have done so, it’s hard to imagine they didn’t. Why wouldn’t they? That’s potentially the most valuable reservoir of information to steal.
And note that while Facebook says in the following article that it hasn’t observed any accounts being improperly accessed, the company also acknowledges that it only came across this vulnerability when it observed an unusual spike in activity on September 16 of this year. And that sure sounds like an acknowledgment that they were observing at least one hacker exploiting this vulnerability. And since Facebook reset the logins of 50 million directly impacted users and another 40 million users ‘just to be safe’, it sure sounds like the company is aware of activity that might indicate hacker activity for those 50 million users and the potential for hacker activity against another 40 million accounts.
So, for now, the extent of the impact remains ambiguous; it just looks like it’s going to be really bad in the long run. We don’t know for sure yet. As we’ve seen from the steady drip of increasingly bad disclosures about the Cambridge Analytica scandal, the official statements from Facebook will likely remain ambiguous for as long as possible but get steadily less ambiguous as the full extent of the impact is slowly revealed to the public one bad revelation at a time. But, for now, we’re at the early phase of this scandal, so the official story is that Facebook found a vulnerability that might be really, really, really bad, but they haven’t confirmed that yet:
“Facebook says at least 50 million users’ data were confirmed at risk after attackers exploited a vulnerability that allowed them access to personal data. The company also preventively secured 40 million additional accounts out of an abundance of caution.”
50 million user accounts “at risk” and another 40 million secured out of “an abundance of caution”. But no confirmed hacks. It’s a curious admission. Like, aren’t the 40 million accounts implicitly “at risk” too? What differentiates those accounts from the 50 million accounts? At this point we have no idea.
And note the contradictory language: Facebook says it hasn’t seen any accounts compromised and improperly accessed and yet Mark Zuckerberg literally refers to hackers who were using this exploit:
Also note how Facebook explicitly says that credit card information couldn’t have been stolen using this hack, but when it comes to the private messages Facebook merely says that it thinks it’s “unlikely” that private messages were accessed. That sure sounds like private messages could have been accessed:
And this vulnerability has been in place since July of 2017 and wasn’t discovered until September 16, 2018. And it was only discovered by Facebook observing a spike in unusual activity, which basically means they caught someone using this exploit in a BIG way. There’s no reason to assume this wasn’t being used in a much more targeted manner (that doesn’t create a spike) before that:
The hack itself appears to be the result of a series of bugs that resulted in other users’ access tokens (login cookies) getting generated. Bugs that ironically are part of the “View As” feature, which is intended to give users more control over their privacy:
And those access tokens don’t just give access to Facebook accounts. They give access to any other app that uses the Facebook login system. That’s what makes this such a massive breach. The impact is almost impossible to assess because it’s basically a hack of one of the biggest login systems used on the internet. Any app that offered a Facebook login option was vulnerable from July 2017 until a few days ago:
So is Facebook going to be punished by US and EU regulators and face new regulations after a massive privacy f#ck like this? At this point that’s a big maybe:
But let’s not forget the one group that could deal the greatest punishment to Facebook: Facebook’s users, who can just delete their accounts at this point:
And whether or not you decide to delete your account, it seems clear at this point that the widespread assumption that a company as big as Facebook would be able to avoid a vulnerability of this nature is an erroneous assumption. Facebook just experienced a hack that didn’t just leave Facebook accounts open to anyone but also the accounts of all sorts of third-party apps all across the web. That’s like the worst hack ever by some measures. And it happened because Facebook didn’t catch bugs. Bugs that should have been easy to spot, like the video uploader showing up in the “View As” mode. How did Facebook not catch that for over a year? It’s not like it was a subtle bug that’s hard to spot. It’s kind of amazing. But it happened. And now we get to spend the next year getting updates from Facebook about how this hack was actually worse than they earlier disclosed.
On the plus side, the Cambridge Analytica privacy nightmare doesn’t seem quite as bad anymore, relatively speaking.
With the 2018 US mid-terms less than a month away, and the GOP appearing to have arrived at a strategy of stoking right-wing rage over the growing right-wing conspiracy theory that all of the accusations of sexual harassment and assault against Brett Kavanaugh were all made up and part of a liberal plot, it’s pretty clear that an entity like Cambridge Analytica with experience in promoting conspiracy theories to targeted audiences and stoking outrage would be very useful for the GOP in this final stretch. So it’s worth noting that it was reported back in June that the GOP appears to have already hired a Cambridge Analytica-like entity to do work for it in the 2018 mid-terms. Specifically, it hired Data Propria, which employs four ex-Cambridge Analytica employees, including Cambridge Analytica’s chief data scientist. Cambridge Analytica’s former head of product, Matt Oczkowski, leads Data Propria. Oczkowski led the Cambridge Analytica team that worked for Trump’s 2016 campaign.
Brad Parscale, the head of Trump’s 2016 digital campaign, is an owner of Data Propria’s parent company, Cloud Commerce. Parscale has already signed on to lead Trump’s 2020 digital campaign efforts and is currently working for the GOP on the mid-terms. So Data Propria is an exceptionally well-connected company just in terms of who is behind it.
As the following article also makes clear, Data Propria is clearly very interested in hiding from the public the nature of the work it’s doing and the fact that it’s working for the GOP at all. Oczkowski denied any links to the Trump campaign, but acknowledged that Data Propria is doing 2018 campaign work for the Republican National Committee. But AP reporters actually overheard conversations between Oczkowski and a prospective client held in public where Oczkowski talked about how he and Parscale were already working on Trump’s 2020 campaign.
Oczkowski also previously told the AP that Data Propria had no intention of seeking political clients. But after the AP told him that their reporters overheard him directly discussing campaign work, Oczkowski claimed the company had changed course and that whatever he’d said about the 2020 campaign would have been speculative. So he appears to continue to deny Trump’s 2020 campaign work even after getting caught talking about it, which isn’t particularly surprising given how bad it looks for Trump to continue working with Cambridge Analytica after everything that happened in 2016.
But you also have to wonder if the reason they want to avoid any connection to Trump’s 2020 campaign is because they have plans for some very dirty tactics. And that’s not just a medium-term concern about 2020. Because if those dirty tactics are going to be employed in 2020, they’re probably honing them in the 2018 elections. Right now:
“The AP confirmed that at least four former Cambridge Analytica employees are affiliated with Data Propria, a new company specializing in voter and consumer targeting work similar to Cambridge Analytica’s efforts before its collapse. The company’s former head of product, Matt Oczkowski, leads the new firm, which also includes Cambridge Analytica’s former chief data scientist.”
Meet the new political data analytics nightmare company, same as the old political data analytics nightmare company.
And while Data Propria acknowledges doing work for the RNC in 2018, they curiously deny having a contract to work on Trump’s 2020 campaign:
But those denials proved to be lies. Of course. The AP’s reporters overheard Oczkowski bragging about “doing the president’s work for 2020” with Brad Parscale. And this was anonymously confirmed by someone familiar with the company. But when confronted with the fact that an AP reporter overheard him talking about that 2020 contract, Oczkowski acted like the company had changed its mind and that talk of 2020 contracts was purely speculative:
So we’ve just learned about the existence of Data Propria and the company is already engaging in blatantly false denials that can be easily disproven. Again, meet the new political data analytics nightmare company, same as the old political data analytics nightmare company.
And just to be clear, it sounds like the services offered by Data Propria include the “psychography” service of creating psychometric profiles on individuals and micro-targeting them with ads based on that profile. But, of course, Oczkowski denies this too:
As we should have also expected, it appears that Data Propria has potentially flouted US election laws. Specifically, Federal election law bars foreign nationals from “directing, controlling or directly or indirectly participating in the decision-making process” of U.S. campaigns. Cambridge Analytica’s chief data scientist, David Wilkinson, is now working for Data Propria. He also happens to be a British citizen. So if Wilkinson is working for the GOP, that’s potentially a violation of the law. So, of course, Oczkowski is claiming that Wilkinson is actually just a contractor who won’t be doing any work on the US campaigns:
In addition, there are questions about whether or not Data Propria is acting as a back-door way for Brad Parscale to earn even more money from the Trump campaign. Potentially A LOT of money. It’s described as an unusual situation for a presidential incumbent to direct large sums of money to outside firms controlled by the campaign managers because presidents can generally drive a harder bargain than that. In other words, Trump is apparently allowing Parscale to turn the 2020 campaign into a personal enrichment project. Which is unusual, but also very Trumpian, so maybe it’s not that unusual in this context:
Keep in mind that if Data Propria is structured in a way that could financially benefit Parscale at the expense of the Trump campaign, we should probably be asking who else is getting a cut. Is Parscale’s digital marketing empire going to double as a way for Trumpers to turn campaign contributions into personal profits? Who knows, but the fact that Cloud Commerce, Data Propria’s parent company, has a rather shady background doesn’t dissuade suspicions. For instance, the former CEO pleaded guilty to stock fraud in 2008 but stayed with the company until 2015:
And the current CEO of Cloud Commerce, Andrew Van Noy, appears to have faked his professional biography:
Again, how Trumpian.
So Data Propria has all the markings of a shady operation run by shady people. Shady people with an expertise in manipulative mass propaganda. And it’s currently using that expertise for the GOP’s benefit in the 2018 mid-terms. Mid-terms where the GOP is poised to spend the final month spreading conspiracy theories designed to stoke right-wing outrage and practicing for 2020.
Remember the lawsuit brought against Facebook by the app developer Six4Three? That was the lawsuit alleging that, contrary to Facebook’s denials, the exploitation of the “friends permissions” policies — which allowed Cambridge Analytica’s personality quiz app to collect large amounts of information on the 87 million Facebook ‘friends’ of the ~300 thousand people who actually downloaded the app — was actually aggressively pushed on app developers as a means of enticing them to make apps for Facebook. Six4Three further alleges that Facebook then ran a kind of shakedown operation on the most successful apps, looking for ways to extract more fees from the successful app developers and using the threat of cutting off access to all of that ‘friends permissions’ data in the negotiations. Six4Three goes on to allege that during the period of 2014–2015, when Facebook started phasing out the ‘friends permissions’ option, the top Facebook executives themselves were involved in looking at which app developers they wanted to target with the threat of cutting off access. Finally, Six4Three charges that Mark Zuckerberg himself was directly involved with this scheme. In all, Six4Three claims 40,000 app developer companies were targeted by Facebook with threats of cutting off data access unless they allowed Facebook to co-opt them. In other words, this lawsuit is a potential nightmare for Facebook and an even bigger nightmare for Mark Zuckerberg.
And it sounds like that public relations nightmare has indeed gotten bigger, thanks to the UK legal system. Or may have gotten bigger. We don’t know yet. What we do know is that the UK parliament has just gotten its hands on a number of internal Facebook documents that the company does not want released or discussed. The documents were actually seized by Six4Three during the legal discovery process of their lawsuit. That lawsuit is taking place in the US legal system and the documents in question are subject to an order of a Californian superior court and cannot be shared or made public, at risk of being found in contempt of court.
So how did the UK parliament get its hands on Facebook documents held by Six4Three that were under a California court order to remain sealed from the public? Well, Damian Collins, the chair of the culture, media and sport select committee in the UK parliament, invoked a rare parliamentary mechanism to compel the founder of Six4Three to hand over the documents during a business trip to London. The parliament actually sent a serjeant at arms to his hotel with a final warning and a two-hour deadline to comply with its order. The founder still failed to hand over the documents and was escorted to parliament, where he turned them over under threat of fines and imprisonment.
Why was the UK parliament so interested in these particular documents? Well, in their lawsuit against Facebook, Six4Three alleges that the documents more or less demonstrate that Facebook was not only aware of the ‘friends permissions’ loophole used by Cambridge Analytica but actively promoting it to developers as an incentive. So while it’s unclear if the documents are expected to contain evidence of all of Six4Three’s charges against Facebook, they’re at least expected to contain evidence of some of those charges. At this point we have no idea what exactly is in the documents, but now that the UK parliament has its hands on this cache of documents we might get an idea sooner or later. And it’s already very clear that Facebook isn’t happy about any of this, and that might be, in part, because it’s claimed that the documents included correspondences with Mark Zuckerberg. So it’s entirely possible that the UK parliament now has its hands on evidence that Zuckerberg was not just fully aware of the policies that led to the Cambridge Analytica data privacy nightmare but was actively using those policies to lead some sort of app developer shakedown system:
“Facebook, which has lost more than $100bn in value since March when the Observer exposed how Cambridge Analytica had harvested data from 87m US users, faces another potential PR crisis. It is believed the documents will lay out how user data decisions were made in the years before the Cambridge Analytica breach, including what Zuckerberg and senior executives knew.”
Documents revealing what Mark Zuckerberg and other senior executives knew. That sounds like a big “uh oh!” for more people than just Mark Zuckerberg, but it’s still the biggest “uh oh” for Zuckerberg.
And it’s so ominous specifically because of what Six4Three alleges: that Facebook was actively using the offer of ‘friends permissions’ data on unsuspecting Facebook users as a means of first enticing developers into making apps for Facebook and then threatening those developers with cutting off access to that data unless they gave Facebook a bigger cut of their revenues. So the fact that UK parliament got its hands on this data is a pretty big nightmare for Facebook, but it’s really just an extension of the existing nightmare for Facebook of that Six4Three lawsuit. It’s a reflection of how successful Six4Three’s legal threat has been thus far:
And now that the UK parliament has these documents, it’s unclear what, if anything, Facebook can do about it. They may have run out of legal tricks:
And that lack of further legal options, in turn, raises the question of what Facebook’s answers are going to be to the future requests to have Mark Zuckerberg testify in front of the parliament. Is Facebook still going to refuse to have Zuckerberg testify if it turns out those seized documents are indeed filled with confirmations of all of Six4Three’s scandalous allegations, including confirmations of Zuckerberg’s personal involvement in a shakedown scheme exploiting user data?
It raises a generic question about the future of Mark Zuckerberg at the company. Uber had to ditch its founder Travis Kalanick after it became obvious that he had an amoral public image. Will ditching Zuckerberg be the price Facebook ends up having to pay to earn the public’s trust back? That would depend on how extensively Zuckerberg himself was involved in Facebook’s breaches of trust and how deep those breaches were in the end. And that’s all part of what makes this story so ominous for Facebook and especially Zuckerberg: the public doesn’t yet know how deep the breaches of trust were and who at Facebook was involved with that breach of trust. But the UK parliament might now know. And that just might include now knowing about a scheme, personally orchestrated by Mark Zuckerberg, involving the exploitation of user data to shake down app developers.
Christopher Wylie, the former head of research at Cambridge Analytica who became one of the key insider whistle-blowers about how Cambridge Analytica operated and the extent of Facebook’s knowledge about it, gave an interview last month to Campaign Magazine about his thoughts on artificial intelligence and the risk big data and AI pose to human creativity and the suppression of good ideas. It’s an interesting interview for people interested in the potential impact of artificial intelligence on human civilization, with Wylie concluding that regulations on the use of artificial intelligence are going to be required.
But there are a few points Wylie makes about the psychological warfare techniques used by Cambridge Analytica that seem like the kind of thing that everyone should be interested in. Because, of course, Wylie was talking about psychological warfare techniques used by Cambridge Analytica on the public.
Specifically, Wylie recounts how, as director of research at Cambridge Analytica, his original role was to determine how the company could use the information warfare techniques used by SCL Group — Cambridge Analytica’s parent company and a defense contractor providing psy op services for the British military. Wylie’s job was to adapt the psychological warfare strategies that SCL had been using on the battlefield to the online space.
It was the use of military psy op techniques on the general public that Wylie says started giving him second thoughts about his work. As Wylie put it, “When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person because they are an enemy...But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.”
Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency. It starts with targeting people more prone to having erratic traits, paranoia or conspiratorial thinking, and getting them to “like” a group on social media. The information you’re feeding this target audience may or may not be real. The important thing is that it’s content that they already agree with so that “it feels good to see that information”. Keep in mind that one of the goals of the ‘psychographic profiling’ that Cambridge Analytica conducted was to identify traits like neuroticism.
Wylie goes on to describe the next step in this insurgency-building technique: keep building up the interest in the social media group that you’re directing this target audience towards until it hits around 1,000–2,000 people. Then set up a real life event dedicated to the chosen disinformation topic in some local area and try to get as many of your target audience to show up. Even if only 5 percent of them show up, that’s still 50–100 people converging on some local coffee shop or whatever. The people meet each other in real life and start talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”. This target audience starts believing that no one else is talking about this stuff because “they don’t want you to know what the truth is”. As Wylie puts it, “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”
Wylie goes on to make an important distinction between the kind of targeted digital advertising used by the Barack Obama presidential campaigns vs the Trump campaign. Wylie worked with Obama’s former national director of targeting so he really does have the kind of experience to make this comparison. And according to Wylie, the two campaigns took very different approaches. “When the Obama campaign put out information, it was clear it was a campaign ad, and the messaging, within the realm of politics, was honest and genuine. The Obama campaign did not use coercive, manipulative disinformation as the basis of its campaign, full stop. So, it’s a false equivalency and people who say that [it is equivalent] don’t really understand what they’re talking about,” as Wylie put it, which, of course, is a reminder that the Trump campaign did in fact use manipulative disinformation as the basis of its campaign. Those were the services Cambridge Analytica was providing.
So based on Wylie’s recounting of his experiences as the head of research for Cambridge Analytica, it appears that the insurgency-building techniques used by the military that rely on pumping out disinformation to audiences identified as neurotic and conspiratorial-minded are great for political campaigns. It seems like the kind of thing everyone should know. Especially the neurotic and conspiratorial-minded:
“He believes that poor use of data is killing good ideas. And that, unless effective regulation is enacted, society’s worship of algorithms, unchecked data capture and use, and the likely spread of AI to all parts of our lives is causing us to sleepwalk into a bleak future.”
The future is indeed bleak. That’s the view from the inside of the Psy Op AI Big Data complex. A Psy Op AI Big Data complex in the process of taking the psychological warfare lessons from the battlefield and porting them over to the digital domain for commercial and political use, as Christopher Wylie would know. Porting those battlefield lessons used by SCL Group over to the digital domain for military use was his first job as Cambridge Analytica’s research director:
But then the job shifted to a new focus that started giving Wylie second thoughts: offering those newly developed digital psy op techniques to political clients. As Wylie saw it, this shift was like “treating voters in the same way as you are treating terrorists”:
So as we can see, Wylie was in an unusually good position to recognize how Cambridge Analytica was utilizing psychological warfare techniques on the US and UK populaces: his job was literally to first develop online psy op techniques for military clients and then his job became using those same techniques for political clients.
And as Wylie describes, Cambridge Analytica was adapting the disinformation techniques used to build insurgencies to change political attitudes. And the first step in that political insurgency-building campaign was the identification of a target audience selected based on psychological traits like erratic behavior, paranoia or conspiratorial thinking. This target audience is then propagandized on social media until the group grows large enough to try and arrange for real life meetups.
And if that disinformation propaganda campaign succeeds in getting enough people to ‘like’ a group’s social media page, the next step is to get the target people to meet up in real life. And for those that do meet up, their belief in the disinformation campaign is deepened, along with a sense that the disinformation is truth being intentionally censored by ‘them’. In other words, it’s a system for radicalizing a target audience with disinformation and turning them into genuine political insurgents:
“What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”
As we can see, Christopher Wylie had some very understandable reasons to become uncomfortable with his job. And note how he didn’t have the same feelings about his time working for Obama’s digital campaign that also used Big Data. Because Obama was using that information for things like identifying unregistered voters and encouraging them to register, not to psychologically profile people to micro-target them with an array of disinformation campaigns:
And note Wylie’s observation about how personality traits can be revealed by the kinds of questions people are asked all the time by the marketing industry. Questions like whether or not you’re a big fan of Justin Bieber. Answer enough seemingly innocuous questions and a psychological profile on you can be built.
And that potential ability to infer psychological profile traits based on answers to consumer preference questions like whether or not you’re a Belieber is important to keep in mind in the context of these revelations about Cambridge Analytica. Because recall how Cambridge Analytica’s ability to psychologically profile Facebook users centered around the strategy of paying ~270,000 Facebook users to download a psychological testing app developed by Aleksandr Kogan and then using the “friends permissions” feature Facebook offered app developers to get detailed profiles on the ~87 million ‘friends’ of those ~270,000 people. Cambridge Analytica then used that group of ~270,000 psychological profiles as a training data set to develop algorithms that could infer psychological traits based on the rest of the profile data Facebook was giving out through the ‘friends permissions’ app developer loophole. Data like the pages users ‘liked’, which was apparently available to app developers for those ~87 million people. So inferring psychological traits using seemingly unrelated pieces of data on someone is something Wylie has experience with.
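The reported approach — train on the minority of profiles with measured trait scores, then infer traits for everyone else from their ‘likes’ alone — can be sketched in a few lines. The data, the pages, and the method here (a simple per-page average) are all invented for illustration; Cambridge Analytica’s actual models were presumably far more sophisticated.

```python
# Toy sketch of trait inference from page 'likes': learn from a small
# labeled training set, then score unlabeled profiles. All data and the
# averaging method are hypothetical, for illustration only.
from collections import defaultdict

# Training set: (pages liked, measured trait score from the quiz, 0..1)
training = [
    ({"conspiracy_weekly", "late_night_forums"}, 0.9),
    ({"gardening_tips", "cat_pictures"}, 0.2),
    ({"conspiracy_weekly", "cat_pictures"}, 0.7),
]

# "Learn" an average trait score for each liked page
totals, counts = defaultdict(float), defaultdict(int)
for likes, score in training:
    for page in likes:
        totals[page] += score
        counts[page] += 1
page_score = {p: totals[p] / counts[p] for p in totals}


def infer_trait(likes):
    """Predict the trait for an unlabeled profile from its likes alone;
    fall back to a neutral 0.5 when no liked page was seen in training."""
    known = [page_score[p] for p in likes if p in page_score]
    return sum(known) / len(known) if known else 0.5


# A 'friend' who never took the quiz, profiled from likes alone:
print(round(infer_trait({"conspiracy_weekly", "late_night_forums"}), 2))
```

The structural point survives even in this toy version: once the labeled minority has been used to score the pages themselves, every profile whose ‘likes’ are visible becomes scoreable, quiz or no quiz.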
So while it might seem like replicating what Cambridge Analytica did would be difficult because Cambridge Analytica had psychological profiling data that most other organizations wouldn’t have, don’t forget that inferring psychological profiles is already happening. The age of personalized propaganda is here.
Finally, note how Wylie reiterates that Facebook was very aware of what Cambridge Analytica was up to, something we’ve seen evidence of over and over as this scandal has unfolded:
So if you’re wondering how many similar mass disinformation influence operations using insurgency-building techniques are operating on Facebook, the fact that Facebook had no problem with Cambridge Analytica’s mass disinformation political insurgency-building techniques operating on its platform points towards plenty of other mass disinformation political insurgency-building campaigns going on right now. Disinformation campaigns targeting the most neurotic and paranoid people that can be identified. Digital disinformation campaigns designed to get people meeting up in real life and mutually reinforcing a belief in the disinformation.
And given the modern Republican Party has been descending into an Alex Jonesian-far right John Birch-style disinfotainment paradigm for years now, there’s no reason we shouldn’t expect the application of these kinds of weaponized psychological warfare techniques to grow in coming years. Especially since, again, Facebook seems fine with this. Alex Jones really is the heart and soul of the modern American conservative movement at the grassroots level these days and that makes the application of disinformation campaigns targeting American conservative audiences a basic necessity of day to day politics in America. There’s plenty of disinformation targeting all American audiences as is, but the modern media targeting right-wing Americans is exceptionally conspiratorial in recent years and that shows no sign of abatement in the age of Trump.
The digital disinformation techniques for making an idea go viral and organize people in the real world are techniques that are only going to get more and more refined. Especially as the ability to infer psychological profiles on all of us becomes more refined and we all become much easier to predict.
It’s all one big reason why it’s going to be important for Americans in particular to realize that one of the consequences of the unlimited spending in politics in modern America in the post-Citizens United era is that Americans are living in a sea of right-wing propaganda. Everyone is a target of that propaganda machine, but conservative audiences are especially targeted by the disinformation campaigns. The story of Cambridge Analytica is just one of the stories about how that propaganda machine operates using targeted disinformation campaigns. And the story told by Christopher Wylie is a story of that right-wing propaganda machine employing military-grade psychological warfare techniques centered on using disinformation on targeted audiences, especially conservative audiences. It’s an absolutely vital story for Americans today, especially conservative Americans who are the key targets of these psychological warfare campaigns.
The kind of disinformation campaign described by Wylie is the kind of campaign that comes in the form of propaganda telling you that anyone who doesn’t support the propaganda is actually part of a conspiracy against you, and that it’s the truth being censored by ‘Them’. And that’s a key characteristic of contemporary right-wing media content: selling audiences on information that’s supposedly being censored by ‘Them’, where ‘They’ are a supposedly all-powerful liberal establishment of academia and Hollywood. It’s a narrative that’s absolutely perfect for exactly the kind of disinformation campaign Wylie described. The Fox News/Alex Jonesification of the American right-wing has an eerie resemblance to a giant disinformation campaign because that’s what it is. And Trump is this right-wing disinformation campaign’s God King. That’s why public awareness of sophisticated disinformation campaign techniques is so important these days. The identification of disinformation campaigns is a necessary basic civic skill for democracy to function. We are being unwittingly propagandized. It’s the sad truth. But while ‘They’ do indeed effectively censor important stories, not every story on Facebook claiming to be too truthy to handle is actually true. That basic truth has sadly become exceptionally important in contemporary America. Especially for right-wing conspiracy-minded Americans who are being exceptionally targeted by insurgency-building psychological warfare disinformation campaigns right now by a lot more actors than just Cambridge Analytica.
There was more news about the inner workings of Facebook recently as a result of the lawsuit by Facebook app developer Six4Three. It was, of course, bad news. Bad for Facebook and bad for Facebook’s users.
First recall how Six4Three charged Facebook with threatening to cut off access to user data for app developers as part of a scheme to crush its competition and extract money from the app developers. Also recall how the internal Facebook documents that Six4Three obtained as part of its lawsuit were recently obtained by British MPs.
Well, those internal documents were just released by the British parliament. And, surprise!, it turns out they’re filled with internal Facebook discussions that basically confirm people’s worst assumptions about how the company viewed its trove of personal data. The revelations include:
* Back in 2012, Facebook debated changing its rules for app developers in order to extract more money from them. The idea was that developers would have to pay Facebook for the user data that they were already extracting (as we’ve learned from the Cambridge Analytica scandal). But developers wouldn’t directly pay for access to that data as long as they were making money for Facebook in other ways, specifically through the revenue-sharing model Facebook had with developers and also from developers buying Facebook ads. So the idea would be that the revenues shared with Facebook and ads purchased on Facebook would count towards this fee, with the expectation that the vast majority of app developers wouldn’t end up having to pay anything for maintaining access to the user data. In this way, Facebook could effectively sell user data without directly selling it.
* Facebook really was treating app developers very differently depending on the perceived competitive threat they posed. For example, companies like Airbnb, Lyft and Netflix were getting whitelisted for special access to user data at the same time Vine, an app developed by Twitter and seen as a potential competitor to Facebook’s functions, got its access to user data cut off.
* Remember the stories about Facebook quietly grabbing call and text logs off of smartphones running the Android operating system? Well, it turns out Facebook was debating whether or not to bypass Android’s permissions system so this data could be grabbed while giving users as little notice as possible.
And that’s just some of what was revealed in the documents released by the UK parliament:
“As expected, the UK Parliament has released a set of internal Facebook emails that were seized as part of its investigation into the company’s data-privacy practices. The 250-page document, which includes conversations between Facebook CEO Mark Zuckerberg and other high-level executives, is a window into the social media giant’s ruthless thinking from 2012 to 2015 — a period of time when it was growing (and collecting user data) at an unstoppable rate. While Facebook was white-listing companies like Airbnb, Lyft and Netflix to get special access to people’s information in 2013, it went out of its way to block competitors such as Vine from using its tools.”
Yep, this released cache of Facebook emails included conversations between Mark Zuckerberg and other high-level executives. This wasn’t just the musings of lower-level employees. And those high-level communications include Zuckerberg himself approving of a strategy to cut off access to user data for apps that Facebook viewed as potential competitors:
We also have an email from Zuckerberg in 2012 where he expresses skepticism that there’s really much of a risk in giving app developers access to so much information about Facebook users. Risks like having that data leak out to the world. It’s an example of how wildly cavalier Facebook was with handing over so much user data to all sorts of different app developer companies: Zuckerberg apparently couldn’t imagine that data leaks — like the leaking of the data collected by Aleksandr Kogan’s psychological profiling app to Cambridge Analytica for use in political campaigns — would actually happen:
And around the same time Zuckerberg was dismissing the dangers of data leaks, he was contemplating setting up a scheme where app developers would effectively pay for continued access to that data, but with indirect payments in the form of forcing these developers to buy Facebook ads:
“So instead of every [developer] paying us directly, they’d just use our payments or ads products.” That’s a Zuckerberg quote to keep in mind the next time you hear Facebook claiming that it doesn’t sell user data.
And then there were the discussions about bypassing the Android permissions screen that would ask for access to smartphone call and text logs, a massive potential privacy violation. Note how Facebook’s primary concern was that it was “a pretty high risk thing to do from a PR perspective”:
But Facebook wants to assure us that “we’ve never sold people’s data”:
So given Facebook’s response that, sure, it thought about selling user data but didn’t actually do it, it’s worth keeping in mind that the arrangement Facebook already had in place, where app developers could get access to all of this precious user data in exchange for developing apps for Facebook, was itself a form of selling the data. The ‘payment’ for the data came in the form of developing the app which, in turn, made people more likely to use Facebook.
It’s also worth noting how much Facebook was thinking about charging for access to this data: $250,000 spent on Facebook ads:
“Facebook staffers explored how to use access to Facebook users’ data to get companies to spend more on advertising. In 2012, Facebook staffers debated removing restrictions on user data for developers who spent $250,000 or more on ads.”
So that gives us an idea of how much Facebook thought its user data would be worth to app developers. $250,000...in the form of Facebook ad spending so no one could accuse Facebook of directly selling user data.
And true to form, Mark Zuckerberg responded to the release of these documents showing Facebook staffers and Zuckerberg himself considering this pay-to-play scheme by assuring the world that Facebook never even considered selling the data of its users:
Then there’s Facebook’s use of its Onavo security app to spy on which apps people use in order to determine which companies Facebook should buy. So the Facebook security app doubled as spyware:
Finally, the released documents include references to Facebook giving some app developers for big companies (Airbnb, Lyft, Netflix, etc) preferential access to user data:
So what was that preferential access to user data provided to this select group of app developers? Well, according to the following article, it included giving them extended access to ‘friends’ user data in 2015 after most other apps had been cut off and additional information on those ‘friends’ including phone numbers:
“Facebook maintained secret deals with a handful of companies, allowing them to gain “special access to user records,” long after it cut off most developers’ access to such user data back in 2015, according to a new Friday report by the Wall Street Journal, citing court documents it did not publish and other unnamed sources.”
So we are told that these select companies were given “special access to user records” long after it was cut off to most developers back in 2015. How long this special access lasted is unclear, and Facebook insists they only got short-term extensions.
So we have Facebook flatly contradicting this report, leaving us with a ‘should we believe Facebook or the journalists?’ decision to make. Keep in mind that Facebook’s pattern for virtually all of these scandals has been to deny them until they become undeniable, so it looks like we might have another one of those situations with this ‘company white-list’ story.
And it sounds like this story could end up getting a lot worse for Facebook as we learn more. Because while we don’t know the full scope of the “special access” these developers were given to user data, it sounds like it at least includes the phone numbers of the Facebook ‘friends’ of app users:
So what other “additional information about a user’s Facebook friends” were these companies given access to? At this point we don’t know. But it’s worth noting one of the handy things having those phone numbers would have allowed these ‘white-listed’ app developers to do: target these ‘friends’ with Facebook ads.
Recall how the 2016 Trump campaign made extensive use of Facebook’s “Custom Audiences” ad feature, where advertisers could feed Facebook a list of email addresses or phone numbers for the purpose of directly targeting Facebook users with ads. So by handing over information like friends’ phone numbers, Facebook was making it much easier for these companies to directly target the friends of app users with Facebook ads. In other words, Facebook was likely handing over user data for the purpose of selling more Facebook ads.
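To make concrete how a phone number becomes an ad target, here is a minimal sketch of the advertiser side of Custom Audiences-style matching. Facebook’s documented scheme matches on SHA-256 hashes of normalized identifiers; the normalization below is a simplified stand-in for the platform’s actual rules, and the sample numbers are invented.

```python
import hashlib

def normalize_phone(raw: str) -> str:
    """Keep digits only. This is a simplified stand-in for the platform's
    real normalization rules (which also cover country codes, etc.)."""
    return "".join(ch for ch in raw if ch.isdigit())

def hash_identifier(value: str) -> str:
    """SHA-256 hex digest of a normalized identifier, the form the
    Custom Audiences matching system works with."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def build_audience(phone_numbers):
    """Turn a raw contact list into the hashed list an advertiser uploads;
    the platform then matches the hashes against its own users."""
    return [hash_identifier(normalize_phone(p)) for p in phone_numbers]

# Invented sample contacts, e.g. friends' numbers obtained from an app:
audience = build_audience(["+1 (555) 123-4567", "555.987.6543"])
```

The point of the hashing is that the advertiser never has to hand over readable phone numbers, but the match still lands on the same real people, which is why having friends’ phone numbers at all is the sensitive part.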
But rest assured, Facebook never ever sold user data. ;)
Now that Facebook has been outed as being the Grinch of data privacy in 2018, the company presumably didn’t have the merriest of Christmases this year. If you ignore all the profits. So it’s perhaps fitting that what is arguably the worst of the Facebook data privacy revelations of 2018 came last: The New York Times reported last week about previously undisclosed arrangements Facebook has had with around 150 major corporations. The arrangements started in 2010 and many continue today.
Critically, these data sharing arrangements were defined by Facebook internally to not fall under the 2011 consent agreement Facebook signed with the Federal Trade Commission (FTC) to not disclose user data without getting user permissions first.
The 2011 consent agreement was made by Facebook after it was discovered that Facebook was handing out user data without users’ consent following some policy changes Facebook made in 2009. Facebook called this 2009 privacy change “instant personalization.” It affected about 400 million users and made some of their information accessible to the entire internet. That 2009 change also involved sharing additional information, like users’ locations and religious and political leanings, with Microsoft and other partners. As we’re going to see, Microsoft admits it was building profiles of Facebook users.
But the most scandalous of the new revelations involves a relative handful of companies: it turns out companies like Netflix, Spotify, and Royal Bank of Canada were given read/write/delete privileges to Facebook users’ private messages. The ostensible reason for this access was so Facebook’s messaging functionality could be incorporated into third party apps. Adding to the scandal is that Netflix and Spotify were apparently given access to private messages long after they stopped using that feature in their apps.
Recall that this isn’t the first time we’ve learned about Facebook giving third-parties access to private messages. Blackberry was apparently scooping up private messages as part of the permissions it got as a device maker. And app developers were also potentially given this permission, including the developer of the Cambridge Analytica app. So the access to private messages given to third-parties isn’t a new revelation, but it’s clearly still a growing revelation as these scandals continue to trickle out.
We’re also learning more about how Facebook was able to engage in this kind of behavior despite signing a consent decree with the FTC in 2011: the auditing of Facebook’s privacy policies was basically outsourced to PricewaterhouseCoopers, Facebook largely dictated the terms of the audits, and the only thing PricewaterhouseCoopers did was verify that Facebook told it that it was internally policing its privacy policies.
So we’re learning that Facebook was forced to sign a 2011 consent decree with the FTC following its “instant personalization” scandal of 2009. The decree mandated that Facebook had to get user permissions before handing out user data. And Facebook got around this decree by defining large numbers of companies as “extensions of Facebook” that, for some reason, therefore don’t require user permissions before user data is shared with them. This included giving some of these large companies access to information like private messages. And in some cases these companies were given this access to private messages long after they stopped offering the features in their apps that used them. At this point we should expect new bad news about Facebook, but even by Facebook standards this was pretty awful:
“In all, the deals described in the documents benefited more than 150 companies — most of them tech businesses, including online retailers and entertainment sites, but also automakers and media organizations. Their applications sought the data of hundreds of millions of people a month, the records show. The deals, the oldest of which date to 2010, were all active in 2017. Some were still in effect this year.”
So starting in 2010, Facebook worked out deals involving the data of hundreds of millions of people a month with more than 150 companies, with some of those deals still in effect today.
What exactly was the nature of those data sharing arrangements? Well, since Facebook is clearly determined to reveal as little as possible and deny as much as possible, we have to rely on whatever sources of information we can get. In this case, the New York Times somehow got its hands on the 2017 records of Facebook’s internal system for tracking these data sharing agreements, giving us the best glimpse so far:
It’s one of the grand ironies of the situation: there’s clearly a massive data privacy disaster that’s been going on for years, but Facebook’s secrecy regarding its data sharing policies is preventing us from knowing the full scope of that disaster.
And yet Facebook assures us that none of these data sharing agreements violated user privacy at all and, more importantly for Facebook, that none of these arrangements violated the 2011 consent agreement Facebook signed with the US Federal Trade Commission (FTC) after previous privacy violations were discovered. That’s the consent agreement Facebook had to sign after it suddenly changed the privacy settings of 400 million users in 2009, made some of their information accessible to the entire internet, and started sharing information like user locations and religious and political leanings with companies like Microsoft. Facebook dubbed this scheme “instant personalization”, and touted it under the banner of making the internet experience better. But after the FTC concluded this policy change was a deceptive practice, Facebook agreed to sign the consent decree, agreeing to introduce a “comprehensive privacy program” that involved reviewing new products and features. Facebook also hired PricewaterhouseCoopers to audit its privacy policies every two years. So the current privacy disaster, which started in 2010, is an ongoing privacy disaster that Facebook started shortly before the 2011 consent agreement Facebook was forced to enter into after its 2009 “instant personalization” privacy disaster:
And, of course, the “comprehensive privacy program” that Facebook agreed to as part of the 2011 consent agreement was basically a farce. There was immediate resistance from within Facebook. But, more importantly, Facebook found a loophole. A farcically subjective loophole, but a loophole nonetheless: Facebook’s executives convinced themselves that these ‘special relationships’ with +150 companies didn’t fall under the comprehensive privacy program because they were already governed by business contracts requiring them to follow Facebook’s data policies. The fact that Facebook’s data policies are the very problem the “comprehensive privacy program” was supposed to address in the first place doesn’t appear to have been an issue for these Facebook executives. And Facebook assures us that the privacy team was indeed reviewing these special relationships. But the level of review “depended on the specific partnership and the time it was created.” So Facebook was determining on a case-by-case basis what kind of privacy reviews the privacy team could make, which, at this point, implies that the special agreements involving the biggest privacy violations probably got the weakest reviews:
Keep in mind that Facebook started these special arrangements in 2010, before the 2011 consent agreement. So it would be interesting to know if the special agreements made in 2010 got the weakest privacy reviews despite involving the greatest privacy violations.
We also learn that this special exemption from the “comprehensive privacy program” for these special arrangements includes the special data sharing arrangements Facebook had with the 60+ device makers that we learned about this summer, where Facebook justified these special data sharing arrangements by arguing that the device makers are “extensions of Facebook”. The exemption from the comprehensive privacy program also appears to exempt Facebook from having to get user consent before sharing the data. So the special arrangement the device makers got from Facebook where they were treated as extensions of Facebook wasn’t limited to device makers. Device makers were just one example of the kinds of businesses that Facebook considered an extension of itself. It’s all very special:
As we should expect, data privacy experts disagree with Facebook’s cavalier interpretation of what can fall outside of its consent agreement with the FTC. And those experts include former employees of the FTC’s consumer protection division:
“No one should trust Facebook until they change their business model.” Good advice.
Beyond that, we learn that Facebook allowed Apple to hide from Facebook users the fact that their Apple devices were asking for data, and that Apple devices were still sharing contact numbers and calendar entries with Facebook for people who had changed their account settings to disable all sharing. It highlights how this isn’t just a Facebook scandal. All of those partners, especially the device manufacturers that hid the sharing of this data, are also implicated in this:
And as we’ve learned over and over in these Facebook data sharing scandals, while Facebook claimed to have overhauled its data sharing policies in 2014, the exception was the rule in the sense that a large number of the biggest companies were getting exceptions to those rule changes. In addition to phasing out the “friends permissions” feature in 2014 — the feature at the heart of the Cambridge Analytica scandal that allowed the company to scoop up profile data on ~87 million Facebook users who were “friends” with the ~270,000 people who actually downloaded the Cambridge Analytica app — the “instant personalization” feature was also phased out that year. Except it wasn’t phased out for companies like Pandora, Rotten Tomatoes, Sony, Amazon, Yahoo, and Microsoft’s search engine Bing. And Microsoft admits that Bing was using this data to build profiles of Facebook users, which is a clear admission that this information wasn’t simply being used to provide some sort of service for Facebook users:
And while Microsoft was given ongoing access to the names and email addresses of Facebook users’ friends, far more scandalous is the revelation that Netflix, Spotify, and Royal Bank of Canada were given the ability to read Facebook users’ private messages, which is arguably the biggest Facebook privacy violation we know of so far. The justification for this wild privacy violation was that these private messaging features could be incorporated into these companies’ own websites and apps, and yet Netflix and Royal Bank of Canada were given this access to private messages even after they stopped offering that private messaging feature:
So Facebook was literally giving large companies access to read Facebook users’ private messages. For years. Even after these companies stopped offering the features that ostensibly used that data. It’s kind of amazing Facebook even considered this move, let alone executed it.
But don’t forget that we’ve already learned that Blackberry was apparently scooping up private messages as part of the permissions it got as a device maker. And app developers were also potentially given this permission, including the developer of the Cambridge Analytica app. This is just the latest update in an ongoing private messages scandal.
But then we learn that Facebook wasn’t just giving away data to all of these large companies. Facebook was also collecting data from them and using that data for creepy features like the “People You May Know” friends suggestion feature:
So what is the FTC going to do about this ever growing scandal? Well, that’s unclear because it appears that the FTC isn’t actually directly involved in the oversight of Facebook’s data privacy policies! That task has been effectively outsourced to PricewaterhouseCoopers. And Facebook pays for and largely dictates the scope of these audits, which are limited mostly to documenting that Facebook has conducted the internal privacy reviews it claims it has. So Facebook signs a consent decree in 2011 with the FTC, and the enforcement of that decree largely came down to Facebook paying PricewaterhouseCoopers to confirm that Facebook confirmed it was doing these internal privacy reviews. And, of course, the key lesson we’ve learned from this report is that Facebook internally decided that the special data sharing arrangements with all of these large companies weren’t actually subject to the 2011 consent agreement. So it’s not just a scandal involving Facebook and its various data sharing partners. The FTC and PricewaterhouseCoopers are also part of the scandal:
So how many of these special arrangements are still in place today? We don’t get to know and Facebook isn’t telling. We are only told that Facebook is currently ‘in the process of winding many of them down’:
Yes, Facebook would like to assure us that this time it’s actually ending these data sharing arrangements. For real! Trust us!
Beyond that, as the following article makes clear, Facebook wants to assure us that these kinds of secret data sharing agreements were actually very clear to users. That’s the explanation Ime Archibong, Facebook’s vice president of product partnerships, gave on the company’s blog following the above report in response to the public outcry over the revelation of companies like Netflix, Spotify, and Royal Bank of Canada being given read/write/delete privileges to private messages. According to Archibong, when Facebook users signed into the services for Netflix or Spotify using their Facebook login they were effectively giving permission for this private message data sharing.
Keep in mind one of the key revelations in the above article: Facebook concluded that it didn’t actually need to get user permissions or even inform them that it was happening for these data sharing agreements because Facebook determined that these third-parties, like device manufacturers, were effectively extensions of Facebook. So now that there’s an outcry, we are told by Facebook that, actually, users were giving their permission for these data sharing arrangements when they used features like the ‘log in with Facebook’ feature.
Also keep in mind that offering the option of signing into an online service using your Facebook login is a popular and nearly ubiquitous feature on the internet these days. The above report covered Facebook’s data sharing arrangements with +150 companies, but there are a lot more than 150 companies that use the ‘log in with Facebook’ option. So if Facebook is quietly bundling all sorts of user permissions into the use of these ‘log in with Facebook’ options, there are probably all sorts of data sharing arrangements with other websites that we have yet to learn about.
Amusingly, Archibong points to the language used by Royal Bank of Canada’s 2013 press release about the new Facebook features being integrated into its app — which had private message read/write/delete privileges — as an example of how it was clear to users that Royal Bank of Canada was going to get access to this kind of information. But as the article points out, that press release language doesn’t mention anything about the need to read and delete Facebook messages, and the words “private” and “privacy” appear nowhere in the press release.
Archibong also refutes the idea that Facebook was “shipping” user private messages to companies, clarifying that it was actually an automated process. Apparently having this system work in an automated way (which is the only way it would realistically work anyway) is supposed to make it ok.
Finally, the article notes that in the above New York Times report, the companies listed as having private message access all deny that they used this access or even knew about it. And yet Archibong asserts that “We worked with them to build messaging integrations into their apps so people could send messages to their Facebook friends.” It’s another reminder that while this is a Facebook-centric scandal, it involves a lot more companies than just Facebook. And the more we learn about how Facebook was using the ‘log in with Facebook’ feature as a means of getting users to agree to data sharing agreements, the more companies this scandal is going to involve:
““In the past day, we’ve been accused of disclosing people’s private messages to partners without their knowledge,” Ime Archibong, Facebook’s vice president of product partnerships, said in a post on the company’s blog. “That’s not true — and we wanted to provide more facts about our messaging partnerships.””
That had to be a fun blog post for Facebook’s vice president of product partnerships to write. And in this blog post, he basically tells Facebook users that they did in fact have knowledge that Facebook was sharing these private messages with these companies. Or at least that you should have known, because you granted those permissions when you signed in to the Spotify, Netflix, Dropbox or Royal Bank apps with your Facebook credentials:
“No third party was reading your private messages, or writing messages to your friends without your permission,” Archibong stressed Wednesday night.
You, the Facebook user, clearly gave your permissions for Spotify, Netflix, Dropbox and Royal Bank of Canada to get access to your private messages. That appears to be the official line coming out of Facebook. And as evidence of how users were made well aware of the private message sharing arrangements, Archibong points to the Royal Bank of Canada’s press release (as if a press release is meaningful for informing users), and yet that press release in no way makes it clear:
Archibong also sets up a kind of straw man argument to shoot down, refuting the idea that Facebook was “shipping over private messages to partners” and insisting that it was actually all automated. As if that’s a meaningful distinction:
And while Royal Bank, Spotify, and Netflix all deny they had any involvement in this scandal, Archibong points out that Facebook “worked with them to build messaging integrations into their apps”:
Don’t forget what we learned in the previous article: Facebook continued giving Spotify and Netflix access to private messages even after they removed those features from their apps.
We also learned that Facebook was receiving information from a number of their data sharing partners.
And those fun facts, combined with the denials by Spotify, Netflix, and Royal Bank, raise an obvious question: How much worse is this scandal going to get? Because it appears that these kinds of massive Facebook data sharing practices have been such an open secret for years. An open secret about how people’s secrets — at least any secrets found in those private messages — have literally been open to a large number of companies for years.
Ironically, while Facebook is trying to place the blame for all of this on users — claiming that users were, in fact, giving their permissions for these kinds of privacy violations — that blame-the-users excuse is actually going to be a somewhat valid excuse going forward. Because at this point, after all of these scandals and Facebook’s denials, obfuscations, and attempts to pin the blame on users, if you’re handing your data over to Facebook you really shouldn’t expect that data to remain private. You’ve been warned. Facebook itself may not have directly warned you like they claim, but you’ve definitely been warned.
Birds of a feather tweet together, and tweet very similarly in a predictable manner. That’s the gist of a new study that adds a new dimension to our understanding of the potential impact of the Cambridge Analytica scandal and the privacy risks involved with social media in general: according to a study conducted by academic researchers, the social media posts of just 8 or 9 of your social media “friends” on Facebook and Twitter can be used to predict with 95 percent accuracy the posts you will make on the social network. Even if you’ve never had a Facebook or Twitter account, they can still predict what you probably would post if you did have an account.
First, recall the system Cambridge Analytica set up to make inferences about the political views of Facebook users: Cambridge Analytica had ~270,000 people download an app that had them take a psychological profiling quiz. Then, using the extensive data on users that Facebook made available to developers, like the posts people “liked” on Facebook, Cambridge Analytica developed an algorithm for predicting psychological profiles based on people’s “likes”. Then, using the “friends permissions” feature that Facebook offered to developers at that time, which allowed app developers to get similar information about “likes” from all of the Facebook ‘friends’ of the people who downloaded an app, Cambridge Analytica obtained the “likes” of all of the friends of those ~270,000 app users, which totaled around ~87 million Facebook users. Cambridge Analytica then used that “friends permission” data to develop a psychological/political profile on those 87 million users. It was one big example of how the actions of your social media ‘friends’ could end up compromising your privacy.
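The profiling pipeline described above, where quiz answers are used to fit a model mapping “likes” to traits that is then applied to scraped friends, can be caricatured in a few lines. Everything here is invented for illustration (the page names, the weights, and the single trait); the real model was a regression fit on the quiz responses of the ~270,000 app users.

```python
# Toy sketch of likes-based profiling. Page names and weights are
# hypothetical; the real Cambridge Analytica model was fit on quiz data.
TRAIT_WEIGHTS = {
    "PageA": 0.8,   # invented page, positively associated with the trait
    "PageB": -0.5,  # invented page, negatively associated
    "PageC": 0.3,
}

def trait_score(likes, weights=TRAIT_WEIGHTS):
    """Linear score over liked pages; pages the model never saw add nothing.
    A real pipeline would pass this through a trained regression layer."""
    return sum(weights.get(page, 0.0) for page in likes)

def profile_friends(app_user_likes, friends_likes):
    """Score the consenting app user AND every scraped friend. This is the
    'friends permissions' multiplier: one app user yields many profiles."""
    scores = {"app_user": trait_score(app_user_likes)}
    for friend, likes in friends_likes.items():
        scores[friend] = trait_score(likes)
    return scores
```

The key takeaway the sketch makes visible is the ratio: the model is trained once on the small consenting group, but `profile_friends` runs over everyone the loophole exposed, which is how ~270,000 quiz-takers became ~87 million profiles.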
But this new research takes privacy risks posed by social media friends to a whole new level. Because it’s not limited to your social media friends. It’s about your real life friends too who might happen to have social media profiles even if you don’t. As long as you have enough real life friends on these social media platforms, entities with information about what 8 or 9 of your friends post will be able to predict what you post and build a profile on your likes, interests and personality on social media. And as the Cambridge Analytica scandal demonstrated, when you know what someone posts and likes you can potentially make educated guesses about their psychology and political views, so this research is showing how even people who stay off of social media can effectively be profiled in a manner similar to what Cambridge Analytica did.
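As a crude illustration of the idea (not the study’s actual method, which estimated predictability via information-theoretic measures over friends’ text), simply pooling the words a handful of friends post already gives a naive guess at a target user’s likely vocabulary:

```python
from collections import Counter

def predict_vocabulary(friends_posts, top_k=3):
    """Naive stand-in for the study's predictor: pool the words that 8 or 9
    friends post and return the most common ones as a guess at what the
    target user would post, whether or not that user has an account."""
    pooled = Counter()
    for post in friends_posts:
        pooled.update(post.lower().split())
    return [word for word, _ in pooled.most_common(top_k)]

# Invented sample posts from a target's friends:
guess = predict_vocabulary([
    "the rally was great",
    "great rally downtown",
    "great turnout today",
])
```

Even this toy version shows why opting out of social media doesn’t opt you out of profiling: the signal lives in your friends’ output, not yours.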
So if you were hoping that not having a Facebook or Twitter account will eliminate the risk of having Cambridge Analytica-style psychological profiling done on you, think again. Because as long as enough of your friends are on Facebook or Twitter, tweeting and ‘liking’ away, you can still be profiled by the company you keep:
“As careful as you are online, the study suggests that you’re only as private as your friends have been.”
You’re only as private as your friends have been. More specifically, you’re only as private as your least private 8 or 9 friends have been. Even if you’ve never been on Facebook or Twitter. And according to these researchers, they could build a profile on your likes, interests and personality on social media. Or predict it if you aren’t on social media yet:
And as the Cambridge Analytica story demonstrated, we are already in a world where the micro-targeting of the masses with disinformation campaigns fueled by profiles of people’s likes, interests and personality on social media is a reality. So if these researchers can do this, we shouldn’t assume they’re the only ones trying to do this kind of stuff.
Of course, if companies don’t know who your online friends are, they won’t be able to make these kinds of inferences. But as we’ve seen before, the collection of personal data profiles on individuals is so ubiquitous that Facebook even has “shadow profiles” on non-Facebook users. Shadow profiles that Facebook generates using information from the vast data-brokerage industry that also has various types of profiles on all of us. “Shadow profiles” Facebook was developing, in part, with its policy of grabbing people’s smartphone contact lists whenever people downloaded the Facebook mobile app. And Facebook obviously isn’t the only company developing “shadow profiles”. There are all sorts of profiles on all of us for sale, especially in the US where this industry is barely regulated. So one question raised by this research is the extent to which those shadow profiles for sale in the data-brokerage industry include information like the identities of your real-life friends:
So do those “shadow profiles” available on all of us in the data-brokerage industry include information like who your real-life friends are online? If so, Facebook, and any other entity with access to information about your friends’ social media activity, can apply the same approach these researchers used to make all sorts of educated guesses about you, whether you use Facebook or not.
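The contact-list mechanism described above is worth making concrete. Here’s a minimal sketch of how a non-user’s social circle can be reconstructed purely from the address books that app users upload — purely illustrative, with invented names, and not a claim about Facebook’s actual internal pipeline:

```python
# Minimal sketch: assembling a "shadow" social circle for someone who
# never signed up, from contact lists uploaded by people who did.
from collections import defaultdict

def build_shadow_graph(uploads):
    """uploads: {app_user: list of names in their phone contacts}.
    Returns {person: set of app users who have them as a contact},
    including people who never created an account themselves."""
    shadow = defaultdict(set)
    for app_user, contacts in uploads.items():
        for contact in contacts:
            shadow[contact].add(app_user)
    return dict(shadow)

# Hypothetical uploads: "dana" has no account, but three users do.
uploads = {
    "alice": ["bob", "dana"],
    "bob":   ["alice", "dana"],
    "carol": ["dana"],
}
shadow = build_shadow_graph(uploads)
print(sorted(shadow["dana"]))  # dana's inferred circle: ['alice', 'bob', 'carol']
```

Note that "dana" contributed nothing: the entire inferred friend circle comes from other people’s decisions to upload their contacts, which is exactly the dynamic the researchers are highlighting.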
And when Facebook claims that, “If you aren’t a Facebook user, we can’t identify you based on this information, or use it to learn who you are,” keep in mind that they were responding to the revelations that Facebook tracks internet users across the web at every site that uses Facebook’s apps. And while it may or may not be true that Facebook lacks the capability to identify the non-Facebook users it was tracking across the web, that’s no reason to assume the information Facebook was collecting on non-Facebook users across the web isn’t really useful for identifying those users when combined with other information. Like the third-party information on you that Facebook, and any other company, can buy commercially in the giant data-brokerage industry.
This is also a good time to remind ourselves about another one of the many Facebook scandals to emerge last year: the revelation that Facebook was giving over 60 device makers — companies like Apple, Amazon, BlackBerry, Microsoft and Samsung — access to extensive data about Facebook users, including lists of all of your Facebook friends if you use Facebook on one of their devices. Some device makers were given the ability to retrieve data like Facebook users’ relationship status, religion, political leaning and upcoming events. And BlackBerry was given access to private Facebook direct messages and second-degree Facebook friend lists. So we shouldn’t be surprised if the maker of your smartphone has enough Facebook data to make educated guesses about the views of their customers (the people who buy their devices) and their non-customer friends. For example, Samsung can presumably use the Facebook user data from this device-maker data-sharing arrangement to build predictive profiles about not just its Facebook-using customers but all of their friends too, whether they use Facebook or not and whether they own a Samsung device or not. Samsung would just have to somehow determine who those non-Facebook-using real-life friends of their customers are, which is presumably the kind of information available in the large third-party data-brokerage industry.
Also keep in mind that plenty of other privacy-violating technologies we’re learning about gather the kind of data that could be used to make educated guesses about the identities of your real-life friends. Remember all those stories about Google secretly using very precise location-tracking techniques with Android smartphones? And how about the third-party market for the cellphone location data that cellphone companies in the US have been making available? Those seem like the kinds of technologies that will be really useful for making guesses about real-world friends. And what kind of friend-detecting capabilities will Soli — the radar technology for everyday devices that Google is working on, with the ability to map the objects in a room — make available to device makers?
And, of course, Google and all the other email providers can just read our emails and learn all sorts of information about our friends. And then there’s all the information Google and other search engine providers collect. The number of data points collected in our daily lives is getting to the point where avoiding the mass collection of the identities of our personal friends would require not having personal friends.
But while Facebook is by no means unique in this commercial data-profile ecosystem, it’s worth noting that Facebook does play a uniquely central role. Whether it’s the device makers or app developers, Facebook has made the acquisition of personal profiles and networks of relationship connections easier than ever for commercial entities. As this academic study demonstrated, relationship data really is useful when combined with a database of personal profile data, and Facebook provides both. It’s one of Facebook’s more ironic legacies: turning our friendships and sharing into a liability.
So when Facebook tries to explain away one scandal after another by proclaiming that the company merely wants to get everyone ‘connected’, keep in mind this study and how potentially lucrative and powerful those ‘connections’ really are to Facebook and all the third-party entities Facebook is sharing that data with.
Here’s a pair of articles that point towards an area where the mass deployment of facial recognition technology, AI, and the public’s legal protections against abuses of those technologies all collide in interesting and important ways. First, here’s an article from last month about Google winning a lawsuit in Illinois, which happens to be the state with the strongest public protections against abuses of biometric technology, so it was a big win for Google from a legal-precedent standpoint. Texas and Washington are the only other states with laws regulating the use of biometric data by private companies, and only Illinois allows people to sue for damages. This is all due to the Illinois Biometric Information Privacy Act (BIPA), passed in 2008.
Texas and Washington are the only other states with laws regulating how private companies may use biometric data, but Illinois is still the only state that authorizes statutory damages for violations. So the legal precedents from Illinois’s legal battles over consumer biometric privacy protections have limited application at the moment, since almost all states just rely on federal law. But this is an area where legal challenges are inevitably going to come in the future, so all of these cases coming out of Illinois are going to be something to watch.
The lawsuit against Google that was dismissed last month relied on the fact that BIPA bans the collection and storage of biometric data without someone’s consent. And that includes “faceprinting” someone with facial recognition technology. “Faceprinting” involves collecting the biometric data on an individual that can be used to identify them in images and video via facial recognition. The woman who brought the lawsuit made the point that she was being “faceprinted” by Google, without her permission and without ever signing up for Google’s services, as a result of Google’s application of facial recognition technology to the millions of photos uploaded to Google’s cloud-based Google Photos service.
Much to the relief of Google and the rest of the growing number of companies that are using facial recognition technology on the public, the Illinois judge dismissed the case on the grounds that the plaintiff did not suffer “concrete injuries.” Illinois’s BIPA doesn’t appear to apply to the collection of faceprints, at least in the commercial context of people giving photos and videos to services like Google. So a real legal challenge to the unhindered use of facial recognition on the public just got shot down. And that’s a green light to companies not just in Illinois but across the US to proceed with using faceprinting facial recognition technology on the public without fear of legal repercussions. So smile, you’re on a growing number of cameras, and those cameras know who you are and record it in a database:
“Individuals in Illinois who believe their rights under BIPA, the nation’s strongest biometrics privacy law, have been violated can sue for damages.”
Thanks to the 2008 BIPA law, Illinois is a wonderful headache for Big Tech and the only real potential hurdle for the business of biometrics in the United States. So it’s a pretty big deal for the future of mass facial recognition data collection in the US that this Illinois judge threw out this lawsuit. The collection of “face template” catalogs by companies like Google remains unhindered:
And note that one of the other Illinois BIPA lawsuits — a lawsuit against Six Flags involving the use of fingerprint scanning for season pass entry at the park that a woman’s son was offered and used without getting the mom’s permission in advance — was actually just ruled in favor of the mom and may pave the way for class action lawsuits over the collection of fingerprints. So BIPA is still scoring legal victories:
And that’s part of why Google and Facebook are both trying to carve out exemptions in BIPA through amendments that would let them avoid these facial recognition lawsuits. Like the amendment Google and Facebook are pushing that would allow employers to use facial recognition for tracking employees. It’s also why Google backed a 2016 proposed amendment to BIPA that would have made it apply only to scanned physical photographs and not uploaded photos. Until Big Tech is able to get BIPA overturned, crafting loopholes is the next best thing:
So facial recognition on uploaded photos got a thumbs up from Illinois’s courts but fingerprinting without permission got a thumbs down. On net, it’s a big win for business, given that the growing applications of facial recognition far outstrip the applications for fingerprint scanning.
We’ll see what’s next for Illinois’s BIPA lawsuits, but we probably shouldn’t be too surprised if it involves the technology described in the next article: it turns out Walgreens is testing out new ‘smart coolers’ for selling chilled foods. The coolers will be equipped with cameras and will have capabilities like iris-tracking, using facial recognition-like approaches to analyze customers’ faces. But, crucially from a BIPA standpoint, the coolers won’t be trying to identify people. Instead, the smart coolers will analyze customers for demographic data, like age, gender, and race. So it’s going to be a highly invasive, privacy-violating technology for Walgreens customers, whose eyes are literally tracked, but no actual attempt to match customers with faceprint data and identify them will take place, so it technically won’t violate BIPA. That’s what Walgreens is trying out in Illinois: BIPA-compliant mass facial analysis smart coolers for convenience stores:
“Walgreens is piloting a new line of “smart coolers”—fridges equipped with cameras that scan shoppers’ faces and make inferences on their age and gender. On January 14, the company announced its first trial at a store in Chicago in January, and plans to equip stores in New York and San Francisco with the tech.”
Chicago gets to be the new smart-cooler testing ground. Will iris scanning be BIPA-compliant when there’s no attempt to know whose eyes they are? We’ll see:
And while Walgreens doesn’t say it chose Chicago specifically to see if its coolers are BIPA-compliant, that seems likely given that the coolers specifically don’t try to identify people, making them a perfect opportunity for business to see what BIPA will let them get away with:
What kinds of BIPA-related legal challenges will these smart-coolers bring? That’s going to be interesting to see.
But it’s worth keeping in mind that customers routinely identify themselves when they pay with credit and debit cards. And thanks to the data-brokerage industry it’s possible to buy demographic data on all of us. So it’s technically possible for Walgreens to have non-personally-identifying smart coolers and still identify a large number of people after the fact, simply by combining the name and demographic information collected from credit and debit sales with the demographic information anonymously collected by the smart cooler.
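That after-the-fact linkage is a textbook re-identification attack, and it’s simple enough to sketch. The data, field names, and matching rule (same store, same demographic bucket, timestamps within a minute) below are all invented for illustration; this is not a claim about how Walgreens or its vendor actually stores or joins data.

```python
# Hedged sketch: joining "anonymous" smart-cooler observations with
# identified card transactions by store, time window, and demographics.
# All records are invented.

def link_observations(observations, transactions, window_secs=60):
    """observations: dicts with 'store', 'ts', 'age_band', 'gender'
    (no identity). transactions: dicts that additionally carry 'name'
    (from card data plus purchased demographic data).
    Returns a list of (observation, candidate_names) pairs."""
    links = []
    for obs in observations:
        candidates = [
            t["name"] for t in transactions
            if t["store"] == obs["store"]
            and abs(t["ts"] - obs["ts"]) <= window_secs
            and t["age_band"] == obs["age_band"]
            and t["gender"] == obs["gender"]
        ]
        links.append((obs, candidates))
    return links

observations = [
    {"store": "chicago-01", "ts": 1000, "age_band": "30-39", "gender": "F"},
]
transactions = [
    {"store": "chicago-01", "ts": 1030, "name": "J. Doe",
     "age_band": "30-39", "gender": "F"},
    {"store": "chicago-01", "ts": 1015, "name": "A. Roe",
     "age_band": "60-69", "gender": "M"},
]
print(link_observations(observations, transactions)[0][1])  # ['J. Doe']
```

The narrower the time window and the finer the demographic buckets, the fewer candidates survive the filter, which is why "we don’t identify you" claims about sensors say little when an identified data stream sits right next to them.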
Also keep in mind that once people get used to smart coolers that analyze your face but don’t identify you, it’s just a matter of time before companies start using smart coolers that do identify you and serve up some sort of personally customized sales pitch. You know that’s just a matter of time. And the more detailed the personal data file is about you, the more sophisticated and personalized the customized smart-cooler experience can be. Imagine Cambridge Analytica-style personalization that factors in the psychological profile Walgreens has on you.
It’s a reminder that a growing part of what makes the loss of anonymity troubling is that it allows the increasingly detailed and sophisticated personal profiles that exist on all of us in the commercial and government space to be applied to us in real time. The profit potential of customized marketing creates an incentive for the personal databases being collected on all of us to be accessible by everyday objects. Like coolers. Coolers that are probably one day going to be smarter than us. Super-AI coolers that serve up super sophisticated sales pitches. That’s probably going to be a thing someday.
And no one has bigger personal profile databases — our likes and dislikes and tastes, etc. — than Google and Facebook. So it’s worth keeping in mind that the stuff you post on Facebook will probably one day be known by the smart cooler at your local Walgreens and every other Walgreens. And it’s going to use its superior knowledge of you and its super-AI prowess to sell you things (except in Illinois, maybe!). And the way things are going, that future is probably coming sooner than you think. So stay strong and watch out for the specials on ice cream.
Here’s a quick update on Data Propria, the Cambridge Analytica offshoot created by Brad Parscale’s company Cloud Commerce. Recall the reports from back in June about how the GOP was hiring the services of Data Propria for the 2018 mid-terms. Data Propria employs four ex-Cambridge Analytica employees, including Cambridge Analytica’s chief data scientist. Cambridge Analytica’s former head of product, Matt Oczkowski, leads Data Propria. Oczkowski led the Cambridge Analytica team that worked for Trump’s 2016 campaign and was reportedly overheard bragging to a prospective client about how he was already working on Trump’s 2020 campaign (which he subsequently denied). Also recall how Brad Parscale ran the Trump 2016 campaign’s extensive digital operations, which included extensive micro-targeting of individuals outside of the Cambridge Analytica efforts.
So the fact that these ex-Cambridge Analytica employees, including key employees, were hired by a subsidiary of Brad Parscale’s and were allegedly already working on Trump’s 2020 campaign appeared to be something the Trump campaign wanted to keep under wraps. Well, that just got a lot harder to deny following the announcement that Matt Oczkowski is now running Parscale Digital in addition to Data Propria. Recall that Parscale Digital is the rebranded version of Parscale’s old marketing company. As the following article notes, Parscale sold his shares in Parscale Digital in August 2017, at the same time he purchased $9 million in Cloud Commerce stock and took a seat on its board. August 2017 is also the month Parscale Digital was sold to Cloud Commerce. So Parscale is a co-owner of Cloud Commerce, which is the owner of Parscale Digital. And now Matt Oczkowski, the former head of product for Cambridge Analytica, is running Parscale Digital:
“Oczkowski will take on the dual role running Parscale and Data Propria, which are both owned by CloudCommerce.”
Matt Oczkowski, Cambridge Analytica’s former head of product, is turning out to be quite an important member of Trump’s digital campaign. Although we’re told Oczkowski’s position leading Parscale Digital is just in an interim capacity. It will be interesting to see how long that interim turns out to be, given that the 2020 election cycle is already upon us:
And to be clear, Parscale Digital does indeed do political work for the Trump campaign, but it does so as a subcontractor for Parscale Strategy LLC, which is Parscale’s political consulting firm (he’s got a lot of firms, as we can see). According to filings with the SEC and FEC, Donald J. Trump For President, Inc. paid Parscale Strategy LLC more than $3.4 million in 2018 for digital consulting and online advertising, and Parscale Digital invoiced Parscale Strategy for $729,000:
So given the prior attempts by Oczkowski to deny that he was doing any work for the Trump campaign, keep in mind that the work Parscale Digital and Data Propria do for the 2020 Trump campaign will probably be subcontracting work of this nature, which will allow the Trump campaign to deny that it ever directly hired the services of these firms managed and staffed by ex-Cambridge Analytica employees. And yes, those denials are implausible when you actually look at the structure of these companies, but doubling down on implausible deniability is kind of a Trump specialty, so we shouldn’t be too surprised if that’s what happens.
Here’s one of those Cambridge Analytica stories that could have implications going far beyond the Cambridge Analytica scandal. It’s a story about how David Carroll, an American academic, successfully sued Cambridge Analytica’s parent company, SCL, back in 2017 for a copy of all of the data that Cambridge Analytica held about him. And he won. He received a profile, but not the entire profile. SCL refused his request for the complete profile, and in May of 2018 the British government ruled that SCL was required to give Carroll everything it had on him. But SCL continued to refuse, choosing to be fined instead, and Carroll continues to sue. Still, the fact that he won this case at all has potentially big implications. British law mandates that UK citizens have a right to be notified of what information a company holds about them, but the question of whether Americans and anyone else outside the UK also have that right under British law wasn’t really established. Thanks to David Carroll’s successful lawsuit, it appears that, yes, Americans and other non-UK citizens can sue UK companies for that information. The companies might not actually hand over the information, choosing to be fined instead, but at least you can successfully sue for it. And if Carroll wins his ongoing lawsuit against SCL and manages to actually get a complete profile, all of the other 87 million Facebook users that Cambridge Analytica collected profiles on will have a much easier time making such requests for themselves:
“A year ago, Carroll filed a legal claim against the London-based conglomerate, demanding to see what was in his profile. Because, with few exceptions, British data protection laws allow people to request data on them that’s been processed in the UK, Carroll believed that even as an American, he had a right to that information. He just had to prove it.”
Yep, British data protection laws allow people to request data on them that’s been processed in the UK. Does that include all people anywhere in the world? That’s what David Carroll was testing with his lawsuit. And sure enough, shortly before the Cambridge Analytica story erupted in late March of 2018, Carroll learned that the British Information Commissioner’s Office ruled that Carroll’s legal claim against SCL for not handing over all of the data they held on him was valid and SCL was to be fined. It wasn’t a big fine, but it established some important legal precedents, including the precedent that non-UK citizens have a right to this data too. And in doing so, Carroll implicitly highlighted the kinds of rights US citizens currently don’t have in their own country:
It also turns out Carroll has personal experience with another side of the Facebook data privacy nightmare: in 2014, he went on sabbatical and created a startup that developed a Facebook app. And that’s where he learned just how much data Facebook was giving away to app developers, including the “friends permissions” feature exploited by Cambridge Analytica that allowed it to use ~270,000 app users to grab ~87 million detailed profiles. It’s a reminder that the Cambridge Analytica scandal wasn’t actually a secret to the thousands of companies on the receiving end of all that data:
Carroll ended up suing for the data after teaming up with a researcher, Paul-Olivier Dehaye, who was already investigating SCL over its role in the Brexit campaign. Dehaye wanted to see if SCL really had the detailed data profiles it claimed, but he also specifically wanted to test whether British courts would find that non-UK residents have the right to make a data request. So he started reaching out to American academics, and that put him in contact with Carroll:
Initially, in early 2017, SCL appeared to try to placate Carroll by emailing him an Excel spreadsheet of information that included a range of different metrics for Carroll’s political beliefs. But it still seemed incomplete, so Carroll sued, arguing that giving him an incomplete set of the data they held on him represented a breach of UK law:
Dehaye then put Carroll in contact with a British human rights lawyer, Ravi Naik, and in April of 2017 they sent SCL a legal letter stating their case that the Excel file Carroll received was a violation of the law. SCL refused to give additional data. And when asked why SCL refused Carroll’s request, then-CEO Alexander Nix said the company had no legal obligation and was concerned about opening up a “bottomless pit” of requests that the company lacked the resources to deal with:
Carroll even had to deal with a weird intimidation attempt by a Cambridge Analytica employee, and received warnings in the fall of 2017 about a journalist who was investigating SCL and suddenly dropped dead in a stairwell. So if Carroll drops dead between now and when his ongoing lawsuit is resolved, that’s going to be extra suspicious:
Flash forward to March 16, 2018, days before the Cambridge Analytica scandal erupted, and Carroll served SCL with the formal legal claim of his intent to sue. In May the British Information Commissioner’s Office (ICO) ruled in Carroll’s favor, giving SCL 30 days to comply. 30 days went by and SCL got fined. It was a small sum, $27,000, of which $222 went to Carroll. But as a legal precedent it could be quite significant, especially when it comes to giving Americans a taste of what having meaningful data privacy laws feels like:
So it’s going to be interesting to see how the ability to sue UK-based companies impacts the US data privacy regulatory environment, an environment that basically has no regulations. It’s certainly going to complicate the use of UK-based digital psychological warfare firms like SCL and Cambridge Analytica for political dirty tricks operations.
And we can’t forget that we still have no idea how much Cambridge Analytica actually knows about Carroll. All he’s received is an Excel sheet that he was confident was incomplete and SCL has refused to give him the complete profile and the lawsuit is ongoing. You have to wonder just how big that full profile really is. And now that Carroll has established that Americans have a legal right to this data, it’s hard to imagine there aren’t going to be a lot more requests made going forward.
If Cambridge Analytica’s professed nightmare scenario emerges and it gets mass requests for the full data profiles, that also raises the question of how this could change how Americans view themselves when they get what will probably be the first highly detailed marketing profile on themselves. A profile that knows more about you than you do. Imagine if the full profile, which presumably was a compilation of what Cambridge Analytica collected from the Facebook app combined with what it could purchase about people in the large data-brokerage market, is just a really compelling read. Because Cambridge Analytica combined third-party data with its own data, the scandal could represent a valuable way for people to learn what that larger marketing industry knows about them. Will Americans be shocked by the richness of the details known about them or shrug it off? We’ll see, but it’s worth keeping in mind that, thanks to David Carroll’s lawsuit, the Cambridge Analytica scandal now represents a rare opportunity to teach the public some very important lessons about the scale of the personal data-brokerage industry. We don’t know why SCL continues to refuse to give Carroll the full profile it holds on him, but one obvious possibility is that the full scope of what it knows would shock the public, and that’s why they’re refusing. And a shockingly massive and detailed profile on you would probably be a pretty compelling read, so it seems possible we could see a lot of people request their profiles if David Carroll’s suit wins out. Assuming SCL doesn’t destroy the data or find some other excuse for not releasing it.
Given that the Cambridge Analytica scandal in the US has from the beginning been a scandal rooted in the larger scandal of a US data collection industry that operates with few regulatory restraints, it’s perhaps appropriate if the one stakeholder in the data-brokerage industry that normally never gets to see the data, the public, finally gets to see themselves the way the panopticon sees them. Transparency for the panopticon: it may not be the ultimate regulatory solution, but it’s a start.
Cambridge Analytica whistle-blower Christopher Wylie has a new book out about his experiences at the company, and based on the following book excerpt it’s sounding like a must-read. For starters, Wylie describes a scene where his team unveils their core product to Cambridge Analytica’s investors and officers: a database of millions of Americans filled with details about their lives. During this meeting, they bring up records of random people and then proceed to test the accuracy of the data by calling these people on the phone and pretending to conduct a survey. The various investors, including Steve Bannon, all take turns calling random Americans to see if the data in their files about their likes and dislikes matches their answers on the phone.
Another major detail in Wylie’s description of this initial Cambridge Analytica database is that the harvested Facebook data was just one of many data sources in the Cambridge Analytica model. It included all sorts of commercially available databases and state databases, with information like mortgage applications and satellite photos of people’s homes. A cutting-edge, full-spectrum personal profile that included psychological profiles. That’s what Cambridge Analytica offered.
But probably the biggest revelation in this excerpt from Wylie’s book involves Palantir. Because that revelation also involves a desire by the US intelligence agencies to use Palantir to buy Cambridge Analytica’s data as a means of mass-harvesting the kind of information the US government and national security contractors like Palantir aren’t legally allowed to collect. The loophole is that the collection is allowed if the data is freely volunteered by individuals or companies. So the government gets to use the data profiles Facebook was freely offering to everyone else by hiring Palantir to hire Cambridge Analytica to mass-harvest them from Facebook users, and merge them with all the other “freely available” commercial databases. But Cambridge Analytica also included psychological profiles: it used an online test to grab profiles on about a quarter of a million people, then used the “friends permissions” option to boost that to at least 87 million Facebook users whose psychological profiles were inferred by algorithms. That’s a pretty big new detail in the Cambridge Analytica scandal.
On one level, it’s not remotely surprising that the government would use the data Facebook makes available to app developers like Cambridge Analytica, because Facebook was selling that info to everyone else, presumably including other governments. Commercializing that data is Facebook’s ultimate business model, and selling ads is only one part of that commercialization. But on another level, if Facebook acts as a legal loophole that allows government agencies and contractors like Palantir to incorporate that mass-harvested treasure trove of personal data into government databases, that is actually a very big deal. Facebook wouldn’t be alone in offering mass-harvested data profiles, but Facebook is the leader in commercializing them, so it’s effectively providing the cutting edge for the data profiles governments can use on people.
But even Facebook’s cutting-edge profiles would still be just one part of the super-profile databases Cambridge Analytica was building. That’s what was powering this wing of the Trump 2016 propaganda effort: super-databases on people so powerful that Palantir and the government wanted to rent them.
And that’s just the case for the US government. Governments all over the world have probably set up all sorts of Facebook apps to mass-harvest profiles on as many people as possible across the globe. But in this case it sounds like Cambridge Analytica was the company blazing the trail for the US national security state in this kind of deep government profile-building. We can throw that on the pile of Cambridge Analytica’s accomplishments. It’s an example of why we shouldn’t assume we’ve heard the worst of Cambridge Analytica. Each new twist is a new low:
“Jucikas typed in a query, and a list of links popped up. He clicked on one of the many people who went by that name in Nebraska — and there was everything about her, right up on the screen. Here’s her photo, here’s where she works, here’s her house. Here are her kids, this is where they go to school, this is the car she drives. She voted for Mitt Romney in 2012, she loves Katy Perry, she drives an Audi. And not only did we have all her Facebook data, but we were merging it with all the commercial and state bureau data we’d bought as well. And imputations made from the U.S. Census. We had data about her mortgage applications, we knew how much money she made, whether she owned a gun. We had information from her airline mileage programs, so we knew how often she flew. We could see if she was married (she wasn’t). And we had a satellite photo of her house, easily obtained from Google Earth. We had re-created her life in our computer. She had no idea.”
They were recreating lives by merging all sorts of different databases. The Facebook data provided an important psychological dimension to the profiles, but it was just one dimension of many. And up to 200 million profiles were projected by the end of the year. Profiles that recreated lives. For use by Robert Mercer and Steve Bannon:
But it wasn’t limited to Mercer and Bannon and that initial group of investors. They were offering these services to all sorts of Republican figures. John Bolton’s super-PAC wanted to figure out how to increase militarism in the American youth. Even Jeb Bush wanted in, but the Mercers wouldn’t allow it. And recall how Sam Patten worked for SCL on the 2015 Nigerian campaign where political hacking operations were employed. Also recall how Sam Patten pleaded guilty to FARA violations when he acted as a straw purchaser of Trump inauguration tickets for Ukrainian oligarch and Manafort associate Sergii Lovochkin (Lyovochkin). Also recall how SCL/Cambridge Analytica spinoff AIQ was doing consulting work for Ukrainian oligarch Sergei Taruta, who, like Lovochkin, appears to be a Ukrainian oligarch who straddles the East/West divide in the country while generally supporting moving Ukraine towards the West. We don’t know if Patten’s work with Cambridge Analytica was on behalf of a Ukrainian client, but he had just been working with Manafort’s long-time partner Konstantin Kilimnik, so it wouldn’t be surprising:
But by far the most controversial Cambridge Analytica client would have to be Palantir. Palantir staffers wanted to exploit an interesting legal loophole and let Cambridge Analytica get around the prohibition against government mass-harvesting of data on American citizens. That database recreating the details of millions of lives was going to get sold to Big Brother:
“The staff suggested to Nix that if Cambridge Analytica gave them access to the harvested data, they could then, at least in theory, legally pass it along to the NSA.”
Cambridge Analytica’s databases were so detailed even the NSA wanted in. That’s what Christopher Wylie’s team built for Robert Mercer and Steve Bannon. And then sold to a bunch of Republicans. And that’s what we’re learning from just this excerpt of Wylie’s new book: the Cambridge Analytica scandal wasn’t just a scandal about the harvesting of Facebook data and psychological profiles getting sold to the Trump campaign. It’s the scandal of Cambridge Analytica creating super-databases that include far more than just these Facebook profiles and selling them to all sorts of figures, including possibly the government, which seems like a much bigger scandal.
Facebook just disclosed a new data ‘oopsie’ involving app developers improperly accessing information they shouldn’t have been accessing (and yet somehow could access). Surprise!
Facebook hasn’t released very much information about it yet. We’re simply told that roughly 100 developers may have improperly grabbed information about people belonging to certain Facebook Groups. We’re told the apps were primarily social media management and video-streaming apps, and that they were able to access information about Facebook Group members even after Facebook changed its policies in April of 2018 in the wake of the Cambridge Analytica scandal. So we’re talking about apps that were somehow able to access this information about Facebook Group members for over a year and a half following the policy change.
Facebook won’t tell us exactly what information these apps could access other than to say it included names and photos. The company also won’t say how many people were impacted. And while Facebook claims it hasn’t seen any signs of developers abusing the information, it assures us it is asking the developers to delete the data and will be conducting an audit to confirm the data is deleted. Keep in mind that Facebook’s audit can’t really consist of much more than asking these companies whether they really deleted the data, since it’s impossible for Facebook to truly confirm it. It’s a reminder that Facebook’s business model is based on collecting massive amounts of data and then completely losing control of the data when they sell/trade it away:
“The company did not detail the type of data that was improperly accessed beyond names and photos, and it did not disclose the number of users affected by the leak.”
We don’t know what was leaked and we don’t know how many people were affected. We just know there’s a leak and it involves roughly 100 app developers. There’s no doubt about it: that’s ominous. You don’t want ambiguity in a Facebook scandal. That just means it’s only going to get a lot worse. The Cambridge Analytica scandal made that abundantly clear, with one update after another revealing more people affected, more data collected, and a worse corporate culture that made it all inevitable. That’s how Facebook does scandals. Drip-drip-dripping it along. This is just the first drip in this new scandal.
And note how we’re told the roughly 100 app developers were primarily developers of social media management and video-streaming apps. The use of the word “primarily” also implies there are other types of apps involved. What types of apps might those be? Hopefully we’ll learn that in one of the future drips:
And note one of the other implications of this story: it demonstrates that Facebook either can’t or won’t effectively implement policy changes. And when it comes to policy changes around the data it’s already given out to developers, it really can’t enforce those changes. It can only ask the developers to please not use the data they already got from Facebook in ways that violate the new policies, and hope they comply. Now, yes, Facebook can theoretically enforce policy changes in how developers use data they’ve already collected in the Facebook apps they’re developing. But there are many uses for Facebook data that don’t involve Facebook apps, as the Cambridge Analytica scandal’s psychological profiling for political purposes using Facebook profiles also made abundantly clear. And Facebook has a history of knowing about these abuses, allowing them to happen, and then pretending it didn’t know about them until it can’t pretend anymore, as the Cambridge Analytica scandal also made abundantly clear.
So if this Facebook Groups scandal ends up involving the mass handover of a large number of detailed profiles, all that data will be out of Facebook’s control. It’s just out there. Possibly getting passed around, sold and traded. It’s another way Facebook connects people: through the data black market for all the data Facebook has quietly handed out:
And note the other ominous tidbit tucked away in this initial drip, one that’s undoubtedly going to prove explosively bad: in addition to the roughly 100 app developers, there are app developer partners involved in this too. They already have the improperly harvested data. And at least 11 of them have accessed that data in the last 60 days. At least 11, which means that number is also going to go up in a future drip:
So we’ll see how much worse this story gets. At this point we just know it’s going to get a lot worse. But keep in mind that Facebook appears to be pathologically driven to repeat the same Cambridge Analytica-style scandal over and over: Facebook quietly maximizes its profits by flagrantly selling data, the abuse gets worse and worse, and then it surfaces as a mild-sounding story that soon erupts into another Facebook mega-scandal. If this story follows that pathology, it’s probably going to look like the Cambridge Analytica scandal, where the mass harvesting of 87 million detailed user profiles was enabled by the “friends permissions” feature Facebook offered app developers from 2007–2014, which allowed apps to grab the profiles of all of an app user’s friends too. If this scandal involves app developers grabbing detailed profiles of all the members in a group, it could end up dwarfing the Cambridge Analytica scandal. The Cambridge Analytica scandal was the story of a single app developer. This is at least 100.
In related news, an anonymous White House source disclosed to NBC News that Mark Zuckerberg had a secret dinner with President Trump and Peter Thiel when Zuckerberg was in DC to pitch his Libra cryptocurrency scheme to Congress last month. It’s a reminder that we probably shouldn’t be surprised if this new data scandal involves Trump campaign Facebook Group apps, and if that’s the case we’ll probably learn about it after the 2020 election. Kind of like the Cambridge Analytica scandal. But probably worse. Because that’s how Facebook does scandals. They just steadily get worse. Drip by drip. Scandal by scandal. A fire hose of drips.
Here’s an interesting story that could actually end up being incredibly impactful on the outcome of the 2020 US election. Or it might end up being a reaffirmation of Facebook’s dedication to making money helping Republicans spread misinformation. We’ll see:
Facebook hinted that it might be making changes to its advertising tools. Changes that limit the ability to microtarget ads. And that has predictably led to an outcry from the Trump team. Recall how Trump’s 2016 campaign relied heavily on Facebook’s microtargeting technologies, and that reliance is seen as one of the core elements of his victory. Microtargeting was a big part of the campaign’s ‘secret sauce’ in 2016, and all indications are that the Trump team is planning on using refined microtargeting techniques even more extensively in 2020. So it really could be a very big deal if Trump’s campaign can’t rely on microtargeting.
But it remains extremely unclear what, if any, changes Facebook is actually going to make. The reporting is based on an anonymous individual familiar with Facebook’s thinking on the matter who claims that limiting microtargeting is one of the changes Facebook is considering. But on Monday, Facebook’s vice president of global marketing solutions asserted that the ad targeting technologies wouldn’t be impacted by any upcoming changes. Later she told Axios that a range of changes were still possible.
So maybe Facebook is limiting its microtargeting options and maybe it isn’t. But the very threat of that has the Trump campaign decrying that any limitations on microtargeting would suppress voter turnout and stifle free speech. It’s interesting spin, since there is some truth to the complaint that limiting microtargeting would limit voter turnout. But that’s only because, as the article notes, microtargeting tools enhance the ability to send people the kind of highly inflammatory and deceptive ads that will get them emotionally engaged enough to go out and vote. Recall how the Cambridge Analytica scandal involved the development of psychological profiles on users — based on their “Likes” and other Facebook profile information — and the use of those profiles to tailor the kinds of messages, often deceptive ones, that would emotionally move and inflame people. So if you limit the use of microtargeting, you do actually limit the ability to motivate people to get out and vote, by limiting the ability to deliver tailored ads designed to emotionally inflame them. Also recall how one of the other goals of the Cambridge Analytica microtargeting effort was to suppress the vote by encouraging left-leaning voters to stay home and not vote at all. Because, in the end, microtargeting can encourage voting or discourage it, because it’s fundamentally about micromanipulation. Micromanipulation that the Trump campaign absolutely needs for 2020:
“Facebook’s microtargeting technologies allow advertisers to home in on specific groups of users and deliver messaging tailored to them — a strategy the Trump campaign has used prolifically. Trump’s campaign director Brad Parscale has noted that the president’s team has tested thousands of variations of political ads in an attempt to reach small groups of voters, such as “15 people in the Florida Panhandle that I would never buy a TV commercial for.””
Searching for the precise message that will get those 15 people in the Florida Panhandle to get out and vote. That’s what the Trump team’s social media advertising campaign is going to be focused on, and it’s a strategy that can’t work without the ability to microtarget, hence the freakout by the Trump campaign. A freakout that just might work at cowing Facebook, because the company is making it very clear that it’s very unclear whether there will be any changes to the microtargeting tools:
And note how the Trump team’s argument that limiting microtargeting is a limitation on free speech is true in the sense that it limits the ability to deliver targeted deceptive messages that are more likely to fly under the fact-checkers’ radar. Microtargeted free speech unfortunately includes the freedom to tell lies designed to emotionally inflame a specific person based on a psychological profile you’ve built of them, as the Trump team keeps making clear:
So we’ll see if (more likely how) Facebook eventually ends up capitulating to the Trump team’s demands. But it’s worth noting that we already have a pretty good idea of what particular types of microtargeted ads the Trump team will be using on Facebook next year if Facebook leaves them with that option: microtargeting old people with ads designed to scare them about immigration. As the following article from back in April describes, 44 percent of Trump’s Facebook advertising is spent on audiences 65 years and older (compared to 4 percent spent on the 18–34 crowd), and 54 percent of Trump’s Facebook ads use nativist language around immigration. And yet microtargeting is also being used. It’s a reminder that the microtargeting the Trump team is engaged in is largely going to be microtargeting designed to deliver inflammatory white nationalist memes most effectively to an individual:
“Trump’s campaign is spending 44 percent of its Facebook advertising budget to target users who are 65 and older, according to a report from Axios based on data from the political communications agency Bully Pulpit Interactive. That’s significantly more than the top 12 Democratic 2020 candidates, who are spending an average of 27 percent of their Facebook ad budgets on the over-65 crowd.”
It’s the Trumpian version of Big Data politics: scaring old people about immigrants with customized social media ads. And the Trump campaign is a BIG customer for these services. It’s something to keep in mind when the Trump campaign freaks out following the report of possible new microtargeting policies: the Trump campaign is probably Facebook’s biggest client for those services. They are planning on spending hundreds of millions of dollars on this, and the larger right-wing propaganda ecosystem will probably spend billions over the next year. Microtargeted white nationalist trolling is big money for Facebook in 2020:
And note how Trump’s digital campaign director Brad Parscale has talked about the “Lookalike Audience” tool that Facebook also offers to find people similar to a target list. Facebook’s ad system is basically set up to maximize microtargeting, which unfortunately doubles as a system for maximizing inflammatory microtargeted disinformation:
And it happens to be the case that the elderly are the most susceptible to fake news on Facebook. Trump scaring grandma and grandpa with scary immigrant memes is the perfect storm for fake news. The researchers who found the elderly to be seven times more likely to share fake news than younger Facebook users suggested that media literacy might be part of the issue. Which is undoubtedly true. The elderly who get right-wing disinformation on Facebook are also going to heavily overlap with Fox News viewers — where the median viewer age is around 68 — and the Fox News audience is an audience with self-evident major media literacy issues. So preying on media literacy deficits is a major part of the Trump 2020 strategy:
And it’s that ability to microtarget old people with messages about scary immigrants that the Trump campaign can’t afford to lose. It’s too important. An endless hurricane of inflammatory digital microtargeted lies and white nationalist memes. That was the Trump campaign’s digital ‘secret sauce’ in 2016 and it’s going to be the next-generation secret sauce in 2020. Unless Facebook ends the microtargeting. That’s part of what makes this story of Facebook thinking about changing those rules something to keep an eye on. This is a very big deal for the Trump campaign. Personalized provocations and deception are what the digital operations for Trump 2020 are all about. Otherwise it’s back to more generic provocations and deception, which Facebook is still quite good at delivering, so the Trump campaign’s lies should be ok.
It happened again. Again: Facebook just had another giant data leak. A security researcher just found an unencrypted database on a Dark Web hacker forum containing Facebook user account info on 267 million users. The security researcher found the database had no password protection and was available for anyone on the hacker forum to download for about two weeks. It appears to be mostly US users. Each entry in the database contained a Facebook user id, a full name, and a phone number. Importantly, it appears to be pretty up-to-date information, so it’s perfect for scam artists. The security researcher concluded that it was likely created by a criminal operation in Vietnam.
Interestingly, while the researcher raised the possibility that this information was simply scraped from the information Facebook users publicly make available on their profiles, they also suspect the information may have somehow been grabbed via the Facebook API used by app developers. Facebook used to give app developers direct access to information like the phone numbers associated with a user account via the API, until the company restricted access to that information in 2018 following the Cambridge Analytica scandal, so it’s possible this information was all grabbed before those restrictions were put in place. But as the researcher notes, it’s also possible someone found a vulnerability in the updated Facebook API.
So are we looking at a new ‘bug’ that allowed for the mass collection of data, or is this the consequence of Facebook’s past policies? At this point we have no idea. But it’s worth recalling the scandal revealed last month when Facebook admitted that at least 100 app developer partners may have improperly accessed user data from the members of Facebook Groups. We weren’t given any information about how many people were impacted by this, and Facebook only gave a vague description of the type of information developers were able to grab, only admitting that it included names and photos. So Facebook admitted just last month that there was a bug with the API it makes available to the developers of Facebook Group apps, but that’s about all it told us at the time. Might this latest leak be related to that Facebook Groups leak? Who knows. At this point there are so many reports of leaks it seems plausible that at least some of them are related. Either way, if you’re a Facebook user in the US and you suddenly start getting a bunch of scammy phone calls or texts from unknown numbers, you can probably thank Facebook for that:
“Reports indicate that this presents a treasure trove of data for telemarketers and spam purveyors because the data looks legitimate and comes from the social network itself, not from an untrusted source. (In some cases, leaked data that is old and outdated doesn’t help would-be scammers because the names and numbers are incorrect.)”
A treasure trove of data for telemarketers and spam purveyors for 267 million people. That’s what someone was just giving away on this hacker forum earlier this month. Was this data collected from scraping information users make publicly available? Or are we looking at another Cambridge Analytica-style leak where Facebook was basically giving this information away to app developers? Or maybe it was an API bug. At this point we have no idea about the source of this leak. We just know from experience at this point that there are a variety of explanations because we’ve seen so many different types of Facebook leaks:
Sophisticated scams on 267 million people are now that much more accessible to random scammers. But the information in this database doesn’t appear to be limited to Facebook user ids, full names, and phone numbers. Based on a screenshot in the actual Comparitech report, it looks like the data also potentially includes information like date of birth, location, gender, relationship status, and email addresses. But in that screenshot, all of the fields other than full name, user id, timestamp, and phone number were set to null. And that suggests that whoever set this database up and made it public might have all of that additional information and intentionally scrubbed it so only the names, user ids, and phone numbers were released:
“How criminals obtained the user IDs and phone numbers isn’t entirely clear. One possibility is that the data was stolen from Facebook’s developer API before the company restricted access to phone numbers in 2018. Facebook’s API is used by app developers to add social context to their applications by accessing users’ profiles, friends list, groups, photos, and event data. Phone numbers were available to third-party developers prior to 2018.”
Maybe the data was grabbed from the Facebook API back when Facebook was just giving information like phone numbers away to third-party app developers. Or maybe it’s an ongoing security vulnerability in the API allowing someone to still access that information. We don’t know. But based on the screenshot in this report, it looks like the database had separate fields for information like date of birth, location, gender, and relationship status, but those fields were all set to null:
In other words, whoever decided to leak this database to the world for free just might have a larger, more comprehensive database on these same 267 million people for sale. Might this leak be a means of allowing the hacker community to ‘taste’ this data set and verify that it’s legit so people will be willing to pay for the complete data set? Again, we have no idea. All we know is someone decided to give this database away for free, and that people should be extra wary of those odd phone calls and texts as a consequence. And people should probably delete their Facebook accounts, which we already knew.
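For readers curious what a record in such a leak might look like, here’s a minimal sketch of a single entry matching the report’s description: only the name, user id, phone number, and timestamp populated, with the remaining fields nulled out. All values and field names here are hypothetical illustrations, not the actual leaked schema:

```python
# Hypothetical reconstruction of one leaked record, based on the
# Comparitech screenshot description. Every value below is invented;
# only the *shape* (which fields are populated vs. nulled) reflects
# what the report describes.
record = {
    "fb_id": "100001234567890",          # hypothetical user id
    "full_name": "Jane Doe",             # hypothetical name
    "phone": "+1-555-0100",              # hypothetical phone number
    "timestamp": "2019-12-04T08:00:00Z", # hypothetical scrape time
    "birthday": None,                    # present in schema, set to null
    "location": None,
    "gender": None,
    "relationship_status": None,
    "email": None,
}

# Count populated vs. scrubbed fields in this sketch
populated = [k for k, v in record.items() if v is not None]
scrubbed = [k for k, v in record.items() if v is None]
print(len(populated), len(scrubbed))  # 4 populated, 5 nulled
```

The pattern of nulled-but-present fields is what suggests the leaker may hold a fuller version of the data.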
Brittany Kaiser, the Cambridge Analytica whistle-blower who appears to be the person releasing thousands of internal company documents via the @HindsightFiles twitter account that detail the global scale of the Cambridge Analytica/SCL political influence operations, recently hinted at which parts of that global operation she’s going to be discussing in the future: Asia. It turns out Cambridge Analytica has been operating in Singapore, Taiwan, South Korea, Myanmar, and the Philippines. And according to Kaiser, the Philippines appears to be particularly susceptible to the type of personalized microtargeting operation Cambridge Analytica specialized in, because Facebook is extremely popular there and Filipino laws make it easy to access large amounts of personal information. As we’ll see in the second article below, the Philippines was also the country with the second largest number of Facebook users who had their profiles scraped by Cambridge Analytica (around 1.2 million), and Cambridge Analytica’s parent company, SCL, has been working with a political client in the country since 2013.
SCL’s advice to the Filipino client revolved around adopting a ‘tough on crime’ and ‘honorable’ political branding. No one knows who that political client was, but it doesn’t appear the client was Rodrigo Duterte, a politician who didn’t need to hire SCL to tell him to run as a ‘tough on crime’ politician in 2013. There is a list of suspected clients, but no one knew exactly who it was in April of 2018 when Quartz first reported on this.
We’ll see if Kaiser ends up revealing the mystery client or not. But the fact that there’s mystery about the identity of SCL’s Filipino client is a reminder that these political influencing services are generally going to be purchased secretly, or at least very quietly, so there are probably going to be a lot of mystery clients out there in this industry:
““With hundreds of Cambridge Analytica remnants operating around the world, the threat of public opinion manipulation is growing in Asia,” Brittany Kaiser told Nikkei. Her warning comes as Singapore, Taiwan, South Korea, and Myanmar all prepare to go to the polls in the coming months.”
Hundreds of Cambridge Analytica remnants operating around the world. That’s ominous. Yet that’s what Kaiser describes. She also describes the Philippines being particularly susceptible to these operations due to the large volume of data made commercially available:
Anyone anywhere in the world can potentially run a digital microtargeting operation in any other country as long as this vast marketplace of personal data remains a commercial product that anyone can buy. In other words, a big part of what makes the story of Cambridge Analytica so important is that it’s just one example of a global industry. It was a peek behind the curtain. And as the following April 2018 Quartz report describes, the peek behind the curtain of what SCL has been up to in the Philippines only revealed that the company had a political client. Not the identity of the client. And while there are educated guesses about the identity of this mystery client, it’s still a mystery. Which is part of what makes Kaiser’s references to “hundreds of Cambridge Analytica remnants operating around the world” so disturbing. It implies there are a lot more mystery clients:
“On April 05, Facebook admitted some 1.2 million users in Philippines had their data improperly accessed by CA. That’s the highest number after the US, where the company used information from over 70 million users to help Donald Trump’s presidential campaign strategically target voters.”
Keep in mind that the 1.2 million figure for the number of people from the Philippines whose profiles were acquired by Cambridge Analytica is probably a dramatic understatement. 1.2 million was just what Facebook initially admitted to, and it’s almost guaranteed that Facebook’s first admission is an understatement. That’s how Facebook scandals always seem to go: an initial admission that’s alarming, but not nearly as alarming as the final admission. SCL had been operating in the Philippines since 2013, the same year Cambridge Analytica was started. There’s every reason to believe the company would have been grabbing as many Facebook profiles in the country as possible. A 2018 report found nearly 76 million social media users in the Philippines, and 75 million of them were on Facebook. So there’s a very good chance Cambridge Analytica got a lot more than 1.2 million Filipino Facebook profiles:
And note how one of the suspected mystery clients, senator Richard Gordon, matches the pattern of a politician who started cultivating a ‘tough on crime’ political brand in 2013. But he’s dismissed as the mystery client by some analysts because it’s assumed that Gordon couldn’t possibly afford the fees. That’s assuming he’s the one paying. The secrecy of the client also means the buyer is a secret. It could be a foreign or domestic backer of Gordon that’s the mystery client. We have no idea, but it’s a reminder of how firms offering these kinds of political psy-op services to secret clients make it easy for foreign interests to secretly ‘invest’ in a candidate by paying for secret political consulting and influence/psy-op services:
So that’s going to be something to watch for as Kaiser starts revealing more about Cambridge Analytica’s internal operations in coming months. Will we learn about the mystery candidate? And what about the “hundreds of remnants” of Cambridge Analytica operating around the world that Kaiser warned about? Are these remnants still operating under newer firms like Emerdata? Hopefully we find out in Kaiser’s treasure trove of files.
Either way, it’s important to keep in mind that a big part of what makes this story so important is that it’s a notorious example of a larger industry. Cambridge Analytica is notable for being cutting edge and operating on a massive scale thanks to Facebook’s lax policies, and for being run by and for fascists like Steve Bannon, Robert Mercer, and Donald Trump. It’s a particularly notorious example of a larger psy-op industry, and it’s that larger industry that we need to be most worried about, because this is the kind of industry that could grow really massive as it gets more effective. It’s the kind of industry that includes companies like Psy-Group, all potentially with their own mystery clients. Personalized persuasion technology is only going to get more and more persuasive, and the better it gets the more mystery clients it’s inevitably going to get too. Especially in the political space.
And fascists will be there to exploit this kind of personal persuasion technology. That’s perhaps the most critical point of the Cambridge Analytica scandal: it’s not just a notorious example of an out of control personalized persuasion industry. It’s a notorious example of how that industry is inevitably going to be used to great effect to help fascists like Robert Mercer, Steve Bannon, and Donald Trump, because they thrive in the environment of lies and mistrust that these companies help promote. This is like fascist dream technology and that’s who we find at the cutting edge of it. So the bigger picture story that people need to pay attention to on these matters is that there’s a larger industry offering global services that Cambridge Analytica was one notorious example of, but there’s also a particular danger of fascists abusing this personalized persuasion technology, which Cambridge Analytica is also a notorious example of. Cambridge Analytica is a good example of a lot of bad things, hence all the mystery.
The Trump administration’s response to the coronavirus has been so perfect that the criticisms of that perfect response are part of an elaborate hoax to take down President Trump. That’s literally the message the Trump administration and the broader right-wing media establishment have been doubling and tripling down on over the last few days. It’s the kind of darkly surreal ‘leadership’ on a significant public health issue that simultaneously feels unhinged and unprecedented and, at the same time, exactly what we should expect from the contemporary GOP and allied media. But while this kind of ‘hoax’ rhetoric from the GOP as a reflexive defense against criticisms of the Trump administration’s response (or lack of response) is no longer unprecedented, and sadly exactly what we should expect after three years of this madness, it’s still unprecedented to see the ‘hoax’ deflection strategy applied to an urgent and growing real-world viral pandemic.
The US electorate is going to feel the impact of Trump’s meandering federal response to the coronavirus quite literally. In the form of the flu. Hopefully it’s going to be a very mild flu for most people, or maybe no symptoms at all. But we’re all going to feel the impact of this virus at some point, directly or indirectly, as it spreads across the world, and dismissing criticisms of the Trump administration’s response as a ‘hoax’ probably isn’t going to play well with the people who actually get sick in coming months or see the economy stall. Some of the damage to health and the economy is going to be unavoidable, but it’s almost unavoidable that there’s going to be a lot of very avoidable damage done too. Because if there’s one thing Trump can’t avoid, it’s avoidable damage. That was always part of Trump’s ‘charm’: he was going to be a ‘bull in a china shop’ and charge in and break stuff. And he’s done that, including trying to break the US’s federal disease response capabilities. So we’re poised for both an epidemic of the COVID-19 coronavirus and an epidemic of right-wing grievance-media-complex blustering about how all of the criticisms — for moves like putting Mike Pence in charge of the response to politically muzzle the government’s coronavirus messaging — are part of a hoax and a deep state plot to take down Trump. It’s going to get awful and weird if the Trump team decides to get awful and weird and apocalyptic like it always does when it’s in trouble. The microtargeting of apocalyptic garbage messaging is the kind of thing we should expect from Trump’s team if the COVID-19 outbreak comes to dominate the election. Especially if it looks like he’s going to lose and needs an excuse to postpone (cancel) the election. Super-flu may have been a PR disaster for Trump so far, but there’s still plenty of propaganda opportunity. Right now the Trump team is pushing the idea that there’s a deep state plot to make him look bad with unfair criticisms.
If the COVID-19 virus gets really bad and temporarily shuts down cities we’ll probably see a very different kind of deep state-themed message coming from the Trump administration and directly at the loyal base audience who will believe anything.
So it’s worth recalling an interesting story in Salon from right before the November 2018 mid-term elections about the giant voter profile database created by the Koch Brothers (now Koch Brother) for general use by Republican candidates and right-wing interest groups like the NRA. As the article describes, they created a company, Themis, as part of their work on the 2010 Project REDMAP initiative. Project REDMAP was the GOP project to win as many state-level races as possible to maximize Republican power in the once-a-decade census and redistricting process that became a Republican hyper-partisan gerrymandering bonanza. In 2011, they bought out a competitor, i360, merged it with Themis and kept the i360 name. Yep, the Koch Brothers’ i360 company that accumulated the GOP super-database of detailed profiles on virtually every US voter was initially part of their 2010 state-level super-gerrymandering schemes. It’s an indication of the level of personal granularity they were using in drawing the GOP-gerrymandered maps. Highly personal profiles on almost all Americans. It’s not hard to imagine that would be useful for drawing partisan district lines and, sure enough, it looks like that’s what the Republicans were doing in 2010, which is just a prelude for what’s in store for 2020.
And as the article also described, this detailed personalized i360 database of every voter allowed the Republicans to manage their messaging during a different public health crisis in a very effective manner. It was the reelection of Ohio Republican Senator Rob Portman, who represents a state that’s been heavily hit by the opioid and heroin crises. Those intertwined public health crises were top concerns for voters but also very polarizing issues. Some voters wanted to see them treated as medical and public health issues and others wanted to see a more traditional criminal justice approach to the problems. So the Republicans used their i360 database to guess which voters would prefer Senator Portman treat the opioid and heroin crises as medical and public health problems and sent them messages about how Senator Portman wants to treat the problem as a public health and medical issue. Voters who would prefer Portman rely on the criminal justice system to deal with the opioid and heroin crises were delivered messages about his insistence that the criminal justice system be part of the solution. This was how they treated one of the key issues of the race: just determining what the voter wanted to hear and saying that. The trick was accurately predicting what voters wanted to hear, and it sounds like they did a good job. i360 built a “Heroin Model” and a “Heroin Treatment Model” as part of an overall persuasion model that tried to predict each Ohio voter’s views on how the opioid epidemic should be handled and whether or not they were impacted by it personally. The Kochs’ operation is described as better than what the Democratic or Republican parties can do on their own, and the results of Portman’s race would support that assessment. Portman started out the race 9 points behind his Democratic opponent and ended up winning with 58 percent of the vote.
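For the curious, here’s a rough sketch of what that kind of per-voter persuasion model boils down to mechanically: score a voter’s profile and pick whichever of the two messages the score predicts they’ll respond to. This is a toy illustration in Python with invented feature names and weights; i360’s actual models are proprietary and certainly far more elaborate.

```python
# Toy per-voter "persuasion model" in the spirit of the Heroin Treatment
# Model described above. Every feature name and weight is invented for
# illustration -- nothing here reflects i360's actual system.
import math

# Invented weights: positive values push toward the "treatment" framing,
# negative values toward the "criminal justice" framing.
WEIGHTS = {
    "college_degree": 0.8,
    "urban": 0.5,
    "age_over_65": -0.4,
    "gun_owner": -0.7,
    "knows_someone_affected": 1.2,
}
BIAS = -0.1

def preferred_message(profile):
    """Return which framing the model predicts this voter prefers."""
    z = BIAS + sum(WEIGHTS[k] for k, v in profile.items() if v and k in WEIGHTS)
    p_treatment = 1.0 / (1.0 + math.exp(-z))  # logistic squash to [0, 1]
    return "treatment" if p_treatment >= 0.5 else "criminal_justice"

voter_a = {"college_degree": True, "urban": True, "knows_someone_affected": True}
voter_b = {"gun_owner": True, "age_over_65": True}
print(preferred_message(voter_a))  # treatment
print(preferred_message(voter_b))  # criminal_justice
```

In a real operation the weights would be fit from survey responses and consumer data rather than hand-picked, but the output is the same kind of thing: a per-voter label telling the campaign which message to deliver.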
So if the COVID-19 coronavirus ends up becoming a major issue in the 2020 election, it seems likely that the Trump team and the Republican Party are going to want to deliver an “it’s all a hoax and everything is fine” message to the Republican base audience and a different “we are taking this very seriously and are very competent at handling it” message to everyone else. And they’ll be able to largely deliver those messages in a very targeted manner because they have a profile on everyone. That’s one of the Koch freebies for Republicans. It’s not quite free, but i360 is subsidized and a money-losing operation according to the article. It’s about giving Republican races and right-wing groups access to a very detailed database on all American voters. So if the COVID-19 situation gets kind of nuts and the Trump team gets desperate and decides, for example, to micro-target Republican evangelicals with End Times memes, keep in mind that they have the micro-targeting infrastructure set up to do that. That’s what they used for Senator Portman in Ohio in 2016 to send very different messages about his stances on the opioid and heroin crises, and that micro-targeting infrastructure is only going to be more sophisticated in 2020. We already know Trump and the Republicans are willing to be insanely irresponsible when it comes to what they are saying about the coronavirus situation, so it’s just a matter of how irresponsible they’ll ultimately get. And if they want to get really irresponsible in a micro-targeted manner they can do that:
“Thanks to that investment (and the Supreme Court’s campaign finance rulings that opened the floodgates for super PACs), the Koch network is better positioned than either the Democratic Party or the GOP to reach voters with their individually tailored communications.”
Better than the Democrats or Republicans. That was the status of the Kochs’ i360 voter micro-targeting capabilities. There are up to 1,800 unique data points on 199 million active voters and 290 million US consumers. That’s almost everyone. And it includes granular data on your mortgage status. They certainly aren’t unique at aggregating all of this kind of data. But they were apparently the best as of the 2018 mid-terms. And it’s a sure bet that whatever the Trump team builds will include the Kochs’ i360 data:
And in 2016, we got to see the i360 system go to work for Rob Portman’s US Senate reelection bid in Ohio, where the opioid crisis was a key regional issue in the race. They developed models to predict whether individual voters would prefer to hear that Rob Portman supported health care solutions to the epidemic or criminal justice solutions, and delivered those personalized messages. When i360 started with Portman he was 9 points down in the polls and he ended with 58 percent of the vote. It’s hard to say how much of that swing was due to the i360 modeling, but it’s hard to imagine it didn’t help Portman:
The Heroin Model and the Heroin Treatment Model. Those were a big part of Senator Portman’s successful come-from-behind messaging campaign devised by i360. Might there be a COVID-19 model being devised by i360 right now? That seems like a near certainty. We’ll probably find out after the election about elaborate COVID-19 models that involved all sorts of parameters. Like whether or not someone caught the virus themselves or had a household member who did. Or whether they had a family member who died from it. There’s probably going to be all sorts of modeling involving the virus and how hard it hits various communities.
And while it’s unclear where the Trump team would get their hands on information like who had COVID-19, keep in mind that we are living in the golden age of commercial personal data-brokerage databases. So making inferences about people from the meta-data about them is easier than ever. For example, the commercially available smartphone-based location information sold by cellphone providers might alone give campaigns enough information to make educated guesses about who got the coronavirus. Something like looking for people whose smartphone location information suddenly shows movement only between their bedroom and bathroom for a week. If you cross-referenced that with reports of COVID-19 outbreaks you would have a good shot at guessing who got the COVID-19 illness and probably wants to hear a very different message about the COVID-19 outbreak response than someone who has yet to face the outbreak. Who knows what information source they ultimately use. The point is that there are so many to choose from it’s just a matter of time before they find a source for what they’re looking for. It’s one of the features of the Information Age so far. It’s been an explosion of information captured and packaged for commercial sale without the public really realizing it, and that means there’s a good chance whatever information you’re looking for is for sale somewhere. It’s the End of Privacy in the form of a giant information marketplace.
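To make that bedroom-to-bathroom idea concrete, here’s a toy sketch of the kind of stay-at-home heuristic a data broker’s client could run over commercial location pings: flag any device whose pings never leave a small radius around its home point for a week straight. Every threshold, radius, and function name here is an invented illustration, not any actual vendor’s method.

```python
# Toy stay-at-home heuristic over smartphone location pings, as described
# above. Thresholds and the distance math are illustrative guesses.
import math

HOME_RADIUS_KM = 0.05   # ~50 meters: "never left the house"
MIN_DAYS = 7            # a week of staying home triggers the flag

def km_between(a, b):
    """Rough equirectangular distance between two (lat, lon) points in km."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371.0 * math.hypot(x, y)

def looks_homebound(home, days_of_pings):
    """days_of_pings: one list of (lat, lon) pings per day, in order.
    True if there's a run of MIN_DAYS days all within HOME_RADIUS_KM of home."""
    run = 0
    for pings in days_of_pings:
        farthest = max(km_between(home, p) for p in pings)
        run = run + 1 if farthest <= HOME_RADIUS_KM else 0
        if run >= MIN_DAYS:
            return True
    return False

home = (40.0, -75.0)
commute_day = [(40.1, -75.0)]    # ~11 km away: a normal day out
home_day = [(40.0001, -75.0)]    # ~11 meters: never left the house
print(looks_homebound(home, [commute_day] * 3 + [home_day] * 8))  # True
```

Cross-reference a flag like that with local outbreak reports and you have exactly the kind of educated guess described above, built entirely from data that’s already for sale.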
It’s all pretty impressive if you ignore the destruction: Trump’s maelstrom of disinformation surrounding the COVID-19 virus is set up to be turbocharged with the kind of sophisticated micro-targeting campaign that can figure out what each voter wants to hear and deliver that message to them, powered by that commercial marketplace of aggregated personal profiles on virtually every American. There will presumably be Cambridge Analytica-style psychographic profiling too. Whatever helps change minds and voting behavior. And those Trump campaign models will attempt to predict, for every US voter, whether they want to hear sobering messages about remaining calm or right-wing rants about how the deep state is trying to make Trump look bad. It’s all a reminder that while part of the hurricane of disinformation that we’re going to experience coming out of the Trump administration over the COVID-19 virus will be due to Trump’s own personal inclination to lie all the time, almost pathologically, another part of that hurricane of disinformation will be coming from cutting-edge micro-targeting operations financed in large part by the Kochs.
Here’s just a quick followup on the legal repercussions of the Cambridge Analytica scandal: the company’s former director, Alexander Nix, was just issued his punishment from the UK government for his role as CEO of Cambridge Analytica. The ruling was made by the UK’s Insolvency Service which cited a number of violations by Cambridge Analytica including “bribery or honey trap stings, voter disengagement campaigns, obtaining information to discredit political opponents and spreading information anonymously in political campaigns.” The ruling wasn’t limited to Cambridge Analytica. Cambridge Analytica’s parent company, SCL Elections, “repeatedly offered shady political services to potential clients over a number of years” according to the report.
The report unfortunately doesn’t give examples of those specific charges, although that certainly sounds like a representative list of what we’ve already learned about the kinds of shady services the company offered clients. Services that Nix himself explicitly laid out in the now notorious undercover video with a journalist, where he bragged about tactics like hiring Ukrainian sex workers to discredit a client’s political opponent.
So what was Nix’s punishment for directing a company that routinely offered shady services to potential clients for years? Nix is disqualified from running a company until 2028. That’s it:
“The chief investigator went on to say in his statement: “Alexander Nix’s actions did not meet the appropriate standard for a company director and his disqualification from managing limited companies for a significant amount of time is justified in the public interest.””
Yes, the disqualification for Nix from managing limited companies for a significant amount of time is indeed justified in the public interest. Assuming the punishment was meaningful enough to actually dissuade others from running unethical mass public propaganda outfits.
Unless, of course, it’s such a lenient sentence that it acts as an incentive for everyone else working in this shady industry to continue doing so with impunity. And that raises the question: what about all of those Cambridge Analytica spin-offs like Emerdata? Or companies founded by former Cambridge Analytica employees, like Data Propria, which is now working with Brad Parscale on the Trump reelection campaign? What are the many remnants of Cambridge Analytica up to, and what about all of that data? How many entities around the world possess the Cambridge Analytica data trove and what are they doing with that data today? These are the kinds of questions that need to be answered when attempting to assess whether or not Nix’s punishment was in any way a deterrent to future crimes of this nature. Questions that were posed to former Cambridge Analytica CEO and Emerdata founder Julian Wheatland in the following Fast Company article from July 2019. And if Wheatland’s implausible denials are any indication of whether or not that data is still being used in secret, some powerful deterrents are very necessary:
“Julian Wheatland, the former CEO of Cambridge Analytica and former director of a number of SCL-connected firms, told Fast Company this week that there were no plans to revive the companies. “I’m pretty sure nobody’s thinking of trying to start it up again under a different guise,” he said.”
LOL! Yes, Julian Wheatland, the former CEO of Cambridge Analytica, assures us that nobody’s thinking of trying to start up Cambridge Analytica again. Wheatland is, of course, one of the original directors of Emerdata, along with Alexander Nix:
So one of the founders of Emerdata is maintaining that there are no plans for starting a company like Emerdata. It’s not exactly reassuring. Especially since Emerdata has been footing the legal bills and bankruptcy proceedings for Cambridge Analytica and the rest of the SCL offshoots. It’s the court-appointed administrator, Crowe U.K. LLP, that controls what’s left of the companies, which means the entity that controls all of that data is effectively a client of Emerdata. And when US academic David Carroll sued Cambridge Analytica to obtain his personal data, Crowe instead decided to subject SCL to a criminal investigation, a move that could be seen as an attempt by Crowe to shield the nature of that data from public scrutiny:
Adding to suspicions that Emerdata is sitting on that trove of 700 terabytes of Cambridge Analytica data is the simple fact that a reporter was able to view the data, leaving Wheatland “completely mind-boggled” as to how that could be. A possibility that’s a lot less mind-boggling when you consider obvious explanations like copies of the data having been made or former employees taking the data. Possibilities that Wheatland acknowledges:
But perhaps the biggest reason to suspect Emerdata is not just holding that data trove but actively planning on utilizing it is that the company is now largely owned by Rebekah Mercer and her sister Jennifer and is directed by a figure from the Mercer Family Foundation. The Mercers don’t seem like the types to relinquish all of that data just because of some laws and court rulings:
But concerns about how that data trove might be used going forward aren’t limited to Emerdata. Other companies started by Cambridge Analytica employees include Data Propria, founded by Matt Oczkowski, who now works for the Trump reelection campaign. And Auspex International, which sounds like a Cambridge Analytica spin-off dedicated to public persuasion campaigns in the Middle East and Africa. And there’s even Virginia-based Anaxi Solutions, which specialized in government contracts:
Did any of these spin-offs happen to get its hands on that data trove? It’s hard to imagine that not being the case. Secretly copying and utilizing those 700 terabytes of data is a technical triviality, after all. And it’s hard to imagine Alexander Nix’s punishment is acting as a significant deterrent. The only thing holding these spin-offs back is their own self-restraint and internal ethical standards. Which is why the real question over whether or not all of the Cambridge Analytica data is still being used today is the question of whether or not the people who made lots of money secretly doing it in the first place would be willing to make lots more money secretly doing it again.
With the Trump 2020 reelection campaign now in a state of self-inflicted COVID turmoil less than a month before election day, the question only grows as to what sort of orchestrated counter-turmoil we should expect to see created by the Republican Party and its many affiliates. And when we have to ask questions about Republican dirty tricks, that obviously raises all sorts of questions about what exactly happened during the 2016 campaign and all of the dirty trick mysteries that remain unresolved. So it’s worth noting one of the interesting questions that we sort of got an answer for in the Senate Intelligence Committee report released back in August: the question of whether Psy Group first approached the Trump campaign or vice versa.
First, recall how we previously learned that Psy Group was making pitches to the Trump campaign as early as March of 2016 to help Trump defeat Ted Cruz and win the GOP nomination. But it appeared at the time that it was Psy Group who approached the Trump team, raising all sorts of questions as to who prompted Psy Group to make the offer in the first place. Although the Saudis and UAE were obviously suspects for being behind the offer, since we also learned that Psy Group’s services were offered in early August of 2016 to help the Trump campaign win the general election on behalf of the crown princes of Saudi Arabia and the UAE. Right-wing Israeli forces were also an obvious suspect. But we never really got an answer on who was behind that initial pitch.
Well, tucked away in the Senate Intelligence Committee report we find this fun fact about the earliest reported contact between Psy Group and Republicans: it appears that Kory Bardash, the head of the group Republicans in Israel (also known as Republicans Overseas Israel), was the person who initially reached out to two figures at Psy Group.
Republicans in Israel is a non-profit founded by Marc Zell, a figure described as the head of the Republican Party in Israel. Zell is obviously on board with Republican politics in general, but to get a sense of just how completely on board he is with the contemporary GOP’s Trumpian embrace of fascism, it’s worth noting that Zell condemned the anti-Nazi counter-protesters over the 2017 violence in Charlottesville, Virginia, that prompted Trump’s infamous “very fine people on both sides” declaration. Zell also called Confederate general Robert E. Lee a great man and made an equivocating statement of his own about how both the North and South had terrible sides during the Civil War. So based on the Senate Intelligence Committee’s report it sounds like it was basically an arm of the Republican Party that made the initial outreach to Psy Group.
But keep in mind that this initial request for help was made while the Republican primary was still in full swing. So this wasn’t really the Republican Party that reached out to Psy Group but instead a pro-Trump faction of it that happens to include Zell. And as we’ll see in the second article below, part of what makes this a fascinating turn of events is that Zell was publicly against Trump winning the nomination. In early December 2015, Zell declared that Trump can’t and won’t be president. It wasn’t until August of 2016 — after Erik Prince and George Nader made their early August secret trip to Trump Tower to make the assistance offer on behalf of the crown princes of Saudi Arabia and the UAE — that Zell was openly talking about his change of heart on Trump after openly opposing his nomination.
It’s a sequence of events that raises all sorts of interesting questions about the kinds of backroom negotiations that must have been taking place in late 2015 and early 2016, after it became clear that the Republican electorate strongly preferred Trump over figures like Ted Cruz or Marco Rubio. What kind of secret offers did Trump have to make to the Republican establishment in order to earn the support of figures like Zell? And, in turn, what role might those secret offers have played in the formation of the dirty tricks campaign that was carried out across the world in 2016 in favor of Trump? It’s an example of how the more answers we get about what happened in 2016, the more questions get raised, because the cover-up is still ongoing:
“According to the report, Psy-Group initially got in touch with the Trump campaign in March 2016, when Kory Bardash, the head of Republicans in Israel, emailed Birnbaum, as well as Eitan Charnoff, a project manager at Psy-Group.”
So based on the earliest available evidence, it was Kory Bardash, head of Republicans in Israel, who initiated the idea of hiring Psy Group to help the Trump campaign. Although in that email Bardash refers to having previously spoken with each of the two Psy Group employees about the other, so there’s still the question of when those earlier conversations started and whether or not they included the topic of assisting Trump. At this point we can conclude that Republicans in Israel/Republicans Overseas Israel had made its decision to back Trump over more traditional figures like Ted Cruz or Marco Rubio by at least March of 2016. And they were so keen on backing Trump that they were willing to hire a foreign social media manipulation specialist like Psy Group to help Trump secure the nomination. In other words, they hadn’t simply warmed to Trump at that point. They were full Trump backers:
And note the chilling level of honesty from one of the Psy Group employees about how and why Psy Group’s techniques for pushing lies on social media are so effective: as long as you can invade people’s social media ‘echo chambers’ with bots and avatars people will believe what those bots and avatars tell them. It’s diabolically simple:
Finally, regarding Psy Group’s claims that it never actually carried out any of these services, as the article notes, the company was paid over $1 million by George Nader — who appeared to be acting on behalf of the crown prince of the UAE — for some sort of services. Also recall that George Nader himself has reportedly given a different account of the services Psy Group provided, although we aren’t told what he said. So there really is an abundance of circumstantial evidence suggesting something was carried out by Psy Group on the Trump campaign’s behalf. Something worth millions of dollars in payment:
And now here’s an August 2016 article about how Marc Zell, described as Israel’s leading Republican, came around to not just accepting Trump but becoming a vocal supporter. Zell gives the typical explanation we heard at the time about how in private Trump was actually a very sober-minded and rational businessman behind the scenes, a far more plausible lie at the time compared to today. The article goes on to describe how some Republicans in Israel had yet to come around to Trump, along with Zell’s prediction that they would eventually do so. Guess who the example is of a Republican in Israel who hadn’t yet fully come around to Trump: Zell’s co-chair Kory Bardash, the same guy who made the secret outreach to Psy Group five months earlier to ensure Trump got the nomination:
“Indicating his change of heart is no case of political expediency but rather one of genuine, albeit new-found conviction, Zell passionately advocated for a Trump presidency.”
It wasn’t political expediency. It was a newfound genuine passionate conviction about Trump. That’s how Marc Zell was spinning his seemingly sudden support of Trump at the time. And while the kinds of statements Zell was making at the time seemed consistent with standard public relations spin that shouldn’t in any way be taken at face value, the fact that we’ve now learned that Zell’s organization was secretly reaching out to Psy Group to secure Trump’s nomination back in March of 2016 does lend credence to Zell’s claims. He really must have been very supportive of Trump to go as far as reaching out to Psy Group.
And if Zell and his Republicans in Israel organization really was enthusiastically backing Trump in early 2016, you have to wonder if the various Trump policy proposals cited that convinced Zell that Trump was the superior candidate really were major factors. Policy proposals like the ‘Muslim ban’ and building ‘the Wall’ with Mexico. Considering that the ‘Muslim ban’ and ‘the Wall’ are normally seen as the kinds of ‘red meat’ policies intended to appeal to the broader Republican voting base, as opposed to elite Republican organizers like Zell, the fact that Zell cites those as the kinds of policies that brought him around to supporting Trump raises the interesting question of how much Trump’s ‘red meat’ for the base is also highly appealing to the Republican elites like Zell on whose behalf the party is actually run:
Finally, note the example given of a Republican in Israel who might still need more convincing to support Trump: Kory Bardash, the same guy who reached out to Psy Group about supporting Trump months earlier:
Notice how Bardash didn’t give any sort of full-throated support in his statement when contacted by the Times of Israel, as if he wanted to maintain the image of someone who was still only tepidly supportive of Trump. The same guy who was secretly trying to hire Psy Group on Trump’s behalf. And yet, to this day, we are told that it isn’t really known if Psy Group ever provided any services for the Trump campaign at all. It’s all a mystery that will apparently go unresolved forever.
Also note that since Psy Group was an Israeli company, it only makes sense that the Israeli branch of the Republican Party would be the group that reached out to them. But that doesn’t necessarily mean that the Psy Group operation was an Israeli Republican operation. The group of Republican elites (and their associates) behind the Psy Group plan could be much larger.
So as the final month of the 2020 election clusterf*ck plays out, with all of the upcoming dirty tricks we can now confidently expect from the GOP, it’s going to be important to keep in mind that the 2016 mystery of Psy Group now includes the mystery of which Republican Party elites secretly tried to hire it. There’s also the question of whether or not this same group would be willing and able to engage in more dirty tricks in 2020, although that’s not really a mystery.
In light of the recent revelation from the Senate Intelligence Committee’s report on the 2016 Russia investigation regarding Psy Group — the revelation that it was the head of the Republican Party’s primary organization in Israel, Kory Bardash, who initiated the outreach to Psy Group in March of 2016 for the purpose of helping Donald Trump win the 2016 primary, which raises all sorts of interesting questions about who, in addition to the crown princes of Saudi Arabia and the UAE, was behind the hiring of Psy Group in 2016 to help Donald Trump win the election — it’s worth noting an earlier revelation about Psy Group that emerged in relation to a completely different scandalous case: Psy Group employees were caught working in tandem with Black Cube employees on a joint smear campaign project, according to a lawsuit by Canadian hedge fund West Face Capital Inc. The hedge fund sued rival Canadian investment fund Catalyst Capital Group Inc., alleging that Catalyst hired Psy Group and Black Cube to run a sting operation and defamation campaign against it. The fund also sued Psy Group and Black Cube for $500 million in damages.
It sounds like the smear campaign was financed by private donors and focused on stigmatizing various pro-Palestinian BDS groups. West Face Capital charges that it was a target of this joint Psy Group/Black Cube project and lists the name of an Indian contractor hired by Psy Group to post defamatory content about the hedge fund online. And it’s according to this legal complaint that Psy Group and Black Cube employees were working in tandem with each other when publishing defamatory content using sophisticated masking techniques to hide their tracks. The hedge fund learned it was a target of Black Cube and Psy Group when employees recognized the image of Black Cube employee Stella Penn Pechanac in news reports about Harvey Weinstein hiring Black Cube to investigate women accusing him of rape and assault.
It wasn’t a particularly surprising revelation, if true, that Psy Group and Black Cube employees were working in tandem and using sophisticated techniques to hide their tracks. But part of what makes it relevant in the context of the new revelation that the outreach to Psy Group came from the head of Republicans in Israel is that while Psy Group has long vociferously denied hacking the targets of its clients, Black Cube has been caught hacking the targets of its clients. So if Psy Group and Black Cube have a history of teaming up so closely that the companies’ employees work in tandem on joint projects, those denials that Psy Group would hack a target are pointless. Not that the denials had much weight anyway, since of course they would deny it. And if Kory Bardash, the head of the main Republican Party outreach group in Israel, was the figure who initially tried to hire Psy Group back in March of 2016, we have to ask if those still-secret Trump backers tried to secretly hire Black Cube for the project too:
“According to the complaint, Psy-Group — whose operatives in the Canadian project allegedly included former Israeli television journalist Emmanuel Rosen — worked in tandem with Black Cube, publishing defamatory articles and social media posts about West Face and using sophisticated masking techniques to hide their tracks.”
As the West Face lawsuit demonstrates, Psy Group and Black Cube are certainly willing to work together when a client asks. So we have to ask: when Kory Bardash hired Psy Group in 2016, was Black Cube in on the deal? Were a few hacks of the Democrats part of the requested service package? It’s a question especially relevant in the context of the ‘Russian hackers’ fiasco. Also recall that the outreach by Kory Bardash to Psy Group took place in March of 2016, the same month we are told ‘Fancy Bear’ GRU hackers started their hacking campaign against the Democrats.
It’s also worth recalling that one of the disinformation campaigns Black Cube worked on that involved hacking was a campaign where it was hired by Cambridge Analytica to hack the political opponent of Cambridge Analytica’s client, Nigerian President Goodluck Jonathan. Again, we have to ask, did Psy Group hire Black Cube for any Republican-related projects in 2016? How about Cambridge Analytica? Maybe a few hacking-related projects? We don’t know and we’ll presumably never find out. Which is all the more reason we have to ask.
Here’s an interesting story that directly relates to the ongoing, if belated, legal repercussions still emanating from the Cambridge Analytica scandal. But perhaps more importantly, it relates to the decision President Trump needs to make over whether or not he’s going to pardon Steve Bannon. A decision that will presumably hinge directly on Bannon’s knowledge of Trump-related crimes and fears that he might be called in to testify about them. A decision that will also presumably be complicated by the fact that pardoning Bannon for a crime also eliminates his ability to assert his Fifth Amendment right against self-incrimination in US courts when asked to testify about those pardoned crimes:
The US Federal Trade Commission (FTC) recently asked a federal court to force Steve Bannon to testify under oath as part of the FTC’s Cambridge Analytica investigation. The probe will also address the question of whether or not Bannon himself should be found personally liable for his role in the Cambridge Analytica scandal and associated data breaches. The FTC also reportedly wants to ask Bannon if copies of the Cambridge Analytica data exist and who might be in possession of them. Given that multiple copies of that treasure trove surely exist and were quite possibly used by the Trump 2020 campaign and all sorts of other Republican campaigns, this could be a highly explosive question for Bannon to answer. At least assuming he won’t just commit perjury with the expectation of a pardon.
Bannon already agreed to an in-person interview in September, with the understanding that he would be invoking the Fifth Amendment. But instead of showing up, he skipped it. So he’s already signaled that he’ll be invoking the Fifth and has already defied the FTC on the matter. It really is some sort of legal showdown. The FTC wants to place Bannon in a position where he potentially faces criminal charges, which in theory should make him a pretty compliant witness. Except, of course, Bannon knows he might be pardoned by Trump, and that pardoning window is only going to stay open until Trump leaves office. At the same time, if Trump pardons Bannon for any Cambridge Analytica crimes, he might want to do it AFTER Bannon testifies, so Bannon can retain his Fifth Amendment right against self-incrimination during the testimony and also lie under oath if need be, since any perjury charges can be pardoned away too. So Trump needs to decide not only if he’s going to pardon Bannon but also when to pardon Bannon. Before or after the interview.
The new date requested by the FTC is December 8, which is soon enough that if Trump wants to wait for Bannon to testify — allowing Bannon to invoke the Fifth Amendment — and only later decide whether to pardon him, that will be an option. Trump has about a week and a half to decide, just as Bannon has about a week and a half to decide whether he’s going to show up for this interview or defy a court order and keep holding out for that pardon:
“FTC lawyers said they want to question Bannon about whether the Facebook user data collected by Cambridge Analytica still exists and was shared with anyone else.”
Who has the stolen Facebook data? That appears to be one of the key questions the FTC wants to ask Bannon, which is the kind of question that threatens far more people than just Bannon. The GOP has had at least 4 years to secretly learn how to utilize the combined data sets of Facebook’s personal data profiles and the Cambridge Analytica psychological profiles. What did Republican campaigns do with all that data and how might it have been used in 2018 or 2020? If anyone knows the answers to those questions it’s Steve Bannon, which is why pardoning him has to be sooooo tempting right now. Except for that pesky issue of pardons and the Fifth Amendment:
What will Trump do? What about Bannon? Will he even show up this time? These are the questions we’ll get answered in about a week and a half. Unlike the questions about what happened to all the Cambridge Analytica data and stolen Facebook data, which will presumably remain unanswered one way or another.
In light of the recent reports about how Mark Zuckerberg and Joel Kaplan have been personally intervening to protect figures like Alex Jones or outlets like Ben Shapiro’s Daily Wire from the consequences of breaking Facebook’s rules, here’s an article from Judd Legum’s Popular.info newsletter from back in June about another example of Facebook seemingly bending the rules in ways intended to maximize the reach and influence of Shapiro’s Daily Wire:
First, recall how we just learned that Facebook decided to continue allowing In Feed Recommendations (IFR) — a feature that inserts posts into people’s feeds from accounts they don’t follow, ostensibly to ‘foster new connections’ — to serve links to conservative personalities including Ben Shapiro, despite rules against political IFR content. Why did Facebook continue serving up links to Ben Shapiro to people who hadn’t signed up for Shapiro content? Because, Kaplan’s content policy team argued, dropping the links to Shapiro and other conservatives might trigger a new round of right-wing accusations about Facebook ‘shadow-banning’ them.
Preemptive capitulation in the face of possible ‘shadow-banning’ charges was consistently used internally as an excuse to continue policies that help right-wing causes and personalities. So you have to wonder if that was also the internal excuse used when Facebook decided not to enforce the rules against the Daily Wire in another case of systematic rule-breaking discovered by Popular.info last year: it appears that the Daily Wire was secretly paying one of the most prolific super-spreaders of far right junk ‘news’ content on Facebook to promote the Daily Wire’s content. The network of high-profile pages is run by Corey and Christy Pepple, who are best known as the creators of Mad World News. The network specializes in taking old, highly racially charged stories and recycling them (without any indication they are years old) in ways designed to exploit Facebook’s algorithms. And yet there’s one source of content pushed by this network that isn’t recycled racist click-bait: the Daily Wire’s content, and the Daily Wire appears to be the only publisher with this arrangement.
It turns out this kind of arrangement is in direct violation of Facebook’s rules. And yet, when directly confronted with the evidence, Facebook refused to do anything about it and denied that the Daily Wire was breaking the rules at all.
So what are the consequences of Facebook allowing the Daily Wire to flout its third-party promotion rules? Well, in May of 2020, the Daily Wire was the seventh-ranked publisher on Facebook, and on a per-article basis it receives far greater distribution than any other major publisher. The Daily Wire really is getting an enormous service from the Pepples’ network in the form of outsized Facebook traffic. A service that Facebook should be punishing both the Daily Wire and the Pepples for engaging in, and yet Facebook refuses to acknowledge anything wrong even took place. So it looks like we can add ‘allowing the Daily Wire to piggy-back on a racist click-bait empire even when it’s against the rules’ to the list of things Facebook has been doing to ensure Facebook remains the greatest propagator of far right content ever known.
But there’s another aspect of this story that should be pointed out in the context of the ongoing internet-driven radicalization of conservative audiences taking place around the world: The reliance by Mad World News LLC on old polarizing click-bait articles that inflame fears and prejudices isn’t just an example of amoral marketing tactics. It’s actually a means of attracting and keeping an extremist-minded audience. Extremist-minded in the most fundamental sense, according to some recently published research that examined the perceptual traits of extremists.
The new research, led by Dr Leor Zmigrod’s lab at Cambridge University’s department of psychology, compared how people of different political orientations fundamentally perceive the world. The study gave 522 participants a battery of 37 cognitive tests and 22 personality surveys that focused on self-regulation and personality characteristics. The study was designed to ask the following questions: To what extent do the ideologies people espouse reflect their cognitive and personality characteristics? What are the commonalities and differences between the psychological underpinnings of diverse ideological orientations? What are the contributions of cognitive processes versus personality traits to the understanding of ideologies? And which psychological traits are associated with one’s likelihood of being attracted to particular ideologies?
The surveys allowed the researchers to assess both the political ideologies and worldviews of the participants as well as their self-reported psychological traits, and included questionnaires on nationalism, patriotism, social and economic conservatism, system justification, dogmatism, openness to revising one’s viewpoints, and engagement with religion. They then measured a variety of cognitive traits by asking participants to carry out tasks like viewing a dot moving on a screen and determining as quickly as possible whether the dot was moving left or right. They distilled the survey information down to three core psychological dimensions — conservatism, dogmatism, and religiosity — and compared the fundamental cognitive traits of people along those dimensions. Perhaps not surprisingly, they found that people who self-reported higher levels of conservatism, dogmatism, and religiosity were literally slower and more cautious at making assessments about the physical world around them, like whether or not the dot was moving to the left or right.
Overall, they found that the political conservatism factor in their model, which reflects tendencies towards political conservatism and nationalism, was significantly associated with greater caution, greater temporal discounting, and reduced strategic information processing in the cognitive domain, and with greater goal-directedness, impulsivity, and reward sensitivity, and reduced social risk-taking in the personality domain. They also found that people who tended towards extremism had poorer emotional self-regulation. It’s the kind of research that, if it pans out, highlights how insidiously manipulative Facebook’s relationship is with groups like Mad World News LLC and Shapiro’s Daily Wire by revealing what appears to be a greater vulnerability among psychologically conservative-oriented people to manipulative media practices. The mutual relationship between Facebook and the right-wing disinfotainment media outlets that dominate it by pumping out highly emotionally charged content intended to trigger the fear and anxiety centers of the brain isn’t just a mass psychological manipulation campaign. It’s a mass psychological manipulation campaign that is systematically having a greater impact on the psychologically conservative segments of society. It’s a potentially diabolical method for societal polarization that operates at a subconscious level.
But there’s another major twist in this story: while this is the kind of study that’s interesting on its own as an example of the research taking place these days examining the relationship between fundamental psychological and cognitive traits and our political orientation, part of what makes this new research so interesting in the context of Facebook is that it was carried out by Cambridge University’s department of psychology. And it was that same psychology department, where Aleksandr Kogan was appointed, whose research was at the heart of the Cambridge Analytica scandal. Research that shares a number of parallels with this new work. In particular, recall how Kogan’s research was similarly focused on discerning basic psychological characteristics from people based on information like their Facebook “Likes”, and then using those psychological profiles to predict people’s politics. This new research sounds a lot like an extension of Kogan’s research. Recall how Kogan’s research found that people who scored high on the “neuroticism” scale were also easier to manipulate with inflammatory content, and much of Cambridge Analytica’s actual services relied on identifying the most neurotic people and serving them provocative ads. This new research appears to more or less validate that political strategy. It’s a remarkable and scary fun fact.
Another parallel between the lax attitude Facebook takes towards Mad World News serving up recycled old inflammatory stories and its lax attitude towards Cambridge Analytica’s 2016 voter micro-targeting campaign is that both Mad World News and Cambridge Analytica serve up inflammatory content to people who don’t necessarily seek it out. Mad World News is so ubiquitous you can’t help but come across it if you browse Facebook. Its size and reach make exposure to it somewhat inevitable, especially for conservative readers. And with the Cambridge Analytica scandal the end goal was manipulative micro-targeted political ads based on psychological profiles inferred from Facebook data. In both cases, Facebook was allowing itself to be used for extremism outreach. Targeted outreach.
So we have new research out of Cambridge’s psychology department that sounds A LOT like the research the Cambridge Analytica scandal was based on, and this new research is confirming the relationship between political orientation and psychological and cognitive traits. But it’s also confirming the premise the Cambridge Analytica effort was based on: that if you can identify someone’s basic psychological profile there’s a good chance you can predict their politics, and that the psychological profiles associated with conservative politics tend to be easier to manipulate with inflammatory and provocative content. All of which, again, is why Facebook’s protection of the Daily Wire’s secret relationship with super-peddlers of deceptive, provocative right-wing content was so diabolical.
“But that actually understates how well The Daily Wire does on Facebook. While the New York Times published 15,587 articles in May, and the Washington Post published 8,048, The Daily Wire published just 1,141. On a per article basis, The Daily Wire receives more distribution than any other major publisher. And it’s not close.”
Yes, The Daily Wire receives more distribution than any other major publisher on a per-article basis, and it’s not close. Why is that? Oh right, cheating. Cheating with the help of one of the biggest peddlers of provocative right-wing click-bait trash on the internet, the Mad World News network of Corey and Christy Pepple:
It’s a sign of just how prevalent click-bait trash truly is on Facebook: in order to become one of the top publishers, The Daily Wire had to ride the coattails of MadWorldNews.com. And somehow The Daily Wire is the only site promoted by this network, a strong indication of a secret commercial arrangement:
And yet when faced with this evidence, Facebook dismissed it, first by suggesting that it couldn’t determine if there was a financial relationship between Mad World News LLC and The Daily Wire. But then Facebook went on to assert that its rules about “branded content” — which state that content a group was paid to post must be labeled as branded — don’t apply to paid links anyway, which is inaccurate. Facebook explicitly states that links count as branded content. So when presented with evidence of this relationship between The Daily Wire and Mad World News LLC, Facebook basically tried to lie to the journalists. Which is more or less how Facebook behaved when previously faced with evidence of The Daily Wire breaking Facebook’s rules:
And, finally, note that if it seems like The Daily Wire has been getting exceptional treatment from Facebook, even by the lax standards the platform has for conservative groups, that might have something to do with Mark Zuckerberg’s personal relationship with Ben Shapiro:
Now, here’s a Guardian piece on the recently published Cambridge University study examining how basic psychological and cognitive traits can affect your politics. Basic traits like how you take in and process information. And as the study found, the more difficulty you have perceiving and retaining information, the more likely you are to be politically conservative, with the implication being that deceptive media tactics are going to be more effective on psychologically conservative-minded individuals:
“A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.”
If you’re an extremist, you probably tend to view the world in black and white terms. It’s not a particularly surprising finding. But far more interesting is that if you’re an extremist you’re probably also more likely to struggle with complex tasks. If you have trouble processing reality at a fundamental level you’re more likely to become an extremist. Again, it’s not too surprising, but it’s still a relatively new and important finding. And it’s an even more important finding if it turns out groups have been exploiting exactly these psychological vulnerabilities for years to radicalize people over Facebook and other social media platforms:
And note how they found that demographics, like race and gender, were far less important in predicting your politics than these basic psychological and cognitive traits. It’s the kind of finding that could prove to be important in all sorts of areas, but especially when it comes to political advertising:
Welcome to the future of political manipulation. After all, if psychological traits are as predictive of politics as this study suggests, it would be foolish not to incorporate that information into your political marketing practices, which is part of why the development of mass databases of consumer psychological profiles by companies like Facebook is so scandalous. Or at least should be scandalous. That’s why this kind of research is potentially so significant. It’s the foundation for the next generation of Cambridge Analytica scandals:
“Ideologies can be generally described as doctrines that rigidly prescribe epistemic and relational norms or forms of hostility [33]. The present investigation espouses a domain-general outlook towards the definition of ideology—focusing on the factors associated with thinking ideologically in multiple domains, such as politics, nationalism and religion. This includes dogmatism, which can be conceptualized as a content-free dimension of ideological thought reflecting the certainty with which ideological beliefs are held and the intolerance displayed towards alternative or opposing beliefs [34–36]. Evaluating the psychological similarities and differences between diverse ideological orientations in concert facilitates a comprehensive overview of the nature of ideological cognition. Here, we seek to map out the psychological landscape of these ideological orientations by investigating which psychological factors among those measured by a large battery of cognitive tasks and personality surveys are most predictive of an individual’s ideological inclinations. This work aims to bridge methodologies across the cognitive and political sciences, identify key foci for future research, and illustrate the use of incorporating cognitive and personality assessments when predicting ideological convictions.”
A map of the socio-political psychological landscape. Just like the work that went into the Cambridge Analytica scandal, except in that case it was all about leveraging Facebook profile information. This study appears to be far more in-depth and general, which means it’s the kind of research that will be the foundation for future Cambridge Analytica-style scandals. And what they found was that when you distilled the psychological traits they measured down to three dimensions — political conservatism, dogmatism, and religiosity — there was a remarkably strong correlation between those psychological traits and fundamental cognitive traits like the caution exhibited in the cognitive speed tests. Based on these findings there really does appear to be a significant correlation between politics and these fundamental dimensions of personality.
Again, these findings aren’t super surprising. These associations between politics and fundamental cognitive processes have long been suspected. But this is the kind of research that allows people to translate those suspicions into actionable strategies of mass manipulation, as the Cambridge Analytica scandal made clear. That’s part of why this research is so important. It’s the kind of research that could enable more sophisticated Cambridge Analytica-style manipulation campaigns in the future, but could also help us detect those campaigns:
And note the interesting finding distinguishing social conservatives (those scoring high on both political conservatism and religiosity) from economic conservatives (those scoring high on political conservatism but not religiosity): the social conservatives tend to have a heightened agreeableness and risk perception that the economic conservatives (i.e. libertarians) lacked, while the economic conservatives had an enhanced sensation-seeking that the social conservatives lacked. It’s the kind of fundamental psychological divide with the kind of wedge-potential that could shape the future of politics. Which, again, is why this kind of research is so important: this is the kind of knowledge the future of political campaigning is going to be based on:
Of course, future political machinations are only going to rely on this kind of research if it pans out and can reliably produce the desired results. Which remains an open question. But the question of whether or not political campaigns can successfully persuade voters based on psychological profiles isn’t going to remain open forever. The longer Facebook keeps itself ripe for crass psycho-political manipulation, the more empirical data we’re going to have on whether or not this stuff works. It’s never been entirely clear if the Cambridge Analytica operation actually succeeded in changing voter attitudes. But we’ll find out sooner or later. Thanks to the actions of Facebook, or lack of actions. Typically, actions by Mark Zuckerberg or Joel Kaplan that result in a lack of actions by the rules enforcement division of the company. As long as that general pattern of malign activist neglect continues at Facebook, the world is going to find out if this stuff actually works. Sooner rather than later. Or at least the political strategists of the future will find out if it actually works. The people being manipulated by it will presumably remain in the dark, filled with the kind of artificially high levels of emotional angst and confusion that could make them vote for the Trumps of the future.
So it turns out the future might suck much worse than necessary, but at least some of the biggest victims of that sucky future — the psychologically conservative voters who are the most vulnerable to cutting-edge psychological manipulation campaigns and are effectively tricked into voting against their best interests — won’t necessarily be fully aware of the mass suckiness because they’ll be too distracted with the cutting-edge propaganda.
Finally, note that one of the implications of this new Cambridge University research is that trashy right-wing sites like Mad World News that recycle emotionally charged content intended to play on people’s fears and bigotries could arguably be characterized as cutting-edge propaganda and the future of politics. In other words, we should fully expect to see more of it. Much more. Because it works on a visceral level. Mad World News is a prophetic name. A self-fulfilling prophetic name. Facebook really is driving the world mad with recycled bigoted trash, and it actually moves people and keeps them emotionally engaged, which is why Mad World News is such a powerhouse. Mad World News and Cambridge Analytica are the future of right-wing politics. At least that’s what recent research out of Cambridge University’s Psychology Department suggests. Have fun pondering that.
What did Mark Zuckerberg know and when did he know it? Those are the questions posed by lawsuits, made public last week, being waged by groups of Facebook shareholders angered over what they characterize as a wildly expensive corporate bribe paid by Facebook to the US Federal Trade Commission (FTC) in the form of a $5 billion settlement over the Cambridge Analytica scandal. According to the lawsuit, the FTC said in court that Facebook’s fine would have been closer to $106 million, but the company agreed to pay $5 billion as a kind of quid pro quo to shield Zuckerberg from any personal liability and to avoid having Zuckerberg or Sheryl Sandberg deposed. Yes, Facebook apparently got to pay the FTC extra so that Zuckerberg could not just avoid liability but avoid even being deposed. In February 2019, the FTC sent Facebook’s lawyers a draft complaint that named both the company and Zuckerberg personally as a defendant, according to the suit, so it would appear the large settlement wasn’t preemptive but in response to a direct threat to Zuckerberg’s personal reputation.
The suit also alleges Zuckerberg and Sandberg both declined to be interviewed in relation to a previous FTC investigation that resulted in a 2012 settlement. In that case, PricewaterhouseCoopers was hired to audit Facebook’s privacy compliance as part of the 2012 FTC settlement. Instead of making its top executives available for interviews, the company allowed other managers to provide untrue statements about the company’s practices.
It’s quite an explosive array of charges, if true. For starters, is overpaying fines in order to exculpate CEOs from liability or depositions even an option? Does that happen? Because if so, that’s not just a Facebook scandal.
But there’s one aspect of the suit that raises all sorts of fascinating questions. Questions that we probably should have been asking all along: The suit notes that part of the motivation for paying the enormous fine was to avoid the public humiliation that Zuckerberg and Sandberg would have had to endure, and that Zuckerberg reportedly has political ambitions. It’s a suspicion many had after Zuckerberg announced his national ‘listening tour’ in 2017.
And that prospect of Zuckerberg’s political ambitions raises the obvious question we really should have been asking all along: if Zuckerberg is planning on running for office, wouldn’t he want Facebook to remain a platform ripe for exactly the kind of psycho-political manipulation the Cambridge Analytica scandal was all about? Who would be better positioned to exploit the power of Facebook’s ability to manipulate voters than Zuckerberg? After all, all indications are he approved of most of these scandalous policies himself. The guy knows where the bodies are buried. He buried many of them personally. It’s what this whole lawsuit is all about:
“How it went down: In February 2019, the FTC sent Facebook’s lawyers a draft complaint that named both the company and Zuckerberg personally as a defendant, the shareholders said. The FTC also said in court that Facebook’s fine would have been closer to $106 million, but the company agreed to the $5 billion penalty to avoid having Zuckerberg or Chief Operating Officer Sheryl Sandberg deposed and any liability for the CEO, the suit alleged.”
Facebook f*#% up so badly with the Cambridge Analytica scandal that Zuckerberg himself was facing possible liabilities. Personally. So the company paid whatever it took to make Mark’s problems go away. And it apparently took $4.9 billion or so, a price this company was willing to pay to protect one person. Sure, that one person is the founder and CEO, but still, that’s not really how publicly traded corporations are supposed to behave.
It’s the kind of behavior that’s so strange from a corporate standpoint that it doesn’t just raise questions about the influence Zuckerberg has over the board — which would have had to approve this FTC deal — it also raises questions about just how much of a liability Zuckerberg’s scandalous knowledge is to the value of everyone else’s shares in Facebook. In other words, while it would generally be improper for a publicly traded company to pay billions of dollars to protect the CEO, the question of what’s right or wrong from a corporate fiduciary standpoint gets rather complicated when you think about how much damage Mark Zuckerberg could have potentially done to shareholders had he been deposed. How much would Facebook’s shareholder value have fallen if Zuckerberg, Facebook personified, had been deposed? That’s the question Facebook had to be thinking about when it approved this deal.
But it’s not just Zuckerberg who is being protected. Both Zuckerberg and Sheryl Sandberg declined to be interviewed by the auditors looking into Facebook’s compliance with the 2012 settlement. Keeping Facebook’s top executives out of the line of direct questioning is apparently a top corporate priority:
It’s the kind of picture that hints at a corporate culture where the top executives are on board with all the sleaziest stuff, everyone knows it, and therefore everyone knows they can’t allow these top executives to be deposed. It’s in everyone’s interest. At least everyone who is somehow benefiting from these scandalous policies. It’s one of the dark ironies of this shareholder lawsuit: paying $5 billion to keep Zuckerberg off the stand may have been the net value-saving move in that situation, even for the shareholders who would have seen more of their stock value wiped away had Zuckerberg been forced to face investigators. If they paid $5 billion to bribe the FTC, there’s a greater-than-$5 billion scandal about to be exposed. Perhaps quite a few greater-than-$5 billion scandals. In a way, the idea that the company paid $5 billion to protect Zuckerberg’s public image in anticipation of a run for public office is a far more benign explanation:
“If Zuckerberg had been personally named in the complaint he could have faced substantial fines for future violations and would have suffered “extensive reputational harm”, the suit claims. It adds: “The risk would have been highly material to Zuckerberg, who is extraordinarily sensitive about his public image and has been reported to have political ambitions.””
It’s hard to ignore what the plaintiffs point out: Zuckerberg really is extraordinarily sensitive about his public image. And it’s undeniable that speculation about Zuckerberg running for higher office was rampant throughout 2017 and early 2018. It was the Cambridge Analytica scandal in the spring of 2018 that really killed that meme. There’s a lot of circumstantial evidence backing what these shareholder plaintiffs are alleging.
But did Facebook make these moves on behalf of Mark Zuckerberg? Or because of Mark Zuckerberg? Was allowing Zuckerberg to get deposed ever really a realistic option for the company? Or would it have ultimately cost shareholders far more than $5 billion in value had it allowed Zuckerberg to be deposed and reveal high-level knowledge of all the wrongdoing? That’s one of the big questions these shareholders face in making the case that Facebook paid billions to make Zuckerberg’s problems go away, and not Zuckerberg’s and Facebook’s problems:
Not that these are mutually exclusive scenarios. We could be looking at a situation where Facebook’s board agreed that it was worth paying $5 billion in fines in order to avoid the potential fallout of Zuckerberg being deposed, because that could have ultimately cost the company far more than $5 billion if the scope of Zuckerberg’s awareness of wrongdoing was revealed to regulators. And at the same time, Zuckerberg might have ongoing political ambitions that the board is more than happy to corruptly help protect.
That’s all part of what’s going to make this lawsuit a fascinating legal story to watch play out. Zuckerberg’s personal culpability in a broad range of Facebook scandals is Facebook’s best defense against charges that it improperly paid billions just to protect Zuckerberg’s personal reputation. Good luck to the shareholders. It’ll be interesting to see if Zuckerberg ends up getting deposed. Or if Facebook ends up settling for a surprisingly large sum without a Zuckerberg deposition.