Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

News & Supplemental  

The Cambridge Analytica Microcosm in Our Panoptic Macrocosm

Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook. Which would be a well-deserved unfriending after the scandalous revelations in a recent series of articles centered around the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with the far right goals and questionable actions of the firm.

And it turns out those questionable actions by Cambridge Analytica involve a far larger and more scandalous Facebook policy, revealed by another whistle-blower: Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012.

So here’s a rough breakdown of what’s been learned so far:

According to Christopher Wylie, Cambridge Analytica was “harvesting” massive amounts of data off of Facebook from people who did not give their permission, by utilizing a Facebook loophole. This “friends permissions” loophole allowed app developers to scrape information not just from the Facebook profiles of the people who agreed to use their apps but also from their friends’ profiles too. In other words, if your Facebook friend downloaded Cambridge Analytica’s app, Cambridge Analytica was allowed to grab private information from your Facebook profile without your permission. And you would never know it.

So how many profiles was Cambridge Analytica allowed to “harvest” utilizing this “friends permission” feature? About 50 million, and only a tiny fraction (~270,000) of those 50 million people actually agreed to use Cambridge Analytica’s app. The rest were all their friends. So Facebook literally allowed the connectivity of its users to be used against them.

Keep in mind that this isn’t a new revelation. There were reports last year about how Cambridge Analytica paid ~100,000 people a dollar or two (via Amazon’s Mechanical Turk micro-task platform) to take an online survey. But the only way they could be paid was to download an app that gave Cambridge Analytica access to the profiles of all their Facebook friends, eventually yielding ~30 million “harvested” profiles. Although according to these new reports that number is closer to 50 million profiles.

Before that, there was also a report from December of 2015 about Cambridge Analytica’s building of “psychographic profiles” for the Ted Cruz campaign. And that report also included the fact that this involved Facebook data harvested largely without users’ permissions.

So the fact that Cambridge Analytica was secretly harvesting private Facebook user data without permission isn’t the big revelation here. What’s new is the revelation that what Cambridge Analytica did was integral to Facebook’s business model for years and remarkably widespread.

This is where Sandy Parakilas comes into the picture. According to Parakilas, the profile-scraping loophole that Cambridge Analytica exploited with its app was routinely exploited by possibly hundreds of thousands of other app developers for years. Yep. It turns out that Facebook had an arrangement going back to 2007 under which the company would get a 30 percent cut of the money app developers made off their Facebook apps, and in exchange these developers were given the ability to scrape the profiles of not just the people who used their apps but also their friends. In other words, Facebook was essentially selling the private information of its users to app developers. Secretly. Well, except it wasn’t a secret to all those app developers. That’s also part of this scandal.

This “friends permission” feature started getting phased out around 2012, although it turns out Cambridge Analytica was one of the very last apps allowed to use it up into 2014.

Facebook has tried to defend itself by asserting that it was only making this data available for things like academic research and that Cambridge Analytica was therefore misusing it. And academic research was in fact the cover story Cambridge Analytica used. Cambridge Analytica actually set up a shell company, Global Science Research (GSR), that was run by a Cambridge University professor, Aleksandr Kogan, and claimed to be purely interested in using that Facebook data for academic research. The collected data was then sent off to Cambridge Analytica. But according to Parakilas, Facebook was allowing developers to utilize this “friends permissions” feature for reasons as vague as “improving user experiences”. Parakilas saw plenty of apps harvesting this data for commercial purposes. Even worse, both Parakilas and Wylie paint a picture of Facebook releasing this data and then doing almost nothing to ensure that it was not misused.

So we’ve learned that Facebook was allowing app developers to “harvest” private data on Facebook users without their permission from 2007 to 2014, and now we get to perhaps the most chilling part: According to Parakilas, this data is almost certainly floating around in the black market. And it was so easy to set up an app and start collecting this kind of data that anyone with basic app-building skills could start trawling Facebook for data. And a majority of Facebook users probably had their profiles secretly “harvested” during this period. If true, that means there’s likely a massive black market of Facebook user profiles just floating around out there, and Facebook has done little to nothing to address it.

Parakilas, whose job it was to police data breaches by third-party software developers from 2011 to 2012, understandably grew quite concerned over the risks to user data inherent in this business model. So what did Facebook’s leadership do when he raised these concerns? They essentially responded with a “do you really want to know how this data is being used?” attitude and actively discouraged him from investigating how the data might be abused. Intentionally not knowing about abuses was another part of the business model. Crackdowns on “rogue developers” were very rare, and the approval of Facebook CEO Mark Zuckerberg himself was required to get an app kicked off the platform.

Facebook has been publicly denying allegations like this for years. It was the public denials that led Parakilas to come forward.

And it gets worse. It turns out that Aleksandr Kogan, the University of Cambridge academic who ended up teaming up with Cambridge Analytica and built the app that harvested the data, has a remarkably close working relationship with Facebook. So close that Kogan actually co-authored an academic study published in 2015 with Facebook employees. In addition, one of Kogan’s partners in the data harvesting, Joseph Chancellor, was also an author on the study and went on to join Facebook a few months after it was published.

It also looks like Steve Bannon was overseeing this entire process, although he claims to know nothing.

Oh, and Palantir, the private intelligence firm with deep ties to the US national security state owned by far right Facebook board member Peter Thiel, appears to have had an informal relationship with Cambridge Analytica this whole time, with Palantir employees reportedly traveling to Cambridge Analytica’s office to help build the psychological profiles. And this state of affairs is an extension of how the internet has been used from its very conception a half century ago.

And that’s all part of why the Great Unfriending of Facebook really is long overdue. It’s one really big reason to delete your Facebook account, made up of many, many small egregious reasons.

So let’s start taking a look at those many small reasons to delete your Facebook account, beginning with a New York Times story about Christopher Wylie, the origins of Cambridge Analytica, and the crucial role Facebook “harvesting” played in providing the company with the data it needed to carry out the goals of its chief financiers: waging the kind of ‘culture war’ that the billionaire far right Mercer family and Steve Bannon wanted to wage:

The New York Times

How Trump Consultants Exploited the Facebook Data of Millions

By MATTHEW ROSENBERG, NICHOLAS CONFESSORE and CAROLE CADWALLADR
MARCH 17, 2018

LONDON — As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem.

The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.

So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.

Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”

“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”

Details of Cambridge’s acquisition and use of Facebook data have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate about the merits of the firm’s so-called psychographic modeling techniques.

But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.

Cambridge paid to acquire the personal information through an outside researcher who, Facebook says, claimed to be collecting it for academic purposes.

During a week of inquiries from The Times, Facebook downplayed the scope of the leak and questioned whether any of the data still remained out of its control. But on Friday, the company posted a statement expressing alarm and promising to take action.

“This was a scam — and a fraud,” Paul Grewal, a vice president and deputy general counsel at the social network, said in a statement to The Times earlier on Friday. He added that the company was suspending Cambridge Analytica, Mr. Wylie and the researcher, Aleksandr Kogan, a Russian-American academic, from Facebook. “We will take whatever steps are required to see that the data in question is deleted once and for all — and take action against all offending parties,” Mr. Grewal said.

Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.

In Britain, Cambridge Analytica is facing intertwined investigations by Parliament and government regulators into allegations that it performed illegal work on the “Brexit” campaign. The country has strict privacy laws, and its information commissioner announced on Saturday that she was looking into whether the Facebook data was “illegally acquired and used.”

In the United States, Mr. Mercer’s daughter, Rebekah, a board member, Mr. Bannon and Mr. Nix received warnings from their lawyer that it was illegal to employ foreigners in political campaigns, according to company documents and former employees.

Congressional investigators have questioned Mr. Nix about the company’s role in the Trump campaign. And the Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.

While the substance of Mr. Mueller’s interest is a closely guarded secret, documents viewed by The Times indicate that the firm’s British affiliate claims to have worked in Russia and Ukraine. And the WikiLeaks founder, Julian Assange, disclosed in October that Mr. Nix had reached out to him during the campaign in hopes of obtaining private emails belonging to Mr. Trump’s Democratic opponent, Hillary Clinton.

The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.” Only a tiny fraction of the users had agreed to release their information to a third party.

“Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”

Still, he added, “it’s a serious abuse of our rules.”

Reading Voters’ Minds

The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.

Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.

The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.

Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.

Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.

Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.

The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.

Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.

But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.

Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.

When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”

He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.

“We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.”

Mr. Nix tells a different story. Appearing before a parliamentary committee last month, he described Dr. Kogan’s contributions as “fruitless.”

An International Effort

Just as Dr. Kogan’s efforts were getting underway, Mr. Mercer agreed to invest $15 million in a joint venture with SCL’s elections division. The partners devised a convoluted corporate structure, forming a new American company, owned almost entirely by Mr. Mercer, with a license to the psychographics platform developed by Mr. Wylie’s team, according to company documents. Mr. Bannon, who became a board member and investor, chose the name: Cambridge Analytica.

The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European.

But in July 2014, an American election lawyer advising the company, Laurence Levy, warned that the arrangement could violate laws limiting the involvement of foreign nationals in American elections.

In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, the lawyer, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.”

In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican.

Cambridge Analytica, in its statement to The Times, said that all “personnel in strategic roles were U.S. nationals or green card holders.” Mr. Nix “never had any strategic or operational role” in an American election campaign, the company said.

Whether the company’s American ventures violated election laws would depend on foreign employees’ roles in each campaign, and on whether their work counted as strategic advice under Federal Election Commission rules.

Cambridge Analytica appears to have exhibited a similar pattern in the 2016 election cycle, when the company worked for the campaigns of Mr. Cruz and then Mr. Trump. While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees.

Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.

Cambridge executives have offered conflicting accounts about the use of psychographic data on the campaign. Mr. Nix has said that the firm’s profiles helped shape Mr. Trump’s strategy — statements disputed by other campaign officials — but also that Cambridge did not have enough time to comprehensively model Trump voters.

In a BBC interview last December, Mr. Nix said that the Trump efforts drew on “legacy psychographics” built for the Cruz campaign.

After the Leak

By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.

Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”

Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.

Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”

Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”

But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.

While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.

Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations.

All the scrutiny appears to have damaged Cambridge Analytica’s political business. No American campaigns or “super PACs” have yet reported paying the company for work in the 2018 midterms, and it is unclear whether Cambridge will be asked to join Mr. Trump’s re-election campaign.

In the meantime, Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” he puts it. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.

———-

“How Trump Consultants Exploited the Facebook Data of Millions” by MATTHEW ROSENBERG, NICHOLAS CONFESSORE and CAROLE CADWALLADR; The New York Times; 03/17/2018

“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”

Cambridge Analytica was supposed to be the arsenal of weapons to fight the culture war Cambridge Analytica’s leadership wanted to wage. But that arsenal couldn’t be built without data on what makes us ‘tick’. That’s where Facebook profile harvesting came in:

The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.

So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.

Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”

And the acquisition of these 50 million Facebook profiles was never acknowledged by Facebook until now. And most or perhaps all of that data is still in the hands of Cambridge Analytica:


But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.

And Facebook isn’t alone in suddenly discovering that its data was “harvested” by Cambridge Analytica. Cambridge Analytica itself wouldn’t admit this either. Until now. Now Cambridge Analytica admits it did indeed obtain Facebook’s data. But the company blames it all on Aleksandr Kogan, the Cambridge University academic who ran the front company that paid people to take the psychological profile surveys, for violating Facebook’s data usage rules. It also claims it deleted all the “harvested” information two years ago, as soon as it learned there was a problem. That’s Cambridge Analytica’s new story and it’s sticking to it. For now:


Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.

But Christopher Wylie has a very different recollection of events. In 2013, Wylie was a 24-year-old political operative with ties to veterans of President Obama’s campaigns who was interested in using psychological traits to affect voters’ behavior. He even had a team of psychologists and data scientists, some of them affiliated with Cambridge University (where Aleksandr Kogan was also working at the time). And that expertise in psychological profiling for political purposes is why Mr. Nix recruited Wylie and his team.

Then Nix had a chance meeting with Steve Bannon and Robert Mercer. Mercer showed interest in the company because he believed it could make him a Republican kingmaker, while Bannon was focused on the possibility of using personality profiling to shift America’s culture and rewire its politics. The Mercers ended up investing $1.5 million in a pilot project: polling voters and testing psychographic messaging in Virginia’s 2013 gubernatorial race:


The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.

Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.

The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.

Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.

Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and the Mercers declined to comment.

Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.

So the pilot project proceeded, but there was a problem: Wylie’s team simply did not have the data it needed. All it had was the kind of data traditional analytics firms use: voting records and consumer purchase histories. And acquiring the kind of data they wanted, data that could give insight into voters’ neuroticism and other psychological traits, could be very expensive:


The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.

Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.

But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.

And that’s where Aleksandr Kogan enters the picture: First, Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of setup he needed. Researchers there claimed to have developed techniques for mapping personality traits based on what people “liked” on Facebook. Better yet, this team already had an app that paid users small sums to take a personality quiz and that would scrape private information from their Facebook profiles and from their friends’ Facebook profiles. In other words, Cambridge University’s Psychometrics Centre was already employing exactly the kind of “harvesting” model that Kogan and Cambridge Analytica would eventually adopt.

But there was a problem for Wylie and his team: Cambridge University’s Psychometrics Centre declined to work with them:


Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.

But it wasn’t a particularly big problem, because Wylie found another Cambridge University psychology professor who was familiar with the techniques and willing to do the job: Aleksandr Kogan. So Kogan built his own psychological profiling app and began harvesting data for Cambridge Analytica in June 2014. Kogan was even allowed to keep the harvested data for his own research, according to his contract with Cambridge Analytica. According to Facebook, the only thing Kogan told the company, and told the users of his app in the fine print, was that he was collecting information for academic purposes, although Facebook never appears to have attempted to verify that claim:


When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.

All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”

In the end, Kogan’s app managed to “harvest” 50 million Facebook profiles based on a mere 270,000 people actually signing up for Kogan’s app. So for each person who signed up for the app there were ~185 other people who had their profiles sent to Kogan too.

And 30 million of those profiles contained information, like places of residence, that allowed the firm to match those Facebook profiles with other records (presumably non-Facebook records) and build psychographic profiles, implying that those 30 million records were mapped to real-life people:


He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million — a number previously reported by The Intercept — contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.

Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.

So this harvesting starts in mid-2014, but by early 2015, Wylie and more than half his original team leave to start a rival firm, although it sounds like concerns over the far right causes they were working for were also behind their departure:


By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.

Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”

Finally, this whole scandal goes public. Well, at least partially: At the end of 2015, the Guardian reported on the Facebook profile collection scheme Cambridge Analytica was running for the Ted Cruz campaign. Facebook didn’t publicly acknowledge the truth of this report, but it did publicly state that it was “carefully investigating this situation.” Facebook also sent a letter to Cambridge Analytica demanding that it destroy this data…except the letter wasn’t sent until August of 2016.


Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.

Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by the Times. “It cannot be used legitimately in the future and must be deleted immediately.”

Facebook now claims that “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But, of course, that certification was false. The New York Times was shown sets of the raw data.

And even more disturbing, a former Cambridge Analytica employee claims he recently saw hundreds of gigabytes on Cambridge Analytica’s servers. Unencrypted. Which means that data could potentially be grabbed by any Cambridge Analytica employee with access to that server:


Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”

But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.

While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.

So, to summarize the key points from this New York Times article:

1. In 2013, Cambridge Analytica is formed when Alexander Nix, then a salesman for the small elections division at SCL Group, recruits Christopher Wylie and a team of psychologists to help develop a “political data” unit at the company, with an eye on the 2014 US mid-terms.

2. By chance, Nix and Wylie meet Steve Bannon and Robert Mercer, who are quickly sold on the idea of psychographic profiling for political purposes. Bannon was intrigued by the idea of using this data to wage the “culture war.” Mercer agrees to put $1.5 million into a pilot project involving the Virginia gubernatorial race. Their success is limited, as Wylie soon discovers that they don’t have the data they really need to carry out their psychographic profiling project. But Robert Mercer remained committed to the project.

3. Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of data they were seeking. Data that was being collected via an app administered through Facebook, where people were paid small amounts of money to take a survey, and in exchange Cambridge University’s Psychometrics Centre was allowed to scrape their Facebook profiles as well as the profiles of all their Facebook friends.

4. Cambridge University’s Psychometrics Centre rejected Wylie’s offer to work with them, but another Cambridge University psychology professor, Aleksandr Kogan, was willing. Kogan proceeded to start a company (as a front for Cambridge Analytica) and develop his own app, getting ~270,000 people to download it and give their permission for their profiles to be collected. But using the “friends permission” feature, Kogan’s app ended up collecting ~50 million Facebook profiles in total, overwhelmingly from the friends of those 270,000 people. ~30 million of those profiles were matched to US voters.

5. By early 2015, Wylie and his left-leaning team members leave Cambridge Analytica and form their own company, apparently due to concerns over the far right goals of the firm.

6. Cambridge Analytica goes on to work for the Ted Cruz campaign. In late 2015, it’s reported that Cambridge Analytica’s work for Cruz involved Facebook data from people who didn’t give permission for its use. Facebook issues a vague statement about how it’s going to investigate.

7. In August 2016, Facebook sends a letter to Cambridge Analytica asserting that the data was obtained and used without permission and must be deleted immediately. The New York Times was just shown copies of exactly that data to write this article. Hundreds of gigabytes of data that is completely outside Facebook’s control.

8. Cambridge Analytica CEO (now former CEO) Alexander Nix told lawmakers that the firm didn’t possess any Facebook data. So he was clearly lying.

9. Finally, a former Cambridge Analytica employee showed the New York Times hundreds of gigabytes of Facebook data. And it was unencrypted, so anyone with access to it could make a copy and give it to whoever they want.

And that’s what we learned from just the New York Times’s version of this story. The Observer, the Guardian’s Sunday sister paper, was also talking with Christopher Wylie and other Cambridge Analytica whistle-blowers. And while it largely covers the same story as the New York Times report, the Observer article contains some additional details.

1. For starters, the following article notes that Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app and barred it from being sold on or used for advertising. That’s important to note because the stated use of the data grabbed by Aleksandr Kogan’s app was for research purposes. But “improving user experience in the app” is a far more generic reason for grabbing that data than academic research. And that hints at something we’re going to see below from a Facebook whistle-blower: all sorts of app developers were grabbing this kind of data through the ‘friends’ loophole for reasons that had absolutely nothing to do with academic purposes, and Facebook deemed this fine.

2. Facebook didn’t formally suspend Cambridge Analytica and Aleksandr Kogan from the platform until one day before the Observer article was published, more than two years after the initial reports in late 2015 about Cambridge Analytica misusing Facebook data for the Ted Cruz campaign. So if Facebook felt that Cambridge Analytica and Aleksandr Kogan were improperly obtaining and misusing its data, it sure tried hard not to let on until the very last moment.

3. Simon Milner, Facebook’s UK policy director, when asked by UK MPs whether Cambridge Analytica had Facebook data, said: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.” Which, again, as we’re going to see, was a total lie according to a Facebook whistle-blower, because Facebook was routinely providing exactly the kind of data Kogan’s app collected to thousands of developers.

4. Aleksandr Kogan had a license from Facebook to collect profile data, but only for research purposes, so when he used the data for commercial purposes he was violating his agreement, according to the article. Kogan, however, maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps. And as we’re going to see in subsequent articles, it does indeed look like Kogan is correct: he was very open about using the data from the Cambridge Analytica app for commercial purposes, and Facebook had no problem with this.

5. In addition to being a Cambridge University professor, Aleksandr Kogan has links to a Russian university and took Russian grants for research. This will undoubtedly raise speculation about the possibility that Kogan’s data was handed over to the Kremlin and used in the social-media influencing campaign carried out by the Kremlin-linked Internet Research Agency. If so, it’s still important to keep in mind that, based on what we’re going to see from Facebook whistle-blower Sandy Parakilas, the Kremlin could have easily set up all sorts of Facebook apps for collecting this kind of data because apparently anyone could do it as long as the data was for “improving the user experience”. That’s how obscene this situation is. Kogan was not at all needed to provide this data to the Kremlin because it was so easy for anyone to obtain. In other words, we should assume all sorts of governments have this kind of data.

6. The legal letter Facebook sent to Cambridge Analytica in August 2016 demanding that it delete the data went out just days before it was officially announced that Steve Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him. That timing sure makes it seem like Facebook knew about Bannon’s involvement with Cambridge Analytica and knew that Bannon was about to become Trump’s campaign manager and bring Cambridge Analytica into the campaign.

7. Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.”

So as we can see, like the proverbial onion, the more layers you peel back on the story Cambridge Analytica and Facebook have been peddling about how this data was obtained and used, the more acrid and malodorous it gets. With a distinct tinge of BS:

The Guardian

Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach

Whistleblower describes how firm linked to former Trump adviser Steve Bannon compiled user data to target American voters

Carole Cadwalladr and Emma Graham-Harrison

Sat 17 Mar 2018 18.03 EDT

The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in one of the tech giant’s biggest ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box.

A whistleblower has revealed to the Observer how Cambridge Analytica – a company owned by the hedge fund billionaire Robert Mercer, and headed at the time by Trump’s key adviser Steve Bannon – used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.

Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.

The New York Times is reporting that copies of the data harvested for Cambridge Analytica could still be found online; its reporting team had viewed some of the raw data.

The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.

Cambridge Analytica and Facebook are one focus of an inquiry into data and politics by the British Information Commissioner’s Office. Separately, the Electoral Commission is also investigating what role Cambridge Analytica played in the EU referendum.

On Friday, four days after the Observer sought comment for this story, but more than two years after the data breach was first reported, Facebook announced that it was suspending Cambridge Analytica and Kogan from the platform, pending further information over misuse of data. Separately, Facebook’s external lawyers warned the Observer it was making “false and defamatory” allegations, and reserved Facebook’s legal position.

The revelations provoked widespread outrage. The Massachusetts Attorney General Maura Healey announced that the state would be launching an investigation. “Residents deserve answers immediately from Facebook and Cambridge Analytica,” she said on Twitter.

The Democratic senator Mark Warner said the harvesting of data on such a vast scale for political targeting underlined the need for Congress to improve controls. He has proposed an Honest Ads Act to regulate online political advertising the same way as television, radio and print. “This story is more evidence that the online political advertising market is essentially the Wild West. Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, it’s clear that, left unregulated, this market will continue to be prone to deception and lacking in transparency,” he said.

Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”

Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”

Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.

At the time of the data breach, Wylie was a Cambridge Analytica employee, but Facebook described him as working for Eunoia Technologies, a firm he set up on his own after leaving his former employer in late 2014.

The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.

That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.

“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.

Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.

“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”

Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”

A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.

Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.

Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.

Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.

The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.

The algorithm and database together made a powerful political tool. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.

“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.

At the time, more than 50 million profiles represented around a third of active North American Facebook users, and nearly a quarter of potential US voters. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.”

Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.

GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.

SCL Elections, an affiliate, worked with Facebook over the period to ensure it was satisfied no terms had been “knowingly breached” and provided a signed statement that all data and derivatives had been deleted, he said. Cambridge Analytica also said none of the data was used in the 2016 presidential election.

Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquires to Nix.

———-

“Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach” by Carole Cadwalladr and Emma Graham-Harrison; The Guardian; 03/17/2018

“Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.””

Exploiting everyone’s inner demons. Yeah, that sounds like something Steve Bannon and Robert Mercer would be interested in. And it explains why Facebook data would have been potentially so useful for exploiting those demons. Recall that the original non-Facebook data that Christopher Wylie and the initial Cambridge Analytica team were working with in 2013 and 2014 wasn’t seen as effective. It didn’t have that inner-demon-influencing granularity. And then they discovered the Facebook data available through this app loophole and it was taken to a different level. Remember when Facebook ran that controversial experiment on users where it tried to manipulate their emotions by altering their news feeds? It sounds like that’s what Cambridge Analytica was basically trying to do using Facebook ads instead of the news feed, but perhaps in a more microtargeted way.

And that’s all because Facebook’s “platform policy” allowed the collection of friends’ data to “improve user experience in the app,” with the unenforced requirement that the data not be sold on or used for advertising:


The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions-strong. Facebook’s “platform policy” allowed only collection of friends’ data to improve user experience in the app and barred it being sold on or used for advertising. The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.

Just imagine how many app developers were using this over the 2007-2014 period when Facebook’s “platform policy” allowed capturing friends’ data “to improve user experience in the app”. It wasn’t just Cambridge Analytica that took advantage of this. That’s a big part of the story here.

And yet when Simon Milner, Facebook’s UK policy director, was asked if Cambridge Analytica had Facebook data, he said, “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”:


Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”

Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”

And note how the article says Wylie’s dossier “includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles.” It’s not clear whether that refers to emails, invoices, contracts and bank transfers involved in setting up Cambridge Analytica’s operation, or to emails, invoices, contracts and bank transfers from Facebook users, but if it was from users that would be wildly scandalous:


Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.

So it will be interesting to see if that point of ambiguity is ever clarified somewhere. Because wow would that be scandalous if emails, invoices, contracts and bank transfers of Facebook users were released through this “platform policy”.

Either way, it looks unambiguously awful for Facebook. Especially now that we learn that the legal letter Facebook sent in August of 2016 demanding the destruction of the data was suspiciously sent just days before Steve Bannon, a founder and officer of Cambridge Analytica, became Trump’s campaign manager and brought the company into the Trump campaign:


The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.

That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.

“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.

And the only thing Facebook did to confirm that the Facebook data wasn’t misused, according to Christopher Wylie, was to ask him to tick a box on a form:


Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.

“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”

And, again, Facebook denied its data was passed along to Cambridge Analytica when questioned by both the US Congress and UK Parliament:


Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.

A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.

And note how Facebook now admits Aleksandr Kogan did indeed get the data legitimately; it just wasn’t used properly. That’s why Facebook says this shouldn’t be called a “data breach”, because the data was obtained through the proper channels:


Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.

Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.

But Aleksandr Kogan isn’t simply arguing that he did nothing wrong when he obtained that Facebook data via his app. Kogan also argues that he had a “close working relationship” with Facebook, which had granted him permission for his apps, and that everything he did with the data was legal. Kogan’s story is quite notable because, as we’ll see below, there is evidence that his story is the closest to the truth of all the stories we’re hearing: Facebook was totally fine with Kogan’s apps obtaining the private data of millions of Facebook users’ friends. And Facebook was perfectly fine with how that data was used, or was at least consciously trying not to know how the data might be misused. That’s the picture that’s going to emerge, so keep it in mind when Kogan asserts that he had a “close working relationship” with Facebook. Based on the available evidence, he probably did:


Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.

Kogan maintains everything he did was legal, and guess what? It probably was legal. That’s part of the scandal here.

And regarding that testimony by Cambridge Analytica’s now-former CEO Alexander Nix that the company never worked with Facebook data, note how the Observer got to see a copy of the contract Cambridge Analytica entered into with Kogan’s GSR, and the contract was entirely premised on harvesting and processing the Facebook data. Which, again, hints at the likelihood that they thought what they were doing at the time (2014) was completely legal. They spelled it out in the contract:


The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.

“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.

Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.

GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.

““The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.”

A contract to create a ‘gold standard’ of 2 million Facebook accounts that are ‘matched’ to real life voters for the use of “understanding personality from Facebook profile information.” That was the actual contract Kogan had with Cambridge Analytica. All for the purpose of developing a system that would allow Cambridge Analytica to infer your inner demons from your Facebook profile and then manipulate them.

So it’s worth noting how the app permissions setup Facebook allowed from 2007-2014, letting app developers collect the Facebook profile information of both the people who used their apps and their friends, created this amazing arrangement where app developers could generate a ‘gold standard’ data set from the people using their apps and a test set from all their friends. If the goal was getting people to encourage their friends to download an app, that would have been a very useful data set. But it would of course also have been an incredibly useful data set for anyone who wanted to collect the profile information of Facebook users. Because, again, as we’re going to see, a Facebook whistle-blower is claiming that Facebook user profile information was routinely handed out to app developers.

So if an app developer wanted to experiment on, say, how to use that available Facebook profile information to manipulate people, getting a ‘gold standard’ of people to take a psychological profile survey would be an important step in carrying out that experiment. Because those people who take your psychological survey form the data set you can use to train your algorithms that take Facebook profile information as the input and create psychological profile data as the output.

And that’s what Aleksandr Kogan’s app was doing: grabbing psychological information from the survey while simultaneously grabbing the Facebook profile data of the test-takers, along with the Facebook profile data of all their friends. Kogan’s ‘gold standard’ training set was the people who actually used his app and handed over a bunch of personality information via the survey, and the test set would have been the tens of millions of friends whose data was also collected. Since the goal of Cambridge Analytica was to infer personality characteristics from people’s Facebook profiles, pairing the personality surveys from the ~270,000 people who took the app survey with their Facebook profiles allowed Cambridge Analytica to train the algorithms that guessed at personality characteristics from Facebook profile information. Then they had the profile information of the rest of the ~50 million people to apply those algorithms to.

Recall how Trump’s 2016 campaign digital director, Brad Parscale, curiously downplayed the utility of Cambridge Analytica’s data during interviews where he was bragging about how the campaign used Facebook’s ad micro-targeting features to run “A/B testing on steroids” on micro-targeted audiences, i.e. strategically exposing micro-targeted Facebook audiences to sets of ads that differed in some specific way designed to explore a particular psychological dimension of that micro-audience. So it’s worth noting that the “A/B testing on steroids” Brad Parscale referred to was probably focused on the ~30 million of that ~50 million set of people whose harvested Facebook profiles could be matched back to real people. Those ~30 million Facebook users were the test set. And the algorithms designed to guess the psychological makeup of people from their Facebook profiles, refined on the training set of ~270,000 Facebook users who took the psychological surveys, were likely unleashed on that test set of ~30 million people.
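
For readers unfamiliar with the mechanics, a plain A/B test of two ad variants on a micro-targeted audience comes down to a two-proportion comparison. The sketch below is a generic illustration with invented click counts, not anything from the Trump campaign’s actual tooling; “on steroids” just means running many such comparisons in parallel across many micro-audiences:

```python
import math

def ab_test(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: return (lift, z) for variant B vs variant A."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Invented example: variant A got 300 clicks from 10,000 impressions;
# variant B (the same ad reframed, say, around fear) got 360 from 10,000.
lift, z = ab_test(300, 10_000, 360, 10_000)
significant = abs(z) > 1.96   # 95% confidence threshold
```

With these numbers the lift is 0.6 percentage points of click-through rate and the z-statistic clears the 95% threshold, so the framing of variant B would be declared the winner for that micro-audience and the next pair of variants queued up.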

So when we find out that the Cambridge Analytica contract with Aleksandr Kogan’s GSR company included language like building a “gold standard”, keep in mind that this implied that there was a lot of testing to do after the algorithmic refinements based on that gold standard. And the ~30-50 million profiles they collected from the friends of the ~270,000 people who downloaded Kogan’s app made for quite a test set.

Also keep in mind that the denials that Cambridge Analytica worked with Facebook data by former CEO Alexander Nix aren’t the only laughable denials by Cambridge Analytica’s officers. Any denials by Steve Bannon and his lawyers that he knew about Cambridge Analytica’s use of Facebook profile data should also be seen as laughable, starting with the denial from Bannon’s lawyers that he knows nothing about what Wylie and others are claiming:


Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquires to Nix.

Steve Bannon: the Boss Who Knows Nothing (Or So He Says)

Steve Bannon “knows nothing about the claims being asserted.” LOL! Yeah, well, not according to Christopher Wylie, who, in the following article, makes some rather significant claims about the role of Steve Bannon in all this. According to Wylie:

1. Steve Bannon was the person overseeing the acquisition of Facebook data by Cambridge Analytica. As Wylie put it, “We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss.” Now, when Wylie says Bannon was Nix’s boss, note that Bannon served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016. And Nix was CEO during this period. So technically Nix was the boss. But it sounds like Bannon was effectively the boss, according to Wylie.

2. Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the Facebook data. But Wylie does say that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And Bannon “approved the data-collection scheme we were proposing”. So if Bannon and Mercer didn’t know the details of how the purchase of massive amounts of Facebook data took place, that would be pretty remarkable. Remarkably uncurious, given that acquiring this data was at the core of what the company was doing and they approved the data-collection scheme. A scheme that involved having Aleksandr Kogan set up a separate company. That was the “scheme” Bannon and Mercer would have had to approve, so if they really didn’t realize they were acquiring this Facebook data using the “friends permission” feature Facebook made available to app developers, that would have been a significant oversight.

The article goes on to include a few more fun facts, like…

3. Cambridge Analytica was doing focus group tests on voters in 2014 and identified many of the same underlying emotional sentiments in voters that formed the core message behind Donald Trump’s campaign. In focus groups for the 2014 midterms, the firm found that voters responded to calls for building a wall with Mexico, “draining the swamp” in Washington DC, and to thinly veiled forms of racism toward African Americans called “race realism”. The firm also tested voter attitudes towards Russian President Vladimir Putin and discovered that a lot of Americans really like the idea of a really strong authoritarian leader. Again, this was all discovered before Trump even jumped into the race.

4. The Trump campaign rejected early overtures to hire Cambridge Analytica, which suggests that Trump was actually the top choice of the Mercers and Bannon, ahead of Ted Cruz.

5. Cambridge Analytica CEO Alexander Nix was caught by Channel 4 News in the UK boasting about the secrecy of his firm, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

So based on these allegations, Steve Bannon was closely involved in approving the various schemes to acquire Facebook data, probably using self-destructing emails in the process:

The Washington Post

Bannon oversaw Cambridge Analytica’s collection of Facebook data, according to former employee

By Craig Timberg, Karla Adam and Michael Kranish
March 20, 2018 at 7:53 PM

LONDON — Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.

The 2014 effort was part of a high-tech form of voter persuasion touted by the company, which under Bannon identified and tested the power of anti-establishment messages that later would emerge as central themes in President Trump’s campaign speeches, according to Chris Wylie, who left the company at the end of that year.

Among the messages tested were “drain the swamp” and “deep state,” he said.

Cambridge Analytica, which worked for Trump’s 2016 campaign, is now facing questions about alleged unethical practices, including charges that the firm improperly handled the data of tens of millions of Facebook users. On Tuesday, the company’s board announced that it was suspending its chief executive, Alexander Nix, after British television released secret recordings that appeared to show him talking about entrapping political opponents.

More than three years before he served as Trump’s chief political strategist, Bannon helped launch Cambridge Analytica with the financial backing of the wealthy Mercer family as part of a broader effort to create a populist power base. Earlier this year, the Mercers cut ties with Bannon after he was quoted making incendiary comments about Trump and his family.

In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”

Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.

Bannon received more than $125,000 in consulting fees from Cambridge Analytica in 2016 and owned “membership units” in the company worth between $1 million and $5 million, according to his financial disclosure.

It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.

Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.

Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.

Bannon “approved the data-collection scheme we were proposing,” Wylie said.

The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.

“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”

The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also tested views of Russian President Vladimir Putin.

“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”

The controversy over Cambridge Analytica’s data collection erupted in recent days amid news reports that an app created by a Cambridge University psychologist, Aleksandr Kogan, accessed extensive personal data of 50 million Facebook users. The app, called thisisyourdigitallife, was downloaded by 270,000 users. Facebook’s policy, which has since changed, allowed Kogan to also collect data —including names, home towns, religious affiliations and likes — on all of the Facebook “friends” of those users. Kogan shared that data with Cambridge Analytica for its growing database on American voters.

Facebook on Friday banned the parent company of Cambridge Analytica, Kogan and Wylie for improperly sharing that data.

The Federal Trade Commission has opened an investigation into Facebook to determine whether the social media platform violated a 2011 consent decree governing its privacy policies when it allowed the data collection. And Wylie plans to testify to Democrats on the House Intelligence Committee as part of their investigation of Russian interference in the election, including possible ties to the Trump campaign.

Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”

Nix could not be reached for comment.

Cambridge Analytica was set up as a U.S. affiliate of British-based SCL Group, which had a wide range of governmental clients globally, in addition to its political work.

Wylie said that Bannon and Nix first met in 2013, the same year that Wylie — a young data whiz with some political experience in Britain and Canada — was working for SCL Group. Bannon and Wylie met soon after and hit it off in conversations about culture, elections and how to spread ideas using technology.

Bannon, Wylie, Nix, Rebekah Mercer and Robert Mercer met in Rebekah Mercer’s Manhattan apartment in the fall of 2013, striking a deal in which Robert Mercer would fund the creation of Cambridge Analytica with $10 million, with the hope of shaping the congressional elections a year later, according to Wylie. Robert Mercer, in particular, seemed transfixed by the group’s plans to harness and analyze data, he recalled.

The Mercers were keen to create a U.S.-based business to avoid bad optics and violating U.S. campaign finance rules, Wylie said. “They wanted to create an American brand,” he said.

The young company struggled to quickly deliver on its promises, Wylie said. Widely available information from commercial data brokers provided people’s names, addresses, shopping habits and more, but failed to distinguish on more fine-grained matters of personality that might affect political views.

Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”

After Cruz faded, the Mercers switched their allegiance to Trump and pitched their services to Trump’s digital director, Brad Parscale. The company’s hiring was approved by Trump’s son-in-law, Jared Kushner, who was informally helping to manage the campaign with a focus on digital strategy.

Kushner said in an interview with Forbes magazine that the campaign “found that Facebook and digital targeting were the most effective ways to reach the audiences. …We brought in Cambridge Analytica.” Kushner said he “built” a data hub for the campaign “which nobody knew about, until towards the end.”

Kushner’s spokesman and lawyer both declined to comment Tuesday.

Two weeks before Election Day, Nix told a Post reporter at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”

The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.

Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.

Cambridge Analytica’s parent company, SCL Group, has an ongoing contract with the State Department’s Global Engagement Center. The company was paid almost $500,000 to interview people overseas to understand the mind-set of Islamist militants as part of an effort to counter their online propaganda and block recruits.

Heather Nauert, the acting undersecretary for public diplomacy, said Tuesday that the contract was signed in November 2016, under the Obama administration, and has not expired yet. In public records, the contract is dated in February 2017, and the reason for the discrepancy was not clear. Nauert said that the State Department had signed other contracts with SCL Group in the past.

———-

“Bannon oversaw Cambridge Analytica’s collection of Facebook data, according to former employee” by Craig Timberg, Karla Adam and Michael Kranish; The Washington Post; 03/20/2018

“Conservative strategist Stephen K. Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data as part of an ambitious program to build detailed profiles of millions of American voters, a former employee of the data-science firm said Tuesday.”

Steve Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data. That’s what Christopher Wylie claims, and given Bannon’s role as vice president of the company it’s not, on its face, an outlandish claim. And Bannon apparently approved the spending of nearly $1 million to acquire that Facebook data in 2014. Because, according to Wylie, Alexander Nix didn’t actually have permission to spend that kind of money without approval. Bannon, on the other hand, did have permission to make those kinds of expenditure approvals. That’s how high up Bannon was at that company, even though he was technically the vice president while Nix was the CEO:


In an interview Tuesday with The Washington Post at his lawyer’s London office, Wylie said that Bannon — while he was a top executive at Cambridge Analytica and head of Breitbart News — was deeply involved in the company’s strategy and approved spending nearly $1 million to acquire data, including Facebook profiles, in 2014.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss,” said Wylie, who was Cambridge Analytica’s research director. “Alexander Nix didn’t have the authority to spend that much money without approval.”

Bannon, who served on the company’s board, did not respond to a request for comment. He served as vice president and secretary of Cambridge Analytica from June 2014 to August 2016, when he became chief executive of Trump’s campaign, according to his publicly filed financial disclosure. In 2017, he joined Trump in the White House as his chief strategist.

“We had to get Bannon to approve everything at this point. Bannon was Alexander Nix’s boss…Alexander Nix didn’t have the authority to spend that much money without approval.””

And while Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the data, Wylie does assert that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And, generally speaking, if Bannon was approving $1 million expenditures on acquiring Facebook data, he probably sat in on at least one meeting where they described how they were planning on actually getting the data by spending that money. Don’t forget the scheme involved paying individuals small amounts of money to take the psychological survey on Kogan’s app, so at a minimum you would expect Bannon to know how these apps were going to result in the gathering of Facebook profile information:


It is unclear whether Bannon knew how Cambridge Analytica was obtaining the data, which allegedly was collected through an app that was portrayed as a tool for psychological research but was then transferred to the company.

Facebook has said that information was improperly shared and that it requested the deletion of the data in 2015. Cambridge Analytica officials said that they had done so, but Facebook said it received reports several days ago that the data was not deleted.

Wylie said that both Bannon and Rebekah Mercer, whose father, Robert Mercer, financed the company, participated in conference calls in 2014 in which plans to collect Facebook data were discussed, although Wylie acknowledged that it was not clear they knew the details of how the collection took place.

Bannon “approved the data-collection scheme we were proposing,” Wylie said.

What’s Bannon hiding by claiming ignorance? Well, that’s a good question after Britain’s Channel 4 News aired a video Tuesday in which Nix was highlighting his firm’s secrecy, including the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing”:


Meanwhile, Britain’s Channel 4 News aired a video Tuesday in which Nix was shown boasting about his work for Trump. He seemed to highlight his firm’s secrecy, at one point stressing the need to set up a special email account that self-destructs all messages so that “there’s no evidence, there’s no paper trail, there’s nothing.”

The company said in a statement that Nix’s comments “do not represent the values or operations of the firm and his suspension reflects the seriousness with which we view this violation.”

Self-destructing emails. That’s not suspicious or anything.

And note how Cambridge Analytica was apparently already honing in on a very ‘Trumpian’ message in 2014, long before Trump was on the radar:


The data and analyses that Cambridge Analytica generated in this time provided discoveries that would later form the emotionally charged core of Trump’s presidential platform, said Wylie, whose disclosures in news reports over the past several days have rocked both his onetime employer and Facebook.

“Trump wasn’t in our consciousness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or anything.”

The year before Trump announced his presidential bid, the data firm already had found a high level of alienation among young, white Americans with a conservative bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also tested views of Russian President Vladimir Putin.

“The only foreign thing we tested was Putin,” he said. “It turns out, there’s a lot of Americans who really like this idea of a really strong authoritarian leader and people were quite defensive in focus groups of Putin’s invasion of Crimea.”

Intriguingly, given these early Trumpian findings in their 2014 voter research, it appears that the Trump campaign turned down early overtures to hire Cambridge Analytica, which suggests that Trump really was the top preference for Bannon and the Mercers, not Ted Cruz:


Cambridge Analytica initially worked for 2016 Republican candidate Sen. Ted Cruz (Tex.), who was backed by the Mercers. The Trump campaign had rejected early overtures to hire Cambridge Analytica, and Trump himself said in May 2016 that he “always felt” that the use of voter data was “overrated.”

And as the article reminds us, the Trump campaign has completely denied EVER using Cambridge Analytica’s data. Brad Parscale, Trump’s digital director, claimed he got all the data they were working with from the Republican National Committee:


Two weeks before Election Day, Nix told a Post reporter at the company’s New York City office that his company could “determine the personality of every single adult in the United States of America.”

The claim was widely questioned, and the Trump campaign later said that it didn’t rely on psychographic data from Cambridge Analytica. Instead, the campaign said that it used a variety of other digital information to identify probable supporters.

Parscale said in a Post interview in October 2016 that he had not “opened the hood” on Cambridge Analytica’s methodology, and said he got much of his data from the Republican National Committee. Parscale declined to comment Tuesday. He has previously said that the Trump campaign did not use any psychographic data from Cambridge Analytica.

And that denial by Parscale raises an obvious question: when Parscale claims they only used data from the RNC, it’s clearly very possible that he’s just straight up lying. But it’s also possible that he’s lying while technically telling the truth. Because if Cambridge Analytica gave its data to the RNC, it’s possible the Trump campaign acquired the Cambridge Analytica data from the RNC at that point, giving the campaign a degree of deniability about the use of such scandalously acquired data if the story of it ever became public. Like now.

Don’t forget that data of this nature would have been potentially useful for EVERY 2016 race, not just the presidential campaign. So if Bannon and Mercer were intent on helping Republicans win across the board, handing that data over to the RNC would have just made sense.

Also don’t forget that the New York Times was shown unencrypted copies of the Facebook data collected by Cambridge Analytica. If the New York Times saw this data, odds are the RNC has too. And who knows who else.

Facebook’s Sandy Parakilas Blows an “Utterly Horrifying” Whistle

It all raises the question of whether the Republican National Committee now possesses all of that Cambridge Analytica/Facebook data. And that brings us to perhaps the most scandalous article of all that we’re going to look at. It’s about Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, who is now blowing the whistle on exactly the kind of “friends permission” loophole Cambridge Analytica exploited. And as the following article makes horrifically clear:

1. It’s not just Cambridge Analytica or the RNC that might possess this treasure trove of personal information. The entire data brokerage industry probably has its hands on this data, along with anyone who has picked it up through the black market.

2. It was relatively easy to write an app that could exploit this “friends permissions” feature and start trawling Facebook for profile data for app users and their friends. Anyone with basic app coding skills could do it.

3. Parakilas estimates that tens or even hundreds of thousands of developers likely exploited the same “friends permissions” loophole that Cambridge Analytica exploited (in Cambridge Analytica’s case, under an ‘academic research’ pretext). And Facebook had no way of tracking how this data was used by developers once it left Facebook’s servers.

4. Parakilas suspects that data of this sort has inevitably ended up on the black market, meaning there is probably a massive amount of personally identifiable Facebook data just floating around for the entire marketing industry and anyone else (like the GOP) to mine.

5. Parakilas knew of many commercial apps that were using the same “friends permission” feature to grab Facebook profile data and use it for commercial purposes.

6. Facebook’s policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings. That appears to be part of the legal protection Facebook employed when it had this policy: don’t complain, it’s in the fine print.

7. Perhaps most scandalous of all, Facebook took a 30% cut of payments made through apps in exchange for giving these app developers access to Facebook user data. Yep, Facebook was effectively selling user data, but by structuring the sale of this data as a 30% share of the payments made through the app Facebook also created an incentive to help developers maximize the profits they made through the app. So Facebook literally set up a system that incentivized itself to help app developers make as much money as possible off of the user data they were handing over.

8. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users. So as of 2010, roughly 1 in 10 Facebook apps were using this loophole to grab information about both the users of the app and their friends.

9. While Cambridge Analytica was far from alone in exploiting this loophole, it was actually one of the very last firms given permission to do so. Which means the particular data set collected by Cambridge Analytica could be uniquely valuable, simply by being larger and containing more recent data than most other data sets of this nature.

10. When Parakilas brought up these concerns to Facebook’s executives and suggested the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach. One Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”

11. Shortly after arriving at the company’s Silicon Valley headquarters, Parakilas was told that any decision to ban an app required the personal approval of Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers. That said, rogue developers were rarely dealt with.

12. When Facebook eventually phased out this “friends permissions” policy for app developers, it was likely done out of concerns over the commercial value of all this data they were handing out. Executives were apparently concerned that competitors were going to use this data to build their own social networks.
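The Kogan app figures reported in this story (roughly 270,000 consenting users yielding some 50 million profiles) make the power of the loophole easy to quantify. A minimal back-of-the-envelope sketch, using only those reported figures (illustrative arithmetic, not data from Facebook):

```python
# Amplification provided by the "friends permissions" loophole,
# based solely on the figures reported in the articles cited here.

consenting_users = 270_000       # people who actually installed Kogan's app
harvested_profiles = 50_000_000  # total profiles reportedly obtained

# Each consenting user exposed, on average, this many profiles:
amplification = harvested_profiles / consenting_users
print(f"~{amplification:.0f} profiles harvested per consenting user")

# Share of harvested profiles belonging to people who never consented:
nonconsenting_share = 1 - consenting_users / harvested_profiles
print(f"~{nonconsenting_share:.2%} of harvested profiles were non-consenting")
# ~185 per user is roughly the size of a typical friend list, and over
# 99% of the haul came from people who never agreed to anything.
```

In other words, each installation effectively bought access to an entire friend network, which is why such a small paid user base could produce such an enormous data set.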

So, as we can see, the entire saga of Cambridge Analytica’s scandalous acquisition of private Facebook profiles on ~50 million Americans is something Facebook made routine for developers of all sorts from 2007-2014, which means this is far from a ‘Cambridge Analytica’ story. It’s a Facebook story about a massive problem Facebook created for itself (for its own profits):

The Guardian

‘Utterly horrifying’: ex-Facebook insider says covert data harvesting was routine

Sandy Parakilas says numerous companies deployed these techniques – likely affecting hundreds of millions of users – and that Facebook looked the other way

Paul Lewis in San Francisco
Tue 20 Mar 2018 07.46 EDT

Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to Cambridge Analytica, according to a new whistleblower.

Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.

“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.

Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.

Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.

“It has been painful watching,” he said, “because I know that they could have prevented it.”

Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”

Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.

He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.

He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”

Facebook did not respond to a request for comment on the information supplied by Parakilas, but directed the Guardian to a November 2017 blogpost in which the company defended its data sharing practices, which it said had “significantly improved” over the last five years.

“While it’s fair to criticise how we enforced our developer policies more than five years ago, it’s untrue to suggest we didn’t or don’t care about privacy,” that statement said. “The facts tell a different story.”

‘A majority of Facebook users’

Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.

That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.

The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.

Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.

Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.

Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.

Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.

Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”

Facebook announced on Monday that it had hired a digital forensics firm to conduct an audit of Cambridge Analytica. The decision comes more than two years after Facebook was made aware of the reported data breach.

During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.

While the previous policy of giving developers access to Facebook users’ friends’ data was sanctioned in the small print in Facebook’s terms and conditions, and users could block such data sharing by changing their settings, Parakilas said he believed the policy was problematic.

“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”

It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.

The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.

While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.

“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.

If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.

The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.

Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.

He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.

“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”

‘They treated it like a PR exercise’

Parakilas said he lobbied internally at Facebook for “a more rigorous approach” to enforcing data protection, but was offered little support. His warnings included a PowerPoint presentation he said he delivered to senior executives in mid-2012 “that included a map of the vulnerabilities for user data on Facebook’s platform”.

“I included the protective measures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do malicious things with the data,” he said. “On the list of bad actors I included foreign state actors and data brokers.”

Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”

That changed, Parakilas said, when he heard the congressional testimony given by Facebook lawyers to Senate and House investigators in late 2017 about Russia’s attempt to sway the presidential election. “They treated it like a PR exercise,” he said. “They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.”

It was at that point that Parakilas decided to go public with his concerns, writing an opinion article in the New York Times that said Facebook could not be trusted to regulate itself. Since then, Parakilas has become an adviser to the Center for Humane Technology, which is run by Tristan Harris, a former Google employee turned whistleblower on the industry.

———-

“‘Utterly horrifying’: ex-Facebook insider says covert data harvesting was routine” by Paul Lewis; The Guardian; 03/20/2018

“Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach.”

The platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012: That’s who is making these claims. In other words, Sandy Parakilas is indeed someone who should be intimately familiar with Facebook’s policies of handing user data over to app developers because it was his job to ensure that data wasn’t breached.

And as Parakilas makes clear, he wasn’t actually able to do his job. Once the data was handed over to app developers and left Facebook’s servers, Facebook had no idea what developers were doing with it, and apparently no interest in finding out:


“My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.

Parakilas said Facebook had terms of service and settings that “people didn’t read or understand” and the company did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused.

Parakilas, whose job was to investigate data breaches by developers similar to the one later suspected of Global Science Research, which harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica, said the slew of recent disclosures had left him disappointed with his superiors for not heeding his warnings.

“It has been painful watching,” he said, “because I know that they could have prevented it.”

Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”

And this complete lack of oversight led Parakilas to assume there was “something of a black market” for that Facebook data. But when he raised these concerns with fellow executives he was warned not to look. Not knowing how this data was being used was, ironically, part of Facebook’s legal strategy, it seems:


Parakilas said he “always assumed there was something of a black market” for Facebook data that had been passed to external developers. However, he said that when he told other executives the company should proactively “audit developers directly and see what’s going on with the data” he was discouraged from the approach.

He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.

He added: “They felt that it was better not to know. I found that utterly shocking and horrifying.”

“They felt that it was better not to know. I found that utterly shocking and horrifying.”

Well, at least one person at Facebook was utterly shocked and horrified by the “better not to know” policy toward handing private personal information over to developers. And that one person, Parakilas, left the company and is now a whistle-blower.

And one of the things that made Parakilas particularly concerned that this practice was widespread among apps was the fact that it was so easy to create apps that could then just be released onto Facebook to trawl for profile data from users and their unwitting friends:


The ease with which it was possible for anyone with relatively basic coding skills to create apps and start trawling for data was a particular concern, he added.

And while rogue app developers were occasionally dealt with, it was exceedingly rare: Parakilas didn’t witness a single audit of a developer’s systems during his time there.
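To get a sense of how low that coding bar was: under the pre-2014 Graph API, a single authenticated request could return an app user’s entire friend list. The sketch below is hypothetical; the endpoint path, version string, and field names are assumptions modeled on the long-deprecated v1.0 API, shown only to illustrate how little code the scheme required, not a reconstruction of any actual app:

```python
# Hypothetical sketch of a pre-2014 friend-list request. The endpoint,
# version, and field names are assumptions based on the deprecated
# Graph API v1.0; no network call is made here.
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v1.0"  # assumed base URL

def friends_request_url(user_token: str) -> str:
    """Build the single request that returned an app user's friend
    list with profile fields, given one user access token."""
    params = urlencode({
        "fields": "id,name,likes,location",  # hypothetical field list
        "access_token": user_token,
    })
    return f"{GRAPH}/me/friends?{params}"

# A real app would simply issue this GET for every user who installed
# it, then page through the results. No per-friend consent was involved.
print(friends_request_url("EXAMPLE_TOKEN"))
```

The point is structural: one token from one consenting user unlocked data about everyone connected to that user, so the harvesting logic fits in a few lines.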

Even more alarming is that Facebook was apparently quite keen on encouraging app developers to grab this profile data, using it as an incentive to encourage even more app development. Apps were seen as so important to Facebook that Mark Zuckerberg himself had to give his personal approval to ban an app. And while that policy was later relaxed to no longer require Zuckerberg’s approval, it doesn’t sound like the change actually resulted in more apps getting banned:


Parakilas said that when he worked at Facebook it failed to take full advantage of its enforcement mechanisms, such as a clause that enables the social media giant to audit external developers who misuse its data.

Legal action against rogue developers or moves to ban them from Facebook were “extremely rare”, he said, adding: “In the time I was there, I didn’t see them conduct a single audit of a developer’s systems.”

During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.

So how many Facebook users likely had their private profile information scraped via this ‘fine print’ feature that allowed app developers to harvest the profiles of app users and their friends? According to Parakilas, probably a majority of Facebook users. So that black market of Facebook profiles probably includes a majority of Facebook users. But even more amazing is that Facebook handed out this personal user information to app developers in exchange for a 30% share of the money they made through the app. Facebook was basically selling private user data directly to developers, which is a big reason why Parakilas’s estimate that a majority of Facebook users were impacted is likely true. Especially if, as Parakilas hints, the number of developers grabbing user profile information via these apps might be in the hundreds of thousands. That’s a lot of developers potentially feeding into that black market:


‘A majority of Facebook users’

Parakilas, 38, who now works as a product manager for Uber, is particularly critical of Facebook’s previous policy of allowing developers to access the personal data of friends of people who used apps on the platform, without the knowledge or express consent of those friends.

That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the widely popular FarmVille – that were hosted on the platform.

The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.

Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.

Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so.

Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge. The company now has stricter protocols around the degree of access third parties have to data.

During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers.

“Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data.”

And that, right there, is perhaps the biggest scandal here: Facebook just handed user data away in exchange for revenue streams from app developers. And this was a key element of its business model during the 2007-2014 period. “Read the fine print” in the terms of service was the excuse it used:


“It was well understood in the company that that presented a risk,” he said. “Facebook was giving data of people who had not authorised the app themselves, and was relying on terms of service and settings that people didn’t read or understand.”

It was this feature that was exploited by Global Science Research, and the data provided to Cambridge Analytica in 2014. GSR was run by the Cambridge University psychologist Aleksandr Kogan, who built an app that was a personality test for Facebook users.

And this is all why Aleksandr Kogan’s assertions that he had a close working relationship with Facebook and did nothing technically wrong do actually seem to be backed up by Parakilas’s whistle-blowing. Both because it’s hard to see what Kogan did that wasn’t part of Facebook’s business model, and because it’s hard to ignore that Kogan’s GSR shell company was one of the very last apps permitted to exploit the “friends permission” loophole. That sure does suggest that Kogan really did have a “close working relationship” with Facebook. So close that he seemingly got favored treatment, and that’s compared to the vast number of apps that were apparently using this “friends permissions” feature: 1 in 10 Facebook apps, according to a 2010 study:


The test automatically downloaded the data of friends of people who took the quiz, ostensibly for academic purposes. Cambridge Analytica has denied knowing the data was obtained improperly, and Kogan maintains he did nothing illegal and had a “close working relationship” with Facebook.

While Kogan’s app only attracted around 270,000 users (most of whom were paid to take the quiz), the company was then able to exploit the friends permission feature to quickly amass data pertaining to more than 50 million Facebook users.

“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.

If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users, Parakilas said.

““Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.”

As of 2010, around 11 percent of app developers requested data belonging to friends of users. Keep that in mind when Facebook claims that Aleksandr Kogan improperly obtained data from the friends of the people who downloaded Kogan’s app.
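For a sense of scale, here is the extrapolation arithmetic behind Parakilas’s “tens of thousands of apps, if not more” estimate. The platform-wide app total below is a hypothetical figure assumed purely for illustration; only the 1,800-app sample and the 11% share come from the cited 2010 study:

```python
# Extrapolating the 2010 study's finding to the platform as a whole.
# Only sample_size and friend_data_share come from the cited study;
# assumed_total_apps is a hypothetical illustration.

sample_size = 1_800        # apps analyzed in the 2010 academic study
friend_data_share = 0.11   # share that requested friends' data

in_sample = round(sample_size * friend_data_share)
print(f"{in_sample} of the {sample_size} sampled apps requested friends' data")

# Hypothetical platform-wide total, chosen only to show how the 11%
# share yields "tens of thousands of apps, if not more":
assumed_total_apps = 500_000
extrapolated = int(assumed_total_apps * friend_data_share)
print(f"~{extrapolated:,} apps platform-wide, if the share held")
```

Whatever the true total app count was, any plausible figure multiplied by that 11% share lands in the tens of thousands, which is the basis for the “hundreds of millions of users” exposure estimate.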

So what made Facebook eventually end this “friends permissions” policy in mid-2014? While Parakilas had already left the company by then, he does recall conversations with executives who were nervous about competitors building their own social networks from all the data Facebook was giving away:


Parakilas said he was unsure why Facebook stopped allowing developers to access friends data around mid-2014, roughly two years after he left the company. However, he said he believed one reason may have been that Facebook executives were becoming aware that some of the largest apps were acquiring enormous troves of valuable data.

He recalled conversations with executives who were nervous about the commercial value of data being passed to other companies.

“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”

That’s how much data Facebook was handing out to encourage new app development: so much data that they were concerned about creating competitors.

Finally, it’s important to note that the picture painted by Parakilas only goes until the end of 2012, when he left in frustration. So we don’t actually have testimony of Facebook insiders who were involved with app data breaches like Parakilas during the period when Cambridge Analytica was engaged in its mass data collection scheme:


Frustrated at the lack of action, Parakilas left Facebook in late 2012. “I didn’t feel that the company treated my concerns seriously. I didn’t speak out publicly for years out of self-interest, to be frank.”

Now, it seems like a safe bet that the problem only got worse after Parakilas left, given how the Cambridge Analytica situation played out, but we don’t yet know just how bad it got.

Aleksandr Kogan: Facebook’s Close Friend (Until He Belatedly Wasn’t)

So, factoring in what we just saw with Parakilas’s claims about the extent to which Facebook was handing out private Facebook profile data – the internal profile that Facebook builds up about you – to app developers for widespread commercial applications, let’s take a look at some of the claims Aleksandr Kogan has made about his relationship with Facebook. Because while Kogan makes some extraordinary claims, they are also consistent with Parakilas’s claims, although in some cases Kogan’s description actually goes much further than Parakilas’s.

For instance, according to the following Observer article …

1. In an email to colleagues at the University of Cambridge, Aleksandr Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions.

2. Kogan also claims in that email that the contract his GSR company signed with Facebook in 2014 made it absolutely clear the data was going to be used for commercial applications and that app users were granting Kogan’s company the right to license or resell the data. “We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.” So Kogan says he made it clear to both Facebook and his users that the app was for commercial purposes and that the data might be resold, which sounds like the kind of situation Sandy Parakilas said he witnessed, except even more open (and which should be easily verifiable if the app code still exists).

3. Facebook didn’t actually kick Kogan off of its platform until March 16th of this year, just days before this story broke. Which is consistent with Kogan’s claims that he had a good working relationship with Facebook.

4. When Kogan founded Global Science Research (GSR) in May 2014, he co-founded it with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.

5. Facebook provided Kogan’s University of Cambridge lab with the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” – 57 billion Facebook relationships in all. The data was anonymized and aggregated, so it didn’t literally include details on individual Facebook friendships; it was instead aggregate “friend” counts at a national level. The data was used to publish a study in Personality and Individual Differences in 2015, and two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. But it’s still a sign that Kogan is indeed being honest when he says he had a close working relationship with Facebook. It’s also a reminder that if Facebook’s claim that it was handing out data for “research purposes” only were true, it would have handed out anonymized, aggregated data like it did in this situation with Kogan.

6. That study co-authored by Kogan’s team and Facebook didn’t just use the anonymized aggregated friendship data. The study also used non-anonymized Facebook data collected through Facebook apps using exactly the same techniques Kogan’s app for Cambridge Analytica used. This study was published in August of 2015. Again, it was a study co-authored by Facebook. GSR co-founder Joseph Chancellor left GSR a month later and joined Facebook as a user experience researcher in November 2015. Recall that it was a month after that, in December 2015, when we saw the first news reports of Ted Cruz’s campaign using Facebook data. Also recall that Facebook responded to that December 2015 report by saying it would look into the matter. Facebook finally sent Cambridge Analytica a letter in August of 2016, days before Steve Bannon became Trump’s campaign manager, asking that Cambridge Analytica delete the data. So the fact that Facebook co-authored a paper with Kogan and Chancellor in August of 2015, and that Chancellor then joined Facebook in November of that year, is a pretty significant bit of context for looking into Facebook’s behavior. Because Facebook didn’t just know it had worked closely with Kogan. It also knew it had just co-authored an academic paper using data gathered with the same technique Cambridge Analytica was charged with using.

7. Kogan does challenge one of the claims by Christopher Wylie. Specifically, Wylie claimed that Facebook became alarmed over the volume of data Kogan’s app was scooping up (50 million profiles) but Kogan assuaged those concerns by saying it was all for research. Kogan says this is a fabrication and Facebook never actually contacted him expressing alarm.
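To put the scale of that harvesting in perspective, the figures reported in these articles (roughly 270,000 consenting app users yielding more than 50 million profiles) imply a very large “friends permissions” multiplier. Here is a minimal back-of-the-envelope sketch; the per-user friend count is a derived estimate, not a number from the reporting:

```python
# Rough fan-out arithmetic for the "friends permissions" harvest.
# The two inputs are the figures from the reporting; the implied
# friends-per-user number is a derived estimate, not a reported one.
consenting_users = 270_000        # people who actually installed the app
harvested_profiles = 50_000_000   # total profiles reportedly obtained

implied_friends_per_user = harvested_profiles / consenting_users
print(f"Each consenting user exposed ~{implied_friends_per_user:.0f} friends")
```

In other words, each individual consent nominally authorized collection on roughly 185 people who never consented at all.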

So, according to Aleksandr Kogan, Facebook really did have an exceptionally close relationship with Kogan and Facebook really was totally on board with what Kogan and Cambridge Analytica were doing:

The Guardian

Facebook gave data about 57bn friendships to academic
Volume of data suggests trusted partnership with Aleksandr Kogan, says analyst

Julia Carrie Wong and Paul Lewis in San Francisco
Thu 22 Mar 2018 10.56 EDT
Last modified on Sat 24 Mar 2018 22.56 EDT

Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.

Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.

A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.

“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”

Facebook downplayed the significance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was literally numbers – numbers of how many friendships were made between pairs of countries – ie x number of friendships made between the US and UK,” Facebook spokeswoman Christine Chen said by email. “There was no personally identifiable information included in this data.”

Facebook’s relationship with Kogan has since soured.

“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported that the data had been obtained by Cambridge Analytica.

“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.

On Friday 16 March, in anticipation of the Observer’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans, Facebook suspended Kogan from the platform, issued a statement saying that he “lied” to the company, and characterised his activities as “a scam – and a fraud”.

On Tuesday, Facebook went further, saying in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.

But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting activities – for it to inform the users whose personal information was improperly shared.

And Kogan has offered a defence of his actions in an interview with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.

The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.

Between June and August of that year, GSR paid approximately 270,000 individuals to use a Facebook questionnaire app that harvested data from their own Facebook profiles, as well as from their friends, resulting in a dataset of more than 50 million users. The data was subsequently given to Cambridge Analytica, in what Facebook has said was a violation of Kogan’s agreement to use the data solely for academic purposes.

In his email to colleagues at Cambridge, Kogan said that he had created the Facebook app in 2013 for academic purposes, and used it for “a number of studies”. After he founded GSR, Kogan wrote, he transferred the app to the company and changed its name, logo, description, and terms and conditions. CNN first reported on the Cambridge email. Kogan did not respond to the Guardian’s request for comment on this article.

“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”

Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.

“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.

Whistleblower Christopher Wylie told the Observer that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and … we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”

The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.

For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.

The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.

———-

“Facebook gave data about 57bn friendships to academic” by Julia Carrie Wong and Paul Lewis; The Guardian; 03/22/2018

“Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.”

An anonymized, aggregate dataset of 57bn Facebook friendships sure makes it a lot easier to take Kogan at his word when he claims a close working relationship with Facebook.

Now, keep in mind that the anonymized data was aggregated at the national level, so it’s not as if Facebook gave Kogan a list of 57 billion individual Facebook friendships. And when you think about it, that aggregated anonymized data is far less sensitive than the personal Facebook profile data Kogan and other app developers were routinely grabbing during this period. It’s the fact that Facebook gave this data to Kogan in the first place that lends credence to his claims.

But the biggest factor lending credence to Kogan’s claims is the fact that Facebook co-authored a study with Kogan and others at the University of Cambridge using that anonymized aggregated data. Two Facebook employees were named as co-authors of the study. That is definitely a sign of a close working relationship:


Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.

A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.

“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”

Even more damning for Facebook is that the research co-authored by Kogan, Facebook, and other researchers didn’t just include the anonymized aggregated data. It also included a second data set of non-anonymized data that was harvested in exactly the same way Kogan’s GSR app worked. And while Facebook apparently wasn’t involved in that part of the study, that’s beside the point. Facebook clearly knew about it if it co-authored the study:


The collaboration between Kogan and Facebook researchers which resulted in the report published in 2015 also used data harvested by a Facebook app. The study analysed two datasets, the anonymous macro-level national set of 57bn friend pairs provided by Facebook and a smaller dataset collected by the Cambridge academics.

For the smaller dataset, the research team used the same method of paying people to use a Facebook app that harvested data about the individuals and their friends. Facebook was not involved in this part of the study. The study notes that the users signed a consent form about the research and that “no deception was used”.

The paper was published in late August 2015. In September 2015, Chancellor left GSR, according to company records. In November 2015, Chancellor was hired to work at Facebook as a user experience researcher.

But, alas, Kogan’s relationship with Facebook has since soured, with Facebook now acting as if Kogan had totally violated its trust. And yet it’s hard to ignore the fact that Kogan wasn’t formally kicked off Facebook’s platform until March 16th of this year, just a few days before all these stories about Kogan and Facebook were about to go public:


Facebook’s relationship with Kogan has since soured.

“We ended our working relationship with Kogan altogether after we learned that he violated Facebook’s terms of service for his unrelated work as a Facebook app developer,” Chen said. Facebook has said that it learned of Kogan’s misuse of the data in December 2015, when the Guardian first reported that the data had been obtained by Cambridge Analytica.

“We started to take steps to end the relationship right after the Guardian report, and after investigation we ended the relationship soon after, in 2016,” Chen said.

On Friday 16 March, in anticipation of the Observer’s reporting that Kogan had improperly harvested and shared the data of more than 50 million Americans, Facebook suspended Kogan from the platform, issued a statement saying that he “lied” to the company, and characterised his activities as “a scam – and a fraud”.

On Tuesday, Facebook went further, saying in a statement: “The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.

““The entire company is outraged we were deceived.” And on Wednesday, in his first public statement on the scandal, its chief executive, Mark Zuckerberg, called Kogan’s actions a “breach of trust”.”

Mark Zuckerberg is complaining about a “breach of trust.” LOL!

And yet Facebook has yet to explain the nature of its relationship with Kogan, or why it didn’t kick him off the platform until only recently. But Kogan has an explanation: he’s a scapegoat, and he wasn’t doing anything Facebook didn’t know he was doing. And when you notice that Kogan’s co-founder of GSR, Joseph Chancellor, is now a Facebook employee, it’s hard not to take his claims seriously:


But Facebook has not explained how it came to have such a close relationship with Kogan that it was co-authoring research papers with him, nor why it took until this week – more than two years after the Guardian initially reported on Kogan’s data harvesting activities – for it to inform the users whose personal information was improperly shared.

And Kogan has offered a defence of his actions in an interview with the BBC and an email to his Cambridge colleagues obtained by the Guardian. “My view is that I’m being basically used as a scapegoat by both Facebook and Cambridge Analytica,” Kogan said on Radio 4 on Wednesday.

The data collection that resulted in Kogan’s suspension by Facebook was undertaken by Global Science Research (GSR), a company he founded in May 2014 with another Cambridge researcher, Joseph Chancellor. Chancellor is currently employed by Facebook.

But if Kogan’s claims are to be taken seriously, we have a pretty serious scandal on our hands. Because Kogan claims that not only did he make it clear to Facebook and his app users that the data being collected was for commercial use – with no mention of academic or research purposes or of the University of Cambridge – but he also claims he made it clear the data GSR was collecting could be licensed and resold. And Facebook at no point raised any concerns at all about any of this:


“We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.”

Kogan is not alone in criticising Facebook’s apparent efforts to place the blame on him.

“In my view, it’s Facebook that did most of the sharing,” said Albright, who questioned why Facebook created a system for third parties to access so much personal information in the first place. That system “was designed to share their users’ data in meaningful ways in exchange for stock value”, he added.

Now, it’s worth noting that the casual acceptance of the commercial use of the data collected over these Facebook apps and the potential licensing and reselling of that data is actually a far more seriously situation than the one Sandy Parakilas described during his time at Facebook. Recall that, according to Parakilas, app developers simply had to tell Facebook was that they were going to use the profile data on app users and their friends to ‘improve the user experience.’ It was fine if they were commercial apps from Facebook’s perspective. But Parakilas didn’t describe a situation where app developers openly made it clear they might license or resell the data. So Kogan’s claim that it was clear his app had commercial applications and might involve reselling the data is even more egregious than the situation Parakilas described. But don’t forget that Parakilas left Facebook in late 2012 and Kogan’s app would have been approved in 2014 so it’s entirely possible Facebook’s policies got even more egregious after Parakilas left.

And it’s worth noting how Kogan’s claims differ from Christopher Wylie’s. Wylie asserts that Facebook grew alarmed by the volume of data GSR’s app was pulling from Facebook users and Kogan assured them it was for research purposes. Whereas Kogan says Facebook never expressed any alarm at all:


Whistleblower Christopher Wylie told the Observer that Facebook was aware of the volume of data being pulled by Kogan’s app. “Their security protocols were triggered because Kogan’s apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cambridge email, Kogan characterised this claim as a “fabrication”, writing: “There was no exchange with Facebook about it, and … we never claimed during the project that it was for academic research. In fact, we did our absolute best not to have the project have any entanglements with the university.”

So as we can see, when it comes to Facebook’s “friends permissions” data sharing policy, its arrangement with Aleksandr Kogan was probably one of the more responsible ones it engaged in because, hey, at least Kogan’s work was ostensibly for research purposes and involved at least some anonymized data.

Cambridge Analytica’s Informal Friend: Palantir

And as we can also see, the more we learn about this situation, the harder it gets to dismiss Kogan’s claims that Facebook is making him a scapegoat in order to cover up not just the relationship Facebook had with Kogan but the fact that what Kogan was doing was routine for app developers for years.

But as the following New York Times article makes clear, Facebook’s relationship with Aleksandr Kogan isn’t the only working relationship Facebook needs to worry about that might lead back to Cambridge Analytica. Because it turns out there’s another Facebook connection to Cambridge Analytica, and it’s potentially far, far more scandalous than Facebook’s relationship with Kogan: Palantir might be the originator of the idea to create Kogan’s app for the purpose of collecting psychological profiles. That’s right: according to documents the New York Times has seen, Palantir, the private intelligence firm with a close relationship to the US national security state, was in talks with Cambridge Analytica in 2013 and 2014 about psychologically profiling voters, and it was an employee of Palantir who raised the idea of creating that app in the first place.

And this is of course wildly scandalous if true, because Palantir was co-founded by Peter Thiel, the early Facebook investor and board member who also happens to be a far right political activist and a close ally of President Trump.

But it gets worse. And weirder. Because it sounds like one of the people encouraging SCL (Cambridge Analytica’s parent company) to work with Palantir was none other than Sophie Schmidt, daughter of Eric Schmidt, then Google’s executive chairman.

Keep in mind that this isn’t the first time we’ve heard about Palantir’s ties to Cambridge Analytica and Sophie Schmidt’s role in this. It was reported by the Observer last May. According to that May 2017 article in the Observer, Schmidt was passing through London in June of 2013 when she decided to call up her former boss at SCL and recommend that they contact Palantir. Also of interest is that if you look at the current version of that Observer article, all mention of Sophie Schmidt has been removed and there’s a note that the article is the subject of legal complaints on behalf of Cambridge Analytica LLC and SCL Elections Limited. But in the original article she’s mentioned quite extensively. It would appear that someone is very upset about the Sophie Schmidt angle to this story.

So this Palantir/Sophie Schmidt side of the story isn’t new. But we’re learning a lot more about that relationship now. For instance:

1. In early 2013, Cambridge Analytica CEO Alexander Nix, an SCL director at the time, and a Palantir executive discussed working together on election campaigns.

2. An SCL employee wrote to a colleague in a June 2013 email that Schmidt was pushing them to work with Palantir: “Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?”

3. According to Christopher Wylie’s testimony to lawmakers, “There were Palantir staff who would come into the office and work on the data…And we would go and meet with Palantir staff at Palantir.” Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014.

4. The Palantir employee who floated the idea of creating the app ultimately built by Aleksandr Kogan is Alfredas Chmieliauskas. Chmieliauskas works on business development for Palantir, according to his LinkedIn page.

5. Palantir and Cambridge Analytica never formally started working together. A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. Wylie acknowledges that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge Analytica’s psychographic models. In other words, while there was never a formal relationship, there was a pretty significant informal relationship.

6. Mr. Chmieliauskas was in communication with Wylie’s team in 2014, during the period when Cambridge Analytica was initially trying to convince the University of Cambridge team to work with it. Recall that Cambridge Analytica initially discovered that the University of Cambridge team had exactly the kind of data it was interested in, collected via a Facebook app, but the negotiations ultimately failed, and it was then that Cambridge Analytica found Aleksandr Kogan, who agreed to create his own app. Well, according to this report, it was Chmieliauskas who initially suggested that Cambridge Analytica create its own version of the University of Cambridge team’s app as leverage in those negotiations. In essence, Chmieliauskas wanted Cambridge Analytica to show the University of Cambridge team that it could collect the information itself, presumably to drive a harder bargain. And when those negotiations failed, Cambridge Analytica did indeed create its own app after teaming up with Kogan.

7. Palantir asserts that Chmieliauskas was acting in his own capacity when he continued communicating with Wylie and made the suggestion to create their own app. Palantir initially told the New York Times that it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Palantir later revised this, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.

And, again, do not forget that Palantir was co-founded by Peter Thiel, the far right billionaire, early investor in Facebook, and one of Facebook’s board members to this day. He was also a Trump delegate in 2016 and was in discussions with the Trump administration to lead the powerful President’s Intelligence Advisory Board, although he ultimately turned that offer down. Oh, and he’s an advocate of the Dark Enlightenment.

Basically, Peter Thiel was a member of the ‘Alt Right’ before that term was ever coined. And he’s a very powerful influence at Facebook. So learning that Palantir and Cambridge Analytica were in discussions to work together on election projects in 2013 and 2014, that a Palantir employee was advising Cambridge Analytica during the negotiations with the University of Cambridge team, and that Palantir employees helped engineer Cambridge Analytica’s Facebook-based psychographic models is the kind of revelation that just might qualify as the most scandalous in this entire mess:

The New York Times

Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data

By NICHOLAS CONFESSORE and MATTHEW ROSENBERG
MARCH 27, 2018

As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday.

The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook.

The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.

“We learned today that an employee, in 2013-2014, engaged in an entirely personal capacity with people associated with Cambridge Analytica,” the company said. “We are looking into this and will take the appropriate action.”

The company said it was continuing to investigate but knew of no other employees who took part in the effort. Mr. Wylie told lawmakers that multiple Palantir employees played a role.

Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.

In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.

A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.

In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.

“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.

———-

“Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018

“The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

Yep, a Facebook board member’s private intelligence firm was working closely with Cambridge Analytica as it developed its psychological profiling technology. It’s quite a revelation. The kind of explosive revelation that had Palantir first denying there was any relationship at all, then acknowledging that, yes, a Palantir employee, Alfredas Chmieliauskas, was indeed working with Cambridge Analytica, but not, Palantir insists, on the company’s behalf:


It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

The Palantir employee, Alfredas Chmieliauskas, works on business development for the company, according to his LinkedIn page. In an initial statement, Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
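The amplification dynamic at the heart of the quoted passages, one consenting app user exposing the profiles of all of their friends, is worth sketching. The toy code below is purely illustrative: the friend graph, the user names, and the function name are made up, and this is not Facebook’s actual API, just a sketch of how the since-deprecated “friends permissions” model multiplied a small number of consents into a much larger harvest.

```python
# Illustrative sketch only: a toy model of the deprecated "friends
# permissions" access pattern. The friend graph and names are invented.

def harvestable_profiles(friend_graph, consenting_users):
    """Return the set of profiles an app could read under the old
    friends-permissions model: every consenting user, plus all of
    that user's friends, whether or not those friends consented."""
    harvested = set()
    for user in consenting_users:
        harvested.add(user)
        harvested.update(friend_graph.get(user, ()))
    return harvested

# Toy network: only "alice" installs the app, yet four profiles
# become readable.
graph = {
    "alice": {"bob", "carol", "dan"},
    "bob": {"alice"},
}
print(sorted(harvestable_profiles(graph, {"alice"})))
```

With roughly 270,000 consenting users and a few hundred friends each, this fan-out is how the reported harvest reached tens of millions of profiles.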

Adding to the scandalous nature of it all, Sophie Schmidt, daughter of Eric Schmidt, then Google’s executive chairman, suddenly appeared in June of 2013 to push her old bosses at SCL toward a relationship with Palantir:


Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

Ms. Schmidt did not respond to requests for comment, nor did a spokesman for Cambridge Analytica.

But this June 2013 proposal by Sophie Schmidt wasn’t what started Cambridge Analytica’s relationship with Palantir. Because that reportedly started in early 2013, when Alexander Nix and a Palantir executive discussed working together on election campaigns:


In early 2013, Alexander Nix, an SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns.

So Sophie Schmidt swooped in to promote Palantir to Cambridge Analytica months after the negotiations began. It raises the question of who encouraged her to do that.

Palantir now admits these negotiations happened, but claims that it chose not to work with Cambridge Analytica because its executives “wanted to steer clear of election work.” And emails reviewed by The Times back up the formal refusal: Nix and Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. And yet, according to Christopher Wylie, some Palantir employees helped engineer Cambridge Analytica’s psychographic models. That suggests Palantir turned down a formal relationship in favor of an informal one:


A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails reviewed by The Times indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined.

In his testimony, Mr. Wylie acknowledged that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge’s psychographic models.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

“There were Palantir staff who would come into the office and work on the data…And we would go and meet with Palantir staff at Palantir.”

That sure sounds like a relationship! Formal or not.

And that informal relationship continued during the period in 2014 when Cambridge Analytica was locked in negotiations with Michal Kosinski of the University of Cambridge’s Psychometrics Centre:


In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients.

And it was during those negotiations, in May of 2014, when Chmieliauskas first proposed the idea of just replicating what the University of Cambridge Psychometrics Centre was doing for leverage in the negotiations. When those negotiations ultimately failed, Cambridge Analytica found another Cambridge University psychologist, Aleksandr Kogan, to build the app for them:


“I had left field idea,” Mr. Chmieliauskas wrote in May 2014. “What about replicating the work of the cambridge prof as a mobile app that connects to facebook?” Reproducing the app, Mr. Chmieliauskas wrote, “could be a valuable leverage negotiating with the guy.”

Those negotiations failed. But Mr. Wylie struck gold with another Cambridge researcher, the Russian-American psychologist Aleksandr Kogan, who built his own personality quiz app for Facebook. Over subsequent months, Dr. Kogan’s work helped Cambridge develop psychological profiles of millions of American voters.

And that’s what we know so far about the relationship between Cambridge Analytica and Palantir. Which raises a number of questions. Like whether or not this informal relationship continued well after Cambridge Analytica started harvesting all that Facebook information. Let’s look at the key facts, and open questions, about Palantir’s involvement so far:

1. Palantir employees helped build the psychographic profiles.

2. Mr. Chmieliauskas was in contact with Wylie at least as late as May of 2014 as Cambridge Analytica was negotiating with the University of Cambridge’s Psychometrics Centre.

3. We don’t know when this informal relationship between Palantir and Cambridge Analytica ended.

4. We don’t know if the informal relationship between Palantir and Cambridge Analytica – which largely appears to center around Mr. Chmieliauskas – really was largely Chmieliauskas’s initiative alone after Palantir initially rejected a formal relationship (it’s possible) or if Chmieliauskas was directed to pursue this relationship informally but on behalf of Palantir to maintain deniability in the case of awkward situations like the present one (also very possible, and savvy given the current situation).

5. We don’t know whether the Palantir employees who helped build those psychographic profiles were working with the data Cambridge Analytica harvested from Facebook or with the earlier, inadequate data sets that didn’t include the Facebook data. Because if the Palantir employees helped build the psychographic profiles from the Facebook data, this informal relationship went on a lot longer than May of 2014, since that’s when collection via Kogan’s app first began. How much longer? We don’t yet know.

6. Neither do we know how much of this data ultimately fell into the hands of Palantir. As Wylie described it, “There were Palantir staff who would come into the office and work on the data…And we would go and meet with Palantir staff at Palantir.” So did those Palantir employees who were working on “the data” take any of that data back to Palantir?

7. For that matter, given that Peter Thiel sits on the board of Facebook, and given how freely Facebook hands out this kind of data, we have to ask whether or not Palantir already has direct access to exactly the kind of data Cambridge Analytica was harvesting. Did Palantir even need Cambridge Analytica’s data? Perhaps Palantir was already using apps of its own to harvest this kind of data? We don’t know. At the same time, don’t forget that even if Palantir had ready access to the same Facebook profile data gathered by Kogan’s app, it’s still possible Palantir would have had an interest in the company purely to see how the data was analyzed and learn from that. In other words, for Peter Thiel’s Palantir the interest in Cambridge Analytica may have been more about the algorithms than the data. Don’t forget that if anyone is the real power behind the throne at Facebook, it’s probably Thiel.

8. What on earth is going on with Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt, pushing Cambridge Analytica to work with Palantir in June of 2013, months after Cambridge Analytica and Palantir began talking with each other? That seems potentially significant.

Those are just some of the questions raised by Palantir’s ambiguously ominous relationship with Cambridge Analytica. But don’t forget that it’s not just Palantir about which we need to ask these kinds of questions. For instance, what about Steve Bannon’s Breitbart? Does Breitbart, home of the neo-Nazi ‘Alt Right’, also have access to all that harvested Cambridge Analytica data? Not just the raw Facebook data but also the processed psychological profiles of 50 million Americans that Cambridge Analytica generated. Does Breitbart have the processed profiles too? And what about the Republican Party? And all the other entities out there that gained access to this Facebook profile data? Just how many different entities around the globe possess that Cambridge Analytica data set?

It’s Not Just Cambridge Analytica. Or Facebook. Or Google. It’s Society.

Of course, as we saw with Sandy Parakilas’s whistle-blower claims, when it comes to the question of who might possess Facebook profile data harvested during the 2007-2014 period when Facebook’s “friends permissions” policy was in effect, the list of suspects includes potentially hundreds of thousands of developers, along with anyone who has purchased this information on the black market.

Don’t forget one of the other amazing aspects of this whole situation: if hundreds of thousands of developers were using this feature to scrape user profiles, this really was an open secret. Lots and lots of people were doing this. For years. So, like many scandals, perhaps the most scandalous part is that we’re learning about something we should have known all along, and that many of us did know all along. It’s not like it’s a secret that people are being surveilled in detail in the internet age, or that this data is being stored and aggregated in public and private databases and put up for sale. We’ve collectively known this all along. At least on some level.

And yet this surveillance is so pervasive that it’s almost never thought about on a moment by moment basis at an individual level. When people browse the web they presumably aren’t thinking about the volume of tracking cookies and other personal information slurped up as a result of that mouse click. Nor are they thinking about how that click contributes to the numerous personal profiles of them floating around the commercial data brokerage marketplace. So in a more fundamental sense we don’t actually know we’re being surveilled because we’re not thinking about it.

It’s one example of how humans aren’t wired to naturally think about the macro forces impacting their lives in day-to-day decisions, an instinct that was fine when we were cavemen but becomes problematic when we’re literally mastering the laws of physics and shaping our world and environment. From physics and nature to history and contemporary trends, the vast majority of humanity spends very little time studying these topics. That’s completely understandable given the lack of time or resources to do so, but this understandable instinct creates a world perfectly set up for abuse by surveillance states, both public and private, which makes it less understandable and much more problematic.

So, in the interest of gaining perspective on how we got to this point, where Facebook emerged as an ever-growing Panopticon just a few short years after its conception, let’s take a look at one last article. It’s by investigative journalist Yasha Levine, who recently published the must-read book Surveillance Valley: The Secret Military History of the Internet. It’s a book filled with vital historical fun facts about the internet. Fun facts like…

1. How the internet began as a system built for national security purposes, with a focus on military hardware and command-and-control communications in general. But there was also a focus on building a system that could collect, store, process, and distribute the massive volumes of information used to wage the Vietnam War. Beyond that, these early computer networks also acted as a collection and sharing system for domestic national security concerns (concerns that centered on tracking anti-war protesters, civil rights activists, etc.). That’s what the internet started out as: a system for storing data about people and conflict for US national security purposes.

2. Building databases of profiles on people (foreign and domestic) was one of the very first goals of these internet predecessors. In fact, one of the key visionaries behind the development of the internet, Ithiel de Sola Pool, both helped shape the early internet as a surveillance and counterinsurgency technology and pioneered data-driven election campaigns. He even started a private firm to do this: Simulmatics. Pool’s vision was a world where the surveillance state acted as a benign master that kept the peace peacefully, using superior knowledge to nudge people in the ‘right’ direction.

3. This vision of vast databases of personal profiles was largely a secret at first, but it didn’t remain that way. There was actually quite a bit of public paranoia in the US about these internet predecessors, especially within the anti-Vietnam War activist communities. Flash forward a couple of decades and that paranoia has faded almost entirely…until scandals like the current one erupt and we temporarily grow concerned.

4. What Cambridge Analytica is accused of doing is what data giants like Facebook and Google do every day and have been doing for years. And it’s not just the giants. Smaller firms are scooping up vast amounts of information too…it’s just not as vast as what the giants are collecting. Even cute apps, like the wildly popular Angry Birds, have been found to collect all sorts of data about their users.

5. While it’s great that public attention is being directed at the kind of sleazy, manipulative activities Cambridge Analytica was engaging in, deceptively wielding real power over real, unwitting people, it is a wild mischaracterization to act as if Cambridge Analytica was exerting mass mind-control over the masses using internet marketing voodoo. What Cambridge Analytica, or any of the other sleazy manipulators, did was indeed influential, but it needs to be viewed in the context of a political state of affairs where massive numbers of Americans, including Trump voters, really have been collectively failed by the American power establishment for decades. The collapse of the American middle class and the rise of the plutocracy created the kind of macro environment where a carnival barker like Donald Trump could use firms like Cambridge Analytica to ‘nudge’ people toward voting for him. In other words, focusing on Cambridge Analytica’s manipulation of people’s psychological profiles without recognizing the massive political failures of the last several decades in America, the mass socioeconomic failures of the American embrace of ‘Reaganomics’ and right-wing economic gospel coupled with the American Left’s failure to effectively repudiate these doctrines, is profoundly ahistorical. The story of the rise of firms like Facebook, Google, and Cambridge Analytica implicitly includes the story of that entire history of political and socioeconomic failures tied to the failure to effectively respond to the rise of the American right wing over the last several decades. And we are making a massive mistake if we forget that. Cambridge Analytica wouldn’t have been nearly as effective in nudging people toward voting for someone like Trump if so many people weren’t already so ready to burn the current system down.

These are the kinds of historical chapters that can’t be left out of any analysis of Cambridge Analytica. Because Cambridge Analytica isn’t the exception. It’s an exceptionally sleazy example of the rules we’ve been playing by for a while, whether we realized it or not:

The Baffler

The Cambridge Analytica Con

Yasha Levine
March 21, 2018

“The man with the proper imagination is able to conceive of any commodity in such a way that it becomes an object of emotion to him and to those to whom he imparts his picture, and hence creates desire rather than a mere feeling of ought.”

Walter Dill Scott, Influencing Men in Business: Psychology of Argument and Suggestion (1911)

This week, Cambridge Analytica, the British election data outfit funded by billionaire Robert Mercer and linked to Steven Bannon and President Donald Trump, blew up the news cycle. The charge, as reported by twin exposés in the New York Times and the Guardian, is that the firm inappropriately accessed Facebook profile information belonging to 50 million people and then used that data to construct a powerful internet-based psychological influence weapon. This newfangled construct was then used to brainwash-carpet-bomb the American electorate, shredding our democracy and turning people into pliable zombie supporters of Donald Trump.

In the words of a pink-haired Cambridge Analytica data-warrior-turned-whistleblower, the company served as a digital armory that turned “Likes” into weapons and produced “Steve Bannon’s psychological warfare mindfuck tool.”

Scary, right? Makes me wonder if I’m still not under Cambridge Analytica’s influence right now.

Naturally, there are also rumors of a nefarious Russian connection. And apparently there’s more dirt coming. Channel 4 News in Britain just published an investigation showing top Cambridge Analytica execs bragging to an undercover reporter that their team uses high-tech psychometric voodoo to win elections for clients all over the world, but also dabbles in traditional meatspace techniques as well: bribes, kompromat, blackmail, Ukrainian escort honeypots—you know, the works.

It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet —and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.

But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.

Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.

All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.

What do these companies know about us, their users? Well, just about everything.

Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.

On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.

It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment, that’s been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of services for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Mobile Windows like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smart phone market and dominates the mobile for-profit surveillance business.

These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.

Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.

The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”

Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.

Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver targeted voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pinpoint accuracy based on their beliefs and views.

Here’s how The National Journal’s Andrew Rice described i360 in 2015:

Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.

Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”

In 2008, the hip young Blackberry-toting Barack Obama was the first major-party candidate on the national scene to truly leverage the power of internet-targeted agitprop. With help from Facebook cofounder Chris Hughes, who built and ran Obama’s internet campaign division, the first Obama campaign built an innovative micro-targeting initiative to raise huge amounts of money in small chunks directly from Obama’s supporters and sell his message with a hitherto unprecedented laser-guided precision in the general election campaign.

Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.

On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.

So our present-day freakout over Cambridge Analytica needs to be put in the broader historical context of our decades-long complacency over Silicon Valley’s business model. The fact is that companies like Facebook and Google are the real malicious actors here—they are vital public communications systems that run on profiling and manipulation for private profit without any regulation or democratic oversight from the society in which they operate. But, hey, let’s blame Cambridge Analytica. Or better yet, take a cue from the Times and blame the Russians along with Cambridge Analytica.

***

There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still desperately want to believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.

The truth is that the internet has never been about egalitarianism or democracy.

The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.

The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.

Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defence in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”

In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recall one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
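The Simulmatics approach described above can be sketched in a few lines of code. This is my own toy illustration, not Pool’s actual model: the voter attributes, types, and reaction scores below are all hypothetical stand-ins for the survey-derived data his firm worked with.

```python
# Toy sketch of the Simulmatics idea: reduce each voter to a vector of
# attributes, pool voters into "types," and estimate how each type would
# respond to a given campaign message. All data here is invented.
from collections import defaultdict

# Hypothetical voters, each described by a handful of attributes
# (Pool's real model used 480 such data points per voter).
voters = [
    {"religion": "catholic",   "region": "northeast", "union": True},
    {"religion": "catholic",   "region": "midwest",   "union": False},
    {"religion": "protestant", "region": "south",     "union": False},
]

# Hypothetical reaction scores per voter type for one message,
# standing in for estimates derived from polling data.
reaction_to_message = {
    ("catholic", True):    +0.6,  # e.g. a pro-labor message lands well here
    ("catholic", False):   +0.2,
    ("protestant", False): -0.3,
}

def simulate(voters, reactions):
    """Aggregate the predicted response by voter type -- the 'simulation'."""
    totals = defaultdict(float)
    for v in voters:
        key = (v["religion"], v["union"])
        totals[key] += reactions.get(key, 0.0)
    return dict(totals)

print(simulate(voters, reaction_to_message))
```

A campaign would then rerun the simulation with different candidate messages and keep whichever one scored best across the voter types it cared about.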

These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.

Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.

At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”

With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.

This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.

Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.

***

Simulmatics and its first-generation imitations are now ancient history—dating back to the long-ago time when computers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The internet surrounds us, engulfing and monitoring everything we do. We are tracked and watched and profiled every minute of every day by countless companies—from giant platform monopolies like Facebook and Google to boutique data-driven election firms like i360 and Cambridge Analytica.

Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.

Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.

***

And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to the status quo, when everything was good.

Cambridge Analytica has been one of the lesser bogeymen used to explain Trump’s victory for quite a while, going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica’s Facebook heist, was skeptically questioning the company’s technology and its role in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company’s overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.

Yet now, with Robert Mueller’s Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn’t just a far-fetched marketing ploy—it’s a real and present danger to a virtuous info-media status quo. And it’s most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.

It’s a comforting idea for our political elite, but it’s not true. Alexander Nix, Cambridge Analytica’s well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix’s business plan is but an updated version of Ithiel de Sola Pool’s vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we’re kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue’s gallery of bad political actors, but to finger the real culprits behind Donald Trump’s takeover of America, the self-appointed watchdogs of our country’s imperiled political virtue had best take a long and sobering look in the mirror.

———-

“The Cambridge Analytica Con” by Yasha Levine; The Baffler; 03/21/2018

“It’s good that the mainstream news media are finally starting to pay attention to this dark corner of the internet—and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It’s about time.”

Yes indeed, it is great to see that this topic is finally getting the attention it has long deserved. But it’s not great to see the topic limited to Cambridge Analytica and Facebook. As Levine puts it, “We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries.” Societies in general are held together via overt and covert influence, but we’ve gotten really, really good at that over the last half century in America and the story of Cambridge Analytica, and the larger story of Sandy Parakilas’s whistle-blowing about mass data collection, can’t really be understood outside that historical context:


But this story is being covered and framed in a misleading way. So far, much of the mainstream coverage, driven by the Times and Guardian reports, looks at Cambridge Analytica in isolation—almost entirely outside of any historical or political context. This makes it seem to readers unfamiliar with the long history of the struggle for control of the digital sphere as if the main problem is that the bad actors at Cambridge Analytica crossed the transmission wires of Facebook in the Promethean manner of Victor Frankenstein—taking what were normally respectable, scientific data protocols and perverting them to serve the diabolical aim of reanimating the decomposing lump of political flesh known as Donald Trump.

So if we’re going to view the actions of Cambridge Analytica in their proper light, we need first to start with an admission. We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries. The use of manipulative, psychologically driven advertising and marketing techniques to sell us products, lifestyles, and ideas has been the foundation of modern American society, going back to the days of the self-styled inventor of public relations, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ailing consumer society together. And when it comes to marketing candidates and political messages, using data to influence people and shape their decisions has been the holy grail of the computer age, going back half a century.

And the first step in putting the Cambridge Analytica story in proper perspective is recognizing that what it is accused of doing – grabbing personal data and building profiles for the purpose of influencing voters – is done every day by entities like Facebook and Google. It’s a regular part of our lives. And you don’t even need to use Facebook or Google to become part of this vast commercial surveillance system. You just need to communicate with someone who does use those platforms:


Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.

All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.

What do these companies know about us, their users? Well, just about everything.

Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.

On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.
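The “shadow profile” logic described above can be made concrete with a toy sketch. This is entirely my own construction, not Google’s or Facebook’s actual code, and the addresses and topics below are invented: the point is simply that a profile gets keyed on someone who never signed up for the service.

```python
# Toy illustration of shadow profiling: a non-user gets profiled merely
# because registered users correspond with them. All data is invented.
from collections import defaultdict

# Hypothetical messages visible to the platform; the sender never registered.
inbox_events = [
    {"from": "outsider@example.com", "to": "user1@gmail.test", "topic": "mortgage"},
    {"from": "outsider@example.com", "to": "user2@gmail.test", "topic": "mortgage"},
    {"from": "outsider@example.com", "to": "user1@gmail.test", "topic": "diabetes"},
]

def build_shadow_profiles(events):
    """Accumulate topic counts keyed on addresses that never opted in."""
    profiles = defaultdict(lambda: defaultdict(int))
    for e in events:
        profiles[e["from"]][e["topic"]] += 1
    return {addr: dict(topics) for addr, topics in profiles.items()}

print(build_shadow_profiles(inbox_events))
```

The person behind `outsider@example.com` never agreed to anything, yet the system now holds a monetizable interest profile on them, which is exactly the dynamic Levine describes.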

The next step in contextualizing this is recognizing that Facebook and Google are merely the biggest fish in an ocean of data brokerage markets that has many smaller inhabitants trying to do the same thing. This is part of what makes Facebook’s handing over of profile data to app developers so scandalous: Facebook clearly knew there was a voracious market for this information and made a lot of money selling into that market:


It’s not just the big platform monopolies that do this, but all the smaller companies that run their businesses on services operated by Google and Facebook. It even includes cute games like Angry Birds, developed by Finland’s Rovio Entertainment, that’s been downloaded more than a billion times. The Android version of Angry Birds was found to pull personal data on its players, including ethnicity, marital status, and sexual orientation—including options for the “single,” “married,” “divorced,” “engaged,” and “swinger” categories. Pulling personal data like this didn’t contradict Google’s terms of service for its Android platform. Indeed, for-profit surveillance was the whole point of why Google started planning to launch an iPhone rival as far back as 2004.

In launching Android, Google made a gamble that by releasing its proprietary operating system to manufacturers free of charge, it wouldn’t be relegated to running apps on Apple iPhone or Microsoft Windows Mobile like some kind of digital second-class citizen. If it played its cards right and Android succeeded, Google would be able to control the environment that underpins the entire mobile experience, making it the ultimate gatekeeper of the many monetized interactions among users, apps, and advertisers. And that’s exactly what happened. Today, Google monopolizes the smartphone market and dominates the mobile for-profit surveillance business.

These detailed psychological profiles, together with the direct access to users that platforms like Google and Facebook deliver, make both companies catnip to advertisers, PR flacks—and dark-money political outfits like Cambridge Analytica.

And when it comes to political campaigns, the digital giants like Facebook and Google already have special election units set up to give privileged access to political campaigns so they can influence voters even more effectively. The stories about the Trump campaign’s use of Facebook “embeds” to run a massive systematic advertising campaign of “A/B testing on steroids” to systematically experiment on voter ad responses are part of that larger story of how these giants have already made the manipulation of voters big business:


Indeed, political campaigns showed an early and pronounced affinity for the idea of targeted access and influence on platforms like Facebook. Instead of blanketing airwaves with a single political ad, they could show people ads that appealed specifically to the issues they held dear. They could also ensure that any such message spread through a targeted person’s larger social network through reposting and sharing.

The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.

The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook’s data system. So armed, digital political operatives can then use those people’s social networks to identify other prospective voters who might be supportive of their candidate—and then target them with a whole new tidal wave of ads. “There’s a level of precision that doesn’t exist in any other medium,” Crystal Patterson, a Facebook employee who works with government and politics customers, told the New York Times back in 2015. “It’s getting the right message to the right people at the right time.”

Naturally, a whole slew of companies and operatives in our increasingly data-driven election scene have cropped up over the last decade to plug in to these amazing influence machines. There is a whole constellation of them working all sorts of strategies: traditional voter targeting, political propaganda mills, troll armies, and bots.

Some of these firms are politically agnostic; they’ll work for anyone with cash. Others are partisan. The Democratic Party Data Death Star is NGP VAN. The Republicans have a few of their own—including i360, a data monster generously funded by Charles Koch. Naturally, i360 partners with Facebook to deliver targeted voters. It also claims to have 700 personal data points cross-tabulated on 199 million voters and nearly 300 million consumers, with the ability to profile and target them with pinpoint accuracy based on their beliefs and views.

Here’s how The National Journal’s Andrew Rice described i360 in 2015:

Like Google, the National Security Agency, or the Democratic data machine, i360 has a voracious appetite for personal information. It is constantly ingesting new data into its targeting systems, which predict not only partisan identification but also sentiments about issues such as abortion, taxes, and health care. When I visited the i360 office, an employee gave me a demonstration, zooming in on a map to focus on a particular 66-year-old high school teacher who lives in an apartment complex in Alexandria, Virginia. . . . Though the advertising industry typically eschews addressing any single individual—it’s not just invasive, it’s also inefficient—it is becoming commonplace to target extremely narrow audiences. So the schoolteacher, along with a few look-alikes, might see a tailored ad the next time she clicks on YouTube.

Silicon Valley doesn’t just offer campaigns a neutral platform; it also works closely alongside political candidates to the point that the biggest internet companies have become an extension of the American political system. As one recent study showed, tech companies routinely embed their employees inside major political campaigns: “Facebook, Twitter, and Google go beyond promoting their services and facilitating digital advertising buys, actively shaping campaign communication through their close collaboration with political staffers . . . these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution.”

And offering special services to campaigns to manipulate voters isn’t just big business. It’s a largely unregulated business. If Facebook decides to covertly manipulate you by altering its newsfeed algorithms so it shows you more news articles from your conservative-leaning friends (or liberal-leaning friends), that’s totally legal. Because, again, subtly manipulating people is as American as apple pie:


Now, of course, every election is a Facebook Election. And why not? As Bloomberg News has noted, Silicon Valley ranks elections “alongside the Super Bowl and the Olympics in terms of events that draw blockbuster ad dollars and boost engagement.” In 2016, $1 billion was spent on digital advertising—with the bulk going to Facebook, Twitter, and Google.

What’s interesting here is that because so much money is at stake, there are absolutely no rules that would restrict anything an unsavory political apparatchik or a Silicon Valley oligarch might want to foist on the unsuspecting digital public. Creepily, Facebook’s own internal research division carried out experiments showing that the platform could influence people’s emotional state in connection to a certain topic or event. Company engineers call this feature “emotional contagion”—i.e., the ability to virally influence people’s emotions and ideas just through the content of status updates. In the twisted economy of emotional contagion, a negative post by a user suppresses positive posts by their friends, while a positive post suppresses negative posts. “When a Facebook user posts, the words they choose influence the words chosen later by their friends,” explained the company’s lead scientist on this study.

On a very basic level, Facebook’s opaque control of its feed algorithm means the platform has real power over people’s ideas and actions during an election. This can be done by a data shift as simple and subtle as imperceptibly tweaking a person’s feed to show more posts from friends who are, say, supporters of a particular political candidate or a specific political idea or event. As far as I know, there is no law preventing Facebook from doing just that: it’s plainly able and willing to influence a user’s feed based on political aims—whether done for internal corporate objectives, or due to payments from political groups, or by the personal preferences of Mark Zuckerberg.
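The kind of imperceptible feed tweak described in the excerpt above is trivially easy to express in code. The following is a minimal sketch of my own devising, not Facebook’s actual ranking system: the post data, the boosted cohort, and the multiplier are all hypothetical, and a real feed ranker would weigh hundreds of signals.

```python
# Toy sketch of a feed-ranking nudge: multiply the rank score of posts
# from one cohort of friends, invisibly to the user. Entirely hypothetical.
posts = [
    {"author": "alice", "engagement": 0.9},
    {"author": "bob",   "engagement": 0.8},
    {"author": "carol", "engagement": 0.7},
]

# Assumed cohort to boost, e.g. friends who support a given candidate.
boosted_authors = {"carol"}
BOOST = 1.5  # a subtle multiplier the user never sees

def rank_feed(posts, boosted, boost=BOOST):
    """Sort posts by engagement score, quietly inflating the boosted cohort."""
    def score(p):
        s = p["engagement"]
        return s * boost if p["author"] in boosted else s
    return sorted(posts, key=score, reverse=True)

print([p["author"] for p in rank_feed(posts, boosted_authors)])
```

With the boost in place, carol’s middling post jumps to the top of the feed; without it, she would rank last. Nothing in the interface tells the user the ordering was tilted, which is the whole point Levine is making.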

And this contemporary state of affairs didn’t emerge spontaneously. As Levine covers in Surveillance Valley, this is what the internet – back when it was the ARPANET military network – was all about from its very conception:


There’s another, bigger cultural issue with the way we’ve begun to examine and discuss Cambridge Analytica’s battery of internet-based influence ops. People are still dazzled by the idea that the internet, in its pure, untainted form, is some kind of magic machine distributing democracy and egalitarianism across the globe with the touch of a few keystrokes. This is the gospel preached by a stalwart chorus of Net prophets, from Jeff Jarvis and the late John Perry Barlow to Clay Shirky and Kevin Kelly. These charlatans all feed on an honorable democratic impulse: people still want to desperately believe in the utopian promise of this technology—its ability to equalize power, end corruption, topple corporate media monopolies, and empower the individual.

This mythology—which is of course aggressively confected for mass consumption by Silicon Valley marketing and PR outfits—is deeply rooted in our culture; it helps explain why otherwise serious journalists working for mainstream news outlets can unironically employ phrases such as “information wants to be free” and “Facebook’s engine of democracy” and get away with it.

The truth is that the internet has never been about egalitarianism or democracy.

The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.

The influence of U.S. counterinsurgency doctrine on the development of modern computers and the internet is not something that many people know about. But it is a subject that I explore at length in my book, Surveillance Valley. So what jumps out at me is how seamlessly the reported activities of Cambridge Analytica fit into this historical narrative.

“The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.”

And one of the key figures behind this early ARPANET version of the internet, Ithiel de Sola Pool, got his start in this area in the 1950s, working at the Hoover Institution at Stanford University to understand the nature and causes of left-wing revolutions and distill them down to a mathematical formula. Pool, a virulent anti-Communist, also worked for JFK’s 1960 campaign and went on to start a private company, Simulmatics, offering services in modeling and manipulating human behavior based on large data sets on people:


Cambridge Analytica is a subsidiary of the SCL Group, a military contractor set up by a spooky huckster named Nigel Oakes that sells itself as a high-powered conclave of experts specializing in data-driven counterinsurgency. It’s done work for the Pentagon, NATO, and the UK Ministry of Defense in places like Afghanistan and Nepal, where it says it ran a “campaign to reduce and ultimately stop the large numbers of Maoist insurgents in Nepal from breaking into houses in remote areas to steal food, harass the homeowners and cause disruption.”

In the grander scheme of high-tech counterinsurgency boondoggles, which features such storied psy-ops outfits as Peter Thiel’s Palantir and Cold War dinosaurs like Lockheed Martin, the SCL Group appears to be a comparatively minor player. Nevertheless, its ambitious claims to reconfigure the world order with some well-placed algorithms recalls one of the first major players in the field: Simulmatics, a 1960s counterinsurgency military contractor that pioneered data-driven election campaigns and whose founder, Ithiel de Sola Pool, helped shape the development of the early internet as a surveillance and counterinsurgency technology.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for himself helping calibrate the messaging of John F. Kennedy’s 1960 presidential campaign. His idea was to model the American electorate by deconstructing each voter into 480 data points that defined everything from their religious views to racial attitudes to socio-economic status. He would then use that data to run simulations on how they would respond to a particular message—and those trial runs would permit major campaigns to fine-tune their messages accordingly.
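The mechanics described above can be illustrated with a toy model. Here’s a minimal, entirely hypothetical Python sketch of the Simulmatics-style approach – voters reduced to a handful of numeric traits (standing in for Pool’s 480 data points) and candidate messages scored against the simulated electorate before either one ever airs. All trait names, weights, and numbers here are invented for illustration, not drawn from Simulmatics itself:

```python
# Hypothetical sketch of Pool-style voter simulation. Every trait, weight,
# and number below is illustrative, not historical.
from dataclasses import dataclass

@dataclass
class Voter:
    # A tiny stand-in for Pool's 480 data points per voter, each on a 0.0-1.0 scale.
    religiosity: float
    racial_conservatism: float
    income_percentile: float

def predicted_support(voter: Voter, message_weights: dict) -> float:
    """Crude linear model: weight each trait by how strongly the message plays to it."""
    score = (message_weights["religiosity"] * voter.religiosity
             + message_weights["racial_conservatism"] * voter.racial_conservatism
             + message_weights["income"] * voter.income_percentile)
    return max(0.0, min(1.0, score))  # clamp to a 0-1 support probability

def simulate(electorate, message_weights):
    """Average predicted support across the modeled electorate."""
    return sum(predicted_support(v, message_weights) for v in electorate) / len(electorate)

# A three-voter "electorate" and two candidate messages to pre-test.
electorate = [Voter(0.9, 0.2, 0.3), Voter(0.1, 0.8, 0.7), Voter(0.5, 0.5, 0.5)]
msg_a = {"religiosity": 0.6, "racial_conservatism": 0.1, "income": 0.3}
msg_b = {"religiosity": 0.2, "racial_conservatism": 0.6, "income": 0.1}
print(simulate(electorate, msg_a), simulate(electorate, msg_b))
```

The point of the exercise, then as now, is that a campaign can compare messages cheaply in simulation and spend real money only on the one the model predicts will perform best.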

These new targeted messaging tactics, enabled by rudimentary computers, had many fans in the permanent political class of Washington; their livelihoods, after all, were largely rooted in their claims to analyze and predict political behavior. And so Pool leveraged his research to launch Simulmatics, a data analytics startup that offered computer simulation services to major American corporations, helping them pre-test products and construct advertising campaigns.

Simulmatics also did a brisk business as a military and intelligence contractor. It ran simulations for Radio Liberty, the CIA’s covert anti-communist radio station, helping the agency model the Soviet Union’s internal communication system in order to predict the effect that foreign news broadcasts would have on the country’s political system. At the same time, Simulmatics analysts were doing counterinsurgency work under an ARPA contract in Vietnam, conducting interviews and gathering data to help military planners understand why Vietnamese peasants rebelled and resisted American pacification efforts. Simulmatics’ work in Vietnam was just one piece of a brutal American counterinsurgency policy that involved covert programs of assassinations, terror, and torture that collectively came to be known as the Phoenix Program.

And part of what drove Pool was a utopian belief that computers and massive amounts of data could be used to run society harmoniously. Left-wing revolutions were problems to be managed with Big Data. It’s pretty important historical context when thinking about the role Cambridge Analytica played in electing Donald Trump:


At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”

And guess what: the American public wasn’t enamored with Pool’s vision of a world managed by computing technology and Big Data models of society. When the public learned, in the ’60s and ’70s, about these early versions of the internet and the visions of a computer-managed world that inspired them, it got scared:


With the advent of cheaper computer technology in the 1960s, corporate and government databases were already making a good deal of Pool’s prophecy come to pass, via sophisticated new modes of consumer tracking and predictive modeling. But rather than greeting such advances as the augurs of a new democratic miracle, people at the time saw it as a threat. Critics across the political spectrum warned that the proliferation of these technologies would lead to corporations and governments conspiring to surveil, manipulate, and control society.

This fear resonated with every part of the culture—from the new left to pragmatic centrists and reactionary Southern Democrats. It prompted some high-profile exposés in papers like the New York Times and Washington Post. It was reported on in trade magazines of the nascent computer industry like ComputerWorld. And it commanded prime real estate in establishment rags like The Atlantic.

Pool personified the problem. His belief in the power of computers to bend people’s will and manage society was seen as a danger. He was attacked and demonized by the antiwar left. He was also reviled by mainstream anti-communist liberals.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship or political platforms but simply sold candidates just like they would a car or a bar of soap.

But that fear somehow disappeared in subsequent decades, replaced with a faith in our benevolent techno-elite. And a faith that this mass public/private surveillance system is actually an empowering tool that will lead to a limitless future. And that is perhaps the biggest scandal here: The public didn’t just forget to keep an eye on the powerful. The public forgot to keep an eye on the people whose power is derived from keeping an eye on the public. We built a surveillance state at the same time we fell into a fog of civic and historical amnesia. And that has coincided with the rise of a plutocracy, the dominance of right-wing anti-government economic doctrines, and the larger failure of the American political and economic elites to deliver a society that actually works for average people. To put it another way, the rise of the modern surveillance state is one element of a massive, decades-long process of collectively ‘dropping the ball’. We screwed up massively, and Facebook and Google are just two of the consequences. And yet we still don’t view the Trump phenomenon within the context of that massive collective screw-up, which means we’re still screwing up massively:


Yet the fear that Ithiel de Sola Pool and his technocratic world view inspired half a century ago has been wiped from our culture. For decades, we’ve been told that a capitalist society where no secrets could be kept from our benevolent elite is not something to fear—but something to cheer and promote.

Now, only after Donald Trump shocked the liberal political class is this fear starting to resurface. But it’s doing so in a twisted, narrow way.

***

And that’s the bigger issue with the Cambridge Analytica freakout: it’s not just anti-historical, it’s also profoundly anti-political. People are still trying to blame Donald Trump’s surprise 2016 electoral victory on something, anything—other than America’s degenerate politics and a political class that has presided over a stunning national decline. The keepers of conventional wisdom all insist in one way or another that Trump won because something novel and unique happened; that something had to have gone horribly wrong. And if you’re able to identify and isolate this something and get rid of it, everything will go back to normal—back to status quo, when everything was good.

So the biggest story here isn’t that Cambridge Analytica was engaged in a mass manipulation campaign. And the biggest story isn’t even that Cambridge Analytica was engaged in a cutting-edge commercial mass manipulation campaign. Both of those stories are eclipsed by the fact that even if Cambridge Analytica really was running a cutting-edge commercial campaign, it probably wasn’t nearly as cutting-edge as what Facebook and Google and the other data giants routinely engage in. And this situation has been building for decades, within the context of the much larger scandal of the rise of an oligarchy that more or less runs America by and for powerful interests. Powerful interests that are overwhelmingly dedicated to right-wing elitist doctrines that view the public as a resource to be controlled and exploited for private profit.

It’s all a reminder that, as with so many incredibly complex issues, creating very high quality government is the only feasible answer. A high quality government managed by a self-aware public. Some sort of ‘surveillance state’ is almost an inevitability as long as we have ubiquitous surveillance technology. Even the array of ‘crypto’ tools touted in recent years has consistently proven to be vulnerable, which isn’t necessarily a bad thing, since ubiquitous crypto-technology comes with its own suite of mega-collective headaches. National security and personal data insecurity really are intertwined in both mutually inclusive and exclusive ways. It’s not as if the national security hawk argument that “you can’t be free if you’re dead from [insert war, terror, or the random chaos a national security state is supposed to deal with]” isn’t valid. But fears of Big Brother are also valid, as our present situation amply demonstrates. The path isn’t clear, which is why a national security state with a significant private sector component and access to ample intimate details is likely for the foreseeable future, whether you like it or not. People err on the side of immediate safety. So we had better have very high quality government. Especially high quality regulations for the private sector components of that national security state.

And while digital giants like Google and Facebook will inevitably have access to troves of personal data in order to offer the kinds of services people need, there’s no reason they can’t be regulated heavily so they don’t become personal data repositories for sale. Which is what they are now.

What do we do about services that people use to run their lives which, by definition, necessitate the collection of private data by a third party? How do we deal with these challenges? Well, again, it starts with being aware of them and actually trying to collectively grapple with them so some sort of general consensus can be arrived at. And that’s all why we need to recognize that it is imperative that the public surveils the surveillance state, along with surveilling the rest of the world going on around us too. A self-aware surveillance state comprised of a self-aware populace who know what’s going on with their surveillance state and the world. In other words, part of the solution to ‘Big Data Big Brother’ really is a society of ‘Little Brothers and Sisters’ who are collectively very informed about what is going on in the world and politically capable of effecting changes to that surveillance state – and the rest of government or the private sector – when necessary change is identified. In other other words, the one ‘utopian’ solution we can’t afford to give up on is the utopia of a well-functioning democracy populated by a well-informed citizenry. A well-armed citizenry, armed with relevant facts and wisdom (and an extensive understanding of the history and technique of fascism and other authoritarian movements). Because a clueless society will be an abusively surveilled society.

But the fact that this Cambridge Analytica scandal is a surprise, and is being covered largely in isolation from this broader historic and contemporary context, is a reminder that we are nowhere near that democratic ideal of a well-informed citizenry. Well, guess what would be a really valuable tool for surveilling the surveillance state and the rest of the world around us and becoming that well-informed citizenry: the internet! Specifically, we really do need to read and digest growing amounts of information to make sense of an increasingly complex world. But the internet is just the start. The goal needs to be the kind of functional, self-aware democracy where situations like the current one don’t develop in a fog of collective amnesia and can be pro-actively managed. To put it another way, we need an inverse of Ithiel de Sola Pool’s vision of a world where benevolent elites use computers and Big Data to manage the rabble and ward off political revolutions. Instead, we need a political revolution of the rabble, fueled by the knowledge of our history and our world that the internet makes widely accessible. And one of the key goals of that political revolution needs to be to create a world in which the knowledge the internet makes widely available is used to rein in our elites and build a world that works for everyone.

And yes, that implies a left-wing revolution, since left-wing democratic movements are the only kind that have everyone in mind. And yes, this implies an economic revolution that systematically frees up time for virtually everyone, so people actually have the time to inform themselves. Economic security and time security. We need to build a world that provides both to everyone.

So when we ask ourselves how we should respond to the growing Cambridge Analytica/Facebook scandal, don’t forget that one of the key lessons the story of Cambridge Analytica teaches us is that there is an immense amount of knowledge about ourselves – our history and contemporary context – that we needed to learn and didn’t. And that includes envisioning what a functional democratic society and economy that works for everyone would look like, and building it. Yes, the internet could be very helpful in that process; just don’t forget about everything else that will be required to build that functional democracy.

Discussion

17 comments for “The Cambridge Analytica Microcosm in Our Panoptic Macrocosm”

  1. Here’s a good example of how many of the problems with Facebook are facilitated by the many privacy problems with the rest of the tech sector: A number of Facebook users discovered a rather creepy privacy violation by Facebook. It turns out that Facebook was collecting metadata about the calls and texts people were sending from their smartphones with the Facebook app and Google’s Android operating system.

    And it also turns out that Facebook used a number of sleazy excuses to “get permission” to collect this data. First, Facebook had users agree to giving such data away by hiding it in obtuse language in the user agreement. Second, the default setting for the Facebook app was to give this data away. Users could turn off this data sharing, but it was never obvious it was on.

    Third, it was based on exploiting how Android’s user permissions system encourages people to share vast amounts of data without realizing it. This is where this becomes a Google scandal too. On Android, the Facebook app would try to get permission to access your phone contact information, ostensibly to be used for Facebook’s friend recommendation algorithms. If you granted permission to read contacts during the Facebook app’s installation on older versions of Android – before version 4.1 (Jelly Bean) – giving an app permission to read contact information also granted it access to call and message logs by default. So this was just egregious privacy design by Google, and Facebook egregiously exploited it (surprise!).

    And when this loose permissions system was fixed in later versions of Android, Facebook continued to use a loophole to keep grabbing the call and text metadata. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so the Facebook app could continue to gain access to call and SMS data by specifying an earlier Android SDK version. In other words, upgrading the Android operating system didn’t guarantee that upgrades to user data privacy rules would actually take effect for the apps you already had installed. Which, again, is egregious. But that’s what Google’s Android operating system allowed, and Facebook totally exploited it until Google finally closed the loophole in October of 2017.
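    To make those mechanics concrete, here’s a simplified model in Python of the permission behavior being described. This is not actual Android code – the permission names and the grouping rule are schematic stand-ins for what the reports describe (the real gate is the app’s declared target SDK version, and the real permissions are things like READ_CONTACTS and READ_CALL_LOG):

```python
# Schematic model of the pre-/post-API-16 Android permission grouping
# described above. Not real Android code: permission names and the
# grouping rule are illustrative stand-ins.

API_16 = 16  # Jelly Bean: call/SMS log access was split off from contacts

def effective_permissions(granted, device_api_level, app_target_sdk):
    """Permissions an app could actually exercise under the old rules."""
    effective = set(granted)
    # Before the API 16 split -- or for an app still declaring an older
    # target SDK, even on a newer device -- contacts access dragged call
    # and message logs along with it.
    if "READ_CONTACTS" in granted and min(device_api_level, app_target_sdk) < API_16:
        effective.update({"READ_CALL_LOG", "READ_SMS"})
    return effective

# A user grants only contacts access to an app that targets an old SDK,
# even though the device itself runs a modern Android version:
print(effective_permissions({"READ_CONTACTS"}, device_api_level=25, app_target_sdk=14))
```

    Closing the loophole in October 2017 corresponds, in this toy model, to ignoring the app’s declared target SDK and always applying the modern rule.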

    Note that Apple’s iOS phones didn’t have this issue with the Facebook app, because iOS simply does not give apps access to that kind of information. So the permissions Google was giving away are bad even compared to its major competitor in the smartphone operating system space.

    It’s also quite analogous to what Facebook was doing with the “friends permissions” giveaway of Facebook profile information to app developers. In both cases, a giant privacy-violating loophole was built into a major platform – a loophole developers knew about but the public wasn’t really aware it was signing up for. That’s become much of the modern internet giant business model, and as we can see it’s a model that feeds on itself. Google and Facebook feed information to each other, indicating that the Big Data giants have determined that it’s more profitable to share their data on all of us than to keep it locked and proprietary.

    Recall how Facebook whistle-blower Sandy Parakilas said he remembered Facebook executives getting concerned that they were giving away so much of their information on people to app developers that competitors would be able to create their own social networks. That’s how much data Facebook was giving away. And now we learn that Google’s operating system made an egregious amount of data available to app developers – like metadata on calls and texts – if people gave an app “contact” permissions.

    And so we can see that Facebook and Google aren’t just in the ad space. They’re in the data brokerage space too. They’ve clearly determined that maximizing profits just might require handing over the kind of data people assumed these data giants carefully guarded. Instead, they’ve been carefully and steadily handing that data out. Presumably because it’s more profitable:

    Gizmodo

    Facebook’s Defense for Sucking Up Your Call and Text Data Entirely Misses the Point

    Rhett Jones
    3/26/18 2:00pm

    A number of Facebook users discovered over the past few days that the social media company had collected a creepy level of information about their calls and texts. Many users claimed they never gave Facebook permission to gather this information. However, in response to the uproar, Facebook says the “feature” is opt-in only. Basically, the company’s saying it’s your own fault if you don’t like it.

    To understand what Facebook is defending requires a lot of explanation—and that’s the heart of the problem.

    But as the company faces growing scrutiny over its data practices, a number of users began digging around in their archives. Spurred by a tweet from developer Dylan McKay, social media users complained this weekend that Facebook had records of their contacts, as well as call and text metadata. Facebook has let users export their data since 2010.

    Downloaded my facebook data as a ZIP file. Somehow it has my entire call history with my partner's mum. pic.twitter.com/CIRUguf4vD — Dylan McKay (@dylanmckaynz) March 21, 2018

    Ars Technica spoke with numerous users who felt blindsided, and the publication’s staff did their own tests, finding SMS data and contacts data from an Android device they used in 2015 and 2016. From the report:

    Facebook uses phone-contact data as part of its friend recommendation algorithm. And in recent versions of the Messenger application for Android and Facebook Lite devices, a more explicit request is made to users for access to call logs and SMS logs on Android and Facebook Lite devices. But even if users didn’t give that permission to Messenger, they may have given it inadvertently for years through Facebook’s mobile apps—because of the way Android has handled permissions for accessing call logs in the past.

    If you granted permission to read contacts during Facebook’s installation on Android a few versions ago—specifically before Android 4.1 (Jelly Bean)—that permission also granted Facebook access to call and message logs by default. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so Facebook API could continue to gain access to call and SMS data by specifying an earlier Android SDK version. Google deprecated version 4.0 of the Android API in October 2017—the point at which the latest call metadata in Facebook users’ data was found. Apple iOS has never allowed silent access to call data.

    To put all of that into plain English, Google’s Android OS has its own privacy issues, and coupled with Facebook’s apps, it could’ve made it possible for Facebook users to opt-into the company’s surveillance program without realizing it.

    Facebook responded on Sunday with a “Fact Check” blog post claiming that any assertion that “Facebook has been logging people’s call and SMS (text) history without their permission” is false. As the unsigned blog reads, in part:

    Call and text history logging is part of an opt-in feature for people using Messenger or Facebook Lite on Android. This helps you find and stay connected with the people you care about, and provides you with a better experience across Facebook. People have to expressly agree to use this feature. If, at any time, they no longer wish to use this feature they can turn it off in settings, or here for Facebook Lite users, and all previously shared call and text history shared via that app is deleted. While we receive certain permissions from Android, uploading this information has always been opt-in only.

    It’s true that Facebook, as far as we know, has always made SMS metadata collection an opt-in part of the setup process. But take a look at the difference between today’s opt-in screen and one users saw back in 2016.

    Today, Messenger gives you the options to “turn on” metadata collection, opt-out, or learn more. But before it faced criticism in 2016, the only options were “OK” or “settings.” So, it’s likely many people gave Facebook permission at one time without realizing it.

    This is an excellent illustration of the web that Facebook weaves. In the Cambridge Analytica scandal, Facebook allowed the personal data of 50 million users to get into the hands of a third-party app, in part because its policies gave up the data of users’ friends based on permission from a single user. When that third party transferred the information to a political data analysis firm, which was a violation of Facebook’s policies, Facebook did nothing when it found out in 2015 but issue a stern warning and make Cambridge Analytica sign a document promising that the data was deleted. Now, Facebook says that it no longer shotguns that data out to developers based on a single permission, so apparently everyone should feel okay going forward.

    Explaining what’s going on shouldn’t be so difficult or time-consuming. Facebook claims this is all designed to make things more convenient for you. But it doesn’t have to constantly track your text messages and the duration of your calls just to capture your contacts list. That could be a one-time thing that you do when you set the service up, and Facebook could periodically ask if you want to do another import a month later.

    However, Facebook has turned a convenience into an excuse for grabbing more information that it can combine with everything else to make a perfect psychological and social profile of you, the user. And it has demonstrated that it can’t be trusted to keep that data to itself.

    Mark Zuckerberg told CNN last week that he was open to more regulations being applied to his platform. “You know, I think in general technology is an increasingly important trend in the world and I actually think the question is more what’s the right regulation rather than ‘yes’ or ‘no’ should it be regulated,” he said. This is foolish because government regulations will undoubtedly get screwed up and lead to unintended consequences.

    But if Mark insists, the government could create strict terms of service requirements for what a company explains to a user before they sign up. Those regulations could require clear examples of how data might be used and even require users to complete a simple quiz to show they understand before finalizing the app’s setup. Of course, that kind of burdensome activity wouldn’t be necessary if Facebook would just make everything clear on its own. Unfortunately, with congressional hearings scheduled, and government agency investigations underway, it may be too late.

    ———-

    “Facebook’s Defense for Sucking Up Your Call and Text Data Entirely Misses the Point” by Rhett Jones; Gizmodo; 03/26/2018

    “To understand what Facebook is defending requires a lot of explanation—and that’s the heart of the problem.”

    It’s a key insight: it really is a reflection of the heart of the problem that simply understanding what Facebook is defending requires a lot of explanation. When Facebook started collecting people’s call and text metadata through its app, it was exploiting the fact that Google’s Android system allowed it to do that in the first place when users gave “contact” permissions to an app (most people probably didn’t assume that giving an app contact permissions was also giving away call and text metadata). And then, after Google changed the Android app permissions system and separated the permissions for contact information from the permissions for call and text metadata, Facebook relied on a loophole Google provided whereby apps that were already installed could continue collecting that data. And none of this was ever made clear to the millions of people using the Facebook app on their Android phones, because it was hidden in the dense text of user agreements that no one reads. The convolutedness of the act obscures the act.

    And keep in mind that Facebook is claiming that it merely wanted this call and text metadata for its friend recommendations algorithm. Which is, of course, absurd. That data was going to obviously go into the pool of data Facebook is compiling on everyone.


    But as the company faces growing scrutiny over its data practices, a number of users began digging around in their archives. Spurred by a tweet from developer Dylan McKay, social media users complained this weekend that Facebook had records of their contacts, as well as call and text metadata. Facebook has let users export their data since 2010.

    Downloaded my facebook data as a ZIP file. Somehow it has my entire call history with my partner's mum. pic.twitter.com/CIRUguf4vD — Dylan McKay (@dylanmckaynz) March 21, 2018

    Ars Technica spoke with numerous users who felt blindsided, and the publication’s staff did their own tests, finding SMS data and contacts data from an Android device they used in 2015 and 2016. From the report:

    Facebook uses phone-contact data as part of its friend recommendation algorithm. And in recent versions of the Messenger application for Android and Facebook Lite devices, a more explicit request is made to users for access to call logs and SMS logs on Android and Facebook Lite devices. But even if users didn’t give that permission to Messenger, they may have given it inadvertently for years through Facebook’s mobile apps—because of the way Android has handled permissions for accessing call logs in the past.

    If you granted permission to read contacts during Facebook’s installation on Android a few versions ago—specifically before Android 4.1 (Jelly Bean)—that permission also granted Facebook access to call and message logs by default. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so Facebook’s apps could continue to gain access to call and SMS data by specifying an earlier Android SDK version. Google deprecated version 4.0 of the Android API in October 2017—the point at which the latest call metadata in Facebook users’ data was found. Apple iOS has never allowed silent access to call data.

    To put all of that into plain English, Google’s Android OS has its own privacy issues, and coupled with Facebook’s apps, it could’ve made it possible for Facebook users to opt-into the company’s surveillance program without realizing it.

    “To put all of that into plain English, Google’s Android OS has its own privacy issues, and coupled with Facebook’s apps, it could’ve made it possible for Facebook users to opt-into the company’s surveillance program without realizing it.”

    Facebook and Google working together to share more of what they know about us with each other. That’s basically what happened. It was a team effort.
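    The bypass Ars describes operated at the app-manifest level: an Android app that declared a target SDK below 16 kept the pre-Jelly Bean behavior, in which the contacts permission implicitly covered call and SMS logs, with no separate prompt. A simplified illustration of what such a manifest would look like (the package name is hypothetical, and this is a sketch of the mechanism, not Facebook’s actual manifest):

    ```xml
    <!-- Illustrative AndroidManifest.xml fragment, not any real app's manifest. -->
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
              package="com.example.socialapp">
        <!-- Targeting an SDK version below 16 (Jelly Bean) preserved the old
             behavior: READ_CONTACTS implicitly granted call/SMS log access,
             so no separate READ_CALL_LOG / READ_SMS prompt was shown. -->
        <uses-sdk android:minSdkVersion="9" android:targetSdkVersion="15" />
        <uses-permission android:name="android.permission.READ_CONTACTS" />
    </manifest>
    ```

    In other words, simply by declining to raise its declared target SDK, an already-installed app could keep slurping call and text metadata for years after Google had nominally split those permissions apart.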

    And as the article notes, when Facebook claims this was all fine because it was an opt-in option, it ignores the fact that the app used to make it very unclear that opting out was an option at all. The opt-out option was hidden in the settings, and opting in was the default that people had selected when they installed the app. And it was like that as recently as 2016:


    Facebook responded on Sunday with a “Fact Check” blog post claiming that any assertion that “Facebook has been logging people’s call and SMS (text) history without their permission” is false. As the unsigned blog reads, in part:

    Call and text history logging is part of an opt-in feature for people using Messenger or Facebook Lite on Android. This helps you find and stay connected with the people you care about, and provides you with a better experience across Facebook. People have to expressly agree to use this feature. If, at any time, they no longer wish to use this feature they can turn it off in settings, or here for Facebook Lite users, and all previously shared call and text history shared via that app is deleted. While we receive certain permissions from Android, uploading this information has always been opt-in only.

    It’s true that Facebook, as far as we know, has always made SMS metadata collection an opt-in part of the setup process. But take a look at the difference between today’s opt-in screen and one users saw back in 2016.

    Today, Messenger gives you the options to “turn on” metadata collection, opt-out, or learn more. But before it faced criticism in 2016, the only options were “OK” or “settings.” So, it’s likely many people gave Facebook permission at one time without realizing it.

    And it’s also all an example of how the ostensibly helpful reasons for collecting this personalized data (like making the friend recommendation algorithms better, in this case) are used as an excuse to engage in the personal-information equivalent of a smash-and-grab ransacking:


    This is an excellent illustration of the web that Facebook weaves. In the Cambridge Analytica scandal, Facebook allowed the personal data of 50 million users to get into the hands of a third-party app, in part because its policies gave up the data of users’ friends based on permission from a single user. When that third party transferred the information to a political data analysis firm, which was a violation of Facebook’s policies, Facebook did nothing when it found out in 2015 but issue a stern warning and make Cambridge Analytica sign a document promising that the data was deleted. Now, Facebook says that it no longer shotguns that data out to developers based on a single permission, so apparently everyone should feel okay going forward.

    Explaining what’s going on shouldn’t be so difficult or time-consuming. Facebook claims this is all designed to make things more convenient for you. But it doesn’t have to constantly track your text messages and the duration of your calls just to capture your contacts list. That could be a one-time thing that you do when you set the service up, and Facebook could periodically ask if you want to do another import a month later.

    However, Facebook has turned a convenience into an excuse for grabbing more information that it can combine with everything else to make a perfect psychological and social profile of you, the user. And it has demonstrated that it can’t be trusted to keep that data to itself.

    “However, Facebook has turned a convenience into an excuse for grabbing more information that it can combine with everything else to make a perfect psychological and social profile of you, the user. And it has demonstrated that it can’t be trusted to keep that data to itself.”

    While Facebook may not have perfect psychological and social profiles of everyone, they probably have the best or nearly the best, with Google possibly knowing more about people. And it’s hard to imagine that this call and text metadata isn’t potentially pretty valuable information for putting together those personal profiles on everyone. So it’s worth noting that this is potentially the same kind of profile data that Facebook gave out to Cambridge Analytica and thousands of other app developers. In other words, this call and text metadata slurping scandal is potentially also part of the Cambridge Analytica scandal, in the sense that the insights Facebook gained from the call and text metadata could have shown up in those profiles Facebook was handing out to app developers like Cambridge Analytica.

    Which is a reminder that this new scandal of Google’s Android OS giving Facebook this call and text metadata probably involves a lot more than just Facebook collecting this kind of data. Who knows how many other app developers whose apps requested “contact” permissions also went ahead and grabbed all the call and text metadata?

    Also don’t forget that this call and text metadata includes data about the people on the other side of those calls and texts. So Facebook was grabbing data on more people than just the app users. And any other Android developers were potentially grabbing that data too. It’s another parallel with the Facebook “friends permission” loophole exploited by Cambridge Analytica and other Facebook app developers: you don’t have to download these privacy violating apps to be impacted. Simply communicating with someone who does have the privacy violating app will get your privacy violated too.

    So as we can see, Facebook doesn’t just have a scandal involving giving private data away. It also has a scandal involving collecting private data. A scandal that any number of other Android app developers might be involved in too. Which means there’s probably a black market for this kind of data too. Because Google, like Facebook, apparently couldn’t resist making itself a data-broker. And now all this data is potentially floating around out there. It was a wildly irresponsible act on Google’s part to make that kind of data available under the “contacts” permission in the Android operating system, but that’s how thoroughly Google designed that system to prioritize data collection. Presumably to encourage more app developers to make Android apps. Access to our data is literally part of the incentive structure. It’s really quite stunning. And quite analogous to what Facebook is in trouble for with Cambridge Analytica.

    But at least those Facebook friend recommendation algorithms are probably very well powered, so there’s that.

    Posted by Pterrafractyl | April 1, 2018, 11:15 pm
  2. We should probably get ready for a lot more stories like this: Facebook just issued a flurry of new updates to its data-sharing policies. Some of these changes include new restrictions on the data made available to app developers while other changes are focused on clarifying the user agreements that disclose what data is taken.

    And there’s a new estimate from Facebook on the number of Facebook profiles grabbed by Cambridge Analytica’s app. It’s gone from 50 million to 87 million profiles:

    Associated Press

    Facebook: 87M Users May Have Had Data Breached By Cambridge Analytica

    By BARBARA ORTUTAY
    April 4, 2018 2:47 pm

    NEW YORK (AP) — Facebook revealed Wednesday that tens of millions more people might have been exposed in the Cambridge Analytica privacy scandal than previously thought and said it will restrict the data it allows outsiders to access on its users.

    Those developments came as congressional officials said CEO Mark Zuckerberg will testify next week, while Facebook unveiled a new privacy policy that aims to explain the data it gathers on users more clearly — but doesn’t actually change what it collects and shares.

    Facebook is facing its worst privacy scandal in years following allegations that a Trump-affiliated data mining firm, Cambridge Analytica, used ill-gotten data from millions of users to try to influence elections. The company said Wednesday that as many as 87 million people might have had their data accessed — an increase from the 50 million disclosed in published reports.

    This Monday, all Facebook users will receive a notice on their Facebook feeds with a link to see what apps they use and what information they have shared with those apps. They’ll have a chance to delete apps they no longer want. Users who might have had their data shared with Cambridge Analytica will be told of that. Facebook says most of the affected users are in the U.S.

    Facebook is restricting access that apps can get about users’ events, as well as information about groups such as member lists and content. In addition, the company is also removing the option to search for users by entering a phone number or an email address. While this was useful to people to find friends who may have a common name, Facebook says malicious actors abused it by collecting people’s profile information through phone or email lists they had access to.

    This comes on top of changes announced a few weeks ago. For example, Facebook has said it will remove developers’ access to people’s data if the person has not used the app in three months.

    Earlier Wednesday, Facebook unveiled a new privacy policy that seeks to clarify its data collection and use.

    For instance, Facebook added a section explaining that it collects people’s contact information if they choose to “upload, sync or import” this to the service. This may include users’ address books on their phones, as well as their call logs and text histories. The new policy says Facebook may use this data to help “you and others find people you may know.”

    The previous policy did not mention call logs or text histories. Several users were surprised to learn recently that Facebook had been collecting information about whom they texted or called and for how long, though not the actual contents of text messages. It seemed to have been done without explicit consent, though Facebook says it collected such data only from Android users who specifically allowed it to do so — for instance, by agreeing to permissions when installing Facebook.

    Facebook also added clarification that local laws could affect what it does with “sensitive” data on people, such as information about a user’s race or ethnicity, health, political views or even trade union membership. This and other information, the new policy states, “could be subject to special protections under the laws of your country.” But it means the company is unlikely to apply stricter protections to countries with looser privacy laws — such as the U.S., for example. Facebook has always had regional differences in policies, and the new document makes that clearer.

    The new policy also makes it clear that WhatsApp and Instagram are part of Facebook and that the companies share information about users. The two were not mentioned in the previous policy. While WhatsApp still doesn’t show advertisements, and has its own privacy policy, Instagram long has and its policy is the same as Facebook’s. But the notice could be a sign of things to come for WhatsApp as well.

    Other changes incorporate some of the restrictions Facebook previously announced on what third-party apps can collect from users and their friends.

    Although Facebook says the changes aren’t prompted by recent events or tighter privacy rules coming from the EU, it’s an opportune time. It comes as Zuckerberg is set to appear April 11 before a House committee — his first testimony before Congress.

    As Facebook evolved from a closed, Harvard-only network with no ads to a giant corporation with $40 billion in advertising revenue and huge subsidiaries like Instagram and WhatsApp, its privacy policy has also shifted — over and over.

    Almost always, critics say, the changes meant a move away from protecting user privacy toward pushing openness and more sharing. On the other hand, regulatory and user pressure has sometimes led Facebook to pull back on its data collection and use and to explain things in plainer language — in contrast to dense legalese from many other internet companies.

    The policy changes come a week after Facebook gave its privacy settings a makeover. The company tried to make it easier to navigate its complex and often confusing privacy and security settings, though the makeover didn’t change what Facebook collects and shares either.

    Those who followed Facebook’s privacy gaffes over the years may feel a sense of familiarity. Over and over, the company — often Zuckerberg — owned up to missteps and promised changes.

    In 2009, the company announced that it was consolidating six privacy pages and more than 30 settings on to a single privacy page. Yet, somehow, the company said last week that users still had to go to 20 different places to access all of their privacy controls and it was changing this so the controls will be accessible from a single place.

    ———-

    “Facebook: 87M Users May Have Had Data Breached By Cambridge Analytica” by BARBARA ORTUTAY; Associated Press; 04/04/2018

    “Facebook is facing its worst privacy scandal in years following allegations that a Trump-affiliated data mining firm, Cambridge Analytica, used ill-gotten data from millions of users to try to influence elections. The company said Wednesday that as many as 87 million people might have had their data accessed — an increase from the 50 million disclosed in published reports.

    From 50 million to 87 million. It’s quite a jump. How high might it get when this is all over? We’ll see.

    And beyond that update, Facebook also updated their data-collection disclosure policies. Now they’re actually mentioning things like the grabbing of call and text data off of your smartphone, which they apparently didn’t feel the need to tell people about before:


    Earlier Wednesday, Facebook unveiled a new privacy policy that seeks to clarify its data collection and use.

    For instance, Facebook added a section explaining that it collects people’s contact information if they choose to “upload, sync or import” this to the service. This may include users’ address books on their phones, as well as their call logs and text histories. The new policy says Facebook may use this data to help “you and others find people you may know.”

    The previous policy did not mention call logs or text histories. Several users were surprised to learn recently that Facebook had been collecting information about whom they texted or called and for how long, though not the actual contents of text messages. It seemed to have been done without explicit consent, though Facebook says it collected such data only from Android users who specifically allowed it to do so — for instance, by agreeing to permissions when installing Facebook.

    And note how Facebook’s update on how local privacy laws could affect its handling of “sensitive” data implies that the absence of those local laws means the same “sensitive” data isn’t going to be handled in a sensitive manner. So if you were hoping the big new EU data privacy rules were going to impact Facebook’s policies outside the EU, nope:


    Facebook also added clarification that local laws could affect what it does with “sensitive” data on people, such as information about a user’s race or ethnicity, health, political views or even trade union membership. This and other information, the new policy states, “could be subject to special protections under the laws of your country.” But it means the company is unlikely to apply stricter protections to countries with looser privacy laws — such as the U.S., for example. Facebook has always had regional differences in policies, and the new document makes that clearer.

    And that’s just some of the updates Facebook issued today. And while a number of these updates are pretty notable, perhaps the most notable part of this flurry is that they’re updates that actually increase privacy protections, which is not how these updates have normally gone for Facebook in the past:


    As Facebook evolved from a closed, Harvard-only network with no ads to a giant corporation with $40 billion in advertising revenue and huge subsidiaries like Instagram and WhatsApp, its privacy policy has also shifted — over and over.

    Almost always, critics say, the changes meant a move away from protecting user privacy toward pushing openness and more sharing. On the other hand, regulatory and user pressure has sometimes led Facebook to pull back on its data collection and use and to explain things in plainer language — in contrast to dense legalese from many other internet companies.

    And now let’s take a look at one of the other disclosures Facebook made today: Remember how Facebook whistle-blower Sandy Parakilas speculated that a majority of Facebook users probably had their Facebook profile information scraped by app developers using exactly the same technique Cambridge Analytica used? Well, it looks like Facebook has very belatedly arrived at the same conclusion:

    Washington Post

    Facebook said the personal data of most of its 2 billion users has been collected and shared with outsiders

    by Craig Timberg, Tony Romm and Elizabeth Dwoskin
    April 4, 2018 at 5:09 PM

    Facebook said Wednesday that most of its 2 billion users likely have had their public profiles scraped by outsiders without the users’ explicit permission, dramatically raising the stakes in a privacy controversy that has dogged the company for weeks, spurred investigations in the United States and Europe, and sent the company’s stock price tumbling.

    “We’re an idealistic and optimistic company, and for the first decade, we were really focused on all the good that connecting people brings,” Chief Executive Mark Zuckerberg said on a call with reporters Wednesday afternoon. “But it’s clear now that we didn’t focus enough on preventing abuse and thinking about how people could use these tools for harm as well.”

    As part of the disclosure, Facebook for the first time detailed the scale of the improper data collection for Cambridge Analytica, a political data consultancy hired by President Trump and other Republican candidates in the last two federal election cycles. The political consultancy gained access to Facebook information on up to 87 million users, 71 million of whom are Americans, Facebook said. Cambridge Analytica obtained the data to build “psychographic” profiles that would help deliver targeted messages intended to shape voter behavior in a wide range of U.S. elections.

    But in research sparked by revelations from a Cambridge Analytica whistleblower last month, Facebook determined that the problem of third-party collection of user data was far larger still and, with the company’s massive user base, likely affected a large cross-section of people in the developed world.

    “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped,” the company wrote in its blog post.

    The scraping by malicious actors typically involved gathering public profile information — including names, email addresses and phone numbers, according to Facebook — by using a “search and account recovery” function that Facebook said it has now disabled. Facebook didn’t make clear in its post exactly what data was collected.

    The data obtained by Cambridge Analytica was more detailed and extensive, including the names, home towns, work and educational histories, religious affiliations and Facebook “likes” of users, among other data. Other users affected were in countries including the Philippines, Indonesia, U.K., Canada and Mexico.

    Facebook initially had sought to downplay the problem, saying in March only that 270,000 people had responded to a survey on an app created by the researcher in 2014. That netted Cambridge Analytica the data on the friends of those who responded to the survey, without their permission. But Facebook declined to say at the time how many other users may have had their data collected in the process. The whistleblower, Christopher Wylie, a former researcher for the company, said the real number of affected people was at least 50 million.

    Wylie tweeted on Wednesday afternoon that Cambridge Analytica could have obtained even more than 87 million profiles. “Could be more tbh,” he wrote, using an abbreviation for “to be honest.”

    Cambridge Analytica on Wednesday responded to Facebook’s announcement by saying that it had licensed data on 30 million users. Facebook banned Cambridge Analytica from its platform last month for obtaining the data under false pretenses.

    Facebook’s announcement, made near the bottom of a blog post Wednesday afternoon on plans to restrict access to data in the future, underscores the severity of a data mishap that appears to have affected about one out of every four Americans and sparked widespread outrage at the carelessness of the company’s handling of information on its users. Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015.

    With its moves over the past week, Facebook is embarking on a major shift in its relationship with third-party app developers that have used Facebook’s vast network to expand their businesses. What was largely an automated process will now involve developers agreeing to “strict requirements,” the company said in its blog post Wednesday. The 2015 policy change curtailed developers’ abilities to access data about people’s friends networks but left open many loopholes that the company tightened on Wednesday.

    The news quickly reverberated on Capitol Hill, where lawmakers are set to grill Zuckerberg at a series of hearings next week.

    “The more we learn, the clearer it is that this was an avalanche of privacy violations that strike at the core of one of our most precious American values – the right to privacy,” said Sen. Ed Markey (D-Mass.), who serves on the Senate Commerce Committee, which has called on Zuckerberg to testify at a hearing next week.

    “This latest revelation is extremely troubling and shows that Facebook still has a lot of work to do to determine how big this breach actually is,” said Rep. Frank Pallone Jr. (D-N.J.), the top Democrat on the House Energy and Commerce Committee, which will hear from Zuckerberg on Wednesday.

    “I’m deeply concerned that Facebook only addresses concerns on its platform when it becomes a public crisis, and that is simply not the way you run a company that is used by over 2 billion people,” he said. “We need to know how they are going to fix this problem next week at our hearing.”

    Facebook announced plans on Wednesday to add new restrictions to how outsiders can gain access to this data, the latest steps in a years-long process by the company to improve its damaged reputation as a steward of the personal privacy of its users.

    Developers who in the past could get access to people’s relationship status, calendar events, private Facebook posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtaining the information.

    Cambridge Analytica, which collected this information with the help of Cambridge University psychologist Aleksandr Kogan, was founded with a multimillion-dollar investment by hedge-fund billionaire Robert Mercer and headed by his daughter, Rebekah Mercer, who was the company’s president, according to documents provided by Wylie. Serving as vice president was conservative strategist Stephen K. Bannon, who also was the head of Breitbart News. He has since left both jobs and also his post as top White House adviser to Trump.

    Until Wednesday, apps that let people input a Facebook event into their calendar could also automatically import lists of all the people who attended that event, Facebook said. Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group. App developers who want this access will now have to prove their activities benefit the group. Facebook will now need to approve tools that businesses use to operate Facebook pages. A business that uses an app to help it respond quickly to customer messages, for example, will not be able to do so automatically. Developers’ access to Instagram will also be severely restricted.

    Facebook is also banning apps from accessing users’ information about their religious or political views, relationship status, education, work history, fitness activity, book reading habits, music listening and news reading activity, video watching and games. Data brokers and businesses collect this type of information to build profiles of their customers’ tastes.

    Facebook last week said it is also shutting down access to data brokers who use their own data to target customers on Facebook.

    Facebook’s broad changes to how data is used apply mostly to outsiders and third parties. Facebook is not limiting the data the company itself can collect, nor is it restricting its ability to profile users to enable advertisers to target them with personalized messages. One piece of data Facebook said it would stop collecting was the time of phone calls, a response to outrage from users of Facebook’s messenger service who discovered that allowing Facebook to access their phone contact list was giving the company access to their call logs.

    ———-

    “Facebook said the personal data of most of its 2 billion users has been collected and shared with outsiders” by Craig Timberg, Tony Romm and Elizabeth Dwoskin; Washington Post; 04/04/2018

    “Facebook said Wednesday that most of its 2 billion users likely have had their public profiles scraped by outsiders without the users’ explicit permission, dramatically raising the stakes in a privacy controversy that has dogged the company for weeks, spurred investigations in the United States and Europe, and sent the company’s stock price tumbling.”

    So a billion or so people probably had their Facebook profile data sucked away by app developers. Facebook apparently just discovered this. And while it’s laughable to imagine that Facebook just suddenly discovered this now, recall how Sandy Parakilas also said executives had an “it’s best not to know” attitude about how this data was used by third parties, so it’s possible that Facebook technically didn’t officially know this until now because they officially never looked before:


    As part of the disclosure, Facebook for the first time detailed the scale of the improper data collection for Cambridge Analytica, a political data consultancy hired by President Trump and other Republican candidates in the last two federal election cycles. The political consultancy gained access to Facebook information on up to 87 million users, 71 million of whom are Americans, Facebook said. Cambridge Analytica obtained the data to build “psychographic” profiles that would help deliver targeted messages intended to shape voter behavior in a wide range of U.S. elections.

    But in research sparked by revelations from a Cambridge Analytica whistleblower last month, Facebook determined that the problem of third-party collection of user data was far larger still and, with the company’s massive user base, likely affected a large cross-section of people in the developed world.

    “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped,” the company wrote in its blog post.

    The scraping by malicious actors typically involved gathering public profile information — including names, email addresses and phone numbers, according to Facebook — by using a “search and account recovery” function that Facebook said it has now disabled. Facebook didn’t make clear in its post exactly what data was collected.

    The data obtained by Cambridge Analytica was more detailed and extensive, including the names, home towns, work and educational histories, religious affiliations and Facebook “likes” of users, among other data. Other users affected were in countries including the Philippines, Indonesia, U.K., Canada and Mexico.

    ““Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped,” the company wrote in its blog post.”

    LOL! They just discovered this and knew nothing about how their massive sharing of profile information with app developers might lead to a massive release of profile data. That’s their story and they’re sticking to it. For now.

    And notice how it’s just casually acknowledged that “Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015,” and Facebook is announcing all these new restrictions on the data app developers, or even data brokers, can access. And yet Facebook is acting like this is all some sort of revelation:


    Facebook’s announcement, made near the bottom of a blog post Wednesday afternoon on plans to restrict access to data in the future, underscores the severity of a data mishap that appears to have affected about one out of every four Americans and sparked widespread outrage at the carelessness of the company’s handling of information on its users. Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015.

    Facebook announced plans on Wednesday to add new restrictions to how outsiders can gain access to this data, the latest steps in a years-long process by the company to improve its damaged reputation as a steward of the personal privacy of its users.

    Developers who in the past could get access to people’s relationship status, calendar events, private Facebook posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtaining the information.

    Cambridge Analytica, which collected this information with the help of Cambridge University psychologist Aleksandr Kogan, was founded with a multimillion-dollar investment by hedge-fund billionaire Robert Mercer and headed by his daughter, Rebekah Mercer, who was the company’s president, according to documents provided by Wylie. Serving as vice president was conservative strategist Stephen K. Bannon, who also was the head of Breitbart News. He has since left both jobs and also his post as top White House adviser to Trump.

    Until Wednesday, apps that let people input a Facebook event into their calendar could also automatically import lists of all the people who attended that event, Facebook said. Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group. App developers who want this access will now have to prove their activities benefit the group. Facebook will now need to approve tools that businesses use to operate Facebook pages. A business that uses an app to help it respond quickly to customer messages, for example, will not be able to do so automatically. Developers’ access to Instagram will also be severely restricted.

    Facebook is also banning apps from accessing users’ information about their religious or political views, relationship status, education, work history, fitness activity, book reading habits, music listening and news reading activity, video watching and games. Data brokers and businesses collect this type of information to build profiles of their customers’ tastes.

    Facebook last week said it is also shutting down access to data brokers who use their own data to target customers on Facebook.

    Facebook’s broad changes to how data is used apply mostly to outsiders and third parties. Facebook is not limiting the data the company itself can collect, nor is it restricting its ability to profile users to enable advertisers to target them with personalized messages. One piece of data Facebook said it would stop collecting was the time of phone calls, a response to outrage from users of Facebook’s messenger service who discovered that allowing Facebook to access their phone contact list was giving the company access to their call logs.

    And note how Cambridge Analytica whistle-blower Christopher Wylie has already tweeted out that the new 87 million estimate might not be high enough:


    Facebook initially had sought to downplay the problem, saying in March only that 270,000 people had responded to a survey on an app created by the researcher in 2014. That netted Cambridge Analytica the data on the friends of those who responded to the survey, without their permission. But Facebook declined to say at the time how many other users may have had their data collected in the process. The whistleblower, Christopher Wylie, a former researcher for the company, said the real number of affected people was at least 50 million.

    Wylie tweeted on Wednesday afternoon that Cambridge Analytica could have obtained even more than 87 million profiles. “Could be more tbh,” he wrote, using an abbreviation for “to be honest.”

    “Could be more tbh.” It’s a rather ominous tweet considering the context.

    And don’t forget that the original count of people who actually used the Cambridge Analytica app, ~270,000, hasn’t been updated. That’s still just 270,000 people. So this scandal is giving us a sense of just how many people were likely getting their profile information grabbed by app developers using the “Friends Permission” feature. When it was 50 million people in total, that came out to about 185 friends getting their profiles grabbed for each person who actually downloaded the app. But if it’s 87 million people, that makes it ~322 friends for each Cambridge Analytica app user on average.

    Along those lines, it’s worth noting that the average number of friends Facebook users have is 338 while the median number of friends is 200, according to a 2014 Pew research poll. So if that 87 million number keeps climbing, and the assumed number of friends per user of the Cambridge Analytica app keeps climbing with it, at some point we’re going to get into suspicious territory and have to ask whether the users of that app were unusually popular or whether Cambridge Analytica was getting data from more than just that app.
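    The back-of-the-envelope ratio above can be sketched in a few lines of Python (the figures — 270,000 direct app users and the 50/87 million affected-profile totals — are the ones from the reporting; the rest is simple division):

    ```python
    # Rough check of the implied friends-per-app-user ratios quoted above.
    # 270,000 people installed the app; the totals are the reported counts
    # of all profiles harvested (app users plus their friends).

    def implied_friends_per_user(total_profiles: int, app_users: int) -> float:
        """Average number of profiles harvested per person who installed the app."""
        return total_profiles / app_users

    for total in (50_000_000, 87_000_000):
        ratio = implied_friends_per_user(total, 270_000)
        print(f"{total:,} affected profiles -> ~{ratio:.0f} per app user")
    ```

    Both ratios sit close to the Pew averages for friend counts, which is why each upward revision of the total makes the “just one app” explanation harder to sustain.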

    After all, for all we know Cambridge Analytica may have simply purchased a bunch of data on the Facebook profile black market, something else Sandy Parakilas warned about. So how high might that 87 million number get if Cambridge Analytica was also buying this information from other app developers? Who knows, although at this point “a billion profiles” can no longer be ruled out, thanks to Facebook’s very belated update today.

    Posted by Pterrafractyl | April 4, 2018, 3:33 pm
  3. And the hits keep coming: Here’s an article with some more information on the disclosure Facebook made on Wednesday that “malicious actors” may have been using a couple of ‘features’ Facebook provides to scrape public profile information from Facebook accounts and associate that information with email addresses and phone numbers. This is separate from the data collection technique used by the Cambridge Analytica app, and thousands of other app developers, to grab the private profile information from app users and their friends.

    One technique used by these “malicious actors” was to simply feed phone numbers and email addresses into a Facebook “search” box that would return the Facebook profile associated with that email or phone number. All the public information on that profile could then subsequently be collected and associated with that email/phone data. Users had the option of turning off the ability for others to find their profile using this method, but it was turned on by default and apparently few people turned it off.

    The second technique exploited an account recovery tool: Facebook would serve up names, profile pictures and links to the public profiles themselves to anyone pretending to be a Facebook user who forgot how to access their account.

    And according to Facebook, this was being done by actors obtaining email addresses and phone numbers of people on the Dark Web and then setting up scripts to automate this process for large numbers of emails and phone numbers, “with few Facebook users likely escaping the scam.” In other words, almost every Facebook user probably had their email and phone number associated with their Facebook account via this method. Also keep in mind that you don’t need to go to the Dark Web to buy lists of email addresses and phone numbers, so placing an emphasis on the “Dark Web” as the source for this information is likely part of Facebook’s ongoing attempt to ensure that this scandal doesn’t turn into an educational experience for the public on how widespread the data brokerage industry really is and how much information on people is legally commercially available. In other words, these “malicious actors” were probably operators in the commercial data brokerage market in many cases.

    And as the article notes, pairing email and phone number information with the kind of information people made publicly available on their profiles is exactly the kind of information that identity thieves want to obtain as a starting point for stealing your identity.

    The article also includes more information on just what kind of private profile information app developers like Cambridge Analytica were allowed to grab. Because it’s important to note that we don’t have clarity yet on what exactly app developers were allowed to grab from Facebook profiles. We’ve heard vague descriptions of what was available to the app developers, like Facebook’s ‘profile’ of you (presumably, what they’ve learned or inferred about you) and the list of what you “liked”. But it hasn’t been clear if app developers also had access to literally all of your private Facebook posts. Well, based on the following article, it does indeed sound like app developers potentially had access to literally all of your private Facebook posts. And a lot of that data is probably available on the Dark Web and other black markets too at this point because why not? Facebook made it available and it’s valuable, so why wouldn’t we expect it to be available for sale?

    And the article makes one more stunning revelation regarding the permissions app developers had to scrape this private information: Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group.

    So while Facebook hasn’t yet admitted that it made almost all the private information on people’s Facebook profiles available to identity thieves and any other bad actors for years with little to no oversight, and that this data is probably floating around for sale on the Dark Web, it is getting much closer to admitting this given its latest round of admissions:

    The Washington Post

    Facebook: ‘Malicious actors’ used its tools to discover identities and collect data on a massive global scale

    by Craig Timberg, Tony Romm and Elizabeth Dwoskin
    April 4, 2018 at 8:13 PM

    Facebook said Wednesday that “malicious actors” took advantage of search tools on its platform, making it possible for them to discover the identities and collect information on most of its 2 billion users worldwide.

    The revelation came amid rising acknowledgement by Facebook about its struggles to control the data it gathers on users. Among the announcements Wednesday was that Cambridge Analytica, a political consultancy hired by President Trump and other Republicans, had improperly gathered detailed Facebook information on 87 million people, of whom 71 million were Americans.

    But the abuse of Facebook’s search tools — now disabled — happened far more broadly and over the course of several years, with few Facebook users likely escaping the scam, company officials acknowledged.

    The scam started when malicious hackers harvested email addresses and phone numbers on the so-called “Dark Web,” where criminals post information stolen from data breaches over the years. Then the hackers used automated computer programs to feed the numbers and addresses into Facebook’s “search” box, allowing them to discover the full names of people affiliated with the phone numbers or addresses, along with whatever Facebook profile information they chose to make public, often including their profile photos and hometown.

    “We built this feature, and it’s very useful. There were a lot of people using it up until we shut it down today,” Chief Executive Mark Zuckerberg said in a call with reporters Wednesday.

    Facebook said in a blog post Wednesday, “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped.”

    Facebook users could have blocked this search function, which was turned on by default, by tweaking their settings to restrict finding their identities by using phone numbers or email addresses. But research has consistently shown that users of online platforms rarely adjust default privacy settings and often fail to understand what information they are sharing.

    Hackers also abused Facebook’s account recovery function, by pretending to be legitimate users who had forgotten account details. Facebook’s recovery system served up names, profile pictures and links to the public profiles themselves. This tool could also be blocked in privacy settings.

    Names, phone numbers, email addresses and other personal information amount to critical starter kits for identity theft and other malicious online activity, experts on Internet crime say. The Facebook hack allowed bad actors to tie raw data to people’s real identities and build fuller profiles of them.

    Privacy experts had issued warnings that the phone number and email address lookup tool left Facebook users’ data exposed.

    Facebook didn’t disclose who the malicious actors are, how the data might have been used, or exactly how many people were affected.

    The revelations about the privacy mishaps come at a perilous time for Facebook, which since last month has wrestled with the fallout of how the data of tens of millions of Americans ended up in the hands of Cambridge Analytica. Those reports have spurred investigations in the United States and Europe and sent the company’s stock price tumbling.

    The news quickly reverberated on Capitol Hill, where lawmakers are set to grill Zuckerberg at a series of hearings next week.

    “The more we learn, the clearer it is that this was an avalanche of privacy violations that strike at the core of one of our most precious American values – the right to privacy,” said Sen. Ed Markey (D-Mass.), who serves on the Senate Commerce Committee, which has called on Zuckerberg to testify at a hearing next week.

    Perhaps the most urgent question for Facebook is whether its practices ran afoul of a settlement it brokered with the Federal Trade Commission in 2011 in response to previous controversies over its handling of user data.

    At the time, the FTC faulted Facebook for misrepresenting the privacy protections it afforded its users and required the company to maintain a comprehensive privacy policy and ask permission before sharing user data in new ways. Violating the terms could result in many millions of dollars of fines.

    The FTC said last week that it would open a new investigation in light of the Cambridge Analytica news, and Wednesday’s revelations are likely to complicate the legal situation, said David Vladeck, a former FTC director of consumer protection who oversaw the 2011 consent decree.

    “This is a company that is, in my view, likely grossly out of compliance with the FTC consent decree,” said Vladeck, now a Georgetown University Law professor. “I don’t think that after these revelations they have any defense at all.” He called the numbers “just staggering.”

    The data Cambridge Analytica obtained relied on different techniques and was more detailed and extensive than what the hackers collected using Facebook’s search functions. The Cambridge Analytica data set included user names, hometowns, work and educational histories, religious affiliations and Facebook “likes” of users and their friends, among other data. Other users affected were in countries including the Philippines, Indonesia, U.K., Canada and Mexico.

    Facebook said it banned Cambridge Analytica last month because the data firm improperly obtained profile information.

    Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015.

    Facebook in March declined to say how much user data went to Cambridge Analytica, saying only that 270,000 people had responded to a survey app created by the researcher in 2014. The researcher was able to gather information on the friends of the respondents without their permission, vastly expanding the scope of his data. That researcher then passed the information on to Cambridge Analytica.

    Facebook declined to say at the time how many other users may have had their data collected in the process. A Cambridge Analytica whistleblower, former researcher Christopher Wylie, said last month the real number of affected people was at least 50 million.

    With its moves over the past week, Facebook is embarking on a major shift in its relationship with third-party app developers that have used Facebook’s vast network to expand their businesses. What was largely an automated process will now involve developers agreeing to “strict requirements,” the company said in its blog post Wednesday. The 2015 policy change curtailed developers’ abilities to access data about people’s friends networks but left open many loopholes that the company tightened on Wednesday.

    “This latest revelation is extremely troubling and shows that Facebook still has a lot of work to do to determine how big this breach actually is,” said Rep. Frank Pallone Jr. (D-N.J.), the top Democrat on the House Energy and Commerce Committee, which will hear from Zuckerberg next Wednesday.

    “I’m deeply concerned that Facebook only addresses concerns on its platform when it becomes a public crisis, and that is simply not the way you run a company that is used by over 2 billion people,” he said.

    Facebook announced plans on Wednesday to add new restrictions to how app developers, data brokers and other third parties can gain access to this data, the latest steps in a years-long process to improve its damaged reputation as a steward of the personal privacy of its users.

    Developers who in the past could get access to people’s relationship status, calendar events, private Facebook posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtaining the information, Facebook said.

    Until Wednesday, apps that let people input a Facebook event into their calendar could also automatically import lists of all the people who attended that event, Facebook said. Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group. App developers who want this access will now have to prove their activities benefit the group. Facebook will now need to approve tools that businesses use to operate Facebook pages. A business that uses an app to help it respond quickly to customer messages, for example, will not be able to do so automatically. Developers’ access to Instagram will also be severely restricted.

    Facebook is now banning apps from accessing users’ information about their religious or political views, relationship status, education, work history, fitness activity, book reading habits, music listening and news reading activity, video watching and games. Data brokers and businesses collect this type of information to build profiles of their customers’ tastes.

    ———-

    “Facebook: ‘Malicious actors’ used its tools to discover identities and collect data on a massive global scale” by Craig Timberg, Tony Romm and Elizabeth Dwoskin; The Washington Post; 04/04/2018

    “But the abuse of Facebook’s search tools — now disabled — happened far more broadly and over the course of several years, with few Facebook users likely escaping the scam, company officials acknowledged.”

    Few Facebook users likely escaped the “scam” of a feature that Facebook turned on by default and that was an obvious, massive privacy violation. A “scam” that was also far less of a privacy violation than what Facebook made available to app developers, but a scam that still likely impacted almost all Facebook users. And the more information people made available on their public profiles, the more these “scammers” could collect about them:


    The scam started when malicious hackers harvested email addresses and phone numbers on the so-called “Dark Web,” where criminals post information stolen from data breaches over the years. Then the hackers used automated computer programs to feed the numbers and addresses into Facebook’s “search” box, allowing them to discover the full names of people affiliated with the phone numbers or addresses, along with whatever Facebook profile information they chose to make public, often including their profile photos and hometown.

    “We built this feature, and it’s very useful. There were a lot of people using it up until we shut it down today,” Chief Executive Mark Zuckerberg said in a call with reporters Wednesday.

    Facebook said in a blog post Wednesday, “Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped.”

    Facebook users could have blocked this search function, which was turned on by default, by tweaking their settings to restrict finding their identities by using phone numbers or email addresses. But research has consistently shown that users of online platforms rarely adjust default privacy settings and often fail to understand what information they are sharing.

    And then there was Facebook’s account recovery function that Facebook also made easy to exploit:


    Hackers also abused Facebook’s account recovery function, by pretending to be legitimate users who had forgotten account details. Facebook’s recovery system served up names, profile pictures and links to the public profiles themselves. This tool could also be blocked in privacy settings.

    And, again, while this kind of information wasn’t necessarily as extensive as the private information Facebook made available to app developers, it was still a very valuable starter kit of identity theft:


    Names, phone numbers, email addresses and other personal information amount to critical starter kits for identity theft and other malicious online activity, experts on Internet crime say. The Facebook hack allowed bad actors to tie raw data to people’s real identities and build fuller profiles of them.

    Privacy experts had issued warnings that the phone number and email address lookup tool left Facebook users’ data exposed.

    And, of course, that ‘identity theft starter kit’ data – phone numbers and emails associated with real names and other publicly available information – could potentially be combined with the private information made available to app developers. Information that apparently included “people’s relationship status, calendar events, private Facebook posts, and much more data“:


    The data Cambridge Analytica obtained relied on different techniques and was more detailed and extensive than what the hackers collected using Facebook’s search functions. The Cambridge Analytica data set included user names, hometowns, work and educational histories, religious affiliations and Facebook “likes” of users and their friends, among other data. Other users affected were in countries including the Philippines, Indonesia, U.K., Canada and Mexico.

    Facebook said it banned Cambridge Analytica last month because the data firm improperly obtained profile information.

    Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015.

    With its moves over the past week, Facebook is embarking on a major shift in its relationship with third-party app developers that have used Facebook’s vast network to expand their businesses. What was largely an automated process will now involve developers agreeing to “strict requirements,” the company said in its blog post Wednesday. The 2015 policy change curtailed developers’ abilities to access data about people’s friends networks but left open many loopholes that the company tightened on Wednesday.

    Facebook announced plans on Wednesday to add new restrictions to how app developers, data brokers and other third parties can gain access to this data, the latest steps in a years-long process to improve its damaged reputation as a steward of the personal privacy of its users.

    Developers who in the past could get access to people’s relationship status, calendar events, private Facebook posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtaining the information, Facebook said.

    So if “people’s relationship status, calendar events, private Facebook posts, and much more data” was made available to app developers, it raises the question: what wasn’t made available?

    It’s all a reminder that there is indeed a “malicious actor” who took possession of all your private data and its name is Facebook.

    Posted by Pterrafractyl | April 5, 2018, 1:53 pm
  4. Here’s a series of articles that serve as a reminder that Facebook isn’t just an ever-growing vault of personal data profiles on almost everyone (albeit a very leaky data vault). It’s also a medium through which other ever-growing vaults of personal data, in particular those of data brokerage giants like Acxiom, can be merged with Facebook’s vault, ostensibly for the purpose of making Facebook’s targeted ads even more targeted.

    This third-party data sharing is done through Facebook’s “Partner Categories” program: Facebook advertisers have the option of filtering their Facebook ad targeting based on, for instance, the group of people who purchased cereal, using data from Acxiom’s consumer spending database. As such, the firms that are potentially Facebook’s biggest competitors become Facebook’s biggest partners.

    Not surprisingly, merging Facebook’s extensive personal data profiles with the already very extensive personal data profiles held by the data brokerage industry raises a number of privacy concerns. Privacy concerns that are hitting a peak in the wake of the Cambridge Analytica scandal. So, also not surprisingly, Facebook just announced the end of the Partner Categories program over the next six months as part of its post-Cambridge Analytica public relations campaign:

    Recode

    Facebook is cutting third-party data providers out of ad targeting to clean up its act
    Facebook says it’s going to stop using data from third-party data providers like Experian and Acxiom.

    By Kurt Wagner
    Mar 28, 2018, 6:11pm EDT

    Facebook is going to limit how much data it makes available to advertisers buying hyper-targeted ads on the social network.

    More specifically, Facebook says it will stop using data from third-party data aggregators — companies like Experian and Acxiom — to help supplement its own data set for ad targeting.

    Facebook previously let advertisers target people using data from a number of sources:

    * Data from Facebook, which the company collects from user activity and profiles.
    * Data from the advertiser itself, like customer emails they’ve collected on their own.
    * Data from third-party services like Experian, which can collect offline data such as purchasing activity, that Facebook uses to help supplement its own data set. When marketers use this data to target ads on Facebook, the social giant gives some of the ad money from that sale to the data provider.

    This third data set is primarily helpful to advertisers that might not have their own customer data, like small businesses or consumer packaged goods companies that sell their products through brick-and-mortar retailers.

    But now Facebook is changing its relationship with these third parties as part of a broader effort to clean up its data practices following the recent Cambridge Analytica privacy scandal. (Facebook still uses these companies to help with ad measurement, though a source says that the company is reevaluating that practice, too.)

    The thinking is that Facebook has less control over where and how these firms collect their data, which makes using it more of a risk. Apparently it’s not important enough to Facebook’s revenue stream to deal with a potential headache if something goes wrong.

    Facebook confirmed the move in a statement attributable to Graham Mudd, a product marketing director at the company.

    ”We want to let advertisers know that we will be shutting down Partner Categories,” Mudd said in the statement. “This product enables third-party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.”

    Had it been made earlier, Facebook’s decision to stop using third-party data providers for targeting would not have impacted the outcome of the Cambridge Analytica scandal, in which the outside firm collected the personal data of some 50 million Facebook users without their permission.

    ———

    “Facebook is cutting third-party data providers out of ad targeting to clean up its act” by Kurt Wagner; Recode; 03/28/2018

    “More specifically, Facebook says it will stop using data from third-party data aggregators — companies like Experian and Acxiom — to help supplement its own data set for ad targeting.”

    As we can see, Facebook isn’t just promising to cut off the personal data leaking out of its platforms to address privacy concerns. It’s also promising to cut off some of the data flowing into its platforms. Data from the data brokerage giants flowing into Facebook in exchange for some of the ad money when that data results in a sale:


    Facebook previously let advertisers target people using data from a number of sources:

    * Data from Facebook, which the company collects from user activity and profiles.
    * Data from the advertiser itself, like customer emails they’ve collected on their own.
    * Data from third-party services like Experian, which can collect offline data such as purchasing activity, that Facebook uses to help supplement its own data set. When marketers use this data to target ads on Facebook, the social giant gives some of the ad money from that sale to the data provider.

    This third data set is primarily helpful to advertisers that might not have their own customer data, like small businesses or consumer packaged goods companies that sell their products through brick-and-mortar retailers.

    And while the public explanation for this move is that this is being done to address privacy concerns, there’s also the suspicion that Facebook is willing to make this move simply because Facebook doesn’t necessarily need this third-party data to make its ads more effective. So while cutting out this data-brokerage data is a potential loss for Facebook, that loss might be outweighed by the growing headache of privacy concerns for Facebook that comes from directly incorporating third-party data into its ad algorithms when it can’t control whether or not these third-party data brokerages obtained their own data sets in an ethical manner. In other words, the headache isn’t worth the extra profit this data-sharing arrangement yields:


    The thinking is that Facebook has less control over where and how these firms collect their data, which makes using it more of a risk. Apparently it’s not important enough to Facebook’s revenue stream to deal with a potential headache if something goes wrong.

    So is it the case that Facebook is using this Cambridge Analytica scandal as an excuse to cut these data brokers that Facebook doesn’t actually need out of the loop? Well, as the following article notes, it’s not like Facebook doesn’t have the option of buying that data from the data brokers themselves and just incorporating the data into their internal ad targeting models. But Facebook always had that option and still chose to go ahead with this Partner Categories program, so it’s presumably the case that paying outright for that brokerage data is more expensive than setting up the Partner Categories program and giving the brokerages a cut of the ad sales.

    As the following article also notes, advertisers will still be able to get that data brokerage information for the purpose of further targeting Facebook users. How so? Because notice the second data set in the above article that Facebook uses for targeting ads: data sets from the advertisers themselves. Like lists of email addresses of the people they want to target. It’s the same Custom Audiences tool that was used extensively by the Trump campaign for its “A/B testing on steroids” psychological profiling techniques. So there’s nothing stopping advertisers from getting that list of email addresses from a data broker and then feeding that into Facebook, effectively leaving the same arrangement in place but in a less direct manner. But it’s less convenient and presumably less profitable if advertisers have to do this themselves. It’s a reminder that partnering means more profits in the business Facebook is in.

    Finally, as digital privacy expert Frank Pasquale also points out in the following article, there’s no real reason to assume Facebook is actually going to stand by this pledge to shut down the Partner Categories program over the next six months. It might just quietly start it up again in some other form or simply reverse this decision after the public’s attention shifts away.

    So while there are valid questions as to why Facebook is making this policy change, there are unfortunately also valid questions over whether or not this policy change will make any difference and whether or not Facebook will even make this policy change at all:

    The Washington Post

    Facebook, longtime friend of data brokers, becomes their stiffest competition

    by Drew Harwell
    March 29, 2018

    Facebook was for years a best friend to the data brokers who make hundreds of millions of dollars a year gathering and selling Americans’ personal information. Now, the world’s largest social network is souring that relationship — a sign that the company believes it has overshadowed their data-gathering machine.

    Facebook said late Wednesday that it would stop data brokers from helping advertisers target people with ads, severing one of the key methods marketers used to link users’ Facebook data about their friends and lifestyle with their offline data about their families, finances and health.

    The data brokers have for years served a silent but critical role in directing users’ attention to Facebook’s ads. They also, critics say, stealthily contributed to the seemingly all-knowing creepiness of users seeing ads for things they never mentioned on their Facebook pages. A marketer who wanted to target new mothers, for instance, could use the data brokers’ information to send Facebook ads to all women who bought baby formula with a store rewards card.

    Acxiom, Experian and other data brokers once had a prized seat at Facebook’s table, through a program called Partner Categories, that allowed advertisers to tap into the shadow profiles crafted with data from Facebook and the brokers to drill down on their target audience. The data brokers got a cut of the money when the ads they helped place turned into a sale, and Facebook also shared some data with the brokers to help gauge how well its ads performed.

    A Facebook director said in a statement that the company will wind down that program over the next six months, which “will help improve people’s privacy on Facebook.” But privacy experts saw the move as an assertion of dominance from the social network, which in recent years has consolidated its power over an increasingly intimate level of detail about its users’ lives — and wants advertisers to pay for its expertise.

    “Facebook is officially in the data-mining business,” said Joel Winston, a privacy lawyer in Pittsburgh. “It’s a definitive signal that Facebook’s data capture and identity-targeting technology is light-years ahead of its competitors’.”

    The move comes as Facebook battles a major privacy scandal in the wake of revelations that a political data firm, Cambridge Analytica, took advantage of the site’s loose privacy rules and improperly obtained data on more than 30 million Facebook users. The company has in recent days outlined steps showing how users can see and limit what Facebook knows about them, following what chief executive Mark Zuckerberg called a “major breach of trust.”

    In 2015, Facebook restricted the kinds of data that outside developers, including the researcher who fed data to Cambridge Analytica, could gather from users and their friends. Christopher Wylie, Cambridge Analytica’s whistleblower, told The Washington Post that Cambridge Analytica had paired Facebook data with information from data brokers to build out its voter profiles.

    But the social network continued to strengthen its ties with the data brokers who gather and repackage user information. That year, Acxiom said its involvement in Partner Categories helped its advertising clients use Facebook “to better connect with people more inclined to buy certain products or services,” adding that its clients included most of the country’s top 10 insurers, retailers, automakers, hotels, telecommunications giants and banks. One year earlier, in 2014, the Federal Trade Commission issued a report finding that data brokers had collected information on nearly every American and saying that the brokers “operate with a fundamental lack of transparency.”

    While Facebook gathers much of its 2 billion users’ online information, the data brokers attempt to scoop up everything else, including billions of bits of information from voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, car dealership records and other databases.

    The brokers use that raw data to build models predicting (with varying success) many hundreds of details about a customer’s behavior, finances and personality: age, family status, household income, whether she likes crossword puzzles, interest in buying a household pet, likelihood of having a funeral plan. The data brokers then sell those consumer profiles to marketers and major conglomerates seeking a vast and targeted customer base — including on Facebook, which now accounts for a fifth of the world’s online ads.

    Acxiom, the Arkansas-based broker that has worked with Facebook since 2013 and reported more than $880 million in revenue last year, estimated Facebook’s ditching of its data-sharing program would carve as much as $25 million from the company’s revenue and profit. In a statement late Wednesday, Acxiom said Facebook had alerted it that day to the news. “Today, more than ever, it is important for businesses to be able to rely upon companies that understand the critical importance of ethically sourced data and strong data governance. These are among Acxiom’s core strengths,” chief executive Scott Howe said. Its stock plunged more than 30 percent Thursday morning.

    Representatives for data broker Experian did not respond to questions, and data broker Oracle Data Cloud declined to comment. Experian stock moved downward slightly, while Oracle shares traded up about 1 percent. Facebook shares climbed about 3 percent, helping puncture weeks of losses.

    Data brokers’ models are often intricately and oddly detailed: Acxiom has categorized people into one of 70 “household life stage clusters,” including “Career-Centered Singles,” “Soccer and SUVs,” “Apple Pie Families” and “Rolling Stones.” But advertisers wanting more information — served straight from the source, in the person’s own words — have increasingly turned to Facebook, where they can grab first-party data from the actual customer, and not just third-party data gathered and analyzed from afar.

    Facebook and the data brokers have often dealt in the same kinds of personal information advertisers find impossible to resist. Experian, for instance, runs a Newborn Network that sells advertisers detailed information, gleaned from personal spending and demographic data, of women they predict are new and expectant mothers; the company says it “captures more than 80 percent of all U.S. births.” But Facebook users also freely share baby photos and mark their life events — a more direct way of relaying the same information to sellers of baby formula, cribs and maternity clothes.

    Advertisers will still be able to work with the data brokers to gather information and target customers; they’ll just have to do it outside Facebook. Critics pointed to a few ways, such as Facebook’s Custom Audiences tool, that will allow advertisers to still target customers en masse based on financial and other data they’ve pulled from across the Web.

    Some privacy experts cheered Facebook’s data-broker move as a step toward preserving user privacy. “It’s long overdue that Facebook owned up to the serious erosion of consumer privacy made possible by its alliance with powerful data brokers,” said Jeffrey Chester, executive director of the Washington privacy-rights nonprofit Center for Digital Democracy.

    Chris Sperandio, a product head of privacy at the marketing-data start-up Segment, said the move also helps Facebook dodge growing questions over the source of its user information. That is quickly becoming a high-stakes legal issue: A sweeping privacy rule coming to Europe in May, the General Data Protection Regulation, will make the company more liable and accountable for knowing where its data comes from.

    But some critics questioned what effect the move would have in a site that counts selling access to its users’ information as its biggest moneymaker. Facebook, privacy experts said, nets a vast range of real-time information — friendships, photos, work histories, interests and consumer tastes, as well as mobile, location and facial-recognition data — that advertisers view as more current and accurate than the broker information inferred from old receipts and government logs. What, they ask, would advertisers need to pay data brokers for?

    “We don’t know enough about Facebook’s data trove to know whether their abandonment of Partner Categories helps users avoid privacy invasions,” said Frank Pasquale, a University of Maryland professor who specializes in algorithms and privacy. “Even if we did have that knowledge, we have little reason to trust Facebook to actually follow through on it. It may well change course once media attention has gone elsewhere.”

    ———-

    “Facebook, longtime friend of data brokers, becomes their stiffest competition” by Drew Harwell; The Washington Post; 03/29/2018

    “Facebook said late Wednesday that it would stop data brokers from helping advertisers target people with ads, severing one of the key methods marketers used to link users’ Facebook data about their friends and lifestyle with their offline data about their families, finances and health.”

    Yep, one of the key methods marketers used to link Facebook data with all the offline data that these data brokerages were able to collect just might get severed. It’s potentially a big deal for Facebook and the advertising industry. Or potentially not. That’s part of what makes this such a fascinating move by Facebook: It’s potentially quite significant and potentially inconsequential:


    The data brokers have for years served a silent but critical role in directing users’ attention to Facebook’s ads. They also, critics say, stealthily contributed to the seemingly all-knowing creepiness of users seeing ads for things they never mentioned on their Facebook pages. A marketer who wanted to target new mothers, for instance, could use the data brokers’ information to send Facebook ads to all women who bought baby formula with a store rewards card.

    Acxiom, Experian and other data brokers once had a prized seat at Facebook’s table, through a program called Partner Categories, that allowed advertisers to tap into the shadow profiles crafted with data from Facebook and the brokers to drill down on their target audience. The data brokers got a cut of the money when the ads they helped place turned into a sale, and Facebook also shared some data with the brokers to help gauge how well its ads performed.

    And note how this cooperation with the brokerages only grew during the same period in which Facebook cut off the “friends permissions” privacy loophole exploited by Cambridge Analytica’s app and thousands of other apps in 2015. It’s a reminder that even when Facebook is getting better in some ways, it’s probably getting worse in others:


    In 2015, Facebook restricted the kinds of data that outside developers, including the researcher who fed data to Cambridge Analytica, could gather from users and their friends. Christopher Wylie, Cambridge Analytica’s whistleblower, told The Washington Post that Cambridge Analytica had paired Facebook data with information from data brokers to build out its voter profiles.

    But the social network continued to strengthen its ties with the data brokers who gather and repackage user information. That year, Acxiom said its involvement in Partner Categories helped its advertising clients use Facebook “to better connect with people more inclined to buy certain products or services,” adding that its clients included most of the country’s top 10 insurers, retailers, automakers, hotels, telecommunications giants and banks. One year earlier, in 2014, the Federal Trade Commission issued a report finding that data brokers had collected information on nearly every American and saying that the brokers “operate with a fundamental lack of transparency.”

    And while some of the data gathered by the data brokerages inevitably overlaps with what Facebook also gathers on people, there are quite a few categories of ‘offline’ data these brokers systematically gather that Facebook can’t collect without seeming extra creepy. Data brokers gather data from places like voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, and car dealership records. Imagine if Facebook directly gathered that kind of offline information about everyone instead of buying it from the brokerages or setting up arrangements like the Partner Categories program. Imagine how incredibly creepy it would be if Facebook had an ‘offline data collection’ division. It’s a reminder that Facebook and the data brokers really are engaged in a joint ‘online’/‘offline’ data gathering and aggregation effort. “Partner Categories” is an appropriate name because it’s a real partnership, and one that matters to both parties, since it would be a far bigger PR nightmare if Facebook had to collect all this offline data itself:


    While Facebook gathers much of its 2 billion users’ online information, the data brokers attempt to scoop up everything else, including billions of bits of information from voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, car dealership records and other databases.

    The brokers use that raw data to build models predicting (with varying success) many hundreds of details about a customer’s behavior, finances and personality: age, family status, household income, whether she likes crossword puzzles, interest in buying a household pet, likelihood of having a funeral plan. The data brokers then sell those consumer profiles to marketers and major conglomerates seeking a vast and targeted customer base — including on Facebook, which now accounts for a fifth of the world’s online ads.
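    To make that modeling step concrete, here is a minimal toy sketch of what broker-style segment inference looks like in principle. All of the rules, names, and fields below are invented for illustration; real brokers use far larger statistical models with (as the article notes) varying success:

```python
# Toy illustration of broker-style segment modeling: raw "offline"
# records go in, predicted consumer segments come out.
records = [
    {"name": "A", "purchases": ["baby formula", "diapers"], "home_owner": True},
    {"name": "B", "purchases": ["motor oil"], "home_owner": False},
]

def infer_segments(record):
    """Predict marketing segments from offline purchase/property data."""
    segments = []
    if any("baby" in item or item == "diapers" for item in record["purchases"]):
        segments.append("new_parent")
    if record["home_owner"]:
        segments.append("likely_homeowner")
    return segments

profiles = {r["name"]: infer_segments(r) for r in records}
print(profiles)  # {'A': ['new_parent', 'likely_homeowner'], 'B': []}
```

    The consumer profiles sold to marketers are essentially large collections of inferred labels like these, keyed to an identifier that can be matched back to a person.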

    And, of course, the Custom Audiences tool that lets advertisers feed in lists of things like email addresses to target specific audiences – used extensively by the 2016 Trump campaign – might make the decision to end the Partner Categories program moot:


    Advertisers will still be able to work with the data brokers to gather information and target customers; they’ll just have to do it outside Facebook. Critics pointed to a few ways, such as Facebook’s Custom Audiences tool, that will allow advertisers to still target customers en masse based on financial and other data they’ve pulled from across the Web.
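    The indirect route critics point to could look something like the following sketch. Custom-audience-style matching conventionally uses SHA-256 hashes of normalized emails rather than raw addresses; the broker list and the commented-out upload call below are hypothetical and named only for illustration:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Hash a trimmed, lowercased email so raw addresses
    # aren't handed over directly to the ad platform.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Step 1 (off-platform): advertiser buys a targeting list from a data broker.
broker_list = ["New.Mother@Example.com ", "car.buyer@example.com"]

# Step 2: normalize and hash the list before upload.
hashed_audience = [normalize_and_hash(e) for e in broker_list]

# Step 3 (hypothetical API call, for illustration only):
# ad_platform.create_custom_audience(name="broker_segment",
#                                    users=hashed_audience)

print(len(hashed_audience), len(hashed_audience[0]))  # 2 64
```

    The net effect is the same broker-to-platform data flow as before, just with the advertiser acting as the middleman instead of the Partner Categories program.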

    And as Frank Pasquale points out, we also don’t know enough about what Facebook knows about us to know how much of an impact ending the Partner Categories program will have on the privacy violations built into Facebook’s whole business model. It’s entirely possible this change will make fusing data broker data with Facebook data less convenient and less profitable, yet leave things just as privacy-violating, both because the present-day setup can be replicated indirectly (by Facebook advertisers coordinating with the data brokers separately) and because Facebook might already know almost everything the data brokers know just from its own data collection methods. In other words, this could be largely cosmetic. And, as Pasquale also pointed out, Facebook might just change its mind and not end the program once public attention wanes:


    Some privacy experts cheered Facebook’s data-broker move as a step toward preserving user privacy. “It’s long overdue that Facebook owned up to the serious erosion of consumer privacy made possible by its alliance with powerful data brokers,” said Jeffrey Chester, executive director of the Washington privacy-rights nonprofit Center for Digital Democracy.

    But some critics questioned what effect the move would have in a site that counts selling access to its users’ information as its biggest moneymaker. Facebook, privacy experts said, nets a vast range of real-time information — friendships, photos, work histories, interests and consumer tastes, as well as mobile, location and facial-recognition data — that advertisers view as more current and accurate than the broker information inferred from old receipts and government logs. What, they ask, would advertisers need to pay data brokers for?

    “We don’t know enough about Facebook’s data trove to know whether their abandonment of Partner Categories helps users avoid privacy invasions,” said Frank Pasquale, a University of Maryland professor who specializes in algorithms and privacy. “Even if we did have that knowledge, we have little reason to trust Facebook to actually follow through on it. It may well change course once media attention has gone elsewhere.”

    So is this announced policy change going to happen? Will it matter if it does? It’s a pretty significant question, and not an easy one to answer given that Facebook’s algorithms are largely a black box.

    That said, Josh Marshall might have a significant data point for us regarding how important the current third-party data-sharing arrangement with the data brokerage giants really is to the performance of Facebook’s ad targeting: starting in early March, advertisers noticed a significant drop-off in the targeting quality of Facebook’s ads. Facebook’s ad targeting simply got worse for some reason. And this was early March, before the Cambridge Analytica story hit in mid-March but possibly after Facebook knew the story was coming. So the timing of this observation is interesting, and Marshall has a hunch: Facebook was already experimenting with how its internal advertising algorithm would operate without direct access to the data brokerages, and potentially without access to a lot of other data sources, in anticipation of the new EU regulations and possible new regulations from the US Congress. In other words, Facebook already saw the writing on the wall before the recent wave of Cambridge Analytica revelations went public, it has already started the shift to an in-house ad targeting algorithm, and it shows.

    Now, it’s possible Josh Marshall is correct that Facebook has already started implementing an internal-only ad targeting algorithm, that it’s noticeably worse now, but that it will get better in the long run as Facebook improves its third-party-limited algorithm and the advertisers and brokers adapt to a new, less direct data-sharing arrangement. Maybe everyone will adapt and get back up to par. Time will tell.

    But if not, and if the loss of these data-sharing arrangements makes Facebook’s ads less effective in the long run (perhaps because directly funneling the broker data and a whole bunch of other third-party data into Facebook is far more efficient than anything the indirect methods can replicate), then it’s worth noting that this downgrade in Facebook’s ad targeting quality would reflect a real form of privacy enhancement and should generally be cheered. It’s also a statement on the public utility of the overall data brokerage industry, which is dedicated to collecting, aggregating, and selling personal data profiles. There’s a lot of negative utility in that industry, and this wave of Facebook scandals is just one facet of it. So if Marshall’s guess is correct and the observable drop-off in Facebook’s ad quality reflects a decision by Facebook to preemptively take third-party data out of its ad targeting algorithms in anticipation of the new EU data privacy laws and future congressional action in the US, let’s hope that drop-off is sustained for our privacy’s sake:

    Talking Points Memo
    Editor’s Blog

    Is Facebook In More Trouble Than People Think?

    By Josh Marshall | April 5, 2018 12:45 pm

    For more than a year, Facebook has faced a rolling public relations debacle. Part of this is the American public’s shifting attitudes toward Big Tech and platforms in general. But the driving problem has been the way the platform was tied up with and perhaps implicated in Russia’s attempt to influence the 2016 presidential election. Users’ trust in the platform has been shaken, politicians are threatening scrutiny and possible regulation, and there’s even a campaign to get people to delete their Facebook accounts. All of this is widely known and we hear more about it every day. But most users, most people in tech and also Wall Street (which is the source of Facebook’s gargantuan valuation) don’t yet get the full picture. We know about Facebook’s reputational crisis. But people aren’t fully internalizing that the current crisis poses a potentially dire threat to Facebook’s core business model, its core advertising business.

    Facebook is fundamentally an advertising business. Almost all of the company’s revenue comes from advertising that it targets with unparalleled efficiency to its billions of users. In a media world in which advertising rates face almost universal downward pressure, Facebook’s rates have consistently risen. Monopoly power may drive some of that growth. But the key driver is efficiency. If old-fashioned advertising shows my advertisement to 100 people for every actual buyer and other digital platforms show it to 30 people and Facebook shows it to 5 people, Facebook’s ads are just worth a lot more.

    As long as the rates bear some relationship to that efficiency (those numbers above are just for illustration), I’ll be happy to pay it. Because it’s objectively worth more. Indeed, as the prices have gone up, Facebook has actually gotten more efficient. As one digital ad agency executive recently told me, even if Facebook jacked up the prices a lot more, his firm would likely keep using them just as much because on this cost to efficiency basis it’s still cheap. This is the basis of Facebook’s astronomical market capitalization which today rates at over $450 billion, even after some recent reverses.

    So the money comes from the advertising. And the advertising comes from the data and the artificial intelligence that crunches it and models it into predictive efficiency. But what if there’s a breakdown in the data?

    Starting in early March, a number of marketers running substantial sums on the Facebook ad engine, who’ve spoken to TPM, started noticing a new level of platform instability and reductions in targeting efficiency. To understand what this means, think about it like how an efficient debt or equity market operates. If there is relatively accurate information, no big external shocks and enough buyers and sellers, pricing should have relative stability and operate within certain bands. Accounting for some reasonable amount of bumpiness that’s what Facebook’s ad engine has looked like for a few years. But starting in March, if you’re down in the trenches working with the granular numbers, something started to look weird: price oscillations, reduced targeting efficiency and even glitches.

    We’ve talked to a number of advertisers who’ve reported this. We’ve also talked to others who haven’t. But the ones who have tend to be the ones more tightly tied to the numbers and in marketing operations with tighter ROI (return on investment). Where we’ve seen the most of this is with so-called DTC (direct to consumer) marketers. Facebook is an amazingly large ecosystem. And it’s all a black box. So there’s no way for us to talk to a representative sample of advertisers. But something is going on in at least substantial sectors of Facebook’s ad engine. What is it? Marketers who’ve asked mainly get told it’s their creativity. In other words, the ad you’re running isn’t working. Come up with another ad. Here at TPM, we operate in a different part of the programmatic ad universe. You hear comparable things like that a lot. And it’s hard to ignore. But we’ve talked with different people running (by definition) different ads in totally different industries. So that’s not it. Something is happening.

    So what’s up?

    One thing is already being discussed widely in the trade press. In response to the rolling public relations debacle Facebook has already dramatically reduced or has announced that it will reduce advertisers’ ability to use third party ad data on the Facebook platform. That is a big deal. What’s that mean?

    As you know, through your activity on Facebook, Facebook collects lots of data about you that it then uses to target ads. That’s “Facebook data” (or it’s yours, but you know what I mean). Facebook also allows advertisers to upload “1st party data”. What’s that? That’s if my book publishing company has a list of 50,000 emails, I can upload those emails to Facebook and run ads to those people. Then there’s “3rd party data”. That’s if the advertiser or Facebook itself goes to another personal data broker, buys access to that data and pours it into the Facebook ecosystem for more efficient targeting.

    If you’re not versed in the world of data and digital advertising, there’s a ton here to keep up with. But here’s the key. How reliant is Facebook’s advertising cash cow on third party data? Not just the third party data that Facebook allows advertisers to put into its ecosystem for better targeting (which is now being phased out) but 3rd party data Facebook uses itself to improve its ad targeting? As one data industry executive put it to me, sure Facebook can crunch its own data to find out all sorts of things about you. But in a lot of cases it may be easier, cheaper and in some cases simply more effective to buy that data from other sources. We don’t really know – and no one outside Facebook really knows – how good Facebook’s AI really is at modeling user data entirely on its own without other sorts of data mixed in. It’s a black box. It matters a lot in terms of Facebook’s core revenue stream.

    Here’s another question: when you consider Facebook’s own data, how reliant is Facebook on ways it collects and processes Facebook data which it may not be able to do any longer either because of new regulations that come into effect later this year for the EU or because of new regulations Congress may put into effect as it puts new scrutiny on Facebook’s behavior?

    My hunch is that the answer to most or all of these questions is “a lot more than most people realize.” We already know that Facebook is making a lot of changes to how it uses data, especially third party data and how it allows advertisers to use data. Some of this is already public. Indeed, it’s getting discussed a lot in the trade press – in particular how Facebook will implement and cope with the new regs from the European Union. So why all the choppiness in Facebook’s advertising and targeting metrics? I suspect that Facebook is trying to rejigger its algorithm on the fly more than people realize in order to see if they can get it to work as effectively for ads without a lot of data sources or data uses they really aren’t supposed to be doing or which they suspect they’ll lose access to in coming regulation. That is the most logical explanation of the instability in their reporting.

    If you talk to ad industry people, they treat it as a given that Facebook is already having to “rebuild their platform basically from the ground up” as one top agency executive told me, in response to “fake news”, propaganda campaigns, privacy scrutiny, etc. – all the stuff we’ve read about over recent months. But it’s Facebook. They’ll work it out, is what these people figure. And they’re probably right. Facebook is huge, has massive resources and access to the world’s largest audience for anything ever. They have oceans of data and a massive leg up on everyone. Down at the more granular level though, even in the industry press, it is treated as a given that the already publicly announced new restrictions on third party data will likely lead to at least some migration of advertisers to new platforms. Ginny Marvin, a top trade press reporter working at the granular ad tech and marketing level rather than up in tech big think land, tweeted this on March 30th: “FB removing 3P [3rd party] data is a big change for advertisers. But at FB’s scale, you’re not going to see advts sharply pivot elsewhere en masse. This will look more like a slow moving ship of budgets diverting to other media if they don’t get performance they want from FB.”

    For now, as Marvin notes, Facebook’s advertiser lock-in, market power and simple price value make it highly unlikely that there’s going to be any dramatic near-term move from Facebook even in the worst-case scenario. But Facebook isn’t just making money hand over fist. Its market valuation rests on the assumption that it will keep making that amount of money hand over fist and indeed keep increasing the amount of money it makes hand over fist. Any breakdown or significant slowdown in that growth and consistency is a big problem. Years ago, everyone counted Facebook out as a true profit platform, until it exceeded everyone’s expectations. Now, even with all the bad press, most figure that it’s profitable forever. Both conventional wisdoms were wrong. For now, keep in mind that Facebook isn’t just dealing with a reputational crisis. It’s having to clean up the reputational mess by rejiggering parts of its core revenue stream, which it’s not clear it really knows how to do. That creates a lot of unpredictability. More than most people seem to realize.

    ———-

    “Is Facebook In More Trouble Than People Think?” by Josh Marshall; Talking Points Memo; 04/05/2018

    “For more than a year, Facebook has faced a rolling public relations debacle. Part of this is the American public’s shifting attitudes toward Big Tech and platforms in general. But the driving problem has been the way the platform was tied up with and perhaps implicated in Russia’s attempt to influence the 2016 presidential election. Users’ trust in the platform has been shaken, politicians are threatening scrutiny and possible regulation, and there’s even a campaign to get people to delete their Facebook accounts. All of this is widely known and we hear more about it every day. But most users, most people in tech and also Wall Street (which is the source of Facebook’s gargantuan valuation) don’t yet get the full picture. We know about Facebook’s reputational crisis. But people aren’t fully internalizing that the current crisis poses a potentially dire threat to Facebook’s core business model, its core advertising business.”

    As Josh Marshall points out, if Facebook really does have to turn off the third-party data spigot, the question of what this will actually do to the quality of its ad targeting is a massive question. The importance of the direct third-party data sharing arrangement is one of the big questions swirling around Facebook for both Facebook’s investors (from a price per share standpoint) and the public (from a public privacy standpoint). The fact that the EU’s new data privacy rules are hitting Facebook in Europe right when the Cambridge Analytica scandal starts playing out in the US and threatens to snowball into a larger scandal about Facebook’s business model in general just makes it a bigger question for Facebook.

    And it’s a crisis for Facebook that will be numerically reflected in one key measure pointed out by Marshall: the number of advertisements that need to be shown to trigger a sale on Facebook compared to other platforms. It’s a 5-to-1 ratio for Facebook vs a 30-to-1 ratio for other digital platforms and 100-to-1 for traditional ads. Facebook really is much better at targeting its ads than even its digital peers. So when Facebook gets worse at targeting its ads, that amounts to real privacy gains, because it’s one of the biggest and most cutting-edge ad targeting platforms. This is why Facebook is worth over $450 billion:


    Facebook is fundamentally an advertising business. Almost all of the company’s revenue comes from advertising that it targets with unparalleled efficiency to its billions of users. In a media world in which advertising rates face almost universal downward pressure, Facebook’s rates have consistently risen. Monopoly power may drive some of that growth. But the key driver is efficiency. If old-fashioned advertising shows my advertisement to 100 people for every actual buyer and other digital platforms show it to 30 people and Facebook shows it to 5 people, Facebook’s ads are just worth a lot more.

    As long as the rates bear some relationship to that efficiency (those numbers above are just for illustration), I’ll be happy to pay it. Because it’s objectively worth more. Indeed, as the prices have gone up, Facebook has actually gotten more efficient. As one digital ad agency executive recently told me, even if Facebook jacked up the prices a lot more, his firm would likely keep using them just as much because on this cost to efficiency basis it’s still cheap. This is the basis of Facebook’s astronomical market capitalization which today rates at over $450 billion, even after some recent reverses.

    “If old-fashioned advertising shows my advertisement to 100 people for every actual buyer and other digital platforms show it to 30 people and Facebook shows it to 5 people, Facebook’s ads are just worth a lot more”
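    Marshall stresses that his ratios are just for illustration, but they make the per-impression economics concrete. Here is a minimal sketch of that arithmetic, using the article's illustrative ratios and a hypothetical $50 value per sale (the dollar figure is an assumption purely for scale):

```python
# Impressions shown per actual buyer, per Marshall's illustrative ratios.
impressions_per_sale = {"traditional": 100, "other_digital": 30, "facebook": 5}

# Hypothetical revenue per sale -- a made-up figure purely for scale.
sale_value = 50.0

for platform, n in impressions_per_sale.items():
    # Each impression effectively carries sale_value / n of revenue.
    print(f"{platform}: ${sale_value / n:.2f} per impression")
```

    On these illustrative numbers a Facebook impression carries twenty times the revenue of a traditional one ($10.00 vs. $0.50), which is the arithmetic behind the claim that better targeting translates directly into higher ad rates.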

    And that’s why this is a pretty big story if there’s a real drop in the quality of Facebook’s ad targeting. Facebook is wildly ahead of almost all of its competition. Only Google and governments are going to compete with what Facebook knows about us all. So if Facebook effectively knows less about us, as reflected in a drop in the ad targeting observed starting in early March, that reflects a real de facto increase in public privacy. And it’s also a big story from a business standpoint because it’s not just about Facebook, it’s also about the entire data brokerage industry. There’s a large part of the modern US economy potentially tied into this Facebook scandal. A scandal that now extends beyond the Cambridge Analytica app situation and has led to Facebook declaring the phaseout of its Partner Categories program. Is this ushering in a sea change in the data brokerage industry? If so, that’s big.

    Facebook was going to have a sea change in how it did business in the EU thanks to the new data privacy laws, but it’s this Cambridge Analytica scandal that appears to be driving the likelihood of sea change in the US market too. And that’s part of why it’s notable if Facebook really did start rejiggering its algorithms without that third-party data in early March, potentially in anticipation of this flurry of bad press, and then the ad targeting suddenly got worse. Because if it turns out that the loss of the third-party data makes Facebook’s ad targeting worse, we should note that. And ask ourselves whether or not making Facebook even worse at targeting ads would be desirable from a public privacy perspective. The more Facebook sucks at ads the better Facebook is for everyone from a privacy perspective. It’s one of the fundamental contradictions of Facebook’s business model that this Cambridge Analytica scandal risks exposing to the public:


    So the money comes from the advertising. And the advertising comes from the data and the artificial intelligence that crunches it and models it into predictive efficiency. But what if there’s a breakdown in the data?

    Starting in early March, a number of marketers running substantial sums on the Facebook ad engine, who’ve spoken to TPM, started noticing a new level of platform instability and reductions in targeting efficiency. To understand what this means, think about it like how an efficient debt or equity market operates. If there is relatively accurate information, no big external shocks and enough buyers and sellers, pricing should have relative stability and operate within certain bands. Accounting for some reasonable amount of bumpiness, that’s what Facebook’s ad engine has looked like for a few years. But starting in March, if you’re down in the trenches working with the granular numbers, something started to look weird: price oscillations, reduced targeting efficiency and even glitches.

    We’ve talked to a number of advertisers who’ve reported this. We’ve also talked to others who haven’t. But the ones who have tend to be the ones more tightly tied to the numbers and in marketing operations with tighter ROI (return on investment). Where we’ve seen the most of this is with so-called DTC (direct to consumer) marketers. Facebook is an amazingly large ecosystem. And it’s all a black box. So there’s no way for us to talk to a representative sample of advertisers. But something is going on in at least substantial sectors of Facebook’s ad engine. What is it? Marketers who’ve asked mainly get told it’s their creativity. In other words, the ad you’re running isn’t working. Come up with another ad. Here at TPM, we operate in a different part of the programmatic ad universe. You hear comparable things like that a lot. And it’s hard to ignore. But we’ve talked to different people with (by definition) different ads in totally different industries. So that’s not it. Something is happening.

    So what’s up?

    One thing is already being discussed widely in the trade press. In response to the rolling public relations debacle Facebook has already dramatically reduced or has announced that it will reduce advertisers’ ability to use third party ad data on the Facebook platform. That is a big deal. What’s that mean?

    And as Josh Marshall points out, the impact of the loss of this third-party data on Facebook’s ad targeting algorithms is largely speculative because we know so little about what Facebook knows about us without those third-party data sources. Facebook is a black box:


    As you know, through your activity on Facebook, Facebook collects lots of data about you that it then uses to target ads. That’s “Facebook data” (or it’s yours, but you know what I mean). Facebook also allows advertisers to upload “1st party data”. What’s that? That’s if my book publishing company has a list of 50,000 emails, I can upload those emails to Facebook and run ads to those people. Then there’s “3rd party data”. That’s if the advertiser or Facebook itself goes to another personal data broker, buys access to that data and pours it into the Facebook ecosystem for more efficient targeting.

    If you’re not versed in the world of data and digital advertising, there’s a ton here to keep up with. But here’s the key. How reliant is Facebook’s advertising cash cow on third party data? Not just the third party data that Facebook allows advertisers to put into its ecosystem for better targeting (which is now being phased out) but 3rd party data Facebook uses itself to improve its ad targeting? As one data industry executive put it to me, sure Facebook can crunch its own data to find out all sorts of things about you. But in a lot of cases it may be easier, cheaper and in some cases simply more effective to buy that data from other sources. We don’t really know – and no one outside Facebook really knows – how good Facebook’s AI really is at modeling user data entirely on its own without other sorts of data mixed in. It’s a black box. It matters a lot in terms of Facebook’s core revenue stream.

    But we might get an answer to the question of whether or not Facebook needs that third-party data to achieve the ad targeting proficiency it has today, because of those new EU regulations and the real possibility of some sort of congressional action as a result of the Cambridge Analytica scandal. And that, of course, is why Josh Marshall suspects what we’re seeing in the reported drop in Facebook’s ad targeting is that Facebook is already preparing for coming regulation:


    Here’s another question: when you consider Facebook’s own data, how reliant is Facebook on ways it collects and processes Facebook data which it may not be able to do any longer either because of new regulations that come into effect later this year for the EU or because of new regulations Congress may put into effect as it puts new scrutiny on Facebook’s behavior?

    My hunch is that the answer to most or all of these questions is “a lot more than most people realize.” We already know that Facebook is making a lot of changes to how it uses data, especially third party data and how it allows advertisers to use data. Some of this is already public. Indeed, it’s getting discussed a lot in the trade press – in particular how Facebook will implement and cope with the new regs from the European Union. So why all the choppiness in Facebook’s advertising and targeting metrics? I suspect that Facebook is trying to rejigger its algorithm on the fly more than people realize in order to see if they can get it to work as effectively for ads without a lot of data sources or data uses they really aren’t supposed to be doing or which they suspect they’ll lose access to in coming regulation. That is the most logical explanation of the instability in their reporting.

    And if Josh Marshall’s hunch is correct and Facebook really did start rejiggering its ad targeting algorithms in anticipation of coming congressional regulation – which points towards an anticipation by Facebook of a very negative public response to the yet-to-be-released Cambridge Analytica story – we have to wonder just how many other privacy-violating schemes Facebook has been up to with other third parties beyond the data brokerage giants like Acxiom or Experian. Like what kinds of other classes of third-party providers might Facebook be incorporating into their algorithms?

    Well, here’s a chilling example of the kind of third-party data-sharing partnership Facebook might be interested in: hospital record metadata. Like what diseases people have and the medications they’re on and when they visited the hospital. From several major hospitals, including Stanford Medical School’s.

    Facebook says it would be for research purposes only by the medical community, but Facebook would have been able to deanonymize the data. And it’s kind of obscene, because Facebook says the plan for protecting everyone’s privacy is to use “hashing” – where each patient would be assigned a seemingly random anonymous number derived mathematically from something like the patient’s name – and that only the medical research community would have access to the anonymized data, so no one’s privacy is at risk. But using hashing to match the Facebook data set and the hospital data set means Facebook can match up the hospital data with its Facebook users. Facebook is trying to get patient health data from hospitals that it could readily deanonymize. It’s a disturbing example of the kind of third-party data that Facebook is interested in.
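    To see why hashing anonymizes the data for outside researchers but not for the parties doing the matching, here’s a minimal sketch of hash-based record linkage. The names, record fields, and the use of SHA-256 are all hypothetical illustration, not details of Facebook’s actual proposal:

```python
import hashlib

def pseudonym(name: str) -> str:
    # Deterministic "anonymous" ID: the same name always hashes to the same value.
    return hashlib.sha256(name.strip().lower().encode()).hexdigest()[:16]

# Hypothetical records; each side shares only hashed IDs, never names.
hospital_records = {pseudonym("Jane Doe"): {"condition": "heart disease", "er_visits": 3}}
facebook_records = {pseudonym("Jane Doe"): {"age": 50, "messages_per_day": 40}}

# A researcher holding only the hashes can join the two data sets...
joined = {h: {**hospital_records[h], **facebook_records[h]}
          for h in hospital_records if h in facebook_records}

# ...but cannot tell who any hash refers to. A party that still holds the
# underlying name list (as Facebook does for its own users) simply re-hashes
# the names and reads the medical record back off the "anonymized" join:
medical_profile = joined[pseudonym("Jane Doe")]
print(medical_profile["condition"])  # the hospital data, re-linked to Jane
```

    The hashing only protects identities from parties who never saw the underlying names; anyone who can regenerate the hashes can undo the anonymization, which is exactly the position Facebook would be in.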

    And there’s no real reason to believe they wouldn’t wildly abuse the data and probably turn the patients of those hospitals into focus groups for algorithmic testing, using their medical records to pitch ads. Which will probably freak those people out. Facebook + hospital data = yikes.

    And this plan was being pursued last month. The Cambridge Analytica scandal disrupted active talks. The plan was “put on pause” by Facebook last week in response to the Cambridge Analytica outrage. Still, that’s just “on pause”. So it sounds like the plan is still “on” and we should expect a continued push into the medical record space by Facebook.

    Facebook’s pitch was to combine health system data on patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with Facebook’s data on the person (such as: user is age 50, married with 3 kids, English isn’t a primary language, actively engages with the community by sending a lot of messages). And then the research project would try to use this combined information to improve patient care in some way, with an initial focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.

    In other words, Facebook was setting up a research project dedicated to developing hospital decision-making support that utilizes Facebook’s pool of personalized data on people. Which is a path to plug Facebook into the hospital system. Yikes:

    CNBC

    Facebook sent a doctor on a secret mission to ask hospitals to share patient data

    * Facebook was in talks with top hospitals and other medical groups as recently as last month about a proposal to share data about the social networks of their most vulnerable patients.
    * The idea was to build profiles of people that included their medical conditions, information that health systems have, as well as social and economic factors gleaned from Facebook.
    * Facebook said the project is on hiatus so it can focus on “other important work, including doing a better job of protecting people’s data.”

    Christina Farr | @chrissyfarr
    Published 2:01 PM ET Thu, 5 April 2018 Updated 11:46 AM ET Fri, 6 April 2018

    Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.

    The proposal never went past the planning phases and has been put on pause after the Cambridge Analytica data leak scandal raised public concerns over how Facebook and others collect and use detailed information about Facebook users.

    “This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone’s data,” a Facebook spokesperson told CNBC.

    But as recently as last month, the company was talking to several health organizations, including Stanford Medical School and American College of Cardiology, about signing the data-sharing agreement.

    While the data shared would obscure personally identifiable information, such as the patient’s name, Facebook proposed using a common computer science technique called “hashing” to match individuals who existed in both sets. Facebook says the data would have been used only for research conducted by the medical community.

    The project could have raised new concerns about the massive amount of data Facebook collects about its users, and how this data can be used in ways users never expected.

    Led out of Building 8

    The exploratory effort to share medical-related data was led by an interventional cardiologist called Freddy Abnousi, who describes his role on LinkedIn as “leading top-secret projects.” It was under the purview of Regina Dugan, the head of Facebook’s “Building 8” experiment projects group, before she left in October 2017.

    Facebook’s pitch, according to two people who heard it and one who is familiar with the project, was to combine what a health system knows about its patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with what Facebook knows (such as: user is age 50, married with 3 kids, English isn’t a primary language, actively engages with the community by sending a lot of messages).

    The project would then figure out if this combined information could improve patient care, initially with a focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.

    The people declined to be named as they were asked to sign confidentiality agreements.

    Facebook provided a quote from Cathleen Gates, the interim CEO of the American College of Cardiology, explaining the possible benefits of the plan:

    “For the first time in history, people are sharing information about themselves online in ways that may help determine how to improve their health. As part of its mission to transform cardiovascular care and improve heart health, the American College of Cardiology has been engaged in discussions with Facebook around the use of anonymized Facebook data, coupled with anonymized ACC data, to further scientific research on the ways social media can aid in the prevention and treatment of heart disease—the #1 cause of death in the world. This partnership is in the very early phases as we work on both sides to ensure privacy, transparency and scientific rigor. No data has been shared between any parties.”

    Health systems are notoriously careful about sharing patient health information, in part because of state and federal patient privacy laws that are designed to ensure that people’s sensitive medical information doesn’t end up in the wrong hands.

    To address these privacy laws and concerns, Facebook proposed to obscure personally identifiable information, such as names, in the data being shared by both sides.

    However, the company proposed using a common cryptographic technique called hashing to match individuals who were in both data sets. That way, both parties would be able to tell when a specific set of Facebook data matched up with a specific set of patient data.

    The issue of patient consent did not come up in the early discussions, one of the people said. Critics have attacked Facebook in the past for doing research on users without their permission. Notably, in 2014, Facebook manipulated hundreds of thousands of people’s news feeds to study whether certain types of content made people happier or sadder. Facebook later apologized for the study.

    Health policy experts say that this health initiative would be problematic if Facebook did not think through the privacy implications.

    “Consumers wouldn’t have assumed their data would be used in this way,” said Aneesh Chopra, president of a health software company specializing in patient data called CareJourney and the former White House chief technology officer.

    “If Facebook moves ahead (with its plans), I would be wary of efforts that repurpose user data without explicit consent.”

    When asked about the plans, Facebook provided the following statement:

    “The medical industry has long understood that there are general health benefits to having a close-knit circle of family and friends. But deeper research into this link is needed to help medical professionals develop specific treatment and intervention plans that take social connection into account.”

    “With this in mind, last year Facebook began discussions with leading medical institutions, including the American College of Cardiology and the Stanford University School of Medicine, to explore whether scientific research using anonymized Facebook data could help the medical community advance our understanding in this area. This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone’s data.”

    “Last month we decided that we should pause these discussions so we can focus on other important work, including doing a better job of protecting people’s data and being clearer with them about how that data is used in our products and services.”

    ———-

    “Facebook sent a doctor on a secret mission to ask hospitals to share patient data” by Christina Farr; CNBC; 04/05/2018

    “Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.”

    Patient data from hospitals. It’s Facebook’s brave new third-party data frontier. Currently under the auspices of medical research, but it’s research for the purpose of showing Facebook’s utility in medical decision-support, which is to say research to demonstrate the utility of sharing patient information with Facebook. That was the general pitch Facebook was making to several major US hospitals, including Stanford. And it’s a plan that, according to Facebook, was being pursued last month and has merely been “put on pause” in the wake of the Cambridge Analytica scandal:


    The proposal never went past the planning phases and has been put on pause after the Cambridge Analytica data leak scandal raised public concerns over how Facebook and others collect and use detailed information about Facebook users.

    “This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone’s data,” a Facebook spokesperson told CNBC.

    But as recently as last month, the company was talking to several health organizations, including Stanford Medical School and American College of Cardiology, about signing the data-sharing agreement.

    The way Facebook pitched it, the anonymized data from Facebook and the anonymized data from the hospitals would be combined and used for medical community research (research into Facebook as a patient care decision-support partner):


    Facebook’s pitch, according to two people who heard it and one who is familiar with the project, was to combine what a health system knows about its patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with what Facebook knows (such as: user is age 50, married with 3 kids, English isn’t a primary language, actively engages with the community by sending a lot of messages).

    The project would then figure out if this combined information could improve patient care, initially with a focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.

    But what Facebook doesn’t acknowledge in that pitch is that the technique it’s proposing to anonymize the data only anonymizes it to everyone except the hospital and Facebook. Facebook can easily deanonymize the hospital data if it gets its hands on it. The medical researchers aren’t the privacy threat. The data actually is anonymized for them, because they don’t know the patients or the Facebook profiles. They’re just hashed ids. But Facebook sure as hell is a privacy threat, because it’s Facebook that ends up with data it can deanonymize:


    While the data shared would obscure personally identifiable information, such as the patient’s name, Facebook proposed using a common computer science technique called “hashing” to match individuals who existed in both sets. Facebook says the data would have been used only for research conducted by the medical community.

    The project could have raised new concerns about the massive amount of data Facebook collects about its users, and how this data can be used in ways users never expected.

    Health systems are notoriously careful about sharing patient health information, in part because of state and federal patient privacy laws that are designed to ensure that people’s sensitive medical information doesn’t end up in the wrong hands.

    To address these privacy laws and concerns, Facebook proposed to obscure personally identifiable information, such as names, in the data being shared by both sides.

    However, the company proposed using a common cryptographic technique called hashing to match individuals who were in both data sets. That way, both parties would be able to tell when a specific set of Facebook data matched up with a specific set of patient data.

    And note how the issue of patient consent didn’t come up in these early discussions, suggesting that Facebook is trying to work out a situation where people don’t know their patient record data was handed over to Facebook:


    The issue of patient consent did not come up in the early discussions, one of the people said. Critics have attacked Facebook in the past for doing research on users without their permission. Notably, in 2014, Facebook manipulated hundreds of thousands of people’s news feeds to study whether certain types of content made people happier or sadder. Facebook later apologized for the study.

    Health policy experts say that this health initiative would be problematic if Facebook did not think through the privacy implications.

    “Consumers wouldn’t have assumed their data would be used in this way,” said Aneesh Chopra, president of a health software company specializing in patient data called CareJourney and the former White House chief technology officer.

    “If Facebook moves ahead (with its plans), I would be wary of efforts that repurpose user data without explicit consent.”

    And, of course, it was Facebook’s mad science “Building 8” R&D group that was behind this proposal. The same group behind projects like mind-reading brain-to-computer interface technology (so Facebook can literally data mine your brain activity). And the same R&D group that was recently led by former DARPA chief Regina Dugan, until Dugan left last year with a cryptic message about stepping away to be “purposeful about what’s next, thoughtful about new ways to contribute in times of disruption.” This is next-generation Facebook stuff:


    The exploratory effort to share medical-related data was led by an interventional cardiologist called Freddy Abnousi, who describes his role on LinkedIn as “leading top-secret projects.” It was under the purview of Regina Dugan, the head of Facebook’s “Building 8” experiment projects group, before she left in October 2017.

    It’s a reminder that Facebook’s R&D teams are probably working on all sorts of new ways to tap into data-rich third-party sources. Hospitals are merely one particularly data-rich example of the problem.

    And if Facebook really does cut out third-party data brokers from its algorithms, let’s not forget that Facebook is probably going to use that as an excuse and imperative to reach out to all sorts of niche third-party data providers for direct access. Like hospitals. Don’t forget that the above plan was merely “put on pause”. They want to do more stuff like this going forward. And why not, if they can get hospitals to give this kind of data out. And any other kind of institution they can convince to hand over our data. This is how Facebook can go “offline”: with direct data sharing services, like patient care decision-making support services, with one field of institutions at a time. Hospitals are just one example.

    So given Facebook faces potential congressional action and new regulations, it’s going to be important to keep in mind that those regulations are going to have to include more than just the data brokerage giants like Experian. Because Facebook is interested in what you tell your doctor too. And presumably lots of other ‘services’ where they fuse their data about you with another data source for combined decision-making support. And the more Facebook promises to cut out third-party data, the more Facebook is going to try to directly collect “offline” data by fusing itself with other facets of our lives. It’s really quite disturbing.

    And who knows who else in the data brokerage industry might try to follow Facebook’s lead. Will Google also want to get into the patient care decision-support market? Third-party data-brokerage decision-making support could potentially be applied to a lot more than just the medical sector. It’s a creepy new profit frontier.

    Beyond that, how else might Facebook attempt to replace the “offline” third-party data it’s pledging to phase out over the next six months? We’ll see, but we can be sure that Facebook is working on something.

    Posted by Pterrafractyl | April 8, 2018, 1:01 am
  5. Here’s a reminder that the proposal to combine Facebook data with patient hospital data – ostensibly for patient care decision-support purposes but also likely so Facebook can get its hands on patient medical record information – isn’t the only project Facebook has put ‘on pause’ (but not canceled) in the wake of the Cambridge Analytica scandal. For example, there’s a new hardware product for your home that Facebook is planning on rolling out later this year.

    It’s a “smart speaker” like the kind Amazon and Google already have on sale. A smart speaker that will sit in your home, listen to everything, answer questions and schedule things. Potentially with cameras. Your personal home assistant. That’s the market Facebook is getting into later this year. But thanks to the public relations nightmare Facebook is experiencing at the moment, the announcement of this new smart speaker at its developers conference in May has been cancelled. But it sounds like the rollout is still planned for this fall. So that smart speaker is a useful reminder to the US public and regulators of the future direction Facebook is planning on heading: in-home “offline” data collection using internet-connected smart devices:

    Bloomberg Technology

    Facebook Delays Home-Speaker Unveil Amid Data Crisis

    By Sarah Frier
    March 27, 2018, 7:34 PM CDT

    * Social network had hoped to show off devices at F8 in May
    * Company still plans to launch products later this year

    Facebook Inc. has decided not to unveil new home products at its major developer conference in May, in part because the public is currently so outraged about the social network’s data-privacy practices, according to people familiar with the matter.

    The company’s new hardware products, connected speakers with digital-assistant and video-chat capabilities, are undergoing a deeper review to ensure that they make the right trade-offs regarding user data, the people said. While the hardware wasn’t expected to be available until the fall, the company had hoped to preview the devices at the largest annual gathering of Facebook developers, said the people, who asked not to be named discussing internal plans.

    The devices are part of Facebook’s plan to become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers. As concerns escalate about Facebook’s collection and use of personal data, now may be the wrong time to ask consumers to trust it with even more information by placing a connected device in their homes. A Facebook spokeswoman declined to comment.

    The social-media company had already found in focus-group testing that users were concerned about a Facebook-branded device in their living rooms, given how much intimate data the social network collects. Facebook still plans to launch the devices later this year.

    At the developer conference, set for May 1, the company will also need to explain new, more restrictive rules around what kinds of information app makers can collect on their users via Facebook’s service. The Menlo Park, California-based company said in a blog post this week that for developers, the changes “are not easy,” but are important to “mitigate any breach of trust with the broader developer ecosystem.”

    ———-

    “Facebook Delays Home-Speaker Unveil Amid Data Crisis” by Sarah Frier; Bloomberg Technology; 03/27/2018

    “Facebook Inc. has decided not to unveil new home products at its major developer conference in May, in part because the public is currently so outraged about the social network’s data-privacy practices, according to people familiar with the matter.”

    Yeah, it’s understandable that public outrage over years of deceptive and systemic mass privacy violations might complicate the roll out of your new in-home “smart speakers” which will be listening to everything happening in your home and sending that information back to Facebook. A pause on that grand unveiling does seem prudent.

    And yet Facebook still plans to actually launch its new smart speakers later this year:


    The social-media company had already found in focus-group testing that users were concerned about a Facebook-branded device in their living rooms, given how much intimate data the social network collects. Facebook still plans to launch the devices later this year.

    And that planned roll out of these smart speakers later this year is just one element of Facebook’s plan to “become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers”:


    The company’s new hardware products, connected speakers with digital-assistant and video-chat capabilities, are undergoing a deeper review to ensure that they make the right trade-offs regarding user data, the people said. While the hardware wasn’t expected to be available until the fall, the company had hoped to preview the devices at the largest annual gathering of Facebook developers, said the people, who asked not to be named discussing internal plans.

    The devices are part of Facebook’s plan to become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers. As concerns escalate about Facebook’s collection and use of personal data, now may be the wrong time to ask consumers to trust it with even more information by placing a connected device in their homes. A Facebook spokeswoman declined to comment.

    “The devices are part of Facebook’s plan to become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers.”

    Yep, Facebook has all sorts of plans to become more intimately involved with your everyday life. Using artificial intelligence. And smart speakers. And no privacy concerns, of course.

    And in fairness, this move to sell consumer devices that monitor you, in order to offer useful services with the data they collect (and to sell you ads and profile you), is merely following in the footsteps of companies like Google and Amazon with their wildly popular smart speakers. As the following article notes, a recent Gallup poll found that 22 percent of Americans use “home personal assistants” like Google Home or Amazon Echo. That is a huge share of the American public already handing out exactly the kind of data Facebook is trying to collect with its new smart speaker.

    And as the following article also notes, if the creepy patents Google and Amazon have already filed are any indication of what we can expect from Facebook, we should expect Facebook to work on things like incorporating smart speakers into smart-home AI systems for monitoring children, complete with whisper-detection capabilities and the ability to issue verbal commands at the kids. The smart home would replace the television as the technological parent of today’s kids, and one of the mega-corporations selling this technology would get audio and visual access to your home. Yes, the existing Google and Amazon patents would incorporate visual data too, since these smart speakers tend to have cameras.

    And one patent involves a scenario where the camera on a smart speaker recognizes a t-shirt on the floor, identifies a picture of Will Smith on the shirt, ties that to a database of the person’s browsing history to see if they have looked up Will Smith content online, and then serves up targeted ads if it finds a Will Smith hit. That’s a real patent from Google, and that’s the kind of Orwellian patent race that Facebook is quietly getting ready to join later this year:

    The New York Times

    Hey, Alexa, What Can You Hear? And What Will You Do With It?

    By SAPNA MAHESHWARI
    MARCH 31, 2018

    Amazon ran a commercial on this year’s Super Bowl that pretended its digital assistant Alexa had temporarily lost her voice. It featured celebrities like Rebel Wilson, Cardi B and even the company’s chief executive, Jeff Bezos.

    While the ad riffed on what Alexa can say to users, the more intriguing question may be what she and other digital assistants can hear — especially as more people bring smart speakers into their homes.

    Amazon and Google, the leading sellers of such devices, say the assistants record and process audio only after users trigger them by pushing a button or uttering a phrase like “Hey, Alexa” or “O.K., Google.” But each company has filed patent applications, many of them still under consideration, that outline an array of possibilities for how devices like these could monitor more of what users say and do. That information could then be used to identify a person’s desires or interests, which could be mined for ads and product recommendations.

    In one set of patent applications, Amazon describes how a “voice sniffer algorithm” could be used on an array of devices, like tablets and e-book readers, to analyze audio almost in real time when it hears words like “love,” “bought” or “dislike.” A diagram included with the application illustrated how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a Wine of the Month Club membership.

    Some patent applications from Google, which also owns the smart home product maker Nest Labs, describe how audio and visual signals could be used in the context of elaborate smart home setups.

    One application details how audio monitoring could help detect that a child is engaging in “mischief” at home by first using speech patterns and pitch to identify a child’s presence, one filing said. A device could then try to sense movement while listening for whispers or silence, and even program a smart speaker to “provide a verbal warning.”

    A separate application regarding personalizing content for people while respecting their privacy noted that voices could be used to determine a speaker’s mood using the “volume of the user’s voice, detected breathing rate, crying and so forth,” and medical condition “based on detected coughing, sneezing and so forth.”

    The same application outlines how a device could “recognize a T-shirt on a floor of the user’s closet” bearing Will Smith’s face and combine that with a browser history that shows searches for Mr. Smith “to provide a movie recommendation that displays, ‘You seem to like Will Smith. His new movie is playing in a theater near you.’”

    In a statement, Amazon said the company took “privacy seriously” and did “not use customers’ voice recordings for targeted advertising.” Amazon said that it filed “a number of forward-looking patent applications that explore the full possibilities of new technology,” and that they “take multiple years to receive and do not necessarily reflect current developments to products and services.”

    Google said it did not “use raw audio to extrapolate moods, medical conditions or demographic information.” The company added, “All devices that come with the Google Assistant, including Google Home, are designed with user privacy in mind.”

    Tech companies apply for a dizzying number of patents every year, many of which are never used and are years from even being possible.

    Still, Jamie Court, the president of Consumer Watchdog, a nonprofit advocacy group in Santa Monica, Calif., which published a study of some of the patent applications in December, said, “When you read parts of the applications, it’s really clear that this is spyware and a surveillance system meant to serve you up to advertisers.”

    The companies, Mr. Court added, are “basically going to be finding out what our home life is like in qualitative ways.”

    Google called Consumer Watchdog’s claims “unfounded,” and said, “Prospective product announcements should not necessarily be inferred from our patent applications.”

    A recent Gallup poll found that 22 percent of Americans used devices like Google Home or Amazon Echo. The growing adoption of smart speakers means that gadgets, some of which contain up to eight microphones and a camera, are being placed in kitchens and bedrooms and used to answer questions, control appliances and make phone calls. Apple recently introduced its own version, called the HomePod.

    Both Amazon and Google have emphasized that devices with Alexa and Google Assistant store voice recordings from users only after they are intentionally triggered. Amazon’s Echo and its newer smart speakers with screens use lights to show when they are streaming audio to the cloud, and consumers can view and delete their recordings on the Alexa smartphone app or on Amazon’s website (though they are warned online that doing so “may degrade” their experience). Google Home also has a light that indicates when it is recording, and users can similarly see and delete that audio online.

    Amazon says voice recordings may help fulfill requests and improve its services, while Google says the data helps it learn over time to provide better, more personalized responses.

    But the ecosystem around voice data is still evolving.

    Take the thousands of third-party apps developed for Alexa called “skills,” which can be used to play games, dim lights or provide cleaning advice. While Amazon said it didn’t share users’ actual recordings with third parties, its terms of use for Alexa say it may share the content of their requests or information like their ZIP codes. Google says it will “generally” not provide audio recordings to third-party service providers, but may send transcriptions of what people say.

    And some devices have already shown that they are capable of recording more than what users expect. Google faced some embarrassment last fall when a batch of Google Home Minis that it distributed at company events and to journalists were almost constantly recording.

    In a starker example, detectives investigating a death at an Arkansas home sought access to audio on an Echo device in 2016. Amazon resisted, but the recordings were ultimately shared with the permission of the defendant, James Bates. (A judge later dismissed Mr. Bates’s first-degree murder charge based on separate evidence.)

    Kathleen Zellner, his lawyer, said in an interview that the Echo had been recording more than it was supposed to. Mr. Bates told her that it had been regularly lighting up without being prompted, and had logged conversations that were unrelated to Alexa commands, including a conversation about football in a separate room, she said.

    “It was just extremely sloppy the way the activation occurred,” Ms. Zellner said.

    The Electronic Privacy Information Center has recommended more robust disclosure rules for internet-connected devices, including an “algorithmic transparency requirement” that would help people understand how their data was being used and what automated decisions were then being made about them.

    Sam Lester, the center’s consumer privacy fellow, said he believed that the abilities of new smart home devices highlighted the need for United States regulators to get more involved with how consumer data was collected and used.

    “A lot of these technological innovations can be very good for consumers,” he said. “But it’s not the responsibility of consumers to protect themselves from these products any more than it’s their responsibility to protect themselves from the safety risks in food and drugs. It’s why we established a Food and Drug Administration years ago.”

    ———–

    “Hey, Alexa, What Can You Hear? And What Will You Do With It?” by SAPNA MAHESHWARI; The New York Times; 03/31/2018

    “While the ad riffed on what Alexa can say to users, the more intriguing question may be what she and other digital assistants can hear — especially as more people bring smart speakers into their homes.”

    It’s one of the conundrums of the smart speaker business model: it’s obvious these smart speaker manufacturers would love to just collect all the information they can about what people are saying and doing, but they need to maintain the pretense of not doing that in order to get people to buy their devices. So it’s no surprise that Google and Amazon routinely make it clear that their devices are only recording information after they’ve been activated by the users. But as these patents make clear, there are all sorts of home life surveillance applications that these companies have in mind. Like the smart home child monitoring system, with whisper detection capabilities and mischief-detecting AI capabilities:


    Amazon and Google, the leading sellers of such devices, say the assistants record and process audio only after users trigger them by pushing a button or uttering a phrase like “Hey, Alexa” or “O.K., Google.” But each company has filed patent applications, many of them still under consideration, that outline an array of possibilities for how devices like these could monitor more of what users say and do. That information could then be used to identify a person’s desires or interests, which could be mined for ads and product recommendations.

    In one set of patent applications, Amazon describes how a “voice sniffer algorithm” could be used on an array of devices, like tablets and e-book readers, to analyze audio almost in real time when it hears words like “love,” “bought” or “dislike.” A diagram included with the application illustrated how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a Wine of the Month Club membership.

    Some patent applications from Google, which also owns the smart home product maker Nest Labs, describe how audio and visual signals could be used in the context of elaborate smart home setups.

    One application details how audio monitoring could help detect that a child is engaging in “mischief” at home by first using speech patterns and pitch to identify a child’s presence, one filing said. A device could then try to sense movement while listening for whispers or silence, and even program a smart speaker to “provide a verbal warning.”

    “One application details how audio monitoring could help detect that a child is engaging in “mischief” at home by first using speech patterns and pitch to identify a child’s presence, one filing said. A device could then try to sense movement while listening for whispers or silence, and even program a smart speaker to “provide a verbal warning.”

    Listening for the mischievous whispers of children and issuing a verbal warning. Those are the kinds of capabilities companies like Google, Amazon, and now Facebook are going to be investing in. And it will probably be very popular, because a smart-home system that literally watches the kids would be a very handy tool for parents. But it’s going to come at the cost of opening up our homes to monitoring by one of these data giants. And that’s insane, right?

    Another patent noted how the smart speakers could detect medical conditions from your voice, like coughing, sneezing, and breathing rate. And that’s just an example of the kind of personal data these devices are clearly capable of gathering, and they’re only going to get better at it:


    A separate application regarding personalizing content for people while respecting their privacy noted that voices could be used to determine a speaker’s mood using the “volume of the user’s voice, detected breathing rate, crying and so forth,” and medical condition “based on detected coughing, sneezing and so forth.”

    The same application outlines how a device could “recognize a T-shirt on a floor of the user’s closet” bearing Will Smith’s face and combine that with a browser history that shows searches for Mr. Smith “to provide a movie recommendation that displays, ‘You seem to like Will Smith. His new movie is playing in a theater near you.’”

    “The same application outlines how a device could “recognize a T-shirt on a floor of the user’s closet” bearing Will Smith’s face and combine that with a browser history that shows searches for Mr. Smith “to provide a movie recommendation that displays, ‘You seem to like Will Smith. His new movie is playing in a theater near you.’””

    The smart speaker camera is going to interface things it sees in your home with your browser history. For ad targeting. That’s a patent.

    It’s why the warning from Consumer Watchdog’s Jamie Court that these consumer home devices are really just home-life spyware should be heeded. Because it’s pretty obvious that the plan is to turn these things into home-activity monitoring devices. And with 22 percent of Americans telling a recent Gallup poll that they use a “home personal assistant,” that really does make the coming era of smart-device home monitoring a public privacy nightmare:


    Still, Jamie Court, the president of Consumer Watchdog, a nonprofit advocacy group in Santa Monica, Calif., which published a study of some of the patent applications in December, said, “When you read parts of the applications, it’s really clear that this is spyware and a surveillance system meant to serve you up to advertisers.”

    The companies, Mr. Court added, are “basically going to be finding out what our home life is like in qualitative ways.”

    Google called Consumer Watchdog’s claims “unfounded,” and said, “Prospective product announcements should not necessarily be inferred from our patent applications.”

    A recent Gallup poll found that 22 percent of Americans used devices like Google Home or Amazon Echo. The growing adoption of smart speakers means that gadgets, some of which contain up to eight microphones and a camera, are being placed in kitchens and bedrooms and used to answer questions, control appliances and make phone calls. Apple recently introduced its own version, called the HomePod.

    Of course, both Google and Amazon assure us that their devices are only recording audio after they’re triggered. And it’s only being used to improve the user experience and make it more personalized:


    Both Amazon and Google have emphasized that devices with Alexa and Google Assistant store voice recordings from users only after they are intentionally triggered. Amazon’s Echo and its newer smart speakers with screens use lights to show when they are streaming audio to the cloud, and consumers can view and delete their recordings on the Alexa smartphone app or on Amazon’s website (though they are warned online that doing so “may degrade” their experience). Google Home also has a light that indicates when it is recording, and users can similarly see and delete that audio online.

    Amazon says voice recordings may help fulfill requests and improve its services, while Google says the data helps it learn over time to provide better, more personalized responses.

    And while Google assures us those voice recordings will only be used to personalize the experience, Google’s user agreement includes the possibility of sending transcripts of what people say to third-party service providers. And it will only “generally” refrain from sending audio recordings to those third-party providers. It’s an example of how little audio and visual snippets of people’s home lives are becoming the new “mouse click” of consumer data, collected and sold in exchange for a digital service:


    Take the thousands of third-party apps developed for Alexa called “skills,” which can be used to play games, dim lights or provide cleaning advice. While Amazon said it didn’t share users’ actual recordings with third parties, its terms of use for Alexa say it may share the content of their requests or information like their ZIP codes. Google says it will “generally” not provide audio recordings to third-party service providers, but may send transcriptions of what people say.

    And it’s not like these patents are necessarily only future privacy nightmares. They’re potentially present privacy nightmares, if these devices are actually collecting data all the time in secret. And in a number of documented cases that’s exactly what happened, including a murder case partially solved by an Amazon Echo with a propensity to start recording randomly:


    And some devices have already shown that they are capable of recording more than what users expect. Google faced some embarrassment last fall when a batch of Google Home Minis that it distributed at company events and to journalists were almost constantly recording.

    In a starker example, detectives investigating a death at an Arkansas home sought access to audio on an Echo device in 2016. Amazon resisted, but the recordings were ultimately shared with the permission of the defendant, James Bates. (A judge later dismissed Mr. Bates’s first-degree murder charge based on separate evidence.)

    Kathleen Zellner, his lawyer, said in an interview that the Echo had been recording more than it was supposed to. Mr. Bates told her that it had been regularly lighting up without being prompted, and had logged conversations that were unrelated to Alexa commands, including a conversation about football in a separate room, she said.

    “It was just extremely sloppy the way the activation occurred,” Ms. Zellner said.

    And that’s all why better consumer regulation in this area really is called for, because there’s no way consumers can realistically navigate this technological landscape:


    Sam Lester, the center’s consumer privacy fellow, said he believed that the abilities of new smart home devices highlighted the need for United States regulators to get more involved with how consumer data was collected and used.

    “A lot of these technological innovations can be very good for consumers,” he said. “But it’s not the responsibility of consumers to protect themselves from these products any more than it’s their responsibility to protect themselves from the safety risks in food and drugs. It’s why we established a Food and Drug Administration years ago.”

    And that’s one of the big questions that really should be asked in the wake of the Cambridge Analytica scandal: does the US need something like a Food and Drug Administration for data privacy, something far more substantial than the regulatory infrastructure that exists today and dedicated to ensuring transparency of data collection practices? It seems like the answer is obviously yes. And if the Cambridge Analytica scandal isn’t evidence enough, those Orwellian patents should suffice.

    And as the Cambridge Analytica scandal also reminds us, we can either wait for the data abuses to happen and only belatedly deal with the problem or we can deal with it proactively. And dealing with it proactively realistically involves something like an FDA for data privacy.

    But as we also just saw with those creepy patents, especially the child monitoring/scolding patent, consumers have much more than data privacy concerns with the world of smart devices Google and Facebook and Amazon have in mind. That future is going to involve devices that are literally raising the kids. Move over television, it’s parenting brought to you by smart home AIs and Silicon Valley.

    And let’s also not forget one of the other lessons that we can take from the Cambridge Analytica scandal: the data collected by these smart devices isn’t just going to be collected by Google and Facebook and Amazon. Some of that data is going to be collected by all the third-party app developers too. Home life, brought to you by Google/Facebook/Amazon. That’s going to be a thing.

    At the same time it’s undeniable that there will be very positive applications for this kind of technology. And that’s why it’s such a shame that companies with the track records of Facebook, Google, and Amazon are the ones leading this kind of technological revolution: like much technology, consumer smart-home devices rely heavily on trust in the manufacturer, and trust that the manufacturer won’t screw things up and turn its device into a privacy nightmare. That’s not the kind of situation where you want Google, Facebook, and Amazon leading the way.

    So that’s all something to keep in mind when Facebook doesn’t talk about its upcoming smart speakers at its annual developers conference next month.

    Posted by Pterrafractyl | April 8, 2018, 9:32 pm
  6. Here’s a fascinating angle to the Cambridge Analytica scandal that involves an Eastern Ukrainian politician with pro-EU leanings and ties to Yulia Tymoshenko and the Azov Battalion:

    It turns out Cambridge Analytica outsourced the production of its “Ripon” psychological profiling software to a separate company, AggregateIQ (AIQ). AIQ was founded by Cambridge Analytica co-founder/whistle-blower Christopher Wylie, so it’s basically a subsidiary of Cambridge Analytica. But they were technically separate companies and it turns out that AIQ could end up playing a big role in an investigation into whether or not UK election laws were violated by the “Vote Leave” camp during the lead up to the Brexit vote.

    It looks like the “Vote Leave” camp basically and secretly spent more than it legally could, using AIQ as the vehicle for doing so. Here’s how it worked: there was the official “leave” political campaign, but there were also third-party pro-leave campaigns. One of those was Leave.EU. In 2016, Robert Mercer offered Leave.EU the services of Cambridge Analytica for free, and Leave.EU relied on Cambridge Analytica’s services for its voter influence campaign.

    The official Vote Leave campaign, on the other hand, relied on AIQ for its data analytics services. Vote Leave eventually paid AIQ roughly 40 percent of its £7 million campaign budget. Here’s where the illegality came in: Vote Leave also ended up gathering more cash than British law allowed it to spend. Vote Leave could legally donate that cash to other campaigns, but it couldn’t then coordinate with those campaigns. Yet that’s exactly what it looks like Vote Leave did. About a week before the EU referendum, Vote Leave inexplicably donated £625,000 to Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. Grimes then immediately gave a substantial amount of the cash he received to AIQ. Vote Leave also donated £100,000 to another Leave campaign called Veterans for Britain, which then paid AIQ precisely that amount. So Vote Leave was basically using these small ‘leave’ groups as campaign money laundering vehicles, with AIQ as the final destination of that money.

    That’s all why AIQ is now the focus of British investigators. AIQ’s role in this came to light in part from thousands of pages of code discovered by a cybersecurity researcher at UpGuard on the web page of a developer named Ali Yassine, who worked for SCL Group. Within the code are notes showing that SCL had requested the code be turned over by AIQ’s lead developer, Koji Hamid Pourseyed.

    AIQ’s contract with SCL stipulates that SCL is the sole owner of “Ripon”, Cambridge Analytica’s campaign platform. The documents also include an internal wiki where AIQ developers discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” It’s a reminder that the story of Cambridge Analytica isn’t just a story about the Trump campaign or the Brexit vote. It’s also about the Republican Party’s political analytics in general.

    Also included in the discovered AIQ files were notes related to active projects for Ted Cruz, Greg Abbott, and a Ukrainian oligarch, Sergei Taruta.

    So who is Sergei Taruta? Well, he’s a Ukrainian billionaire and co-founder of the Industrial Union of Donbass, one of the largest companies in Ukraine. He was appointed governor of the Donetsk Oblast in Eastern Ukraine by Petro Poroshenko in March of 2014 before being fired in October of 2014.

    Taruta went on to get elected to parliament, where he remains today. He recently co-founded the “Osnova” political party, which describes itself as populist and a promoter of “liberal conservatism” (presumably “liberal” in the libertarian sense). It’s suspected by some that Rinat Akhmetov, Ukraine’s wealthiest oligarch and another Eastern Ukrainian who straddles the line between backing the Kiev government and maintaining friendly ties with the pro-Russian segments of Eastern Ukraine, is also one of the party’s backers. Akhmetov was a significant backer of Yanukovych’s Party of Regions and is a dominant figure in the Opposition Bloc today. It was Akhmetov who initially hired Paul Manafort back in 2005 to act as a political consultant.

    It’s reportedly pretty clear that Taruta’s Osnova party is designed to splinter away ex-supporters of Viktor Yanukovych’s Party of Regions, based on the politicians who have already declared they are going to join it. And yet as a politician Taruta is characterized as having never really tried to cozy up to the pro-Russian side, with a history of supporting pro-EU politicians. In 2006 he supported Viktor Yuschenko over Viktor Yanukovych. In 2010 he backed Yulia Tymoshenko over Viktor Yanukovych.

    So Taruta is a pro-EU Eastern Ukrainian politician, which is notable because he’s not the only pro-EU Eastern Ukrainian politician to be involved with entities and figures in the #TrumpRussia orbit. Don’t forget about Andreii Artemenko, the Ukrainian politician who was involved with that ‘peace plan’ proposal with Michael Cohen and Felix Sater – a proposal that may have been part of a broader offer made to Russia covering Ukraine as well as Syria and Iran – and how Artemenko was a pro-EU member of the far right “Radical Party” who also has ties to Right Sector. Artemenko headed up the Kiev department of Yulia Tymoshenko’s Batkivshchyna party back in 2006 and was serving in a coalition headed by Tymoshenko.

    Also recall that the figure who appears to have arranged the initial contact between Andreii Artemenko and Michael Cohen and Felix Sater was Alexander Oronov, the father-in-law of Michael Cohen’s brother. And Oronov himself co-owned an ethanol plant with Viktor Topolov, another Ukrainian oligarch who was Viktor Yuschenko’s coal minister and who became an assassination target of Semion Mogilevych’s mafia organization. One of Topolov’s partners who was also targeted by Mogilevych, Slava Konstantinovsky, ended up forming and joining one of the “volunteer battalions” fighting the separatists in the East.

    So now we learn that AIQ (so, basically Cambridge Analytica) is doing some sort of work for Sergei Taruta, putting another Eastern Ukrainian oligarch politician with pro-EU leanings in the orbit of this #TrumpRussia scandal.

    So what kind of work did AIQ do for Taruta? That’s unclear, though it seems reasonable to assume it’s work involving Taruta’s new party in Ukraine and its attempts to splinter off former Party of Regions voters.

    But as we’re also going to see, Sergei Taruta has been doing some lobbying work in Washington DC. Rather curious lobbying work: it turns out Taruta was at the center of a bizarre ‘congressional hearing’ that took place in the US Capitol last September. The hearing focused on corruption allegations Taruta has been promoting for over a year against the National Bank of Ukraine, the country’s central bank.

    There were two Ukrainian television stations covering the event and presenting it as if it were a real congressional hearing. Former CIA director James Woolsey, who was briefly part of the Trump campaign, was also at the event, along with former Republican House member Connie Mack, who is now a lobbyist. Mack basically pretended to speak on behalf of the US Congress, expressing outrage over Taruta’s corruption allegations for the Ukrainian television audiences along with his resolve to investigate them. Rep. Ron Estes, a freshman Republican, booked the room in the US Capitol for Mack and his lobbying firm. Estes’s office later said it won’t happen again.

    And there’s another twist to this strange attack on the National Bank of Ukraine: According to Vox Ukraine, a number of the criticisms Taruta brings against the bank are based on distortions and half-truths. In other words, it doesn’t appear to be a genuine anti-corruption campaign. So what is Taruta’s motivation? Well, it’s notable that his criticism of the National Bank of Ukraine extends back to the actions of its previous chair, Valeriya Gontareva (Hontareva). Gontareva was appointed chairman of the bank in June of 2014. And one of her first big moves was the government takeover of Ukraine’s biggest commercial bank, Privatbank. Privatbank was co-founded by Ihor Kolomoisky, another Eastern Ukrainian oligarch.

    Ihor Kolomoisky was appointed governor of the Eastern oblast of Dnipropetrovsk at the same time Taruta was appointed governor of Donetsk. Kolomoisky has been supporting the Kiev government in the civil war by financially supporting a number of the volunteer battalions, including directly creating the large private Dnipro Battalion. As we’ll see, both Kolomoisky and Taruta reportedly supported the neo-Nazi Azov Battalion according to a 2015 Reuters report. In other words, Kolomoisky is an Eastern Ukrainian oligarch with ties to the far right, kind of like Andreii Artemenko.

    Kolomoisky wasn’t happy about the takeover of Privatbank. When Gontareva presided over the bank’s nationalization, its accounts were missing more than $5 billion in large part because the bank lent so much money to people with connections to Kolomoisky. After the bank takeover, Gontareva received numerous threats. On April 10, 2017, she announced at a press conference that she was resigning from her post.

    So it looks like Sergei Taruta might be waging an international PR battle against the National Bank of Ukraine as part of a counter-move on behalf of Ihor Kolomoisky and the Privatbank investors.

    And then there’s the person who actually organized this fake congressional hearing. A little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt. Motkin is the founder and president of StrategEast, a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.”

    That’s all some new context to factor into the analysis of Cambridge Analytica and the forces it was working for: one of its clients is a pro-EU Eastern Ukrainian oligarch who just set up a political party designed to appeal to former Yanukovych supporters.

    Ok, so first, let’s look at the story of Cambridge Analytica and AggregateIQ (AIQ), the Cambridge Analytica offshoot that both developed the GOP’s “Ripon” analytics software and acted as the analytics firm for the Vote Leave campaign. And the work AIQ was doing for Vote Leave was apparently so valuable that Vote Leave secretly laundered almost a million pounds through two smaller ‘leave’ groups in order to get that money to AIQ and secretly exceed the legal spending caps. And that’s why the discovery of thousands of AIQ documents by a cybersecurity firm is so politically significant in the UK right now. But as those documents also reveal, AIQ was doing work for other clients: Texas Governor Greg Abbott, Texas Senator Ted Cruz, and Ukrainian oligarch Sergei Taruta:

    Gizmodo

    AggregateIQ Created Cambridge Analytica’s Election Software, and Here’s the Proof

    Dell Cameron
    3/26/18 12:50pm

    A little-known Canadian data firm ensnared by an international investigation into alleged wrongdoing during the Brexit campaign created an election software platform marketed by Cambridge Analytica, according to a batch of internal files obtained exclusively by Gizmodo.

    Discovered by a security researcher last week, the files confirm that AggregateIQ, a British Columbia-based data firm, developed the technology Cambridge Analytica sold to clients for millions of dollars during the 2016 US presidential election. Hundreds if not thousands of pages of code, as well as detailed notes signed by AggregateIQ staff, wholly substantiate recent reports that Cambridge Analytica’s software platform was not its own creation.

    What’s more, the files reveal that AggregateIQ—also known as “AIQ”—is the developer behind campaign apps created for Texas Senator Ted Cruz and Texas Governor Greg Abbott, as well as a Ukrainian steel magnate named Serhiy Taruta, head of the country’s newly formed Osnova party.

    Other records show the firm once pitched an app to Breitbart News, the far-right website funded by hedge-fund billionaire Robert Mercer—Cambridge Analytica’s principal investor—and are currently contracted by WPA Intelligence, a US-based consultancy founded by Republican pollster Chris Wilson, who was director of digital strategy for Cruz’s 2016 presidential campaign.

    The files were unearthed last week by Chris Vickery, research director at UpGuard, a California-based cyber risk firm. On Sunday night, after Gizmodo reached out to Jeff Silvester, co-founder of AIQ, the files were quickly taken offline.

    The files are likely to draw the interest of investigators on both sides of the Atlantic. Canadian and British regulators are currently pursuing leads to establish whether multiple “Leave” campaigns illegally coordinated during the 2016 EU referendum.

    Ties between AIQ and Cambridge Analytica—the focus of recent widespread furor over the misuse of data pulled from 50 million Facebook accounts—has likewise drawn the interest of US and British authorities. Cambridge CEO Alexander Nix was suspended by his company last week after British reporters published covertly recorded footage showing Nix boasting about bribing and blackmailing political rivals.

    Cambridge Analytica did not respond to a request for comment.

    AIQ is bound by a non-disclosure agreement the company signed in 2014 to take on former client SCL Group, Cambridge Analytica’s parent company, according to a source with direct knowledge of the contract.

    In an interview over the weekend with London’s The Observer, Christopher Wylie, the former Cambridge Analytica employee turned whistleblower, claimed that he helped establish AIQ years ago in an effort to help SCL Group expand its data operations. Silvester denied that Wylie was ever involved on that level, but admits that Wylie helped AIQ land its first big contract.

    “We did some work with SCL and had a contract with them in 2014 for some custom software development,” Silvester told Gizmodo. “We last worked with SCL in 2016 and have not worked with them since.”

    Data Exposed

    UpGuard first discovered code belonging to AIQ last Thursday on the web page of a developer named Ali Yassine who worked for SCL Group. Within the code—uploaded to the website GitHub in August 2016—are notes that show SCL had requested that code be turned over by AIQ’s lead developer, Koji Hamid Pourseyed.

    AIQ’s contract with SCL, a portion of which was published by The Guardian last year, stipulates that SCL is the sole owner of the intellectual property pertaining to the contract—namely, the development of Ripon, Cambridge Analytica’s campaign platform.

    The find led UpGuard to unearth a code repository on AIQ’s website. Within it were countless files linking AIQ to the Ripon program, as well as notes related to active projects for Cruz, Abbott, and the Ukrainian oligarch.

    In an internal wiki, AIQ developers also discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sorces, including starting with the RNC Data Trust.” (RNC Data Trust is the Republican party’s primary voter file provider.) “The primary data source will be combined with state voter files, consumer data, third party data providers, historical WPA survey and projects and customer data.”

    The Database of Truth, according to the wiki, is a project under development for WPA Intelligence.

    Wilson told Gizmodo on Monday that he has almost no knowledge of the controversy unfolding over AIQ’s role in the UK. “I would never work with a firm that I felt had done something illegal or even unethical,” he said. AIQ’s work for WPA followed a competitive bid process, he said. “They offered us the best capabilities for the best price.”

    Vaporware

    Until recently, Cambridge Analytica operated largely in the shadows. For years, it planned to target right-leaning voters for a host of high-profile political campaigns, working for both Cruz and President Donald Trump. With its billionaire backing, the firm promised to leverage oceans of data collected about voters—which we now know was acquired from sources both legal and unauthorized.

    Known as Project Ripon, Cambridge Analytica’s goal was to furnish Republican candidates with a technology platform capable of reaching voters through the use of psychological profiling. (SCL Group has long used behavioral research to conduct “influence operations” on behalf of military and political clients worldwide.)

    Cambridge Analytica, which eventually chose AIQ to help build its platform, once boasted that it possessed files on as many as 230 million Americans compiled from thousands of data points, including psychological data harvested from social media, as well as commercial data available to virtually anyone who can afford it. The company intended to classify voters by select personality types, applying its system to craft messages, online ads, and mailers that, it believed, would resonate distinctively with voters of each group.

    Sources within the Cruz campaign, which largely funded Ripon’s development, claim the software never actually functioned. One former staffer told Gizmodo the product was nothing but “vaporware.”

    AIQ’s internal files show the company had unlimited access to the Ripon code, and a source within the Cruz campaign confirmed to Gizmodo that AIQ was solely responsible for the software’s development.

    The campaign eventually dumped more than $5.8 million into Ripon’s development—which is only about half the amount Robert Mercer, Cambridge Analytica’s principal investor, poured into Cruz’s White House bid. (After Trump took the nomination, Mercer contributed more than $15.5 million to his campaign, $5 million of which ended up back in Cambridge Analytica’s pockets.)

    A former Cruz aide, who requested anonymity to discuss their work for the campaign, told Gizmodo that even at the highest levels, no one knew that Cambridge Analytica had outsourced Ripon’s development. “Ultimately, I found out through some emails that they’re not even doing this work,” the source said. “It was being outsourced to AIQ.”

    According to the aide, when Cruz’s staff began to question AIQ over whether it was behind Ripon’s development, AIQ confirmed that it was, but said it was never supposed to discuss its work.

    The Brexit

    In 2016, Mercer reportedly offered up Cambridge Analytica’s services for free to Leave.EU, one of several groups urging the UK to depart the European Union, according to The Guardian. Leave.EU was not, however, the official “Leave” group representing the Brexit campaign. Instead, a separate group, known as Vote Leave, was formally chosen by election officials to lead the referendum.

    Whereas Leave.EU relied on Cambridge to influence voters through its use of data analytics, Vote Leave turned to AIQ, eventually paying the firm roughly 40 percent of its £7 million campaign budget, according to The Guardian. Over time, however, Vote Leave amassed more cash than it was legally allowed to spend. While UK election laws permitted Vote Leave to gift its remaining funds to other campaigns, further coordination between them was expressly forbidden.

    Roughly a week before the EU referendum, Vote Leave inexplicably donated £625,000 to a young fashion design student named Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. According to a BuzzFeed investigation, Grimes immediately gave a “substantial amount” of the cash he received from Vote Leave to AIQ. Vote Leave also donated £100,000 to another Leave campaign called Veterans for Britain, which, according to The Guardian, then paid AIQ precisely that amount.

    A review of the AIQ files by UpGuard’s Chris Vickery revealed several mentions of Vote Leave and at least one mention of Veterans for Britain, apparently related to website development.

    In an interview on Monday, Shahmir Sanni, a former volunteer for Vote Leave campaign, told The Globe and Mail that he had “first-hand knowledge about the alleged wrongdoing in the Brexit campaign.” Sanni, who was 22 when he worked for Vote Leave, said he was “encouraged to spin out” another campaign, but that he had “no control” over the £625,000 that was immediately spent on AIQ’s services.

    British authorities are pursuing leads to establish whether BeLeave and Veterans for Britain were merely a conduit through which Vote Leave sought to direct additional funds to AIQ. While the UK Electoral Commission took no action in early 2017, in November it claimed that “new information” had “come to light,” giving the commission “reasonable grounds to suspect an offence may have been committed.”

    In an email, the UK Election Commission told Gizmodo its investigation into Vote Leave payments was ongoing.

    ———-

    “AggregateIQ Created Cambridge Analytica’s Election Software, and Here’s the Proof” by Dell Cameron; Gizmodo; 3/26/2018

    “A little-known Canadian data firm ensnared by an international investigation into alleged wrongdoing during the Brexit campaign created an election software platform marketed by Cambridge Analytica, according to a batch of internal files obtained exclusively by Gizmodo.”

    As we can see, AIQ was the under-the-radar SCL subsidiary that actually created “Ripon”, the political modeling software Cambridge Analytica was offering to clients. Cambridge Analytica co-founder/whistle-blower Christopher Wylie helped SCL found the firm. AIQ co-founder Jeff Silvester also admits that Wylie helped AIQ land its first big contract but asserts that Wylie was never closely involved with the company. And Silvester also admits that the company had a contract with SCL in 2014 but hasn’t worked with SCL since 2016. So AIQ is officially acting like it’s not really an SCL offshoot at this point:


    The files were unearthed last week by Chris Vickery, research director at UpGuard, a California-based cyber risk firm. On Sunday night, after Gizmodo reached out to Jeff Silvester, co-founder of AIQ, the files were quickly taken offline.

    In an interview over the weekend with London’s The Observer, Christopher Wylie, the former Cambridge Analytica employee turned whistleblower, claimed that he helped establish AIQ years ago in an effort to help SCL Group expand its data operations. Silvester denied that Wylie was ever involved on that level, but admits that Wylie helped AIQ land its first big contract.

    “We did some work with SCL and had a contract with them in 2014 for some custom software development,” Silvester told Gizmodo. “We last worked with SCL in 2016 and have not worked with them since.”

    And based on AIQ’s contract with SCL, we have a better idea of when exactly AIQ’s work with SCL ended in 2016: the code found by UpGuard was uploaded to the code-repository website GitHub in August of 2016. That suggests that was the point when the code was effectively handed off from AIQ to SCL. And August of 2016, it’s important to recall, is the same month that Steve Bannon, a Cambridge Analytica company officer – and “the boss” according to Wylie – went to work as campaign manager of the Trump campaign. So you have to wonder if that’s a coincidence or a reflection of concerns over this SCL/Cambridge Analytica/AIQ nexus getting some unwanted attention:


    Data Exposed

    UpGuard first discovered code belonging to AIQ last Thursday on the web page of a developer named Ali Yassine who worked for SCL Group. Within the code—uploaded to the website GitHub in August 2016—are notes that show SCL had requested that code be turned over by AIQ’s lead developer, Koji Hamid Pourseyed.

    AIQ’s contract with SCL, a portion of which was published by The Guardian last year, stipulates that SCL is the sole owner of the intellectual property pertaining to the contract—namely, the development of Ripon, Cambridge Analytica’s campaign platform.

    And in those discovered AIQ documents are notes on projects AIQ was doing for Cruz, Abbott and Taruta. Along with notes on a project for the GOP called The Database of Truth:


    The find led UpGuard to unearth a code repository on AIQ’s website. Within it were countless files linking AIQ to the Ripon program, as well as notes related to active projects for Cruz, Abbott, and the Ukrainian oligarch.

    In an internal wiki, AIQ developers also discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sorces, including starting with the RNC Data Trust.” (RNC Data Trust is the Republican party’s primary voter file provider.) “The primary data source will be combined with state voter files, consumer data, third party data providers, historical WPA survey and projects and customer data.”

    The Database of Truth, according to the wiki, is a project under development for WPA Intelligence.

    AIQ is making the GOP a “Database of Truth”. Great.

    And that sounds like a separate system from Ripon. The Database of Truth appears to focus on the kind of data found in data brokerages – state voter files, consumer data, third-party data providers, etc. – whereas the Ripon software appeared to be specifically focused on the kind of psychological profiling Cambridge Analytica specialized in:


    Vaporware

    Until recently, Cambridge Analytica operated largely in the shadows. For years, it planned to target right-leaning voters for a host of high-profile political campaigns, working for both Cruz and President Donald Trump. With its billionaire backing, the firm promised to leverage oceans of data collected about voters—which we now know was acquired from sources both legal and unauthorized.

    Known as Project Ripon, Cambridge Analytica’s goal was to furnish Republican candidates with a technology platform capable of reaching voters through the use of psychological profiling. (SCL Group has long used behavioral research to conduct “influence operations” on behalf of military and political clients worldwide.)

    Cambridge Analytica, which eventually chose AIQ to help build its platform, once boasted that it possessed files on as many as 230 million Americans compiled from thousands of data points, including psychological data harvested from social media, as well as commercial data available to virtually anyone who can afford it. The company intended to classify voters by select personality types, applying its system to craft messages, online ads, and mailers that, it believed, would resonate distinctively with voters of each group.

    And as we’ve heard from the Trump campaign, with its assertions that the Cambridge Analytica software wasn’t actually very useful, the Cruz campaign is also calling this Ripon software just “vaporware”. Denials of the effectiveness of Cambridge Analytica’s psychological profiling methods have been one of the across-the-board assertions we’ve seen from the people involved with this story:


    Sources within the Cruz campaign, which largely funded Ripon’s development, claim the software never actually functioned. One former staffer told Gizmodo the product was nothing but “vaporware.”

    AIQ’s internal files show the company had unlimited access to the Ripon code, and a source within the Cruz campaign confirmed to Gizmodo that AIQ was solely responsible for the software’s development.

    The campaign eventually dumped more than $5.8 million into Ripon’s development—which is only about half the amount Robert Mercer, Cambridge Analytica’s principal investor, poured into Cruz’s White House bid. (After Trump took the nomination, Mercer contributed more than $15.5 million to his campaign, $5 million of which ended up back in Cambridge Analytica’s pockets.)

    And while everyone involved with Cambridge Analytica has been claiming it’s largely useless, it’s hard to ignore the Brexit scandal that involved Vote Leave using two outside groups to launder almost a million pounds to AIQ for AIQ’s analytics services in excess of the legal spending caps. That’s quite a vote of confidence by Vote Leave:


    The Brexit

    In 2016, Mercer reportedly offered up Cambridge Analytica’s services for free to Leave.EU, one of several groups urging the UK to depart the European Union, according to The Guardian. Leave.EU was not, however, the official “Leave” group representing the Brexit campaign. Instead, a separate group, known as Vote Leave, was formally chosen by election officials to lead the referendum.

    Whereas Leave.EU relied on Cambridge to influence voters through its use of data analytics, Vote Leave turned to AIQ, eventually paying the firm roughly 40 percent of its £7 million campaign budget, according to The Guardian. Over time, however, Vote Leave amassed more cash than it was legally allowed to spend. While UK election laws permitted Vote Leave to gift its remaining funds to other campaigns, further coordination between them was expressly forbidden.

    Roughly a week before the EU referendum, Vote Leave inexplicably donated £625,000 to a young fashion design student named Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. According to a BuzzFeed investigation, Grimes immediately gave a “substantial amount” of the cash he received from Vote Leave to AIQ. Vote Leave also donated £100,000 to another Leave campaign called Veterans for Britain, which, according to The Guardian, then paid AIQ precisely that amount.

    A review of the AIQ files by UpGuard’s Chris Vickery revealed several mentions of Vote Leave and at least one mention of Veterans for Britain, apparently related to website development.

    In an interview on Monday, Shahmir Sanni, a former volunteer for Vote Leave campaign, told The Globe and Mail that he had “first-hand knowledge about the alleged wrongdoing in the Brexit campaign.” Sanni, who was 22 when he worked for Vote Leave, said he was “encouraged to spin out” another campaign, but that he had “no control” over the £625,000 that was immediately spent on AIQ’s services.

    As we can see, AIQ is an important entity in terms of understanding the broader scope of the kind of work and clients this SCL/Cambridge Analytica/Bannon/Mercer political influence project was undertaking. AIQ is critical for understanding the extent of the role this influence network played in the Brexit vote but also important for showing the other kinds of clients this network was taking on. Like Sergei Taruta.

    Now let’s take a closer look at Taruta with this Ukrainian Week profile from October about the creation of Taruta’s new Osnova political party. Many suspect Rinat Akhmetov of the Opposition Bloc is behind Taruta’s new party. But there is no evidence of that yet, and the party so far appears to be designed to appeal to former Party of Regions voters, many of whom are now Opposition Bloc voters, and Akhmetov is a major Opposition Bloc backer. So questions about Akhmetov’s involvement remain open, but it’s clear that Osnova is trying to appeal to Akhmetov’s political constituency.

    As the article also notes, Taruta has a history of supporting pro-EU politicians, including Viktor Yuschenko and Yulia Tymoshenko. And he’s never cozied up to the pro-Russian groups.

    But Taruta does have one very notable Kremlin connection: in 2010, 50%+2 shares of Taruta’s industrial conglomerate, the Industrial Union of Donbas (IUD), was bought up by Russia’s Vneshekonombank, the foreign trade bank. It is 100% state-owned and Russian Premier Dmitry Medvedev is the chair of its supervisory board. So Taruta does have a notable direct business tie with the Russian government. But as the article notes, there are no indications Taruta or his new party are taking Russian money. And based on his political history it would be surprising if he were taking Kremlin money, because he’s clearly part of the pro-European branch of Ukraine’s politics.

    So we have AIQ doing some sort of work for Sergei (Serhiy) Taruta. Is that work data analytics for Osnova? We don’t know. But it probably involves Taruta’s campaign against the National Bank of Ukraine, because Taruta is clearly very interested in waging that political fight. So interested that he staged a fake congressional hearing at the US Capitol that was broadcast on two Ukrainian television channels and sent the message that the US Congress was going to investigate Taruta’s claims about corruption at Ukraine’s central bank. So it’s possible AIQ was involved in that kind of political work too. Especially given what we know about Cambridge Analytica and SCL and their reliance on psychological warfare methods to change public opinion. A fake congressional hearing, made possible with the help of a Republican Congressman, Rep. Estes, who scheduled the room at the US Capitol, seems like exactly the kind of advice we should expect from the Cambridge Analytica people.
    The question of what exactly AIQ has been doing for Taruta would be a pretty big question given the scandal and mystery swirling around Cambridge Analytica and SCL. The fake congressional hearing makes it a much weirder question about the ultimate goals and agenda of the people behind Cambridge Analytica:

    Ukrainian Week

    Osnova: Taruta’s political foundation
    Founded this fall, Donetsk oligarch Serhiy Taruta’s Osnova or Foundation party has already started campaigning although the next Verkhovna Rada election is two years away

    Denys Kazanskyi
    18 October, 2017

    Dozens of billboards with his portrait and the party’s name and slogan have popped up in Kyiv and in the southeastern oblasts of Ukraine. Information about the new party is not readily available, however, as it is still mostly just on paper. But any oligarchic project stands a good chance of meeting the threshold requirement for gaining seats in the Rada based on a solid advertising budget, as past experience has shown.

    Short on ideology

    The Osnova site states that the party’s ideology is based on the principles of liberal conservatism. In Ukrainian politics, however, these words typically mean very little. What kind of conservatism are we talking about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s other populists, all of whom count on a fairly undemanding electoral base. In some ways, he resembles Serhiy Tihipko, who tried over and over again to enter politics as a “new face,” although he had been in politics since his days in the Dnipropetrovsk Oblast Komsomol Executive.

    Who will join the Taruta team? Whose interests will the party promote and who will be its allies? Where will its money come from? Taruta himself is a very ambiguous figure. For a long time he was seen as an untypical Donetsk homeboy: a high-profile businessman with an intelligent demeanor without any known criminal background. He also differed from the other Donetsk politicians in his political positions. He never played up to pro-Russian parties and movements, supporting, instead, pro-Ukrainian forces that were never very popular in Donbas.

    For instance, in a 2006 interview in Ukrainska Pravda, the tycoon admitted that in 2004 he had cast his ballot for Viktor Yushchenko. “My position was the European choice,” he emphasized. In that same interview he also mentioned that he liked Yulia Tymoshenko.

    In 2010, Taruta, in fact, supported Tymoshenko in her bid for the presidency. “Of the two candidates running today, only Yulia Tymoshenko will be able to effectively defend business interests and overcome corruption,” he said in February 2010. “She represents political and economic stability in Ukraine and will work in the country’s interests, not the interests of some particular business clan. Besides, Ms. Tymoshenko has well-deserved authority in the eyes of leaders in Russia and Europe, which means she will always be able to work out a deal in favor of Ukrainian business. Only with President Tymoshenko will it be possible for Ukraine to see all those promising growth plans that we have outlined with our new Russian partners.”

    Positive image, poor performance

    And so, when Taruta was appointed Governor of Donetsk Oblast in 2014, just as the anti-Ukrainian putsch began there, Ukrainians by and large saw this as something positive. Taruta seemed to be just the right candidate with the strength to resolve the situation: a local oligarch who understood the local mentality well and was oriented towards Ukraine. But it was not to be. Taruta proved to be a weak politician and was unable to get control over the situation. The local police and SBU kept sabotaging orders from above and had little interest in defending the Oblast State Administration. Unlike Ihor Kolomoyskiy in neighboring Dnipropetrovsk Oblast, Taruta either did not dare or did not want to put together pro-Ukrainian Self-Defense squads. And so the Donbas Battalion was actually formed in Dnipro, and not in the Donbas. Meanwhile in Donetsk Oblast, the advantage went to the militants almost from the start.

    After he resigned as governor, Taruta was elected to the Verkhovna Rada. Eventually, he announced the formation of his own political party. Based on information leaked in the press, it was clear from the beginning that this new party was intended to pick up the electorate of the now-defunct Party of the Regions, mostly in Ukraine’s southern and eastern oblasts. This certainly makes sense, but the problem is that there are several similar parties already busy working to win over this same electorate. The monopoly enjoyed by PR has long since collapsed. Now, voters in those regions have the Opposition Bloc or Opobloc, Vadym Rabinovych’s Za Zhyttia [For Life] Party, the forever-lurking Vidrodzhennia [Revival] founded in 2004, and Nash Krai [Our Country]. Osnova will make five in this cluster and can only hope that yet another project along the lines of the also-defunct Socialist Party doesn’t make an appearance in the run-up to the 2019 election. In this kind of situation, the chances of Osnova succeeding without forming an alliance with any of the more popular political parties are very low.

    There were rumors at one point that Taruta’s party was being supported by Rinat Akhmetov, but this is hard to confirm, one way or another, especially since relations between the two Donetsk tycoons were always strained. The chances of this being true are at most 50-50. One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed. If this is true, however, then Osnova is pretty much guaranteed a spot in the next Rada, because Akhmetov has both the money and the administrative leverage in Donetsk, Zaporizhzhia and Dnipropetrovsk Oblasts, where his businesses are located, to make sure of this.

    Filling Osnova’s ranks

    So far, it’s not obvious that Akhmetov is behind this new party of Taruta’s. Of those who have already confirmed that they will join Osnova, Akhmetov’s people are not especially evident. Right now, the party appears to be drawing people who are not especially known in Ukrainian politics. Indeed, judging from the party’s Facebook page, there are only three or four spokespersons other than Taruta.

    The PR-Russia connection

    Why Serhiy Taruta decided to put his faith in people related to the Yanukovych regime is not entirely understandable. Is this the personal initiative of the oligarch himself or is it at the request of some silent investor? It’s not clear who actually is funding the party, but it seems unlikely that Taruta is putting up his own money. Although this oligarch’s worth was estimated at over US $2 billion back in 2008, he claims today that his wealth has shrunk a thousand-fold. In an interview with Hard Talk in 2015, he announced that he had preserved only 0.1% of his former wealth.

    Which brings the story around to Taruta’s business interests. In 2010, 50%+2 shares of the Industrial Union of Donbas (IUD), founded by the oligarch, was bought up by Russia’s Vneshekonombank, the foreign trade bank. That means that Taruta and the bank are partners. Taruta himself holds only 24.999% of IUD, while the bank is 100% state-owned and Russian Premier Dmitry Medvedev is the chair of its supervisory board. And so, whether he intended it to be so or not, Serhiy Taruta is business partners with the Kremlin.

    What kind of influence the Kremlin has over the Donetsk oligarch and his party is not entirely clear and, so far, there is no evidence. Nor is there evidence that Osnova is being financed by Russian money. Given the political histories of the party’s spokespersons, however, and the nature of Taruta’s business interests, it’s worth getting a good glimpse into its inner workings. It’s entirely possible that, under the aegis of a pro-European politician, some more agents of influence from an enemy state could find their way to seats in the Rada.

    In the basement of the Capitol

    Anna Korbut

    On September 25, NewsOne reported on Serhiy Taruta’s event in Washington: “The highest level in the US, the Special Congressional Committee for Financial Issues [sic], will find out about the corruption at the NBU. Only thanks to the systematic work of the team that collected evidence about the corruption of the top officials at the National Bank of Ukraine, will the strongest in the world find out about this.” At the event, Taruta and Oleksandr Zavadetskiy, a one-time director of the NBU Department for Monitoring individuals connected to banks, were planning to report on the deals that by-then-departed NBU Governor Valeria Hontareva had cut. The event did take place… in a tiny basement room at the Capitol where the Congress meets, with a very small audience—and NewsOne cameras.

    The speakers at the event were introduced, not without some problems in pronunciation, by Connie Mack IV, a Republican member of the US House of Representatives from 2005 to 2013. Since leaving his Congressional career behind, Mack has been working as a lobbyist and consultant. Over 2015-2016, his name often came up as a lobbyist for Hungary’s Viktor Orban Administration in the US.

    Former CIA director James Woolsey Jr. offered a few generalized comments about corruption. In addition to being the CIA boss in 1993-1995 under the first Clinton Administration, Woolsey held high posts under other US presidents as well and was involved in negotiations with the USSR over arms treaties in the 1980s.

    Interestingly, there were no current elected American officials in attendance at the event. Moreover, there is no such creature as a “Special Congressional Committee for Financial Issues” in the US Congress. The House has a Financial Services Committee and the Senate has a Finance Committee. Among the joint Congressional committees there is none that specializes specifically in financial issues. The Senate Finance Committee met on September 25 but the agenda included only propositions from a number of senators on how to reform the Affordable Care Act. Pretty much the only reaction to Taruta’s US event was an article by JP Carroll in the Weekly Standard under the headline, “The mother of all fake news items: How a windowless room in the basement of the Capitol was set up to look like a fake [sic] Congressional hearing.” And some angry tweets in response.

    Later on, in fact, some questions did arise, such as the validity of information published in a pamphlet entitled: “Hontareva: a threat to Ukraine’s economic security,” which was handed out to participants. Yet, this very brochure had been challenged nearly a year earlier, in October 2016, by reporters at Vox Ukraine, who analyzed the information presented. In an article entitled “VoxCheck of the Year. How Much Truth There Is in Serhiy Taruta’s Pamphlet about the Head of Ukraine’s Central Bank,” journalists came to the conclusion that, while the data in the text was largely accurate, it had been completely manipulated. Somewhat later, they did a follow-up analysis of what Ms. Hontareva actually did wrong as NBU Chair.

    Translated by Lidia Wolanskyj

    ———-
    “Osnova: Taruta’s political foundation” by Denys Kazanskyi; Ukrainian Week; 10/18/2017

    “The Osnova site states that the party’s ideology is based on the principles of liberal conservatism. In Ukrainian politics, however, these words typically mean very little. What kind of conservatism are we talking about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s other populists, all of whom count on a fairly undemanding electoral base. In some ways, he resembles Serhiy Tihipko, who tried over and over again to enter politics as a “new face,” although he had been in politics since his days in the Dnipropetrovsk Oblast Komsomol Executive.”

    A party based on the principles of liberal conservatism. So a vague party for a vague cause. That seems like an appropriate fit for Serhiy Taruta, an intriguingly vague figure. But he’s a notable figure from Donetsk, the heartland of the separatists, because he never played up to the pro-Russian parties and movements and was consistently a supporter of the pro-Kiev forces. That included supporting Viktor Yushchenko in 2004 and Yulia Tymoshenko in 2010:


    Who will join the Taruta team? Whose interests will the party promote and who will be its allies? Where will its money come from? Taruta himself is a very ambiguous figure. For a long time he was seen as an untypical Donetsk homeboy: a high-profile businessman with an intelligent demeanor without any known criminal background. He also differed from the other Donetsk politicians in his political positions. He never played up to pro-Russian parties and movements, supporting, instead, pro-Ukrainian forces that were never very popular in Donbas.

    For instance, in a 2006 interview in Ukrainska Pravda, the tycoon admitted that in 2004 he had cast his ballot for Viktor Yushchenko. “My position was the European choice,” he emphasized. In that same interview he also mentioned that he liked Yulia Tymoshenko.

    In 2010, Taruta, in fact, supported Tymoshenko in her bid for the presidency. “Of the two candidates running today, only Yulia Tymoshenko will be able to effectively defend business interests and overcome corruption,” he said in February 2010. “She represents political and economic stability in Ukraine and will work in the country’s interests, not the interests of some particular business clan. Besides, Ms. Tymoshenko has well-deserved authority in the eyes of leaders in Russia and Europe, which means she will always be able to work out a deal in favor of Ukrainian business. Only with President Tymoshenko will it be possible for Ukraine to see all those promising growth plans that we have outlined with our new Russian partners.”

    And Taruta’s pro-Kiev orientation is no doubt a big reason he was appointed governor of Donetsk in March of 2014 following the post-Maidan collapse of the Yanukovych government. But he didn’t last long, resigning in October of 2014. And that was partly attributed to his limited support for the volunteer militias compared to the appointed governor of the neighboring Dnipro oblast, Ihor Kolomoisky (note that, as we’ll see in a following article, both Taruta and Kolomoisky reportedly supported the Azov Battalion):


    Positive image, poor performance

    And so, when Taruta was appointed Governor of Donetsk Oblast in 2014, just as the anti-Ukrainian putsch began there, Ukrainians by and large saw this as something positive. Taruta seemed to be just the right candidate with the strength to resolve the situation: a local oligarch who understood the local mentality well and was oriented towards Ukraine. But it was not to be. Taruta proved to be a weak politician and was unable to get control over the situation. The local police and SBU kept sabotaging orders from above and had little interest in defending the Oblast State Administration. Unlike Ihor Kolomoyskiy in neighboring Dnipropetrovsk Oblast, Taruta either did not dare or did not want to put together pro-Ukrainian Self-Defense squads. And so the Donbas Battalion was actually formed in Dnipro, and not in the Donbas. Meanwhile in Donetsk Oblast, the advantage went to the militants almost from the start.

    After resigning as governor, Taruta was elected to parliament. And now he has a new party, Osnova, which is characterized as clearly designed to pick up the electorate of the now-defunct Party of Regions:


    After he resigned as governor, Taruta was elected to the Verkhovna Rada. Eventually, he announced the formation of his own political party. Based on information leaked in the press, it was clear from the beginning that this new party was intended to pick up the electorate of the now-defunct Party of the Regions, mostly in Ukraine’s southern and eastern oblasts. This certainly makes sense, but the problem is that there are several similar parties already busy working to win over this same electorate. The monopoly enjoyed by PR has long since collapsed. Now, voters in those regions have the Opposition Bloc or Opobloc, Vadym Rabinovych’s Za Zhyttia [For Life] Party, the forever-lurking Vidrodzhennia [Revival] founded in 2004, and Nash Krai [Our Country]. Osnova will make five in this cluster and can only hope that yet another project along the lines of the also-defunct Socialist Party doesn’t make an appearance in the run-up to the 2019 election. In this kind of situation, the chances of Osnova succeeding without forming an alliance with any of the more popular political parties are very low.

    There also appears to be speculation that Rinat Akhmetov, a top oligarch and one of the primary backers of the “Opposition Bloc”, may be behind Taruta’s Osnova initiative. But there’s no evidence of this, and if true it would put Osnova in competition with the Opposition Bloc for Akhmetov’s voters. Also, people close to Akhmetov aren’t found in Osnova’s leadership:


    There were rumors at one point that Taruta’s party was being supported by Rinat Akhmetov, but this is hard to confirm, one way or another, especially since relations between the two Donetsk tycoons were always strained. The chances of this being true are at most 50-50. One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed. If this is true, however, then Osnova is pretty much guaranteed a spot in the next Rada, because Akhmetov has both the money and the administrative leverage in Donetsk, Zaporizhzhia and Dnipropetrovsk Oblasts, where his businesses are located, to make sure of this.

    Filling Osnova’s ranks

    So far, it’s not obvious that Akhmetov is behind this new party of Taruta’s. Of those who have already confirmed that they will join Osnova, Akhmetov’s people are not especially evident. Right now, the party appears to be drawing people who are not especially known in Ukrainian politics. Indeed, judging from the party’s Facebook page, there are only three or four spokespersons other than Taruta.

    But while Taruta is clearly a pro-Kiev/pro-EU kind of Ukrainian politician, he does have one notable tie to the Kremlin: a majority stake in his industrial conglomerate was sold to a Russian state-owned bank in 2010:


    The PR-Russia connection

    Why Serhiy Taruta decided to put his faith in people related to the Yanukovych regime is not entirely understandable. Is this the personal initiative of the oligarch himself or is it at the request of some silent investor? It’s not clear who actually is funding the party, but it seems unlikely that Taruta is putting up his own money. Although this oligarch’s worth was estimated at over US $2 billion back in 2008, he claims today that his wealth has shrunk a thousand-fold. In an interview with Hard Talk in 2015, he announced that he had preserved only 0.1% of his former wealth.

    Which brings the story around to Taruta’s business interests. In 2010, 50%+2 shares of the Industrial Union of Donbas (IUD), founded by the oligarch, was bought up by Russia’s Vneshekonombank, the foreign trade bank. That means that Taruta and the bank are partners. Taruta himself holds only 24.999% of IUD, while the bank is 100% state-owned and Russian Premier Dmitry Medvedev is the chair of its supervisory board. And so, whether he intended it to be so or not, Serhiy Taruta is business partners with the Kremlin.

    What kind of influence the Kremlin has over the Donetsk oligarch and his party is not entirely clear and, so far, there is no evidence. Nor is there evidence that Osnova is being financed by Russian money. Given the political histories of the party’s spokespersons, however, and the nature of Taruta’s business interests, it’s worth getting a good glimpse into its inner workings. It’s entirely possible that, under the aegis of a pro-European politician, some more agents of influence from an enemy state could find their way to seats in the Rada.

    And beyond building his mysterious new Osnova party, Taruta is also busy lobbying the US about his pet project of outing alleged corruption at Ukraine’s central bank. Or at least he’s busy making it look like he’s lobbying the US about this. And he’s willing to go to enormous lengths to create those appearances, like a September 25, 2017 fake congressional hearing in the US Capitol where an ex-Congressman, Connie Mack, feigned congressional outrage over Taruta’s allegations and an ex-CIA chief, James Woolsey, gave words of support for the ‘anti-corruption drive’. And this was all televised in Ukraine and treated like a real US political event:


    On September 25, NewsOne reported on Serhiy Taruta’s event in Washington: “The highest level in the US, the Special Congressional Committee for Financial Issues [sic], will find out about the corruption at the NBU. Only thanks to the systematic work of the team that collected evidence about the corruption of the top officials at the National Bank of Ukraine, will the strongest in the world find out about this.” At the event, Taruta and Oleksandr Zavadetskiy, a one-time director of the NBU Department for Monitoring individuals connected to banks, were planning to report on the deals that by-then-departed NBU Governor Valeria Hontareva had cut. The event did take place… in a tiny basement room at the Capitol where the Congress meets, with a very small audience—and NewsOne cameras.

    The speakers at the event were introduced, not without some problems in pronunciation, by Connie Mack IV, a Republican member of the US House of Representatives from 2005 to 2013. Since leaving his Congressional career behind, Mack has been working as a lobbyist and consultant. Over 2015-2016, his name often came up as a lobbyist for Hungary’s Viktor Orban Administration in the US.

    Former CIA director James Woolsey Jr. offered a few generalized comments about corruption. In addition to being the CIA boss in 1993-1995 under the first Clinton Administration, Woolsey held high posts under other US presidents as well and was involved in negotiations with the USSR over arms treaties in the 1980s.

    Interestingly, there were no current elected American officials in attendance at the event. Moreover, there is no such creature as a “Special Congressional Committee for Financial Issues” in the US Congress. The House has a Financial Services Committee and the Senate has a Finance Committee. Among the joint Congressional committees there is none that specializes specifically in financial issues. The Senate Finance Committee met on September 25 but the agenda included only propositions from a number of senators on how to reform the Affordable Care Act. Pretty much the only reaction to Taruta’s US event was an article by JP Carroll in the Weekly Standard under the headline, “The mother of all fake news items: How a windowless room in the basement of the Capitol was set up to look like a fake [sic] Congressional hearing.” And some angry tweets in response.

    Later on, in fact, some questions did arise, such as the validity of information published in a pamphlet entitled: “Hontareva: a threat to Ukraine’s economic security,” which was handed out to participants. Yet, this very brochure had been challenged nearly a year earlier, in October 2016, by reporters at Vox Ukraine, who analyzed the information presented. In an article entitled “VoxCheck of the Year. How Much Truth There Is in Serhiy Taruta’s Pamphlet about the Head of Ukraine’s Central Bank,” journalists came to the conclusion that, while the data in the text was largely accurate, it had been completely manipulated. Somewhat later, they did a follow-up analysis of what Ms. Hontareva actually did wrong as NBU Chair.

    So now let’s take a look at a report on this bizarre fake event written by the one American reporter who was invited to attend. As the article notes, the event was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” No current members of Congress were there. Instead, it was a private panel discussion hosted by former Rep. Connie Mack IV (R-FL) and Matt Keelen, a veteran political fundraiser and operative. It was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter. Mack was wearing his old congressional pin on his lapel.

    Much of the event was spent criticizing Ukraine’s former central banker Valeriya Hontareva (Gontareva). The “HONTAREVA report” is the product of Taruta, and he has been out promoting it since late 2016. According to VoxCheck, a Ukrainian fact checking website, “the data [in the report], though mostly correct, are manipulated in almost all occasions.” VoxCheck also notes that the report has split Ukrainian politicians.

    James Woolsey, the former CIA director and former Trump campaign adviser, was also at the event and briefly spoke. Woolsey talked about how “sweet” Russia was in the early years after the fall of the Berlin Wall and the need to find a way to make Russia “sweet” like that again.

    One Senate aide described Woolsey’s appearance there as part of a “strange, strange event” and an “inter-oligarch dispute”: “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for a inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.” And that seems like a good way to summarize this: a strange, strange event that’s one element of a broad inter-oligarch dispute. A dispute that’s giving us some insights into the kind of figures in Ukraine Cambridge Analytica and AIQ want to work for:

    The Weekly Standard

    The Mother of All Fake News
    How a windowless room in the basement of the Capitol was set up to look like a fake congressional hearing.

    1:12 PM, Sep 29, 2017 | By J.P. CARROLL

    Watchers of Ukraine’s NewsOne television channel on September 25 were treated to what was suggested to be a congressional hearing in Washington about corruption in the National Bank of Ukraine (the NBU), which is the Ukrainian equivalent of the Federal Reserve Board.

    The event, which took place in the basement of the U.S. Capitol, Room HC 8, was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” NewsOne teased it this way:

    The highest levels of corruption in the NBU are known by the US Congressional Committee on Financial Issues.

    Only thanks to the systematic work of the team that collected evidence of corruptions of the most important officials of the National Bank, the strongest of the world will find out about it.

    Shocking details and resonant details—live streaming on NewsOne! Turn on at 21:00—live from Washington DC

    Except, what was broadcast was not a hearing of any committee of Congress. No current members of Congress were even there. What was this odd event? A private panel discussion hosted by former Rep. Connie Mack IV (R-FL), along with veteran political fundraiser and operative, Matt Keelen. But unlike an actual congressional hearing, this private event was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter (me).

    Handed out to attendees was a report titled “HONTAREVA: Combatting Corruption in the National Bank of Ukraine.” The report’s subject is Valeriya Hontareva, who resigned as governor of the NBU in April in the wake of death threats after she reformed Ukraine’s banking system, including nationalizing the largest bank, PrivatBank. Hontareva is an ally of Ukrainian President Petro Poroshenko.

    Joining Mack and Keelen at the front of the room were two panelists: Sergiy Taruta, a billionaire member of the Ukrainian parliament who previously served as governor of Donetsk in eastern Ukraine, and Oleksandr Zavadetskyi, who formerly worked at the NBU and claimed to have been fired after asking inappropriate questions regarding bank nationalization procedures while Hontareva was in charge.

    The HONTAREVA report is the product of Sergiy Taruta, and he has been out flogging it for nearly a year. VoxCheck, a Ukrainian fact checking website, analyzed Taruta’s report in late 2016 and says of the report: “VoxCheck has checked most of the facts from the Taruta’s brochure and has discovered that the data, though mostly correct, are manipulated in almost all occasions.”

    VoxCheck reports that the effect of Taruta’s “pamphlet” has been a “split [between] politicians and experts into two opposing camps, those who support Taruta and those who support Valeriya Hontareva.” (VoxCheck was similarly critical of Hontareva’s rebuttal.)

    Much of the event was spent criticizing Hontareva. Mack wore his old congressional pin on his lapel throughout. He opened by musing about his time on the House Foreign Affairs Committee. “It was always important for us as a committee and as a Congress to understand what’s happening around the world, and the topic of corruption would always come up,” he said.

    Curiously, James Woolsey, the former Clinton Administration CIA director and former Trump campaign adviser, also attended and briefly spoke during the event.

    Mack identified Woolsey as “a special guest with us today.” Woolsey got up from his seat in the sparse audience and recalled the time years ago when he helped negotiate a conventional arms treaty in Europe. He mentioned Ukraine in that context, but did not talk about corruption. Woolsey said in part that after the fall of the Berlin Wall, “For the next three to four years, the Russians were very easy to get along with. They were sweethearts.” The former CIA director went on to say, “I would love to see the international events work out in such a way that we end up being able to do two things. One, is to deal with the existence of corruption in the way that you referred to and that many people here are experts on. And the other is to keep Ukraine and other states in the region, such as Poland, from feeling that they are constantly under pressure from Russia to do the wrong thing. Resuscitate the days of friendly Russia in the early ‘90s.”

    When asked as to why he hosted this event, former Congressman Mack told this reporter, “I represent a group that is interested in highlighting corruption, not just in Ukraine, but all over: from Central to South America, to Eastern Europe.” Mack acknowledged beforehand that the event was on the record, but when I asked Woolsey about his attendance after the event, he suggested that his remarks were off the record, despite the event being recorded and broadcast on Ukraine’s NewsOne.

    Whether intentional or not, the nature and location of the event gave Ukrainian journalists the pretext to misleadingly suggest the event was action by the United States Congress.

    In an interview after the event concluded, Taruta told NewsOne: “the fact that we’re here is exactly proof that the American government, the American Congress, are not indifferent to the corruption that is today at the highest echelons of power/government.”

    Ukrainian officials derided Mack’s panel as fake news. Via a press release, the NBU’s website responded this way:

    Serhii Taruta spreads false information about an alleged hearing in the Congress of the United States of America dedicated to Ukrainian authorities and the NBU.

    As far as the NBU is informed, the US Congress held no official hearing or meeting on the subjects indicated in Mr Taruta’s message either today or any other day. In reality, an informal meeting hosting less than 20 persons was held in a room taken on lease; the organizer and moderator was a representative of the lobbying company Liberty International Group, and the speakers were Mr Serhii Taruta and Mr Oleksandr Zavadetskyi, an NBU’s former employee. No officials from the US Administration or Congress attended the events.

    In an email, Dmytro Shymkiv, the deputy head of Presidential Administration of Ukraine, said: “The event on Capitol Hill about the National Bank of Ukraine was not a congressional hearing . . . The discussion was held without public scrutiny and was sponsored by a secret source. It just happened to be convened in a room on Capitol Hill by an American who was once, years ago, a congressman.” Mack, who is now a registered lobbyist, was last in Congress in 2013 after being defeated in a race for a U.S. Senate seat.

    It is unclear whether the event was “sponsored” in the sense that money was exchanged for use of the room. Meeting rooms—like HC-8—are typically used in conjunction with official congressional activity, but current members of Congress are able to sponsor use of such rooms for constituent groups, provided they attend. If they cannot attend, one of their aides is required to attend. The room reservation form from the speaker’s office, which controls reservations, warns congressional offices that these rooms cannot be used for: “Commercial, profit-making, fundraising, advertising, political or lobbying purposes, nor for entertaining tour groups.”

    An inquiry to Speaker Ryan’s office about the use of the space was not returned.

    Mack is registered to lobby on behalf of Interconnection Commerce S.A. to try to raise awareness of “corruption within the National Bank of Ukraine.” POLITICO Influence reports that “It’s unclear who Interconnection S.A. represents. The firm lists an address in the British Virgin Islands and shows up in the Panama Papers leaks but otherwise has no online presence.”

    A Senate aide with knowledge of the event said, “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for a inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.”

    ———-

    “The Mother of All Fake News” by J.P. CARROLL; The Weekly Standard; 09/29/2017

    “The HONTAREVA report is the product of Sergiy Taruta, and he has been out flogging it for nearly a year. VoxCheck, a Ukrainian fact checking website, analyzed Taruta’s report in late 2016 and says of the report: “VoxCheck has checked most of the facts from the Taruta’s brochure and has discovered that the data, though mostly correct, are manipulated in almost all occasions.””

    The fake congressional hearing is a sign of how much Taruta wants to publicize his report on the corruption at Ukraine’s central bank. But it’s also a sign that Taruta’s primary audience for this fake hearing was Ukrainians. And Taruta and his NewsOne Ukrainian media partners were more than happy to maintain the pretense that this was a real congressional event for that Ukrainian audience. It was a private event hoax designed to look like a public event:


    The event, which took place in the basement of the U.S. Capitol, Room HC 8, was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” NewsOne teased it this way:

    The highest levels of corruption in the NBU are known by the US Congressional Committee on Financial Issues.

    Only thanks to the systematic work of the team that collected evidence of corruptions of the most important officials of the National Bank, the strongest of the world will find out about it.

    Shocking details and resonant details—live streaming on NewsOne! Turn on at 21:00—live from Washington DC

    Except, what was broadcast was not a hearing of any committee of Congress. No current members of Congress were even there. What was this odd event? A private panel discussion hosted by former Rep. Connie Mack IV (R-FL), along with veteran political fundraiser and operative, Matt Keelen. But unlike an actual congressional hearing, this private event was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter (me).

    Adding to the bizarreness was the speech by former CIA director James Woolsey about what sweethearts Russia was after the fall of the Berlin wall and the need to return to that point:


    Curiously, James Woolsey, the former Clinton Administration CIA director and former Trump campaign adviser, also attended and briefly spoke during the event.

    Mack identified Woolsey as “a special guest with us today.” Woolsey got up from his seat in the sparse audience and recalled the time years ago when he helped negotiate a conventional arms treaty in Europe. He mentioned Ukraine in that context, but did not talk about corruption. Woolsey said in part that after the fall of the Berlin Wall, “For the next three to four years, the Russians were very easy to get along with. They were sweethearts.” The former CIA director went on to say, “I would love to see the international events work out in such a way that we end up being able to do two things. One, is to deal with the existence of corruption in the way that you referred to and that many people here are experts on. And the other is to keep Ukraine and other states in the region, such as Poland, from feeling that they are constantly under pressure from Russia to do the wrong thing. Resuscitate the days of friendly Russia in the early ‘90s.”

    And that’s why one Senate aide referred to it all as a strange, strange event: a former CIA director showing up at a hoax event that’s part of a larger inter-oligarch dispute:


    A Senate aide with knowledge of the event said, “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for an inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.”

    So let’s now take a closer look at that inter-oligarch dispute to get a better sense of who Taruta is aligned with in Ukraine. And in this case he’s clearly aligned with Ihor Kolomoisky, co-founder of the nationalized Privatbank.

    As the article also notes, when Taruta was selling the majority stake in the industrial conglomerate he co-founded, Industrial Union of Donbass, in 2010, he was a close ally of Yulia Tymoshenko at the time. And according to leaked cables, Tymoshenko wanted him to keep the sale a secret over fears that she would be attacked for selling out Ukraine. It’s another indication of Taruta’s political pedigree.

    The article also has an explanation from James Woolsey on why he attended that event: he was duped. He agreed to show up in the audience and then was asked on the spot to make some remarks. That’s the line he’s going with.

    And the article identifies the person who has come forward to claim responsibility for arranging the event: Anatoly Motkin, a one-time aide to a Georgian oligarch. Motkin founded the StrategEast consulting firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.” Motkin claims that he decided to fund the event because Taruta brought the allegations about Gontareva to his attention.

    So that gives us a few more data points about Taruta: he was close to Tymoshenko, he’s doing Ihor Kolomoisky’s bidding in waging this fight against the nationalization of Privatbank, and the person who actually set up the event runs a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:

    The Daily Beast

    The Allegedly Murderous Oligarch, the Duped CIA Chief, and the Trumpkin
    Who was behind a mysterious fake hearing in the basement of the U.S. Capitol?

    Betsy Woodruff
    03.27.18 5:04 AM ET

    On Sept. 25, 2017, a windowless room in the basement of the Capitol Building became the site of one of Washington’s more mysterious recent events.

    On hand: an investor who was once unsuccessfully sued for allegedly helping murder his own boss, a former congressman from the Florida panhandle, and a former Trump campaign staffer. One of two Ukrainian media outlets to cover the event is owned by an old associate of Paul Manafort’s—a man who federal prosecutors allege to be an “upper-echelon associate of Russian organized crime.”

    Oh, and the former director of the CIA was involved.

    The former CIA director told The Daily Beast he wouldn’t have gotten involved if he had known what was going on. One of the American lobbyists said the event was used for propaganda. The guy who got sued over his boss’s death? He now takes credit for the whole shebang.

    THE BANK TAKEOVER

    This story starts in Kyiv, Ukraine, on June 19, 2014. That’s when a woman named Valeriya Gontareva became the chair of the country’s powerful central bank. Ukrainian politics is rife with corruption, especially by American standards, and is dominated by the country’s powerful oligarchs. As chair of the national bank, Gontareva made a host of changes to the country’s financial system—and some powerful enemies.

    One of the biggest changes she oversaw was a government takeover of the country’s biggest commercial bank, Privatbank. The oligarch Ihor Kolomoisky (who The Wall Street Journal once described as “feisty”) co-founded it. When Gontareva presided over the bank’s nationalization, its accounts were missing more than $5 billion, according to the Financial Times, in large part because the bank lent so much money to people with connections to Kolomoisky.

    “International financial institutions applauded the state takeover,” wrote FT. “It has been widely seen as the culmination of Ukraine’s efforts since 2014 to clean up a dysfunctional banking sector dominated by oligarch-owned banks.”

    The bank’s founders weren’t pleased.

    After the bank takeover, Gontareva received numerous threats. One protester put a coffin outside her door, according to Reuters. On April 10, 2017, she announced at a press conference that she was resigning from her post. She touted her accomplishments at the event, but cautioned that in her absence the country’s financial sector could face greater troubles.

    “I believe that resistance to changes and reforms will grow stronger now,” she said.

    THE FAKE HEARING

    Five months later, in Washington D.C., something odd happened: American lobbyists hosted an event, ostensibly on anti-corruption issues, in the basement of the Capitol Building. The event vilified Gontareva. Organizers distributed literature featuring a grim close-up of her face, calling her a threat to Ukraine’s economic security, and asking if she was “CINDERELLA OR WICKED STEPMOTHER?”

    Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.”

    The event’s organizers are adamant that they did not plan for it to look like a fake congressional hearing. But Ukrainian reporters who attended the event covered it that way. Former Rep. Connie Mack, one of the American lobbyists who organized the event, sported the pin that members of Congress wear. James Woolsey, former CIA director, attended and spoke briefly to the group.

    Woolsey’s spokesperson, Jonathan Franks, later said he was duped.

    “Ambassador Woolsey was deliberately misled about the nature of this event when he agreed to attend,” Franks told The Daily Beast. “He expected to be a member of the audience for a serious discussion of issues facing the Ukraine, an area he’s been interested in for decades. He didn’t agree to be identified as a ‘special guest’ nor did he agree to speak. Perhaps he was guilty of being old fashioned, but it never occurred to him the organizers would lure him to an event in the Capitol in order to make him an involuntary participant in a sham.”

    Rep. Ron Estes, a freshman from Kansas, booked the room for Mack and Co. His office later told The Daily Beast this won’t happen again.

    Mack and Matt Keelen, a lobbyist whose firm’s website boasts of his “well fostered relationships” in the Trump administration, both disclosed in federal registration forms that they put on the event for a shell company based in the British Virgin Islands called Interconnection Commerce SA.

    “I never portrayed this as a hearing,” Mack told The Daily Beast. “We didn’t do anything to make it look like a hearing. It was in a very stale room in the basement, no markings of a congressional hearing at all.”

    At the event, Mack used the term “we” when referring to Congress, and was emphatic that members should investigate Gontareva.

    “One thing is clear: that we, the Congress of the United States—and there are taxpayer dollars at risk, and there are allegations, suggestions, and evidence—should investigate,” he said, according to an audio recording of the event.

    Mack blamed BGR Group, a lobbying firm that works for Ukraine’s current president, Petro Poroshenko, for pushing the narrative that he and Keelen put on a fake hearing.

    Two Ukrainian news outlets covered the event. One of those outlets, ChannelOne, described it as a hearing of the nonexistent “U.S. Congressional Committee on Financial Issues.”

    “That was pure propaganda on their part,” Mack said. “Whoever those news outlets are, it really is fake news. They had to go a long way to try to make it look like a hearing.”

    The other Ukrainian news outlet that covered the event was UkraNews, which—according to the Objective Project, which monitors media ownership in Ukraine—belongs to Dmitry Firtash.

    That name should ring a bell, if you’ve been following the far-flung drama into foreign influence on the 2016 election. Federal prosecutors in Chicago are seeking Firtash’s extradition to the United States to put him on trial for racketeering. Manafort, former Manafort deputy Rick Gates, and Firtash worked on a deal in 2008 to buy New York’s Drake Hotel—for a cool $850 million—but the deal fell through.

    Lanny Davis—a former special counsel in Bill Clinton’s White House who today represents Firtash—said his client had nothing to do with the hearing.

    “Mr. Firtash had and has no knowledge of, no position on, and no involvement whatsoever in the congressional briefing that occurred and takes no position and has no interest in the issues discussed,” Davis said.

    THE MYSTERY MAN

    So who dreamed up this fake hearing? And who paid for it? For months, the backer of this so-called sham was a mystery. But when The Daily Beast started asking who paid for the event, a little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt.

    A spokesperson for Motkin, formerly an associate to the now-deceased Badri Patarkatsishvili, told The Daily Beast that he paid for the entire event. Alison Patch, a spokesperson for Motkin, said Motkin paid for the event himself in his personal capacity.

    Motkin was an aide to Patarkatsishvili when he reportedly tried to foment a coup in Georgia. After Patarkatsishvili died, Motkin found himself embroiled in a legal battle with Patarkatsishvili’s cousin. The cousin alleged in documents filed as part of a civil suit in New York state court that Motkin was part of a plot to kill Patarkatsishvili (PDF).

    A spokesperson for Motkin said he decided to fund the event because Taruta, the Ukrainian billionaire, brought the allegations about Gontareva to his attention.

    “Although this report was entirely brought by Mr. Taruta’s initiative, for many years Mr. Motkin has worked on promoting democratic values amongst communities close to the former Soviet Union,” said Patch. “Knowing of his interest in supporting anti-corruption efforts, Mr. Taruta shared the information about his report. Mr. Motkin found the evidence presented compelling and decided that if he could help get the issues in front of people who may make a difference, he would.”

    Anders Aslund of the Atlantic Council, an expert on oligarchs’ politicking, didn’t quite believe it. Aslund said he believes the driving force behind the event was Ihor Kolomoisky—the Ukrainian oligarch whose cronies lost all that money when Privatbank was nationalized. Kolomoisky would have millions of reasons to detest Gontareva, the object of the fake hearing’s ire, according to Aslund.

    “This was entirely Kolomoisky,” he said. “Kolomoisky is crooked and clever. He is a person who makes business by doing bankruptcy rather than making profits.”

    Kolomoisky has faced allegations of involvement in contract killings, which he denies. An attorney for Kolomoisky did not respond to multiple requests for comment.

    ———-

    “The Allegedly Murderous Oligarch, the Duped CIA Chief, and the Trumpkin” by Betsy Woodruff; The Daily Beast; 03/27/2018

    “Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.””

    That’s a key observation: Taruta was seen as a close Tymoshenko ally.

    But he’s also a Kolomoisky ally, since this inter-oligarch dispute is Kolomoisky’s dispute and Taruta is fighting Kolomoisky’s fight:


    This story starts in Kyiv, Ukraine, on June 19, 2014. That’s when a woman named Valeriya Gontareva became the chair of the country’s powerful central bank. Ukrainian politics is rife with corruption, especially by American standards, and is dominated by the country’s powerful oligarchs. As chair of the national bank, Gontareva made a host of changes to the country’s financial system—and some powerful enemies.

    One of the biggest changes she oversaw was a government takeover of the country’s biggest commercial bank, Privatbank. The oligarch Ihor Kolomoisky (who The Wall Street Journal once described as “feisty”) co-founded it. When Gontareva presided over the bank’s nationalization, its accounts were missing more than $5 billion, according to the Financial Times, in large part because the bank lent so much money to people with connections to Kolomoisky.

    “International financial institutions applauded the state takeover,” wrote FT. “It has been widely seen as the culmination of Ukraine’s efforts since 2014 to clean up a dysfunctional banking sector dominated by oligarch-owned banks.”

    The bank’s founders weren’t pleased.

    After the bank takeover, Gontareva received numerous threats. One protester put a coffin outside her door, according to Reuters. On April 10, 2017, she announced at a press conference that she was resigning from her post. She touted her accomplishments at the event, but cautioned that in her absence the country’s financial sector could face greater troubles.

    But what about James Woolsey? What’s his excuse for fighting Kolomoisky’s fight? He was tricked. That was his excuse:


    Woolsey’s spokesperson, Jonathan Franks, later said he was duped.

    “Ambassador Woolsey was deliberately misled about the nature of this event when he agreed to attend,” Franks told The Daily Beast. “He expected to be a member of the audience for a serious discussion of issues facing the Ukraine, an area he’s been interested in for decades. He didn’t agree to be identified as a ‘special guest’ nor did he agree to speak. Perhaps he was guilty of being old fashioned, but it never occurred to him the organizers would lure him to an event in the Capitol in order to make him an involuntary participant in a sham.”

    And what about Rep. Estes, the congressman who made this official room available for the stunt? Well, he assures us that it won’t happen again. It’s sort of an explanation:


    Rep. Ron Estes, a freshman from Kansas, booked the room for Mack and Co. His office later told The Daily Beast this won’t happen again.

    And note the two Ukrainian media companies that covered this. There was ChannelOne, which is owned by 1+1 Media, Ihor Kolomoisky’s media group. And also UkraNews, which belongs to Dmitry Firtash:


    Two Ukrainian news outlets covered the event. One of those outlets, ChannelOne, described it as a hearing of the nonexistent “U.S. Congressional Committee on Financial Issues.”

    “That was pure propaganda on their part,” Mack said. “Whoever those news outlets are, it really is fake news. They had to go a long way to try to make it look like a hearing.”

    The other Ukrainian news outlet that covered the event was UkraNews, which—according to the Objective Project, which monitors media ownership in Ukraine—belongs to Dmitry Firtash.

    That name should ring a bell, if you’ve been following the far-flung drama into foreign influence on the 2016 election. Federal prosecutors in Chicago are seeking Firtash’s extradition to the United States to put him on trial for racketeering. Manafort, former Manafort deputy Rick Gates, and Firtash worked on a deal in 2008 to buy New York’s Drake Hotel—for a cool $850 million—but the deal fell through.

    And recall what we saw in the above Ukraine Week piece about the makeup of the Opposition Bloc and the unproven speculation that Rinat Akhmetov could be behind Osnova: “One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed”. That sure sounds like Firtash represents a faction of the Opposition Bloc that would like to see Poroshenko go (recall that Andreii Artemenko’s peace plan proposal involved the collapse of the Poroshenko government under a wave of scandal revelations, with Artemenko providing the scandal evidence). So it’s notable that we have Firtash’s news channel promoting Taruta’s fake congressional hearing along with Kolomoisky’s ChannelOne.

    And look who has come forward as the event organizer: Anatoly Motkin, a one-time aide to a Georgian oligarch:


    THE MYSTERY MAN

    So who dreamed up this fake hearing? And who paid for it? For months, the backer of this so-called sham was a mystery. But when The Daily Beast started asking who paid for the event, a little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt.

    A spokesperson for Motkin, formerly an associate to the now-deceased Badri Patarkatsishvili, told The Daily Beast that he paid for the entire event. Alison Patch, a spokesperson for Motkin, said Motkin paid for the event himself in his personal capacity.

    Motkin was an aide to Patarkatsishvili when he reportedly tried to foment a coup in Georgia. After Patarkatsishvili died, Motkin found himself embroiled in a legal battle with Patarkatsishvili’s cousin. The cousin alleged in documents filed as part of a civil suit in New York state court that Motkin was part of a plot to kill Patarkatsishvili (PDF).

    A spokesperson for Motkin said he decided to fund the event because Taruta, the Ukrainian billionaire, brought the allegations about Gontareva to his attention.

    “Although this report was entirely brought by Mr. Taruta’s initiative, for many years Mr. Motkin has worked on promoting democratic values amongst communities close to the former Soviet Union,” said Patch. “Knowing of his interest in supporting anti-corruption efforts, Mr. Taruta shared the information about his report. Mr. Motkin found the evidence presented compelling and decided that if he could help get the issues in front of people who may make a difference, he would.”

    And when we look at how Motkin’s lobbying firm describes itself, it’s “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:

    Strategeast

    About US

    Anatoly Motkin
    Founder and President

    Anatoly Motkin is founder and president of StrategEast, a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe. In this role, Mr. Motkin uses his two decades of involvement in the development of media and political projects in the post-Soviet region to support various programs and combat corruption in the region.

    Mr. Motkin has devoted much of his career to assisting the processes of Westernization in post-Soviet states through the launching of a variety of media, political and business initiatives aimed to drive social awareness and connect communities. He has successfully invested in multiple technology startups, such as one of the most popular messaging apps and the ridesharing service app Juno, which was recently acquired by on-demand ride service Gett.

    Mr. Motkin has also created and produced several successful Russian-language media projects in his native Israel, as well as in Latvia, Belarus and Georgia.

    Projects established by Mr. Motkin include a partnership with Yedioth Ahronoth publishing group, the strongest media house in Israel, to produce an entertainment magazine “Tele-Boom”, Time Out – Israel and “7:40” – a primetime show on Channel 9 – the only Israeli TV broadcast channel in Russian. He is also the founder of Cursorinfo, one of the oldest Russian-language news websites and one of the most cited sources for information on current events in Israel.

    Mr. Motkin began his career as a political consultant advising the Israeli Government on the country’s Russian-speaking sector. During this time, Mr. Motkin served as the head of the Russian-speaking voters campaign for the Shinui party, assisting to triple the number of votes for the party and assisting the Shinui in winning 15 seats in the 2003 Knesset election.

    ———-

    “Strategeast: About US: Anatoly Motkin”; Strategeast.org; 04/07/2018

    “Mr. Motkin has devoted much of his career to assisting the processes of Westernization in post-Soviet states through the launching of a variety of media, political and business initiatives aimed to drive social awareness and connect communities. He has successfully invested in multiple technology startups, such as one of the most popular messaging apps and the ridesharing service app Juno, which was recently acquired by on-demand ride service Gett.”

    And the involvement of someone like Motkin in arranging the theatrics of what amounts to an inter-oligarch dispute over Ihor Kolomoisky’s nationalized bank points to one of the key observations in this situation: it appears to be a fight between different factions of pro-Western Ukrainian oligarchs. And Serhiy Taruta appears to be squarely in the camp of the faction that doesn’t support the separatists but also doesn’t support Poroshenko. As we’ve seen, Taruta has historical ties to Yulia Tymoshenko’s power base, but he also appears to be working with fellow East Ukrainian oligarch Ihor Kolomoisky.

    So, finally, let’s note something important about Taruta and Kolomoisky from this 2015 report by Joshua Cohen, who has done a lot of good reporting on the neo-Nazi threat in Ukraine. It’s a report that would explain some of the animosity between Kolomoisky and the Poroshenko government: it describes the use of privately financed militias that are, in effect, private armies controlled by their Ukrainian oligarch financiers, with Ihor Kolomoisky being one of the biggest militia financiers. That arrangement led to Kolomoisky’s firing as governor of Dnipropetrovsk in 2015, after he sent one of these private armies to seize control of the headquarters of the state-owned oil company, UkrTransNafta, when Kiev fired the company’s chief executive officer, who happened to be an ally of Kolomoisky. So that, in addition to the Privatbank nationalization, is no doubt part of why Kolomoisky might not be super enthusiastic about the Poroshenko government.

    Given the ongoing tensions between the neo-Nazi groups in Ukraine and the Kiev government, and the ongoing threats from groups like the Azov Battalion to ‘march on Kiev’ and take over, it’s noteworthy that one of their biggest financial backers, Ihor Kolomoisky, has so much animosity toward the Poroshenko government. And in our look at Serhiy Taruta it’s also pretty noteworthy that, as the article notes, both Kolomoisky and Taruta were partially financing the neo-Nazi Azov Battalion:

    Reuters

    The Great Debate

    In the battle between Ukraine and Russian separatists, shady private armies take the field

    By Josh Cohen
    May 5, 2015

    While the ceasefire agreement between the Ukrainian government and separatist rebels in the eastern part of the country seems largely to be holding, a recent showdown in Kiev between a Ukrainian oligarch and the government revealed one of the country’s ongoing challenges: private military battalions that do not always operate under the central government’s control.

    In March, members of the private army backed by tycoon Ihor Kolomoisky showed up at the headquarters of the state-owned oil company, UkrTransNafta. The standoff occurred after Kiev fired the company’s chief executive officer — an ally of Kolomoisky’s. Kolomoisky said that he was trying to protect the company from an illegal takeover.

    More than 30 of these private battalions, comprised mostly of volunteer soldiers, exist throughout Ukraine. Although all have been brought under the authority of the military or the National Guard, the post-Maidan government is still struggling to control them.

    Ukraine’s military is so weak that after the Russian Federation seized Crimea, Russian-sponsored separatists were able to take over large swathes of eastern Ukraine. Private battalions, funded partially by Ukrainian oligarchs, stepped into this vacuum and played a key role in stopping the separatists’ advance.

    By supplying weapons to the battalions and in some cases paying recruits, Ukraine’s richest men are defending their country — and also protecting their own economic interests. Many of the oligarchs amassed great wealth by using their political connections to purchase government assets at knockdown prices, siphon off profits from state-owned companies and bribe Ukrainian officials to win state contracts.

    When the Maidan protesters overthrew former President Viktor Yanukovich, they demanded that the new government clamp down on the oligarchs’ abuse of power. Instead, many became even more powerful: Kiev handed Kolomoisky and mining tycoon Serhiy Taruta governor posts in important eastern regions of Ukraine, for example.

    Many of these paramilitary groups are accused of abusing the citizens they are charged with protecting. Amnesty International has reported that the Aidar battalion — also partially funded by Kolomoisky — committed war crimes, including illegal abductions, unlawful detention, robbery, extortion and even possible executions.

    Other pro-Kiev private battalions have starved civilians as a form of warfare, preventing aid convoys from reaching separatist-controlled areas of eastern Ukraine, according to the Amnesty report.

    Some of Ukraine’s private battalions have blackened the country’s international reputation with their extremist views. The Azov battalion, partially funded by Taruta and Kolomoisky, uses the Nazi Wolfsangel symbol as its logo, and many of its members openly espouse neo-Nazi, anti-Semitic views. The battalion members have spoken about “bringing the war to Kiev,” and said that Ukraine needs “a strong dictator to come to power who could shed plenty of blood but unite the nation in the process.”

    Ukraine’s President Petro Poroshenko has made clear his intention to rein in Ukraine’s volunteer warriors. Days after Kolomoisky’s soldiers appeared at UkrTransNafta, he said that he would not tolerate oligarchs with “pocket armies” and then fired Kolomoisky from his perch as the governor of Dnipropetrovsk.

    By bringing the private volunteers under Kiev’s full control, Ukraine will benefit in a number of ways. The volunteer battalions will receive the same training as the military, which should help them to better integrate their tactics. They’ll qualify for regular military benefits and pensions. Finally, they will be subject to military law, which allows the government to better deal with any criminal or human rights violations that they commit.

    ———-

    “In the battle between Ukraine and Russian separatists, shady private armies take the field” by Josh Cohen; Reuters; 05/05/2015

    “Ukraine’s President Petro Poroshenko has made clear his intention to rein in Ukraine’s volunteer warriors. Days after Kolomoisky’s soldiers appeared at UkrTransNafta, he said that he would not tolerate oligarchs with “pocket armies” and then fired Kolomoisky from his perch as the governor of Dnipropetrovsk.”

    Yep, it was the use of a private army to seize state assets in a business dispute that got Ihor Kolomoisky fired as governor of Dnipropetrovsk Oblast in 2015. And that was just one example of how these neo-Nazi militias posed a threat to Ukrainian society. There’s also the obvious risk that they act on their own and try to seize control.

    But the greatest threat these neo-Nazi militias pose clearly involves working in coordination with a team of Ukrainian oligarchs. And that’s part of what makes an understanding of the opaque Ukrainian oligarchic fault lines so important, because there’s always the chance that these inter-oligarch disputes will result in these private armies getting used for a coup or something along those lines.

    And that’s a big part of why it’s notable that Taruta and Kolomoisky have a history of financing groups like the Azov Battalion:


    “Some of Ukraine’s private battalions have blackened the country’s international reputation with their extremist views. The Azov battalion, partially funded by Taruta and Kolomoisky, uses the Nazi Wolfsangel symbol as its logo, and many of its members openly espouse neo-Nazi, anti-Semitic views. The battalion members have spoken about “bringing the war to Kiev,” and said that Ukraine needs “a strong dictator to come to power who could shed plenty of blood but unite the nation in the process.””

    And that’s also why it’s so notable if a company like AIQ is offering political services to someone like Taruta: because Taruta appears to be allied with the pro-Western faction of Ukrainian oligarchs who want to replace the current Ukrainian government with their own faction. Much like Andreii Artemenko and his ‘peace plan’ proposal, which also appeared to be a plan from a pro-Western, anti-Poroshenko faction of Ukrainian oligarchs.

    In other words, the story about Serhiy Taruta and the bizarre fake congressional hearing appears to be one element of a much larger, very real inter-oligarch dispute involving some very powerful oligarchs. And Cambridge Analytica/AIQ/SCL appears to be working for one of those sides: the side currently out of power and trying to reverse that situation.

    Posted by Pterrafractyl | April 9, 2018, 4:27 pm
  7. So you know that creepy feeling you get when you Google something and ads creepily related to what you just browsed start following you around on the internet? Rejoice! At least, rejoice if you enjoy that creepy feeling. Because you’ll get to experience it watching broadcast TV too, with the next generation of televisions and the ATSC 3.0 broadcast format, which was just offered to the American public for the first time on KFPH UniMás 35 in Phoenix, Arizona, with more market rollouts planned soon.

    So how is the ATSC 3.0 broadcast format for television going to allow creepily personalized ads to follow you on television too? The new format basically combines over-the-air TV with internet streaming. So part of what you’ll see on the screen will be content sent over the internet which will obviously be personalized. And that’s going to include ads.

    But it won’t just be delivering personalized content. The technology will also allow for tracking of user behavior. And there are no privacy standards at all: that will be up to the individual broadcasters, who will each design their own app to deliver the personalized content. Which obviously means there are going to be lots of broadcasters tracking your television viewing habits, creating the kind of nightmare privacy situation we’ve already seen with platforms like Facebook and its app developers. The ATSC 3.0 broadcast format is like a new giant platform that everyone in the US will share, except with no privacy standards for the app developers, which might be even worse than Facebook.

    So that’s coming with the next generation of televisions. As one might imagine given the fact that this new technology threatens to turn the tv into the next consumer privacy nightmare, this technology was a major focus of several tech demonstrations at the recent National Association of Broadcasters (NAB) conference in Las Vegas. And as one might also imagine, the industry hasn’t had much to say about the privacy aspect of this privacy nightmare it’s about to unleash:

    TechHive

    Next-gen TV to usher in viewer tracking and personalized ads along with 4K broadcasts
    More of your life will be lost to advertisers when TV stations switch to a new digital format

    Martyn Williams By Martyn Williams

    Senior Correspondent, TechHive
    Apr 13, 2018 3:00 AM PT

    On Monday a little bit of U.S. television history was made when KFPH UniMás 35 became the first station to go on air using the new ATSC 3.0 broadcast format in Phoenix, Arizona. Over the coming weeks, several more broadcasters will follow and the first wide-scale test of the new format will be underway.

    The format attempts to blend over-the-air TV with internet streaming, can support 4K broadcasting and localized emergency alerts, and should be more robust for city reception; but it also gives TV stations the chance to start serving personalized advertising.

    Broadcasters haven’t talked much about the advertising aspect, and they’ve said even less about the potential privacy implications, but it was a major focus of several tech demonstrations at the National Association of Broadcasters (NAB) conference in Las Vegas this week.

    At the event, about 300 miles to the north of Phoenix, it was clear that TV stations are keen to use the new format to track more closely what viewers are watching and serve up the same kind of targeted ads that are common on the Internet.

    When viewers tune into an ATSC 3.0 channel, the TV station has the ability to serve them an application that will run inside a browser on their TV. Viewers won’t see a traditional browser window, it will look something like the images above, and because it’s written in HTML5 it will work across all TVs.

    But the style of the app and the features it offers will be down to each individual broadcaster. Some might offer quick links to news clips and the weather and access to a catch-up service (i.e., video on demand that would let you watch previously aired programming you’d missed the first time), while smaller stations might just provide a TV guide.

    One thing many are likely to do is track exactly what you’re watching and for how long.

    The ATSC 3.0 format doesn’t define a privacy policy. It’s down to each TV station so there is no guarantee they will all be uniform.

    In a demonstration app on display at NAB, the TV tracked what a viewer watched and for how long. The pay-off for the viewer would be free or exclusive access to content. So, for example, imagine a future where a TV station gives you free access to premium content in return for being loyal to its newscasts.

    But the TV station would be getting more than loyalty. The data would be used to build a profile of the viewer and serve them personalized ads, delivered over the internet to their TV.

    That will be a lucrative new ad model for TV broadcasters–and that’s why the TV industry is so excited about ATSC 3.0.

    Can you imagine being a middle-of-the-road voter in a swing state when the election rolls around? If you thought political advertising was bad now, just wait until the campaigns get their teeth into targeting on this personalized level. It might be better to leave the TV off for six months.

    In the demonstrations I saw this week, apps were capable of tracking only what a user did inside the app in question. One station won’t be able to see what you watch on a rival, but that gets blurrier in markets where a single owner operates several channels.

    It’s worth remembering that ATSC 3.0 doesn’t inevitably mean a loss in privacy. None of this matters if you don’t hook up a TV to the internet, but then you forego additional services like catch-up.

    ———-

    “Next-gen TV to usher in viewer tracking and personalized ads along with 4K broadcasts” By Martyn Williams; TechHive; 04/13/2018

    “Broadcasters haven’t talked much about the advertising aspect, and they’ve said even less about the potential privacy implications, but it was a major focus of several tech demonstrations at the National Association of Broadcasters (NAB) conference in Las Vegas this week.”

    Mum’s the word on the potential privacy implications for American television viewers. Potential privacy implications that could be coming to a media market near you soon:

    On Monday a little bit of U.S. television history was made when KFPH UniMás 35 became the first station to go on air using the new ATSC 3.0 broadcast format in Phoenix, Arizona. Over the coming weeks, several more broadcasters will follow and the first wide-scale test of the new format will be underway.

    And while the broadcasting industry may not want to talk about potential privacy violations, they sure are excited to talk about collecting viewer data for the purpose of serving up personalized ads:


    The format attempts to blend over-the-air TV with internet streaming, can support 4K broadcasting and localized emergency alerts, and should be more robust for city reception; but it also gives TV stations the chance to start serving personalized advertising.

    At the event, about 300 miles to the north of Phoenix, it was clear that TV stations are keen to use the new format to track more closely what viewers are watching and serve up the same kind of targeted ads that are common on the Internet.

    And in this new app-based model for personalized broadcast television, each broadcaster develops their own apps, meaning there are going to be a lot of different apps/broadcasters potentially tracking what you do with those next-generation TVs:


    When viewers tune into an ATSC 3.0 channel, the TV station has the ability to serve them an application that will run inside a browser on their TV. Viewers won’t see a traditional browser window, it will look something like the images above, and because it’s written in HTML5 it will work across all TVs.

    But the style of the app and the features it offers will be down to each individual broadcaster. Some might offer quick links to news clips and the weather and access to a catch-up service (i.e., video on demand that would let you watch previously aired programming you’d missed the first time), while smaller stations might just provide a TV guide.

    One thing many are likely to do is track exactly what you’re watching and for how long.

    The ATSC 3.0 format doesn’t define a privacy policy. It’s down to each TV station so there is no guarantee they will all be uniform.

    In a demonstration app on display at NAB, the TV tracked what a viewer watched and for how long. The pay-off for the viewer would be free or exclusive access to content. So, for example, imagine a future where a TV station gives you free access to premium content in return for being loyal to its newscasts.

    But the TV station would be getting more than loyalty. The data would be used to build a profile of the viewer and serve them personalized ads, delivered over the internet to their TV.

    That will be a lucrative new ad model for TV broadcasters–and that’s why the TV industry is so excited about ATSC 3.0.

    Although it’s worth noting that the demonstration apps shown to the author of that TechHive article weren’t capable of tracking what you do in other apps. So each broadcaster would, in theory, only get to see what you do with their app and not other broadcasters’ apps. But, of course, a lot of broadcasters are going to own multiple channels in a market. Or they might just decide to share the data with each other:


    In the demonstrations I saw this week, apps were capable of tracking only what a user did inside the app in question. One station won’t be able to see what you watch on a rival, but that gets blurrier in markets where a single owner operates several channels.

    Also keep in mind that there are still significant potential privacy violations even if apps can’t read the activity of other apps. For instance, if an app is capable of simply detecting when you turn the tv off or on, that gives away information about your day-to-day schedule. It’s one of the generic privacy violations that come with the “internet-of-things”.
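    To make that concrete, here’s a minimal sketch (in Python, with entirely invented telemetry) of how even bare on/off power events, with no knowledge of what’s actually being watched, are enough to reconstruct a household’s routine:

    ```python
    from collections import Counter

    # Hypothetical telemetry an app could log just by observing power state.
    # Every day, hour, and event here is invented for illustration.
    events = [
        ("Mon", 7, "on"), ("Mon", 8, "off"), ("Mon", 18, "on"), ("Mon", 23, "off"),
        ("Tue", 7, "on"), ("Tue", 8, "off"), ("Tue", 18, "on"), ("Tue", 23, "off"),
        ("Wed", 7, "on"), ("Wed", 8, "off"), ("Wed", 19, "on"), ("Wed", 23, "off"),
        ("Sat", 10, "on"), ("Sat", 14, "off"),
    ]

    # Count the hours the set turns on: the modal hours sketch the routine
    on_hours = Counter(hour for _, hour, event in events if event == "on")

    # The two most common turn-on hours reveal a weekday wake-up time and an
    # evening viewing block; the earlier one is a decent guess at wake-up
    likely_wake_hour = min(h for h, _ in on_hours.most_common(2))
    print(on_hours.most_common(2))  # [(7, 3), (18, 2)]
    print(likely_wake_hour)         # 7
    ```

    Nothing here needed access to channel or content data: a handful of timestamps is already a behavioral profile.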

    And then there are the possible privacy violations that come with next-generation televisions with built-in microphones. Imagine how many apps will ask for permission to listen to everything you say in order to better personalize the service. Remember those stories about the CIA hacking into Samsung Smart TVs with built-in microphones? That’s probably going to be standard app behavior, if people allow it.

    And, finally, the article notes that this means the nightmare of micro-targeted personalized political ads is coming to broadcast television:


    Can you imagine being a middle-of-the-road voter in a swing state when the election rolls around? If you thought political advertising was bad now, just wait until the campaigns get their teeth into targeting on this personalized level. It might be better to leave the TV off for six months.

    Yep, just wait for Cambridge Analytica-style personalized psychological profiling of you: a profile that incorporates all the information already gathered about you from the existing sources – Facebook, Google, data broker giants like Acxiom – and combines it with the knowledge obtained through your smart television. Then get ready for the next-generation onslaught of the full spectrum of personalized political ads designed to inflame you and polarize the country. The “A/B testing on steroids” advertising experiments employed by the Trump team on social media are coming to television.

    It’ll be a golden age for television commercial actors because they’re going to have to shoot all the different customized versions of the same commercials used to micro-target the audience’s psychological profiles.

    Of course, there is going to be one option for next-generation television owners to avoid the data privacy nightmare of personalized tv: unplug it from the internet and just watch tv the soon-to-be-old-fashioned way:


    It’s worth remembering that ATSC 3.0 doesn’t inevitably mean a loss in privacy. None of this matters if you don’t hook up a TV to the internet, but then you forego additional services like catch-up.

    And that points towards one of the glaring problems with this situation, as well as the only real solution: the only choice American television consumers are going to have is to either navigate a data privacy nightmare landscape, where each app can have its own privacy standards and there are almost no rules, or unplug their smart tvs from the internet and forgo the internet-based services. And that’s because spying on consumers in exchange for services and enhanced profits is the fundamental model of the internet, and this new data privacy nightmare for smart tvs is merely the logical extension of that model. It’s a fundamental problem with the future of television ads and with the internet-of-things in general: mass commercial spying is just assumed in America. It’s the model for the internet in America. There is no alternative. And that model is coming to broadcast television, since commercial mass spying is clearly enshrined in the new ATSC 3.0 broadcast format: a format that lets each app developer make up its own privacy standards. A ‘prepare-for-the-worst-hope-for-the-best’ model that literally prepares the way for the worst-case scenario for consumer privacy and then just hopes it won’t be abused. Like the internet.

    And in the case of this next-generation internet-connected television, there isn’t even the possibility of competition that we find with Facebook, where a Facebook competitor could at least theoretically emerge. There’s only one national broadcast format for smart tvs, and for nations that use the ATSC 3.0 standard it’s going to let each app maker make up their own privacy rules. Note that the ATSC 3.0 standard doesn’t just apply to the US. It was created by the Advanced Television Systems Committee, which is shared by the US, Canada, Mexico, South Korea, and Honduras. So this is a multinational, government-approved television standard, not something market competition can route around. This is as good as the privacy standards are going to get for North American and South Korean internet-connected tv consumers: it’s up to the app developers, i.e. no privacy standards.

    And no standards on the exploitation of all the data collected on us to deliver highly persuasive micro-targeted ad campaigns. Cambridge Analytica-style micro-targeted psychological operations for tv. That’s coming to all elections.

    So just FYI, your next smart television is going to be very persuasive.

    Posted by Pterrafractyl | April 15, 2018, 7:41 pm
  8. This was more or less inevitable: it sounds like the ’87 million’ figure – the number of Facebook profiles that had their data scraped by Cambridge Analytica – is set to be raised again. Recall that it was initially a 50 million figure before Cambridge Analytica whistle-blower Christopher Wylie raised the estimate to 87 million, while hinting that the figure could be more.

    Also recall that the 87 million figure, ostensibly derived from the 270,000 people who downloaded the Cambridge Analytica Facebook app and their many friends, corresponds to ~322 friends per app user on average, which is very close to the 338 friends the average Facebook user had in 2014. In other words, the 87 million figure is roughly what we should expect if you start off with 270,000 app users and scrape the profile information of each of their ~338 friends on average. So if that 87 million figure were to rise significantly, it would raise the question of where else Cambridge Analytica got its data.
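    The arithmetic behind that sanity check is simple enough to verify directly (the figures are from the reporting above; the “friends per user” number is just the implied ratio, not an independently known value):

    ```python
    # Figures from the reporting: ~270,000 app users, ~87 million scraped profiles
    app_users = 270_000
    scraped_profiles = 87_000_000
    avg_friends_2014 = 338  # average Facebook friend count reported for 2014

    # Implied number of friends scraped per app user
    implied_friends_per_user = scraped_profiles / app_users
    print(round(implied_friends_per_user))  # 322

    # Conversely, the haul you'd expect if each user contributed ~338 friends
    expected_profiles = app_users * avg_friends_2014
    print(expected_profiles)  # 91260000 -- same ballpark as the 87M figure
    ```

    So 87 million is about what the friends-permission loophole alone would yield; a substantially higher number would imply additional collection routes.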

    Well, we have a new Cambridge Analytica whistle-blower, Brittany Kaiser, who worked full-time for SCL, Cambridge Analytica’s parent company, as director of business development between February 2015 and January of 2018. And according to Kaiser, the real number is indeed “much greater” than 87 million users. And Kaiser has a possible explanation for how Cambridge Analytica got data on all these additional users: they had more than one app scraping Facebook profile data.

    And the way Kaiser puts it, it sounds like there were quite a few different apps used by Cambridge Analytica. Including one she calls the “sex compass quiz”. So, yes, the Trump team was apparently exploring the sexual predilections of the American electorate.

    Additionally, Kaiser makes references to Cambridge Analytica’s “partners”. As she puts it, “I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login–for example, the ‘sex compass’ quiz.” So is that reference to Cambridge Analytica’s “partners” a reference to SCL or Aleksandr Kogan’s Global Science Research (GSR) company? Or were there other third-party firms that are also feeding information into Cambridge Analytica? The Republican National Committee, perhaps?

    Along those lines, Kaiser makes another remarkable claim: that the office culture was like the “Wild West” and that personal data was “being scraped, resold and modeled willy-nilly.” So Kaiser is asserting that Cambridge Analytica resold the data too? It sure sounds like it.

    These are the kinds of questions raised by Brittany Kaiser’s new claims. Along with the open question of exactly how many people Cambridge Analytica was collecting this kind of Facebook data on. We know it’s “much greater” than 87 million, according to Kaiser, but we have no idea how much greater it is:

    Newsweek

    Who Is Brittany Kaiser? Facebook Leak ‘Much Greater’ Than 87M Accounts Warns Ex-Cambridge Analytica Director

    By Jason Murdock
    On 4/17/18 at 12:30 PM

    Cambridge Analytica, the London-based political analysis firm that worked on the presidential election campaign of Donald Trump, used multiple apps to harvest Facebook data—and the true scope of the abuse is likely “much greater” than 87 million accounts, a former staffer-turned-whistleblower has claimed.

    Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January this year, told a U.K. government committee on Tuesday the firm had used Facebook data it previously claimed to have deleted.

    Facebook has faced an unprecedented backlash after user data was allegedly abused by a researcher called Aleksandr Kogan. Kogan has been accused of using a personality test app to obtain data linked to millions of accounts.

    Kaiser, who released a number of new documents into the public domain alleging to show how the company worked on proposals for the U.K. “Brexit” campaign, wrote in a testimony submitted to the government’s enquiry into fake news: “I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login–for example, the ‘sex compass’ quiz.

    “I do not know the specifics of these surveys or how the data was acquired or processed. But I believe it is almost certain that the number of Facebook users whose data was compromised through routes similar to that used by Kogan is much greater than 87 million; and that both Cambridge Analytica and other unconnected companies and campaigns were involved in these activities.”

    Facebook’s founder and CEO, Mark Zuckerberg, has said Kogan broke the website’s policies and stressed a full audit is currently taking place to find out if other apps were using similar tactics. It is believed that Kogan—who is alleged to have sold the information to Cambridge Analytica—designed the system so users’ social media activity could be used for intensive political profiling.

    Zuckerberg himself has warned all users were at risk of data scraping.

    According to Kaiser, a U.S. citizen who, alongside former Cambridge Analytica staffer Christopher Wylie, is now considered a whistleblower, her former employer used the Facebook data during sales pitches to potential clients.

    She alleged it had links to the London bureau of far-right news website Breitbart and significant time during the hearing was dedicated to its suspected work with Leave.EU, a campaign pushing for Britain to exit the European Union (EU). In a series of updates via Twitter, Cambridge Analytica denied links to Leave.EU.

    In a statement to Newsweek, Cambridge Analytica said:

    “In the past Cambridge Analytica has designed and run quizzes for internal research projects. This has included a fairly conventional personality quiz as well as broader quizzes such as one that probed people’s music preferences.

    “Data collected from these quizzes were always collected under a clear statement of consent. When members of the public logged into a quiz with their Facebook details, only their public profile information was collected. The volumes of users who took the quizzes numbered in the tens of thousands: any suggestion that we collected data on the scale of [Global Science Research Limited] is incorrect.

    “We no longer run such quizzes or hold data that was collected in this way.”

    Who is Brittany Kaiser?

    According to her written testimony, Kaiser was born in Houston, Texas, and grew up in Chicago. She was a part of Barack Obama’s media team during the presidential campaign in 2007 and has also worked for Amnesty International as a lobbyist appealing for an end to crimes against humanity. This month, Kaiser started a Facebook campaign appealing for transparency called #OwnYourData.

    During her time at Cambridge Analytica she worked on sales proposals and liaised with clients. She worked under senior management including CEO Alexander Nix, who this week declined to appear before the same fake news enquiry.

    Kaiser claimed that the office culture was like the “Wild West” and alleged that citizens’ data was “being scraped, resold and modeled willy-nilly.”

    “Privacy has become a myth, and tracking people’s behavior has become an essential part of using social media and the internet itself; tools that were meant to free our minds and make us more connected, with faster access to information than ever before,” she wrote in her testimony.

    “Instead of connecting us, these tools have divided us. It’s time to expose their abuses, so we can have an honest conversation about how we build a better way forward,” Kaiser added.

    ———-

    “Who Is Brittany Kaiser? Facebook Leak ‘Much Greater’ Than 87M Accounts Warns Ex-Cambridge Analytica Director” by Jason Murdock; Newsweek; 04/17/2018

    “Kaiser claimed that the office culture was like the “Wild West” and alleged that citizens’ data was “being scraped, resold and modeled willy-nilly.””

    That’s right, Cambridge Analytica wasn’t just scraping Facebook users’ data. It was apparently reselling that data too. These are the claims of Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January this year, during her testimony to a UK government committee:


    Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January this year, told a U.K. government committee on Tuesday the firm had used Facebook data it previously claimed to have deleted.

    And according to Kaiser, the additional apps used by Cambridge Analytica include a “sex compass” quiz.


    Kaiser, who released a number of new documents into the public domain alleging to show how the company worked on proposals for the U.K. “Brexit” campaign, wrote in a testimony submitted to the government’s enquiry into fake news: “I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login–for example, the ‘sex compass’ quiz.

    And keep in mind that the use of this sex quiz app is probably pretty similar to how Aleksandr Kogan’s psychological profiling app worked: you use the data collected on the people taking the quiz as the “training set” for developing algorithms that infer Facebook users’ sexual preferences from their profile data. And then Cambridge Analytica uses those algorithms to make educated guesses about the ‘sexual compass’ of all the other Facebook users it has profile data on. We don’t know that this is what Cambridge Analytica did with the ‘sex compass’ app, but it’s probably what they did, because that’s the business they’re in.
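    For illustration, here’s a minimal sketch of that train-on-the-quiz-takers, score-everyone-else pattern. Everything in it is invented: synthetic binary “page like” features, a synthetic trait, and plain logistic regression fit by gradient descent. It’s a toy version of the general technique, not Cambridge Analytica’s actual model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_likes = 20  # each profile reduced to 20 binary "page like" features

    # A hidden relationship between likes and some trait (unknown to the modeler)
    true_w = rng.normal(size=n_likes)

    def make_profiles(n):
        X = rng.integers(0, 2, size=(n, n_likes)).astype(float)
        p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
        return X, (rng.random(n) < p).astype(float)

    X_quiz, y_quiz = make_profiles(1000)        # quiz-takers: likes + survey answers
    X_friends, y_friends = make_profiles(5000)  # scraped friends: likes only

    # Training set: fit logistic regression on the quiz-takers by gradient descent
    w = np.zeros(n_likes)
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-(X_quiz @ w)))
        w -= 0.1 * (X_quiz.T @ (p - y_quiz)) / len(y_quiz)

    # Inference step: guess the trait for friends who never took any quiz
    pred = (1.0 / (1.0 + np.exp(-(X_friends @ w))) > 0.5).astype(float)
    accuracy = float((pred == y_friends).mean())
    print(f"inferred trait accuracy on non-quiz-takers: {accuracy:.2f}")
    ```

    The point of the sketch is the leverage: a few hundred thousand consenting quiz-takers are enough to fit a model that is then applied to tens of millions of scraped profiles that never consented to anything.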

    And it’s the use of all these additional apps that Kaiser saw Cambridge Analytica employ that appears to be the basis for her conclusion that the number of Facebook profiles scraped by Cambridge Analytica is “much greater than 87 million”. And she also asserts, quite reasonably, that Cambridge Analytica wasn’t the only entity engaged in this kind of activity:


    “I do not know the specifics of these surveys or how the data was acquired or processed. But I believe it is almost certain that the number of Facebook users whose data was compromised through routes similar to that used by Kogan is much greater than 87 million; and that both Cambridge Analytica and other unconnected companies and campaigns were involved in these activities.”

    So how much higher is that 87 million figure going to go? Well, there’s one other highly significant number we should keep in mind when trying to understand what kind of data Cambridge Analytica acquired: The company claimed to have up to 5,000 data points on 220 million Americans. Also keep in mind that 220 million is greater than the total number of Facebook users in the US (~214 million in 2018).

    So if we’re wondering how high that 87 million figure might go, the answer might be something along the lines of “almost all the Facebook users in the US in 2014-2015”. Whatever that number happens to be is probably the answer.

    Posted by Pterrafractyl | April 17, 2018, 3:43 pm
  9. Here’s a set of articles on one of the figures who co-founded both Cambridge Analytica and its parent company SCL Group: Nigel Oakes.

    While Cambridge Analytica’s former-CEO Alexander Nix has received much of the attention directed at Cambridge Analytica, especially following the shocking hidden-camera footage of Nix talking to an undercover reporter he thought was a client, the story of Cambridge Analytica ultimately leads to Oakes according to multiple sources.

    So who is Nigel Oakes? Well, as the following article notes, Oakes got his start in the business of influencing people in the field of “marketing aromatics,” or the use of smells to make consumers spend more money. He also dated Lady Helen Windsor when he was younger, which made him a somewhat publicly known person in the UK.

    In 1993, Oakes co-founded Strategic Communication Laboratories, the predecessor to SCL Group. In 2005, he co-founded SCL Group, which at the time made headlines when it billed itself at a global arms fair in London as the first private company to provide psychological warfare services. Oakes said he was confident that psyops could shorten military conflicts. As he put it, “We used to be in the business of mind bending for political purposes, but now we are in the business of saving lives.”

    SCL sold the same psychological warfare products in the US. Services included manipulation of elections and “perception management,” or the intentional spread of fake news. And the US State Department remains a client and confirmed that it retains SCL Group on a contract to “provide research and analytical support in connection with our mission to counter terrorist propaganda and disinformation overseas.”

    So Nigel Oakes has quite an interesting history. A history that he unwittingly encapsulated with a now-notorious quote he gave in 1992:
    “We use the same techniques as Aristotle and Hitler…We appeal to people on an emotional level to get them to agree on a functional level.”

    Politico

    Cambridge Analytica boss went from ‘aromatics’ to psyops to Trump’s campaign

    While Alexander Nix draws headlines for his role in the Trump 2016 digital operation, his colorful business partner Nigel Oakes may be an equally important figure.

    By Josh Meyer

    3/22/18, 10:15 AM CET

    Updated 3/23/18, 4:17 AM CET

    WASHINGTON — Long before the political data firm he oversees, Cambridge Analytica, helped Donald Trump become president, Nigel Oakes tried a very different form of influencing human behavior. It was called “marketing aromatics,” or the use of smells to make consumers spend more money.

    In the decades since, the Eton-educated British businessman has styled himself as an expert on a wide variety of “mind-bending” techniques — from scents to psychological warfare to campaign politics.

    But some 25 years after his foray into aromatics, a bad odor has arisen around his use of data to influence voter behavior. Oakes and his partners, who include Cambridge Analytica CEO Alexander Nix, are under intense scrutiny over their methods in the 2016 campaign, including the alleged improper use of Facebook data. Some news reports have also found links to Russia that the company has downplayed.

    Oakes and the company he co-founded in 2005 along with Nix, SCL Group, have now drawn the interest of congressional officials. Three Republican senators wrote Oakes a letter this week requesting information and a briefing related to Facebook’s sudden suspension last Friday of Cambridge Analytica, which is a closely affiliated subsidiary of SCL.

    The request — from Senate commerce committee members John Thune (R-S.D.), Roger Wicker (R-Miss.) and Jerry Moran, (R-Kan.) — came after recent allegations that Cambridge Analytica used inappropriately harvested private Facebook data on nearly 50 million users and exploited the information to assist President Donald Trump’s 2016 campaign.

    But that has triggered wider questions about whether Cambridge Analytica, whose board once included former Trump political strategist Steve Bannon, could have played some role in the Kremlin’s scheme to manipulate U.S. social media in 2016. The company denies that.

    Captured on an undercover video by Britain’s Channel 4 News, Nix boasted that the firm “did all the research, all the data, all the analytics, all the targeting,” for the Trump campaign, adding that “our data informed all the strategy.” (Trump officials call that an exaggeration.)

    Adding to the concern is the role of Aleksandr Kogan, a Russian-born researcher at Cambridge University who collected the Facebook data without disclosing that it would be used commercially, and who was also working for a university in St. Petersburg, Russia at the time. Cambridge Analytica also reportedly discussed a business relationship in 2014 and 2015 with the the Kremlin-connected Russian oil giant Lukoil, which expressed interest in how data is used to target American voters, according to the New York Times.

    The recent flurry of coverage has barely mentioned the 55-year-old Oakes, a virtual unknown in the U.S. but more familiar in Great Britain, in part because of his relationship in the 1990s with a member of the royal Windsor family.

    But data analytics experts described Oakes as a hidden hand running both SCL and Cambridge Analytica.

    “Anyone right now that is focusing on the problems with Cambridge Analytica should be backtracking to the source, which is Nigel Oakes,” said Sam Woolley, research director of the Digital Intelligence Lab at the Silicon Valley-based Institute for the Future.

    “My research has shown that Cambridge Analytica is the tip of the iceberg of Nigel Oakes’ empire of psyops and information ops around the world,” said Woolley, whose research aims to help protect democracy from the nefarious use of rapidly evolving communications technology. “As you start to dig in to that, you find out a lot of very concerning things.”

    Woolley said he attended a Cambridge Analytica “meet-up” in April 2016 during the presidential primary in New York. At the time, the company was working for another candidate, Senator Ted Cruz (R-Texas), and gave a wide-ranging overview of their activities, Woolley said.

    It was clear from the session that the two companies are completely intertwined, Woolley said. He recalled that Cambridge Analytica leaders “conflated all of their work with SCL’s work,” including in several overseas elections. Based on his ongoing research, he described the two firms as selling “political marketing to the highest bidder, whether you’re in government, the military or politics, even authoritarian” regimes.

    Oakes and SCL Group did not return calls seeking comment through a spokesperson.

    SCL Group — like its predecessor, Strategic Communication Laboratories, which Oakes co-founded in 1993 — is no stranger to controversies related to foreign elections, including in connection with dirty tricks it has allegedly employed on behalf of political clients from Europe to Africa and Asia.

    The company also made headlines in 2005 when it billed itself at a global arms fair in London as the first private company to provide psychological warfare services, or “psyops,” to the British military.

    At the time, Oakes, as chief executive, said he was confident that psyops could shorten military conflicts, and that governments would buy such a service, which SCL had provided commercially.

    “We used to be in the business of mind bending for political purposes,” he told a reporter, “but now we are in the business of saving lives.”

    Those who know Oakes, or know of him, are somewhat skeptical.

    One private investigator said the company is known to have done extensive work for the U.S. military and other government agencies against targets including Iran. SCL got its start in the U.S. by selling the same psychological warfare product as it did to the British, including manipulation of elections and “perception management,” or the intentional spread of fake news.

    The State Department confirmed to Defense One this week that it retains SCL Group on a contract to “provide research and analytical support in connection with our mission to counter terrorist propaganda and disinformation overseas.”

    Company literature describes some of SCL’s services, besides “psychological warfare,” as “influence operations” and “public diplomacy.”

    Absent from such descriptions is some of the more bombastic rhetoric of Oakes’ youth.

    “We use the same techniques as Aristotle and Hitler,” he told an interviewer in 1992. “We appeal to people on an emotional level to get them to agree on a functional level.”

    On its website, SCL Group does not highlight its connections to Cambridge Analytica.

    “Our vision is to be the premier provider of data analytics and strategy for behavior change,” the website says.

    “Our mission is to create behavior change through research, data, analytics, and strategy for both domestic and international government clients.”

    But Oakes and his company have a history of secrecy, making the hidden-camera footage of Nix all the more shocking. In the footage aired by Channel 4, Nix appears to tell a journalist posing as a potential client that the company could, for instance, send Ukrainian sex workers to an opponent’s house to sabotage him.

    SCL Group said it has suspended Nix while it investigates, and several U.S. lawmakers cited the reports in saying that they want to call him back before committees investigating Russian meddling to answer more questions.

    One British journalist who has investigated the two companies and their leaders also suggested that the real trail of questions leads to Oakes.

    “Alexander Nix has been suspended from a shell company that has no employees and no assets,” said Carole Cadwalladr of the Observer, who authored last weekend’s expose, and others. “If you think this ends here, think again.”

    The letter from the three senators — which they also sent to Facebook CEO Mark Zuckerberg — asks Oakes whether he acknowledges the conduct described in Facebook’s statement announcing the suspension of Nix’s account, and those of both SCL Group and Cambridge Analytica.

    It also asks him to provide information about whether he was aware of other activity by Cambridge that Facebook said led to the suspension, including how it accessed the data in question and whether it falsely certified that it had destroyed it at Facebook’s request.

    “Consumers rely on app developers to be transparent and truthful in their terms of service so consumers can make informed decisions about whether to consent to the sharing and use of their data,” the senators wrote. “Therefore, the allegation that SCL was not forthcoming with Facebook or transparent with consumers is troubling.”

    The senators reminded Oakes that their committee has jurisdiction over the internet and communications technologies generally, as well as over consumer protection and data privacy issues.

    Meanwhile, Democrats who have been investigating Russian election interference and suspected collusion between the Kremlin and the Trump campaign are expressing heightened interest in Oakes’s company, though for now their focus is primarily on Nix.

    Representative Adam Schiff, the top Democrat on the House intelligence committee, said on MSNBC Wednesday that he was particularly concerned about Nix’s comments, captured by Channel 4, about how he got off easy during his interview with Congress.

    “The Republicans asked three questions. Five minutes, done,” Nix said. And while the Democrats asked two hours of questions, Nix said he didn’t have to answer them because “it’s voluntary.”

    ———-

    “Cambridge Analytica boss went from ‘aromatics’ to psyops to Trump’s campaign” by Josh Meyer; Politico; 03/22/2018

    ““Anyone right now that is focusing on the problems with Cambridge Analytica should be backtracking to the source, which is Nigel Oakes,” said Sam Woolley, research director of the Digital Intelligence Lab at the Silicon Valley-based Institute for the Future.”

    Nigel Oakes is seen as “the source” of Cambridge Analytica. And Cambridge Analytica is seen as merely “the tip of the iceberg of Nigel Oakes’ empire of psyops and information ops around the world”:


    “My research has shown that Cambridge Analytica is the tip of the iceberg of Nigel Oakes’ empire of psyops and information ops around the world,” said Woolley, whose research aims to help protect democracy from the nefarious use of rapidly evolving communications technology. “As you start to dig in to that, you find out a lot of very concerning things.”

    And that’s how British journalist Carole Cadwalladr, who has done extensive reporting on Cambridge Analytica over the last year, also sees it: the questions about Cambridge Analytica lead to Oakes:


    One British journalist who has investigated the two companies and their leaders also suggested that the real trail of questions leads to Oakes.

    “Alexander Nix has been suspended from a shell company that has no employees and no assets,” said Carole Cadwalladr of the Observer, who authored last weekend’s expose, and others. “If you think this ends here, think again.”

    And it’s no surprise that Cambridge Analytica questions lead to Oakes. He helped co-found it, along with SCL Group in 2005 and Strategic Communication Laboratories in 1993:


    Oakes and the company he co-founded in 2005 along with Nix, SCL Group, have now drawn the interest of congressional officials. Three Republican senators wrote Oakes a letter this week requesting information and a briefing related to Facebook’s sudden suspension last Friday of Cambridge Analytica, which is a closely affiliated subsidiary of SCL.

    SCL Group — like its predecessor, Strategic Communication Laboratories, which Oakes co-founded in 1993 — is no stranger to controversies related to foreign elections, including in connection with dirty tricks it has allegedly employed on behalf of political clients from Europe to Africa and Asia.

    And Oakes has been pitching SCL Group as a private psychological warfare service provider for years. So if we’re exploring how Cambridge Analytica got into the business of manipulating the masses, the fact that SCL has been providing those services to the US and UK governments for years is a pretty big factor in that story. When Cambridge Analytica was formed in 2013, its team was already quite experienced in these kinds of matters:


    The company also made headlines in 2005 when it billed itself at a global arms fair in London as the first private company to provide psychological warfare services, or “psyops,” to the British military.

    At the time, Oakes, as chief executive, said he was confident that psyops could shorten military conflicts, and that governments would buy such a service, which SCL had provided commercially.

    “We used to be in the business of mind bending for political purposes,” he told a reporter, “but now we are in the business of saving lives.”

    Those who know Oakes, or know of him, are somewhat skeptical.

    One private investigator said the company is known to have done extensive work for the U.S. military and other government agencies against targets including Iran. SCL got its start in the U.S. by selling the same psychological warfare product as it did to the British, including manipulation of elections and “perception management,” or the intentional spread of fake news.

    The State Department confirmed to Defense One this week that it retains SCL Group on a contract to “provide research and analytical support in connection with our mission to counter terrorist propaganda and disinformation overseas.”

    Company literature describes some of SCL’s services, besides “psychological warfare,” as “influence operations” and “public diplomacy.”

    And as the hidden-camera footage of Alexander Nix showed the world, those mass manipulation services include dirty tricks. Like sending Ukrainian sex workers to an opponent’s house to sabotage him. It’s an indicator of the amoral character of the people behind Cambridge Analytica and its SCL Group parent:


    But Oakes and his company have a history of secrecy, making the hidden-camera footage of Nix all the more shocking. In the footage aired by Channel 4, Nix appears to tell a journalist posing as a potential client that the company could, for instance, send Ukrainian sex workers to an opponent’s house to sabotage him.

    And that amorality is perfectly encapsulated in a now-notorious 1992 quote from Oakes, where he favorably compares his work in psychological manipulation with the techniques employed by Hitler:


    Absent from such descriptions is some of the more bombastic rhetoric of Oakes’ youth.

    “We use the same techniques as Aristotle and Hitler,” he told an interviewer in 1992. “We appeal to people on an emotional level to get them to agree on a functional level.”

    And that 1992 quote wasn’t the only ‘we use the same techniques as Hitler!’ quote Oakes has made over the years. As the following article notes, Oakes made the same admission last year in reference to the techniques employed by Cambridge Analytica for the Trump campaign:

    The Huffington Post

    Cambridge Analytica Founder Once Compared Trump To Hitler
    Trump vilified Muslims the same way Hitler vilified Jews, Nigel Oakes said.

    By Willa Frej
    04/17/2018 12:32 pm ET Updated

    Nigel Oakes, who runs the group that founded data mining firm Cambridge Analytica, admitted in an interview last year that President Donald Trump’s controversial propaganda tactics mirror those of Adolf Hitler.

    Both Hitler and Trump have successfully attacked another group, turning it into an “artificial enemy,” in order to foster greater support among loyalists, Oakes, the CEO of SCL Group, Cambridge Analytica’s parent company, said last November.

    He made the comments as part of a series of interviews that Emma Briant, a University of Essex lecturer, conducted with people involved in Britain’s campaign to leave the European Union about propaganda used during the Brexit referendum. Britain’s Parliament released the interview transcripts Monday.

    “Hitler, got to be very careful about saying so, must never probably say this, off the record, but of course Hitler attacked the Jews, because… He didn’t have a problem with the Jews at all, but the people didn’t like the Jews,” Oakes said. “So if the people… He could just use them to say… So he just leverage an artificial enemy. Well that’s exactly what Trump did. He leveraged a Muslim- I mean, you know, it’s- It was a real enemy. ISIS is a real, but how big a threat is ISIS really to America? Really, I mean, we are still talking about 9/11, well 9/11 is a long time ago.”

    Another one of the interviewees, former communications director for Leave.EU Andy Wigmore, compared the campaign’s own strategy to Hitler’s “very clever” propaganda machine.

    “In its pure marketing sense, you can see the logic of what they were saying, why they were saying it, and how they presented things, and the imagery,” he said of the Nazis. “And looking at that now, in hindsight, having been on the sharp end of this campaign, you think: crikey, this is not new, and it’s just … using the tools that you have at the time.”

    Cambridge Analytica said Oakes never worked for the company or the Trump campaign and said he was instead “speaking in a personal capacity about the historical use of propaganda to an academic he knew well from her work in the defence sphere,” according to a spokesperson.

    ———-

    “Cambridge Analytica Founder Once Compared Trump To Hitler” by Willa Frej; The Huffington Post; 04/17/2018

    ““Hitler, got to be very careful about saying so, must never probably say this, off the record, but of course Hitler attacked the Jews, because… He didn’t have a problem with the Jews at all, but the people didn’t like the Jews,” Oakes said. “So if the people… He could just use them to say… So he just leverage an artificial enemy. Well that’s exactly what Trump did. He leveraged a Muslim- I mean, you know, it’s- It was a real enemy. ISIS is a real, but how big a threat is ISIS really to America? Really, I mean, we are still talking about 9/11, well 9/11 is a long time ago.””

    And that’s Nigel Oakes in his own words: he saw Trump’s systematic fear mongering about virtually all Muslims as more or less the same cynical technique employed by Hitler.

    And when you look at the full quote provided to the UK parliament it sounds even worse because he’s framing the use of these demonization techniques as simply a way to fire up “your group” (your target base of supporters) by demonizing a different group that you don’t expect to vote for your candidate:

    Clip 8 – Nigel Oakes: Nazi methods of propaganda

    Emma Briant: It didn’t matter with the rest of what he’s [Donald Trump] saying, it didn’t matter if he is alienating all of the liberal women, actually, and I think he was never going to get them anyway.

    Nigel Oakes: That’s right

    Emma Briant: You’ve got to think about what would resonate with as many as possible.

    Nigel Oakes: And often, as you rightly say, it’s the things that resonate, sometimes to attack the other group and know that you are going to lose them is going to reinforce and resonate your group. Which is why, you know, Hitler, got to be very careful about saying so, must never probably say this, off the record, but of course Hitler attacked the Jews, because… He didn’t have a problem with the Jews at all, but the people didn’t like the Jews. So if the people… He could just use them to say… So he just leverage an artificial enemy. Well that’s exactly what Trump did. He leveraged a Muslim – I mean, you know, it’s – It was a real enemy. ISIS is a real, but how big a threat is ISIS really to America? Really, I mean, we are still talking about 9/11, well 9/11 is a long time ago.

    This interview was conducted by Dr Emma L Briant, University of Essex, both for the upcoming book “What’s wrong with the Democrats? Media Bias, Inequality and the rise of Donald Trump”, and for other upcoming publications.

    “And often, as you rightly say, it’s the things that resonate, sometimes to attack the other group and know that you are going to lose them is going to reinforce and resonate your group.”

    Attacking “the other group and know that you are going to lose them” in order to “reinforce and resonate your group.” That’s how Nigel Oakes matter-of-factly framed the use of the same kinds of mass manipulation techniques designed to generate an emotional appeal to a target political demographic. An emotional appeal that happens to be based on demonizing a group of people that your target demographic already generally dislikes. In other words, find the existing areas of hatred and inflame them.

    And services that strategically inflame those passions are something Nigel Oakes has been openly offering clients for decades. That’s all part of why Nigel Oakes is described as the real force behind Cambridge Analytica.

    At the same time, let’s not forget the previous reports about Cambridge Analytica whistle-blower Christopher Wylie and Wylie’s characterization of Steve Bannon as Alexander Nix’s real boss at Cambridge Analytica, despite Bannon technically serving as the company’s vice president and secretary. So while Nigel Oakes is clearly a critically important figure behind Cambridge Analytica, the question of who was really in charge of the Cambridge Analytica operation for the Trump team remains open. Although it was likely more of a Hitler-inspired group effort.

    Posted by Pterrafractyl | April 18, 2018, 3:28 pm
  10. Here’s an ominous article about Palantir (as if there aren’t enough ominous articles about Palantir) that highlights both the challenges the company faces in selling its surveillance services and its plans for overcoming those challenges: It turns out the services Palantir offers its clients are pretty labor intensive, potentially involving a large number of on-site Palantir employees. One notable example is JPMorgan, which hired Palantir to monitor the bank’s employees for the purpose of detecting miscreant behaviors. And this service involved as many as 120 “forward-deployed engineers” from Palantir working at JPMorgan, each one costing the bank as much as $3,000 a day. So from a price standpoint that’s obviously going to be an issue, even for a financial giant like JPMorgan. Although at JPMorgan it sounds like the bigger issue was that the executives learned that their emails and activity were potentially caught up in Palantir’s data dragnet too. But the overall cost of these “forward-deployed engineer” Palantir contractors is reportedly an issue for a number of other corporate clients that recently dropped Palantir, including Hershey Co., Coca-Cola, Nasdaq, American Express, and Home Depot.

    So how is Palantir planning on addressing the labor-intensive nature of its services to attract more clients? Automation, of course. And that’s already part of the new product Palantir is offering clients, called Foundry, which is already in use by Airbus SE and Merck KGaA. In other words, the automation of Palantir’s corporate surveillance services is almost here, and that means a lot more corporate clients are probably going to be hiring Palantir. So, yeah, that’s rather ominous.

    The article also includes a few more Palantir fun facts. For instance, while there are 2,000 engineers at the company, the Privacy and Civil Liberties Team only consists of 10 people.

    A second fun fact is about Peter Thiel. Apparently he’s planning on moving to Los Angeles and starting up a right-wing media empire. Oh goodie.

    The article also contains a couple of fun facts in relation to the questions about Palantir and Cambridge Analytica after the revelation that a Palantir employee was working with Cambridge Analytica to develop its psychological profiling algorithms: First, Palantir claims that the company turned down offers to work with Cambridge Analytica and that its employee, Alfredas Chmieliauskas, was purely working on his own. As the following article notes, that’s the same explanation Palantir gave when it was caught planning an orchestrated disinformation campaign against WikiLeaks and Anonymous. So the “lone employee” explanation appears to be a Palantir favorite.

    Additionally, the article notes that Palantir doesn’t advertise its services and instead relies purely on word-of-mouth. And that’s interesting in relation to the mystery of how it was that Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, just happened to stop by Cambridge Analytica’s London headquarters in mid-2013 to push the idea that the company should start working with Palantir. Now, it’s important to recall that part of what made Sophie Schmidt’s seemingly random visit in mid-2013 so curious is that Cambridge Analytica and Palantir had already started talking in early 2013. Still, it’s noteworthy if Palantir relies only on word-of-mouth referrals and Sophie Schmidt appeared to provide exactly that kind of referral, seemingly randomly and spontaneously.

    So that’s some of the new information we learn about Palantir in the following article. New information that’s all ominous, of course:

    Bloomberg Businessweek

    Peter Thiel’s data-mining company is using War on Terror tools to track American citizens. The scary thing? Palantir is desperate for new customers.

    By Peter Waldman, Lizette Chapman, and Jordan Robertson
    April 19, 2018

    High above the Hudson River in downtown Jersey City, a former U.S. Secret Service agent named Peter Cavicchia III ran special ops for JPMorgan Chase & Co. His insider threat group—most large financial institutions have one—used computer algorithms to monitor the bank’s employees, ostensibly to protect against perfidious traders and other miscreants.

    Aided by as many as 120 “forward-deployed engineers” from the data mining company Palantir Technologies Inc., which JPMorgan engaged in 2009, Cavicchia’s group vacuumed up emails and browser histories, GPS locations from company-issued smartphones, printer and download activity, and transcripts of digitally recorded phone conversations. Palantir’s software aggregated, searched, sorted, and analyzed these records, surfacing keywords and patterns of behavior that Cavicchia’s team had flagged for potential abuse of corporate assets. Palantir’s algorithm, for example, alerted the insider threat team when an employee started badging into work later than usual, a sign of potential disgruntlement. That would trigger further scrutiny and possibly physical surveillance after hours by bank security personnel.

    Over time, however, Cavicchia himself went rogue. Former JPMorgan colleagues describe the environment as Wall Street meets Apocalypse Now, with Cavicchia as Colonel Kurtz, ensconced upriver in his office suite eight floors above the rest of the bank’s security team. People in the department were shocked that no one from the bank or Palantir set any real limits. They darkly joked that Cavicchia was listening to their calls, reading their emails, watching them come and go. Some planted fake information in their communications to see if Cavicchia would mention it at meetings, which he did.

    It all ended when the bank’s senior executives learned that they, too, were being watched, and what began as a promising marriage of masters of big data and global finance descended into a spying scandal. The misadventure, which has never been reported, also marked an ominous turn for Palantir, one of the most richly valued startups in Silicon Valley. An intelligence platform designed for the global War on Terror was weaponized against ordinary Americans at home.

    Founded in 2004 by Peter Thiel and some fellow PayPal alumni, Palantir cut its teeth working for the Pentagon and the CIA in Afghanistan and Iraq. The company’s engineers and products don’t do any spying themselves; they’re more like a spy’s brain, collecting and analyzing information that’s fed in from the hands, eyes, nose, and ears. The software combs through disparate data sources—financial documents, airline reservations, cellphone records, social media postings—and searches for connections that human analysts might miss. It then presents the linkages in colorful, easy-to-interpret graphics that look like spider webs. U.S. spies and special forces loved it immediately; they deployed Palantir to synthesize and sort the blizzard of battlefield intelligence. It helped planners avoid roadside bombs, track insurgents for assassination, even hunt down Osama bin Laden. The military success led to federal contracts on the civilian side. The U.S. Department of Health and Human Services uses Palantir to detect Medicare fraud. The FBI uses it in criminal probes. The Department of Homeland Security deploys it to screen air travelers and keep tabs on immigrants.

    Police and sheriff’s departments in New York, New Orleans, Chicago, and Los Angeles have also used it, frequently ensnaring in the digital dragnet people who aren’t suspected of committing any crime. People and objects pop up on the Palantir screen inside boxes connected to other boxes by radiating lines labeled with the relationship: “Colleague of,” “Lives with,” “Operator of [cell number],” “Owner of [vehicle],” “Sibling of,” even “Lover of.” If the authorities have a picture, the rest is easy. Tapping databases of driver’s license and ID photos, law enforcement agencies can now identify more than half the population of U.S. adults.

    JPMorgan was effectively Palantir’s R&D lab and test bed for a foray into the financial sector, via a product called Metropolis. The two companies made an odd couple. Palantir’s software engineers showed up at the bank on skateboards. Neckties and haircuts were too much to ask, but JPMorgan drew the line at T-shirts. The programmers had to agree to wear shirts with collars, tucked in when possible.

    As Metropolis was installed and refined, JPMorgan made an equity investment in Palantir and inducted the company into its Hall of Innovation, while its executives raved about Palantir in the press. The software turned “data landfills into gold mines,” Guy Chiarello, who was then JPMorgan’s chief information officer, told Bloomberg Businessweek in 2011.

    Cavicchia was in charge of forensic investigations at the bank. Through Palantir, he gained administrative access to a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use. He had unprecedented access to everything, all at once, all the time, on one analytic platform. He was a one-man National Security Agency, surrounded by the Palantir engineers, each one costing the bank as much as $3,000 a day.

    Senior investigators stumbled onto the full extent of the spying by accident. In May 2013 the bank’s leadership ordered an internal probe into who had leaked a document to the New York Times about a federal investigation of JPMorgan for possibly manipulating U.S. electricity markets. Evidence indicated the leaker could have been Frank Bisignano, who’d recently resigned as JPMorgan’s co-chief operating officer to become CEO of First Data Corp., the big payments processor. Cavicchia had used Metropolis to gain access to emails about the leak investigation—some written by top executives—and the bank believed he shared the contents of those emails and other communications with Bisignano after Bisignano had left the bank. (Inside JPMorgan, Bisignano was considered Cavicchia’s patron—a senior executive who protected and promoted him.)

    JPMorgan officials debated whether to file a suspicious activity report with federal regulators about the internal security breach, as required by law whenever banks suspect regulatory violations. They decided not to—a controversial decision internally, according to multiple sources with the bank. Cavicchia negotiated a severance agreement and was forced to resign. He joined Bisignano at First Data, where he’s now a senior vice president. Chiarello also went to First Data, as president. After their departures, JPMorgan drastically curtailed its Palantir use, in part because “it never lived up to its promised potential,” says one JPMorgan executive who insisted on anonymity to discuss the decision.

    The bank, First Data, and Bisignano, Chiarello, and Cavicchia didn’t respond to separately emailed questions for this article. Palantir, in a statement responding to questions about how JPMorgan and others have used its software, declined to answer specific questions. “We are aware that powerful technology can be abused and we spend a lot of time and energy making sure our products are used for the forces of good,” the statement said.

    Much depends on how the company chooses to define good. In March a former computer engineer for Cambridge Analytica, the political consulting firm that worked for Donald Trump’s 2016 presidential campaign, testified in the British Parliament that a Palantir employee had helped Cambridge Analytica use the personal data of up to 87 million Facebook users to develop psychographic profiles of individual voters. Palantir said it has a strict policy against working on political issues, including campaigns, and showed Bloomberg emails in which it turned down Cambridge’s request to work with Palantir on multiple occasions. The employee, Palantir said, worked with Cambridge Analytica on his own time. Still, there was no mistaking the implications of the incident: All human relations are a matter of record, ready to be revealed by a clever algorithm. Everyone is a spidergram now.

    Thiel, who turned 50 in October, long reveled as the libertarian black sheep in left-leaning Silicon Valley. He contributed $1.25 million to Trump’s presidential victory, spoke at the Republican convention, and has dined with Trump at the White House. But Thiel has told friends he’s had enough of the Bay Area’s “monocultural” liberalism. He’s ditching his longtime base in San Francisco and moving his personal investment firms this year to Los Angeles, where he plans to establish his next project, a conservative media empire.

    As Thiel’s wealth has grown, he’s gotten more strident. In a 2009 essay for the Cato Institute, he railed against taxes, ­government, women, poor people, and society’s acquiescence to the inevitability of death. (Thiel doesn’t accept death as inexorable.) He wrote that he’d reached some radical conclusions: “Most importantly, I no longer believe that freedom and democracy are compatible.” The 1920s was the last time one could feel “genuinely optimistic” about American democracy, he said; since then, “the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”

    Thiel went into tech after missing a prized Supreme Court clerkship following his graduation from Stanford Law School. He co-founded PayPal and then parlayed his winnings from its 2002 sale to EBay Inc. into a career in venture investing. He made an early bet on Facebook Inc. (where he’s still on the board), which accounts for most of his $3.3 billion fortune, as estimated by Bloomberg, and launched his career as a backer of big ideas—things like private space travel (through an investment in SpaceX), hotel alternatives (Airbnb), and floating island nations (the Seasteading Institute).

    He started Palantir—named after the omniscient crystal balls in J.R.R. Tolkien’s Lord of the Rings trilogy—three years after the attacks of Sept. 11, 2001. The CIA’s investment arm, In-Q-Tel, was a seed investor. For the role of chief executive officer, he chose an old law school friend and self-described neo-Marxist, Alex Karp. Thiel told Bloomberg in 2011 that civil libertarians ought to embrace Palantir, because data mining is less repressive than the “crazy abuses and draconian policies” proposed after Sept. 11. The best way to prevent another catastrophic attack without becoming a police state, he argued, was to give the government the best surveillance tools possible, while building in safeguards against their abuse.

    Legend has it that Stephen Cohen, one of Thiel’s co-founders, programmed the initial prototype for Palantir’s software in two weeks. It took years, however, to coax customers away from the longtime leader in the intelligence analytics market, a software company called I2 Inc.

    In one adventure missing from the glowing accounts of Palantir’s early rise, I2 accused Palantir of misappropriating its intellectual property through a Florida shell company registered to the family of a Palantir executive. A company claiming to be a private eye firm had been licensing I2 software and development tools and spiriting them to Palantir for more than four years. I2 said the cutout was registered to the family of Shyam Sankar, Palantir’s director of business development.

    I2 sued Palantir in federal court, alleging fraud, conspiracy, and copyright infringement. In its legal response, Palantir argued it had the right to appropriate I2’s code for the greater good. “What’s at stake here is the ability of critical national security, defense and intelligence agencies to access their own data and use it interoperably in whichever platform they choose in order to most effectively protect the citizenry,” Palantir said in its motion to dismiss I2’s suit.

    The motion was denied. Palantir agreed to pay I2 about $10 million to settle the suit. I2 was sold to IBM in 2011.

    Sankar, Palantir employee No. 13 and now one of the company’s top executives, also showed up in another Palantir scandal: the company’s 2010 proposal for the U.S. Chamber of Commerce to run a secret sabotage campaign against the group’s liberal opponents. Hacked emails released by the group Anonymous indicated that Palantir and two other defense contractors pitched outside lawyers for the organization on a plan to snoop on the families of progressive activists, create fake identities to infiltrate left-leaning groups, scrape social media with bots, and plant false information with liberal groups to subsequently discredit them.

    After the emails emerged in the press, Palantir offered an explanation similar to the one it provided in March for its U.K.-based employee’s assistance to Cambridge Analytica: It was the work of a single rogue employee. The company never explained Sankar’s involvement. Karp issued a public apology and said he and Palantir were deeply committed to progressive causes. Palantir set up an advisory panel on privacy and civil liberties, headed by a former CIA attorney, and beefed up an engineering group it calls the Privacy and Civil Liberties Team. The company now has about 10 PCL engineers on call to help vet clients’ requests for access to data troves and pitch in with pertinent thoughts about law, morality, and machines.

    During its 14 years in startup mode, Palantir has cultivated a mystique as a haven for brilliant engineers who want to solve big problems such as terrorism and human trafficking, unfettered by pedestrian concerns such as making money. Palantir executives boast of not employing a single salesperson, relying instead on word-of-mouth referrals.

    The company’s early data mining dazzled venture investors, who valued it at $20 billion in 2015. But Palantir has never reported a profit. It operates less like a conventional software company than like a consultancy, deploying roughly half its 2,000 engineers to client sites. That works at well-funded government spy agencies seeking specialized applications but has produced mixed results with corporate clients. Palantir’s high installation and maintenance costs repelled customers such as Hershey Co., which trumpeted a Palantir partnership in 2015 only to walk away two years later. Coca-Cola, Nasdaq, American Express, and Home Depot have also dumped Palantir.

    Karp recognized the high-touch model was problematic early in the company’s push into the corporate market, but solutions have been elusive. “We didn’t want to be a services company. We wanted to do something that was cost-efficient,” he confessed at a European conference in 2010, in one of several unguarded comments captured in videos posted online. “Of course, what we didn’t recognize was that this would be much, much harder than we realized.”

    Palantir’s newest product, Foundry, aims to finally break through the profitability barrier with more automation and less need for on-site engineers. Airbus SE, the big European plane maker, uses Foundry to crunch airline data about specific onboard components to track usage and maintenance and anticipate repair problems. Merck KGaA, the pharmaceutical giant, has a long-term Palantir contract to use Foundry in drug development and supply chain management.

    Deeper adoption of Foundry in the commercial market is crucial to Palantir’s hopes of a big payday. Some investors are weary and have already written down their Palantir stakes. Morgan Stanley now values the company at $6 billion. Fred Alger Management Inc., which has owned stock since at least 2006, revalued Palantir in December at about $10 billion, according to Bloomberg Holdings. One frustrated investor, Marc Abramowitz, recently won a court order for Palantir to show him its books, as part of a lawsuit he filed alleging the company sabotaged his attempt to find a buyer for the Palantir shares he has owned for more than a decade.

    As shown in the privacy breaches at Facebook and Cambridge Analytica—with Thiel and Palantir linked to both sides of the equation—the pressure to monetize data at tech companies is ceaseless. Facebook didn’t grow from a website connecting college kids into a purveyor of user profiles and predilections worth $478 billion by walling off personal data. Palantir says its Privacy and Civil Liberties Team watches out for inappropriate data demands, but it consists of just 10 people in a company of 2,000 engineers. No one said no to JPMorgan, or to whomever at Palantir volunteered to help Cambridge Analytica—or to another organization keenly interested in state-of-the-art data science, the Los Angeles Police Department.

    Palantir began work with the LAPD in 2009. The impetus was federal funding. After several Sept. 11 postmortems called for more intelligence sharing at all levels of law enforcement, money started flowing to Palantir to help build data integration systems for so-called fusion centers, starting in L.A. There are now more than 1,300 trained Palantir users at more than a half-dozen law enforcement agencies in Southern California, including local police and sheriff’s departments and the Bureau of Alcohol, Tobacco, Firearms and Explosives.

    The LAPD uses Palantir’s Gotham product for Operation Laser, a program to identify and deter people likely to commit crimes. Information from rap sheets, parole reports, police interviews, and other sources is fed into the system to generate a list of people the department defines as chronic offenders, says Craig Uchida, whose consulting firm, Justice & Security Strategies Inc., designed the Laser system. The list is distributed to patrolmen, with orders to monitor and stop the pre-crime suspects as often as possible, using excuses such as jaywalking or fix-it tickets. At each contact, officers fill out a field interview card with names, addresses, vehicles, physical descriptions, any neighborhood intelligence the person offers, and the officer’s own observations on the subject.

    The cards are digitized in the Palantir system, adding to a constantly expanding surveillance database that’s fully accessible without a warrant. Tomorrow’s data points are automatically linked to today’s, with the goal of generating investigative leads. Say a chronic offender is tagged as a passenger in a car that’s pulled over for a broken taillight. Two years later, that same car is spotted by an automatic license plate reader near a crime scene 200 miles across the state. As soon as the plate hits the system, Palantir alerts the officer who made the original stop that a car once linked to the chronic offender was spotted near a crime scene.

    The platform is supplemented with what sociologist Sarah Brayne calls the secondary surveillance network: the web of who is related to, friends with, or sleeping with whom. One woman in the system, for example, who wasn’t suspected of committing any crime, was identified as having multiple boyfriends within the same network of associates, says Brayne, who spent two and a half years embedded with the LAPD while researching her dissertation on big-data policing at Princeton University and who’s now an associate professor at the University of Texas at Austin. “Anybody who logs into the system can see all these intimate ties,” she says. To widen the scope of possible connections, she adds, the LAPD has also explored purchasing private data, including social media, foreclosure, and toll road information, camera feeds from hospitals, parking lots, and universities, and delivery information from Papa John’s International Inc. and Pizza Hut LLC.

    The LAPD declined to comment for this story. Palantir sent Bloomberg a statement about its work with law enforcement: “Our [forward-deployed engineers] and [privacy and civil liberties] engineers work with the law enforcement customers (including LAPD) to ensure that the implementation of our software and integration of their source systems with the software is consistent with the Department’s legal and policy obligations, as well as privacy and civil liberties considerations that may not currently be legislated but are on the horizon. We as a company determine the types of engagements and general applications of our software with respect to those overarching considerations. Police Agencies have internal responsibility for ensuring that their information systems are used in a manner consistent with their policies and procedures.”

    Operation Laser has made L.A. cops more surgical—and, according to community activists, unrelenting. Once targets are enmeshed in a spidergram, they’re stuck.

    Palantir is twice the age most startups are when they cash out in a sale or initial public offering. The company needs to figure out how to be rewarded on Wall Street without creeping out Main Street. It might not be possible. For all of Palantir’s professed concern for individuals’ privacy, the single most important safeguard against abuse is the one it’s trying desperately to reduce through automation: human judgment.

    As Palantir tries to court corporate customers as a more conventional software company, fewer forward-deployed engineers will mean fewer human decisions. Sensitive questions, such as how deeply to pry into people’s lives, will be answered increasingly by artificial intelligence and machine-learning algorithms. The small team of Privacy and Civil Liberties engineers could find themselves even less influential, as the urge for omnipotence among clients overwhelms any self-imposed restraints.

    Computers don’t ask moral questions; people do, says John Grant, one of Palantir’s top PCL engineers and a forceful advocate for mandatory ethics education for engineers. “At a company like ours with millions of lines of code, every tiny decision could have huge implications,” Grant told a privacy conference in Berkeley last year.

    JPMorgan’s experience remains instructive. “The world changed when it became clear everyone could be targeted using Palantir,” says a former JPMorgan cyber expert who worked with Cavicchia at one point on the insider threat team. “Nefarious ideas became trivial to implement; everyone’s a suspect, so we monitored everything. It was a pretty terrible feeling.”
    ———–

    “Peter Thiel’s data-mining company is using War on Terror tools to track American citizens. The scary thing? Palantir is desperate for new customers.” by Peter Waldman, Lizette Chapman, and Jordan Robertson; Bloomberg Businessweek; 04/19/2018

    “High above the Hudson River in downtown Jersey City, a former U.S. Secret Service agent named Peter Cavicchia III ran special ops for JPMorgan Chase & Co. His insider threat group—most large financial institutions have one—used computer algorithms to monitor the bank’s employees, ostensibly to protect against perfidious traders and other miscreants.”

    Insider threat services. That appears to be one of the primary services Palantir is trying to offer corporate clients. It’s the kind of service that gives Palantir access to almost everything employees are doing in a company and basically turns it into a Big Brother-for-hire entity. And when JP Morgan hired Palantir to provide these services, the bank ended up dropping the service after senior executives learned that it was too Big Brother-ish and was watching the executives, too:


    Aided by as many as 120 “forward-deployed engineers” from the data mining company Palantir Technologies Inc., which JPMorgan engaged in 2009, Cavicchia’s group vacuumed up emails and browser histories, GPS locations from company-issued smartphones, printer and download activity, and transcripts of digitally recorded phone conversations. Palantir’s software aggregated, searched, sorted, and analyzed these records, surfacing keywords and patterns of behavior that Cavicchia’s team had flagged for potential abuse of corporate assets. Palantir’s algorithm, for example, alerted the insider threat team when an employee started badging into work later than usual, a sign of potential disgruntlement. That would trigger further scrutiny and possibly physical surveillance after hours by bank security personnel.

    Over time, however, Cavicchia himself went rogue. Former JPMorgan colleagues describe the environment as Wall Street meets Apocalypse Now, with Cavicchia as Colonel Kurtz, ensconced upriver in his office suite eight floors above the rest of the bank’s security team. People in the department were shocked that no one from the bank or Palantir set any real limits. They darkly joked that Cavicchia was listening to their calls, reading their emails, watching them come and go. Some planted fake information in their communications to see if Cavicchia would mention it at meetings, which he did.

    It all ended when the bank’s senior executives learned that they, too, were being watched, and what began as a promising marriage of masters of big data and global finance descended into a spying scandal. The misadventure, which has never been reported, also marked an ominous turn for Palantir, one of the most richly valued startups in Silicon Valley. An intelligence platform designed for the global War on Terror was weaponized against ordinary Americans at home.

    And this project at JP Morgan was basically the test lab for a new service Palantir is trying to offer the financial sector: Metropolis:


    JPMorgan was effectively Palantir’s R&D lab and test bed for a foray into the financial sector, via a product called Metropolis. The two companies made an odd couple. Palantir’s software engineers showed up at the bank on skateboards. Neckties and haircuts were too much to ask, but JPMorgan drew the line at T-shirts. The programmers had to agree to wear shirts with collars, tucked in when possible.

    As Metropolis was installed and refined, JPMorgan made an equity investment in Palantir and inducted the company into its Hall of Innovation, while its executives raved about Palantir in the press. The software turned “data landfills into gold mines,” Guy Chiarello, who was then JPMorgan’s chief information officer, told Bloomberg Businessweek in 2011.

    Cavicchia was in charge of forensic investigations at the bank. Through Palantir, he gained administrative access to a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use. He had unprecedented access to everything, all at once, all the time, on one analytic platform. He was a one-man National Security Agency, surrounded by the Palantir engineers, each one costing the bank as much as $3,000 a day.

    And through this JP Morgan test bed for Metropolis, Peter Cavicchia’s insider threat group was given access to “a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use”, along with a team of Palantir engineers to help him use that data. This is the business model Palantir was trying to test so it could sell it to other banks: using Palantir to give bank employees unprecedented access to the bank’s internal data (which, of course, means Palantir likely has access to that data too):


    Cavicchia was in charge of forensic investigations at the bank. Through Palantir, he gained administrative access to a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use. He had unprecedented access to everything, all at once, all the time, on one analytic platform. He was a one-man National Security Agency, surrounded by the Palantir engineers, each one costing the bank as much as $3,000 a day.

    But Palantir’s test bed at JP Morgan ultimately turned into a failed experiment when JP Morgan’s leadership learned that Cavicchia had apparently used his unprecedented access to internal documents to spy on JP Morgan executives who were investigating a leak to the New York Times. The leak appeared to have come from an executive who had just left the company, Frank Bisignano, who also happened to have been Cavicchia’s patron at the bank. And the leak investigation indicated that Cavicchia accessed executive emails about the investigation and passed their contents along to Bisignano. In other words, JP Morgan learned that the guy it made its corporate Big Brother abused that power (shocker):


    Senior investigators stumbled onto the full extent of the spying by accident. In May 2013 the bank’s leadership ordered an internal probe into who had leaked a document to the New York Times about a federal investigation of JPMorgan for possibly manipulating U.S. electricity markets. Evidence indicated the leaker could have been Frank Bisignano, who’d recently resigned as JPMorgan’s co-chief operating officer to become CEO of First Data Corp., the big payments processor. Cavicchia had used Metropolis to gain access to emails about the leak investigation—some written by top executives—and the bank believed he shared the contents of those emails and other communications with Bisignano after Bisignano had left the bank. (Inside JPMorgan, Bisignano was considered Cavicchia’s patron—a senior executive who protected and promoted him.)

    JPMorgan officials debated whether to file a suspicious activity report with federal regulators about the internal security breach, as required by law whenever banks suspect regulatory violations. They decided not to—a controversial decision internally, according to multiple sources with the bank. Cavicchia negotiated a severance agreement and was forced to resign. He joined Bisignano at First Data, where he’s now a senior vice president. Chiarello also went to First Data, as president. After their departures, JPMorgan drastically curtailed its Palantir use, in part because “it never lived up to its promised potential,” says one JPMorgan executive who insisted on anonymity to discuss the decision.

    Thus ended Palantir’s test run of Metropolis, highlighting the fact that the extensive manpower associated with Palantir’s services isn’t the only factor that might keep corporate clients away. The way those services create individuals with unprecedented access to a company’s internal documents might also drive clients off. After all, threat assessment groups are intended to mitigate risk, not exacerbate it.

    But the cost of all those on-site Palantir engineers is still an obstacle to wider adoption of Palantir’s services. As the article notes, roughly half of Palantir’s 2,000 engineers work on client sites:


    The company’s early data mining dazzled venture investors, who valued it at $20 billion in 2015. But Palantir has never reported a profit. It operates less like a conventional software company than like a consultancy, deploying roughly half its 2,000 engineers to client sites. That works at well-funded government spy agencies seeking specialized applications but has produced mixed results with corporate clients. Palantir’s high installation and maintenance costs repelled customers such as Hershey Co., which trumpeted a Palantir partnership in 2015 only to walk away two years later. Coca-Cola, Nasdaq, American Express, and Home Depot have also dumped Palantir.

    And that’s what Palantir’s newest product, Foundry, is designed to address. By increasingly automating the corporate surveillance process:


    Palantir’s newest product, Foundry, aims to finally break through the profitability barrier with more automation and less need for on-site engineers. Airbus SE, the big European plane maker, uses Foundry to crunch airline data about specific onboard components to track usage and maintenance and anticipate repair problems. Merck KGaA, the pharmaceutical giant, has a long-term Palantir contract to use Foundry in drug development and supply chain management.

    Deeper adoption of Foundry in the commercial market is crucial to Palantir’s hopes of a big payday. Some investors are weary and have already written down their Palantir stakes. Morgan Stanley now values the company at $6 billion. Fred Alger Management Inc., which has owned stock since at least 2006, revalued Palantir in December at about $10 billion, according to Bloomberg Holdings. One frustrated investor, Marc Abramowitz, recently won a court order for Palantir to show him its books, as part of a lawsuit he filed alleging the company sabotaged his attempt to find a buyer for the Palantir shares he has owned for more than a decade.

    “Deeper adoption of Foundry in the commercial market is crucial to Palantir’s hopes of a big payday.”

    And that appears to be the direction Palantir is heading: automated corporate surveillance that will allow the company to offer its services more cheaply and to more clients. So if Palantir succeeds, we just might see A LOT more companies hiring Palantir’s services, which means A LOT more employees are going to have Palantir’s software watching and analyzing their every keystroke and email. It really is pretty ominous. Especially given the fact that the company’s Privacy and Civil Liberties Team consists of a whole 10 people:


    As shown in the privacy breaches at Facebook and Cambridge Analytica—with Thiel and Palantir linked to both sides of the equation—the pressure to monetize data at tech companies is ceaseless. Facebook didn’t grow from a website connecting college kids into a purveyor of user profiles and predilections worth $478 billion by walling off personal data. Palantir says its Privacy and Civil Liberties Team watches out for inappropriate data demands, but it consists of just 10 people in a company of 2,000 engineers. No one said no to JPMorgan, or to whomever at Palantir volunteered to help Cambridge Analytica—or to another organization keenly interested in state-of-the-art data science, the Los Angeles Police Department.

    So that’s an overview of the current status of Palantir’s Big Brother-for-hire services: they’ve hit some obstacles, but if they can succeed in overcoming those obstacles, Palantir could become the go-to corporate surveillance firm. It’s more than a little ominous.

    And then there are the fun facts from this article that relate to the question of Palantir’s ties to Cambridge Analytica. First, just as Palantir claimed that its employee found to be working with Cambridge Analytica, Alfredas Chmieliauskas, was acting on his own, it offered the same excuse when it was caught pitching the US Chamber of Commerce on a secret campaign to spy on and sabotage the Chamber’s critics: it was just a lone employee:


    Sankar, Palantir employee No. 13 and now one of the company’s top executives, also showed up in another Palantir scandal: the company’s 2010 proposal for the U.S. Chamber of Commerce to run a secret sabotage campaign against the group’s liberal opponents. Hacked emails released by the group Anonymous indicated that Palantir and two other defense contractors pitched outside lawyers for the organization on a plan to snoop on the families of progressive activists, create fake identities to infiltrate left-leaning groups, scrape social media with bots, and plant false information with liberal groups to subsequently discredit them.

    After the emails emerged in the press, Palantir offered an explanation similar to the one it provided in March for its U.K.-based employee’s assistance to Cambridge Analytica: It was the work of a single rogue employee. The company never explained Sankar’s involvement. Karp issued a public apology and said he and Palantir were deeply committed to progressive causes. Palantir set up an advisory panel on privacy and civil liberties, headed by a former CIA attorney, and beefed up an engineering group it calls the Privacy and Civil Liberties Team. The company now has about 10 PCL engineers on call to help vet clients’ requests for access to data troves and pitch in with pertinent thoughts about law, morality, and machines.

    Finally, there’s the interesting fact that Palantir executives boast of not employing a single salesperson, relying instead on word of mouth:


    During its 14 years in startup mode, Palantir has cultivated a mystique as a haven for brilliant engineers who want to solve big problems such as terrorism and human trafficking, unfettered by pedestrian concerns such as making money. Palantir executives boast of not employing a single salesperson, relying instead on word-of-mouth referrals.

    And Sophie Schmidt, former Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, provided exactly that in June of 2013: a word-of-mouth endorsement of Palantir. So did Sophie Schmidt make this word-of-mouth pitch independently and coincidentally? It remains an unanswered question, but it’s hard to ignore that Schmidt’s pitch fits precisely the mode by which Palantir markets itself.

    So we’ll see what happens with Palantir and its drive to use automated corporate surveillance to cut costs and sell its Big Brother-for-hire services to even more large employers. But it does seem like just a matter of time before Palantir succeeds in cutting those costs, which means “word of mouth” isn’t just going to be Palantir’s approach to marketing. Word of mouth is also going to be the only way employees in the future will be able to say something to each other without Palantir knowing about it.

    Posted by Pterrafractyl | April 19, 2018, 7:43 pm
  11. Here’s an update on how Facebook is planning to address the scrutiny it’s receiving from the US Congress as the Cambridge Analytica scandal continues to play out: Facebook’s head of policy in the United States, Erin Egan, was just replaced. It’s a notable position, politically speaking, because it’s based in Washington DC, so Facebook basically just replaced one of its top DC lobbyists.

    So who replaced Egan? Kevin Martin, Facebook’s vice president of mobile and global access policy. Oh, and Martin is also a former Republican chairman of the Federal Communications Commission. Surprise!

    Martin will report to vice president of global public policy, Joel Kaplan. Oh, and Martin and Kaplan worked together in the George W. Bush White House and on Bush’s 2000 presidential campaign. Surprise again! There’s a distinct ‘K Street‘ feel to it all.

    Facebook is spinning this by emphasizing that Egan will remain chief privacy officer. The company is acting like it made this move in order to have someone with Egan’s credentials focused on rebuilding trust, and not so it could replace her with a Republican.

    And that appears to be Facebook’s strategy for dealing with Congress: tasking Republicans with lobbying their fellow Republicans:

    The New York Times

    Facebook Replaces Lobbying Executive Amid Regulatory Scrutiny

    By Cecilia Kang
    April 24, 2018

    WASHINGTON — Facebook on Tuesday replaced its head of policy in the United States, Erin Egan, as the social network scrambles to respond to intense scrutiny from federal regulators and lawmakers.

    Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for lobbying and government relations as head of policy for the last two years. She will be replaced by Kevin Martin on an interim basis, the company said. Mr. Martin has been Facebook’s vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission.

    Ms. Egan will remain chief privacy officer and focus on privacy policies across the globe, Andy Stone, a Facebook spokesman, said.

    Elliot Schrage, Facebook’s vice president of communications and public policy, said in a statement on Wednesday: “We need to focus our best people on our most important priorities. We are committed to rebuilding people’s trust in how we handle their information, and Erin is the best person to partner with our product teams on that task.”

    The executive reshuffling in Facebook’s Washington offices followed a period of tumult for the company, which has put it increasingly in the spotlight on Capitol Hill. Last month, The New York Times and others reported that the data of millions of Facebook users had been harvested by the British political research firm Cambridge Analytica. The ensuing outcry led Facebook’s chief executive, Mark Zuckerberg, to testify at two congressional hearings this month.

    Since the revelations about Cambridge Analytica, the Federal Trade Commission has started an investigation of whether Facebook violated promises it made in 2011 to protect the privacy of users, making it harder for the company to share data with third parties.

    At the same time, Facebook is grappling with increased privacy regulations outside the United States. Sweeping new privacy laws called the General Data Protection Regulation are set to take effect in Europe next month. And Facebook has been called to talk to regulators in several countries, including Ireland, Germany and Indonesia, about its handling of user data.

    Mr. Zuckerberg told Congress this month that Facebook had grown too fast and that he hadn’t foreseen the problems the platform would confront.

    “Facebook is an idealistic and optimistic company,” he said. “For most of our existence, we focused on all the good that connecting people can bring.”

    The executive shifts put two Republican men in charge of Facebook’s Washington offices. Mr. Martin will report to Joel Kaplan, vice president of global public policy. Mr. Martin and Mr. Kaplan worked together in the George W. Bush White House and on Mr. Bush’s 2000 presidential campaign.

    Facebook hired Ms. Egan in 2011; she is a frequent headliner at tech policy events in Washington. Before joining Facebook, she spent 15 years as a partner at the law firm Covington & Burling as co-chairwoman of the global privacy and security group.

    ———-

    “Facebook Replaces Lobbying Executive Amid Regulatory Scrutiny” by Cecilia Kang; The New York Times; 04/24/2018

    “Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for lobbying and government relations as head of policy for the last two years. She will be replaced by Kevin Martin on an interim basis, the company said. Mr. Martin has been Facebook’s vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission.

    When you’re a company as big as Facebook, that’s who you bring in to lead your lobbying effort: the former Republican chairman of the FCC.

    And this means two Republicans will be in charge of Facebook’s Washington offices (which are pretty much there to lobby):


    The executive shifts put two Republican men in charge of Facebook’s Washington offices. Mr. Martin will report to Joel Kaplan, vice president of global public policy. Mr. Martin and Mr. Kaplan worked together in the George W. Bush White House and on Mr. Bush’s 2000 presidential campaign.

    But the way Facebook would prefer us to look at it, this was really all about freeing up Erin Egan to work on rebuilding trust over privacy concerns:


    Ms. Egan will remain chief privacy officer and focus on privacy policies across the globe, Andy Stone, a Facebook spokesman, said.

    Elliot Schrage, Facebook’s vice president of communications and public policy, said in a statement on Wednesday: “We need to focus our best people on our most important priorities. We are committed to rebuilding people’s trust in how we handle their information, and Erin is the best person to partner with our product teams on that task.”

    And this move is happening at the same time Facebook is staring at a new EU data privacy regime, the GDPR:


    At the same time, Facebook is grappling with increased privacy regulations outside the United States. Sweeping new privacy laws called the General Data Protection Regulation are set to take effect in Europe next month. And Facebook has been called to talk to regulators in several countries, including Ireland, Germany and Indonesia, about its handling of user data.

    And those new EU GDPR rules don’t just potentially impact how Facebook handles its European users going forward. It potentially impacts the policies governing all of Facebook’s users outside of the US.

    Why? Because Facebook’s customers outside the US and Canada are handled by Facebook’s operations in Ireland and are therefore under EU rules. That’s just how Facebook decided to structure itself internationally (largely due to Ireland’s status as a corporate tax haven).

    So does this mean Facebook’s US users will be operating in a data privacy regulatory environment managed by the GOP while almost everyone else in the world operates under the EU’s new rules? Nope, because Facebook just moved its international operations out of Ireland and back to its US headquarters in California. And that means the rules Facebook is lobbying for in DC will apply to all Facebook users globally outside the EU:

    Reuters

    Exclusive: Facebook to put 1.5 billion users out of reach of new EU privacy law

    David Ingram
    April 18, 2018 / 7:13 PM

    SAN FRANCISCO (Reuters) – If a new European law restricting what companies can do with people’s online data went into effect tomorrow, almost 1.9 billion Facebook Inc users around the world would be protected by it. The online social network is making changes that ensure the number will be much smaller.

    Facebook members outside the United States and Canada, whether they know it or not, are currently governed by terms of service agreed with the company’s international headquarters in Ireland.

    Next month, Facebook is planning to make that the case for only European users, meaning 1.5 billion members in Africa, Asia, Australia and Latin America will not fall under the European Union’s General Data Protection Regulation (GDPR), which takes effect on May 25.

    The previously unreported move, which Facebook confirmed to Reuters on Tuesday, shows the world’s largest online social network is keen to reduce its exposure to GDPR, which allows European regulators to fine companies for collecting or using personal data without users’ consent.

    That removes a huge potential liability for Facebook, as the new EU law allows for fines of up to 4 percent of global annual revenue for infractions, which in Facebook’s case could mean billions of dollars.

    The change comes as Facebook is under scrutiny from regulators and lawmakers around the world since disclosing last month that the personal information of millions of users wrongly ended up in the hands of political consultancy Cambridge Analytica, setting off wider concerns about how it handles user data.

    The change affects more than 70 percent of Facebook’s 2 billion-plus members. As of December, Facebook had 239 million users in the United States and Canada, 370 million in Europe and 1.52 billion users elsewhere.

    Facebook, like many other U.S. technology companies, established an Irish subsidiary in 2008 and took advantage of the country’s low corporate tax rates, routing through it revenue from some advertisers outside North America. The unit is subject to regulations applied by the 28-nation European Union.

    Facebook said the latest change does not have tax implications.

    ‘IN SPIRIT’

    In a statement given to Reuters, Facebook played down the importance of the terms of service change, saying it plans to make the privacy controls and settings that Europe will get under GDPR available to the rest of the world.

    “We apply the same privacy protections everywhere, regardless of whether your agreement is with Facebook Inc or Facebook Ireland,” the company said.

    Earlier this month, Facebook Chief Executive Mark Zuckerberg told Reuters in an interview that his company would apply the EU law globally “in spirit,” but stopped short of committing to it as the standard for the social network across the world.

    In practice, the change means the 1.5 billion affected users will not be able to file complaints with Ireland’s Data Protection Commissioner or in Irish courts. Instead they will be governed by more lenient U.S. privacy laws, said Michael Veale, a technology policy researcher at University College London.

    Facebook will have more leeway in how it handles data about those users, Veale said. Certain types of data such as browsing history, for instance, are considered personal data under EU law but are not as protected in the United States, he said.

    The company said its rationale for the change was related to the European Union’s mandated privacy notices, “because EU law requires specific language.” For example, the company said, the new EU law requires specific legal terminology about the legal basis for processing data which does not exist in U.S. law.

    NO WARNING

    Ireland was unaware of the change. One Irish official, speaking on condition of anonymity, said he did not know of any plans by Facebook to transfer responsibilities wholesale to the United States or to decrease Facebook’s presence in Ireland, where the social network is seeking to recruit more than 100 new staff.

    Facebook released a revised terms of service in draft form two weeks ago, and they are scheduled to take effect next month.

    Other multinational companies are also planning changes. LinkedIn, a unit of Microsoft Corp, tells users in its existing terms of service that if they are outside the United States, they have a contract with LinkedIn Ireland. New terms that take effect May 8 move non-Europeans to contracts with U.S.-based LinkedIn Corp.

    ———-
    “Exclusive: Facebook to put 1.5 billion users out of reach of new EU privacy law” by David Ingram; Reuters; 04/18/2018

    “Facebook members outside the United States and Canada, whether they know it or not, are currently governed by terms of service agreed with the company’s international headquarters in Ireland.”

    Yep, for Facebook and quite a few other major internet companies with international headquarters in Ireland, it’s the EU’s rules that govern most of their global customer base. But not anymore for Facebook:


    Next month, Facebook is planning to make that the case for only European users, meaning 1.5 billion members in Africa, Asia, Australia and Latin America will not fall under the European Union’s General Data Protection Regulation (GDPR), which takes effect on May 25.

    The previously unreported move, which Facebook confirmed to Reuters on Tuesday, shows the world’s largest online social network is keen to reduce its exposure to GDPR, which allows European regulators to fine companies for collecting or using personal data without users’ consent.

    That removes a huge potential liability for Facebook, as the new EU law allows for fines of up to 4 percent of global annual revenue for infractions, which in Facebook’s case could mean billions of dollars.

    And that move from Ireland to California will impact the ~1.5 billion users Facebook has outside of the US, Canada, and EU:


    The change affects more than 70 percent of Facebook’s 2 billion-plus members. As of December, Facebook had 239 million users in the United States and Canada, 370 million in Europe and 1.52 billion users elsewhere.

    Facebook, like many other U.S. technology companies, established an Irish subsidiary in 2008 and took advantage of the country’s low corporate tax rates, routing through it revenue from some advertisers outside North America. The unit is subject to regulations applied by the 28-nation European Union.

    Facebook said the latest change does not have tax implications.

    But Facebook wants to assure everyone that this move will have no meaningful impact on anyone’s privacy because it’s committed to having ALL of its users globally follow the same rules as laid out by the EU’s new GDPR. At least ‘in spirit’. That’s right, Facebook is telling the world that it’s going to implement the GDPR globally at the same time it moves its operations out of the EU. That’s not suspicious or anything:


    ‘IN SPIRIT’

    In a statement given to Reuters, Facebook played down the importance of the terms of service change, saying it plans to make the privacy controls and settings that Europe will get under GDPR available to the rest of the world.

    “We apply the same privacy protections everywhere, regardless of whether your agreement is with Facebook Inc or Facebook Ireland,” the company said.

    Earlier this month, Facebook Chief Executive Mark Zuckerberg told Reuters in an interview that his company would apply the EU law globally “in spirit,” but stopped short of committing to it as the standard for the social network across the world.

    In practice, the change means the 1.5 billion affected users will not be able to file complaints with Ireland’s Data Protection Commissioner or in Irish courts. Instead they will be governed by more lenient U.S. privacy laws, said Michael Veale, a technology policy researcher at University College London.

    Facebook will have more leeway in how it handles data about those users, Veale said. Certain types of data such as browsing history, for instance, are considered personal data under EU law but are not as protected in the United States, he said.

    So why did Facebook make the move if it’s pledging to implement the GDPR ‘in spirit’ for everyone? Well, according to Facebook, it’s “because EU law requires specific language.” That’s not dubious or anything:


    The company said its rationale for the change was related to the European Union’s mandated privacy notices, “because EU law requires specific language.” For example, the company said, the new EU law requires specific legal terminology about the legal basis for processing data which does not exist in U.S. law.

    And, of course, Facebook isn’t the only multinational internet firm looking to move out of Ireland. Microsoft’s LinkedIn is making the same move, under a similarly laughable pretense:


    Other multinational companies are also planning changes. LinkedIn, a unit of Microsoft Corp, tells users in its existing terms of service that if they are outside the United States, they have a contract with LinkedIn Ireland. New terms that take effect May 8 move non-Europeans to contracts with U.S.-based LinkedIn Corp.

    LinkedIn said in a statement on Wednesday that all users are entitled to the same privacy protections. “We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data,” the company said.

    “We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data”

    Yeah, LinkedIn is making the move so users won’t be confused about whether the US or the EU LinkedIn entity is responsible for their personal data. LOL! We’ll no doubt get similarly laughable explanations from all the other multinational firms making similar moves.

    Also don’t forget that these moves mean the US’s data privacy rules are going to be even more important for the internet giants, because those rules are now going to apply to users everywhere but the EU. And that means the lobbying of US lawmakers and regulators is going to be even more important going forward. The more companies relocate to the US to escape the EU’s GDPR for their international customer bases, the greater the incentive to undermine US data privacy laws. In other words, it’s a really great time to be a Republican data privacy lobbyist.

    Posted by Pterrafractyl | April 30, 2018, 5:30 pm
  12. Here’s a pair of stories that relates to both Cambridge Analytica as well as the bizarre collection of stories related to the ‘Seychelles backchannel’ #TrumpRussia story (like George Nader’s participation in the ‘backchannel’ or Nader’s hiring of GOP money man Elliot Broidy to lobby on behalf of the UAE and Saudis). And the connecting element is none other than Erik Prince:

    So long Cambridge Analytica! Yep, Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.

    Is this truly the end of Cambridge Analytica? Of course not. They’re just rebranding under a new company, Emerdata. It’s kind of like when Blackwater renamed itself Xe, and then Academi. And intriguingly, Cambridge Analytica’s transformation into Emerdata brings with it another association with Blackwater: Emerdata’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince:

    The New York Times

    Cambridge Analytica to File for Bankruptcy After Misuse of Facebook Data

    By Nicholas Confessore and Matthew Rosenberg
    May 2, 2018

    The embattled political consulting firm Cambridge Analytica announced on Wednesday that it would cease most operations and file for bankruptcy amid growing legal and political scrutiny of its business practices and work for Donald J. Trump’s presidential campaign.

    The decision was made less than two months after Cambridge Analytica and Facebook became embroiled in a data-harvesting scandal that compromised the personal information of up to 87 million people. Revelations about the misuse of data, published in March by The New York Times and The Observer of London, plunged Facebook into crisis and prompted regulators and lawmakers to open investigations into Cambridge Analytica.

    In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.

    But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices.

    “Over the past several months, Cambridge Analytica has been the subject of numerous unfounded accusations and, despite the company’s efforts to correct the record, has been vilified for activities that are not only legal, but also widely accepted as a standard component of online advertising in both the political and commercial arenas,” the company’s statement said.

    Cambridge Analytica also said the results of an independent investigation it had commissioned, which it released on Wednesday, contradicted assertions made by former employees and contractors about its acquisition of Facebook data. The report played down the role of a contractor turned whistle-blower, Christopher Wylie, who helped the company acquire Facebook data, calling it “very modest.”

    Cambridge Analytica did not reply to requests for comment. The news of Cambridge ceasing operations was earlier reported by The Wall Street Journal and Gizmodo.

    The company, bankrolled by Robert Mercer, a wealthy Republican donor who invested at least $15 million, offered tools that it claimed could identify the personalities of American voters and influence their behavior. Those modeling techniques underpinned Cambridge Analytica’s work for the Trump campaign and for other candidates in 2014 and 2016.

    But Cambridge Analytica came under scrutiny over the past year, first for its purported methods of profiling voters and then over allegations that it improperly harvested private data from Facebook users. Last year, the company was drawn into the special counsel investigation of Russian interference in the 2016 election.

    The company was also forced to suspend its chief executive, Alexander Nix, after a British television channel released an undercover video. In it, Mr. Nix suggested that the company had used seduction and bribery to entrap politicians and influence foreign elections.

    Facebook has since announced changes to its policies for collecting and handling user data. Its chief executive, Mark Zuckerberg, testified last month before Congress, where he faced criticism for failing to protect users’ data.

    The controversy dealt a major blow to Cambridge Analytica’s ambitions of expanding its commercial business in the United States, while also bringing unwanted attention to the American government contracts sought by SCL Group, an intelligence contractor.

    Besides working for the Trump campaign, Cambridge Analytica was previously hired by the political action committee founded by John R. Bolton, the national security adviser. It had also worked for the 2016 presidential campaigns of Ben Carson and Senator Ted Cruz.

    But no candidates for federal office in the United States have disclosed paying Cambridge Analytica during the 2018 cycle. A Republican congressional candidate in California did report voiding a $10,000 transaction with the company in early March, according to federal election records.

    The company also unsuccessfully tried to court some major commercial clients in the last year, including Mercedes-Benz and Anheuser-Busch InBev, the global brewer, according to one former employee. Cambridge pitched AB InBev by claiming that it could position Bud Light as the beer for the young party crowd and Budweiser for old-school conservatives, according to the former employee, who asked not to be named because the person was restricted from speaking about the company’s business.

    In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

    Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

    An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. Efforts to reach him by phone on Wednesday were unsuccessful.

    ———

    “Cambridge Analytica to File for Bankruptcy After Misuse of Facebook Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 05/02/2018

    “In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.”

    So Cambridge Analytica is going away and the SCL Group is getting out of the elections business. At least on the surface. But there’s still an open question of who is going to retain the rights to all the information held by Cambridge Analytica, including all those psychographic voter profiles that are presumably worth quite a bit of money:


    But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices.

    And that question over who is going to own the rights to all that data is particularly relevant given that executives at Cambridge Analytica and SCL Group and the Mercers recently formed a new company: Emerdata. And look who happens to be one of Emerdata’s directors: Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince:


    In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

    Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

    An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. Efforts to reach him by phone on Wednesday were unsuccessful.

    “Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group.”

    LOL! Yeah, the possibility for a “Blackwater-style rebranding” is looking more like a reality at this point. Although we’ll see how many clients this new company gets.

    And that brings us to the following piece. It’s a fascinating piece that summarizes all of the various things we’ve learned about Erik Prince, the #TrumpRussia investigation, and the UAE. And as the article notes, at the same time Emerdata was being formed in 2017 (it was incorporated on August 11, 2017), the UAE was already paying SCL to run a social media campaign for the UAE against Qatar as part of the UAE’s #BoycottQatar campaign. And as the article also notes, if you look at the name “Emerdata”, it sure sounds like a shortened version of “Emirati-Data”.

    So given the presence of Erik Prince’s business partner on the board of directors of Emerdata, and given Prince’s extensive ties to the UAE, we have to ask the question of whether or not Cambridge Analytica is about to become the new plaything of the UAE:

    Medium

    From the Seychelles to the White House to Cambridge Analytica, Erik Prince and the UAE are key parts of the Trump story

    Wendy Siegelman
    Apr 8, 2018

    In January 2017 Erik Prince attended a meeting in the Seychelles with the United Arab Emirate’s Crown Prince Mohammed bin Zayed Al-Nahyan, the CEO of the Russian Direct Investment Fund Kirill Dmitriev, and George Nader, a former consultant for Erik Prince’s company Blackwater.

    While Erik Prince testified in November 2017 to the House Intelligence Committee that the meeting with Dmitriev was unplanned, news broke last week that Mueller has evidence that Prince’s meeting with Putin’s ally Dmitriev may not have been a chance encounter, contradicting Prince’s sworn testimony. And, according to George Nader, a main purpose of the meeting was to set up a communication channel between the Russian government and the incoming Trump administration.

    Earlier this year, a seemingly unrelated scandal erupted after a story broke about Trump’s data company Cambridge Analytica harvesting Facebook data on tens of millions of people. Shortly after that a Channel 4 News investigation revealed undercover film of Cambridge Analytica executives bragging about the dirty tricks they use to influence elections.

    As the Cambridge Analytica scandal was unfolding, I broke the story about a new company Emerdata Limited, created by Cambridge Analytica executives, that in early 2018 added new board members Rebekah and Jennifer Mercer, Cheng Peng, Chun Shun Ko Johnson, who is a business partner of Erik Prince, and Ahmed Al Khatib, a ‘Citizen of Seychelles’.

    In 2017 as Cambridge Analytica executives created Emerdata, they were also working on behalf of the UAE through SCL Social, which had a $330,000 contract to run a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. One of the Emerdata directors may have ties to the UAE and the company name, coincidentally, sounds like a play on Emirati-Data…Emerdata.

    The United Arab Emirates and people advocating for the interests of the UAE—including Prince, Nader, and Trump fundraiser Elliot Broidy who has done large business deals with the UAE—have started to appear frequently in news related to Mueller’s investigation. Erik Prince, the brother of the U.S. Secretary of Education Betsy DeVos, lived in the UAE, attended the Seychelles meeting with the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, is business partners with Chun Shun Ko who just joined the board of the new Cambridge Analytica/SCL company Emerdata, and SCL had a large contract to work on behalf of the UAE.

    To better understand the role Erik Prince and the UAE have played in the Trump-Russia story—and in the much broader story of global political influence, and often corruption—below is a timeline tracking some key events. Not all events are related, but reviewing the information chronologically may help answer a few questions, including: Why was the UAE involved in a meeting with Erik Prince to set up a communication channel with Russia? Is the UAE involved with Cambridge Analytica’s new company Emerdata? Does Erik Prince have any connection to Cambridge Analytica, even if only indirectly through Chun Shun Ko, Steve Bannon, or the Mercers? And what role has the UAE had in influencing the Trump administration?

    Note: this timeline may be updated periodically to include new pertinent information. Each event below includes the source data link. Name variations (e.g. Chun Shun Ko Johnson vs Johnson Ko Chun-shun) reflect how names are presented by each source.

    2010

    * In a deposition Erik Prince said he had previously hired George Nader to help Blackwater as a “business development consultant that we retained in Iraq” because the company was looking for contracts with the Iraqi government. New York Times
    * After a series of civil lawsuits, criminal charges and Congressional investigations against Erik Prince’s company Blackwater and its former executives, Prince moved to the United Arab Emirates. New York Times.

    2011

    * Sheik Mohamed bin Zayed al-Nahyan of Abu Dhabi hired Erik Prince to build a fighting force, paying $529 million to build an army. Additionally, Prince “worked with the Emirati government on various ventures…including an operation using South African mercenaries to train Somalis to fight pirates.” New York Times
    * A movie called “The Project,” about Erik Prince’s UAE-funded private army in Somalia, was paid for by the Moving Picture Institute where Rebekah Mercer is on the board of Trustees. Gawker Website

    2012

    * Erik Prince, who works and lives in Abu Dhabi in the United Arab Emirates, created Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises. South China Morning Post

    2013

    * The Russian Direct Investment Fund led by CEO Kirill Dmitriev, and the UAE’s Mubadala Development Company based in Abu Dhabi, launched a $2 billion co-investment fund to pursue opportunities in Russia. PR Newswire

    2014

    * January: Erik Prince was named Chairman of DVN Holdings, controlled by Hong Kong businessman Johnson Ko Chun-shun and Chinese state-owned Citic Group. DVN’s board proposed that the firm be renamed Frontier Services Group. South China Morning Post.
    * January: Erik Prince’s business partner, Dorian Barak, became a Non-Executive Director of Reorient Group Limited, an investment company where Ko Chun Shun Johnson was Chairman and Executive Director, and had done a $350 million deal with Jack Ma. 2014 Annual Report. Forbes
    * Erik Prince’s business partner Dorian Barak joined the board of Alufur Mining, “an independent mineral exploration and development company with significant bauxite interests in the Republic of Guinea.” (Prince would later testify that the purpose of his Seychelles trip was to discuss minerals and ‘bauxite’ with the UAE’s Mohammed bin Zayed). Alufur website
    * August: The John Bolton Super PAC founded by John Bolton, President Trump’s incoming national security adviser, hired Cambridge Analytica months after the firm was founded and while it was still harvesting Facebook data. In the two years that followed, Bolton’s super PAC spent nearly $1.2 million primarily for “survey research” and “behavioral microtargeting with psychographic messaging” using Facebook data. New York Times

    2016

    * August/September: Erik Prince donated $150,000 to Make America Number 1, a pro-Trump PAC for which Robert Mercer has been the largest funder. Open Secrets
    * October: Erik Prince donated $100,000 to the Trump Victory fund and $33,400 to the Republican National Committee. FEC
    * October 11: Erik Prince did an interview with Breitbart News Daily describing Hillary Clinton’s “demonstrable links to Russia,” particularly her complicity in “selling 20 percent of the United States’s uranium supply to a Russian state company.” Breitbart
    * November 4: Erik Prince told Breitbart News Daily that “The NYPD wanted to do a press conference announcing the warrants and the additional arrests they were making” in the Anthony Weiner investigation, but received “huge pushback” from the Justice Department. Prince described criminal culpability in emails from Weiner’s laptop related to “money laundering, underage sex, pay-for-play.” Breitbart
    * December: The United Arab Emirate’s crown prince of Abu Dhabi, Sheikh Mohamed bin Zayed al-Nahyan, visited Trump Tower and met with Jared Kushner, Michael Flynn, and Steve Bannon. In an unusual breach of protocol, the Obama administration was not notified about the visit. Washington Post
    * Erik Prince told the House Intelligence Committee that Steve Bannon informed him about the December Trump Tower meeting with Mohamed bin Zayed al-Nahyan. Prince also said he had sent Bannon unsolicited policy papers during the campaign. CNN

    2017

    January 2017

    * One week prior to the meeting in the Seychelles, sources reported that George Nader met with Erik Prince and later sent him information on Kirill Dmitriev, the CEO of the Russian Direct Investment Fund, contradicting Prince’s sworn testimony to the House Intelligence Committee that the meeting with Kirill Dmitriev in the Seychelles was unexpected. ABC News
    * January 11: A meeting was held in the Seychelles with Erik Prince, the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, Kirill Dmitriev, and George Nader, who had previously consulted for Prince’s Blackwater. According to Nader the meeting was to discuss foreign policy and to establish a line of communication between the Russian government and the incoming Trump administration. ABC News

    February 2017

    * “After decades of close political and defense proximity with the United States, the United Arab Emirates have concluded three major agreements with Russia which could lead to its air force being ultimately re-equipped with Russian combat aircraft.” Defense Aerospace

    March 2017

    * Elliott Broidy, a top GOP and Trump fundraiser with hundreds of millions of dollars in business deals with the UAE, sent George Nader a spreadsheet outlining a proposed $12.7 million campaign against Qatar and the Muslim Brotherhood. Broidy also sent an email to George Nader referring to Secure America Now as a group he worked with. New York Times
    * The largest funder of Secure America Now, a secretive group that creates anti-Muslim advertising, is Robert Mercer, who is also the largest funder of Cambridge Analytica. Open Secrets

    April 2017

    * Jared Kushner’s father Charles Kushner met with Qatar’s minister of finance Ali Sharif Al Emadi to discuss financing of Kushner Companies’ 666 Fifth Avenue building. Intercept
    * “Tom Barrack, a Trump friend who had suggested that Thani consider investing in the Kushner property, has said Charles Kushner was ‘crushed’ when his son got the White House job because that prompted the Qataris to pull out.” Washington Post

    May 2017

    * May 20–21: Donald Trump made his first overseas trip to Riyadh, Saudi Arabia, accompanied by Jared Kushner, Steve Bannon and others. On his first day there Trump signed a joint “strategic vision” that included $110 billion in American arms sales and other investments. Washington Post
    * May 23: Per U.S. officials, the UAE government discussed plans to hack Qatar. Washington Post
    * May 24: Per U.S. officials, the UAE orchestrated the hack of Qatari government news and social media sites in order to post incendiary false quotes attributed to Qatar’s emir, Sheikh Tamim Bin Hamad al-Thani. Washington Post
    * Late May: Following the hack, Saudi Arabia, UAE, Bahrain and Egypt banned Qatari media, broke relations and declared a trade and diplomatic boycott. Washington Post

    June 2017

    * June 5: The Gulf Cooperation Council members Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt released coordinated statements accusing Qatar of supporting terrorist groups and saying that as a result they were cutting links to the country by land, sea and air. Washington Post
    * June 6: Trump tweeted his support for the blockade against Qatar, while Rex Tillerson and James Mattis called for calm. The Guardian
    * June 7: U.S. investigators from the FBI, who sent a team to Doha to help the Qatari government investigate the alleged hacking incident, suspected Russian hackers planted the fake news behind the Qatar crisis. CNN
    * June 27: Rex Tillerson “reaffirmed his strong support for Kuwait’s efforts to mediate the dispute between Qatar and Saudi Arabia, the UAE, Bahrain, and Egypt” and “leaders reaffirmed the need for all parties to exercise restraint to allow for productive diplomatic discussions.” U.S. Department of State Readout
    * An aide to Tillerson was convinced Trump’s support for the UAE came from the UAE Ambassador Yousef Al Otaiba, a close friend of Jared Kushner, known to speak to Kushner on a weekly basis. The American Conservative

    July 2017

    * U.S. intelligence officials confirmed that the UAE orchestrated the hack of Qatari government news and social media sites with incendiary false quotes attributed to Qatar’s emir, Sheikh Tamim Bin Hamad al-Thani. Washington Post

    August 2017

    * Emerdata Limited was incorporated in the UK with Cambridge Analytica’s Chairman Julian Wheatland and Chief Data Officer Alexander Tayler as significant owners. Company filing

    September 2017

    * Cambridge Analytica CEO Alexander Nix and Steve Bannon were both present at the CLSA Investors’ Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Erik Prince and Ko Chun Shun Johnson’s Frontier Services Group. Bloomberg Tweet

    October 2017

    * October 6: Elliott Broidy, whose company Circinus has had hundreds of millions of dollars in contracts with the UAE, met Trump and suggested Trump meet with the UAE’s Mohammed bin Zayed al-Nahyan. Broidy said Trump thought it was a good idea. Broidy also “personally urged Mr. Trump to fire Mr. Tillerson, whom the Saudis and Emiratis saw as insufficiently tough on Iran and Qatar.” New York Times
    * October 6: SCL Social Limited, part of SCL Group/Cambridge Analytica, was hired by UK company Project Associates for approximately $330,000 to implement a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. FARA filing
    * October 23: Steve Bannon spoke at a Hudson Institute event on “Countering Violent Extremism: Qatar, Iran, and the Muslim Brotherhood,” and called the Qatar blockade “the single most important thing that’s happening in the world.” Bannon “bragged that the president’s trip to Saudi Arabia in May gave the Saudis the gumption to lead a blockade against Doha.” The National Interest
    * October 29: Jared Kushner returned from an unannounced trip to Saudi Arabia to discuss Middle East peace. Tom Barrack, a longtime friend and close Trump confidant said “The key to solving (the Israel-Palestinian dispute) is Egypt. And the key to Egypt is Abu Dhabi and Saudi Arabia.” Politico

    November 2017

    * November 4: A week after Kushner’s visit, Saudi prince Mohammed bin Salman launched an anti-corruption crackdown and arrested dozens of members of the Saudi royal family. Per three sources, “Crown Prince Mohammed told confidants that Kushner had discussed the names of Saudis disloyal to the crown prince.” However, “Kushner, through his attorney’s spokesperson, denies having done so.” Another source said Mohammed bin Salman told UAE Crown Prince Mohammed bin Zayed that Kushner was “in his pocket.” The Intercept
    * November 15: Dmitry Rybolovlev sold his Leonardo da Vinci painting ‘Salvator Mundi’ for $450.3 million, setting a new record for the highest price ever paid for a painting at auction. Rybolovlev, who had purchased a Florida home from Trump in 2008 for $95 million, more than $50 million above Trump’s purchase price, sold the da Vinci painting for $322 million above his purchase price. The painting was bought by Saudi Prince Bader bin Abdullah on behalf of crown prince Mohammad Bin Salman. The price was driven up by a bidding war with the UAE’s Mohammed bin Zayed al-Nahyan, as both bidders feared losing the painting to the Qatari ruling family. After the purchase came under criticism, the da Vinci painting was swapped with the UAE Ministry of Culture in exchange for an equally valued yacht. Daily Mail
    * November 20: Erik Prince testified before the U.S. House of Representatives Intelligence Committee and said that he arranged his trip to the Seychelles with people who worked for Mohammed bin Zayed to discuss “security issues and mineral issues and even bauxite.” Prince then described how someone, maybe one of Mohammed bin Zayed’s brothers, told Prince he should meet with Kirill Dmitriev, describing him as “a Russian guy that we’ve done some business with in the past.” Erik Prince Transcript

    December 2017

    * Buzzfeed broke a story on how Erik Prince had pitched the Trump administration on a plan to hire him to privatize the Afghan war and mine Afghanistan’s valuable minerals. A slide presentation of Prince’s pitch described Afghanistan’s rich deposits of minerals with an estimated value of $1 trillion, and described his plan as “a strategic mineral resource extraction funded effort.” Buzzfeed

    2018
    January 2018

    * George Nader emailed a request to Elliott Broidy saying the leader of the UAE asked Trump to call the crown prince of Saudi Arabia to smooth over potential bad feelings created by the book “Fire and Fury.” Nader also reiterated to Broidy the desire of the ruler of the UAE to meet alone with Trump. Days later, as Nader went to meet Broidy at Mar-a-Lago to celebrate the anniversary of the inauguration, he was met at Dulles Airport by FBI agents working for Mueller. New York Times

    January to March 2018

    * Emerdata Limited added new directors Alexander Nix, Johnson Chun Shun Ko, Cheng Peng, Ahmad Al Khatib, Rebekah Mercer, and Jennifer Mercer. Johnson Chun Shun Ko is the business partner of Erik Prince. Ahmad Al Khatib is identified as ‘Citizen of Seychelles’. Shares are issued valued at 1,912,512 GBP. Emerdata article
    * SCL/Cambridge Analytica founder Nigel Oakes told Channel 4 News it was his understanding that Emerdata was set up a year ago to acquire all of Cambridge Analytica and all of SCL. Channel 4 News
    * A website for a company called Coinagelabs.org displayed Cambridge Analytica as a partner. The team was led by CEO Sandy Peng, who previously worked at Reorient Capital, part of Reorient Group where Chun Shun Ko had been executive chairman. The site is no longer available. Twitter Thread

    March 2018

    * March 13: A key goal of UAE political advisor George Nader and of Elliott Broidy, who had hundreds of millions of dollars of business with the UAE, was accomplished when Rex Tillerson was fired. New York Times
    * March 19: Russia-friendly California representative Dana Rohrabacher, who has criticized the Magnitsky Act and for whom Prince had interned in the 1990s, attended a fundraiser hosted by Erik Prince and Oliver North at Prince’s home in Virginia. The Intercept

    April 2018

    * Sources reported that “Special Counsel Robert Mueller has obtained evidence that calls into question Congressional testimony given by Trump supporter and Blackwater founder Erik Prince last year, when he described a meeting in Seychelles with a Russian financier close to Vladimir Putin as a casual chance encounter ‘over a beer.’” ABC News
    * John Bolton is set to begin as Donald Trump’s new National Security Advisor, replacing Lt. Gen. H.R. McMaster, who had opposed Erik Prince’s plans to privatize the war in Afghanistan. Washington Post
    * Robert Mercer, the largest funder of Cambridge Analytica, has given $5 million to Bolton’s super PAC since 2013. He was the Bolton super PAC’s largest donor during the 2016 election cycle, and so far, is also the largest donor for the 2018 election cycle, according to federal campaign finance filings. The Center for Public Integrity
    * A source close to Erik Prince said that now that McMaster will be replaced by neocon favorite John Bolton, and Tillerson by CIA director Mike Pompeo, who once ran an aerospace supplier, the dynamics have changed, and that Bolton’s selection, particularly, is “going to take us in a really positive direction.” Forbes
    * According to an SEC filing, Kushner Companies appears to have reached a deal to buy out its partner, Vornado Realty Trust, in the troubled 666 Fifth Avenue property. The Kushners had previously negotiated unsuccessfully with Chinese company Anbang Insurance Group, whose chairman has since been prosecuted and regulators have seized control of the company. The Kushners also negotiated unsuccessfully with the former prime minister of Qatar, and shortly afterwards Qatar was hacked and blockaded by the UAE, Saudi Arabia, Bahrain and Egypt. It is not clear who will provide the financing for the deal. New York Times

    ———-

    “From the Seychelles to the White House to Cambridge Analytica, Erik Prince and the UAE are key parts of the Trump story” by Wendy Siegelman; Medium; 04/08/2018

    In 2017 as Cambridge Analytica executives created Emerdata, they were also working on behalf of the UAE through SCL Social, which had a $330,000 contract to run a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. One of the Emerdata directors may have ties to the UAE and the company name, coincidentally, sounds like a play on Emirati-Data…Emerdata.

    Emirati-Data = Emerdata. Is that the play on words we’re seeing in this name? It does sound like a reasonable inference. Especially given Erik Prince’s close association with both Emerdata’s board of directors and the UAE:


    As the Cambridge Analytica scandal was unfolding, I broke the story about a new company Emerdata Limited, created by Cambridge Analytica executives, that in early 2018 added new board members Rebekah and Jennifer Mercer, Cheng Peng, Chun Shun Ko Johnson, who is a business partner of Erik Prince, and Ahmed Al Khatib, a ‘Citizen of Seychelles’.

    The United Arab Emirates and people advocating for the interests of the UAE—including Prince, Nader, and Trump fundraiser Elliott Broidy, who has done large business deals with the UAE—have started to appear frequently in news related to Mueller’s investigation. Erik Prince, the brother of the U.S. Secretary of Education Betsy DeVos, lived in the UAE, attended the Seychelles meeting with the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, is business partners with Chun Shun Ko, who just joined the board of the new Cambridge Analytica/SCL company Emerdata, and SCL had a large contract to work on behalf of the UAE.

    So let’s take a closer look at Prince’s ties to the UAE and his partners in Hong Kong: He moves to the UAE in 2010, and gets hired by Sheikh Mohamed bin Zayed al-Nahyan to build a fighting force in 2011. In 2012, while still living in the UAE, Prince creates the Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises:


    2010

    * In a deposition Erik Prince said he had previously hired George Nader to help Blackwater as a “business development consultant that we retained in Iraq” because the company was looking for contracts with the Iraqi government. New York Times
    * After a series of civil lawsuits, criminal charges and Congressional investigations against Erik Prince’s company Blackwater and its former executives, Prince moved to the United Arab Emirates. New York Times.

    2011

    * Sheikh Mohamed bin Zayed al-Nahyan of Abu Dhabi hired Erik Prince to build a fighting force, paying $529 million to build an army. Additionally, Prince “worked with the Emirati government on various ventures…including an operation using South African mercenaries to train Somalis to fight pirates.” New York Times
    * A movie called “The Project,” about Erik Prince’s UAE-funded private army in Somalia, was paid for by the Moving Picture Institute where Rebekah Mercer is on the board of Trustees. Gawker Website

    2012

    * Erik Prince, who works and lives in Abu Dhabi in the United Arab Emirates, created Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises. South China Morning Post

    2013

    * The Russian Direct Investment Fund led by CEO Kirill Dmitriev, and the UAE’s Mubadala Development Company based in Abu Dhabi, launched a $2 billion co-investment fund to pursue opportunities in Russia. PR Newswire

    Then, in 2014, Prince gets named as Chairman of DVN Holdings, controlled by Hong Kong businessman Johnson Ko Chun-shun (who sits on the board of Emerdata) and Chinese state-owned Citic Group:


    2014

    * January: Erik Prince was named Chairman of DVN Holdings, controlled by Hong Kong businessman Johnson Ko Chun-shun and Chinese state-owned Citic Group. DVN’s board proposed that the firm be renamed Frontier Services Group. South China Morning Post.
    * January: Erik Prince’s business partner, Dorian Barak, became a Non-Executive Director of Reorient Group Limited, an investment company where Ko Chun Shun Johnson was Chairman and Executive Director, and had done a $350 million deal with Jack Ma. 2014 Annual Report. Forbes
    * Erik Prince’s business partner Dorian Barak joined the board of Alufur Mining, “an independent mineral exploration and development company with significant bauxite interests in the Republic of Guinea.” (Prince would later testify that the purpose of his Seychelles trip was to discuss minerals and ‘bauxite’ with the UAE’s Mohammed bin Zayed). Alufur website

    Then there’s all the shenanigans involving the Seychelles ‘backchannel’ (that inexplicably involves the UAE) and GOP money-man Elliott Broidy:


    * December: The United Arab Emirates’ crown prince of Abu Dhabi, Sheikh Mohamed bin Zayed al-Nahyan, visited Trump Tower and met with Jared Kushner, Michael Flynn, and Steve Bannon. In an unusual breach of protocol, the Obama administration was not notified about the visit. Washington Post
    * Erik Prince told the House Intelligence Committee that Steve Bannon informed him about the December Trump Tower meeting with Mohamed bin Zayed al-Nahyan. Prince also said he had sent Bannon unsolicited policy papers during the campaign. CNN

    2017

    January 2017

    * Sources reported that, one week prior to the meeting in the Seychelles, George Nader met with Erik Prince and later sent him information on Kirill Dmitriev, the CEO of the Russian Direct Investment Fund, contradicting Prince’s sworn testimony to the House Intelligence Committee that the meeting with Kirill Dmitriev in the Seychelles was unexpected. ABC News
    * January 11: A meeting was held in the Seychelles with Erik Prince, the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, Kirill Dmitriev, and George Nader, who had previously consulted for Prince’s Blackwater. According to Nader the meeting was to discuss foreign policy and to establish a line of communication between the Russian government and the incoming Trump administration. ABC News

    February 2017

    * “After decades of close political and defense proximity with the United States, the United Arab Emirates have concluded three major agreements with Russia which could lead to its air force being ultimately re-equipped with Russian combat aircraft.” Defense Aerospace

    March 2017

    * Elliott Broidy, a top GOP and Trump fundraiser with hundreds of millions of dollars in business deals with the UAE, sent George Nader a spreadsheet outlining a proposed $12.7 million campaign against Qatar and the Muslim Brotherhood. Broidy also sent an email to George Nader referring to Secure America Now as a group he worked with. New York Times
    * The largest funder of Secure America Now, a secretive group that creates anti-Muslim advertising, is Robert Mercer, who is also the largest funder of Cambridge Analytica. Open Secrets

    Then Emerdata gets formed in August of 2017. The next month, Steve Bannon and Alexander Nix attend the CLSA Investors’ Forum in Hong Kong, which is run by a subsidiary of Citic Group, the majority owner of Prince’s Frontier Services Group:


    August 2017

    * Emerdata Limited was incorporated in the UK with Cambridge Analytica’s Chairman Julian Wheatland and Chief Data Officer Alexander Tayler as significant owners. Company filing

    September 2017

    * Cambridge Analytica CEO Alexander Nix and Steve Bannon were both present at the CLSA Investors’ Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Erik Prince and Ko Chun Shun Johnson’s Frontier Services Group. Bloomberg Tweet

    Then in October of 2017, we have a continuation of Elliott Broidy’s lobbying of the Trump administration on behalf of the UAE at the same time the SCL Group gets hired to implement a social media campaign for the UAE against Qatar:


    October 2017

    * October 6: Elliott Broidy, whose company Circinus has had hundreds of millions of dollars in contracts with the UAE, met Trump and suggested Trump meet with the UAE’s Mohammed bin Zayed al-Nahyan. Broidy said Trump thought it was a good idea. Broidy also “personally urged Mr. Trump to fire Mr. Tillerson, whom the Saudis and Emiratis saw as insufficiently tough on Iran and Qatar.” New York Times
    * October 6: SCL Social Limited, part of SCL Group/Cambridge Analytica, was hired by UK company Project Associates for approximately $330,000 to implement a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. FARA filing
    * October 23: Steve Bannon spoke at a Hudson Institute event on “Countering Violent Extremism: Qatar, Iran, and the Muslim Brotherhood,” and called the Qatar blockade “the single most important thing that’s happening in the world.” Bannon “bragged that the president’s trip to Saudi Arabia in May gave the Saudis the gumption to lead a blockade against Doha.” The National Interest
    * October 29: Jared Kushner returned from an unannounced trip to Saudi Arabia to discuss Middle East peace. Tom Barrack, a longtime friend and close Trump confidant said “The key to solving (the Israel-Palestinian dispute) is Egypt. And the key to Egypt is Abu Dhabi and Saudi Arabia.” Politico

    Finally, in early 2018 we find Emerdata adding Alexander Nix, Johnson Chun Shun Ko (Prince’s partner at Frontier Services Group), Cheng Peng, Ahmad Al Khatib, Rebekah Mercer, and Jennifer Mercer to the board of directors:


    January to March 2018

    * Emerdata Limited added new directors Alexander Nix, Johnson Chun Shun Ko, Cheng Peng, Ahmad Al Khatib, Rebekah Mercer, and Jennifer Mercer. Johnson Chun Shun Ko is the business partner of Erik Prince. Ahmad Al Khatib is identified as ‘Citizen of Seychelles’. Shares are issued valued at 1,912,512 GBP. Emerdata article

    So it sure looks a lot like the new incarnation of Cambridge Analytica is basically going to be applying Cambridge Analytica’s psychological warfare methods on behalf of the UAE, among others. The Chinese investors will also presumably be interested in these kinds of services. And anyone else who might want to hire a psychological warfare service provider run by a bunch of far right luminaries.

    Posted by Pterrafractyl | May 3, 2018, 3:56 pm
  13. Oh look at that: Remember how Aleksandr Kogan, the University of Cambridge professor who built the app used by Cambridge Analytica, claimed that what he was doing was rather typical? Well, Facebook’s audit of the thousands of apps used on its platform appears to be proving Kogan right. Facebook just announced that it has already found and suspended 200 apps that appear to be misusing user data.

    Facebook won’t say which apps were suspended, how many users were involved, or what the red flags were that triggered the suspension, so we’re largely left in the dark in terms of the scope of the problem.

    But there is one particular problem app that’s been revealed, although it wasn’t revealed by Facebook. It’s the myPersonality app, which was also developed by Cambridge University professors at the Cambridge Psychometrics Centre. Recall how Cambridge Analytica ended up working with Aleksandr Kogan only after first being rebuffed by the Cambridge Psychometrics Centre. And as we’re going to see in the second article below, Kogan was actually working on the myPersonality app until 2014 (when he went to work for Cambridge Analytica). So the one app of the 200 recently suspended apps that we get to know about at this point is an app Kogan helped develop. And the other 199 apps remain a mystery for now:

    The Washington Post

    Facebook suspends 200 apps following Cambridge Analytica scandal

    by Drew Harwell and Tony Romm
    May 14, 2018

    Facebook said Monday morning that it had suspended roughly 200 apps amid an ongoing investigation prompted by the Cambridge Analytica scandal into whether services on the site had improperly used or collected users’ personal data.

    The company said in an update, its first since the social network announced the internal audit in March, that the apps would undergo a “thorough investigation” into whether they had misused user data.

    Facebook declined to provide more detail on which apps were suspended, how many people had used them or what red flags had led them to suspect those apps of misuse.

    CEO Mark Zuckerberg has said the company will examine tens of thousands of apps that could have accessed or collected large amounts of users’ personal information before the site’s more restrictive data rules for third-party developers took effect in 2015.

    The company said teams of internal and external experts will conduct interviews and lead on-site inspections of certain apps during its ongoing audit. Thousands of apps have been investigated so far, the company said, adding that any app that refuses to cooperate or failed the audit would be banned from the site.

    The suspensions support a long-running defense of Aleksandr Kogan, the researcher who provided Facebook data to Cambridge Analytica, that many apps besides his had gathered vast amounts of user information under Facebook’s previously lax data-privacy rules.

    One of the 200 apps, the personality quiz myPersonality, was suspended in early April and is under investigation, Facebook officials said. Researchers at the University of Cambridge had set up the app to collect personal information about Facebook users and inform academic research. But its data may not have been properly secured, as first reported by New Scientist, which found login credentials for the app’s database available online.

    “This is clearly a breach of the terms that academics agree to when requesting a collaboration with myPersonality,” the University of Cambridge said in a statement Monday. “Once we learned of this, we took immediate steps to stop access to the account and to stop further data sharing.”

    The researchers added that academics who used the tool had to verify their identities and the nature of their research and agree to terms of service that prohibited them from sharing Facebook data “outside of their research group.”

    A different quiz app, developed by Kogan and tapped by Cambridge Analytica, a political consultancy hired by President Trump and other Republicans, was able to pull detailed data on 87 million people, including from the app’s direct users and their friends, who had not overtly consented to the app’s use.

    The announcement comes ahead of a Wednesday hearing on Capitol Hill focused on Cambridge Analytica and data privacy. Lawmakers on the Senate Judiciary Committee said they would question Christopher Wylie, a former employee at the firm who brought its business practices to light earlier this year, along with other academics.

    In the United States, the Federal Trade Commission is investigating whether Facebook’s entanglement with Cambridge Analytica violates its 2011 settlement with the U.S. government over another series of privacy mishaps. Such violations could carry sky-high fines.

    Facebook said users will be able to go to this page to see if they had used one of the suspected apps once the company reveals which apps are under investigation. Company officials would not provide an estimated timeline for that disclosure.

    ———-

    “Facebook suspends 200 apps following Cambridge Analytica scandal” by Drew Harwell and Tony Romm; The Washington Post; 05/14/2018

    “Facebook declined to provide more detail on which apps were suspended, how many people had used them or what red flags had led them to suspect those apps of misuse.”

    Did you happen to use one of the 200 suspended apps? Who knows, although Facebook says it will notify people of the names of suspended apps eventually. No timeline for that disclosure is given:


    Facebook said users will be able to go to this page to see if they had used one of the suspected apps once the company reveals which apps are under investigation. Company officials would not provide an estimated timeline for that disclosure.

    And, again, this is exactly what Kogan warned us about:


    The suspensions support a long-running defense of Aleksandr Kogan, the researcher who provided Facebook data to Cambridge Analytica, that many apps besides his had gathered vast amounts of user information under Facebook’s previously lax data-privacy rules.

    And note how Facebook is specifically saying it’s reviewing “tens of thousands of apps that could have accessed or collected large amounts of users’ personal information before the site’s more restrictive data rules for third-party developers took effect in 2015.” In other words, Facebook isn’t reviewing all of its apps, only those that existed before the policy change that stopped apps from exploiting the “friends permission” feature that let app developers scrape the information of Facebook users and their friends. So it sounds like this review process isn’t looking for data privacy abuses under the current set of rules, just abuses under the old set of rules:


    CEO Mark Zuckerberg has said the company will examine tens of thousands of apps that could have accessed or collected large amounts of users’ personal information before the site’s more restrictive data rules for third-party developers took effect in 2015.

    The company said teams of internal and external experts will conduct interviews and lead on-site inspections of certain apps during its ongoing audit. Thousands of apps have been investigated so far, the company said, adding that any app that refuses to cooperate or failed the audit would be banned from the site.

    And that apparent focus on abuses from the old “friends permission” rules suggests that current data use problems might go undetected. And the one app we’ve learned about, the myPersonality app, is a perfect example of the kind of app that would have been violating Facebook’s current data privacy rules. Because as people recently learned, the Facebook data gathered by the app was available online for the purpose of sharing with other researchers, but it was so poorly secured that anyone could have potentially accessed it:


    One of the 200 apps, the personality quiz myPersonality, was suspended in early April and is under investigation, Facebook officials said. Researchers at the University of Cambridge had set up the app to collect personal information about Facebook users and inform academic research. But its data may not have been properly secured, as first reported by New Scientist, which found login credentials for the app’s database available online.

    “This is clearly a breach of the terms that academics agree to when requesting a collaboration with myPersonality,” the University of Cambridge said in a statement Monday. “Once we learned of this, we took immediate steps to stop access to the account and to stop further data sharing.”

    The researchers added that academics who used the tool had to verify their identities and the nature of their research and agree to terms of service that prohibited them from sharing Facebook data “outside of their research group.”

    But it gets worse. Because as the following New Scientist article that revealed the myPersonality app’s privacy issues points out, the data on some 6 million Facebook users was anonymized, but with such a shoddy anonymization scheme that someone could have easily deanonymized the data in an automated fashion. And access to this database was potentially available to anyone for the past four years. So almost anyone could have grabbed this anonymized data on 6 million Facebook users and deanonymized it with relative ease.
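
    The general technique is simple enough to sketch. The Python snippet below uses entirely hypothetical data and field names (the actual myPersonality scheme has not been published): it shows how “anonymized” records, with names stripped but quasi-identifiers like age, gender, and city left in, can be re-identified in an automated fashion by joining them against a set of public profiles.

```python
# Illustrative sketch with entirely hypothetical data: re-identifying
# "anonymized" records by joining them to public profiles on shared
# quasi-identifiers (age, gender, city). Removing names alone does not
# protect records when these columns remain.

ANON_RECORDS = [  # names stripped, quasi-identifiers and test scores kept
    {"id": "u001", "age": 34, "gender": "F", "city": "Leeds", "score": 0.81},
    {"id": "u002", "age": 52, "gender": "M", "city": "Cardiff", "score": 0.44},
]

PUBLIC_PROFILES = [  # e.g. information visible on public profiles
    {"name": "Alice Smith", "age": 34, "gender": "F", "city": "Leeds"},
    {"name": "Bob Jones", "age": 52, "gender": "M", "city": "Cardiff"},
]

def deanonymize(anon, public, keys=("age", "gender", "city")):
    """Link each anonymized record to a public profile when its
    quasi-identifier combination matches exactly one person."""
    hits = {}
    for rec in anon:
        matches = [p for p in public if all(p[k] == rec[k] for k in keys)]
        if len(matches) == 1:  # unique match -> re-identified
            hits[rec["id"]] = matches[0]["name"]
    return hits

print(deanonymize(ANON_RECORDS, PUBLIC_PROFILES))
# -> {'u001': 'Alice Smith', 'u002': 'Bob Jones'}
```

    With only a handful of quasi-identifier columns, most combinations are unique even within a large population, which is why stripping names alone is such weak protection.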

    And putting aside the possible unofficial access of this data, the people and institutions that got official access is also concerning: more than 280 people from nearly 150 institutions accessed this database, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo. Yep, researchers at Facebook were apparently accessing this database of poorly anonymized data.

    So it should come as no surprise that, just as Aleksandr Kogan defended himself by asserting that lots of other apps did the same thing as his Cambridge Analytica app and Facebook was well aware of how his app was being used, we’re getting the exact same defense from the team behind myPersonality:

    New Scientist

    Huge new Facebook data leak exposed intimate details of 3m users

    By Phee Waterfield and Timothy Revell
    14 May 2018, updated 15 May 2018

    Data from millions of Facebook users who used a popular personality app, including their answers to intimate questionnaires, was left exposed online for anyone to access, a New Scientist investigation has found.

    Academics at the University of Cambridge distributed the data from the personality quiz app myPersonality to hundreds of researchers via a website with insufficient security provisions, which led to it being left vulnerable to access for four years. Gaining access illicitly was relatively easy.

    The data was highly sensitive, revealing personal details of Facebook users, such as the results of psychological tests. It was meant to be stored and shared anonymously, however such poor precautions were taken that deanonymising would not be hard.

    “This type of data is very powerful and there is real potential for misuse,” says Chris Sumner at the Online Privacy Foundation. The UK’s data watchdog, the Information Commissioner’s Office, has told New Scientist that it is investigating.

    The data sets were controlled by David Stillwell and Michal Kosinski at the University of Cambridge’s The Psychometrics Centre. Alexandr Kogan, at the centre of the Cambridge Analytica allegations, was listed as a collaborator on the myPersonality project until the summer of 2014.

    Facebook suspended myPersonality from its platform on 7 April saying the app may have violated its policies due to the language used in the app and on its website to describe how data is shared.

    More than 6 million people completed the tests on the myPersonality app and nearly half agreed to share data from their Facebook profiles with the project. All of this data was then scooped up and the names removed before it was put on a website to share with other researchers. The terms allow the myPersonality team to use and distribute the data “in an anonymous manner such that the information cannot be traced back to the individual user”.

    To get access to the full data set people had to register as a collaborator to the project. More than 280 people from nearly 150 institutions did this, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo.

    Easy backdoor

    However, for those who were not entitled to access the data set because they didn’t have a permanent academic contract, for example, there was an easy workaround. For the last four years, a working username and password has been available online that could be found from a single web search. Anyone who wanted access to the data set could have found the key to download it in less than a minute.

    The publicly available username and password were sitting on the code-sharing website GitHub. They had been passed from a university lecturer to some students for a course project on creating a tool for processing Facebook data. Uploading code to GitHub is very common in computer science as it allows others to reuse parts of your work, but the students included the working login credentials too.

    myPersonality wasn’t merely an academic project; researchers from commercial companies were also entitled to access the data so long as they agreed to abide by strict data protection procedures and didn’t directly earn money from it.

    Stillwell and Kosinski were both part of a spin-out company called Cambridge Personality Research, which sold access to a tool for targeting adverts based on personality types, built on the back of the myPersonality data sets. The firm’s website described it as the tool that “mind-reads audiences”.

    Facebook started investigating myPersonality as part of a wider investigation into apps using the platform. This was started by the allegations surrounding how Cambridge Analytica accessed data from an app called This Is Your Digital Life developed by Kogan.

    Today it announced it has suspended around 200 apps as part of its investigation into apps that had access to large amounts of information on users.

    Cambridge Analytica had approached the myPersonality app team in 2013 to get access to the data, but was turned down because of its political ambitions, according to Stillwell.

    “We are currently investigating the app, and if myPersonality refuses to cooperate or fails our audit, we will ban it,” says Ime Archibong, Facebook’s vice president of Product Partnerships.

    The myPersonality app website has now been taken down, the publicly available credentials no longer work, and Stillwell’s website and Twitter account have gone offline.

    “We are aware of an incident related to the My Personality app and are making enquiries,” a spokesperson for the Information Commissioner’s Office told New Scientist.

    Personal information exposed

    The credentials gave access to the “Big Five” personality scores of 3.1 million users. These scores are used in psychology to assess people’s characteristics, such as conscientiousness, agreeableness and neuroticism. The credentials also allowed access to 22 million status updates from over 150,000 users, alongside details such as age, gender and relationship status from 4.3 million people.

    “If at any time a username and password for any files that were supposed to be restricted were made public, it would be a consequential and serious issue,” says Pam Dixon at the World Privacy Forum. “Not only is it a bad security practice, it is a profound ethical violation to allow strangers to access files.”

    Beyond the password leak and distributing the data to hundreds of researchers, there are serious concerns with the way the anonymisation process was performed.

    Each user in the data set was given a unique ID, which tied together data such as their age, gender, location, status updates, results on the personality quiz and more. With that much information, de-anonymising the data can be done very easily. “You could re-identify someone online from a status update, gender and date,” says Dixon.

    This process could be automated, quickly revealing the identities of the millions of people in the data sets, and tying them to the results of intimate personality tests.

    “Any data set that has enough attributes is extremely hard to anonymise,” says Yves-Alexandre de Montjoye at Imperial College London. So instead of distributing actual data sets, the best approach is to provide a way for researchers to run tests on the data. That way they get aggregated results and never access to individuals. “The use of the data can’t be at the expense of people’s privacy,” he says.

    The University of Cambridge says it was alerted to the issues surrounding myPersonality by the Information Commissioner’s Office. It says that, as the app was created by Stillwell before he joined the university, “it did not go through our ethical approval processes”. It also says “the University of Cambridge does not own or control the app or data”.

    When approached, Stillwell says that throughout the nine years of the project there has only been one data breach, and that researchers given access to the data set must agree not to de-anonymise the data. “We believe that academic research benefits from properly controlled sharing of anonymised data among the research community,” he told New Scientist.

    He also says that Facebook has long been aware of the myPersonality project, holding meetings with himself and Kosinski going back as far as 2011. “It is therefore a little odd that Facebook should suddenly now profess itself to have been unaware of the myPersonality research and to believe that the use of the data was a breach of its terms,” he says.

    The investigations by Facebook and the Information Commissioner’s Office should try to determine who accessed the myPersonality data and what it was used for. However, as it was shared with so many different people, tracking everyone who has a copy and what they did with it will prove very difficult. We will never know exactly who did what with this data set. “This is the tip of the iceberg,” says Dixon. “Who else has this data?”

    ———–

    “Huge new Facebook data leak exposed intimate details of 3m users” by Phee Waterfield and Timothy Revell; New Scientist; 05/14/2018

    “Academics at the University of Cambridge distributed the data from the personality quiz app myPersonality to hundreds of researchers via a website with insufficient security provisions, which led to it being left vulnerable to access for four years. Gaining access illicitly was relatively easy.”

    Yep, an online database of highly sensitive Facebook + psychological profile data was made accessible to hundreds of researchers. But it was also potentially accessible to anyone due to poor security. For four years.

    And those that were given official access to the data included companies like Microsoft, Google, Yahoo, and Facebook:


    To get access to the full data set people had to register as a collaborator to the project. More than 280 people from nearly 150 institutions did this, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo.

    While the Facebook researchers could plausibly claim that they had no idea the server hosting this data had insufficient security, it would be a lot harder for them to claim they had no idea the anonymization scheme was highly inadequate:


    The data was highly sensitive, revealing personal details of Facebook users, such as the results of psychological tests. It was meant to be stored and shared anonymously, however such poor precautions were taken that deanonymising would not be hard.

    “This type of data is very powerful and there is real potential for misuse,” says Chris Sumner at the Online Privacy Foundation. The UK’s data watchdog, the Information Commissioner’s Office, has told New Scientist that it is investigating.

    And the only thing the myPersonality team appeared to do to anonymize the data was replace names with a number. THAT’S IT! And when that’s the only anonymization step employed in a data set with large amounts of data on each individual, including status updates, it’s going to be trivial to automate the deanonymization of these people, especially for companies like Google, Yahoo, Microsoft and Facebook:


    Personal information exposed

    The credentials gave access to the “Big Five” personality scores of 3.1 million users. These scores are used in psychology to assess people’s characteristics, such as conscientiousness, agreeableness and neuroticism. The credentials also allowed access to 22 million status updates from over 150,000 users, alongside details such as age, gender and relationship status from 4.3 million people.

    Each user in the data set was given a unique ID, which tied together data such as their age, gender, location, status updates, results on the personality quiz and more. With that much information, de-anonymising the data can be done very easily. “You could re-identify someone online from a status update, gender and date,” says Dixon.

    This process could be automated, quickly revealing the identities of the millions of people in the data sets, and tying them to the results of intimate personality tests.

    “Any data set that has enough attributes is extremely hard to anonymise,” says Yves-Alexandre de Montjoye at Imperial College London. So instead of distributing actual data sets, the best approach is to provide a way for researchers to run tests on the data. That way they get aggregated results and never access to individuals. “The use of the data can’t be at the expense of people’s privacy,” he says.
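    The safeguard de Montjoye describes, giving researchers aggregate answers instead of row-level records, can be sketched roughly like this. The data, the minimum group size, and the query function below are all illustrative assumptions, not anything from the actual myPersonality project.

    ```python
    # Hypothetical row-level data, held server-side only and never distributed.
    RECORDS = [
        {"age": 34, "neuroticism": 2.1},
        {"age": 34, "neuroticism": 3.4},
        {"age": 27, "neuroticism": 1.8},
    ]

    MIN_GROUP_SIZE = 2  # refuse to answer queries over too few people

    def mean_neuroticism_for_age(age):
        """Return an aggregate statistic for a group, refusing groups so
        small that the answer could identify an individual."""
        group = [r["neuroticism"] for r in RECORDS if r["age"] == age]
        if len(group) < MIN_GROUP_SIZE:
            raise ValueError("Group too small to release")
        return sum(group) / len(group)
    ```

    Under this scheme researchers only ever see aggregates like a group mean, so even a leaked set of query results would not expose any one person’s test scores.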

    Not surprisingly, two of the academics in charge of this project were part of a spin-off company that sold tools for targeting ads based on personality types. So it wasn’t just commercial companies like Google and Yahoo who got access to this data. The whole enterprise appeared to be commercial in nature:


    myPersonality wasn’t merely an academic project; researchers from commercial companies were also entitled to access the data so long as they agreed to abide by strict data protection procedures and didn’t directly earn money from it.

    Stillwell and Kosinski were both part of a spin-out company called Cambridge Personality Research, which sold access to a tool for targeting adverts based on personality types, built on the back of the myPersonality data sets. The firm’s website described it as the tool that “mind-reads audiences”.

    And, of course, Aleksandr Kogan was part of this project before he went to work for Cambridge Analytica:


    The data sets were controlled by David Stillwell and Michal Kosinski at the University of Cambridge’s The Psychometrics Centre. Alexandr Kogan, at the centre of the Cambridge Analytica allegations, was listed as a collaborator on the myPersonality project until the summer of 2014.

    And note how Facebook only suspended this app on April 7th of this year, four years after Facebook ended the notorious “friends permission” feature that has received most of the attention in the Cambridge Analytica scandal. It’s a big reminder that data privacy abuses via Facebook apps aren’t limited to that “friends permissions” feature. It’s an ongoing problem, which is why it’s troubling to hear that Facebook was only looking into the tens of thousands of apps that may have abused its pre-2015 data use policies:


    Facebook suspended myPersonality from its platform on 7 April saying the app may have violated its policies due to the language used in the app and on its website to describe how data is shared.

    More than 6 million people completed the tests on the myPersonality app and nearly half agreed to share data from their Facebook profiles with the project. All of this data was then scooped up and the names removed before it was put on a website to share with other researchers. The terms allow the myPersonality team to use and distribute the data “in an anonymous manner such that the information cannot be traced back to the individual user”.

    But beyond the troubling half-assed anonymization scheme, there’s the issue of all this data being inadvertently made available to the world due to the user credentials for the database getting uploaded into some code on GitHub, an online coding repository:


    Easy backdoor

    However, for those who were not entitled to access the data set because they didn’t have a permanent academic contract, for example, there was an easy workaround. For the last four years, a working username and password has been available online that could be found from a single web search. Anyone who wanted access to the data set could have found the key to download it in less than a minute.

    The publicly available username and password were sitting on the code-sharing website GitHub. They had been passed from a university lecturer to some students for a course project on creating a tool for processing Facebook data. Uploading code to GitHub is very common in computer science as it allows others to reuse parts of your work, but the students included the working login credentials too.

    It’s important to keep in mind that the accidental release of those credentials by some students is probably the most understandable aspect of this data privacy nightmare. It’s the equivalent of writing a bug in code: a common careless accident. Everything else associated with this data privacy nightmare is far less understandable, because it wasn’t a mistake; it was by design.
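    For what it’s worth, the standard way to avoid exactly this kind of leak is to keep credentials out of source files entirely and read them from the environment at runtime, so nothing sensitive ever lands in a shared repository. A minimal sketch, with hypothetical variable names:

    ```python
    import os

    def get_db_credentials():
        """Read database credentials from environment variables instead of
        hardcoding them in code that might be uploaded to GitHub."""
        user = os.environ.get("MYPERSONALITY_DB_USER")
        password = os.environ.get("MYPERSONALITY_DB_PASS")
        if not user or not password:
            raise RuntimeError("Database credentials not configured")
        return user, password
    ```

    Had the lecturer’s example code used a pattern like this, the students could have shared the tool on GitHub without ever exposing the working login.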

    And as we should expect at this point, the designers of the myPersonality app are expressing dismay at Facebook’s dismay. After all, Facebook has long been aware of the project and even held meetings with the team as far back as 2011:


    When approached, Stillwell says that throughout the nine years of the project there has only been one data breach, and that researchers given access to the data set must agree not to de-anonymise the data. “We believe that academic research benefits from properly controlled sharing of anonymised data among the research community,” he told New Scientist.

    He also says that Facebook has long been aware of the myPersonality project, holding meetings with himself and Kosinski going back as far as 2011. “It is therefore a little odd that Facebook should suddenly now profess itself to have been unaware of the myPersonality research and to believe that the use of the data was a breach of its terms,” he says.

    And don’t forget, Facebook researchers were among the users of this data. So Facebook was obviously pretty familiar with the app.

    And in the end, we’ll likely never know who accessed the data and what they did with it. It’s just the tip of the iceberg:


    The investigations by Facebook and the Information Commissioner’s Office should try to determine who accessed the myPersonality data and what it was used for. However, as it was shared with so many different people, tracking everyone who has a copy and what they did with it will prove very difficult. We will never know exactly who did what with this data set. “This is the tip of the iceberg,” says Dixon. “Who else has this data?”

    And note one of the other chilling implications of this story: Recall how the ~270,000 users of the Cambridge Analytica app resulted in Cambridge Analytica harvesting data on ~87 million people using the “friends permissions” option. Well, if the myPersonality app has been operating for 9 years, that means it also had access to the “friends permissions” option, and for much longer than the Cambridge Analytica app. And 6 million people apparently downloaded this app! So how many of those 6 million people were using the app in the pre-2015 period, when the “friends permission” option was still available, and how many friends of those users had their profiles harvested too?

    So it’s entirely possible the people at myPersonality grabbed information on far more than the 6 million people who used their app, and we have no idea what they did with the data. What we know now is just the tip of the iceberg of this story.

    And this story of myPersonality is just covering one of the 200 apps that Facebook just suspended. In other words, this iceberg of a story is just the tip of a much, much larger iceberg.

    Posted by Pterrafractyl | May 17, 2018, 10:54 pm
  14. Here’s a story about an explosive new lawsuit against Facebook that could end up being a major headache for the company, and for Mark Zuckerberg in particular: The lawsuit is being brought by Six4Three, a former app developer startup. Six4Three claims that, in 2012, Facebook was facing a large crisis with its advertising business model due to the rapid adoption of smartphones and the fact that Facebook’s ads were primarily focused on desktops. Facing a large drop in revenue, Facebook allegedly forced developers to buy expensive ads on the new, underused Facebook mobile service or risk having their access to data at the core of their business cut off.

    The way Six4Three describes it, Facebook first got developers to build their business models around access to that data, and then engaged in what amounts to a shakedown of those developers, threatening to take that access away unless expensive mobile ads were purchased.

    But beyond that, Six4Three alleges that Facebook incentivized developers to create apps for its system by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. Don’t forget that the Facebook friends data (accessed via the “friends permission” feature) is the information at the heart of the Cambridge Analytica scandal.

    So Facebook was apparently offering long-term access to “friends permission” data back in 2012 as a means of incentivizing developers to create apps at the same time as it was threatening to cut off developer access to this data unless they purchased expensive mobile ads. And then, of course, that “friends permission” feature was wound down in 2015, which was undoubtedly a good thing for the privacy of Facebook users. But as we can see, the developers weren’t so happy about it, in part because they had apparently been told by Facebook to expect long-term access to that data. Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook.

    It’s worth noting that Six4Three developed an app called Pikinis that searched through the photos of your friends for pictures of them in swimwear. So losing access to friends data more or less broke Six4Three’s app.

    Beyond that, Six4Three also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. This is also noteworthy with respect to the Cambridge Analytica scandal since it appeared to be the case that Aleksandr Kogan’s psychological profiling app was allowed to access the “friends permission” feature later than other apps. In other words, the Cambridge Analytica app did actually appear to get preferential treatment from Facebook.

    But Six4Three’s allegations go further, and suggest that Facebook’s executives would observe which apps were the most successful and plotted to either extract money from them, co-opt them or destroy them using the threat of cutting off access to the user data as leverage.

    So, basically, Facebook is getting sued by this app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

    The Guardian

    Zuckerberg set up fraudulent scheme to ‘weaponise’ data, court case alleges

    Facebook CEO exploited ability to access data from any user’s friend network, US case claims

    Carole Cadwalladr and Emma Graham-Harrison

    Thu 24 May 2018 08.01 EDT

    Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business.

    A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.

    A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.

    Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.

    It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.

    Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University, said, in her opinion, this exposed a potential tension for Facebook.

    “Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.”

    The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.

    The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.

    It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours.

    Facebook said the claims had no merit and the company would “continue to defend ourselves vigorously”.

    Six4Three lodged its original case in 2015 shortly after Facebook removed developers’ access to friends’ data. The company said it had invested $250,000 in developing an app called Pikinis that filtered users’ friends photos to find any of them in swimwear. Its launch was met with controversy.

    The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.

    The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.

    “The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.

    The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.

    Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.

    “Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.

    In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends.

    Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.

    Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.

    The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.

    It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents.

    A Facebook spokesman said: “When we changed our policy in 2015, we gave all third-party developers ample notice of material platform changes that could have impacted their applications.”

    Facebook’s submission to the court, an “anti-Slapp motion” under Californian legislation designed to protect freedom of speech, said: “Six4Three is taking its fifth shot at an ever expanding set of claims and all of its claims turn on one decision, which is absolutely protected: Facebook’s editorial decision to stop publishing certain user-generated content via its Platform to third-party app developers.”

    David Godkin, Six4Three’s lead counsel said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”

    Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and he was warned not to look into the issue.

    “They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.”

    ———-

    “Zuckerberg set up fraudulent scheme to ‘weaponise’ data, court case alleges” by Carole Cadwalladr and Emma Graham-Harrison; The Guardian; 05/24/2018

    “A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.”

    It was Mark Zuckerberg who personally led this shakedown operation, according to the lawsuit. So what’s the evidence? Well, that appears to be in the form of thousands of currently redacted internal emails. It’s unclear how those emails were obtained:


    The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.

    The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.

    It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours.

    Note this isn’t a new lawsuit by Six4Three. They first filed a case in 2015, shortly after Facebook removed developers’ access to the “friends permission” data feature, where app developers could grab extensive information from ALL the Facebook friends of the users who downloaded their apps. And when you look at how the Six4Three app works, it’s pretty clear why they would have been very upset about losing access to the friends data: their “Pikinis” app is based on scanning your friends’ pictures for shots of them in swimwear:


    Six4Three lodged its original case in 2015 shortly after Facebook removed developers’ access to friends’ data. The company said it had invested $250,000 in developing an app called Pikinis that filtered users’ friends photos to find any of them in swimwear. Its launch was met with controversy.

    And it’s a rather fascinating lawsuit because Six4Three is basically complaining that Facebook suddenly threatened to remove access to this personal data after previously implying that developers would have long-term access to it, and that Facebook used that power to extort developers. And in order to make that case, Six4Three also asserts that Facebook was well aware of the privacy implications of its data-sharing policies, because access to that data was both the carrot and the stick for developers. So this case, if proven, would utterly destroy Facebook’s portrayal of itself as a victim of Cambridge Analytica’s misuse of its data:


    The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.

    The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.

    And the initial motive for all this was Facebook’s realization in 2012 that its failure to anticipate the speed of consumer adoption of smartphones had effectively damaged its lucrative advertising business, which was focused on desktop ads:


    “The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.

    The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.

    So Facebook responded to this sudden threat to its core business in multiple scandalous ways, according to the lawsuit. First, Facebook began forcing app developers to buy expensive mobile ads on its new, underused mobile service, or risk having their access to data at the core of their business cut off. It’s an example of how important selling access to that user data to third parties was to Facebook’s business model:


    Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.

    “Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.

    But beyond that, Six4Three alleges that Facebook was simultaneously trying to entice developers to make apps for its systems by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. So the “friends permission” feature that Facebook phased out in 2014-2015 was apparently being peddled to developers as a long-term feature back in 2012:


    In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends.

    And, according to Six4Three, once a business became hooked on Facebook’s user data, Facebook would then look for particularly lucrative apps and try to find ways to extract more money out of them. That apparently included threatening to cut off access to user data to either force companies out of business or coerce app owners into selling at below market prices. Up to 40,000 companies were potentially defrauded in this way, and it was Facebook’s senior executives, including Zuckerberg, who personally devised and managed the scheme:


    Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.

    Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.

    The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.

    It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents.

    A Facebook spokesman said: “When we changed our policy in 2015, we gave all third-party developers ample notice of material platform changes that could have impacted their applications.”

    Not surprisingly, Sandy Parakilas, the former Facebook executive turned whistleblower who previously revealed that Facebook executives were consciously negligent about how user data was used (or abused), views this lawsuit and the revelations contained in those emails as a “bombshell” that more or less backs up what he’s been saying all along:


    Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and he was warned not to look into the issue.

    “They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.”

    So was Mark Zuckerberg effectively acting like the top mobster in a shakedown scheme involving app developers? A scheme where Facebook selectively threatened to rescind access to its core data in order to extort ad buys from developers, buy apps at below market prices, or straight up drive app developers out of business? We’ll see, but this is going to be a lawsuit to keep an eye on.

    “That’s a nice app you got there…it would be a shame if something happened to your access to user data…”

    Posted by Pterrafractyl | May 24, 2018, 12:09 pm
  15. Here’s a fascinating twist to the already fascinating story of Psy Group, the Israeli-owned private intelligence firm that was apparently pushed on the Trump team during the August 3, 2016, Trump Tower meeting. That’s the newly discovered meeting where Erik Prince and George Nader met with Donald Trump, Jr. and Stephen Miller to inform the Trump team that the crown princes of Saudi Arabia and the UAE were “eager” to help Trump win the election. And Psy Group, an Israeli private intelligence firm that offers many of the same psychological warfare services as Cambridge Analytica, presented a pitch at that meeting for a social media manipulation campaign involving thousands of fake accounts. And this meeting happened a couple weeks before Steve Bannon replaced Paul Manafort and brought Cambridge Analytica into prominence in the Trump team’s electoral machinations.

    So here’s the new twist to this Psy Group/Cambridge Analytica story: we now learn that Psy Group formed a business alliance with Cambridge Analytica after Trump’s victory to try to win U.S. government work. This alliance was reportedly formed when the two firms signed a mutual non-disclosure agreement.

    Intriguingly, the agreement was signed on December 14, 2016, according to documents seen by Bloomberg. And December 14, 2016, just happens to be one day before the Crown Prince of the UAE secretly traveled to the US – against diplomatic protocol – and met with the Trump transition team at Trump Tower (including Michael Flynn, Jared Kushner, and Steve Bannon) to help arrange the eventual meeting in the Seychelles between Erik Prince, George Nader, and Kirill Dmitriev.

    So you have to wonder if the signing of that non-disclosure agreement was part of all the scheming associated with the Seychelles. Don’t forget that the Seychelles meeting appears to center around what amounts to a lucrative offer to Russia to realign itself away from the governments of Iran and Syria, which implicitly suggests plans for ongoing regime change operations in Syria and a major new regime change operation in Iran. And based on what we know about the services offered by both Psy Group and Cambridge Analytica – psychological warfare services designed to change the attitudes of entire nations – the two firms sound like exactly the kinds of companies that might have been major contractors for those planned regime change operations.

    Granted, there would have been no shortage of potential US government contracts Cambridge Analytica and Psy Group would have been mutually interested in pursuing that have nothing to do with the Seychelles scheme. But the timing sure is interesting given the heavy overlap of characters involved.

    And while the non-disclosure documents don’t indicate precisely which government contracts the two companies were initially planning on jointly bidding for (which makes sense if they were initially planning on working on something involving a Seychelles/regime-change scheme), there is some information on one of the contracts they did end up jointly bidding on, which happened to focus on psychological warfare services in the Middle East. Specifically, they made a joint proposal to the State Department’s Global Engagement Center for a project focused on disrupting the recruitment and radicalization of ISIS members. It sounds like the proposal focused heavily on creating fake online personas, so it’s basically a different application of the same fake-persona services Psy Group and Cambridge Analytica offer in the political arena.

    And it turns out the State Department’s Global Engagement Center did indeed sign a contract with Cambridge Analytica’s parent company, SCL Group, last year. Additionally, one of the proposals Psy Group and Cambridge Analytica jointly submitted to the US State Department also included SCL. Although it’s unclear if the SCL contract involved Cambridge Analytica, because it didn’t include provisions for subcontractors, didn’t involve social media, and was focused on in-person interviews. So while we don’t know how successful Cambridge Analytica and Psy Group were in their mutual hunt for government contracts, SCL was successful. And if SCL was getting lots of other contracts, who knows how many of them also involved Cambridge Analytica and/or Psy Group.

    We’re also learning that Psy Group appears to have shut itself down in February of 2018, shortly after George Nader testified before Robert Mueller’s grand jury. But it doesn’t appear to be a real shutdown; it sounds like Psy Group has quietly reopened under the new name “WhiteKnight”. Let’s not forget that Cambridge Analytica appears to have already done the same thing, shutting down only to quietly reopen as “Emerdata”. So for all we know there’s already a new WhiteKnight/Emerdata non-disclosure agreement in place for the purpose of further joint bidding on government contracts. But as the following story makes clear, one thing we do know for sure at this point is that if Cambridge Analytica and/or Psy Group end up getting government contracts, they’re going to go to great lengths to hide it:

    Bloomberg

    Mueller Asked About Money Flows to Israeli Social-Media Firm, Source Says

    * PSY Group’s work included fake personas, firm’s documents show
    * Founder is reported to have met with Donald Trump Jr. in 2016

    By Michael Riley and Lauren Etter
    May 22, 2018, 12:35 PM CDT

    Special Counsel Robert Mueller’s team has asked about flows of money into the Cyprus bank account of a company that specialized in social-media manipulation and whose founder reportedly met with Donald Trump Jr. in August 2016, according to a person familiar with the investigation.

    The inquiry is drawing attention to PSY Group, an Israeli firm that pitched its services to super-PACs and other entities during the 2016 election. Those services included infiltrating target audiences with elaborately crafted social-media personas and spreading misleading information through websites meant to mimic news portals, according to interviews and PSY Group documents seen by Bloomberg News.

    The person doesn’t believe any of those pitches was successful, and it’s illegal for foreign entities to contribute anything of value or to play decision-making roles in U.S. political campaigns.

    One of PSY Group’s founders, Joel Zamel, met in August 2016 at Trump Tower with Donald Trump Jr. and an emissary to Saudi Arabia and the United Arab Emirates to discuss how PSY Group could help Trump win, the New York Times reported on Saturday.

    Marc Mukasey, a lawyer for Zamel, said his client “offered nothing to the Trump campaign, received nothing from the Trump campaign, delivered nothing to the Trump campaign and was not solicited by, or asked to do anything for, the Trump campaign.” He also said reports that Zamel’s companies engage in social-media manipulation are misguided and that the firms “harvest publicly available information for lawful use.”

    Donald Trump Jr. recalls a meeting at which he was pitched “on a social media platform or marketing strategy,” said his attorney, Alan Futerfas, in an emailed statement. “He was not interested and that was the end of it.”

    Following Trump’s victory, PSY Group formed an alliance with Cambridge Analytica, the Trump campaign’s primary social-media consultants, to try to win U.S. government work, according to documents obtained by Bloomberg News.

    FBI agents working with Mueller’s team interviewed people associated with PSY Group’s U.S. operations in February, and Mueller subpoenaed bank records for payments made to the firm’s Cyprus bank accounts, according to a person who has seen one of the subpoenas. Though PSY Group is based in Israel, it’s technically headquartered in Cyprus, the small Mediterranean island famous for its banking secrecy.

    Shortly after those interviews, on Feb. 25, PSY Group Chief Executive Officer Royi Burstien informed employees in Tel Aviv that the company was closing down. Burstien is a former commander of an Israeli psychological warfare unit, according to two people familiar with the company. He didn’t respond to requests for comment.

    ‘Poisoning the Well’

    Tactics deployed by PSY Group in foreign elections included inflaming divisions in opposition groups and playing on deep-seated cultural and ethnic conflicts, something the firm called “poisoning the well,” according to the people.

    In a contracting proposal for the U.S. State Department that PSY Group prepared with Cambridge Analytica and SCL Group, Cambridge’s U.K. affiliate, the firm said that it “has conducted messaging/influence operations in well over a dozen languages and dialects” and that it employs “an elite group of high-ranking former officers from some of the world’s most renowned intelligence units.”

    Although the proposal says that the company is legally bound not to reveal its clients, it also boasts that “PSY has succeeded in placing the results of its intelligence activities in top-tier publications across the globe in order to advance the interests of its clients.”

    That proposal was the result of a collaboration that gelled after Trump’s victory — a mutual non-disclosure agreement between Cambridge and PSY Group is dated Dec. 14, 2016 — but the documents don’t indicate how the companies initially connected or why they decided to work together.

    Companies Shut Down

    Cambridge Analytica and the elections division of SCL shut down this month following scrutiny of the companies’ business practices, including the release of a secretly recorded interview of Cambridge CEO Alexander Nix saying he could entrap politicians in compromising situations.

    The joint proposal for the State Department’s Global Engagement Center was for a project to interrupt the recruitment and radicalization of ISIS members, and it provides insight into PSY Group’s use of fake social-media personas.

    The company spent months preparing for the proposal by developing a persona for “an average Chicago teenager” named Madison who converted from Christianity to Islam and became alienated from her parents. Over a period of many weeks, Madison interacted with an ISIS recruiter, received instructions for sending money to fighters in Syria, and began an extended flirtation with a fighter in Raqqa, Syria.

    Among the long-term objectives of Madison’s persona were obtaining names and contacts of “radical Turkish Islamic elements” and obtaining bank accounts and routing numbers for donating to ISIS, according to the proposal seen by Bloomberg News.

    The State Department’s Global Engagement Center entered into a contract with SCL Group last year, but it didn’t include provisions for work to be performed by any subcontractors, according to a department spokesman. That contract didn’t involve social media and was focused on in-person interviews, according to an earlier department briefing.

    Tower Meeting

    The Trump Tower meeting in August 2016 included Zamel, the PSY Group founder, and George Nader, an adviser to the ruling families of Saudi Arabia and the United Arab Emirates, according to the New York Times report. PSY Group’s decision to shut down appears to have come the same week that Nader testified before the grand jury working with Mueller, according to the timing of that testimony previously reported in the Times.

    Following the election, Nader hired a different company of Zamel’s called WhiteKnight, which specializes in open-source social media research and is based in the Caribbean, according to a person familiar with the transaction.

    The person described WhiteKnight as a high-end business consulting firm owned in part by Zamel that completed a post-election analysis for Nader that examined the role that social media played in the 2016 election.

    There is little public information about WhiteKnight or its products, and the company does not appear to have a website.

    Another person familiar with PSY Group’s operations said that months ago, there was discussion about rebranding the firm under a different name.

    The name being discussed internally, according to the person, was WhiteKnight.

    ———-

    “Mueller Asked About Money Flows to Israeli Social-Media Firm, Source Says” by Michael Riley and Lauren Etter; Bloomberg; 05/22/2018

    “Special Counsel Robert Mueller’s team has asked about flows of money into the Cyprus bank account of a company that specialized in social-media manipulation and whose founder reportedly met with Donald Trump Jr. in August 2016, according to a person familiar with the investigation.”

    So the Mueller probe is looking into money-flows of Psy Group’s Cyprus bank account, along with the activities of George Nader (who pitched Psy Group to the Trump team in August 2016) and this interest from Mueller appears to have led to the sudden shutdown of the company a few months ago:


    FBI agents working with Mueller’s team interviewed people associated with PSY Group’s U.S. operations in February, and Mueller subpoenaed bank records for payments made to the firm’s Cyprus bank accounts, according to a person who has seen one of the subpoenas. Though PSY Group is based in Israel, it’s technically headquartered in Cyprus, the small Mediterranean island famous for its banking secrecy.

    Shortly after those interviews, on Feb. 25, PSY Group Chief Executive Officer Royi Burstien informed employees in Tel Aviv that the company was closing down. Burstien is a former commander of an Israeli psychological warfare unit, according to two people familiar with the company. He didn’t respond to requests for comment.

    Tower Meeting

    The Trump Tower meeting in August 2016 included Zamel, the PSY Group founder, and George Nader, an adviser to the ruling families of Saudi Arabia and the United Arab Emirates, according to the New York Times report. PSY Group’s decision to shut down appears to have come the same week that Nader testified before the grand jury working with Mueller, according to the timing of that testimony previously reported in the Times.

    Although the sudden shutdown of Psy Group appears to really be a secret rebranding. Psy Group is apparently now WhiteKnight, a rebranding the company has been working on for a while, it seems, since WhiteKnight was hired by Nader to do a post-election analysis of the role social media played in the 2016 election:


    Following the election, Nader hired a different company of Zamel’s called WhiteKnight, which specializes in open-source social media research and is based in the Caribbean, according to a person familiar with the transaction.

    The person described WhiteKnight as a high-end business consulting firm owned in part by Zamel that completed a post-election analysis for Nader that examined the role that social media played in the 2016 election.

    There is little public information about WhiteKnight or its products, and the company does not appear to have a website.

    Another person familiar with PSY Group’s operations said that months ago, there was discussion about rebranding the firm under a different name.

    The name being discussed internally, according to the person, was WhiteKnight.

    Just imagine how fascinating WhiteKnight’s post-election analysis of the role social media played must be, since it was basically conducted by Psy Group, a social media manipulation firm that either executed much of the most egregious (and effective) social media manipulation itself or worked directly with the worst perpetrators, like Cambridge Analytica. There are probably quite a few insights in that report that wouldn’t be available to other firms.

    So what kinds of secrets is Psy Group hoping to keep hidden with its shutdown/rebranding move? Well, some of those secrets presumably involve the alliance Psy Group created with Cambridge Analytica shortly after Trump’s victory, culminating in the December 14, 2016, mutual non-disclosure agreement (one day before the Trump Tower meeting with the crown prince of the UAE to set up the Seychelles meeting). And note how the proposal Psy Group and Cambridge Analytica pitched, touting “messaging/influence operations in well over a dozen languages and dialects”, was also submitted with Cambridge Analytica’s parent company, SCL. So Psy Group’s alliance with Cambridge Analytica was probably really an alliance with Cambridge Analytica’s parent company too:


    Following Trump’s victory, PSY Group formed an alliance with Cambridge Analytica, the Trump campaign’s primary social-media consultants, to try to win U.S. government work, according to documents obtained by Bloomberg News.

    PSY Group developed elaborate information operations for commercial clients and political candidates around the world, the people said.

    ‘Poisoning the Well’

    Tactics deployed by PSY Group in foreign elections included inflaming divisions in opposition groups and playing on deep-seated cultural and ethnic conflicts, something the firm called “poisoning the well,” according to the people.

    In a contracting proposal for the U.S. State Department that PSY Group prepared with Cambridge Analytica and SCL Group, Cambridge’s U.K. affiliate, the firm said that it “has conducted messaging/influence operations in well over a dozen languages and dialects” and that it employs “an elite group of high-ranking former officers from some of the world’s most renowned intelligence units.”

    Although the proposal says that the company is legally bound not to reveal its clients, it also boasts that “PSY has succeeded in placing the results of its intelligence activities in top-tier publications across the globe in order to advance the interests of its clients.”

    That proposal was the result of a collaboration that gelled after Trump’s victory — a mutual non-disclosure agreement between Cambridge and PSY Group is dated Dec. 14, 2016 — but the documents don’t indicate how the companies initially connected or why they decided to work together.

    Another point to keep in mind regarding the timing of that December 14, 2016, mutual non-disclosure agreement: the Seychelles meeting appears to be a giant pitch designed to realign Russia, indicating the UAE was clearly very interested in exploiting Trump’s victory in a big way. They were ‘cashing in’, metaphorically. So it seems reasonable to suspect that Psy Group, which is closely affiliated with the UAE’s crown prince, would also be quite interested in literally ‘cashing in’ in a very big way too during that December 2016 transition period. In other words, while we don’t know what Psy Group and Cambridge Analytica decided to not disclose with their non-disclosure agreement, we can be pretty sure it was extremely ambitious at the time.

    But at this point, the only proposals for US government contracts that we do know about were for an anti-ISIS social media operation for the US State Department’s Global Engagement Center:


    The joint proposal for the State Department’s Global Engagement Center was for a project to interrupt the recruitment and radicalization of ISIS members, and it provides insight into PSY Group’s use of fake social-media personas.

    The company spent months preparing for the proposal by developing a persona for “an average Chicago teenager” named Madison who converted from Christianity to Islam and became alienated from her parents. Over a period of many weeks, Madison interacted with an ISIS recruiter, received instructions for sending money to fighters in Syria, and began an extended flirtation with a fighter in Raqqa, Syria.

    Among the long-term objectives of Madison’s persona were obtaining names and contacts of “radical Turkish Islamic elements” and obtaining bank accounts and routing numbers for donating to ISIS, according to the proposal seen by Bloomberg News.

    And one contract we do know about at this point that was awarded to this network of companies actually went to Cambridge Analytica’s parent company, SCL:


    The State Department’s Global Engagement Center entered into a contract with SCL Group last year, but it didn’t include provisions for work to be performed by any subcontractors, according to a department spokesman. That contract didn’t involve social media and was focused on in-person interviews, according to an earlier department briefing.

    So there’s one government contract that SCL won following Trump’s election, one that Psy Group/Cambridge Analytica may or may not have been involved with.

    And that’s all we know so far about the work Psy Group may or may not have done for the US government following Trump’s victory. Except we also know that Psy Group and Cambridge Analytica weren’t competing: whatever contracts Psy Group pursued, Cambridge Analytica may have pursued too. And that indicates, at a minimum, a willingness for these two companies to work VERY closely together. So close they risked revealing internal secrets to each other. Don’t forget, Psy Group and Cambridge Analytica are ostensibly competitors offering similar services to the same types of clients, and yet shortly after the election they were willing to sign an agreement to jointly compete for contracts they would work on together. One of the massive questions looming over this whole story is whether Psy Group and Cambridge Analytica – two direct competitors – were not just on the same team but actually working closely together during the 2016 election to help elect Trump. And thanks to these recent revelations we now know they were at least willing to work extremely closely with each other immediately after the election on a variety of government contracts. That seems like a relevant clue in this whole mess.

    Posted by Pterrafractyl | May 29, 2018, 8:14 pm
  16. Oh look, a new scary Cambridge Analytica operation was just discovered. Or rather, it’s a scary new story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which also played a key role in the pro-Brexit campaign and later assisted West-leaning East Ukraine politician Sergei Taruta. It’s like these companies can’t go a week without a new scary story. Which is extra scary.

    For scary starters, the article notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, a security researcher just found an app on the platform named “AIQ Johnny Scraper” registered to AIQ, and Facebook subsequently suspended it along with 13 other apps that appear to be linked to AIQ. So if Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard.

    Another part of what makes the following article scary is that it’s a reminder that you don’t necessarily need to have downloaded a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.

    Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to specifically look and see what you personally have said about various topics on social media. A service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made. That’s what AIQ was offering, and the newly discovered database contained the info for it.

    In this case, the Financial Times has somehow gotten its hands on a bunch of Facebook-related data held internally by AIQ. It turns out that AIQ stored a list of 759,934 Facebook users in a table that included home addresses, phone numbers and email addresses for some profiles. Additionally, the files contain those people’s political Facebook posts and likes. It all appears to be part of a software package AIQ was developing for a client that would allow them to search the political posts and “Likes” people made on Facebook. A personal political browser that could give a far more detailed peek into someone’s politics than other traditionally available information on people’s politics, like political donation records and party affiliation.

    Also keep in mind that we already know Cambridge Analytica collected large amounts of information on 87 million Facebook accounts. So the 759,934 number should not be seen as the total number of people AIQ has such files on. It could just be a particular batch selected by that client. A batch of 759,934 people a client just happens to want to make personalized political searches on.

    It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees. A goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. And an ideological purge of the State Department is reportedly already underway. So it will be interesting to learn if this AIQ service is being used for such purposes.

    It’s unclear if the data in these files was collected through a Facebook app developed by AIQ – in which case the people in the file at least had to click the “I accept” part of installing the app – or if the data was collected simply from scraping publicly available Facebook posts. Again, it’s a reminder that pretty much ANYTHING you do on a publicly accessible Facebook post, even a ‘Like’, is probably getting collected by someone, aggregated, and resold. Including, perhaps, by AggregateIQ:

    Financial Times

    AggregateIQ had data of thousands of Facebook users
    Linked app found by security researcher raises questions on social network’s policing

    Aliya Ram in London and Hannah Kuchler in San Francisco
    June 1, 2018, 2:21 PM

    AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times.

    The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

    The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.

    Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

    According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.

    Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.

    “I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”

    He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.

    AggregateIQ has sought to distance itself from an international privacy scandal engulfing Facebook and Cambridge Analytica, despite allegations from Christopher Wylie, a whistleblower at the now-defunct UK firm, that it had acted as the Canadian branch of the organisation.

    The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.

    The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said.

    AggregateIQ’s technology was used in the US for Ted Cruz’s campaign for the Republican nomination in 2016, and the company has also received millions of pounds of funding from British groups. These include Vote Leave, the main pro-Brexit campaign fronted by foreign secretary Boris Johnson.

    “The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.”

    ———-

    “AggregateIQ had data of thousands of Facebook users” by Aliya Ram and Hannah Kuchler; Financial Times; 06/01/2018

    ““The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.””

    As security researcher Chris Vickery put it, the whole is greater than the sum of its parts when you look at the synergistic way the various tools developed by companies like Cambridge Analytica and AIQ work together. Synergy in the service of creating a mass manipulation service with personalized micro-targeting capabilities.

    And that synergistic mass manipulation is part of why it’s disturbing to hear that Vickery just discovered an AIQ app still live on Facebook, prompting the suspension of 14 AIQ-linked apps in total, long after Cambridge Analytica was banned and caused Facebook so much bad publicity. The fact that there were still Cambridge Analytica-affiliated apps suggests Facebook either really, really, really likes Cambridge Analytica or is just really, really bad at app oversight:


    The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

    The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.

    Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

    He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.

    “However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.”

    “AIQ Johnny Scraper”. They weren’t even hiding it. But at least the Johnny Scraper app sounds relatively innocuous.

    The personal political post search engine service, on the other hand, sounds far from innocuous. A database on 759,934 Facebook users created by AIQ software that tracked which users liked a particular page or were posting positive or negative comments. In other words, software that interprets what people write about politics on Facebook and aggregates that data into a search engine for clients. You have to wonder how sophisticated that automated interpretation software is at this point. Whatever the answer, AIQ’s text interpretation software is only going to get more sophisticated. That’s a given.

    Someday that software will probably be able to write a synopsis of a person better than a human could. Who knows when that kind of software will arrive, but someday it will, and companies like AIQ will be there to exploit it if that’s legal. That’s also a given.

    And this 759,934 person database of political Likes and written political comments was what AIQ provided for just one client:


    According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.

    Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.

    “I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”

    The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said.

    AggregateIQ’s technology was used in the US for Ted Cruz’s campaign for the Republican nomination in 2016, and the company has also received millions of pounds of funding from British groups. These include Vote Leave, the main pro-Brexit campaign fronted by foreign secretary Boris Johnson.

    And for all we know, AIQ’s database could have been data curated from publicly available posts and not AIQ app users, highlighting how anything publicly done on Facebook, even a Like, is going to be collected by someone and probably sold:


    The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.

    You are what you Like in this commercial space. And we’re all in this commercial space to some extent. There really is a commercially available profile of you. It’s just distributed between the many different data brokers offering slices of it.

    Another key dynamic in all this is that Facebook’s business model appears to combine exploiting the vast information monopoly it possesses with an opposing model of effectively selling off little chunks of that data by making it available to app developers. There’s an obvious tension in exploiting a data monopoly while simultaneously selling it off, but that appears to be the most profitable path forward. And that’s probably the business model AIQ was pursuing with the data it collected from Facebook: analyzing the Facebook data gathered through apps and public scraping, categorizing it (political or non-political comments, and whether they’re positive or negative), and then selling slices of that vast internally curated content to clients.

    Aggregate as much data as possible. Analyze it. And offer pieces of that curated data pile to clients. That appears to be a business model of choice in this commercial big data arena, which is why we should assume AIQ and Cambridge Analytica were offering similar services and shouldn’t assume this particular database of 759,934 Facebook accounts is the only one of its kind. Especially given the 87 million profiles they already scraped.

    And this is a business model that’s going to apply to far more than just Facebook content. The whole spectrum of information collected on everyone is going to be part of this commercial space. And that’s part of what’s so scary: the data that gets fed into independent Big Data repositories like the AIQ/Cambridge Analytica database is going to increasingly be the curated data provided by other Big Data providers in the same business. Everyone is collecting and analyzing the curated data everyone else is regurgitating out. Just as Cambridge Analytica and AIQ offer a slew of separate interoperable services to clients with a ‘whole is greater than the sum of its parts’ synergistic quality, the entire Big Data industry is going to have a similar quality. It’s a competitive yet cooperative division of labor. Cambridge Analytica and AIQ are just the extra scary team members in a synergistic industry-wide team effort in the service of maximizing the profits made from exploiting everyone’s data for sale.

    Posted by Pterrafractyl | June 3, 2018, 9:47 pm
  17. It’s that time again. Time to learn how the Cambridge Analytica/Facebook scandal just got worse. So what’s the new low? Well, it turns out Facebook hasn’t just been sharing egregious amounts of user data with app developers. Device makers, like Apple and Samsung, have been given similar access to user data too. At least 60 device makers are known thus far.

    Except, of course, it’s worse: these device makers have actually been given EVEN MORE data than Facebook app developers received. For example, Facebook allowed the device makers access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. And some device makers could access personal information from users’ friends who thought they had turned off any sharing. So the “friends permissions” option that allowed Cambridge Analytica’s app to collect data on 87 million Facebook users even though just ~300,000 people used its app remained available to device manufacturers even after Facebook phased out the friends permissions option for app developers in 2014-2015.

    Beyond that, the New York Times examined the kind of information gathered from a BlackBerry device owned by one of its reporters and found that it wasn’t just collecting identifying information on all the reporter’s friends. It was also grabbing identifying information on those friends’ friends. That single BlackBerry was able to retrieve identifying information on nearly 295,000 people!

    Facebook justifies all this by arguing that the device makers are basically an extension of Facebook. The company also asserts that there were strict agreements on how the data could be used. But the main loophole it cites is that Facebook viewed its hardware partners as “service providers,” like a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. By categorizing the device makers as service providers, Facebook gets around the 2011 consent decree it signed with the US Federal Trade Commission over previous privacy violations: under that consent decree, Facebook does not need to seek additional permission to share friend data with service providers.

    So it’s not just Cambridge Analytica and the thousands of app developers who have been scooping up mountains of Facebook user data without people realizing it. The device makers have been doing it too. More so. Much, much more so:

    The New York Times

    Facebook Gave Device Makers Deep Access to Data on Users and Friends

    The company formed data-sharing partnerships with Apple, Samsung and
    dozens of other device makers, raising new concerns about its privacy protections.

    By GABRIEL J.X. DANCE, NICHOLAS CONFESSORE and MICHAEL LaFORGIA
    JUNE 3, 2018

    As Facebook sought to become the world’s dominant social media service, it struck agreements allowing phone and other device makers access to vast amounts of its users’ personal information.

    Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.

    But the partnerships, whose scope has not previously been reported, raise concerns about the company’s privacy protections and compliance with a 2011 consent decree with the Federal Trade Commission. Facebook allowed the device companies access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.

    Most of the partnerships remain in effect, though Facebook began winding them down in April. The company came under intensifying scrutiny by lawmakers and regulators after news reports in March that a political consulting firm, Cambridge Analytica, misused the private information of tens of millions of Facebook users.

    In the furor that followed, Facebook’s leaders said that the kind of access exploited by Cambridge in 2014 was cut off by the next year, when Facebook prohibited developers from collecting information from users’ friends. But the company officials did not disclose that Facebook had exempted the makers of cellphones, tablets and other hardware from such restrictions.

    “You might think that Facebook or the device manufacturer is trustworthy,” said Serge Egelman, a privacy researcher at the University of California, Berkeley, who studies the security of mobile apps. “But the problem is that as more and more data is collected on the device — and if it can be accessed by apps on the device — it creates serious privacy and security risks.”

    In interviews, Facebook officials defended the data sharing as consistent with its privacy policies, the F.T.C. agreement and pledges to users. They said its partnerships were governed by contracts that strictly limited use of the data, including any stored on partners’ servers. The officials added that they knew of no cases where the information had been misused.

    The company views its device partners as extensions of Facebook, serving its more than two billion users, the officials said.

    “These partnerships work very differently from the way in which app developers use our platform,” said Ime Archibong, a Facebook vice president. Unlike developers that provide games and services to Facebook users, the device partners can use Facebook data only to provide versions of “the Facebook experience,” the officials said.

    Some device partners can retrieve Facebook users’ relationship status, religion, political leaning and upcoming events, among other data. Tests by The Times showed that the partners requested and received data in the same way other third parties did.

    Facebook’s view that the device makers are not outsiders lets the partners go even further, The Times found: They can obtain data about a user’s Facebook friends, even those who have denied Facebook permission to share information with any third parties.

    In interviews, several former Facebook software engineers and security experts said they were surprised at the ability to override sharing restrictions.

    “It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said Ashkan Soltani, a research and privacy consultant who formerly served as the F.T.C.’s chief technologist.

    Details of Facebook’s partnerships have emerged amid a reckoning in Silicon Valley over the volume of personal information collected on the internet and monetized by the tech industry. The pervasive collection of data, while largely unregulated in the United States, has come under growing criticism from elected officials at home and overseas and provoked concern among consumers about how freely their information is shared.

    In a tense appearance before Congress in March, Facebook’s chief executive, Mark Zuckerberg, emphasized what he said was a company priority for Facebook users. “Every piece of content that you share on Facebook you own,” he testified. “You have complete control over who sees it and how you share it.”

    But the device partnerships provoked discussion even within Facebook as early as 2012, according to Sandy Parakilas, who at the time led third-party advertising and privacy compliance for Facebook’s platform.

    “This was flagged internally as a privacy issue,” said Mr. Parakilas, who left Facebook that year and has recently emerged as a harsh critic of the company. “It is shocking that this practice may still continue six years later, and it appears to contradict Facebook’s testimony to Congress that all friend permissions were disabled.”

    The partnerships were briefly mentioned in documents submitted to German lawmakers investigating the social media giant’s privacy practices and released by Facebook in mid-May. But Facebook provided the lawmakers with the name of only one partner — BlackBerry, maker of the once-ubiquitous mobile device — and little information about how the agreements worked.

    The submission followed testimony by Joel Kaplan, Facebook’s vice president for global public policy, during a closed-door German parliamentary hearing in April. Elisabeth Winkelmeier-Becker, one of the lawmakers who questioned Mr. Kaplan, said in an interview that she believed the data partnerships disclosed by Facebook violated users’ privacy rights.

    “What we have been trying to determine is whether Facebook has knowingly handed over user data elsewhere without explicit consent,” Ms. Winkelmeier-Becker said. “I would never have imagined that this might even be happening secretly via deals with device makers. BlackBerry users seem to have been turned into data dealers, unknowingly and unwillingly.”

    In interviews with The Times, Facebook identified other partners: Apple and Samsung, the world’s two biggest smartphone makers, and Amazon, which sells tablets.

    An Apple spokesman said the company relied on private access to Facebook data for features that enabled users to post photos to the social network without opening the Facebook app, among other things. Apple said its phones no longer had such access to Facebook as of last September.

    Usher Lieberman, a BlackBerry spokesman, said in a statement that the company used Facebook data only to give its own customers access to their Facebook networks and messages. Mr. Lieberman said that the company “did not collect or mine the Facebook data of our customers,” adding that “BlackBerry has always been in the business of protecting, not monetizing, customer data.”

    Microsoft entered a partnership with Facebook in 2008 that allowed Microsoft-powered devices to do things like add contacts and friends and receive notifications, according to a spokesman. He added that the data was stored locally on the phone and was not synced to Microsoft’s servers.

    Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers. A Facebook official said that regardless of where the data was kept, it was governed by strict agreements between the companies.

    “I am dumbfounded by the attitude that anybody in Facebook’s corporate office would think allowing third parties access to data would be a good idea,” said Henning Schulzrinne, a computer science professor at Columbia University who specializes in network security and mobile systems.

    The Cambridge Analytica scandal revealed how loosely Facebook had policed the bustling ecosystem of developers building apps on its platform. They ranged from well-known players like Zynga, the maker of the FarmVille game, to smaller ones, like a Cambridge contractor who used a quiz taken by about 300,000 Facebook users to gain access to the profiles of as many as 87 million of their friends.

    Those developers relied on Facebook’s public data channels, known as application programming interfaces, or APIs. But starting in 2007, the company also established private data channels for device manufacturers.

    At the time, mobile phones were less powerful, and relatively few of them could run stand-alone Facebook apps like those now common on smartphones. The company continued to build new private APIs for device makers through 2014, spreading user data through tens of millions of mobile devices, game consoles, televisions and other systems outside Facebook’s direct control.

    Facebook began moving to wind down the partnerships in April, after assessing its privacy and data practices in the wake of the Cambridge Analytica scandal. Mr. Archibong said the company had concluded that the partnerships were no longer needed to serve Facebook users. About 22 of them have been shut down.

    The broad access Facebook provided to device makers raises questions about its compliance with a 2011 consent decree with the F.T.C.

    The decree barred Facebook from overriding users’ privacy settings without first getting explicit consent. That agreement stemmed from an investigation that found Facebook had allowed app developers and other third parties to collect personal details about users’ friends, even when those friends had asked that their information remain private.

    After the Cambridge Analytica revelations, the F.T.C. began an investigation into whether Facebook’s continued sharing of data after 2011 violated the decree, potentially exposing the company to fines.

    Facebook officials said the private data channels did not violate the decree because the company viewed its hardware partners as “service providers,” akin to a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. According to the consent decree, Facebook does not need to seek additional permission to share friend data with service providers.

    “These contracts and partnerships are entirely consistent with Facebook’s F.T.C. consent decree,” Mr. Archibong, the Facebook official, said.

    But Jessica Rich, a former F.T.C. official who helped lead the commission’s earlier Facebook investigation, disagreed with that assessment.

    “Under Facebook’s interpretation, the exception swallows the rule,” said Ms. Rich, now with the Consumers Union. “They could argue that any sharing of data with third parties is part of the Facebook experience. And this is not at all how the public interpreted their 2014 announcement that they would limit third-party app access to friend data.”

    To test one partner’s access to Facebook’s private data channels, The Times used a reporter’s Facebook account — with about 550 friends — and a 2013 BlackBerry device, monitoring what data the device requested and received. (More recent BlackBerry devices, which run Google’s Android operating system, do not use the same private channels, BlackBerry officials said.)

    Immediately after the reporter connected the device to his Facebook account, it requested some of his profile data, including user ID, name, picture, “about” information, location, email and cellphone number. The device then retrieved the reporter’s private messages and the responses to them, along with the name and user ID of each person with whom he was communicating.

    The data flowed to a BlackBerry app known as the Hub, which was designed to let BlackBerry users view all of their messages and social media accounts in one place.

    The Hub also requested — and received — data that Facebook’s policy appears to prohibit. Since 2015, Facebook has said that apps can request only the names of friends using the same app. But the BlackBerry app had access to all of the reporter’s Facebook friends and, for most of them, returned information such as user ID, birthday, work and education history and whether they were currently online.

    The BlackBerry device was also able to retrieve identifying information for nearly 295,000 Facebook users. Most of them were second-degree Facebook friends of the reporter, or friends of friends.

    In all, Facebook empowers BlackBerry devices to access more than 50 types of information about users and their friends, The Times found.

    ———-

    “Facebook Gave Device Makers Deep Access to Data on Users and Friends” by GABRIEL J.X. DANCE, NICHOLAS CONFESSORE and MICHAEL LaFORGIA; The New York Times; 06/03/2018

    “Facebook has reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books.”

    At least 60 device makers are sitting on A LOT of Facebook data. Note how NONE of them acknowledged this before this report came out, even as the Cambridge Analytica scandal was unfolding. It’s one of those quiet lessons in how the world unfortunately works.

    And these 60+ device makers were able to access the data of users’ friends without their consent, even when those friends had changed their privacy settings to bar any sharing:


    But the partnerships, whose scope has not previously been reported, raise concerns about the company’s privacy protections and compliance with a 2011 consent decree with the Federal Trade Commission. Facebook allowed the device companies access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing, The New York Times found.

    Most of the partnerships remain in effect, though Facebook began winding them down in April. The company came under intensifying scrutiny by lawmakers and regulators after news reports in March that a political consulting firm, Cambridge Analytica, misused the private information of tens of millions of Facebook users.

    In the furor that followed, Facebook’s leaders said that the kind of access exploited by Cambridge in 2014 was cut off by the next year, when Facebook prohibited developers from collecting information from users’ friends. But the company officials did not disclose that Facebook had exempted the makers of cellphones, tablets and other hardware from such restrictions.

    “Most of the partnerships remain in effect, though Facebook began winding them down in April.”

    Yep, these data sharing partnerships largely remain in effect and didn’t end in 2014-2015 when the app developers lost access to this kind of data. It’s only now, as the Cambridge Analytica scandal unfolds, that the partnerships are being ended.

    This was all done despite a 2011 consent decree that barred Facebook from overriding users’ privacy settings without first getting explicit consent. Facebook simply categorized the device makers as “service providers”, exploiting a “service provider” loophole in the decree:


    The broad access Facebook provided to device makers raises questions about its compliance with a 2011 consent decree with the F.T.C.

    The decree barred Facebook from overriding users’ privacy settings without first getting explicit consent. That agreement stemmed from an investigation that found Facebook had allowed app developers and other third parties to collect personal details about users’ friends, even when those friends had asked that their information remain private.

    After the Cambridge Analytica revelations, the F.T.C. began an investigation into whether Facebook’s continued sharing of data after 2011 violated the decree, potentially exposing the company to fines.

    Facebook officials said the private data channels did not violate the decree because the company viewed its hardware partners as “service providers,” akin to a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. According to the consent decree, Facebook does not need to seek additional permission to share friend data with service providers.

    “These contracts and partnerships are entirely consistent with Facebook’s F.T.C. consent decree,” Mr. Archibong, the Facebook official, said.

    But Jessica Rich, a former F.T.C. official who helped lead the commission’s earlier Facebook investigation, disagreed with that assessment.

    “Under Facebook’s interpretation, the exception swallows the rule,” said Ms. Rich, now with the Consumers Union. “They could argue that any sharing of data with third parties is part of the Facebook experience. And this is not at all how the public interpreted their 2014 announcement that they would limit third-party app access to friend data.”

    It’s also worth recalling that Facebook made similar excuses for allowing app developers to grab data on users’ friends, claiming that the data was solely going to be used for “improving user experiences.” That makes Facebook’s explanation of how the device-maker data sharing program differed from the app-developer program rather amusing because, according to Facebook, the device partners can use Facebook data only to provide versions of “the Facebook experience” (which implicitly admits that app developers were using that data for a lot more than just improving user experiences):


    “You might think that Facebook or the device manufacturer is trustworthy,” said Serge Egelman, a privacy researcher at the University of California, Berkeley, who studies the security of mobile apps. “But the problem is that as more and more data is collected on the device — and if it can be accessed by apps on the device — it creates serious privacy and security risks.”

    In interviews, Facebook officials defended the data sharing as consistent with its privacy policies, the F.T.C. agreement and pledges to users. They said its partnerships were governed by contracts that strictly limited use of the data, including any stored on partners’ servers. The officials added that they knew of no cases where the information had been misused.

    “These partnerships work very differently from the way in which app developers use our platform,” said Ime Archibong, a Facebook vice president. Unlike developers that provide games and services to Facebook users, the device partners can use Facebook data only to provide versions of “the Facebook experience,” the officials said.

    ““These partnerships work very differently from the way in which app developers use our platform,” said Ime Archibong, a Facebook vice president. Unlike developers that provide games and services to Facebook users, the device partners can use Facebook data only to provide versions of “the Facebook experience,” the officials said.” LOL!

    Of course, it’s basically impossible for Facebook to know what device makers were doing with this data because, just like with app developers, these device manufacturers had the option of keeping this Facebook data on their own servers:


    In interviews with The Times, Facebook identified other partners: Apple and Samsung, the world’s two biggest smartphone makers, and Amazon, which sells tablets.

    An Apple spokesman said the company relied on private access to Facebook data for features that enabled users to post photos to the social network without opening the Facebook app, among other things. Apple said its phones no longer had such access to Facebook as of last September.

    Usher Lieberman, a BlackBerry spokesman, said in a statement that the company used Facebook data only to give its own customers access to their Facebook networks and messages. Mr. Lieberman said that the company “did not collect or mine the Facebook data of our customers,” adding that “BlackBerry has always been in the business of protecting, not monetizing, customer data.”

    Microsoft entered a partnership with Facebook in 2008 that allowed Microsoft-powered devices to do things like add contacts and friends and receive notifications, according to a spokesman. He added that the data was stored locally on the phone and was not synced to Microsoft’s servers.

    Facebook acknowledged that some partners did store users’ data — including friends’ data — on their own servers. A Facebook official said that regardless of where the data was kept, it was governed by strict agreements between the companies.

    “I am dumbfounded by the attitude that anybody in Facebook’s corporate office would think allowing third parties access to data would be a good idea,” said Henning Schulzrinne, a computer science professor at Columbia University who specializes in network security and mobile systems.

    And this data privacy nightmare apparently all started in 2007, when Facebook began building private APIs for device makers:


    The Cambridge Analytica scandal revealed how loosely Facebook had policed the bustling ecosystem of developers building apps on its platform. They ranged from well-known players like Zynga, the maker of the FarmVille game, to smaller ones, like a Cambridge contractor who used a quiz taken by about 300,000 Facebook users to gain access to the profiles of as many as 87 million of their friends.

    Those developers relied on Facebook’s public data channels, known as application programming interfaces, or APIs. But starting in 2007, the company also established private data channels for device manufacturers.

    At the time, mobile phones were less powerful, and relatively few of them could run stand-alone Facebook apps like those now common on smartphones. The company continued to build new private APIs for device makers through 2014, spreading user data through tens of millions of mobile devices, game consoles, televisions and other systems outside Facebook’s direct control.

    Facebook began moving to wind down the partnerships in April, after assessing its privacy and data practices in the wake of the Cambridge Analytica scandal. Mr. Archibong said the company had concluded that the partnerships were no longer needed to serve Facebook users. About 22 of them have been shut down.

    So what kind of data are device manufacturers actually collecting? It’s unclear if all device makers get the same level of access, but BlackBerry, for example, can access more than 50 types of information on users and their friends. Information like Facebook users’ relationship status, religion, political leaning and upcoming events:


    Some device partners can retrieve Facebook users’ relationship status, religion, political leaning and upcoming events, among other data. Tests by The Times showed that the partners requested and received data in the same way other third parties did.

    Facebook’s view that the device makers are not outsiders lets the partners go even further, The Times found: They can obtain data about a user’s Facebook friends, even those who have denied Facebook permission to share information with any third parties.

    In interviews, several former Facebook software engineers and security experts said they were surprised at the ability to override sharing restrictions.

    “It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said Ashkan Soltani, a research and privacy consultant who formerly served as the F.T.C.’s chief technologist.

    In all, Facebook empowers BlackBerry devices to access more than 50 types of information about users and their friends, The Times found.

    And as the New York Times discovered by testing a reporter’s BlackBerry device, BlackBerry was able to grab information on friends of friends: the one device the Times tested collected identifying information on nearly 295,000 Facebook users:


    The BlackBerry device was also able to retrieve identifying information for nearly 295,000 Facebook users. Most of them were second-degree Facebook friends of the reporter, or friends of friends.

    And this information was collected and sent to the “BlackBerry Hub” immediately after the reporter connected the device to his Facebook account:


    To test one partner’s access to Facebook’s private data channels, The Times used a reporter’s Facebook account — with about 550 friends — and a 2013 BlackBerry device, monitoring what data the device requested and received. (More recent BlackBerry devices, which run Google’s Android operating system, do not use the same private channels, BlackBerry officials said.)

    Immediately after the reporter connected the device to his Facebook account, it requested some of his profile data, including user ID, name, picture, “about” information, location, email and cellphone number. The device then retrieved the reporter’s private messages and the responses to them, along with the name and user ID of each person with whom he was communicating.

    The data flowed to a BlackBerry app known as the Hub, which was designed to let BlackBerry users view all of their messages and social media accounts in one place.

    The Hub also requested — and received — data that Facebook’s policy appears to prohibit. Since 2015, Facebook has said that apps can request only the names of friends using the same app. But the BlackBerry app had access to all of the reporter’s Facebook friends and, for most of them, returned information such as user ID, birthday, work and education history and whether they were currently online.
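    The scoping difference the Times describes here can be made concrete with a short sketch. This is purely illustrative Python (the function names and data are invented, not Facebook’s actual API): the post-2015 public-API rule returns only friends who also use the requesting app, while the private device channel returned the full friend list.

```python
# Hypothetical illustration of the two access models described above.
# Function names and data are invented; this is not Facebook's real API.

def public_api_friends(user_friends, app_users):
    """Post-2015 public API rule: an app may only see friends
    who also use that same app."""
    return [f for f in user_friends if f in app_users]

def private_channel_friends(user_friends):
    """Private device-partner channel as described by The Times:
    the full friend list comes back, regardless of app usage."""
    return list(user_friends)

friends = ["alice", "bob", "carol", "dave"]
hub_users = {"bob"}  # only one friend also uses the hypothetical app

print(public_api_friends(friends, hub_users))   # ['bob']
print(private_channel_friends(friends))         # ['alice', 'bob', 'carol', 'dave']
```

    Under the public rule the app sees one friend; under the private channel it sees all four, which is exactly the gap the BlackBerry Hub test exposed.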

    Not surprisingly, Facebook whistle-blower Sandy Parakilas, who left the company in 2012, recalls this data sharing arrangement triggering discussions within Facebook as early as that year. So Facebook has had internal concerns about this kind of data sharing for the past six years. Concerns that were apparently ignored:


    But the device partnerships provoked discussion even within Facebook as early as 2012, according to Sandy Parakilas, who at the time led third-party advertising and privacy compliance for Facebook’s platform.

    “This was flagged internally as a privacy issue,” said Mr. Parakilas, who left Facebook that year and has recently emerged as a harsh critic of the company. “It is shocking that this practice may still continue six years later, and it appears to contradict Facebook’s testimony to Congress that all friend permissions were disabled.”

    Also keep in mind that the main concern Sandy Parakilas recalls hearing Facebook executives express over the app developer data sharing back in 2012 was that those developers were collecting so much information that they would be able to create their own social networks. As Parakilas put it, “They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people…They were worried that they were going to build their own social networks.”

    Well, the major device makers have undoubtedly been gathering far more information than major app developers, especially when you factor in the “friends of friends” option and the fact that they’ve apparently had access to this kind of data up until now. And that means these device makers must already possess remarkably detailed social networks of their own at this point.
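    To get a feel for how fast “friends of friends” access scales, here’s a back-of-the-envelope sketch in Python. The numbers are invented assumptions, chosen purely to illustrate the arithmetic behind the Times’s finding that one account with about 550 friends exposed nearly 295,000 second-degree profiles.

```python
# Illustrative arithmetic only: the average below is an invented
# assumption chosen to land near the scale The Times reported.

direct_friends = 550          # roughly the reporter's friend count
avg_friends_per_friend = 540  # assumed average friend count per friend

# Upper bound on distinct second-degree profiles, ignoring overlap
# between friend lists:
second_degree_max = direct_friends * avg_friends_per_friend

print(second_degree_max)  # 297000
```

    Even with heavy overlap between friend lists, a single connected device can reach hundreds of thousands of profiles, which is how 270,000 app installs ballooned into tens of millions of harvested profiles in the Cambridge Analytica case.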

    So when you hear Facebook executives characterizing these device manufacturers as “extensions of Facebook”…


    The company views its device partners as extensions of Facebook, serving its more than two billion users, the officials said.

    …it’s probably the most honest thing Facebook has said about this entire scandal.

    Posted by Pterrafractyl | June 7, 2018, 10:41 pm
