The Cambridge Analytica Microcosm in Our Panoptic Macrocosm

Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook. Which would be a well-deserved unfriending after the scandalous revelations in a recent series of articles centered on the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with the far-right goals and questionable actions of the firm.

And it turns out those questionable actions by Cambridge Analytica involve a far larger and more scandalous Facebook policy brought to light by another whistle-blower, Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012.

So here’s a rough break­down of what’s been learned so far:

According to Christopher Wylie, Cambridge Analytica was “harvesting” massive amounts of data off of Facebook from people who did not give their permission, by utilizing a Facebook loophole. This “friends permissions” loophole allowed app developers to scrape information not just from the Facebook profiles of the people who agreed to use their apps but also from their friends’ profiles. In other words, if your Facebook friend downloaded Cambridge Analytica’s app, Cambridge Analytica was allowed to grab private information from your Facebook profile without your permission. And you would never know it.
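To make the mechanics of that loophole concrete, here is a minimal sketch of how a third-party app could have pulled friends’ data under the old Graph API v1.0 permission model. It is an illustration only, not a reconstruction of Cambridge Analytica’s actual app: the token, the field list, and the response handling are all assumptions.

```python
# Minimal sketch of the pre-2015 "friends permissions" model
# (Facebook Graph API v1.0). Illustrative assumptions only; this is
# not Cambridge Analytica's actual code.
import requests

ACCESS_TOKEN = "TOKEN_FROM_ONE_CONSENTING_APP_USER"  # hypothetical token
BASE = "https://graph.facebook.com/v1.0"

# The single user who installed the app and accepted its permissions...
me = requests.get(f"{BASE}/me", params={"access_token": ACCESS_TOKEN}).json()

# ...could unlock profile fields for all of their friends, none of whom
# installed the app or consented themselves. Fields like these were
# gated by extended permissions such as "friends_likes" and
# "friends_location" under the v1.0 permission model.
friends = requests.get(
    f"{BASE}/me/friends",
    params={"access_token": ACCESS_TOKEN, "fields": "id,name,likes,location"},
).json()

for friend in friends.get("data", []):
    print(friend["id"], friend.get("name"))  # one consent, many profiles
```

The asymmetry is the whole point: one consenting user with a few hundred friends yields a few hundred profiles, no further consent required.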

So how many profiles was Cambridge Analytica allowed to “harvest” utilizing this “friends permissions” feature? About 50 million, and only a tiny fraction (~270,000) of those 50 million people actually agreed to use Cambridge Analytica’s app. The rest were all their friends. So Facebook literally used the connectivity of Facebook users against them.

Keep in mind that this isn’t a new revelation. There were reports last year about how Cambridge Analytica paid ~100,000 people a dollar or two (via Amazon’s Mechanical Turk micro-task platform) to take an online survey. But the only way they could be paid was to download an app that gave Cambridge Analytica access to the profiles of all their Facebook friends, eventually yielding ~30 million “harvested” profiles. Although according to these new reports that number is closer to 50 million profiles.

Before that, there was also a report from Decem­ber of 2015 about Cam­bridge Ana­lyt­i­ca’s build­ing of “psy­cho­graph­ic pro­files” for the Ted Cruz cam­paign. And that report also includ­ed the fact that this involved Face­book data har­vest­ed large­ly with­out users’ per­mis­sions.

So the fact that Cambridge Analytica was secretly harvesting private Facebook user data without permission isn’t the big revelation here. What’s new is the revelation that what Cambridge Analytica did had been integral to Facebook’s business model for years and was very widespread.

This is where Sandy Parakilas comes into the picture. According to Parakilas, the profile-scraping loophole that Cambridge Analytica was exploiting with its app was routinely exploited by possibly hundreds of thousands of other app developers for years. Yep. It turns out that Facebook had an arrangement going back to 2007 whereby the company would get a 30 percent cut of the money app developers made off their Facebook apps, and in exchange these developers were given the ability to scrape the profiles of not just the people who used their apps but also their friends. In other words, Facebook was essentially selling the private information of its users to app developers. Secretly. Well, except it wasn’t a secret to all those app developers. That’s also part of this scandal.

This “friends permissions” feature started getting phased out around 2012, although it turns out the app harvesting data for Cambridge Analytica was among the very last allowed to use it, up into 2014.

Facebook has tried to defend itself by asserting that it only made this data available for things like academic research and that Cambridge Analytica was therefore misusing it. And academic research was in fact the cover story Cambridge Analytica used. Cambridge Analytica actually set up a shell company, Global Science Research (GSR), that was run by a Cambridge University professor, Aleksandr Kogan, and claimed to be purely interested in using that Facebook data for academic research. The collected data was then sent off to Cambridge Analytica. But according to Parakilas, Facebook was allowing developers to utilize this “friends permissions” feature for reasons as vague as “improving user experiences”. Parakilas saw plenty of apps harvesting this data for commercial purposes. Even worse, both Parakilas and Wylie paint a picture of Facebook releasing this data and then doing almost nothing to ensure it wasn’t misused.

So we’ve learned that Facebook was allowing app developers to “harvest” private data on Facebook users without their permission from 2007–2014, and now we get to perhaps the most chilling part: according to Parakilas, this data is almost certainly floating around on the black market. And it was so easy to set up an app and start collecting this kind of data that anyone with basic app-development skills could start trawling Facebook for data. And a majority of Facebook users probably had their profiles secretly “harvested” during this period. If true, that means there’s likely a massive black market of Facebook user profiles just floating around out there, and Facebook has done little to nothing to address it.

Parakilas, whose job it was to police data breaches by third-party software developers from 2011–2012, understandably grew quite concerned over the risks to user data inherent in this business model. So what did Facebook’s leadership do when he raised these concerns? They essentially responded with a “do you really want to know how this data is being used?” attitude and actively discouraged him from investigating how the data might be abused. Intentionally not knowing about abuses was another part of the business model. Cracking down on “rogue developers” was very rare, and the approval of Facebook CEO Mark Zuckerberg himself was required to get an app kicked off the platform.

Face­book has been pub­licly deny­ing alle­ga­tions like this for years. It was the pub­lic denials that led Parak­i­las to come for­ward.

And it gets worse. It turns out that Aleksandr Kogan, the University of Cambridge academic who ended up teaming up with Cambridge Analytica and built the app that harvested the data, had a remarkably close working relationship with Facebook. So close that Kogan actually co-authored an academic study published in 2015 with Facebook employees. In addition, one of Kogan’s partners in the data harvesting, Joseph Chancellor, was also an author on the study and went on to join Facebook a few months after it was published.

It also looks like Steve Ban­non was over­see­ing this entire process, although he claims to know noth­ing.

Oh, and Palan­tir, the pri­vate intel­li­gence firm with deep ties to the US nation­al secu­ri­ty state owned by far right Face­book board mem­ber Peter Thiel, appears to have had an infor­mal rela­tion­ship with Cam­bridge Ana­lyt­i­ca this whole time, with Palan­tir employ­ees report­ed­ly trav­el­ing to Cam­bridge Ana­lyt­i­ca’s office to help build the psy­cho­log­i­cal pro­files. And this state of affairs is an exten­sion of how the inter­net has been used from its very con­cep­tion a half cen­tu­ry ago.

And that’s all part of why the Great Unfriending of Facebook really is long overdue. It’s one really big reason to delete your Facebook account, composed of many, many small egregious reasons.

So let’s start taking a look at those many small reasons to delete your Facebook account with a New York Times story about Christopher Wylie, his account of the origins of Cambridge Analytica, and the crucial role Facebook “harvesting” played in providing the company with the data it needed to carry out the goals of its chief financiers: waging the kind of ‘culture war’ the billionaire far-right Mercer family and Steve Bannon wanted to wage:

The New York Times

How Trump Con­sul­tants Exploit­ed the Face­book Data of Mil­lions

by Matthew Rosen­berg, Nicholas Con­fes­sore and Car­ole Cad­wal­ladr;
03/17/2018

As the upstart vot­er-pro­fil­ing com­pa­ny Cam­bridge Ana­lyt­i­ca pre­pared to wade into the 2014 Amer­i­can midterm elec­tions, it had a prob­lem.

The firm had secured a $15 mil­lion invest­ment from Robert Mer­cer, the wealthy Repub­li­can donor, and wooed his polit­i­cal advis­er, Stephen K. Ban­non, with the promise of tools that could iden­ti­fy the per­son­al­i­ties of Amer­i­can vot­ers and influ­ence their behav­ior. But it did not have the data to make its new prod­ucts work.

So the firm har­vest­ed pri­vate infor­ma­tion from the Face­book pro­files of more than 50 mil­lion users with­out their per­mis­sion, accord­ing to for­mer Cam­bridge employ­ees, asso­ciates and doc­u­ments, mak­ing it one of the largest data leaks in the social network’s his­to­ry. The breach allowed the com­pa­ny to exploit the pri­vate social media activ­i­ty of a huge swath of the Amer­i­can elec­torate, devel­op­ing tech­niques that under­pinned its work on Pres­i­dent Trump’s cam­paign in 2016.

An exam­i­na­tion by The New York Times and The Observ­er of Lon­don reveals how Cam­bridge Analytica’s dri­ve to bring to mar­ket a poten­tial­ly pow­er­ful new weapon put the firm — and wealthy con­ser­v­a­tive investors seek­ing to reshape pol­i­tics — under scruti­ny from inves­ti­ga­tors and law­mak­ers on both sides of the Atlantic.

Christo­pher Wylie, who helped found Cam­bridge and worked there until late 2014, said of its lead­ers: “Rules don’t mat­ter for them. For them, this is a war, and it’s all fair.”

“They want to fight a cul­ture war in Amer­i­ca,” he added. “Cam­bridge Ana­lyt­i­ca was sup­posed to be the arse­nal of weapons to fight that cul­ture war.”

Details of Cambridge’s acqui­si­tion and use of Face­book data have sur­faced in sev­er­al accounts since the busi­ness began work­ing on the 2016 cam­paign, set­ting off a furi­ous debate about the mer­its of the firm’s so-called psy­cho­graph­ic mod­el­ing tech­niques.

But the full scale of the data leak involv­ing Amer­i­cans has not been pre­vi­ous­ly dis­closed — and Face­book, until now, has not acknowl­edged it. Inter­views with a half-dozen for­mer employ­ees and con­trac­tors, and a review of the firm’s emails and doc­u­ments, have revealed that Cam­bridge not only relied on the pri­vate Face­book data but still pos­sess­es most or all of the trove.

Cam­bridge paid to acquire the per­son­al infor­ma­tion through an out­side researcher who, Face­book says, claimed to be col­lect­ing it for aca­d­e­m­ic pur­pos­es.

Dur­ing a week of inquiries from The Times, Face­book down­played the scope of the leak and ques­tioned whether any of the data still remained out of its con­trol. But on Fri­day, the com­pa­ny post­ed a state­ment express­ing alarm and promis­ing to take action.

“This was a scam — and a fraud,” Paul Gre­w­al, a vice pres­i­dent and deputy gen­er­al coun­sel at the social net­work, said in a state­ment to The Times ear­li­er on Fri­day. He added that the com­pa­ny was sus­pend­ing Cam­bridge Ana­lyt­i­ca, Mr. Wylie and the researcher, Alek­san­dr Kogan, a Russ­ian-Amer­i­can aca­d­e­m­ic, from Face­book. “We will take what­ev­er steps are required to see that the data in ques­tion is delet­ed once and for all — and take action against all offend­ing par­ties,” Mr. Gre­w­al said.

Alexan­der Nix, the chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and oth­er offi­cials had repeat­ed­ly denied obtain­ing or using Face­book data, most recent­ly dur­ing a par­lia­men­tary hear­ing last month. But in a state­ment to The Times, the com­pa­ny acknowl­edged that it had acquired the data, though it blamed Mr. Kogan for vio­lat­ing Facebook’s rules and said it had delet­ed the infor­ma­tion as soon as it learned of the prob­lem two years ago.

In Britain, Cam­bridge Ana­lyt­i­ca is fac­ing inter­twined inves­ti­ga­tions by Par­lia­ment and gov­ern­ment reg­u­la­tors into alle­ga­tions that it per­formed ille­gal work on the “Brex­it” cam­paign. The coun­try has strict pri­va­cy laws, and its infor­ma­tion com­mis­sion­er announced on Sat­ur­day that she was look­ing into whether the Face­book data was “ille­gal­ly acquired and used.”

In the Unit­ed States, Mr. Mercer’s daugh­ter, Rebekah, a board mem­ber, Mr. Ban­non and Mr. Nix received warn­ings from their lawyer that it was ille­gal to employ for­eign­ers in polit­i­cal cam­paigns, accord­ing to com­pa­ny doc­u­ments and for­mer employ­ees.

Con­gres­sion­al inves­ti­ga­tors have ques­tioned Mr. Nix about the company’s role in the Trump cam­paign. And the Jus­tice Department’s spe­cial coun­sel, Robert S. Mueller III, has demand­ed the emails of Cam­bridge Ana­lyt­i­ca employ­ees who worked for the Trump team as part of his inves­ti­ga­tion into Russ­ian inter­fer­ence in the elec­tion.

While the sub­stance of Mr. Mueller’s inter­est is a close­ly guard­ed secret, doc­u­ments viewed by The Times indi­cate that the firm’s British affil­i­ate claims to have worked in Rus­sia and Ukraine. And the Wik­iLeaks founder, Julian Assange, dis­closed in Octo­ber that Mr. Nix had reached out to him dur­ing the cam­paign in hopes of obtain­ing pri­vate emails belong­ing to Mr. Trump’s Demo­c­ra­t­ic oppo­nent, Hillary Clin­ton.

The doc­u­ments also raise new ques­tions about Face­book, which is already grap­pling with intense crit­i­cism over the spread of Russ­ian pro­pa­gan­da and fake news. The data Cam­bridge col­lect­ed from pro­files, a por­tion of which was viewed by The Times, includ­ed details on users’ iden­ti­ties, friend net­works and “likes.” Only a tiny frac­tion of the users had agreed to release their infor­ma­tion to a third par­ty.

“Pro­tect­ing people’s infor­ma­tion is at the heart of every­thing we do,” Mr. Gre­w­al said. “No sys­tems were infil­trat­ed, and no pass­words or sen­si­tive pieces of infor­ma­tion were stolen or hacked.”

Still, he added, “it’s a seri­ous abuse of our rules.”

Read­ing Vot­ers’ Minds

The Bor­deaux flowed freely as Mr. Nix and sev­er­al col­leagues sat down for din­ner at the Palace Hotel in Man­hat­tan in late 2013, Mr. Wylie recalled in an inter­view. They had much to cel­e­brate.

Mr. Nix, a brash sales­man, led the small elec­tions divi­sion at SCL Group, a polit­i­cal and defense con­trac­tor. He had spent much of the year try­ing to break into the lucra­tive new world of polit­i­cal data, recruit­ing Mr. Wylie, then a 24-year-old polit­i­cal oper­a­tive with ties to vet­er­ans of Pres­i­dent Obama’s cam­paigns. Mr. Wylie was inter­est­ed in using inher­ent psy­cho­log­i­cal traits to affect vot­ers’ behav­ior and had assem­bled a team of psy­chol­o­gists and data sci­en­tists, some of them affil­i­at­ed with Cam­bridge Uni­ver­si­ty.

The group exper­i­ment­ed abroad, includ­ing in the Caribbean and Africa, where pri­va­cy rules were lax or nonex­is­tent and politi­cians employ­ing SCL were hap­py to pro­vide gov­ern­ment-held data, for­mer employ­ees said.

Then a chance meet­ing brought Mr. Nix into con­tact with Mr. Ban­non, the Bre­it­bart News fire­brand who would lat­er become a Trump cam­paign and White House advis­er, and with Mr. Mer­cer, one of the rich­est men on earth.

Mr. Nix and his col­leagues court­ed Mr. Mer­cer, who believed a sophis­ti­cat­ed data com­pa­ny could make him a king­mak­er in Repub­li­can pol­i­tics, and his daugh­ter Rebekah, who shared his con­ser­v­a­tive views. Mr. Ban­non was intrigued by the pos­si­bil­i­ty of using per­son­al­i­ty pro­fil­ing to shift America’s cul­ture and rewire its pol­i­tics, recalled Mr. Wylie and oth­er for­mer employ­ees, who spoke on the con­di­tion of anonymi­ty because they had signed nondis­clo­sure agree­ments. Mr. Ban­non and the Mer­cers declined to com­ment.

Mr. Mer­cer agreed to help finance a $1.5 mil­lion pilot project to poll vot­ers and test psy­cho­graph­ic mes­sag­ing in Virginia’s guber­na­to­r­i­al race in Novem­ber 2013, where the Repub­li­can attor­ney gen­er­al, Ken Cuc­cinel­li, ran against Ter­ry McAu­li­ffe, the Demo­c­ra­t­ic fund-rais­er. Though Mr. Cuc­cinel­li lost, Mr. Mer­cer com­mit­ted to mov­ing for­ward.

The Mer­cers want­ed results quick­ly, and more busi­ness beck­oned. In ear­ly 2014, the investor Toby Neuge­bauer and oth­er wealthy con­ser­v­a­tives were prepar­ing to put tens of mil­lions of dol­lars behind a pres­i­den­tial cam­paign for Sen­a­tor Ted Cruz of Texas, work that Mr. Nix was eager to win.

...

Mr. Wylie’s team had a big­ger prob­lem. Build­ing psy­cho­graph­ic pro­files on a nation­al scale required data the com­pa­ny could not gath­er with­out huge expense. Tra­di­tion­al ana­lyt­ics firms used vot­ing records and con­sumer pur­chase his­to­ries to try to pre­dict polit­i­cal beliefs and vot­ing behav­ior.

But those kinds of records were use­less for fig­ur­ing out whether a par­tic­u­lar vot­er was, say, a neu­rot­ic intro­vert, a reli­gious extro­vert, a fair-mind­ed lib­er­al or a fan of the occult. Those were among the psy­cho­log­i­cal traits the firm claimed would pro­vide a unique­ly pow­er­ful means of design­ing polit­i­cal mes­sages.

Mr. Wylie found a solu­tion at Cam­bridge University’s Psy­cho­met­rics Cen­tre. Researchers there had devel­oped a tech­nique to map per­son­al­i­ty traits based on what peo­ple had liked on Face­book. The researchers paid users small sums to take a per­son­al­i­ty quiz and down­load an app, which would scrape some pri­vate infor­ma­tion from their pro­files and those of their friends, activ­i­ty that Face­book per­mit­ted at the time. The approach, the sci­en­tists said, could reveal more about a per­son than their par­ents or roman­tic part­ners knew — a claim that has been dis­put­ed.

When the Psy­cho­met­rics Cen­tre declined to work with the firm, Mr. Wylie found some­one who would: Dr. Kogan, who was then a psy­chol­o­gy pro­fes­sor at the uni­ver­si­ty and knew of the tech­niques. Dr. Kogan built his own app and in June 2014 began har­vest­ing data for Cam­bridge Ana­lyt­i­ca. The busi­ness cov­ered the costs — more than $800,000 — and allowed him to keep a copy for his own research, accord­ing to com­pa­ny emails and finan­cial records.

All he divulged to Face­book, and to users in fine print, was that he was col­lect­ing infor­ma­tion for aca­d­e­m­ic pur­pos­es, the social net­work said. It did not ver­i­fy his claim. Dr. Kogan declined to pro­vide details of what hap­pened, cit­ing nondis­clo­sure agree­ments with Face­book and Cam­bridge Ana­lyt­i­ca, though he main­tained that his pro­gram was “a very stan­dard vanil­la Face­book app.”

He ulti­mate­ly pro­vid­ed over 50 mil­lion raw pro­files to the firm, Mr. Wylie said, a num­ber con­firmed by a com­pa­ny email and a for­mer col­league. Of those, rough­ly 30 mil­lion — a num­ber pre­vi­ous­ly report­ed by The Inter­cept — con­tained enough infor­ma­tion, includ­ing places of res­i­dence, that the com­pa­ny could match users to oth­er records and build psy­cho­graph­ic pro­files. Only about 270,000 users — those who par­tic­i­pat­ed in the sur­vey — had con­sent­ed to hav­ing their data har­vest­ed.

Mr. Wylie said the Face­book data was “the sav­ing grace” that let his team deliv­er the mod­els it had promised the Mer­cers.

“We want­ed as much as we could get,” he acknowl­edged. “Where it came from, who said we could have it — we weren’t real­ly ask­ing.”

Mr. Nix tells a dif­fer­ent sto­ry. Appear­ing before a par­lia­men­tary com­mit­tee last month, he described Dr. Kogan’s con­tri­bu­tions as “fruit­less.”

An Inter­na­tion­al Effort

Just as Dr. Kogan’s efforts were get­ting under­way, Mr. Mer­cer agreed to invest $15 mil­lion in a joint ven­ture with SCL’s elec­tions divi­sion. The part­ners devised a con­vo­lut­ed cor­po­rate struc­ture, form­ing a new Amer­i­can com­pa­ny, owned almost entire­ly by Mr. Mer­cer, with a license to the psy­cho­graph­ics plat­form devel­oped by Mr. Wylie’s team, accord­ing to com­pa­ny doc­u­ments. Mr. Ban­non, who became a board mem­ber and investor, chose the name: Cam­bridge Ana­lyt­i­ca.

The firm was effec­tive­ly a shell. Accord­ing to the doc­u­ments and for­mer employ­ees, any con­tracts won by Cam­bridge, orig­i­nal­ly incor­po­rat­ed in Delaware, would be ser­viced by Lon­don-based SCL and over­seen by Mr. Nix, a British cit­i­zen who held dual appoint­ments at Cam­bridge Ana­lyt­i­ca and SCL. Most SCL employ­ees and con­trac­tors were Cana­di­an, like Mr. Wylie, or Euro­pean.

But in July 2014, an Amer­i­can elec­tion lawyer advis­ing the com­pa­ny, Lau­rence Levy, warned that the arrange­ment could vio­late laws lim­it­ing the involve­ment of for­eign nation­als in Amer­i­can elec­tions.

In a memo to Mr. Ban­non, Ms. Mer­cer and Mr. Nix, the lawyer, then at the firm Bracewell & Giu­liani, warned that Mr. Nix would have to recuse him­self “from sub­stan­tive man­age­ment” of any clients involved in Unit­ed States elec­tions. The data firm would also have to find Amer­i­can cit­i­zens or green card hold­ers, Mr. Levy wrote, “to man­age the work and deci­sion mak­ing func­tions, rel­a­tive to cam­paign mes­sag­ing and expen­di­tures.”

In sum­mer and fall 2014, Cam­bridge Ana­lyt­i­ca dived into the Amer­i­can midterm elec­tions, mobi­liz­ing SCL con­trac­tors and employ­ees around the coun­try. Few Amer­i­cans were involved in the work, which includ­ed polling, focus groups and mes­sage devel­op­ment for the John Bolton Super PAC, con­ser­v­a­tive groups in Col­orado and the cam­paign of Sen­a­tor Thom Tillis, the North Car­oli­na Repub­li­can.

Cam­bridge Ana­lyt­i­ca, in its state­ment to The Times, said that all “per­son­nel in strate­gic roles were U.S. nation­als or green card hold­ers.” Mr. Nix “nev­er had any strate­gic or oper­a­tional role” in an Amer­i­can elec­tion cam­paign, the com­pa­ny said.

Whether the company’s Amer­i­can ven­tures vio­lat­ed elec­tion laws would depend on for­eign employ­ees’ roles in each cam­paign, and on whether their work count­ed as strate­gic advice under Fed­er­al Elec­tion Com­mis­sion rules.

Cam­bridge Ana­lyt­i­ca appears to have exhib­it­ed a sim­i­lar pat­tern in the 2016 elec­tion cycle, when the com­pa­ny worked for the cam­paigns of Mr. Cruz and then Mr. Trump. While Cam­bridge hired more Amer­i­cans to work on the races that year, most of its data sci­en­tists were cit­i­zens of the Unit­ed King­dom or oth­er Euro­pean coun­tries, accord­ing to two for­mer employ­ees.

Under the guid­ance of Brad Parscale, Mr. Trump’s dig­i­tal direc­tor in 2016 and now the cam­paign man­ag­er for his 2020 re-elec­tion effort, Cam­bridge per­formed a vari­ety of ser­vices, for­mer cam­paign offi­cials said. That includ­ed design­ing tar­get audi­ences for dig­i­tal ads and fund-rais­ing appeals, mod­el­ing vot­er turnout, buy­ing $5 mil­lion in tele­vi­sion ads and deter­min­ing where Mr. Trump should trav­el to best drum up sup­port.

Cam­bridge exec­u­tives have offered con­flict­ing accounts about the use of psy­cho­graph­ic data on the cam­paign. Mr. Nix has said that the firm’s pro­files helped shape Mr. Trump’s strat­e­gy — state­ments dis­put­ed by oth­er cam­paign offi­cials — but also that Cam­bridge did not have enough time to com­pre­hen­sive­ly mod­el Trump vot­ers.

In a BBC inter­view last Decem­ber, Mr. Nix said that the Trump efforts drew on “lega­cy psy­cho­graph­ics” built for the Cruz cam­paign.

After the Leak

By ear­ly 2015, Mr. Wylie and more than half his orig­i­nal team of about a dozen peo­ple had left the com­pa­ny. Most were lib­er­al-lean­ing, and had grown dis­en­chant­ed with work­ing on behalf of the hard-right can­di­dates the Mer­cer fam­i­ly favored.

Cam­bridge Ana­lyt­i­ca, in its state­ment, said that Mr. Wylie had left to start a rival firm, and that it lat­er took legal action against him to enforce intel­lec­tu­al prop­er­ty claims. It char­ac­ter­ized Mr. Wylie and oth­er for­mer “con­trac­tors” as engag­ing in “what is clear­ly a mali­cious attempt to hurt the com­pa­ny.”

Near the end of that year, a report in The Guardian revealed that Cam­bridge Ana­lyt­i­ca was using pri­vate Face­book data on the Cruz cam­paign, send­ing Face­book scram­bling. In a state­ment at the time, Face­book promised that it was “care­ful­ly inves­ti­gat­ing this sit­u­a­tion” and would require any com­pa­ny mis­us­ing its data to destroy it.

Face­book ver­i­fied the leak and — with­out pub­licly acknowl­edg­ing it — sought to secure the infor­ma­tion, efforts that con­tin­ued as recent­ly as August 2016. That month, lawyers for the social net­work reached out to Cam­bridge Ana­lyt­i­ca con­trac­tors. “This data was obtained and used with­out per­mis­sion,” said a let­ter that was obtained by the Times. “It can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly.”

Mr. Gre­w­al, the Face­book deputy gen­er­al coun­sel, said in a state­ment that both Dr. Kogan and “SCL Group and Cam­bridge Ana­lyt­i­ca cer­ti­fied to us that they destroyed the data in ques­tion.”

But copies of the data still remain beyond Facebook’s con­trol. The Times viewed a set of raw data from the pro­files Cam­bridge Ana­lyt­i­ca obtained.

While Mr. Nix has told law­mak­ers that the com­pa­ny does not have Face­book data, a for­mer employ­ee said that he had recent­ly seen hun­dreds of giga­bytes on Cam­bridge servers, and that the files were not encrypt­ed.

Today, as Cam­bridge Ana­lyt­i­ca seeks to expand its busi­ness in the Unit­ed States and over­seas, Mr. Nix has men­tioned some ques­tion­able prac­tices. This Jan­u­ary, in under­cov­er footage filmed by Chan­nel 4 News in Britain and viewed by The Times, he boast­ed of employ­ing front com­pa­nies and for­mer spies on behalf of polit­i­cal clients around the world, and even sug­gest­ed ways to entrap politi­cians in com­pro­mis­ing sit­u­a­tions.

All the scruti­ny appears to have dam­aged Cam­bridge Analytica’s polit­i­cal busi­ness. No Amer­i­can cam­paigns or “super PACs” have yet report­ed pay­ing the com­pa­ny for work in the 2018 midterms, and it is unclear whether Cam­bridge will be asked to join Mr. Trump’s re-elec­tion cam­paign.

In the mean­time, Mr. Nix is seek­ing to take psy­cho­graph­ics to the com­mer­cial adver­tis­ing mar­ket. He has repo­si­tioned him­self as a guru for the dig­i­tal ad age — a “Math Man,” he puts it. In the Unit­ed States last year, a for­mer employ­ee said, Cam­bridge pitched Mer­cedes-Benz, MetLife and the brew­er AB InBev, but has not signed them on.

———-

“How Trump Con­sul­tants Exploit­ed the Face­book Data of Mil­lions” by Matthew Rosen­berg, Nicholas Con­fes­sore and Car­ole Cad­wal­ladr; The New York Times; 03/17/2018

“They want to fight a cul­ture war in Amer­i­ca,” he added. “Cam­bridge Ana­lyt­i­ca was sup­posed to be the arse­nal of weapons to fight that cul­ture war.”

Cam­bridge Ana­lyt­i­ca was sup­posed to be the arse­nal of weapons to fight the cul­ture war Cam­bridge Ana­lyt­i­ca’s lead­er­ship want­ed to wage. But that arse­nal could­n’t be built with­out data on what makes us ‘tick’. That’s where Face­book pro­file har­vest­ing came in:

The firm had secured a $15 mil­lion invest­ment from Robert Mer­cer, the wealthy Repub­li­can donor, and wooed his polit­i­cal advis­er, Stephen K. Ban­non, with the promise of tools that could iden­ti­fy the per­son­al­i­ties of Amer­i­can vot­ers and influ­ence their behav­ior. But it did not have the data to make its new prod­ucts work.

So the firm har­vest­ed pri­vate infor­ma­tion from the Face­book pro­files of more than 50 mil­lion users with­out their per­mis­sion, accord­ing to for­mer Cam­bridge employ­ees, asso­ciates and doc­u­ments, mak­ing it one of the largest data leaks in the social network’s his­to­ry. The breach allowed the com­pa­ny to exploit the pri­vate social media activ­i­ty of a huge swath of the Amer­i­can elec­torate, devel­op­ing tech­niques that under­pinned its work on Pres­i­dent Trump’s cam­paign in 2016.

An exam­i­na­tion by The New York Times and The Observ­er of Lon­don reveals how Cam­bridge Analytica’s dri­ve to bring to mar­ket a poten­tial­ly pow­er­ful new weapon put the firm — and wealthy con­ser­v­a­tive investors seek­ing to reshape pol­i­tics — under scruti­ny from inves­ti­ga­tors and law­mak­ers on both sides of the Atlantic.

Christo­pher Wylie, who helped found Cam­bridge and worked there until late 2014, said of its lead­ers: “Rules don’t mat­ter for them. For them, this is a war, and it’s all fair.”
...

And the acquisition of these 50 million Facebook profiles had never been acknowledged by Facebook, until now. And most or perhaps all of that data is still in the hands of Cambridge Analytica:

...
But the full scale of the data leak involv­ing Amer­i­cans has not been pre­vi­ous­ly dis­closed — and Face­book, until now, has not acknowl­edged it. Inter­views with a half-dozen for­mer employ­ees and con­trac­tors, and a review of the firm’s emails and doc­u­ments, have revealed that Cam­bridge not only relied on the pri­vate Face­book data but still pos­sess­es most or all of the trove.
...

And Facebook isn’t alone in suddenly discovering that its data was “harvested” by Cambridge Analytica. Cambridge Analytica itself wouldn’t admit this either. Until now. Now Cambridge Analytica admits it did indeed obtain Facebook data. But the company blames it all on Aleksandr Kogan, the Cambridge University academic who ran the front company that paid people to take the psychological profile surveys, saying he violated Facebook’s data usage rules. It also claims it deleted all the “harvested” information two years ago, as soon as it learned there was a problem. That’s Cambridge Analytica’s new story and it’s sticking to it. For now:

...
Alexan­der Nix, the chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and oth­er offi­cials had repeat­ed­ly denied obtain­ing or using Face­book data, most recent­ly dur­ing a par­lia­men­tary hear­ing last month. But in a state­ment to The Times, the com­pa­ny acknowl­edged that it had acquired the data, though it blamed Mr. Kogan for vio­lat­ing Facebook’s rules and said it had delet­ed the infor­ma­tion as soon as it learned of the prob­lem two years ago.
...

But Christopher Wylie has a very different recollection of events. In 2013, Wylie was a 24-year-old political operative with ties to veterans of President Obama’s campaigns who was interested in using psychological traits to affect voters’ behavior. He even had a team of psychologists and data scientists, some of them affiliated with Cambridge University (where Aleksandr Kogan was also working at the time). And that expertise in psychological profiling for political purposes is why Mr. Nix recruited Wylie and his team.

Then Nix has a chance meeting with Steve Bannon and Robert Mercer. Mercer shows interest in the company because he believes it can make him a Republican kingmaker, while Bannon is focused on the possibility of using personality profiling to shift America’s culture and rewire its politics. Mercer ends up investing $1.5 million in a pilot project: polling voters and testing psychographic messaging in Virginia’s 2013 gubernatorial race:

...
The Bor­deaux flowed freely as Mr. Nix and sev­er­al col­leagues sat down for din­ner at the Palace Hotel in Man­hat­tan in late 2013, Mr. Wylie recalled in an inter­view. They had much to cel­e­brate.

Mr. Nix, a brash sales­man, led the small elec­tions divi­sion at SCL Group, a polit­i­cal and defense con­trac­tor. He had spent much of the year try­ing to break into the lucra­tive new world of polit­i­cal data, recruit­ing Mr. Wylie, then a 24-year-old polit­i­cal oper­a­tive with ties to vet­er­ans of Pres­i­dent Obama’s cam­paigns. Mr. Wylie was inter­est­ed in using inher­ent psy­cho­log­i­cal traits to affect vot­ers’ behav­ior and had assem­bled a team of psy­chol­o­gists and data sci­en­tists, some of them affil­i­at­ed with Cam­bridge Uni­ver­si­ty.

The group exper­i­ment­ed abroad, includ­ing in the Caribbean and Africa, where pri­va­cy rules were lax or nonex­is­tent and politi­cians employ­ing SCL were hap­py to pro­vide gov­ern­ment-held data, for­mer employ­ees said.

Then a chance meet­ing brought Mr. Nix into con­tact with Mr. Ban­non, the Bre­it­bart News fire­brand who would lat­er become a Trump cam­paign and White House advis­er, and with Mr. Mer­cer, one of the rich­est men on earth.

Mr. Nix and his col­leagues court­ed Mr. Mer­cer, who believed a sophis­ti­cat­ed data com­pa­ny could make him a king­mak­er in Repub­li­can pol­i­tics, and his daugh­ter Rebekah, who shared his con­ser­v­a­tive views. Mr. Ban­non was intrigued by the pos­si­bil­i­ty of using per­son­al­i­ty pro­fil­ing to shift America’s cul­ture and rewire its pol­i­tics, recalled Mr. Wylie and oth­er for­mer employ­ees, who spoke on the con­di­tion of anonymi­ty because they had signed nondis­clo­sure agree­ments. Mr. Ban­non and the Mer­cers declined to com­ment.

Mr. Mer­cer agreed to help finance a $1.5 mil­lion pilot project to poll vot­ers and test psy­cho­graph­ic mes­sag­ing in Virginia’s guber­na­to­r­i­al race in Novem­ber 2013, where the Repub­li­can attor­ney gen­er­al, Ken Cuc­cinel­li, ran against Ter­ry McAu­li­ffe, the Demo­c­ra­t­ic fund-rais­er. Though Mr. Cuc­cinel­li lost, Mr. Mer­cer com­mit­ted to mov­ing for­ward.
...

So the pilot project proceeded, but there was a problem: Wylie’s team simply did not have the data it needed. They only had the kind of data traditional analytics firms had: voting records and consumer purchase histories. And getting the kind of data they wanted, data offering insight into voters’ neuroticisms and psychological traits, could be very expensive:

...
The Mer­cers want­ed results quick­ly, and more busi­ness beck­oned. In ear­ly 2014, the investor Toby Neuge­bauer and oth­er wealthy con­ser­v­a­tives were prepar­ing to put tens of mil­lions of dol­lars behind a pres­i­den­tial cam­paign for Sen­a­tor Ted Cruz of Texas, work that Mr. Nix was eager to win.

...

Mr. Wylie’s team had a big­ger prob­lem. Build­ing psy­cho­graph­ic pro­files on a nation­al scale required data the com­pa­ny could not gath­er with­out huge expense. Tra­di­tion­al ana­lyt­ics firms used vot­ing records and con­sumer pur­chase his­to­ries to try to pre­dict polit­i­cal beliefs and vot­ing behav­ior.

But those kinds of records were use­less for fig­ur­ing out whether a par­tic­u­lar vot­er was, say, a neu­rot­ic intro­vert, a reli­gious extro­vert, a fair-mind­ed lib­er­al or a fan of the occult. Those were among the psy­cho­log­i­cal traits the firm claimed would pro­vide a unique­ly pow­er­ful means of design­ing polit­i­cal mes­sages.
...

And that’s where Aleksandr Kogan enters the picture. First, Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of setup he needed. Researchers there claimed to have developed techniques for mapping personality traits based on what people “liked” on Facebook. Better yet, this team was already paying users small sums to take a personality quiz and download an app that would scrape private information from their Facebook profiles and from their friends’ Facebook profiles. In other words, Cambridge University’s Psychometrics Centre was already employing exactly the kind of “harvesting” model Kogan and Cambridge Analytica eventually ended up using.
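As a rough illustration of what “mapping personality traits based on likes” means in practice, here is a minimal sketch of the general approach: treat each page-like as a binary feature and regress it against survey-derived trait scores. The data, names, and model choice below are invented for illustration; this is not the Psychometrics Centre’s actual methodology.

```python
# Toy sketch of likes-based trait prediction. All data here is
# synthetic; the approach (binary like-features regressed against
# quiz-derived trait scores) is the general idea, not the Centre's code.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_users, n_pages = 1_000, 500                    # hypothetical sample
likes = rng.integers(0, 2, (n_users, n_pages))   # 1 = user liked page
openness = rng.normal(0.0, 1.0, n_users)         # trait score from the quiz

# Fit on the minority who actually took the personality quiz...
model = Ridge(alpha=1.0).fit(likes, openness)

# ...then apply to anyone whose likes were harvested, friends included,
# to estimate a trait they never reported.
estimated = model.predict(likes[:5])
print(estimated)
```

Once a model like this is trained on the quiz-takers, every harvested profile with a like history becomes scoreable, which is what made the friends’ data so valuable.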

But there was a prob­lem for Wylie and his team: Cam­bridge University’s Psy­cho­met­rics Cen­tre declined to work with them:

...
Mr. Wylie found a solu­tion at Cam­bridge University’s Psy­cho­met­rics Cen­tre. Researchers there had devel­oped a tech­nique to map per­son­al­i­ty traits based on what peo­ple had liked on Face­book. The researchers paid users small sums to take a per­son­al­i­ty quiz and down­load an app, which would scrape some pri­vate infor­ma­tion from their pro­files and those of their friends, activ­i­ty that Face­book per­mit­ted at the time. The approach, the sci­en­tists said, could reveal more about a per­son than their par­ents or roman­tic part­ners knew — a claim that has been dis­put­ed.
...

But it wasn’t a particularly big problem, because Wylie found another Cambridge University psychology professor who was familiar with the techniques and willing to do the job: Aleksandr Kogan. So Kogan built his own psychological profile app and began harvesting data for Cambridge Analytica in June 2014. Kogan was even allowed to keep the harvested data for his own research, according to his contract with Cambridge Analytica. According to Facebook, the only thing Kogan told the company, and told the users of his app in the fine print, was that he was collecting information for academic purposes. And Facebook never appears to have attempted to verify that claim:

...
When the Psy­cho­met­rics Cen­tre declined to work with the firm, Mr. Wylie found some­one who would: Dr. Kogan, who was then a psy­chol­o­gy pro­fes­sor at the uni­ver­si­ty and knew of the tech­niques. Dr. Kogan built his own app and in June 2014 began har­vest­ing data for Cam­bridge Ana­lyt­i­ca. The busi­ness cov­ered the costs — more than $800,000 — and allowed him to keep a copy for his own research, accord­ing to com­pa­ny emails and finan­cial records.

All he divulged to Face­book, and to users in fine print, was that he was col­lect­ing infor­ma­tion for aca­d­e­m­ic pur­pos­es, the social net­work said. It did not ver­i­fy his claim. Dr. Kogan declined to pro­vide details of what hap­pened, cit­ing nondis­clo­sure agree­ments with Face­book and Cam­bridge Ana­lyt­i­ca, though he main­tained that his pro­gram was “a very stan­dard vanil­la Face­book app.”
...

In the end, Kogan’s app man­aged to “har­vest” 50 mil­lion Face­book pro­files based on a mere 270,000 peo­ple actu­al­ly sign­ing up for Kogan’s app. So for each per­son who signed up for the app there were ~185 oth­er peo­ple who had their pro­files sent to Kogan too.

And 30 million of those profiles contained information, like places of residence, that allowed the firm to match those Facebook profiles with other records (presumably non-Facebook records) and build psychographic profiles, implying that those 30 million records were mapped to real-life people:

...
He ulti­mate­ly pro­vid­ed over 50 mil­lion raw pro­files to the firm, Mr. Wylie said, a num­ber con­firmed by a com­pa­ny email and a for­mer col­league. Of those, rough­ly 30 mil­lion — a num­ber pre­vi­ous­ly report­ed by The Inter­cept — con­tained enough infor­ma­tion, includ­ing places of res­i­dence, that the com­pa­ny could match users to oth­er records and build psy­cho­graph­ic pro­files. Only about 270,000 users — those who par­tic­i­pat­ed in the sur­vey — had con­sent­ed to hav­ing their data har­vest­ed.

Mr. Wylie said the Face­book data was “the sav­ing grace” that let his team deliv­er the mod­els it had promised the Mer­cers.
...

So this harvesting starts in mid-2014, but by early 2015, Wylie and more than half his original team leave the firm to start a rival firm, although it sounds like concerns over the far-right causes they were working for were also behind their departure:

...
By ear­ly 2015, Mr. Wylie and more than half his orig­i­nal team of about a dozen peo­ple had left the com­pa­ny. Most were lib­er­al-lean­ing, and had grown dis­en­chant­ed with work­ing on behalf of the hard-right can­di­dates the Mer­cer fam­i­ly favored.

Cam­bridge Ana­lyt­i­ca, in its state­ment, said that Mr. Wylie had left to start a rival firm, and that it lat­er took legal action against him to enforce intel­lec­tu­al prop­er­ty claims. It char­ac­ter­ized Mr. Wylie and oth­er for­mer “con­trac­tors” as engag­ing in “what is clear­ly a mali­cious attempt to hurt the com­pa­ny.”
...

Finally, this whole scandal goes public. Well, at least partially: at the end of 2015, the Guardian reports on the Facebook profile collection scheme Cambridge Analytica was running for the Ted Cruz campaign. Facebook didn’t publicly acknowledge the truth of the report, but it did publicly state that it was “carefully investigating this situation.” Facebook also sent a letter to Cambridge Analytica demanding that it destroy the data...except the letter wasn’t sent until August of 2016.

...
Near the end of that year, a report in The Guardian revealed that Cam­bridge Ana­lyt­i­ca was using pri­vate Face­book data on the Cruz cam­paign, send­ing Face­book scram­bling. In a state­ment at the time, Face­book promised that it was “care­ful­ly inves­ti­gat­ing this sit­u­a­tion” and would require any com­pa­ny mis­us­ing its data to destroy it.

Face­book ver­i­fied the leak and — with­out pub­licly acknowl­edg­ing it — sought to secure the infor­ma­tion, efforts that con­tin­ued as recent­ly as August 2016. That month, lawyers for the social net­work reached out to Cam­bridge Ana­lyt­i­ca con­trac­tors. “This data was obtained and used with­out per­mis­sion,” said a let­ter that was obtained by the Times. “It can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly.”
...

Facebook now claims that both Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But, of course, this was a lie. The New York Times was shown sets of the raw data.

And even more dis­turb­ing, a for­mer Cam­bridge Ana­lyt­i­ca employ­ee claims he recent­ly saw hun­dreds of giga­bytes on Cam­bridge Ana­lyt­i­ca’s servers. Unen­crypt­ed. Which means that data could poten­tial­ly be grabbed by any Cam­bridge Ana­lyt­i­ca employ­ee with access to that serv­er:

...
Mr. Gre­w­al, the Face­book deputy gen­er­al coun­sel, said in a state­ment that both Dr. Kogan and “SCL Group and Cam­bridge Ana­lyt­i­ca cer­ti­fied to us that they destroyed the data in ques­tion.”

But copies of the data still remain beyond Facebook’s con­trol. The Times viewed a set of raw data from the pro­files Cam­bridge Ana­lyt­i­ca obtained.

While Mr. Nix has told law­mak­ers that the com­pa­ny does not have Face­book data, a for­mer employ­ee said that he had recent­ly seen hun­dreds of giga­bytes on Cam­bridge servers, and that the files were not encrypt­ed.
...

So, to sum­ma­rize the key points from this New York Times arti­cle:

1. In 2013, the seeds of Cambridge Analytica are planted when Alexander Nix, then head of the small elections division at SCL Group, recruits Christopher Wylie and his team of psychologists and data scientists to help develop a “political data” unit at the company, with an eye on the 2014 US midterms.

2. By chance, Nix and Wylie meet Steve Bannon and Robert Mercer, who are quickly sold on the idea of psychographic profiling for political purposes. Bannon is intrigued by the idea of using this data to wage the “culture war.” Mercer agrees to invest $1.5 million in a pilot project involving the Virginia gubernatorial race. Their success is limited, as Wylie soon discovers that they don’t have the data they really need to carry out their psychographic profiling project. But Robert Mercer remains committed to the project.

3. Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of data they were seeking: data collected via an app administered through Facebook, where people were paid small amounts of money to take a survey and, in exchange, allowed the Centre to scrape their Facebook profiles as well as the profiles of all their Facebook friends.

4. Cambridge University’s Psychometrics Centre rejected Wylie’s offer to work together, but another Cambridge University psychology professor was willing to do so: Aleksandr Kogan. Kogan proceeded to start a company (as a front for Cambridge Analytica) and develop his own app, getting ~270,000 people to download it and give their permission for their profiles to be collected. But using the “friends permission” feature, Kogan’s app ended up collecting ~50 million Facebook profiles from the friends of those 270,000 people. ~30 million of those profiles were matched to US voters.

5. By early 2015, Wylie and his left-leaning team members leave Cambridge Analytica and form their own company, apparently due to concerns over the far-right goals of the firm.

6. Cambridge Analytica goes on to work for the Ted Cruz campaign. In late 2015, it’s reported that Cambridge Analytica’s work for Cruz involved using Facebook data from people who didn’t give permission. Facebook issues a vague statement about how it’s going to investigate.

7. In August 2016, Face­book sends a let­ter to Cam­bridge Ana­lyt­i­ca assert­ing that the data was obtained and used with­out per­mis­sion and must be delet­ed imme­di­ate­ly. The New York Times was just shown copies of exact­ly that data to write this arti­cle. Hun­dreds of giga­bytes of data that is com­plete­ly out­side Face­book’s con­trol.

8. Cam­bridge Ana­lyt­i­ca CEO (now for­mer CEO) Alexan­der Nix told law­mak­ers that the firm did­n’t pos­sess any Face­book data. So he was clear­ly com­plete­ly lying.

9. Final­ly, a for­mer Cam­bridge Ana­lyt­i­ca employ­ee showed the New York Times hun­dreds of giga­bytes of Face­book data. And it was unen­crypt­ed, so any­one with access to it could make a copy and give it to who­ev­er they want.

And that’s what we learned from just the New York Times’s version of this story. The Guardian’s Observer was also talking with Christopher Wylie and other Cambridge Analytica whistle-blowers. And while it largely covers the same story as the New York Times report, the Observer article contains some additional details:

1. For starters, the following article notes that Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app, and barred it from being sold on or used for advertising. That’s important to note because the stated use of the data grabbed by Aleksandr Kogan’s app was for research purposes. But “improving user experience in the app” is a far more generic reason for grabbing that data than academic research. And that hints at something we’re going to see below from a Facebook whistle-blower: all sorts of app developers were grabbing this kind of data using the ‘friends’ loophole for reasons that had absolutely nothing to do with academic purposes, and Facebook deemed this fine.

2. Facebook didn’t formally suspend Cambridge Analytica and Aleksandr Kogan from the platform until one day before the Observer article was published, more than two years after the initial reports in late 2015 about Cambridge Analytica misusing Facebook data for the Ted Cruz campaign. So if Facebook felt that Cambridge Analytica and Aleksandr Kogan were improperly obtaining and misusing its data, it sure tried hard not to let on until the very last moment.

3. When asked by MPs whether Cambridge Analytica had Facebook data, Simon Milner, Facebook’s UK policy director, said: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.” Which, again, as we’re going to see, was a total lie according to a Facebook whistle-blower, because Facebook was routinely providing exactly the kind of data Kogan’s app was collecting to thousands of developers.

4. Alek­san­dr Kogan had a license from Face­book to col­lect pro­file data, but for research pur­pos­es, so when he used the data for com­mer­cial pur­pos­es he was vio­lat­ing his agree­ment, accord­ing to the arti­cle. Also, Kogan main­tains every­thing he did was legal, and says he had a “close work­ing rela­tion­ship” with Face­book, which had grant­ed him per­mis­sion for his apps. And as we’re going to see in sub­se­quent arti­cles, it does indeed look like Kogan is cor­rect and he was very open about using the data from the Cam­bridge Ana­lyt­i­ca app for com­mer­cial pur­pos­es and Face­book had no prob­lem with this.

5. In addi­tion to being a Cam­bridge Uni­ver­si­ty pro­fes­sor, Alek­san­dr Kogan has links to a Russ­ian uni­ver­si­ty and took Russ­ian grants for research. This will undoubt­ed­ly raise spec­u­la­tion about the pos­si­bil­i­ty that Kogan’s data was hand­ed over to the Krem­lin and used in the social-media influ­enc­ing cam­paign car­ried out by the Krem­lin-linked Inter­net Research Agency. If so, it’s still impor­tant to keep in mind that, based on what we’re going to see from Face­book whis­tle-blow­er Sandy Parak­i­las, the Krem­lin could have eas­i­ly set up all sorts of Face­book apps for col­lect­ing this kind of data because appar­ent­ly any­one could do it as long as the data was for “improv­ing the user expe­ri­ence”. That’s how obscene this sit­u­a­tion is. Kogan was not at all need­ed to pro­vide this data to the Krem­lin because it was so easy for any­one to obtain. In oth­er words, we should assume all sorts of gov­ern­ments have this kind of data.

6. The legal letter sent by Facebook to Cambridge Analytica in August 2016, demanding that it delete the data, was sent just days before it was officially announced that Steve Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him. It sure seems like Facebook knew about Bannon’s involvement with Cambridge Analytica, and that he was about to become Trump’s campaign manager and bring the firm into the campaign.

7. Steve Bannon’s lawyer said he had no com­ment because his client “knows noth­ing about the claims being assert­ed”. He added: “The first Mr Ban­non heard of these reports was from media inquiries in the past few days.”

So as we can see, like the prover­bial onion, the more lay­ers you peel back on the sto­ry Cam­bridge Ana­lyt­i­ca and Face­book have been ped­dling about how this data was obtained and used, the more acrid and mal­odor­ous it gets. With a dis­tinct tinge of BS:

The Guardian

Revealed: 50 mil­lion Face­book pro­files har­vest­ed for Cam­bridge Ana­lyt­i­ca in major data breach

Whistle­blow­er describes how firm linked to for­mer Trump advis­er Steve Ban­non com­piled user data to tar­get Amer­i­can vot­ers

Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son

Sat 17 Mar 2018 18.03 EDT

The data ana­lyt­ics firm that worked with Don­ald Trump’s elec­tion team and the win­ning Brex­it cam­paign har­vest­ed mil­lions of Face­book pro­files of US vot­ers, in one of the tech giant’s biggest ever data breach­es, and used them to build a pow­er­ful soft­ware pro­gram to pre­dict and influ­ence choic­es at the bal­lot box.

A whistle­blow­er has revealed to the Observ­er how Cam­bridge Ana­lyt­i­ca – a com­pa­ny owned by the hedge fund bil­lion­aire Robert Mer­cer, and head­ed at the time by Trump’s key advis­er Steve Ban­non – used per­son­al infor­ma­tion tak­en with­out autho­ri­sa­tion in ear­ly 2014 to build a sys­tem that could pro­file indi­vid­ual US vot­ers, in order to tar­get them with per­son­alised polit­i­cal adver­tise­ments.

Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

Doc­u­ments seen by the Observ­er, and con­firmed by a Face­book state­ment, show that by late 2015 the com­pa­ny had found out that infor­ma­tion had been har­vest­ed on an unprece­dent­ed scale. How­ev­er, at the time it failed to alert users and took only lim­it­ed steps to recov­er and secure the pri­vate infor­ma­tion of more than 50 mil­lion indi­vid­u­als.

The New York Times is report­ing that copies of the data har­vest­ed for Cam­bridge Ana­lyt­i­ca could still be found online; its report­ing team had viewed some of the raw data.

The data was col­lect­ed through an app called thi­sisy­our­dig­i­tal­life, built by aca­d­e­m­ic Alek­san­dr Kogan, sep­a­rate­ly from his work at Cam­bridge Uni­ver­si­ty. Through his com­pa­ny Glob­al Sci­ence Research (GSR), in col­lab­o­ra­tion with Cam­bridge Ana­lyt­i­ca, hun­dreds of thou­sands of users were paid to take a per­son­al­i­ty test and agreed to have their data col­lect­ed for aca­d­e­m­ic use.

How­ev­er, the app also col­lect­ed the infor­ma­tion of the test-tak­ers’ Face­book friends, lead­ing to the accu­mu­la­tion of a data pool tens of mil­lions-strong. Facebook’s “plat­form pol­i­cy” allowed only col­lec­tion of friends’ data to improve user expe­ri­ence in the app and barred it being sold on or used for adver­tis­ing. The dis­cov­ery of the unprece­dent­ed data har­vest­ing, and the use to which it was put, rais­es urgent new ques­tions about Facebook’s role in tar­get­ing vot­ers in the US pres­i­den­tial elec­tion. It comes only weeks after indict­ments of 13 Rus­sians by the spe­cial coun­sel Robert Mueller which stat­ed they had used the plat­form to per­pe­trate “infor­ma­tion war­fare” against the US.

Cam­bridge Ana­lyt­i­ca and Face­book are one focus of an inquiry into data and pol­i­tics by the British Infor­ma­tion Commissioner’s Office. Sep­a­rate­ly, the Elec­toral Com­mis­sion is also inves­ti­gat­ing what role Cam­bridge Ana­lyt­i­ca played in the EU ref­er­en­dum.

...

On Fri­day, four days after the Observ­er sought com­ment for this sto­ry, but more than two years after the data breach was first report­ed, Face­book announced that it was sus­pend­ing Cam­bridge Ana­lyt­i­ca and Kogan from the plat­form, pend­ing fur­ther infor­ma­tion over mis­use of data. Sep­a­rate­ly, Facebook’s exter­nal lawyers warned the Observ­er it was mak­ing “false and defam­a­to­ry” alle­ga­tions, and reserved Facebook’s legal posi­tion.

The rev­e­la­tions pro­voked wide­spread out­rage. The Mass­a­chu­setts Attor­ney Gen­er­al Mau­ra Healey announced that the state would be launch­ing an inves­ti­ga­tion. “Res­i­dents deserve answers imme­di­ate­ly from Face­book and Cam­bridge Ana­lyt­i­ca,” she said on Twit­ter.

The Demo­c­ra­t­ic sen­a­tor Mark Warn­er said the har­vest­ing of data on such a vast scale for polit­i­cal tar­get­ing under­lined the need for Con­gress to improve con­trols. He has pro­posed an Hon­est Ads Act to reg­u­late online polit­i­cal adver­tis­ing the same way as tele­vi­sion, radio and print. “This sto­ry is more evi­dence that the online polit­i­cal adver­tis­ing mar­ket is essen­tial­ly the Wild West. Whether it’s allow­ing Rus­sians to pur­chase polit­i­cal ads, or exten­sive micro-tar­get­ing based on ill-got­ten user data, it’s clear that, left unreg­u­lat­ed, this mar­ket will con­tin­ue to be prone to decep­tion and lack­ing in trans­paren­cy,” he said.

Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Mil­ner, Facebook’s UK pol­i­cy direc­tor, when asked if Cam­bridge Ana­lyt­i­ca had Face­book data, told MPs: “They may have lots of data but it will not be Face­book user data. It may be data about peo­ple who are on Face­book that they have gath­ered them­selves, but it is not data that we have pro­vid­ed.”

Cam­bridge Analytica’s chief exec­u­tive, Alexan­der Nix, told the inquiry: “We do not work with Face­book data and we do not have Face­book data.”

Wylie, a Cana­di­an data ana­lyt­ics expert who worked with Cam­bridge Ana­lyt­i­ca and Kogan to devise and imple­ment the scheme, showed a dossier of evi­dence about the data mis­use to the Observ­er which appears to raise ques­tions about their tes­ti­mo­ny. He has passed it to the Nation­al Crime Agency’s cyber­crime unit and the Infor­ma­tion Commissioner’s Office. It includes emails, invoic­es, con­tracts and bank trans­fers that reveal more than 50 mil­lion pro­files – most­ly belong­ing to reg­is­tered US vot­ers – were har­vest­ed from the site in one of the largest-ever breach­es of Face­book data. Face­book on Fri­day said that it was also sus­pend­ing Wylie from access­ing the plat­form while it car­ried out its inves­ti­ga­tion, despite his role as a whistle­blow­er.

At the time of the data breach, Wylie was a Cam­bridge Ana­lyt­i­ca employ­ee, but Face­book described him as work­ing for Eunoia Tech­nolo­gies, a firm he set up on his own after leav­ing his for­mer employ­er in late 2014.

The evi­dence Wylie sup­plied to UK and US author­i­ties includes a let­ter from Facebook’s own lawyers sent to him in August 2016, ask­ing him to destroy any data he held that had been col­lect­ed by GSR, the com­pa­ny set up by Kogan to har­vest the pro­files.

That legal let­ter was sent sev­er­al months after the Guardian first report­ed the breach and days before it was offi­cial­ly announced that Ban­non was tak­ing over as cam­paign man­ag­er for Trump and bring­ing Cam­bridge Ana­lyt­i­ca with him.

“Because this data was obtained and used with­out per­mis­sion, and because GSR was not autho­rised to share or sell it to you, it can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly,” the let­ter said.

Face­book did not pur­sue a response when the let­ter ini­tial­ly went unan­swered for weeks because Wylie was trav­el­ling, nor did it fol­low up with foren­sic checks on his com­put­ers or stor­age, he said.

“That to me was the most aston­ish­ing thing. They wait­ed two years and did absolute­ly noth­ing to check that the data was delet­ed. All they asked me to do was tick a box on a form and post it back.”

Paul-Olivi­er Dehaye, a data pro­tec­tion spe­cial­ist, who spear­head­ed the inves­tiga­tive efforts into the tech giant, said: “Face­book has denied and denied and denied this. It has mis­led MPs and con­gres­sion­al inves­ti­ga­tors and it’s failed in its duties to respect the law.

“It has a legal oblig­a­tion to inform reg­u­la­tors and indi­vid­u­als about this data breach, and it hasn’t. It’s failed time and time again to be open and trans­par­ent.”

A major­i­ty of Amer­i­can states have laws requir­ing noti­fi­ca­tion in some cas­es of data breach, includ­ing Cal­i­for­nia, where Face­book is based.

Face­book denies that the har­vest­ing of tens of mil­lions of pro­files by GSR and Cam­bridge Ana­lyt­i­ca was a data breach. It said in a state­ment that Kogan “gained access to this infor­ma­tion in a legit­i­mate way and through the prop­er chan­nels” but “did not sub­se­quent­ly abide by our rules” because he passed the infor­ma­tion on to third par­ties.

Face­book said it removed the app in 2015 and required cer­ti­fi­ca­tion from every­one with copies that the data had been destroyed, although the let­ter to Wylie did not arrive until the sec­ond half of 2016. “We are com­mit­ted to vig­or­ous­ly enforc­ing our poli­cies to pro­tect people’s infor­ma­tion. We will take what­ev­er steps are required to see that this hap­pens,” Paul Gre­w­al, Facebook’s vice-pres­i­dent, said in a state­ment. The com­pa­ny is now inves­ti­gat­ing reports that not all data had been delet­ed.

Kogan, who has pre­vi­ous­ly unre­port­ed links to a Russ­ian uni­ver­si­ty and took Russ­ian grants for research, had a licence from Face­book to col­lect pro­file data, but it was for research pur­pos­es only. So when he hoovered up infor­ma­tion for the com­mer­cial ven­ture, he was vio­lat­ing the company’s terms. Kogan main­tains every­thing he did was legal, and says he had a “close work­ing rela­tion­ship” with Face­book, which had grant­ed him per­mis­sion for his apps.

The Observ­er has seen a con­tract dat­ed 4 June 2014, which con­firms SCL, an affil­i­ate of Cam­bridge Ana­lyt­i­ca, entered into a com­mer­cial arrange­ment with GSR, entire­ly premised on har­vest­ing and pro­cess­ing Face­book data. Cam­bridge Ana­lyt­i­ca spent near­ly $1m on data col­lec­tion, which yield­ed more than 50 mil­lion indi­vid­ual pro­files that could be matched to elec­toral rolls. It then used the test results and Face­book data to build an algo­rithm that could analyse indi­vid­ual Face­book pro­files and deter­mine per­son­al­i­ty traits linked to vot­ing behav­iour.

The algo­rithm and data­base togeth­er made a pow­er­ful polit­i­cal tool. It allowed a cam­paign to iden­ti­fy pos­si­ble swing vot­ers and craft mes­sages more like­ly to res­onate.

“The ulti­mate prod­uct of the train­ing set is cre­at­ing a ‘gold stan­dard’ of under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion,” the con­tract spec­i­fies. It promis­es to cre­ate a data­base of 2 mil­lion “matched” pro­files, iden­ti­fi­able and tied to elec­toral reg­is­ters, across 11 states, but with room to expand much fur­ther.

At the time, more than 50 mil­lion pro­files rep­re­sent­ed around a third of active North Amer­i­can Face­book users, and near­ly a quar­ter of poten­tial US vot­ers. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a rela­tion­ship with GSR. They did some research for us back in 2014. That research proved to be fruit­less and so the answer is no.”

Cam­bridge Ana­lyt­i­ca said that its con­tract with GSR stip­u­lat­ed that Kogan should seek informed con­sent for data col­lec­tion and it had no rea­son to believe he would not.

GSR was “led by a seem­ing­ly rep­utable aca­d­e­m­ic at an inter­na­tion­al­ly renowned insti­tu­tion who made explic­it con­trac­tu­al com­mit­ments to us regard­ing its legal author­i­ty to license data to SCL Elec­tions”, a com­pa­ny spokesman said.

SCL Elec­tions, an affil­i­ate, worked with Face­book over the peri­od to ensure it was sat­is­fied no terms had been “know­ing­ly breached” and pro­vid­ed a signed state­ment that all data and deriv­a­tives had been delet­ed, he said. Cam­bridge Ana­lyt­i­ca also said none of the data was used in the 2016 pres­i­den­tial elec­tion.

Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.

———-

“Revealed: 50 mil­lion Face­book pro­files har­vest­ed for Cam­bridge Ana­lyt­i­ca in major data breach” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 03/17/2018

“Christo­pher Wylie, who worked with a Cam­bridge Uni­ver­si­ty aca­d­e­m­ic to obtain the data, told the Observ­er: “We exploit­ed Face­book to har­vest mil­lions of people’s pro­files. And built mod­els to exploit what we knew about them and tar­get their inner demons. That was the basis the entire com­pa­ny was built on.””

Exploiting everyone’s inner demons. Yeah, that sounds like something Steve Bannon and Robert Mercer would be interested in. And it explains why Facebook data would have been potentially so useful for exploiting those demons. Recall that the original non-Facebook data that Christopher Wylie and the initial Cambridge Analytica team were working with in 2013 and 2014 wasn’t seen as effective. It didn’t have that inner-demon-influencing granularity. Then they discovered the Facebook data available through this app loophole, and the work was taken to a different level. Remember when Facebook ran that controversial experiment on users where it tried to manipulate their emotions by altering their news feeds? It sounds like that’s what Cambridge Analytica was basically trying to do using Facebook ads instead of the newsfeed, but perhaps in a more microtargeted way.

And that’s all because Facebook’s “platform policy” allowed for the collection of friends’ data to “improve user experience in the app,” with the unenforced requirement that the data not be sold on or used for advertising:

...
The data was col­lect­ed through an app called thi­sisy­our­dig­i­tal­life, built by aca­d­e­m­ic Alek­san­dr Kogan, sep­a­rate­ly from his work at Cam­bridge Uni­ver­si­ty. Through his com­pa­ny Glob­al Sci­ence Research (GSR), in col­lab­o­ra­tion with Cam­bridge Ana­lyt­i­ca, hun­dreds of thou­sands of users were paid to take a per­son­al­i­ty test and agreed to have their data col­lect­ed for aca­d­e­m­ic use.

How­ev­er, the app also col­lect­ed the infor­ma­tion of the test-tak­ers’ Face­book friends, lead­ing to the accu­mu­la­tion of a data pool tens of mil­lions-strong. Facebook’s “plat­form pol­i­cy” allowed only col­lec­tion of friends’ data to improve user expe­ri­ence in the app and barred it being sold on or used for adver­tis­ing. The dis­cov­ery of the unprece­dent­ed data har­vest­ing, and the use to which it was put, rais­es urgent new ques­tions about Facebook’s role in tar­get­ing vot­ers in the US pres­i­den­tial elec­tion. It comes only weeks after indict­ments of 13 Rus­sians by the spe­cial coun­sel Robert Mueller which stat­ed they had used the plat­form to per­pe­trate “infor­ma­tion war­fare” against the US.
...

Just imagine how many app developers were using this over the 2007–2014 period when Facebook had this “platform policy” that allowed the capture of friends’ data “to improve user experience in the app”. It wasn’t just Cambridge Analytica that took advantage of this. That’s a big part of the story here.

And yet when Simon Mil­ner, Facebook’s UK pol­i­cy direc­tor, was asked if Cam­bridge Ana­lyt­i­ca had Face­book data, he said, “They may have lots of data but it will not be Face­book user data. It may be data about peo­ple who are on Face­book that they have gath­ered them­selves, but it is not data that we have pro­vid­ed.”:

...
Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Mil­ner, Facebook’s UK pol­i­cy direc­tor, when asked if Cam­bridge Ana­lyt­i­ca had Face­book data, told MPs: “They may have lots of data but it will not be Face­book user data. It may be data about peo­ple who are on Face­book that they have gath­ered them­selves, but it is not data that we have pro­vid­ed.”

Cam­bridge Analytica’s chief exec­u­tive, Alexan­der Nix, told the inquiry: “We do not work with Face­book data and we do not have Face­book data.”
...

And note how the article appears to say the data Cambridge Analytica collected on Facebook users included “emails, invoices, contracts and bank transfers that reveal more than 50 million profiles.” It’s not clear whether that’s a reference to emails, invoices, contracts and bank transfers involved in setting up Cambridge Analytica, or to emails, invoices, contracts and bank transfers from Facebook users, but if it was from users that would be wildly scandalous:

...
Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.
...

So it will be inter­est­ing to see if that point of ambi­gu­i­ty is ever clar­i­fied some­where. Because wow would that be scan­dalous if emails, invoic­es, con­tracts and bank trans­fers of Face­book users were released through this “plat­form pol­i­cy”.

Either way, it looks unambiguously awful for Facebook. Especially now that we learn that the cease-and-destroy letter Facebook’s lawyers sent to Wylie in August of 2016 was suspiciously sent just days before Steve Bannon, a founder and officer of Cambridge Analytica, became Trump’s campaign manager and brought the company into the Trump campaign:

...
The evi­dence Wylie sup­plied to UK and US author­i­ties includes a let­ter from Facebook’s own lawyers sent to him in August 2016, ask­ing him to destroy any data he held that had been col­lect­ed by GSR, the com­pa­ny set up by Kogan to har­vest the pro­files.

That legal let­ter was sent sev­er­al months after the Guardian first report­ed the breach and days before it was offi­cial­ly announced that Ban­non was tak­ing over as cam­paign man­ag­er for Trump and bring­ing Cam­bridge Ana­lyt­i­ca with him.

“Because this data was obtained and used with­out per­mis­sion, and because GSR was not autho­rised to share or sell it to you, it can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly,” the let­ter said.
...

And the only thing Facebook did to confirm that the Facebook data wasn’t misused, according to Christopher Wylie, was to ask that a box be checked on a form:

...
Face­book did not pur­sue a response when the let­ter ini­tial­ly went unan­swered for weeks because Wylie was trav­el­ling, nor did it fol­low up with foren­sic checks on his com­put­ers or stor­age, he said.

“That to me was the most aston­ish­ing thing. They wait­ed two years and did absolute­ly noth­ing to check that the data was delet­ed. All they asked me to do was tick a box on a form and post it back.”
...

And, again, Facebook denied its data was passed along to Cambridge Analytica when questioned by both the US Congress and UK Parliament:

...
Paul-Olivi­er Dehaye, a data pro­tec­tion spe­cial­ist, who spear­head­ed the inves­tiga­tive efforts into the tech giant, said: “Face­book has denied and denied and denied this. It has mis­led MPs and con­gres­sion­al inves­ti­ga­tors and it’s failed in its duties to respect the law.

“It has a legal oblig­a­tion to inform reg­u­la­tors and indi­vid­u­als about this data breach, and it hasn’t. It’s failed time and time again to be open and trans­par­ent.”

A major­i­ty of Amer­i­can states have laws requir­ing noti­fi­ca­tion in some cas­es of data breach, includ­ing Cal­i­for­nia, where Face­book is based.
...

And note how Facebook now admits Aleksandr Kogan did indeed get the data legitimately. It just wasn’t used properly. That’s why Facebook says it shouldn’t be called a “data breach”: because the data was obtained properly and was only misused afterwards:

...
Face­book denies that the har­vest­ing of tens of mil­lions of pro­files by GSR and Cam­bridge Ana­lyt­i­ca was a data breach. It said in a state­ment that Kogan “gained access to this infor­ma­tion in a legit­i­mate way and through the prop­er chan­nels” but “did not sub­se­quent­ly abide by our rules” because he passed the infor­ma­tion on to third par­ties.

Face­book said it removed the app in 2015 and required cer­ti­fi­ca­tion from every­one with copies that the data had been destroyed, although the let­ter to Wylie did not arrive until the sec­ond half of 2016. “We are com­mit­ted to vig­or­ous­ly enforc­ing our poli­cies to pro­tect people’s infor­ma­tion. We will take what­ev­er steps are required to see that this hap­pens,” Paul Gre­w­al, Facebook’s vice-pres­i­dent, said in a state­ment. The com­pa­ny is now inves­ti­gat­ing reports that not all data had been delet­ed.
...

But Aleksandr Kogan isn’t simply arguing that he did nothing wrong when he obtained that Facebook data via his app. Kogan also argues that he had a “close working relationship” with Facebook, which had granted him permission for his apps, and that everything he did with the data was legal. So Aleksandr Kogan’s story is quite notable because, again, as we’ll see below, there is evidence that his story is the closest to the truth of all the stories we’re hearing: that Facebook was totally fine with Kogan’s apps obtaining the private data of millions of Facebook users’ friends, and that Facebook was perfectly fine with how that data was used, or was at least consciously trying not to know how the data might be misused. That’s the picture that’s going to emerge, so keep it in mind when Kogan asserts that he had a “close working relationship” with Facebook. Based on the available evidence, he probably did:

...
Kogan, who has pre­vi­ous­ly unre­port­ed links to a Russ­ian uni­ver­si­ty and took Russ­ian grants for research, had a licence from Face­book to col­lect pro­file data, but it was for research pur­pos­es only. So when he hoovered up infor­ma­tion for the com­mer­cial ven­ture, he was vio­lat­ing the company’s terms. Kogan main­tains every­thing he did was legal, and says he had a “close work­ing rela­tion­ship” with Face­book, which had grant­ed him per­mis­sion for his apps.
...

Kogan main­tains every­thing he did was legal, and guess what? It prob­a­bly was legal. That’s part of the scan­dal here.

And regarding those testimonies by Cambridge Analytica’s now-former CEO Alexander Nix that the company never had or used Facebook data, note how the Observer got to see a copy of the contract Cambridge Analytica entered into with Kogan’s GSR, and that contract was entirely premised on harvesting and processing Facebook data. Which, again, hints at the likelihood that they thought what they were doing at the time (2014) was completely legal. They spelled it out in the contract:

...
The Observ­er has seen a con­tract dat­ed 4 June 2014, which con­firms SCL, an affil­i­ate of Cam­bridge Ana­lyt­i­ca, entered into a com­mer­cial arrange­ment with GSR, entire­ly premised on har­vest­ing and pro­cess­ing Face­book data. Cam­bridge Ana­lyt­i­ca spent near­ly $1m on data col­lec­tion, which yield­ed more than 50 mil­lion indi­vid­ual pro­files that could be matched to elec­toral rolls. It then used the test results and Face­book data to build an algo­rithm that could analyse indi­vid­ual Face­book pro­files and deter­mine per­son­al­i­ty traits linked to vot­ing behav­iour.

...

“The ulti­mate prod­uct of the train­ing set is cre­at­ing a ‘gold stan­dard’ of under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion,” the con­tract spec­i­fies. It promis­es to cre­ate a data­base of 2 mil­lion “matched” pro­files, iden­ti­fi­able and tied to elec­toral reg­is­ters, across 11 states, but with room to expand much fur­ther.

...

Cam­bridge Ana­lyt­i­ca said that its con­tract with GSR stip­u­lat­ed that Kogan should seek informed con­sent for data col­lec­tion and it had no rea­son to believe he would not.

GSR was “led by a seem­ing­ly rep­utable aca­d­e­m­ic at an inter­na­tion­al­ly renowned insti­tu­tion who made explic­it con­trac­tu­al com­mit­ments to us regard­ing its legal author­i­ty to license data to SCL Elec­tions”, a com­pa­ny spokesman said.
...

““The ulti­mate prod­uct of the train­ing set is cre­at­ing a ‘gold stan­dard’ of under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion,” the con­tract spec­i­fies. It promis­es to cre­ate a data­base of 2 mil­lion “matched” pro­files, iden­ti­fi­able and tied to elec­toral reg­is­ters, across 11 states, but with room to expand much fur­ther.”

A con­tract to cre­ate a ‘gold stan­dard’ of 2 mil­lion Face­book accounts that are ‘matched’ to real life vot­ers for the use of “under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion.” That was the actu­al con­tract Kogan had with Cam­bridge Ana­lyt­i­ca. All for the pur­pose of devel­op­ing a sys­tem that would allow Cam­bridge Ana­lyt­i­ca to infer your inner demons from your Face­book pro­file and then manip­u­late them.

So it’s worth noting how the app permissions setup Facebook allowed from 2007–2014, which let app developers collect the Facebook profile information of the people who used their apps and of those people’s friends, created this remarkable arrangement where app developers could generate a ‘gold standard’ from the people using their apps and a test set from all their friends. If the goal was simply getting people to encourage their friends to download an app, that would have been a very useful data set. But it would of course also have been an incredibly useful data set for anyone who wanted to collect the profile information of Facebook users. Because, again, as we’re going to see, a Facebook whistle-blower is claiming that Facebook user profile information was routinely handed out to app developers.

So if an app developer wanted to experiment on, say, how to use that available Facebook profile information to manipulate people, getting a ‘gold standard’ group of people to take a psychological survey would be an important step in carrying out that experiment. Because the people who take your psychological survey form the data set you can use to train your algorithms, which take Facebook profile information as the input and produce psychological profile data as the output.

And that’s what Aleksandr Kogan’s app was doing: grabbing psychological information from the survey while simultaneously grabbing the Facebook profile data of the test-takers, along with the Facebook profile data of all their friends. Kogan’s ‘gold standard’ training set was the people who actually used his app and handed over a bunch of personality information via the survey; the test set was the tens of millions of friends whose data was also collected. Since the goal of Cambridge Analytica was to infer personality characteristics from people’s Facebook profiles, pairing the personality surveys of the ~270,000 people who took the app survey with their Facebook profiles allowed Cambridge Analytica to train algorithms that guessed personality characteristics from Facebook profile information. Then they had the profile information on the rest of the ~50 million people to apply those algorithms to.
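
To make that training-set/test-set logic concrete, here is a minimal sketch of that kind of pipeline. Nothing here is Cambridge Analytica’s actual code; the page names, the toy survey scores, and the use of a simple ridge regression are all assumptions for illustration.

# A minimal sketch, with hypothetical feature names and toy numbers, of the
# workflow described above: fit a model on survey-takers (the 'gold standard'),
# then apply it to harvested friends who never took the survey.
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import Ridge

# Training set: app users' page likes (profile data) paired with survey scores.
survey_likes = [
    {"page:NASCAR": 1, "page:FoxNews": 1},
    {"page:NPR": 1, "page:YogaJournal": 1},
]
survey_openness = np.array([0.2, 0.8])  # toy trait scores from the survey

vec = DictVectorizer()
X_train = vec.fit_transform(survey_likes)
model = Ridge().fit(X_train, survey_openness)  # learn: likes -> trait score

# Test set: harvested friends' profiles, with no survey answers available.
friend_likes = [{"page:NASCAR": 1, "page:NPR": 1}]
predicted = model.predict(vec.transform(friend_likes))
print(predicted)  # inferred trait score, usable for ad micro-targeting

The point is structural: a small consented, labeled sample trains the model, and the vastly larger unconsented sample is what the model gets applied to.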

Recall how Trump’s 2016 campaign digital director, Brad Parscale, curiously downplayed the utility of Cambridge Analytica’s data during interviews in which he bragged about using Facebook’s ad micro-targeting features to run “A/B testing on steroids” on micro-targeted audiences, i.e. strategically exposing micro-targeted Facebook audiences to sets of ads that differed in some specific way designed to probe a particular psychological dimension of that micro-audience. So it’s worth noting that the “A/B testing on steroids” Brad Parscale referred to was probably focused on the ~30 million people, out of the ~50 million whose Facebook profiles Cambridge Analytica obtained, who could be matched back to real voters. Those 30 million Facebook users that Cambridge Analytica had Facebook profile data on were the test set. And the algorithms designed to guess people’s psychological makeup from their Facebook profiles, refined on the training set of ~270,000 Facebook users who took the psychological surveys, were likely unleashed on that test set of ~30 million people.

So when we find out that the Cam­bridge Ana­lyt­i­ca con­tract with Alek­san­dr Kogan’s GSR com­pa­ny includ­ed lan­guage like build­ing a “gold stan­dard”, keep in mind that this implied that there was a lot of test­ing to do after the algo­rith­mic refine­ments based on that gold stan­dard. And the ~30–50 mil­lion pro­files they col­lect­ed from the friends of the ~270,000 peo­ple who down­loaded Kogan’s app made for quite a test set.
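
For what that “A/B testing on steroids” could look like mechanically, here is a bare-bones hypothetical sketch: deterministically split one micro-targeted segment between two ad variants and compare response rates. The variant names and response numbers are invented for illustration.

# A toy illustration of A/B testing a micro-targeted segment.
# All user IDs, variants and response rates are simulated assumptions.
import hashlib
import random

def assign_variant(user_id: str) -> str:
    # Hash the ID so each user deterministically sees the same variant.
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "ad_A" if digest % 2 == 0 else "ad_B"

random.seed(0)
segment = [f"user_{i}" for i in range(10_000)]  # one micro-targeted audience
shown = {"ad_A": 0, "ad_B": 0}
clicks = {"ad_A": 0, "ad_B": 0}

for user in segment:
    variant = assign_variant(user)
    shown[variant] += 1
    # Simulated engagement; in a live campaign this is the observed response.
    if random.random() < (0.030 if variant == "ad_A" else 0.036):
        clicks[variant] += 1

for v in ("ad_A", "ad_B"):
    print(v, round(clicks[v] / shown[v], 4))  # keep the winner, vary it, repeat

Run across thousands of segments simultaneously, with the winning variant fed back in as the new baseline, that loop is the “steroids” part.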

Also keep in mind that former CEO Alexander Nix’s denials that Cambridge Analytica worked with Facebook data aren’t the only laughable denials from Cambridge Analytica’s officers. Any denials by Steve Bannon and his lawyers that he knew about Cambridge Analytica’s use of Facebook profile data should also be seen as laughable, starting with the denials from Bannon’s lawyers that he knows nothing about what Wylie and others are claiming:

...
Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.
...

Steve Ban­non: the Boss Who Knows Noth­ing (Or So He Says)

Steve Bannon “knows nothing about the claims being asserted.” LOL! Yeah, well, not according to Christopher Wylie, who, in the following article, makes some rather significant claims about the role Steve Bannon played in all this. According to Wylie:

1. Steve Ban­non was the per­son over­see­ing the acqui­si­tion of Face­book data by Cam­bridge Ana­lyt­i­ca. As Wylie put it, “We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss.” Now, when Wylie says Ban­non was Nix’s boss, note that Ban­non served as vice pres­i­dent and sec­re­tary of Cam­bridge Ana­lyt­i­ca from June 2014 to August 2016. And Nix was CEO dur­ing this peri­od. So tech­ni­cal­ly Nix was the boss. But it sounds like Ban­non was effec­tive­ly the boss, accord­ing to Wylie.

2. Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the Facebook data. But Wylie does say that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And Bannon “approved the data-collection scheme we were proposing”. So if Bannon and Mercer didn’t know the details of how the purchase of massive amounts of Facebook data took place, that would be pretty remarkable. Remarkably uncurious, given that acquiring this data was at the core of what the company was doing and they approved of the data-collection scheme. A scheme that involved having Aleksandr Kogan set up a separate company. That was the “scheme” Bannon and Mercer would have had to approve, so if they didn’t realize they were acquiring this Facebook data through the “friends permissions” feature Facebook made available to app developers, that would have been a significant oversight.

The arti­cle goes on to include a few more fun facts, like...

3. Cambridge Analytica was doing focus group tests on voters in 2014 and identified many of the same underlying emotional sentiments in voters that later formed the core message of Donald Trump’s campaign. In focus groups for the 2014 midterms, the firm found that voters responded to calls for building a wall with Mexico, “draining the swamp” in Washington DC, and to thinly veiled forms of racism toward African Americans called “race realism”. The firm also tested voter attitudes towards Russian President Vladimir Putin and discovered that a lot of Americans really like the idea of a really strong authoritarian leader. Again, this was all discovered before Trump even jumped into the race.

4. The Trump cam­paign reject­ed ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, which sug­gests that Trump was actu­al­ly the top choice of the Mer­cers and Ban­non, ahead of Ted Cruz.

5. Cam­bridge Ana­lyt­i­ca CEO Alexan­der Nix was caught by Chan­nel 4 News in the UK boast­ing about the secre­cy of his firm, at one point stress­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing.”

So based on these allegations, Steve Bannon was closely involved in approving the various schemes to acquire Facebook data, and was probably using self-destructing emails in the process:

The Wash­ing­ton Post

Ban­non over­saw Cam­bridge Analytica’s col­lec­tion of Face­book data, accord­ing to for­mer employ­ee

By Craig Tim­berg, Kar­la Adam and Michael Kran­ish
March 20, 2018 at 7:53 PM

LONDON — Con­ser­v­a­tive strate­gist Stephen K. Ban­non over­saw Cam­bridge Analytica’s ear­ly efforts to col­lect troves of Face­book data as part of an ambi­tious pro­gram to build detailed pro­files of mil­lions of Amer­i­can vot­ers, a for­mer employ­ee of the data-sci­ence firm said Tues­day.

The 2014 effort was part of a high-tech form of vot­er per­sua­sion tout­ed by the com­pa­ny, which under Ban­non iden­ti­fied and test­ed the pow­er of anti-estab­lish­ment mes­sages that lat­er would emerge as cen­tral themes in Pres­i­dent Trump’s cam­paign speech­es, accord­ing to Chris Wylie, who left the com­pa­ny at the end of that year.

Among the mes­sages test­ed were “drain the swamp” and “deep state,” he said.

Cam­bridge Ana­lyt­i­ca, which worked for Trump’s 2016 cam­paign, is now fac­ing ques­tions about alleged uneth­i­cal prac­tices, includ­ing charges that the firm improp­er­ly han­dled the data of tens of mil­lions of Face­book users. On Tues­day, the company’s board announced that it was sus­pend­ing its chief exec­u­tive, Alexan­der Nix, after British tele­vi­sion released secret record­ings that appeared to show him talk­ing about entrap­ping polit­i­cal oppo­nents.

More than three years before he served as Trump’s chief polit­i­cal strate­gist, Ban­non helped launch Cam­bridge Ana­lyt­i­ca with the finan­cial back­ing of the wealthy Mer­cer fam­i­ly as part of a broad­er effort to cre­ate a pop­ulist pow­er base. Ear­li­er this year, the Mer­cers cut ties with Ban­non after he was quot­ed mak­ing incen­di­ary com­ments about Trump and his fam­i­ly.

In an inter­view Tues­day with The Wash­ing­ton Post at his lawyer’s Lon­don office, Wylie said that Ban­non — while he was a top exec­u­tive at Cam­bridge Ana­lyt­i­ca and head of Bre­it­bart News — was deeply involved in the company’s strat­e­gy and approved spend­ing near­ly $1 mil­lion to acquire data, includ­ing Face­book pro­files, in 2014.

“We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss,” said Wylie, who was Cam­bridge Analytica’s research direc­tor. “Alexan­der Nix didn’t have the author­i­ty to spend that much mon­ey with­out approval.”

Ban­non, who served on the company’s board, did not respond to a request for com­ment. He served as vice pres­i­dent and sec­re­tary of Cam­bridge Ana­lyt­i­ca from June 2014 to August 2016, when he became chief exec­u­tive of Trump’s cam­paign, accord­ing to his pub­licly filed finan­cial dis­clo­sure. In 2017, he joined Trump in the White House as his chief strate­gist.

Ban­non received more than $125,000 in con­sult­ing fees from Cam­bridge Ana­lyt­i­ca in 2016 and owned “mem­ber­ship units” in the com­pa­ny worth between $1 mil­lion and $5 mil­lion, accord­ing to his finan­cial dis­clo­sure.

...

It is unclear whether Ban­non knew how Cam­bridge Ana­lyt­i­ca was obtain­ing the data, which alleged­ly was col­lect­ed through an app that was por­trayed as a tool for psy­cho­log­i­cal research but was then trans­ferred to the com­pa­ny.

Face­book has said that infor­ma­tion was improp­er­ly shared and that it request­ed the dele­tion of the data in 2015. Cam­bridge Ana­lyt­i­ca offi­cials said that they had done so, but Face­book said it received reports sev­er­al days ago that the data was not delet­ed.

Wylie said that both Ban­non and Rebekah Mer­cer, whose father, Robert Mer­cer, financed the com­pa­ny, par­tic­i­pat­ed in con­fer­ence calls in 2014 in which plans to col­lect Face­book data were dis­cussed, although Wylie acknowl­edged that it was not clear they knew the details of how the col­lec­tion took place.

Ban­non “approved the data-col­lec­tion scheme we were propos­ing,” Wylie said.

...

The data and analy­ses that Cam­bridge Ana­lyt­i­ca gen­er­at­ed in this time pro­vid­ed dis­cov­er­ies that would lat­er form the emo­tion­al­ly charged core of Trump’s pres­i­den­tial plat­form, said Wylie, whose dis­clo­sures in news reports over the past sev­er­al days have rocked both his one­time employ­er and Face­book.

“Trump wasn’t in our con­scious­ness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or any­thing.”

The year before Trump announced his pres­i­den­tial bid, the data firm already had found a high lev­el of alien­ation among young, white Amer­i­cans with a con­ser­v­a­tive bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also test­ed views of Russ­ian Pres­i­dent Vladimir Putin.

“The only for­eign thing we test­ed was Putin,” he said. “It turns out, there’s a lot of Amer­i­cans who real­ly like this idea of a real­ly strong author­i­tar­i­an leader and peo­ple were quite defen­sive in focus groups of Putin’s inva­sion of Crimea.”

The controversy over Cambridge Analytica’s data collection erupted in recent days amid news reports that an app created by a Cambridge University psychologist, Aleksandr Kogan, accessed extensive personal data of 50 million Facebook users. The app, called thisisyourdigitallife, was downloaded by 270,000 users. Facebook’s policy, which has since changed, allowed Kogan to also collect data — including names, home towns, religious affiliations and likes — on all of the Facebook “friends” of those users. Kogan shared that data with Cambridge Analytica for its growing database on American voters.

Face­book on Fri­day banned the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, Kogan and Wylie for improp­er­ly shar­ing that data.

The Fed­er­al Trade Com­mis­sion has opened an inves­ti­ga­tion into Face­book to deter­mine whether the social media plat­form vio­lat­ed a 2011 con­sent decree gov­ern­ing its pri­va­cy poli­cies when it allowed the data col­lec­tion. And Wylie plans to tes­ti­fy to Democ­rats on the House Intel­li­gence Com­mit­tee as part of their inves­ti­ga­tion of Russ­ian inter­fer­ence in the elec­tion, includ­ing pos­si­ble ties to the Trump cam­paign.

Mean­while, Britain’s Chan­nel 4 News aired a video Tues­day in which Nix was shown boast­ing about his work for Trump. He seemed to high­light his firm’s secre­cy, at one point stress­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing.”

The com­pa­ny said in a state­ment that Nix’s com­ments “do not rep­re­sent the val­ues or oper­a­tions of the firm and his sus­pen­sion reflects the seri­ous­ness with which we view this vio­la­tion.”

Nix could not be reached for com­ment.

Cam­bridge Ana­lyt­i­ca was set up as a U.S. affil­i­ate of British-based SCL Group, which had a wide range of gov­ern­men­tal clients glob­al­ly, in addi­tion to its polit­i­cal work.

Wylie said that Ban­non and Nix first met in 2013, the same year that Wylie — a young data whiz with some polit­i­cal expe­ri­ence in Britain and Cana­da — was work­ing for SCL Group. Ban­non and Wylie met soon after and hit it off in con­ver­sa­tions about cul­ture, elec­tions and how to spread ideas using tech­nol­o­gy.

Ban­non, Wylie, Nix, Rebekah Mer­cer and Robert Mer­cer met in Rebekah Mercer’s Man­hat­tan apart­ment in the fall of 2013, strik­ing a deal in which Robert Mer­cer would fund the cre­ation of Cam­bridge Ana­lyt­i­ca with $10 mil­lion, with the hope of shap­ing the con­gres­sion­al elec­tions a year lat­er, accord­ing to Wylie. Robert Mer­cer, in par­tic­u­lar, seemed trans­fixed by the group’s plans to har­ness and ana­lyze data, he recalled.

The Mer­cers were keen to cre­ate a U.S.-based busi­ness to avoid bad optics and vio­lat­ing U.S. cam­paign finance rules, Wylie said. “They want­ed to cre­ate an Amer­i­can brand,” he said.

The young company struggled to quickly deliver on its promises, Wylie said. Widely available information from commercial data brokers provided people’s names, addresses, shopping habits and more, but failed to distinguish on more fine-grained matters of personality that might affect political views.

Cam­bridge Ana­lyt­i­ca ini­tial­ly worked for 2016 Repub­li­can can­di­date Sen. Ted Cruz (Tex.), who was backed by the Mer­cers. The Trump cam­paign had reject­ed ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, and Trump him­self said in May 2016 that he “always felt” that the use of vot­er data was “over­rat­ed.”

After Cruz fad­ed, the Mer­cers switched their alle­giance to Trump and pitched their ser­vices to Trump’s dig­i­tal direc­tor, Brad Parscale. The company’s hir­ing was approved by Trump’s son-in-law, Jared Kush­n­er, who was infor­mal­ly help­ing to man­age the cam­paign with a focus on dig­i­tal strat­e­gy.

Kush­n­er said in an inter­view with Forbes mag­a­zine that the cam­paign “found that Face­book and dig­i­tal tar­get­ing were the most effec­tive ways to reach the audi­ences. ...We brought in Cam­bridge Ana­lyt­i­ca.” Kush­n­er said he “built” a data hub for the cam­paign “which nobody knew about, until towards the end.”

Kushner’s spokesman and lawyer both declined to com­ment Tues­day.

Two weeks before Elec­tion Day, Nix told a Post reporter at the company’s New York City office that his com­pa­ny could “deter­mine the per­son­al­i­ty of every sin­gle adult in the Unit­ed States of Amer­i­ca.”

The claim was wide­ly ques­tioned, and the Trump cam­paign lat­er said that it didn’t rely on psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca. Instead, the cam­paign said that it used a vari­ety of oth­er dig­i­tal infor­ma­tion to iden­ti­fy prob­a­ble sup­port­ers.

Parscale said in a Post inter­view in Octo­ber 2016 that he had not “opened the hood” on Cam­bridge Analytica’s method­ol­o­gy, and said he got much of his data from the Repub­li­can Nation­al Com­mit­tee. Parscale declined to com­ment Tues­day. He has pre­vi­ous­ly said that the Trump cam­paign did not use any psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca.

Cam­bridge Analytica’s par­ent com­pa­ny, SCL Group, has an ongo­ing con­tract with the State Department’s Glob­al Engage­ment Cen­ter. The com­pa­ny was paid almost $500,000 to inter­view peo­ple over­seas to under­stand the mind-set of Islamist mil­i­tants as part of an effort to counter their online pro­pa­gan­da and block recruits.

Heather Nauert, the act­ing under­sec­re­tary for pub­lic diplo­ma­cy, said Tues­day that the con­tract was signed in Novem­ber 2016, under the Oba­ma admin­is­tra­tion, and has not expired yet. In pub­lic records, the con­tract is dat­ed in Feb­ru­ary 2017, and the rea­son for the dis­crep­an­cy was not clear. Nauert said that the State Depart­ment had signed oth­er con­tracts with SCL Group in the past.

———-

“Ban­non over­saw Cam­bridge Analytica’s col­lec­tion of Face­book data, accord­ing to for­mer employ­ee” by Craig Tim­berg, Kar­la Adam and Michael Kran­ish; The Wash­ing­ton Post; 03/20/2018

“Con­ser­v­a­tive strate­gist Stephen K. Ban­non over­saw Cam­bridge Analytica’s ear­ly efforts to col­lect troves of Face­book data as part of an ambi­tious pro­gram to build detailed pro­files of mil­lions of Amer­i­can vot­ers, a for­mer employ­ee of the data-sci­ence firm said Tues­day.”

Steve Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data. That’s what Christopher Wylie claims, and given Bannon’s role as vice president of the company it’s not, on its face, an outlandish claim. And Bannon apparently approved the spending of nearly $1 million to acquire that Facebook data in 2014. Because, according to Wylie, Alexander Nix didn’t actually have permission to spend that kind of money without approval. Bannon, on the other hand, did have permission to make those kinds of expenditure approvals. That’s how high up Bannon was at that company, even though he was technically the vice president while Nix was the CEO:

...
In an inter­view Tues­day with The Wash­ing­ton Post at his lawyer’s Lon­don office, Wylie said that Ban­non — while he was a top exec­u­tive at Cam­bridge Ana­lyt­i­ca and head of Bre­it­bart News — was deeply involved in the company’s strat­e­gy and approved spend­ing near­ly $1 mil­lion to acquire data, includ­ing Face­book pro­files, in 2014.

“We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss,” said Wylie, who was Cam­bridge Analytica’s research direc­tor. “Alexan­der Nix didn’t have the author­i­ty to spend that much mon­ey with­out approval.”

Ban­non, who served on the company’s board, did not respond to a request for com­ment. He served as vice pres­i­dent and sec­re­tary of Cam­bridge Ana­lyt­i­ca from June 2014 to August 2016, when he became chief exec­u­tive of Trump’s cam­paign, accord­ing to his pub­licly filed finan­cial dis­clo­sure. In 2017, he joined Trump in the White House as his chief strate­gist.
...

“We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss...Alexander Nix didn’t have the author­i­ty to spend that much mon­ey with­out approval.””

And while Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the data, Wylie does assert that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And, generally speaking, if Bannon was approving $1 million expenditures to acquire Facebook data, he probably sat in on at least one meeting where they described how they actually planned to get the data by spending that money. Don’t forget the scheme involved paying individuals small amounts of money to take the psychological survey on Kogan’s app, so at a minimum you would expect Bannon to know how these apps were going to result in the gathering of Facebook profile information:

...
It is unclear whether Ban­non knew how Cam­bridge Ana­lyt­i­ca was obtain­ing the data, which alleged­ly was col­lect­ed through an app that was por­trayed as a tool for psy­cho­log­i­cal research but was then trans­ferred to the com­pa­ny.

Face­book has said that infor­ma­tion was improp­er­ly shared and that it request­ed the dele­tion of the data in 2015. Cam­bridge Ana­lyt­i­ca offi­cials said that they had done so, but Face­book said it received reports sev­er­al days ago that the data was not delet­ed.

Wylie said that both Ban­non and Rebekah Mer­cer, whose father, Robert Mer­cer, financed the com­pa­ny, par­tic­i­pat­ed in con­fer­ence calls in 2014 in which plans to col­lect Face­book data were dis­cussed, although Wylie acknowl­edged that it was not clear they knew the details of how the col­lec­tion took place.

Ban­non “approved the data-col­lec­tion scheme we were propos­ing,” Wylie said.
...

What’s Ban­non hid­ing by claim­ing igno­rance? Well, that’s a good ques­tion after Britain’s Chan­nel 4 News aired a video Tues­day in which Nix was high­light­ing his firm’s secre­cy, includ­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing”:

...
Mean­while, Britain’s Chan­nel 4 News aired a video Tues­day in which Nix was shown boast­ing about his work for Trump. He seemed to high­light his firm’s secre­cy, at one point stress­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing.”

The com­pa­ny said in a state­ment that Nix’s com­ments “do not rep­re­sent the val­ues or oper­a­tions of the firm and his sus­pen­sion reflects the seri­ous­ness with which we view this vio­la­tion.”
...

Self-destruc­t­ing emails. That’s not sus­pi­cious or any­thing.

And note how Cambridge Analytica was apparently already homing in on a very ‘Trumpian’ message in 2014, long before Trump was on the radar:

...
The data and analy­ses that Cam­bridge Ana­lyt­i­ca gen­er­at­ed in this time pro­vid­ed dis­cov­er­ies that would lat­er form the emo­tion­al­ly charged core of Trump’s pres­i­den­tial plat­form, said Wylie, whose dis­clo­sures in news reports over the past sev­er­al days have rocked both his one­time employ­er and Face­book.

“Trump wasn’t in our con­scious­ness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or any­thing.”

The year before Trump announced his pres­i­den­tial bid, the data firm already had found a high lev­el of alien­ation among young, white Amer­i­cans with a con­ser­v­a­tive bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also test­ed views of Russ­ian Pres­i­dent Vladimir Putin.

“The only for­eign thing we test­ed was Putin,” he said. “It turns out, there’s a lot of Amer­i­cans who real­ly like this idea of a real­ly strong author­i­tar­i­an leader and peo­ple were quite defen­sive in focus groups of Putin’s inva­sion of Crimea.”
...

Intrigu­ing­ly, giv­en these ear­ly Trumpian find­ings in their 2014 vot­er research, it appears that the Trump cam­paign turned down ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, which sug­gests that Trump real­ly was the top pref­er­ence for Ban­non and the Mer­cers, not Ted Cruz:

...
Cam­bridge Ana­lyt­i­ca ini­tial­ly worked for 2016 Repub­li­can can­di­date Sen. Ted Cruz (Tex.), who was backed by the Mer­cers. The Trump cam­paign had reject­ed ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, and Trump him­self said in May 2016 that he “always felt” that the use of vot­er data was “over­rat­ed.”
...

And as the arti­cle reminds us, the Trump cam­paign has com­plete­ly denied EVER using Cam­bridge Ana­lyt­i­ca’s data. Brad Parscale, Trump’s dig­i­tal direc­tor, claimed he got all the data they were work­ing with from the Repub­li­can Nation­al Com­mit­tee:

...
Two weeks before Elec­tion Day, Nix told a Post reporter at the company’s New York City office that his com­pa­ny could “deter­mine the per­son­al­i­ty of every sin­gle adult in the Unit­ed States of Amer­i­ca.”

The claim was wide­ly ques­tioned, and the Trump cam­paign lat­er said that it didn’t rely on psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca. Instead, the cam­paign said that it used a vari­ety of oth­er dig­i­tal infor­ma­tion to iden­ti­fy prob­a­ble sup­port­ers.

Parscale said in a Post inter­view in Octo­ber 2016 that he had not “opened the hood” on Cam­bridge Analytica’s method­ol­o­gy, and said he got much of his data from the Repub­li­can Nation­al Com­mit­tee. Parscale declined to com­ment Tues­day. He has pre­vi­ous­ly said that the Trump cam­paign did not use any psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca.
...

And that denial by Parscale raises an obvious question: when Parscale claims they only used data from the RNC, it’s clearly very possible that he’s just straight up lying. But it’s also possible that he’s lying while technically telling the truth. Because if Cambridge Analytica gave its data to the RNC, it’s possible the Trump campaign acquired the Cambridge Analytica data from the RNC at that point, giving the campaign a degree of deniability about the use of such scandalously acquired data if the story of it ever became public. Like now.

Don’t for­get that data of this nature would have been poten­tial­ly use­ful for EVERY 2016 race, not just the pres­i­den­tial cam­paign. So if Ban­non and Mer­cer were intent on help­ing Repub­li­cans win across the board, hand­ing that data over to the RNC would have just made sense.

Also don’t for­get that the New York Times was shown unen­crypt­ed copies of the Face­book data col­lect­ed by Cam­bridge Ana­lyt­i­ca. If the New York Times saw this data, odds are the RNC has too. And who knows who else.

Face­book’s Sandy Parak­i­las Blows an “Utter­ly Hor­ri­fy­ing” Whis­tle

It all raises the question of whether or not the Republican National Committee now possesses all that Cambridge Analytica/Facebook data. And that brings us to perhaps the most scandalous article of all that we’re going to look at. It’s about Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, who is now blowing the whistle on exactly the kind of “friends permissions” loophole Cambridge Analytica exploited. And as the following article makes horrifically clear:

1. It’s not just Cambridge Analytica or the RNC that might possess this treasure trove of personal information. The entire data brokerage industry probably has its hands on this data, along with anyone who has picked it up through the black market.

2. It was relatively easy to write an app that could exploit this “friends permissions” feature and start trawling Facebook for the profile data of app users and their friends. Anyone with basic app coding skills could do it (see the rough sketch after this list).

3. Parakilas estimates that perhaps hundreds of thousands of developers likely exploited the same “friends permissions” loophole that Cambridge Analytica exploited. And Facebook had no way of tracking how this data was used by developers once it left Facebook’s servers.

4. Parakilas suspects that this amount of data will inevitably end up in the black market, meaning there is probably a massive amount of personally identifiable Facebook data just floating around for the entire marketing industry and anyone else (like the GOP) to data mine.

5. Parakilas knew of many commercial apps that were using the same “friends permissions” feature to grab Facebook profile data and use it for commercial purposes.

6. Face­book’s pol­i­cy of giv­ing devel­op­ers access to Face­book users’ friends’ data was sanc­tioned in the small print in Facebook’s terms and con­di­tions, and users could block such data shar­ing by chang­ing their set­tings. That appears to be part of the legal pro­tec­tion Face­book employed when it had this pol­i­cy: don’t com­plain, it’s in the fine print.

7. Per­haps most scan­dalous of all, Face­book took a 30% cut of pay­ments made through apps in exchange for giv­ing these app devel­op­ers access to Face­book user data. Yep, Face­book was effec­tive­ly sell­ing user data, but by struc­tur­ing the sale of this data as a 30% share of the pay­ments made through the app Face­book also cre­at­ed an incen­tive to help devel­op­ers max­i­mize the prof­its they made through the app. So Face­book lit­er­al­ly set up a sys­tem that incen­tivized itself to help app devel­op­ers make as much mon­ey as pos­si­ble off of the user data they were hand­ing over.

8. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users. So as of 2010, roughly 1 in 10 Facebook apps were using this loophole to grab information about both the users of the app and their friends.

9. While Cambridge Analytica was far from alone in exploiting this loophole, it was actually one of the very last firms permitted to do so. Which means the particular data set collected by Cambridge Analytica could be uniquely valuable simply by being larger and containing more recent data than most other data sets of this nature.

10. When Parak­i­las brought up these con­cerns to Face­book’s exec­u­tives and sug­gest­ed the com­pa­ny should proac­tive­ly “audit devel­op­ers direct­ly and see what’s going on with the data” he was dis­cour­aged from the approach. One Face­book exec­u­tive advised him against look­ing too deeply at how the data was being used, warn­ing him: “Do you real­ly want to see what you’ll find?” Parak­i­las said he inter­pret­ed the com­ment to mean that “Face­book was in a stronger legal posi­tion if it didn’t know about the abuse that was hap­pen­ing”

11. Shortly after arriving at the company’s Silicon Valley headquarters, Parakilas was told that any decision to ban an app required the personal approval of Mark Zuckerberg, although the policy was later relaxed to make it easier to deal with rogue developers. That said, rogue developers were rarely dealt with.

12. When Face­book even­tu­al­ly phased out this “friends per­mis­sions” pol­i­cy for app devel­op­ers, it was like­ly done out of con­cerns over the com­mer­cial val­ue of all this data they were hand­ing out. Exec­u­tives were appar­ent­ly con­cerned that com­peti­tors were going to use this data to build their own social net­works.
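
As a rough sketch of why item #2 above is so alarming, here is approximately what a “friends permissions” data grab could have looked like against the pre-2014 Graph API. The endpoint paths, field names and token handling are an approximation of that API’s v1.0 era and should be read as illustrative assumptions, not a working exploit.

# A hypothetical reconstruction of a pre-2014 "friends permissions" data grab.
# Endpoints and fields approximate the old Graph API v1.0; illustrative only.
import requests

BASE = "https://graph.facebook.com"
ACCESS_TOKEN = "TOKEN_FROM_ONE_CONSENTING_APP_USER"  # placeholder value

# One call gets the consenting user's friend list...
resp = requests.get(f"{BASE}/me/friends", params={"access_token": ACCESS_TOKEN})
friends = resp.json().get("data", [])

# ...then one call per friend pulls profile fields those friends never agreed
# to share with the app (the kinds of fields the reporting describes).
for friend in friends:
    profile = requests.get(
        f"{BASE}/{friend['id']}",
        params={
            "access_token": ACCESS_TOKEN,
            "fields": "name,hometown,religion,likes",
        },
    ).json()
    print(profile)  # ~270,000 consenting users fan out to ~50 million profiles

The asymmetry is the whole story: one user’s consent unlocked data on hundreds of friends who never saw a prompt.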

So, as we can see, the entire saga of Cam­bridge Ana­lyt­i­ca’s scan­dalous acqui­si­tion of pri­vate Face­book pro­files on ~50 mil­lion Amer­i­cans is some­thing Face­book made rou­tine for devel­op­ers of all sorts from 2007–2014, which means this is far from a ‘Cam­bridge Ana­lyt­i­ca’ sto­ry. It’s a Face­book sto­ry about a mas­sive prob­lem Face­book cre­at­ed for itself (for its own prof­its):

The Guardian

‘Utter­ly hor­ri­fy­ing’: ex-Face­book insid­er says covert data har­vest­ing was rou­tine

Sandy Parak­i­las says numer­ous com­pa­nies deployed these tech­niques – like­ly affect­ing hun­dreds of mil­lions of users – and that Face­book looked the oth­er way

Paul Lewis in San Fran­cis­co
Tue 20 Mar 2018 07.46 EDT

Hun­dreds of mil­lions of Face­book users are like­ly to have had their pri­vate infor­ma­tion har­vest­ed by com­pa­nies that exploit­ed the same terms as the firm that col­lect­ed data and passed it on to Cam­bridge Ana­lyt­i­ca, accord­ing to a new whistle­blow­er.

Sandy Parak­i­las, the plat­form oper­a­tions man­ag­er at Face­book respon­si­ble for polic­ing data breach­es by third-par­ty soft­ware devel­op­ers between 2011 and 2012, told the Guardian he warned senior exec­u­tives at the com­pa­ny that its lax approach to data pro­tec­tion risked a major breach.

“My con­cerns were that all of the data that left Face­book servers to devel­op­ers could not be mon­i­tored by Face­book, so we had no idea what devel­op­ers were doing with the data,” he said.

Parak­i­las said Face­book had terms of ser­vice and set­tings that “peo­ple didn’t read or under­stand” and the com­pa­ny did not use its enforce­ment mech­a­nisms, includ­ing audits of exter­nal devel­op­ers, to ensure data was not being mis­used.

Parak­i­las, whose job was to inves­ti­gate data breach­es by devel­op­ers sim­i­lar to the one lat­er sus­pect­ed of Glob­al Sci­ence Research, which har­vest­ed tens of mil­lions of Face­book pro­files and pro­vid­ed the data to Cam­bridge Ana­lyt­i­ca, said the slew of recent dis­clo­sures had left him dis­ap­point­ed with his supe­ri­ors for not heed­ing his warn­ings.

“It has been painful watch­ing,” he said, “because I know that they could have pre­vent­ed it.”

Asked what kind of con­trol Face­book had over the data giv­en to out­side devel­op­ers, he replied: “Zero. Absolute­ly none. Once the data left Face­book servers there was not any con­trol, and there was no insight into what was going on.”

Parak­i­las said he “always assumed there was some­thing of a black mar­ket” for Face­book data that had been passed to exter­nal devel­op­ers. How­ev­er, he said that when he told oth­er exec­u­tives the com­pa­ny should proac­tive­ly “audit devel­op­ers direct­ly and see what’s going on with the data” he was dis­cour­aged from the approach.

He said one Face­book exec­u­tive advised him against look­ing too deeply at how the data was being used, warn­ing him: “Do you real­ly want to see what you’ll find?” Parak­i­las said he inter­pret­ed the com­ment to mean that “Face­book was in a stronger legal posi­tion if it didn’t know about the abuse that was hap­pen­ing”.

He added: “They felt that it was bet­ter not to know. I found that utter­ly shock­ing and hor­ri­fy­ing.”

...

Face­book did not respond to a request for com­ment on the infor­ma­tion sup­plied by Parak­i­las, but direct­ed the Guardian to a Novem­ber 2017 blog­post in which the com­pa­ny defend­ed its data shar­ing prac­tices, which it said had “sig­nif­i­cant­ly improved” over the last five years.

“While it’s fair to crit­i­cise how we enforced our devel­op­er poli­cies more than five years ago, it’s untrue to sug­gest we didn’t or don’t care about pri­va­cy,” that state­ment said. “The facts tell a dif­fer­ent sto­ry.”

‘A major­i­ty of Face­book users’

Parak­i­las, 38, who now works as a prod­uct man­ag­er for Uber, is par­tic­u­lar­ly crit­i­cal of Facebook’s pre­vi­ous pol­i­cy of allow­ing devel­op­ers to access the per­son­al data of friends of peo­ple who used apps on the plat­form, with­out the knowl­edge or express con­sent of those friends.

That fea­ture, called friends per­mis­sion, was a boon to out­side soft­ware devel­op­ers who, from 2007 onwards, were giv­en per­mis­sion by Face­book to build quizzes and games – like the wide­ly pop­u­lar Far­mVille – that were host­ed on the plat­form.

The apps pro­lif­er­at­ed on Face­book in the years lead­ing up to the company’s 2012 ini­tial pub­lic offer­ing, an era when most users were still access­ing the plat­form via lap­tops and com­put­ers rather than smart­phones.

Face­book took a 30% cut of pay­ments made through apps, but in return enabled their cre­ators to have access to Face­book user data.

Parak­i­las does not know how many com­pa­nies sought friends per­mis­sion data before such access was ter­mi­nat­ed around mid-2014. How­ev­er, he said he believes tens or maybe even hun­dreds of thou­sands of devel­op­ers may have done so.

Parak­i­las esti­mates that “a major­i­ty of Face­book users” could have had their data har­vest­ed by app devel­op­ers with­out their knowl­edge. The com­pa­ny now has stricter pro­to­cols around the degree of access third par­ties have to data.

Parak­i­las said that when he worked at Face­book it failed to take full advan­tage of its enforce­ment mech­a­nisms, such as a clause that enables the social media giant to audit exter­nal devel­op­ers who mis­use its data.

Legal action against rogue devel­op­ers or moves to ban them from Face­book were “extreme­ly rare”, he said, adding: “In the time I was there, I didn’t see them con­duct a sin­gle audit of a developer’s sys­tems.”

Face­book announced on Mon­day that it had hired a dig­i­tal foren­sics firm to con­duct an audit of Cam­bridge Ana­lyt­i­ca. The deci­sion comes more than two years after Face­book was made aware of the report­ed data breach.

Dur­ing the time he was at Face­book, Parak­i­las said the com­pa­ny was keen to encour­age more devel­op­ers to build apps for its plat­form and “one of the main ways to get devel­op­ers inter­est­ed in build­ing apps was through offer­ing them access to this data”. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters he was told that any deci­sion to ban an app required the per­son­al approval of the chief exec­u­tive, Mark Zucker­berg, although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers.

While the pre­vi­ous pol­i­cy of giv­ing devel­op­ers access to Face­book users’ friends’ data was sanc­tioned in the small print in Facebook’s terms and con­di­tions, and users could block such data shar­ing by chang­ing their set­tings, Parak­i­las said he believed the pol­i­cy was prob­lem­at­ic.

“It was well under­stood in the com­pa­ny that that pre­sent­ed a risk,” he said. “Face­book was giv­ing data of peo­ple who had not autho­rised the app them­selves, and was rely­ing on terms of ser­vice and set­tings that peo­ple didn’t read or under­stand.”

It was this fea­ture that was exploit­ed by Glob­al Sci­ence Research, and the data pro­vid­ed to Cam­bridge Ana­lyt­i­ca in 2014. GSR was run by the Cam­bridge Uni­ver­si­ty psy­chol­o­gist Alek­san­dr Kogan, who built an app that was a per­son­al­i­ty test for Face­book users.

The test auto­mat­i­cal­ly down­loaded the data of friends of peo­ple who took the quiz, osten­si­bly for aca­d­e­m­ic pur­pos­es. Cam­bridge Ana­lyt­i­ca has denied know­ing the data was obtained improp­er­ly, and Kogan main­tains he did noth­ing ille­gal and had a “close work­ing rela­tion­ship” with Face­book.

While Kogan’s app only attract­ed around 270,000 users (most of whom were paid to take the quiz), the com­pa­ny was then able to exploit the friends per­mis­sion fea­ture to quick­ly amass data per­tain­ing to more than 50 mil­lion Face­book users.

“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.

If those fig­ures were extrap­o­lat­ed, tens of thou­sands of apps, if not more, were like­ly to have sys­tem­at­i­cal­ly culled “pri­vate and per­son­al­ly iden­ti­fi­able” data belong­ing to hun­dreds of mil­lions of users, Parak­i­las said.

The ease with which it was pos­si­ble for any­one with rel­a­tive­ly basic cod­ing skills to cre­ate apps and start trawl­ing for data was a par­tic­u­lar con­cern, he added.

Parak­i­las said he was unsure why Face­book stopped allow­ing devel­op­ers to access friends data around mid-2014, rough­ly two years after he left the com­pa­ny. How­ev­er, he said he believed one rea­son may have been that Face­book exec­u­tives were becom­ing aware that some of the largest apps were acquir­ing enor­mous troves of valu­able data.

He recalled con­ver­sa­tions with exec­u­tives who were ner­vous about the com­mer­cial val­ue of data being passed to oth­er com­pa­nies.

“They were wor­ried that the large app devel­op­ers were build­ing their own social graphs, mean­ing they could see all the con­nec­tions between these peo­ple,” he said. “They were wor­ried that they were going to build their own social net­works.”

‘They treat­ed it like a PR exer­cise’

Parak­i­las said he lob­bied inter­nal­ly at Face­book for “a more rig­or­ous approach” to enforc­ing data pro­tec­tion, but was offered lit­tle sup­port. His warn­ings includ­ed a Pow­er­Point pre­sen­ta­tion he said he deliv­ered to senior exec­u­tives in mid-2012 “that includ­ed a map of the vul­ner­a­bil­i­ties for user data on Facebook’s plat­form”.

“I includ­ed the pro­tec­tive mea­sures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do mali­cious things with the data,” he said. “On the list of bad actors I includ­ed for­eign state actors and data bro­kers.”

Frus­trat­ed at the lack of action, Parak­i­las left Face­book in late 2012. “I didn’t feel that the com­pa­ny treat­ed my con­cerns seri­ous­ly. I didn’t speak out pub­licly for years out of self-inter­est, to be frank.”

That changed, Parak­i­las said, when he heard the con­gres­sion­al tes­ti­mo­ny giv­en by Face­book lawyers to Sen­ate and House inves­ti­ga­tors in late 2017 about Russia’s attempt to sway the pres­i­den­tial elec­tion. “They treat­ed it like a PR exer­cise,” he said. “They seemed to be entire­ly focused on lim­it­ing their lia­bil­i­ty and expo­sure rather than help­ing the coun­try address a nation­al secu­ri­ty issue.”

It was at that point that Parak­i­las decid­ed to go pub­lic with his con­cerns, writ­ing an opin­ion arti­cle in the New York Times that said Face­book could not be trust­ed to reg­u­late itself. Since then, Parak­i­las has become an advis­er to the Cen­ter for Humane Tech­nol­o­gy, which is run by Tris­tan Har­ris, a for­mer Google employ­ee turned whistle­blow­er on the indus­try.

———-

“ ‘Utter­ly hor­ri­fy­ing’: ex-Face­book insid­er says covert data har­vest­ing was rou­tine” by Paul Lewis; The Guardian; 03/20/2018

“Sandy Parak­i­las, the plat­form oper­a­tions man­ag­er at Face­book respon­si­ble for polic­ing data breach­es by third-par­ty soft­ware devel­op­ers between 2011 and 2012, told the Guardian he warned senior exec­u­tives at the com­pa­ny that its lax approach to data pro­tec­tion risked a major breach.”

The plat­form oper­a­tions man­ag­er at Face­book respon­si­ble for polic­ing data breach­es by third-par­ty soft­ware devel­op­ers between 2011 and 2012: That’s who is mak­ing these claims. In oth­er words, Sandy Parak­i­las is indeed some­one who should be inti­mate­ly famil­iar with Face­book’s poli­cies of hand­ing user data over to app devel­op­ers because it was his job to ensure that data was­n’t breached.

And as Parakilas makes clear, he wasn’t actually able to do his job. Once the data was handed over to app developers and left Facebook’s servers, Facebook had no idea what the developers were doing with it and apparently no interest in learning:

...
“My con­cerns were that all of the data that left Face­book servers to devel­op­ers could not be mon­i­tored by Face­book, so we had no idea what devel­op­ers were doing with the data,” he said.

Parak­i­las said Face­book had terms of ser­vice and set­tings that “peo­ple didn’t read or under­stand” and the com­pa­ny did not use its enforce­ment mech­a­nisms, includ­ing audits of exter­nal devel­op­ers, to ensure data was not being mis­used.

Parak­i­las, whose job was to inves­ti­gate data breach­es by devel­op­ers sim­i­lar to the one lat­er sus­pect­ed of Glob­al Sci­ence Research, which har­vest­ed tens of mil­lions of Face­book pro­files and pro­vid­ed the data to Cam­bridge Ana­lyt­i­ca, said the slew of recent dis­clo­sures had left him dis­ap­point­ed with his supe­ri­ors for not heed­ing his warn­ings.

“It has been painful watch­ing,” he said, “because I know that they could have pre­vent­ed it.”

Asked what kind of con­trol Face­book had over the data giv­en to out­side devel­op­ers, he replied: “Zero. Absolute­ly none. Once the data left Face­book servers there was not any con­trol, and there was no insight into what was going on.”
...

And this complete lack of oversight by Facebook led Parakilas to assume there was “something of a black market” for that Facebook data. But when he raised these concerns with fellow executives he was warned not to look. Not knowing how the data was being used was, ironically, part of Facebook’s legal strategy, it seems:

...
Parak­i­las said he “always assumed there was some­thing of a black mar­ket” for Face­book data that had been passed to exter­nal devel­op­ers. How­ev­er, he said that when he told oth­er exec­u­tives the com­pa­ny should proac­tive­ly “audit devel­op­ers direct­ly and see what’s going on with the data” he was dis­cour­aged from the approach.

He said one Face­book exec­u­tive advised him against look­ing too deeply at how the data was being used, warn­ing him: “Do you real­ly want to see what you’ll find?” Parak­i­las said he inter­pret­ed the com­ment to mean that “Face­book was in a stronger legal posi­tion if it didn’t know about the abuse that was hap­pen­ing”.

He added: “They felt that it was bet­ter not to know. I found that utter­ly shock­ing and hor­ri­fy­ing.”
...

“They felt that it was bet­ter not to know. I found that utter­ly shock­ing and hor­ri­fy­ing.”

Well, at least one exec­u­tive at Face­book was utter­ly shocked and hor­ri­fied by the “bet­ter not to know” pol­i­cy towards hand­ing per­son­al pri­vate infor­ma­tion over to devel­op­ers. And that one exec­u­tive, Parak­i­las, left the com­pa­ny and is now a whis­tle-blow­er.

And one of the things that made Parakilas particularly concerned that this practice was widespread among apps was the fact that it was so easy to create apps that could then just be released onto Facebook to trawl for profile data from users and their unwitting friends:

...
The ease with which it was pos­si­ble for any­one with rel­a­tive­ly basic cod­ing skills to cre­ate apps and start trawl­ing for data was a par­tic­u­lar con­cern, he added.
...
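To appreciate just how low that bar was, consider what a “friends permissions” request looked like in practice. Here is a minimal illustrative sketch in Python, not anyone’s actual code: the token value, field list, and page size are hypothetical, and exactly which friend fields came back depended on which friends_* permissions the app had requested under the pre-2014 Graph API:

    # Sketch of the kind of "friends permissions" query the pre-2014
    # Graph API allowed. ACCESS_TOKEN stands in for a token granted by
    # a single app user who clicked through the permissions dialog.
    import requests

    ACCESS_TOKEN = "hypothetical-user-token"

    # One call returns profile fields for every friend of that one user,
    # none of whom ever saw the app or agreed to anything.
    resp = requests.get(
        "https://graph.facebook.com/me/friends",
        params={
            "access_token": ACCESS_TOKEN,
            "fields": "id,name,birthday,likes",  # friends_* profile data
            "limit": 500,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for friend in resp.json().get("data", []):
        print(friend["id"], friend.get("name"), friend.get("birthday"))

A dozen lines and one consenting user, and a developer was in the profile-harvesting business. That’s the “ease” Parakilas is describing.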

And while rogue app developers were at times dealt with, it was exceedingly rare; Parakilas didn’t witness a single audit of a developer’s systems during his time there.

Even more alarming is that Facebook was apparently quite keen on app developers grabbing this Facebook profile data, using it as an incentive to encourage even more app development. Apps were seen as so important to Facebook that Mark Zuckerberg himself had to give his personal approval to ban an app. And while that policy was later relaxed to no longer require Zuckerberg’s approval, it doesn’t sound like the change actually resulted in more apps getting banned:

...
Parak­i­las said that when he worked at Face­book it failed to take full advan­tage of its enforce­ment mech­a­nisms, such as a clause that enables the social media giant to audit exter­nal devel­op­ers who mis­use its data.

Legal action against rogue devel­op­ers or moves to ban them from Face­book were “extreme­ly rare”, he said, adding: “In the time I was there, I didn’t see them con­duct a sin­gle audit of a developer’s sys­tems.”

Dur­ing the time he was at Face­book, Parak­i­las said the com­pa­ny was keen to encour­age more devel­op­ers to build apps for its plat­form and “one of the main ways to get devel­op­ers inter­est­ed in build­ing apps was through offer­ing them access to this data”. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters he was told that any deci­sion to ban an app required the per­son­al approval of the chief exec­u­tive, Mark Zucker­berg, although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers.
...

So how many Facebook users likely had their private profile information harvested via this ‘fine print’ feature that allowed app developers to scrape the profiles of app users and their friends? According to Parakilas, probably a majority of Facebook users. So that black market of Facebook profiles probably includes a majority of Facebook users. But even more amazing is that Facebook handed out this personal user information to app developers in exchange for a 30 percent share of the money they made through the app. Facebook was basically selling private user data directly to developers, which is a big reason why Parakilas’s estimate that a majority of Facebook users were impacted is likely true. Especially if, as Parakilas hints, the number of developers grabbing user profile information via these apps might be in the hundreds of thousands. That’s a lot of developers potentially feeding into that black market:

...
‘A major­i­ty of Face­book users’

Parak­i­las, 38, who now works as a prod­uct man­ag­er for Uber, is par­tic­u­lar­ly crit­i­cal of Facebook’s pre­vi­ous pol­i­cy of allow­ing devel­op­ers to access the per­son­al data of friends of peo­ple who used apps on the plat­form, with­out the knowl­edge or express con­sent of those friends.

That fea­ture, called friends per­mis­sion, was a boon to out­side soft­ware devel­op­ers who, from 2007 onwards, were giv­en per­mis­sion by Face­book to build quizzes and games – like the wide­ly pop­u­lar Far­mVille – that were host­ed on the plat­form.

The apps pro­lif­er­at­ed on Face­book in the years lead­ing up to the company’s 2012 ini­tial pub­lic offer­ing, an era when most users were still access­ing the plat­form via lap­tops and com­put­ers rather than smart­phones.

Face­book took a 30% cut of pay­ments made through apps, but in return enabled their cre­ators to have access to Face­book user data.

Parak­i­las does not know how many com­pa­nies sought friends per­mis­sion data before such access was ter­mi­nat­ed around mid-2014. How­ev­er, he said he believes tens or maybe even hun­dreds of thou­sands of devel­op­ers may have done so.

Parak­i­las esti­mates that “a major­i­ty of Face­book users” could have had their data har­vest­ed by app devel­op­ers with­out their knowl­edge. The com­pa­ny now has stricter pro­to­cols around the degree of access third par­ties have to data.

...

Dur­ing the time he was at Face­book, Parak­i­las said the com­pa­ny was keen to encour­age more devel­op­ers to build apps for its plat­form and “one of the main ways to get devel­op­ers inter­est­ed in build­ing apps was through offer­ing them access to this data”. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters he was told that any deci­sion to ban an app required the per­son­al approval of the chief exec­u­tive, Mark Zucker­berg, although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers.
...

“Face­book took a 30% cut of pay­ments made through apps, but in return enabled their cre­ators to have access to Face­book user data.”

And that, right there, is perhaps the biggest scandal here: Facebook just handed user data away in exchange for revenue streams from app developers. And this was a key element of its business model during the 2007–2014 period. “Read the fine print” in the terms of service was the excuse they used:

...
“It was well under­stood in the com­pa­ny that that pre­sent­ed a risk,” he said. “Face­book was giv­ing data of peo­ple who had not autho­rised the app them­selves, and was rely­ing on terms of ser­vice and set­tings that peo­ple didn’t read or under­stand.”

It was this fea­ture that was exploit­ed by Glob­al Sci­ence Research, and the data pro­vid­ed to Cam­bridge Ana­lyt­i­ca in 2014. GSR was run by the Cam­bridge Uni­ver­si­ty psy­chol­o­gist Alek­san­dr Kogan, who built an app that was a per­son­al­i­ty test for Face­book users.
...

And this is all why Aleksandr Kogan’s assertions that he had a close working relationship with Facebook and did nothing technically wrong actually do seem to be backed up by Parakilas’s whistle-blowing. It’s hard to see what Kogan did that wasn’t part of Facebook’s business model, and it’s hard to ignore that Kogan’s GSR shell company was one of the very last apps permitted to exploit the “friends permissions” loophole. That sure does suggest that Kogan really did have a “close working relationship” with Facebook. So close that he got seemingly favored treatment, and that’s compared to the seemingly vast number of apps that were using this “friends permissions” feature: 1 in 10 Facebook apps, according to a 2010 study:

...
The test auto­mat­i­cal­ly down­loaded the data of friends of peo­ple who took the quiz, osten­si­bly for aca­d­e­m­ic pur­pos­es. Cam­bridge Ana­lyt­i­ca has denied know­ing the data was obtained improp­er­ly, and Kogan main­tains he did noth­ing ille­gal and had a “close work­ing rela­tion­ship” with Face­book.

While Kogan’s app only attract­ed around 270,000 users (most of whom were paid to take the quiz), the com­pa­ny was then able to exploit the friends per­mis­sion fea­ture to quick­ly amass data per­tain­ing to more than 50 mil­lion Face­book users.

“Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.

If those fig­ures were extrap­o­lat­ed, tens of thou­sands of apps, if not more, were like­ly to have sys­tem­at­i­cal­ly culled “pri­vate and per­son­al­ly iden­ti­fi­able” data belong­ing to hun­dreds of mil­lions of users, Parak­i­las said.
...

““Kogan’s app was one of the very last to have access to friend permissions,” Parakilas said, adding that many other similar apps had been harvesting similar quantities of data for years for commercial purposes. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.”

As of 2010, around 11 percent of app developers were requesting data belonging to friends of users. Keep that in mind when Facebook claims that Aleksandr Kogan improperly obtained data from the friends of the people who downloaded his app.
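To spell out the arithmetic behind those figures (a back-of-the-envelope sketch: the inputs are the numbers from the reporting above, and the per-user friend average is simply what those numbers imply, not an independently sourced figure):

    # Rough arithmetic behind the quoted figures.
    app_users = 270_000               # people who actually used Kogan's app
    profiles_harvested = 50_000_000   # profiles reportedly obtained

    print(round(profiles_harvested / app_users))
    # => 185: each consenting user quietly exposed roughly 185 friends

    sampled_apps = 1_800              # apps analyzed in the 2010 study
    share_requesting_friends = 0.11   # share requesting friends' data
    print(round(sampled_apps * share_requesting_friends))
    # => 198 of the sampled apps alone; extrapolated across the much
    # larger full app ecosystem, that 11% share is how you get to "tens
    # of thousands of apps" culling data on hundreds of millions of users.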

So what made Facebook eventually end this “friends permissions” policy in mid-2014? While Parakilas had already left the company by then, he does recall conversations with executives who were nervous about competitors building their own social networks from all the data Facebook was giving away:

...
Parak­i­las said he was unsure why Face­book stopped allow­ing devel­op­ers to access friends data around mid-2014, rough­ly two years after he left the com­pa­ny. How­ev­er, he said he believed one rea­son may have been that Face­book exec­u­tives were becom­ing aware that some of the largest apps were acquir­ing enor­mous troves of valu­able data.

He recalled con­ver­sa­tions with exec­u­tives who were ner­vous about the com­mer­cial val­ue of data being passed to oth­er com­pa­nies.

“They were wor­ried that the large app devel­op­ers were build­ing their own social graphs, mean­ing they could see all the con­nec­tions between these peo­ple,” he said. “They were wor­ried that they were going to build their own social net­works.”
...

That’s how much data Face­book was hand­ing out to encour­age new app devel­op­ment: so much data that they were con­cerned about cre­at­ing com­peti­tors.

Finally, it’s important to note that the picture painted by Parakilas only goes through the end of 2012, when he left in frustration. So we don’t actually have testimony from a Facebook insider who, like Parakilas, policed app data breaches during the period when Cambridge Analytica was engaged in its mass data collection scheme:

...
Frus­trat­ed at the lack of action, Parak­i­las left Face­book in late 2012. “I didn’t feel that the com­pa­ny treat­ed my con­cerns seri­ous­ly. I didn’t speak out pub­licly for years out of self-inter­est, to be frank.”
...

Now, it seems like a safe bet that the problem only got worse after Parakilas left, given how the Cambridge Analytica situation played out, but we don’t yet know just how bad things were by that point.

Alek­san­dr Kogan: Face­book’s Close Friend (Until He Belat­ed­ly Was­n’t)

So, factoring in what we just saw with Parakilas’s claims about the extent to which Facebook was handing out private Facebook profile data (the internal profile that Facebook builds up about you) to app developers for widespread commercial applications, let’s take a look at some of the claims Aleksandr Kogan has made about his relationship with Facebook. Because while Kogan makes some extraordinary claims, they are also consistent with Parakilas’s claims, although in some cases Kogan’s description actually goes much further than Parakilas’s.

For instance, according to the following Guardian article ...

1. In an email to col­leagues at the Uni­ver­si­ty of Cam­bridge, Alek­san­dr Kogan said that he had cre­at­ed the Face­book app in 2013 for aca­d­e­m­ic pur­pos­es, and used it for “a num­ber of stud­ies”. After he found­ed GSR, Kogan wrote, he trans­ferred the app to the com­pa­ny and changed its name, logo, descrip­tion, and terms and con­di­tions.

2. Kogan also claims in that email that the contract his GSR company signed with Facebook in 2014 made it absolutely clear the data was going to be used for commercial applications and that app users were granting Kogan’s company the right to license or resell the data. “We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge,” Kogan wrote. “We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes.” So Kogan says he made it clear to Facebook and to users that the app was for commercial purposes and that the data might be resold, which sounds like the kind of situation Sandy Parakilas said he witnessed, except even more open (and it should be easily verifiable if the app code still exists).

3. Facebook didn’t actually kick Kogan off of its platform until March 16th of this year, just days before this story broke. Which is consistent with Kogan’s claims that he had a good working relationship with Facebook.

4. When Kogan found­ed Glob­al Sci­ence Research (GSR) in May 2014, he co-found­ed it with anoth­er Cam­bridge researcher, Joseph Chan­cel­lor. Chan­cel­lor is cur­rent­ly employed by Face­book.

5. Facebook provided Kogan’s University of Cambridge lab with a dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level”: 57 billion Facebook relationships in all. The data was anonymized and aggregated, so it didn’t literally include details on individual Facebook friendships; it was instead aggregate “friend” counts at the national level. The data was used to publish a study in Personality and Individual Differences in 2015, and two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. It’s another sign that Kogan is indeed being honest when he says he had a close working relationship with Facebook. It’s also a reminder that when Facebook claims it was only handing out data for “research purposes,” if that were true it would have handed out anonymized, aggregated data like it did in this situation with Kogan.

6. That study co-authored by Kogan’s team and Facebook didn’t just use the anonymized, aggregated friendship data. The study also used non-anonymized Facebook data collected through Facebook apps using exactly the same techniques Kogan’s app for Cambridge Analytica used. This study was published in August of 2015. Again, it was a study co-authored by Facebook. GSR co-founder Joseph Chancellor left GSR a month later and joined Facebook as a user experience researcher in November 2015. Recall that it was a month after that, December 2015, when we saw the first news reports of Ted Cruz’s campaign using Facebook data. Also recall that Facebook responded to that December 2015 report by saying it would look into the matter. Facebook finally sent Cambridge Analytica a letter in August of 2016, days before Steve Bannon was named chief executive of Trump’s campaign, asking that Cambridge Analytica delete the data. So the fact that Facebook co-authored a paper with Kogan and Chancellor in August of 2015, and that Chancellor then joined Facebook in November of that year, is a pretty significant bit of context for looking into Facebook’s behavior. Facebook didn’t just know it had worked closely with Kogan. It also knew it had just co-authored an academic paper using data gathered with the same technique Cambridge Analytica was charged with using.

7. Kogan does chal­lenge one of the claims by Christo­pher Wylie. Specif­i­cal­ly, Wylie claimed that Face­book became alarmed over the vol­ume of data Kogan’s app was scoop­ing up (50 mil­lion pro­files) but Kogan assuaged those con­cerns by say­ing it was all for research. Kogan says this is a fab­ri­ca­tion and Face­book nev­er actu­al­ly con­tact­ed him express­ing alarm.

So, accord­ing to Alek­san­dr Kogan, Face­book real­ly did have an excep­tion­al­ly close rela­tion­ship with Kogan and Face­book real­ly was total­ly on board with what Kogan and Cam­bridge Ana­lyt­i­ca were doing:

The Guardian

Face­book gave data about 57bn friend­ships to aca­d­e­m­ic
Vol­ume of data sug­gests trust­ed part­ner­ship with Alek­san­dr Kogan, says ana­lyst

Julia Car­rie Wong and Paul Lewis in San Fran­cis­co
Thu 22 Mar 2018 10.56 EDT
Last mod­i­fied on Sat 24 Mar 2018 22.56 EDT

Before Face­book sus­pend­ed Alek­san­dr Kogan from its plat­form for the data har­vest­ing “scam” at the cen­tre of the unfold­ing Cam­bridge Ana­lyt­i­ca scan­dal, the social media com­pa­ny enjoyed a close enough rela­tion­ship with the researcher that it pro­vid­ed him with an anonymised, aggre­gate dataset of 57bn Face­book friend­ships.

Face­book pro­vid­ed the dataset of “every friend­ship formed in 2011 in every coun­try in the world at the nation­al aggre­gate lev­el” to Kogan’s Uni­ver­si­ty of Cam­bridge lab­o­ra­to­ry for a study on inter­na­tion­al friend­ships pub­lished in Per­son­al­i­ty and Indi­vid­ual Dif­fer­ences in 2015. Two Face­book employ­ees were named as co-authors of the study, along­side researchers from Cam­bridge, Har­vard and the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley. Kogan was pub­lish­ing under the name Alek­san­dr Spec­tre at the time.

A Uni­ver­si­ty of Cam­bridge press release on the study’s pub­li­ca­tion not­ed that the paper was “the first out­put of ongo­ing research col­lab­o­ra­tions between Spectre’s lab in Cam­bridge and Face­book”. Face­book did not respond to queries about whether any oth­er col­lab­o­ra­tions occurred.

“The sheer vol­ume of the 57bn friend pairs implies a pre-exist­ing rela­tion­ship,” said Jonathan Albright, research direc­tor at the Tow Cen­ter for Dig­i­tal Jour­nal­ism at Colum­bia Uni­ver­si­ty. “It’s not com­mon for Face­book to share that kind of data. It sug­gests a trust­ed part­ner­ship between Alek­san­dr Kogan/Spectre and Face­book.”

Face­book down­played the sig­nif­i­cance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was lit­er­al­ly num­bers – num­bers of how many friend­ships were made between pairs of coun­tries – ie x num­ber of friend­ships made between the US and UK,” Face­book spokes­woman Chris­tine Chen said by email. “There was no per­son­al­ly iden­ti­fi­able infor­ma­tion includ­ed in this data.”

Facebook’s rela­tion­ship with Kogan has since soured.

“We end­ed our work­ing rela­tion­ship with Kogan alto­geth­er after we learned that he vio­lat­ed Facebook’s terms of ser­vice for his unre­lat­ed work as a Face­book app devel­op­er,” Chen said. Face­book has said that it learned of Kogan’s mis­use of the data in Decem­ber 2015, when the Guardian first report­ed that the data had been obtained by Cam­bridge Ana­lyt­i­ca.

“We start­ed to take steps to end the rela­tion­ship right after the Guardian report, and after inves­ti­ga­tion we end­ed the rela­tion­ship soon after, in 2016,” Chen said.

On Fri­day 16 March, in antic­i­pa­tion of the Observ­er’s report­ing that Kogan had improp­er­ly har­vest­ed and shared the data of more than 50 mil­lion Amer­i­cans, Face­book sus­pend­ed Kogan from the plat­form, issued a state­ment say­ing that he “lied” to the com­pa­ny, and char­ac­terised his activ­i­ties as “a scam – and a fraud”.

On Tues­day, Face­book went fur­ther, say­ing in a state­ment: “The entire com­pa­ny is out­raged we were deceived.” And on Wednes­day, in his first pub­lic state­ment on the scan­dal, its chief exec­u­tive, Mark Zucker­berg, called Kogan’s actions a “breach of trust”.

But Face­book has not explained how it came to have such a close rela­tion­ship with Kogan that it was co-author­ing research papers with him, nor why it took until this week – more than two years after the Guardian ini­tial­ly report­ed on Kogan’s data har­vest­ing activ­i­ties – for it to inform the users whose per­son­al infor­ma­tion was improp­er­ly shared.

And Kogan has offered a defence of his actions in an inter­view with the BBC and an email to his Cam­bridge col­leagues obtained by the Guardian. “My view is that I’m being basi­cal­ly used as a scape­goat by both Face­book and Cam­bridge Ana­lyt­i­ca,” Kogan said on Radio 4 on Wednes­day.

The data col­lec­tion that result­ed in Kogan’s sus­pen­sion by Face­book was under­tak­en by Glob­al Sci­ence Research (GSR), a com­pa­ny he found­ed in May 2014 with anoth­er Cam­bridge researcher, Joseph Chan­cel­lor. Chan­cel­lor is cur­rent­ly employed by Face­book.

Between June and August of that year, GSR paid approx­i­mate­ly 270,000 indi­vid­u­als to use a Face­book ques­tion­naire app that har­vest­ed data from their own Face­book pro­files, as well as from their friends, result­ing in a dataset of more than 50 mil­lion users. The data was sub­se­quent­ly giv­en to Cam­bridge Ana­lyt­i­ca, in what Face­book has said was a vio­la­tion of Kogan’s agree­ment to use the data sole­ly for aca­d­e­m­ic pur­pos­es.

In his email to col­leagues at Cam­bridge, Kogan said that he had cre­at­ed the Face­book app in 2013 for aca­d­e­m­ic pur­pos­es, and used it for “a num­ber of stud­ies”. After he found­ed GSR, Kogan wrote, he trans­ferred the app to the com­pa­ny and changed its name, logo, descrip­tion, and terms and con­di­tions. CNN first report­ed on the Cam­bridge email. Kogan did not respond to the Guardian’s request for com­ment on this arti­cle.

“We made clear the app was for com­mer­cial use – we nev­er men­tioned aca­d­e­m­ic research nor the Uni­ver­si­ty of Cam­bridge,” Kogan wrote. “We clear­ly stat­ed that the users were grant­i­ng us the right to use the data in broad scope, includ­ing sell­ing and licens­ing the data. These changes were all made on the Face­book app plat­form and thus they had full abil­i­ty to review the nature of the app and raise issues. Face­book at no point raised any con­cerns at all about any of these changes.”

Kogan is not alone in crit­i­cis­ing Facebook’s appar­ent efforts to place the blame on him.

“In my view, it’s Face­book that did most of the shar­ing,” said Albright, who ques­tioned why Face­book cre­at­ed a sys­tem for third par­ties to access so much per­son­al infor­ma­tion in the first place. That sys­tem “was designed to share their users’ data in mean­ing­ful ways in exchange for stock val­ue”, he added.

Whistle­blow­er Christo­pher Wylie told the Observ­er that Face­book was aware of the vol­ume of data being pulled by Kogan’s app. “Their secu­ri­ty pro­to­cols were trig­gered because Kogan’s apps were pulling this enor­mous amount of data, but appar­ent­ly Kogan told them it was for aca­d­e­m­ic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cam­bridge email, Kogan char­ac­terised this claim as a “fab­ri­ca­tion”, writ­ing: “There was no exchange with Face­book about it, and ... we nev­er claimed dur­ing the project that it was for aca­d­e­m­ic research. In fact, we did our absolute best not to have the project have any entan­gle­ments with the uni­ver­si­ty.”

The col­lab­o­ra­tion between Kogan and Face­book researchers which result­ed in the report pub­lished in 2015 also used data har­vest­ed by a Face­book app. The study analysed two datasets, the anony­mous macro-lev­el nation­al set of 57bn friend pairs pro­vid­ed by Face­book and a small­er dataset col­lect­ed by the Cam­bridge aca­d­e­mics.

For the small­er dataset, the research team used the same method of pay­ing peo­ple to use a Face­book app that har­vest­ed data about the indi­vid­u­als and their friends. Face­book was not involved in this part of the study. The study notes that the users signed a con­sent form about the research and that “no decep­tion was used”.

The paper was pub­lished in late August 2015. In Sep­tem­ber 2015, Chan­cel­lor left GSR, accord­ing to com­pa­ny records. In Novem­ber 2015, Chan­cel­lor was hired to work at Face­book as a user expe­ri­ence researcher.

...

———-

“Face­book gave data about 57bn friend­ships to aca­d­e­m­ic” by Julia Car­rie Wong and Paul Lewis; The Guardian; 03/22/2018

“Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.”

An anonymized, aggre­gate dataset of 57bn Face­book friend­ships sure makes it a lot eas­i­er to take Kogan at his word when he claims a close work­ing rela­tion­ship with Face­book.

Now, keep in mind that the anonymized data was aggregated at the national level, so it’s not as if Facebook gave Kogan a list of 57 billion individual Facebook friendships. And when you think about it, that aggregated, anonymized data is far less sensitive than the personal Facebook profile data Kogan and other app developers were routinely grabbing during this period. It’s the fact that Facebook gave this data to Kogan in the first place that lends credence to his claims.
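For a concrete sense of what “aggregated at the national level” means, here’s a toy sketch (the country pairs are invented for illustration): individual friendship edges get collapsed into country-pair totals, so no individual relationship survives in the released data:

    # Toy illustration of national-level aggregation of friendship pairs.
    from collections import Counter

    # Hypothetical raw edges: (country of user A, country of user B)
    friendships = [("US", "UK"), ("US", "UK"), ("US", "DE"), ("UK", "DE")]

    # Collapse to country-pair counts; order within a pair doesn't matter.
    aggregate = Counter(tuple(sorted(pair)) for pair in friendships)
    print(aggregate)
    # Counter({('UK', 'US'): 2, ('DE', 'US'): 1, ('DE', 'UK'): 1})

Per Facebook’s spokeswoman, the 57bn-pair dataset was “literally numbers” of exactly this kind, just at global scale.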

But the biggest factor lending credence to Kogan’s claims is the fact that Facebook co-authored a study with Kogan and others at the University of Cambridge using that anonymized, aggregated data. Two Facebook employees were named as co-authors of the study. That is definitely a sign of a close working relationship:

...
Face­book pro­vid­ed the dataset of “every friend­ship formed in 2011 in every coun­try in the world at the nation­al aggre­gate lev­el” to Kogan’s Uni­ver­si­ty of Cam­bridge lab­o­ra­to­ry for a study on inter­na­tion­al friend­ships pub­lished in Per­son­al­i­ty and Indi­vid­ual Dif­fer­ences in 2015. Two Face­book employ­ees were named as co-authors of the study, along­side researchers from Cam­bridge, Har­vard and the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley. Kogan was pub­lish­ing under the name Alek­san­dr Spec­tre at the time.

A Uni­ver­si­ty of Cam­bridge press release on the study’s pub­li­ca­tion not­ed that the paper was “the first out­put of ongo­ing research col­lab­o­ra­tions between Spectre’s lab in Cam­bridge and Face­book”. Face­book did not respond to queries about whether any oth­er col­lab­o­ra­tions occurred.

“The sheer vol­ume of the 57bn friend pairs implies a pre-exist­ing rela­tion­ship,” said Jonathan Albright, research direc­tor at the Tow Cen­ter for Dig­i­tal Jour­nal­ism at Colum­bia Uni­ver­si­ty. “It’s not com­mon for Face­book to share that kind of data. It sug­gests a trust­ed part­ner­ship between Alek­san­dr Kogan/Spectre and Face­book.”
...

Even more damning for Facebook is that the research co-authored by Kogan, Facebook, and the other researchers didn’t just include the anonymized, aggregated data. It also included a second dataset of non-anonymized data that was harvested in exactly the same way Kogan’s GSR app worked. And while Facebook apparently wasn’t involved in that part of the study, that’s beside the point. Facebook clearly knew about it if it co-authored the study:

...
The col­lab­o­ra­tion between Kogan and Face­book researchers which result­ed in the report pub­lished in 2015 also used data har­vest­ed by a Face­book app. The study analysed two datasets, the anony­mous macro-lev­el nation­al set of 57bn friend pairs pro­vid­ed by Face­book and a small­er dataset col­lect­ed by the Cam­bridge aca­d­e­mics.

For the small­er dataset, the research team used the same method of pay­ing peo­ple to use a Face­book app that har­vest­ed data about the indi­vid­u­als and their friends. Face­book was not involved in this part of the study. The study notes that the users signed a con­sent form about the research and that “no decep­tion was used”.

The paper was pub­lished in late August 2015. In Sep­tem­ber 2015, Chan­cel­lor left GSR, accord­ing to com­pa­ny records. In Novem­ber 2015, Chan­cel­lor was hired to work at Face­book as a user expe­ri­ence researcher.
...

But, alas, Kogan’s relationship with Facebook has since soured, with Facebook now acting as if Kogan had totally violated its trust. And yet it’s hard to ignore the fact that Kogan wasn’t formally kicked off Facebook’s platform until March 16th of this year, just a few days before all these stories about Kogan and Facebook were about to go public:

...
Facebook’s rela­tion­ship with Kogan has since soured.

“We end­ed our work­ing rela­tion­ship with Kogan alto­geth­er after we learned that he vio­lat­ed Facebook’s terms of ser­vice for his unre­lat­ed work as a Face­book app devel­op­er,” Chen said. Face­book has said that it learned of Kogan’s mis­use of the data in Decem­ber 2015, when the Guardian first report­ed that the data had been obtained by Cam­bridge Ana­lyt­i­ca.

“We start­ed to take steps to end the rela­tion­ship right after the Guardian report, and after inves­ti­ga­tion we end­ed the rela­tion­ship soon after, in 2016,” Chen said.

On Fri­day 16 March, in antic­i­pa­tion of the Observ­er’s report­ing that Kogan had improp­er­ly har­vest­ed and shared the data of more than 50 mil­lion Amer­i­cans, Face­book sus­pend­ed Kogan from the plat­form, issued a state­ment say­ing that he “lied” to the com­pa­ny, and char­ac­terised his activ­i­ties as “a scam – and a fraud”.

On Tues­day, Face­book went fur­ther, say­ing in a state­ment: “The entire com­pa­ny is out­raged we were deceived.” And on Wednes­day, in his first pub­lic state­ment on the scan­dal, its chief exec­u­tive, Mark Zucker­berg, called Kogan’s actions a “breach of trust”.
...

““The entire com­pa­ny is out­raged we were deceived.” And on Wednes­day, in his first pub­lic state­ment on the scan­dal, its chief exec­u­tive, Mark Zucker­berg, called Kogan’s actions a “breach of trust”.”

Mark Zucker­berg is com­plain­ing about a “breach of trust.” LOL!

And yet Facebook has yet to explain the nature of its relationship with Kogan or why it didn’t kick him off the platform until only recently. But Kogan has an explanation: he’s a scapegoat, and he wasn’t doing anything Facebook didn’t know he was doing. And when you notice that Kogan’s GSR co-founder, Joseph Chancellor, is now a Facebook employee, it’s hard not to take his claims seriously:

...
But Face­book has not explained how it came to have such a close rela­tion­ship with Kogan that it was co-author­ing research papers with him, nor why it took until this week – more than two years after the Guardian ini­tial­ly report­ed on Kogan’s data har­vest­ing activ­i­ties – for it to inform the users whose per­son­al infor­ma­tion was improp­er­ly shared.

And Kogan has offered a defence of his actions in an inter­view with the BBC and an email to his Cam­bridge col­leagues obtained by the Guardian. “My view is that I’m being basi­cal­ly used as a scape­goat by both Face­book and Cam­bridge Ana­lyt­i­ca,” Kogan said on Radio 4 on Wednes­day.

The data col­lec­tion that result­ed in Kogan’s sus­pen­sion by Face­book was under­tak­en by Glob­al Sci­ence Research (GSR), a com­pa­ny he found­ed in May 2014 with anoth­er Cam­bridge researcher, Joseph Chan­cel­lor. Chan­cel­lor is cur­rent­ly employed by Face­book.
...

But if Kogan’s claims are to be taken seriously, we have a pretty serious scandal on our hands. Because Kogan claims that not only did he make it clear to Facebook and his app users that the data they were collecting was for commercial use (with no mention of academic or research purposes or of the University of Cambridge), but he also claims that he made it clear the data GSR was collecting could be licensed and resold. And Facebook at no point raised any concerns at all about any of this:

...
“We made clear the app was for com­mer­cial use – we nev­er men­tioned aca­d­e­m­ic research nor the Uni­ver­si­ty of Cam­bridge,” Kogan wrote. “We clear­ly stat­ed that the users were grant­i­ng us the right to use the data in broad scope, includ­ing sell­ing and licens­ing the data. These changes were all made on the Face­book app plat­form and thus they had full abil­i­ty to review the nature of the app and raise issues. Face­book at no point raised any con­cerns at all about any of these changes.”

Kogan is not alone in crit­i­cis­ing Facebook’s appar­ent efforts to place the blame on him.

“In my view, it’s Face­book that did most of the shar­ing,” said Albright, who ques­tioned why Face­book cre­at­ed a sys­tem for third par­ties to access so much per­son­al infor­ma­tion in the first place. That sys­tem “was designed to share their users’ data in mean­ing­ful ways in exchange for stock val­ue”, he added.
...

Now, it’s worth noting that this casual acceptance of the commercial use of the data collected through these Facebook apps, and of the potential licensing and reselling of that data, is actually a far more serious situation than the one Sandy Parakilas described during his time at Facebook. Recall that, according to Parakilas, all app developers had to tell Facebook was that they were going to use the profile data on app users and their friends to ‘improve the user experience.’ Commercial apps were fine from Facebook’s perspective. But Parakilas didn’t describe a situation where app developers openly made it clear they might license or resell the data. So Kogan’s claim that it was clear his app had commercial applications and might involve reselling the data is even more egregious than the situation Parakilas described. But don’t forget that Parakilas left Facebook in late 2012 and Kogan’s app would have been approved in 2014, so it’s entirely possible Facebook’s policies got even more egregious after Parakilas left.

And it’s worth not­ing how Kogan’s claims dif­fer from Christo­pher Wylie’s. Wylie asserts that Face­book grew alarmed by the vol­ume of data GSR’s app was pulling from Face­book users and Kogan assured them it was for research pur­pos­es. Where­as Kogan says Face­book nev­er expressed any alarm at all:

...
Whistle­blow­er Christo­pher Wylie told the Observ­er that Face­book was aware of the vol­ume of data being pulled by Kogan’s app. “Their secu­ri­ty pro­to­cols were trig­gered because Kogan’s apps were pulling this enor­mous amount of data, but appar­ent­ly Kogan told them it was for aca­d­e­m­ic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cam­bridge email, Kogan char­ac­terised this claim as a “fab­ri­ca­tion”, writ­ing: “There was no exchange with Face­book about it, and ... we nev­er claimed dur­ing the project that it was for aca­d­e­m­ic research. In fact, we did our absolute best not to have the project have any entan­gle­ments with the uni­ver­si­ty.”
...

So as we can see, when it comes to Face­book’s “friends per­mis­sions” data shar­ing pol­i­cy, its arrange­ment with Alek­san­dr Kogan was prob­a­bly one of the more respon­si­ble ones it engaged in because, hey, at least Kogan’s work was osten­si­bly for research pur­pos­es and involved at least some anonymized data.

Cam­bridge Ana­lyt­i­ca’s Infor­mal Friend: Palan­tir

And as we can also see, the more we learn about this situation, the harder it gets to dismiss Kogan’s claims that Facebook is making him a scapegoat in order to cover up not just the relationship Facebook had with Kogan but the fact that what Kogan was doing was routine for app developers for years.

But as the fol­low­ing New York Times arti­cle makes clear, Face­book’s rela­tion­ship with Alek­san­dr Kogan isn’t the only work­ing rela­tion­ship Face­book needs to wor­ry about that might lead back to Cam­bridge Ana­lyt­i­ca. Because it turns out there’s anoth­er Face­book con­nec­tion to Cam­bridge Ana­lyt­i­ca and it’s poten­tial­ly far, far more scan­dalous than Face­book’s rela­tion­ship with Kogan: It turns out Palan­tir might be the orig­i­na­tor of the idea to cre­ate Kogan’s app for the pur­pose of col­lect­ing psy­cho­log­i­cal pro­files. That’s right, accord­ing to doc­u­ments the New York Times has seen, Palan­tir, the pri­vate intel­li­gence firm with a close rela­tion­ship with the US nation­al secu­ri­ty state, was in talks with Cam­bridge Ana­lyt­i­ca from 2013–2014 about psy­cho­log­i­cal­ly pro­fil­ing vot­ers and it was an employ­ee of Palan­tir who raised the idea of cre­at­ing that app in the first place.

And this is of course wildly scandalous if true, because Palantir was founded by Facebook board member Peter Thiel, who also happens to be a far right political activist and a close ally of President Trump.

But it gets worse. And weirder. Because it sounds like one of the people encouraging SCL (Cambridge Analytica’s parent company) to work with Palantir was none other than Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt.

Keep in mind that this isn’t the first time we’ve heard about Palantir’s ties to Cambridge Analytica and Sophie Schmidt’s role in this. It was reported by the Observer last May. According to that May 2017 article in the Observer, Schmidt was passing through London in June of 2013 when she decided to call up her former boss at SCL and recommend that they contact Palantir. Also of interest: if you look at the current version of that Observer article, all mention of Sophie Schmidt has been removed and there’s a note that the article is the subject of legal complaints on behalf of Cambridge Analytica LLC and SCL Elections Limited. But in the original article she’s mentioned quite extensively. It would appear that someone is very upset about the Sophie Schmidt angle to this story.

So this Palantir/Sophie Schmidt side of the story isn’t new. But we’re learning a lot more about that relationship now. For instance:

1. In ear­ly 2013, Cam­bridge Ana­lyt­i­ca CEO Alexan­der Nix, an SCL direc­tor at the time, and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns.

2. An SCL employee wrote to a colleague in a June 2013 email that Schmidt was pushing them to work with Palantir: “Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?”

3. According to Christopher Wylie’s testimony to lawmakers, “There were Palantir staff who would come into the office and work on the data ... And we would go and meet with Palantir staff at Palantir.” Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014.

4. The Palantir employee who floated the idea of creating the app ultimately built by Aleksandr Kogan is Alfredas Chmieliauskas, who works on business development for Palantir according to his LinkedIn page.

5. Palantir and Cambridge Analytica never formally started working together. A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. Wylie acknowledges that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge Analytica’s psychographic models. In other words, while there was never a formal relationship, there was a pretty significant informal one.

6. Mr. Chmieliauskas was in communication with Wylie’s team in 2014, during the period when Cambridge Analytica was initially trying to convince the University of Cambridge team to work with it. Recall that Cambridge Analytica initially discovered that the University of Cambridge team had exactly the kind of data it was interested in, collected via a Facebook app, but the negotiations ultimately failed, and it was then that Cambridge Analytica found Aleksandr Kogan, who agreed to create his own app. Well, according to this report, it was Chmieliauskas who initially suggested that Cambridge Analytica create its own version of the University of Cambridge team’s app as leverage in those negotiations. In essence, Chmieliauskas wanted Cambridge Analytica to show the University of Cambridge team that it could collect the information itself, presumably to drive a harder bargain. And when those negotiations failed, Cambridge Analytica did indeed create its own app after teaming up with Kogan.

7. Palan­tir asserts that Chmieli­auskas was act­ing in his own capac­i­ty when he con­tin­ued com­mu­ni­cat­ing with Wylie and made the sug­ges­tion to cre­ate their own app. Palan­tir ini­tial­ly told the New York Times that it had “nev­er had a rela­tion­ship with Cam­bridge Ana­lyt­i­ca, nor have we ever worked on any Cam­bridge Ana­lyt­i­ca data.” Palan­tir lat­er revised this, say­ing that Mr. Chmieli­auskas was not act­ing on the company’s behalf when he advised Mr. Wylie on the Face­book data.

And, again, do not forget that Palantir is owned by Peter Thiel, the far right billionaire who was an early investor in Facebook and remains one of Facebook’s board members to this day. He was also a Trump delegate in 2016 and was in discussions with the Trump administration to lead the powerful President’s Intelligence Advisory Board, although he ultimately turned that offer down. Oh, and he’s an advocate of the Dark Enlightenment.

Basically, Peter Thiel was a member of the ‘Alt Right’ before that term was ever coined. And he’s a very powerful influence at Facebook. So learning that Palantir and Cambridge Analytica were in discussions to work together on election projects in 2013 and 2014, that a Palantir employee was advising Cambridge Analytica during the negotiations with the University of Cambridge team, and that Palantir employees helped engineer Cambridge Analytica’s psychographic models based on Facebook data just might qualify as the most scandalous revelation in this entire mess:

“Spy Contractor’s Idea Helped Cam­bridge Ana­lyt­i­ca Har­vest Face­book Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018

As a start-up called Cam­bridge Ana­lyt­i­ca sought to har­vest the Face­book data of tens of mil­lions of Amer­i­cans in sum­mer 2014, the com­pa­ny received help from at least one employ­ee at Palan­tir Tech­nolo­gies, a top Sil­i­con Val­ley con­trac­tor to Amer­i­can spy agen­cies and the Pen­ta­gon.

It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

Cam­bridge ulti­mate­ly took a sim­i­lar approach. By ear­ly sum­mer, the com­pa­ny found a uni­ver­si­ty researcher to har­vest data using a per­son­al­i­ty ques­tion­naire and Face­book app. The researcher scraped pri­vate data from over 50 mil­lion Face­book users — and Cam­bridge Ana­lyt­i­ca went into busi­ness sell­ing so-called psy­cho­me­t­ric pro­files of Amer­i­can vot­ers, set­ting itself on a col­li­sion course with reg­u­la­tors and law­mak­ers in the Unit­ed States and Britain.

The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

“There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,” said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day.

...

The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book.

The Palan­tir employ­ee, Alfredas Chmieli­auskas, works on busi­ness devel­op­ment for the com­pa­ny, accord­ing to his LinkedIn page. In an ini­tial state­ment, Palan­tir said it had “nev­er had a rela­tion­ship with Cam­bridge Ana­lyt­i­ca, nor have we ever worked on any Cam­bridge Ana­lyt­i­ca data.” Lat­er on Tues­day, Palan­tir revised its account, say­ing that Mr. Chmieli­auskas was not act­ing on the company’s behalf when he advised Mr. Wylie on the Face­book data.

“We learned today that an employ­ee, in 2013–2014, engaged in an entire­ly per­son­al capac­i­ty with peo­ple asso­ci­at­ed with Cam­bridge Ana­lyt­i­ca,” the com­pa­ny said. “We are look­ing into this and will take the appro­pri­ate action.”

The com­pa­ny said it was con­tin­u­ing to inves­ti­gate but knew of no oth­er employ­ees who took part in the effort. Mr. Wylie told law­mak­ers that mul­ti­ple Palan­tir employ­ees played a role.

Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

Ms. Schmidt did not respond to requests for com­ment, nor did a spokesman for Cam­bridge Ana­lyt­i­ca.

In ear­ly 2013, Alexan­der Nix, an SCL direc­tor who became chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns.

A Palan­tir spokes­woman acknowl­edged that the com­pa­nies had briefly con­sid­ered work­ing togeth­er but said that Palan­tir declined a part­ner­ship, in part because exec­u­tives there want­ed to steer clear of elec­tion work. Emails reviewed by The Times indi­cate that Mr. Nix and Mr. Chmieli­auskas sought to revive talks about a for­mal part­ner­ship through ear­ly 2014, but Palan­tir exec­u­tives again declined.

In his tes­ti­mo­ny, Mr. Wylie acknowl­edged that Palan­tir and Cam­bridge Ana­lyt­i­ca nev­er signed a con­tract or entered into a for­mal busi­ness rela­tion­ship. But he said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.

Palan­tir employ­ees were impressed with Cambridge’s back­ing from Mr. Mer­cer, one of the world’s rich­est men, accord­ing to mes­sages viewed by The Times. And Cam­bridge Ana­lyt­i­ca viewed Palantir’s Sil­i­con Val­ley ties as a valu­able resource for launch­ing and expand­ing its own busi­ness.

In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients.

“I had left field idea,” Mr. Chmieli­auskas wrote in May 2014. “What about repli­cat­ing the work of the cam­bridge prof as a mobile app that con­nects to face­book?” Repro­duc­ing the app, Mr. Chmieli­auskas wrote, “could be a valu­able lever­age nego­ti­at­ing with the guy.”

Those nego­ti­a­tions failed. But Mr. Wylie struck gold with anoth­er Cam­bridge researcher, the Russ­ian-Amer­i­can psy­chol­o­gist Alek­san­dr Kogan, who built his own per­son­al­i­ty quiz app for Face­book. Over sub­se­quent months, Dr. Kogan’s work helped Cam­bridge devel­op psy­cho­log­i­cal pro­files of mil­lions of Amer­i­can vot­ers.

———-

“Spy Contractor’s Idea Helped Cam­bridge Ana­lyt­i­ca Har­vest Face­book Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018

“The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

Yep, a Facebook board member’s private intelligence firm was working closely with Cambridge Analytica as it developed its psychological profiling technology. It’s quite a revelation. The kind of explosive revelation that has Palantir first denying that there was any relationship at all, followed by an acknowledgment/denial that, yes, a Palantir employee, Alfredas Chmieliauskas, was indeed working with Cambridge Analytica, but not on behalf of Palantir:

...
It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

...

The Palan­tir employ­ee, Alfredas Chmieli­auskas, works on busi­ness devel­op­ment for the com­pa­ny, accord­ing to his LinkedIn page. In an ini­tial state­ment, Palan­tir said it had “nev­er had a rela­tion­ship with Cam­bridge Ana­lyt­i­ca, nor have we ever worked on any Cam­bridge Ana­lyt­i­ca data.” Lat­er on Tues­day, Palan­tir revised its account, say­ing that Mr. Chmieli­auskas was not act­ing on the company’s behalf when he advised Mr. Wylie on the Face­book data.
...

Adding to the scandalous nature of it all is that Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt, suddenly appeared in June of 2013 to promote a relationship with Palantir to her old bosses at SCL:

...
Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

Ms. Schmidt did not respond to requests for com­ment, nor did a spokesman for Cam­bridge Ana­lyt­i­ca.
...

But this June 2013 pro­pos­al by Sophie Schmidt was­n’t what start­ed Cam­bridge Ana­lyt­i­ca’s rela­tion­ship with Palan­tir. Because that report­ed­ly start­ed in ear­ly 2013, when Alexan­der Nix and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns:

...
In ear­ly 2013, Alexan­der Nix, an SCL direc­tor who became chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns.
...

So Sophie Schmidt swooped in to pro­mote Palan­tir to Cam­bridge Ana­lyt­i­ca months after the nego­ti­a­tions began. It rais­es the ques­tion of who encour­aged her to do that.

Palantir now admits these negotiations happened, but claims that it chose not to work with Cambridge Analytica because its executives “wanted to steer clear of election work.” And emails reviewed by The Times back this up, at least formally: Nix and Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. And yet, according to Christopher Wylie, some Palantir employees helped engineer Cambridge Analytica’s psychographic models. Which suggests Palantir turned down a formal relationship in favor of an informal one:

...
A Palan­tir spokes­woman acknowl­edged that the com­pa­nies had briefly con­sid­ered work­ing togeth­er but said that Palan­tir declined a part­ner­ship, in part because exec­u­tives there want­ed to steer clear of elec­tion work. Emails reviewed by The Times indi­cate that Mr. Nix and Mr. Chmieli­auskas sought to revive talks about a for­mal part­ner­ship through ear­ly 2014, but Palan­tir exec­u­tives again declined.

In his tes­ti­mo­ny, Mr. Wylie acknowl­edged that Palan­tir and Cam­bridge Ana­lyt­i­ca nev­er signed a con­tract or entered into a for­mal busi­ness rela­tion­ship. But he said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.
...

“There were Palan­tir staff who would come into the office and work on the data...And we would go and meet with Palan­tir staff at Palan­tir.”

That sure sounds like a rela­tion­ship! For­mal or not.

And that informal relationship continued during the period in 2014 when Cambridge Analytica was in negotiations with the University of Cambridge Psychometrics Centre:

...
In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients.
...

And it was during those negotiations, in May of 2014, that Chmieliauskas first proposed simply replicating the University of Cambridge Psychometrics Centre’s app as negotiating leverage. When those negotiations ultimately failed, Cambridge Analytica found another Cambridge University psychologist, Aleksandr Kogan, to build the app for them:

...
“I had left field idea,” Mr. Chmieli­auskas wrote in May 2014. “What about repli­cat­ing the work of the cam­bridge prof as a mobile app that con­nects to face­book?” Repro­duc­ing the app, Mr. Chmieli­auskas wrote, “could be a valu­able lever­age nego­ti­at­ing with the guy.”

Those nego­ti­a­tions failed. But Mr. Wylie struck gold with anoth­er Cam­bridge researcher, the Russ­ian-Amer­i­can psy­chol­o­gist Alek­san­dr Kogan, who built his own per­son­al­i­ty quiz app for Face­book. Over sub­se­quent months, Dr. Kogan’s work helped Cam­bridge devel­op psy­cho­log­i­cal pro­files of mil­lions of Amer­i­can vot­ers.
...

And that’s what we know so far about the relationship between Cambridge Analytica and Palantir. Which raises a number of questions. Like whether or not this informal relationship continued well after Cambridge Analytica started harvesting all that Facebook information. Let’s look at eight key facts and open questions about Palantir’s involvement so far:

1. Palantir employees helped build the psychographic profiles. (For a concrete sense of what that kind of profile-building involves, see the sketch following these questions.)

2. Mr. Chmieli­auskas was in con­tact with Wylie at least as late as May of 2014 as Cam­bridge Ana­lyt­i­ca was nego­ti­at­ing with the Uni­ver­si­ty of Cam­bridge’s Psy­cho­met­rics Cen­tre.

3. We don’t know when this infor­mal rela­tion­ship between Palan­tir and Cam­bridge Ana­lyt­i­ca end­ed.

4. We don’t know if the informal relationship between Palantir and Cambridge Analytica — which largely appears to center around Mr. Chmieliauskas — really was Chmieliauskas’s initiative alone after Palantir initially rejected a formal relationship (it’s possible), or if Chmieliauskas was directed to pursue the relationship informally on behalf of Palantir in order to maintain deniability in awkward situations like the present one (also very possible, and savvy given the current situation).

5. We don’t know whether the Palantir employees who helped build those psychographic profiles were working with the data Cambridge Analytica harvested from Facebook or with the earlier, inadequate data sets that didn’t include the Facebook data. If they built the profiles from the Facebook data, this informal relationship went on a lot longer than May of 2014, since that’s when Kogan’s app first started collecting it. How long? We don’t yet know.

6. Nei­ther do we know how much of this data ulti­mate­ly fell into the hands of Palan­tir. As Wylie described it, “There were Palan­tir staff who would come into the office and work on the data...And we would go and meet with Palan­tir staff at Palan­tir.” So did those Palan­tir employ­ees who were work­ing on “the data” take any of that data back to Palan­tir?

7. For that matter, given that Peter Thiel sits on the board of Facebook, and given how freely Facebook hands out this kind of data, we have to ask whether Palantir already has direct access to exactly the kind of data Cambridge Analytica was harvesting. Did Palantir even need Cambridge Analytica’s data? Perhaps Palantir was already using apps of its own to harvest this kind of data? We don’t know. At the same time, don’t forget that even if Palantir had ready access to the same Facebook profile data gathered by Kogan’s app, it’s still possible Palantir would have had an interest in the company purely to see how the data was analyzed and to learn from that. In other words, for Peter Thiel’s Palantir the interest in Cambridge Analytica may have been more about the algorithms than the data. Don’t forget that if anyone is the real power behind the throne at Facebook, it’s probably Thiel.

8. What on earth is going on with Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt, pushing Cambridge Analytica to work with Palantir in June of 2013, months after Cambridge Analytica and Palantir began talking with each other? That seems potentially significant.

Those are just some of the questions raised by Palantir’s ambiguously ominous relationship with Cambridge Analytica. But don’t forget that Palantir isn’t the only entity about which we need to ask these kinds of questions. For instance, what about Steve Bannon’s Breitbart? Does Breitbart, home of the neo-Nazi ‘Alt Right’, also have access to all that harvested Cambridge Analytica data? Not just the raw Facebook data but also the processed psychological profile data on 50 million Americans that Cambridge Analytica generated. Does Breitbart have the processed profiles too? And what about the Republican Party? And all the other entities out there that gained access to this Facebook profile data? Just how many different entities around the globe possess that Cambridge Analytica data set?
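Since “psychographic profiles” can sound like marketing voodoo, it’s worth being concrete about what building one involves. Here is a toy sketch in the spirit of the published Cambridge-style research on Facebook Likes: compress a big user-by-Likes matrix into latent factors, fit a model from those factors to surveyed personality traits, then score everyone else. Random placeholder data throughout; this shows the shape of the method, not Cambridge Analytica’s actual pipeline:

```python
# Toy sketch of Likes-based psychographic modeling: factor a sparse
# user-by-Like matrix, then fit a linear model from the factors to a
# personality trait. Random placeholder data; illustrative only.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_users, n_likes = 5_000, 2_000
likes = (rng.random((n_users, n_likes)) < 0.02).astype(float)  # sparse 0/1 Likes
trait = rng.normal(size=n_users)  # stand-in for a surveyed trait, e.g. openness

# 1. Compress the Like matrix into a smaller set of latent dimensions.
svd = TruncatedSVD(n_components=100, random_state=0)
features = svd.fit_transform(likes)

# 2. Learn trait scores from the latent features of surveyed users...
model = Ridge(alpha=1.0).fit(features[:4_000], trait[:4_000])

# 3. ...then score everyone else, survey or no survey.
predicted = model.predict(features[4_000:])
print(predicted[:5])
```

The leverage is in step 3: the model is trained on the minority who actually took a personality quiz, but it can then score every profile whose Likes were harvested, quiz or no quiz. That’s why the tens of millions of scraped friend profiles mattered so much.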

It’s Not Just Cam­bridge Ana­lyt­i­ca. Or Face­book. Or Google. It’s Soci­ety.

Of course, as we saw with Sandy Parakilas’s whistle-blower claims, when it comes to the question of who might possess Facebook profile data harvested during the 2007–2014 period when Facebook had its “friends permissions” policy, the list of suspects includes potentially hundreds of thousands of developers, plus anyone who has purchased this information on the black market.
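To make the mechanics concrete, here is a minimal sketch of what “friends permissions” scraping looked like from the developer side. This is a hypothetical illustration written in the style of the old Graph API v1.0, with a placeholder access token and a simplified field list; it is not any particular firm’s actual code:

```python
# Toy illustration of the pre-2015 "friends permissions" loophole,
# Graph API v1.0 style. Hypothetical token and simplified fields;
# a sketch of the mechanism, not anyone's actual harvesting code.
import requests

GRAPH = "https://graph.facebook.com/v1.0"

def harvest_friend_profiles(user_token):
    """Given ONE consenting app user's token, pull profile fields
    for ALL of that user's friends, none of whom ever opted in.

    An app that requested friends_* scopes (friends_likes,
    friends_location, etc.) could read those fields off the
    consenting user's friends edge.
    """
    profiles = []
    url = f"{GRAPH}/me/friends"
    params = {
        "fields": "id,name,birthday,location,likes",
        "access_token": user_token,
    }
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        payload = resp.json()
        profiles.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")  # follow pagination
        params = None  # the "next" URL already carries the parameters
    return profiles

friends = harvest_friend_profiles("HYPOTHETICAL_USER_TOKEN")
print(f"scraped {len(friends)} friend profiles from one consenting user")
```

The asymmetry is the whole point: one consent dialog fans out into hundreds of non-consenting friend profiles, which is how a couple hundred thousand quiz-takers could yield tens of millions of harvested accounts.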

Don’t forget one of the other amazing aspects of this whole situation: if hundreds of thousands of developers were using this feature to scrape user profiles, then this really was an open secret. Lots and lots of people were doing this. For years. So, like many scandals, perhaps the most scandalous part of it is that we’re learning about something we should have known all along, and that many of us did know all along. It’s not like it’s a secret that people are being surveilled in detail in the internet age and that this data is being stored and aggregated in public and private databases and put up for sale. We’ve collectively known this all along. At least on some level.

And yet this surveillance is so pervasive that it’s almost never thought about on a moment-by-moment basis at an individual level. When people browse the web they presumably aren’t thinking about the volume of tracking cookies and other personal information slurped up as a result of each mouse click. Nor are they thinking about how that click feeds the numerous personal profiles of them floating around the commercial data brokerage marketplace. So in a more fundamental sense we don’t actually know we’re being surveilled, because we’re not thinking about it.

It’s one example of how humans aren’t wired to naturally think about the macro forces impacting their lives in day-to-day decisions, which was fine when we were cavemen but becomes a problematic instinct when we’re literally mastering the laws of physics and shaping our world and environment. From physics and nature to history and contemporary trends, the vast majority of humanity spends very little time studying these topics. Which is completely understandable given the lack of time or resources to do so. But that understandable instinct creates a world perfectly set up for abuse by surveillance states, both public and private, which makes it less understandable and much more problematic.

So, in the interest of gaining perspective on how we got to this point, where Facebook emerged as an ever-growing Panopticon just a few short years after its conception, let’s take a look at one last article. It’s an article by investigative journalist Yasha Levine, who recently published the must-read book Surveillance Valley: The Secret Military History of the Internet. It’s a book filled with vital historical fun facts about the internet. Fun facts like...

1. How the internet began as a system built for national security purposes, with a focus on military hardware and command-and-control communications in general. But there was also a focus on building a system that could collect, store, process, and distribute the massive volumes of information used to wage the Vietnam war. Beyond that, these early computer networks also acted as a collection and sharing system for dealing with domestic national security concerns (concerns that centered around tracking anti-war protesters, civil rights activists, etc.). That’s what the internet started out as: a system for storing data about people and conflict for US national security purposes.

2. Building databases of profiles on people (foreign and domestic) was one of the very first goals of these internet predecessors. In fact, one of the key visionaries behind the development of the internet, Ithiel de Sola Pool, both helped shape the development of the early internet as a surveillance and counterinsurgency technology and pioneered data-driven election campaigns. He even started a private firm to do this: Simulmatics. (A toy sketch of the kind of voter-simulation workflow Simulmatics pioneered appears after this list.) Pool’s vision was a world where the surveillance state acted as a benign master that kept the peace peacefully by using superior knowledge to nudge people in the ‘right’ direction.

3. This vision of vast databases of personal profiles was largely a secret at first, but it didn’t remain that way. And there was actually quite a bit of public paranoia in the US about these internet predecessors, especially within the anti-Vietnam-war activist communities. Flash forward a couple of decades and that paranoia has faded almost entirely...until scandals like the current one erupt and we temporarily grow concerned.

4. What Cambridge Analytica is accused of doing is what data giants like Facebook and Google do every day and have been doing for years. And it’s not just the giants. Smaller firms are scooping up vast amounts of information too...it’s just not as vast as what the giants are collecting. Even cute apps, like the wildly popular Angry Birds, have been found to collect all sorts of data about users.

5. While it’s great that public attention is being directed at the kind of sleazy, manipulative activities Cambridge Analytica was engaging in, deceptively wielding real power over real unwitting people, it is a wild mischaracterization to act like Cambridge Analytica was exerting mass mind-control over the masses using internet marketing voodoo. What Cambridge Analytica, or any of the other sleazy manipulators, were doing was indeed influential, but it needs to be viewed in the context of a political state of affairs where massive numbers of Americans, including Trump voters, really have been collectively failed by the American power establishment for decades. The collapse of the American middle class and the rise of the plutocracy created the kind of macro environment where a carnival barker like Donald Trump could use firms like Cambridge Analytica to ‘nudge’ people in the direction of voting for him. In other words, focusing on Cambridge Analytica’s manipulation of people’s psychological profiles without recognizing the massive political failures of the last several decades in America — the mass socioeconomic failures of the American embrace of ‘Reaganomics’ and right-wing economic gospel, coupled with the American Left’s failure to effectively repudiate these doctrines — is profoundly ahistorical. The story of the rise of the power of firms like Facebook, Google, and Cambridge Analytica implicitly includes the story of that entire history of political and socioeconomic failures, the failure to effectively respond to the rise of the American right-wing over the last several decades. And we are making a massive mistake if we forget that. Cambridge Analytica wouldn’t have been nearly as effective in nudging people towards voting for someone like Trump if so many people weren’t already so ready to burn the current system down.
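As a companion to point 2 above, here is a toy version of the Simulmatics-style workflow described in the article below: reduce each voter to a vector of coded data points, model the probability that a given message wins them over, and compare candidate messages against the simulated electorate. The attributes, weights, and numbers here are all invented for illustration:

```python
# Toy Simulmatics-style message simulation: voters as vectors of
# coded data points, a simple logistic response model, and a
# comparison of candidate messages. All values invented.
import numpy as np

rng = np.random.default_rng(1)

# Each voter reduced to a handful of coded attributes
# (Pool's real scheme reportedly used 480 such data points).
n_voters = 100_000
voters = rng.random((n_voters, 4))  # e.g. religiosity, income, age, urbanness

# Hypothetical per-message weights over those attributes.
messages = {
    "economy": np.array([-0.2, 1.1, 0.3, 0.4]),
    "security": np.array([0.8, 0.1, 0.9, -0.5]),
}

def simulated_support(weights, bias=-1.0):
    """Probability that each simulated voter responds to the message."""
    score = voters @ weights + bias
    return 1.0 / (1.0 + np.exp(-score))

# Pick whichever message maximizes expected votes.
for name, w in messages.items():
    print(f"{name}: expected votes {simulated_support(w).sum():,.0f}")
```

Crude as it is, this is the basic fine-tune-the-message loop Pool pioneered, and the direct ancestor of what firms like Cambridge Analytica sell today.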

These are the kinds of his­tor­i­cal chap­ters that can’t be left out of any analy­sis of Cam­bridge Ana­lyt­i­ca. Because Cam­bridge Ana­lyt­i­ca isn’t the excep­tion. It’s an excep­tion­al­ly sleazy exam­ple of the rules we’ve been play­ing by for a while, whether we real­ized it or not:

“The Cambridge Analytica Con” by Yasha Levine; The Baffler; 03/21/2018

“The man with the prop­er imag­i­na­tion is able to con­ceive of any com­mod­i­ty in such a way that it becomes an object of emo­tion to him and to those to whom he imparts his pic­ture, and hence cre­ates desire rather than a mere feel­ing of ought.”

Wal­ter Dill Scott, Influ­enc­ing Men in Busi­ness: Psy­chol­o­gy of Argu­ment and Sug­ges­tion (1911)

This week, Cam­bridge Ana­lyt­i­ca, the British elec­tion data out­fit fund­ed by bil­lion­aire Robert Mer­cer and linked to Steven Ban­non and Pres­i­dent Don­ald Trump, blew up the news cycle. The charge, as report­ed by twin exposés in the New York Times and the Guardian, is that the firm inap­pro­pri­ate­ly accessed Face­book pro­file infor­ma­tion belong­ing to 50 mil­lion peo­ple and then used that data to con­struct a pow­er­ful inter­net-based psy­cho­log­i­cal influ­ence weapon. This new­fan­gled con­struct was then used to brain­wash-car­pet-bomb the Amer­i­can elec­torate, shred­ding our democ­ra­cy and turn­ing peo­ple into pli­able zom­bie sup­port­ers of Don­ald Trump.

In the words of a pink-haired Cam­bridge Ana­lyt­i­ca data-war­rior-turned-whistle­blow­er, the com­pa­ny served as a dig­i­tal armory that turned “Likes” into weapons and pro­duced “Steve Bannon’s psy­cho­log­i­cal war­fare mind­fuck tool.”

Scary, right? Makes me won­der if I’m still not under Cam­bridge Analytica’s influ­ence right now.

Nat­u­ral­ly, there are also rumors of a nefar­i­ous Russ­ian con­nec­tion. And appar­ent­ly there’s more dirt com­ing. Chan­nel 4 News in Britain just pub­lished an inves­ti­ga­tion show­ing top Cam­bridge Ana­lyt­i­ca execs brag­ging to an under­cov­er reporter that their team uses high-tech psy­cho­me­t­ric voodoo to win elec­tions for clients all over the world, but also dab­bles in tra­di­tion­al meat­space tech­niques as well: bribes, kom­pro­mat, black­mail, Ukrain­ian escort honeypots—you know, the works.

It’s good that the main­stream news media are final­ly start­ing to pay atten­tion to this dark cor­ner of the inter­net —and pro­duc­ing exposés of shady sub rosa polit­i­cal cam­paigns and their eager exploita­tion of our online dig­i­tal trails in order to con­t­a­m­i­nate our infor­ma­tion streams and influ­ence our deci­sions. It’s about time.

But this sto­ry is being cov­ered and framed in a mis­lead­ing way. So far, much of the main­stream cov­er­age, dri­ven by the Times and Guardian reports, looks at Cam­bridge Ana­lyt­i­ca in isolation—almost entire­ly out­side of any his­tor­i­cal or polit­i­cal con­text. This makes it seem to read­ers unfa­mil­iar with the long his­to­ry of the strug­gle for con­trol of the dig­i­tal sphere as if the main prob­lem is that the bad actors at Cam­bridge Ana­lyt­i­ca crossed the trans­mis­sion wires of Face­book in the Promethean man­ner of Vic­tor Frankenstein—taking what were nor­mal­ly respectable, sci­en­tif­ic data pro­to­cols and per­vert­ing them to serve the dia­bol­i­cal aim of rean­i­mat­ing the decom­pos­ing lump of polit­i­cal flesh known as Don­ald Trump.

So if we’re going to view the actions of Cam­bridge Ana­lyt­i­ca in their prop­er light, we need first to start with an admis­sion. We must con­cede that covert influ­ence is not some­thing unusu­al or for­eign to our soci­ety, but is as Amer­i­can as apple pie and free­dom fries. The use of manip­u­la­tive, psy­cho­log­i­cal­ly dri­ven adver­tis­ing and mar­ket­ing tech­niques to sell us prod­ucts, lifestyles, and ideas has been the foun­da­tion of mod­ern Amer­i­can soci­ety, going back to the days of the self-styled inven­tor of pub­lic rela­tions, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ail­ing con­sumer soci­ety togeth­er. And when it comes to mar­ket­ing can­di­dates and polit­i­cal mes­sages, using data to influ­ence peo­ple and shape their deci­sions has been the holy grail of the com­put­er age, going back half a cen­tu­ry.

Let’s start with the basics: What Cam­bridge Ana­lyt­i­ca is accused of doing—siphoning people’s data, com­pil­ing pro­files, and then deploy­ing that infor­ma­tion to influ­ence them to vote a cer­tain way—Facebook and Sil­i­con Val­ley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more inva­sive scale.

Today’s inter­net busi­ness ecosys­tem is built on for-prof­it sur­veil­lance, behav­ioral pro­fil­ing, manip­u­la­tion and influ­ence. That’s the name of the game. It isn’t just Face­book or Cam­bridge Ana­lyt­i­ca or even Google. It’s Ama­zon. It’s eBay. It’s Palan­tir. It’s Angry Birds. It’s MoviePass. It’s Lock­heed Mar­tin. It’s every app you’ve ever down­loaded. Every phone you bought. Every pro­gram you watched on your on-demand cable TV pack­age.

All of these games, apps, and plat­forms prof­it from the con­cert­ed siphon­ing up of all data trails to pro­duce pro­files for all sorts of micro-tar­get­ed influ­ence ops in the pri­vate sec­tor. This com­merce in user data per­mit­ted Face­book to earn $40 bil­lion last year, while Google raked in $110 bil­lion.

What do these com­pa­nies know about us, their users? Well, just about every­thing.

Sil­i­con Val­ley of course keeps a tight lid on this infor­ma­tion, but you can get a glimpse of the kinds of data our pri­vate dig­i­tal dossiers con­tain by trawl­ing through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-tar­get­ed adver­tis­ing tech­nol­o­gy. The lan­guage, stripped of opaque tech jar­gon, revealed that just about every­thing we enter into Google’s many prod­ucts and platforms—from email cor­re­spon­dence to Web search­es and inter­net browsing—is ana­lyzed and used to pro­file users in an extreme­ly inva­sive and per­son­al way. Email cor­re­spon­dence is parsed for mean­ing and sub­ject mat­ter. Names are matched to real iden­ti­ties and address­es. Email attachments—say, bank state­ments or test­ing results from a med­ical lab—are scraped for infor­ma­tion. Demo­graph­ic and psy­cho­graph­ic data, includ­ing social class, per­son­al­i­ty type, age, sex, polit­i­cal affil­i­a­tion, cul­tur­al inter­ests, social ties, per­son­al income, and mar­i­tal sta­tus is extract­ed. In one patent, I dis­cov­ered that Google appar­ent­ly had the abil­i­ty to deter­mine if a per­son was a legal U.S. res­i­dent or not. It also turned out you didn’t have to be a reg­is­tered Google user to be snared in this pro­fil­ing appa­ra­tus. All you had to do was com­mu­ni­cate with some­one who had a Gmail address.

On the whole, Google’s pro­fil­ing phi­los­o­phy was no dif­fer­ent than Facebook’s, which also con­structs “shad­ow pro­files” to col­lect and mon­e­tize data, even if you nev­er had a reg­is­tered Face­book or Gmail account.

It’s not just the big plat­form monop­o­lies that do this, but all the small­er com­pa­nies that run their busi­ness­es on ser­vices oper­at­ed by Google and Face­book. It even includes cute games like Angry Birds, devel­oped by Finland’s Rovio Enter­tain­ment, that’s been down­loaded more than a bil­lion times. The Android ver­sion of Angry Birds was found to pull per­son­al data on its play­ers, includ­ing eth­nic­i­ty, mar­i­tal sta­tus, and sex­u­al orientation—including options for the “sin­gle,” “mar­ried,” “divorced,” “engaged,” and “swinger” cat­e­gories. Pulling per­son­al data like this didn’t con­tra­dict Google’s terms of ser­vices for its Android plat­form. Indeed, for-prof­it sur­veil­lance was the whole point of why Google start­ed plan­ning to launch an iPhone rival as far back as 2004.

In launch­ing Android, Google made a gam­ble that by releas­ing its pro­pri­etary oper­at­ing sys­tem to man­u­fac­tur­ers free of charge, it wouldn’t be rel­e­gat­ed to run­ning apps on Apple iPhone or Microsoft Mobile Win­dows like some kind of dig­i­tal sec­ond-class cit­i­zen. If it played its cards right and Android suc­ceed­ed, Google would be able to con­trol the envi­ron­ment that under­pins the entire mobile expe­ri­ence, mak­ing it the ulti­mate gate­keep­er of the many mon­e­tized inter­ac­tions among users, apps, and adver­tis­ers. And that’s exact­ly what hap­pened. Today, Google monop­o­lizes the smart phone mar­ket and dom­i­nates the mobile for-prof­it sur­veil­lance busi­ness.

These detailed psy­cho­log­i­cal pro­files, togeth­er with the direct access to users that plat­forms like Google and Face­book deliv­er, make both com­pa­nies cat­nip to adver­tis­ers, PR flacks—and dark-mon­ey polit­i­cal out­fits like Cam­bridge Ana­lyt­i­ca.

Indeed, polit­i­cal cam­paigns showed an ear­ly and pro­nounced affin­i­ty for the idea of tar­get­ed access and influ­ence on plat­forms like Face­book. Instead of blan­ket­ing air­waves with a sin­gle polit­i­cal ad, they could show peo­ple ads that appealed specif­i­cal­ly to the issues they held dear. They could also ensure that any such mes­sage spread through a tar­get­ed person’s larg­er social net­work through repost­ing and shar­ing.

The enor­mous com­mer­cial inter­est that polit­i­cal cam­paigns have shown in social media has earned them priv­i­leged atten­tion from Sil­i­con Val­ley plat­forms in return. Face­book runs a sep­a­rate polit­i­cal divi­sion specif­i­cal­ly geared to help its cus­tomers tar­get and influ­ence vot­ers.

The com­pa­ny even allows polit­i­cal cam­paigns to upload their own lists of poten­tial vot­ers and sup­port­ers direct­ly into Facebook’s data sys­tem. So armed, dig­i­tal polit­i­cal oper­a­tives can then use those people’s social net­works to iden­ti­fy oth­er prospec­tive vot­ers who might be sup­port­ive of their candidate—and then tar­get them with a whole new tidal wave of ads. “There’s a lev­el of pre­ci­sion that doesn’t exist in any oth­er medi­um,” Crys­tal Pat­ter­son, a Face­book employ­ee who works with gov­ern­ment and pol­i­tics cus­tomers, told the New York Times back in 2015. “It’s get­ting the right mes­sage to the right peo­ple at the right time.”

Nat­u­ral­ly, a whole slew of com­pa­nies and oper­a­tives in our increas­ing­ly data-dri­ven elec­tion scene have cropped up over the last decade to plug in to these amaz­ing influ­ence machines. There is a whole con­stel­la­tion of them work­ing all sorts of strate­gies: tra­di­tion­al vot­er tar­get­ing, polit­i­cal pro­pa­gan­da mills, troll armies, and bots.

Some of these firms are polit­i­cal­ly agnos­tic; they’ll work for any­one with cash. Oth­ers are par­ti­san. The Demo­c­ra­t­ic Par­ty Data Death Star is NGP VAN. The Repub­li­cans have a few of their own—including i360, a data mon­ster gen­er­ous­ly fund­ed by Charles Koch. Nat­u­ral­ly, i360 part­ners with Face­book to deliv­er tar­get vot­ers. It also claims to have 700 per­son­al data points cross-tab­u­lat­ed on 199 mil­lion vot­ers and near­ly 300 mil­lion con­sumers, with the abil­i­ty to pro­file and tar­get them with pin-point accu­ra­cy based on their beliefs and views.

Here’s how The Nation­al Jour­nal’s Andrew Rice described i360 in 2015:

Like Google, the Nation­al Secu­ri­ty Agency, or the Demo­c­ra­t­ic data machine, i360 has a vora­cious appetite for per­son­al infor­ma­tion. It is con­stant­ly ingest­ing new data into its tar­get­ing sys­tems, which pre­dict not only par­ti­san iden­ti­fi­ca­tion but also sen­ti­ments about issues such as abor­tion, tax­es, and health care. When I vis­it­ed the i360 office, an employ­ee gave me a demon­stra­tion, zoom­ing in on a map to focus on a par­tic­u­lar 66-year-old high school teacher who lives in an apart­ment com­plex in Alexan­dria, Vir­ginia. . . . Though the adver­tis­ing indus­try typ­i­cal­ly eschews address­ing any sin­gle individual—it’s not just inva­sive, it’s also inefficient—it is becom­ing com­mon­place to tar­get extreme­ly nar­row audi­ences. So the school­teacher, along with a few look-alikes, might see a tai­lored ad the next time she clicks on YouTube.

Sil­i­con Val­ley doesn’t just offer cam­paigns a neu­tral plat­form; it also works close­ly along­side polit­i­cal can­di­dates to the point that the biggest inter­net com­pa­nies have become an exten­sion of the Amer­i­can polit­i­cal sys­tem. As one recent study showed, tech com­pa­nies rou­tine­ly embed their employ­ees inside major polit­i­cal cam­paigns: “Face­book, Twit­ter, and Google go beyond pro­mot­ing their ser­vices and facil­i­tat­ing dig­i­tal adver­tis­ing buys, active­ly shap­ing cam­paign com­mu­ni­ca­tion through their close col­lab­o­ra­tion with polit­i­cal staffers . . . these firms serve as qua­si-dig­i­tal con­sul­tants to cam­paigns, shap­ing dig­i­tal strat­e­gy, con­tent, and exe­cu­tion.”

In 2008, the hip young Black­ber­ry-tot­ing Barack Oba­ma was the first major-par­ty can­di­date on the nation­al scene to tru­ly lever­age the pow­er of inter­net-tar­get­ed agit­prop. With help from Face­book cofounder Chris Hugh­es, who built and ran Obama’s inter­net cam­paign divi­sion, the first Oba­ma cam­paign built an inno­v­a­tive micro-tar­get­ing ini­tia­tive to raise huge amounts of mon­ey in small chunks direct­ly from Obama’s sup­port­ers and sell his mes­sage with a hith­er­to unprece­dent­ed laser-guid­ed pre­ci­sion in the gen­er­al elec­tion cam­paign.

...

Now, of course, every elec­tion is a Face­book Elec­tion. And why not? As Bloomberg News has not­ed, Sil­i­con Val­ley ranks elec­tions “along­side the Super Bowl and the Olympics in terms of events that draw block­buster ad dol­lars and boost engage­ment.” In 2016, $1 bil­lion was spent on dig­i­tal advertising—with the bulk going to Face­book, Twit­ter, and Google.

What’s inter­est­ing here is that because so much mon­ey is at stake, there are absolute­ly no rules that would restrict any­thing an unsa­vory polit­i­cal appa­ratchik or a Sil­i­con Val­ley oli­garch might want to foist on the unsus­pect­ing dig­i­tal pub­lic. Creep­i­ly, Facebook’s own inter­nal research divi­sion car­ried out exper­i­ments show­ing that the plat­form could influ­ence people’s emo­tion­al state in con­nec­tion to a cer­tain top­ic or event. Com­pa­ny engi­neers call this fea­ture “emo­tion­al con­ta­gion”—i.e., the abil­i­ty to viral­ly influ­ence people’s emo­tions and ideas just through the con­tent of sta­tus updates. In the twist­ed econ­o­my of emo­tion­al con­ta­gion, a neg­a­tive post by a user sup­press­es pos­i­tive posts by their friends, while a pos­i­tive post sup­press­es neg­a­tive posts. “When a Face­book user posts, the words they choose influ­ence the words cho­sen lat­er by their friends,” explained the company’s lead sci­en­tist on this study.

On a very basic lev­el, Facebook’s opaque con­trol of its feed algo­rithm means the plat­form has real pow­er over people’s ideas and actions dur­ing an elec­tion. This can be done by a data shift as sim­ple and sub­tle as imper­cep­ti­bly tweak­ing a person’s feed to show more posts from friends who are, say, sup­port­ers of a par­tic­u­lar polit­i­cal can­di­date or a spe­cif­ic polit­i­cal idea or event. As far as I know, there is no law pre­vent­ing Face­book from doing just that: it’s plain­ly able and will­ing to influ­ence a user’s feed based on polit­i­cal aims—whether done for inter­nal cor­po­rate objec­tives, or due to pay­ments from polit­i­cal groups, or by the per­son­al pref­er­ences of Mark Zucker­berg.

So our present-day freak­out over Cam­bridge Ana­lyt­i­ca needs to be put in the broad­er his­tor­i­cal con­text of our decades-long com­pla­cen­cy over Sil­i­con Valley’s busi­ness mod­el. The fact is that com­pa­nies like Face­book and Google are the real mali­cious actors here—they are vital pub­lic com­mu­ni­ca­tions sys­tems that run on pro­fil­ing and manip­u­la­tion for pri­vate prof­it with­out any reg­u­la­tion or demo­c­ra­t­ic over­sight from the soci­ety in which it oper­ates. But, hey, let’s blame Cam­bridge Ana­lyt­i­ca. Or bet­ter yet, take a cue from the Times and blame the Rus­sians along with Cam­bridge Ana­lyt­i­ca.

***

There’s anoth­er, big­ger cul­tur­al issue with the way we’ve begun to exam­ine and dis­cuss Cam­bridge Analytica’s bat­tery of inter­net-based influ­ence ops. Peo­ple are still daz­zled by the idea that the inter­net, in its pure, untaint­ed form, is some kind of mag­ic machine dis­trib­ut­ing democ­ra­cy and egal­i­tar­i­an­ism across the globe with the touch of a few key­strokes. This is the gospel preached by a stal­wart cho­rus of Net prophets, from Jeff Jarvis and the late John Per­ry Bar­low to Clay Shirky and Kevin Kel­ly. These char­la­tans all feed on an hon­or­able demo­c­ra­t­ic impulse: peo­ple still want to des­per­ate­ly believe in the utopi­an promise of this technology—its abil­i­ty to equal­ize pow­er, end cor­rup­tion, top­ple cor­po­rate media monop­o­lies, and empow­er the indi­vid­ual.

This mythology—which is of course aggres­sive­ly con­fect­ed for mass con­sump­tion by Sil­i­con Val­ley mar­ket­ing and PR outfits—is deeply root­ed in our cul­ture; it helps explain why oth­er­wise seri­ous jour­nal­ists work­ing for main­stream news out­lets can uniron­i­cal­ly employ phras­es such as “infor­ma­tion wants to be free” and “Facebook’s engine of democ­ra­cy” and get away with it.

The truth is that the inter­net has nev­er been about egal­i­tar­i­an­ism or democ­ra­cy.

The ear­ly inter­net came out of a series of Viet­nam War coun­terin­sur­gency projects aimed at devel­op­ing com­put­er tech­nol­o­gy that would give the gov­ern­ment a way to man­age a com­plex series of glob­al com­mit­ments and to mon­i­tor and pre­vent polit­i­cal strife—both at home and abroad. The inter­net, going back to its first incar­na­tion as the ARPANET mil­i­tary net­work, was always about sur­veil­lance, pro­fil­ing, and tar­get­ing.

The influ­ence of U.S. coun­terin­sur­gency doc­trine on the devel­op­ment of mod­ern com­put­ers and the inter­net is not some­thing that many peo­ple know about. But it is a sub­ject that I explore at length in my book, Sur­veil­lance Val­ley. So what jumps out at me is how seam­less­ly the report­ed activ­i­ties of Cam­bridge Ana­lyt­i­ca fit into this his­tor­i­cal nar­ra­tive.

Cam­bridge Ana­lyt­i­ca is a sub­sidiary of the SCL Group, a mil­i­tary con­trac­tor set up by a spooky huck­ster named Nigel Oakes that sells itself as a high-pow­ered con­clave of experts spe­cial­iz­ing in data-dri­ven coun­terin­sur­gency. It’s done work for the Pen­ta­gon, NATO, and the UK Min­istry of Defense in places like Afghanistan and Nepal, where it says it ran a “cam­paign to reduce and ulti­mate­ly stop the large num­bers of Maoist insur­gents in Nepal from break­ing into hous­es in remote areas to steal food, harass the home­own­ers and cause dis­rup­tion.”

In the grander scheme of high-tech coun­terin­sur­gency boon­dog­gles, which fea­tures such sto­ried psy-ops out­fits as Peter Thiel’s Palan­tir and Cold War dinosaurs like Lock­heed Mar­tin, the SCL Group appears to be a com­par­a­tive­ly minor play­er. Nev­er­the­less, its ambi­tious claims to recon­fig­ure the world order with some well-placed algo­rithms recalls one of the first major play­ers in the field: Simul­mat­ics, a 1960s coun­terin­sur­gency mil­i­tary con­trac­tor that pio­neered data-dri­ven elec­tion cam­paigns and whose founder, Ithiel de Sola Pool, helped shape the devel­op­ment of the ear­ly inter­net as a sur­veil­lance and coun­terin­sur­gency tech­nol­o­gy.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for him­self help­ing cal­i­brate the mes­sag­ing of John F. Kennedy’s 1960 pres­i­den­tial cam­paign. His idea was to mod­el the Amer­i­can elec­torate by decon­struct­ing each vot­er into 480 data points that defined every­thing from their reli­gious views to racial atti­tudes to socio-eco­nom­ic sta­tus. He would then use that data to run sim­u­la­tions on how they would respond to a par­tic­u­lar message—and those tri­al runs would per­mit major cam­paigns to fine-tune their mes­sages accord­ing­ly.

These new tar­get­ed mes­sag­ing tac­tics, enabled by rudi­men­ta­ry com­put­ers, had many fans in the per­ma­nent polit­i­cal class of Wash­ing­ton; their liveli­hoods, after all, were large­ly root­ed in their claims to ana­lyze and pre­dict polit­i­cal behav­ior. And so Pool lever­aged his research to launch Simul­mat­ics, a data ana­lyt­ics start­up that offered com­put­er sim­u­la­tion ser­vices to major Amer­i­can cor­po­ra­tions, help­ing them pre-test prod­ucts and con­struct adver­tis­ing cam­paigns.

Simul­mat­ics also did a brisk busi­ness as a mil­i­tary and intel­li­gence con­trac­tor. It ran sim­u­la­tions for Radio Lib­er­ty, the CIA’s covert anti-com­mu­nist radio sta­tion, help­ing the agency mod­el the Sovi­et Union’s inter­nal com­mu­ni­ca­tion sys­tem in order to pre­dict the effect that for­eign news broad­casts would have on the country’s polit­i­cal sys­tem. At the same time, Simul­mat­ics ana­lysts were doing coun­terin­sur­gency work under an ARPA con­tract in Viet­nam, con­duct­ing inter­views and gath­er­ing data to help mil­i­tary plan­ners under­stand why Viet­namese peas­ants rebelled and resist­ed Amer­i­can paci­fi­ca­tion efforts. Simulmatic’s work in Viet­nam was just one piece of a bru­tal Amer­i­can coun­terin­sur­gency pol­i­cy that involved covert pro­grams of assas­si­na­tions, ter­ror, and tor­ture that col­lec­tive­ly came to be known as the Phoenix Pro­gram.

At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”

With the advent of cheap­er com­put­er tech­nol­o­gy in the 1960s, cor­po­rate and gov­ern­ment data­bas­es were already mak­ing a good deal of Pool’s prophe­cy come to pass, via sophis­ti­cat­ed new modes of con­sumer track­ing and pre­dic­tive mod­el­ing. But rather than greet­ing such advances as the augurs of a new demo­c­ra­t­ic mir­a­cle, peo­ple at the time saw it as a threat. Crit­ics across the polit­i­cal spec­trum warned that the pro­lif­er­a­tion of these tech­nolo­gies would lead to cor­po­ra­tions and gov­ern­ments con­spir­ing to sur­veil, manip­u­late, and con­trol soci­ety.

This fear res­onat­ed with every part of the culture—from the new left to prag­mat­ic cen­trists and reac­tionary South­ern Democ­rats. It prompt­ed some high-pro­file exposés in papers like the New York Times and Wash­ing­ton Post. It was report­ed on in trade mag­a­zines of the nascent com­put­er indus­try like Com­put­er­World. And it com­mand­ed prime real estate in estab­lish­ment rags like The Atlantic.

Pool per­son­i­fied the prob­lem. His belief in the pow­er of com­put­ers to bend people’s will and man­age soci­ety was seen as a dan­ger. He was attacked and demo­nized by the anti­war left. He was also reviled by main­stream anti-com­mu­nist lib­er­als.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.

***

Simulmatics and its first-generation imitations are now ancient history—dating back to the long-ago time when computers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The internet surrounds us, engulfing and monitoring everything we do. We are tracked and watched and profiled every minute of every day by countless companies—from giant platform monopolies like Facebook and Google to boutique data-driven election firms like i360 and Cambridge Analytica.

Yet the fear that Ithiel de Sola Pool and his tech­no­crat­ic world view inspired half a cen­tu­ry ago has been wiped from our cul­ture. For decades, we’ve been told that a cap­i­tal­ist soci­ety where no secrets could be kept from our benev­o­lent elite is not some­thing to fear—but some­thing to cheer and pro­mote.

Now, only after Don­ald Trump shocked the lib­er­al polit­i­cal class is this fear start­ing to resur­face. But it’s doing so in a twist­ed, nar­row way.

***

And that’s the big­ger issue with the Cam­bridge Ana­lyt­i­ca freak­out: it’s not just anti-his­tor­i­cal, it’s also pro­found­ly anti-polit­i­cal. Peo­ple are still try­ing to blame Don­ald Trump’s sur­prise 2016 elec­toral vic­to­ry on some­thing, anything—other than America’s degen­er­ate pol­i­tics and a polit­i­cal class that has presided over a stun­ning nation­al decline. The keep­ers of con­ven­tion­al wis­dom all insist in one way or anoth­er that Trump won because some­thing nov­el and unique hap­pened; that some­thing had to have gone hor­ri­bly wrong. And if you’re able to iden­ti­fy and iso­late this some­thing and get rid of it, every­thing will go back to normal—back to sta­tus quo, when every­thing was good.

Cambridge Analytica has been one of the lesser bogeymen used to explain Trump’s victory for quite a while, going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica’s Facebook heist, was skeptically questioning the company’s technology and its role in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company’s overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.

Yet now, with Robert Mueller’s Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn’t just a far-fetched marketing ploy—it’s a real and present danger to a virtuous info-media status quo. And it’s most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.

It’s a comforting idea for our political elite, but it’s not true. Alexander Nix, Cambridge Analytica’s well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix’s business plan is but an updated version of Ithiel de Sola Pool’s vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we’re kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue’s gallery of bad political actors, but to finger the real culprits behind Donald Trump’s takeover of America, the self-appointed watchdogs of our country’s imperiled political virtue had best take a long and sobering look in the mirror.

———-

“The Cam­bridge Ana­lyt­i­ca Con” by Yasha Levine; The Baf­fler; 03/21/2018

“It’s good that the main­stream news media are final­ly start­ing to pay atten­tion to this dark cor­ner of the inter­net —and pro­duc­ing exposés of shady sub rosa polit­i­cal cam­paigns and their eager exploita­tion of our online dig­i­tal trails in order to con­t­a­m­i­nate our infor­ma­tion streams and influ­ence our deci­sions. It’s about time.”

Yes indeed, it is great to see that this topic is finally getting the attention it has long deserved. But it’s not great to see the topic limited to Cambridge Analytica and Facebook. As Levine puts it, “We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries.” Societies in general are held together via overt and covert influence, but we’ve gotten really, really good at it in America over the last half century, and the story of Cambridge Analytica, along with the larger story of Sandy Parakilas’s whistle-blowing about mass data collection, can’t really be understood outside that historical context:

...
But this sto­ry is being cov­ered and framed in a mis­lead­ing way. So far, much of the main­stream cov­er­age, dri­ven by the Times and Guardian reports, looks at Cam­bridge Ana­lyt­i­ca in isolation—almost entire­ly out­side of any his­tor­i­cal or polit­i­cal con­text. This makes it seem to read­ers unfa­mil­iar with the long his­to­ry of the strug­gle for con­trol of the dig­i­tal sphere as if the main prob­lem is that the bad actors at Cam­bridge Ana­lyt­i­ca crossed the trans­mis­sion wires of Face­book in the Promethean man­ner of Vic­tor Frankenstein—taking what were nor­mal­ly respectable, sci­en­tif­ic data pro­to­cols and per­vert­ing them to serve the dia­bol­i­cal aim of rean­i­mat­ing the decom­pos­ing lump of polit­i­cal flesh known as Don­ald Trump.

So if we’re going to view the actions of Cam­bridge Ana­lyt­i­ca in their prop­er light, we need first to start with an admis­sion. We must con­cede that covert influ­ence is not some­thing unusu­al or for­eign to our soci­ety, but is as Amer­i­can as apple pie and free­dom fries. The use of manip­u­la­tive, psy­cho­log­i­cal­ly dri­ven adver­tis­ing and mar­ket­ing tech­niques to sell us prod­ucts, lifestyles, and ideas has been the foun­da­tion of mod­ern Amer­i­can soci­ety, going back to the days of the self-styled inven­tor of pub­lic rela­tions, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ail­ing con­sumer soci­ety togeth­er. And when it comes to mar­ket­ing can­di­dates and polit­i­cal mes­sages, using data to influ­ence peo­ple and shape their deci­sions has been the holy grail of the com­put­er age, going back half a cen­tu­ry.
...

And the first step in putting the Cam­bridge Ana­lyt­i­ca sto­ry in prop­er per­spec­tive is rec­og­niz­ing that what it is accused of doing — grab­bing per­son­al data and build­ing pro­files for the pur­pose of influ­enc­ing vot­ers — is done every day by enti­ties like Face­book and Google. It’s a reg­u­lar part of our lives. And you don’t even need to use Face­book or Google to become part of this vast com­mer­cial sur­veil­lance sys­tem. You just need to com­mu­ni­cate with some­one who does use those plat­forms:

...
Let’s start with the basics: What Cam­bridge Ana­lyt­i­ca is accused of doing—siphoning people’s data, com­pil­ing pro­files, and then deploy­ing that infor­ma­tion to influ­ence them to vote a cer­tain way—Facebook and Sil­i­con Val­ley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more inva­sive scale.

Today’s inter­net busi­ness ecosys­tem is built on for-prof­it sur­veil­lance, behav­ioral pro­fil­ing, manip­u­la­tion and influ­ence. That’s the name of the game. It isn’t just Face­book or Cam­bridge Ana­lyt­i­ca or even Google. It’s Ama­zon. It’s eBay. It’s Palan­tir. It’s Angry Birds. It’s MoviePass. It’s Lock­heed Mar­tin. It’s every app you’ve ever down­loaded. Every phone you bought. Every pro­gram you watched on your on-demand cable TV pack­age.

All of these games, apps, and plat­forms prof­it from the con­cert­ed siphon­ing up of all data trails to pro­duce pro­files for all sorts of micro-tar­get­ed influ­ence ops in the pri­vate sec­tor. This com­merce in user data per­mit­ted Face­book to earn $40 bil­lion last year, while Google raked in $110 bil­lion.

What do these com­pa­nies know about us, their users? Well, just about every­thing.

Sil­i­con Val­ley of course keeps a tight lid on this infor­ma­tion, but you can get a glimpse of the kinds of data our pri­vate dig­i­tal dossiers con­tain by trawl­ing through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-tar­get­ed adver­tis­ing tech­nol­o­gy. The lan­guage, stripped of opaque tech jar­gon, revealed that just about every­thing we enter into Google’s many prod­ucts and platforms—from email cor­re­spon­dence to Web search­es and inter­net browsing—is ana­lyzed and used to pro­file users in an extreme­ly inva­sive and per­son­al way. Email cor­re­spon­dence is parsed for mean­ing and sub­ject mat­ter. Names are matched to real iden­ti­ties and address­es. Email attachments—say, bank state­ments or test­ing results from a med­ical lab—are scraped for infor­ma­tion. Demo­graph­ic and psy­cho­graph­ic data, includ­ing social class, per­son­al­i­ty type, age, sex, polit­i­cal affil­i­a­tion, cul­tur­al inter­ests, social ties, per­son­al income, and mar­i­tal sta­tus is extract­ed. In one patent, I dis­cov­ered that Google appar­ent­ly had the abil­i­ty to deter­mine if a per­son was a legal U.S. res­i­dent or not. It also turned out you didn’t have to be a reg­is­tered Google user to be snared in this pro­fil­ing appa­ra­tus. All you had to do was com­mu­ni­cate with some­one who had a Gmail address.

On the whole, Google’s pro­fil­ing phi­los­o­phy was no dif­fer­ent than Facebook’s, which also con­structs “shad­ow pro­files” to col­lect and mon­e­tize data, even if you nev­er had a reg­is­tered Face­book or Gmail account.
...
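To make the mechanics Levine describes a bit more concrete, here is a deliberately toy sketch of content-analysis profiling, including profiles for people who never signed up. None of this is Google’s or Facebook’s actual code; the keyword rules and field names are invented for illustration:

```python
import re
from collections import defaultdict

# Toy keyword -> attribute rules; real systems infer these statistically.
SIGNALS = {
    "mortgage": ("homeowner", True),
    "diaper": ("has_children", True),
    "oncology": ("health_interest", "cancer"),
    "401k": ("employment", "salaried"),
}

# One profile per email address, including addresses belonging to people
# who never registered for the service ("shadow" profiles).
profiles = defaultdict(lambda: {"registered": False, "attributes": {}})

def ingest_message(sender, recipients, body):
    """Attach inferred attributes to every address a message touches."""
    words = set(re.findall(r"[a-z0-9]+", body.lower()))
    for keyword, (field, value) in SIGNALS.items():
        if keyword in words:
            for address in [sender, *recipients]:
                profiles[address]["attributes"][field] = value

ingest_message(
    sender="user@gmail.example",          # a registered user
    recipients=["friend@other.example"],  # not a user, profiled anyway
    body="Did the mortgage refinance go through before your 401k meeting?",
)
print(profiles["friend@other.example"])
# -> {'registered': False, 'attributes': {'homeowner': True, 'employment': 'salaried'}}
```

The point of the toy is the last line: the non-user gets profiled simply by corresponding with a user, which is all a “shadow profile” is.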

The next step in contextualizing this is recognizing that Facebook and Google are merely the biggest fish in an ocean of data brokerage markets that has many smaller inhabitants trying to do the same thing. This is part of what makes Facebook’s handing over of profile data to app developers so scandalous... Facebook clearly knew there was a voracious market for this information and made a lot of money selling into that market:

...
It’s not just the big plat­form monop­o­lies that do this, but all the small­er com­pa­nies that run their busi­ness­es on ser­vices oper­at­ed by Google and Face­book. It even includes cute games like Angry Birds, devel­oped by Finland’s Rovio Enter­tain­ment, that’s been down­loaded more than a bil­lion times. The Android ver­sion of Angry Birds was found to pull per­son­al data on its play­ers, includ­ing eth­nic­i­ty, mar­i­tal sta­tus, and sex­u­al orientation—including options for the “sin­gle,” “mar­ried,” “divorced,” “engaged,” and “swinger” cat­e­gories. Pulling per­son­al data like this didn’t con­tra­dict Google’s terms of ser­vices for its Android plat­form. Indeed, for-prof­it sur­veil­lance was the whole point of why Google start­ed plan­ning to launch an iPhone rival as far back as 2004.

In launch­ing Android, Google made a gam­ble that by releas­ing its pro­pri­etary oper­at­ing sys­tem to man­u­fac­tur­ers free of charge, it wouldn’t be rel­e­gat­ed to run­ning apps on Apple iPhone or Microsoft Mobile Win­dows like some kind of dig­i­tal sec­ond-class cit­i­zen. If it played its cards right and Android suc­ceed­ed, Google would be able to con­trol the envi­ron­ment that under­pins the entire mobile expe­ri­ence, mak­ing it the ulti­mate gate­keep­er of the many mon­e­tized inter­ac­tions among users, apps, and adver­tis­ers. And that’s exact­ly what hap­pened. Today, Google monop­o­lizes the smart phone mar­ket and dom­i­nates the mobile for-prof­it sur­veil­lance busi­ness.

These detailed psy­cho­log­i­cal pro­files, togeth­er with the direct access to users that plat­forms like Google and Face­book deliv­er, make both com­pa­nies cat­nip to adver­tis­ers, PR flacks—and dark-mon­ey polit­i­cal out­fits like Cam­bridge Ana­lyt­i­ca.
...

And when it comes to political campaigns, digital giants like Facebook and Google already have special election units set up to give privileged access to political campaigns so they can influence voters even more effectively. The stories about the Trump campaign’s use of Facebook “embeds” to run a massive advertising campaign of “A/B testing on steroids,” systematically experimenting on voter ad responses, are part of that larger story of how these giants have already made the manipulation of voters big business:

...
Indeed, polit­i­cal cam­paigns showed an ear­ly and pro­nounced affin­i­ty for the idea of tar­get­ed access and influ­ence on plat­forms like Face­book. Instead of blan­ket­ing air­waves with a sin­gle polit­i­cal ad, they could show peo­ple ads that appealed specif­i­cal­ly to the issues they held dear. They could also ensure that any such mes­sage spread through a tar­get­ed person’s larg­er social net­work through repost­ing and shar­ing.

The enor­mous com­mer­cial inter­est that polit­i­cal cam­paigns have shown in social media has earned them priv­i­leged atten­tion from Sil­i­con Val­ley plat­forms in return. Face­book runs a sep­a­rate polit­i­cal divi­sion specif­i­cal­ly geared to help its cus­tomers tar­get and influ­ence vot­ers.

The com­pa­ny even allows polit­i­cal cam­paigns to upload their own lists of poten­tial vot­ers and sup­port­ers direct­ly into Facebook’s data sys­tem. So armed, dig­i­tal polit­i­cal oper­a­tives can then use those people’s social net­works to iden­ti­fy oth­er prospec­tive vot­ers who might be sup­port­ive of their candidate—and then tar­get them with a whole new tidal wave of ads. “There’s a lev­el of pre­ci­sion that doesn’t exist in any oth­er medi­um,” Crys­tal Pat­ter­son, a Face­book employ­ee who works with gov­ern­ment and pol­i­tics cus­tomers, told the New York Times back in 2015. “It’s get­ting the right mes­sage to the right peo­ple at the right time.”

Nat­u­ral­ly, a whole slew of com­pa­nies and oper­a­tives in our increas­ing­ly data-dri­ven elec­tion scene have cropped up over the last decade to plug in to these amaz­ing influ­ence machines. There is a whole con­stel­la­tion of them work­ing all sorts of strate­gies: tra­di­tion­al vot­er tar­get­ing, polit­i­cal pro­pa­gan­da mills, troll armies, and bots.

Some of these firms are polit­i­cal­ly agnos­tic; they’ll work for any­one with cash. Oth­ers are par­ti­san. The Demo­c­ra­t­ic Par­ty Data Death Star is NGP VAN. The Repub­li­cans have a few of their own—including i360, a data mon­ster gen­er­ous­ly fund­ed by Charles Koch. Nat­u­ral­ly, i360 part­ners with Face­book to deliv­er tar­get vot­ers. It also claims to have 700 per­son­al data points cross-tab­u­lat­ed on 199 mil­lion vot­ers and near­ly 300 mil­lion con­sumers, with the abil­i­ty to pro­file and tar­get them with pin-point accu­ra­cy based on their beliefs and views.

Here’s how The Nation­al Jour­nal’s Andrew Rice described i360 in 2015:

Like Google, the Nation­al Secu­ri­ty Agency, or the Demo­c­ra­t­ic data machine, i360 has a vora­cious appetite for per­son­al infor­ma­tion. It is con­stant­ly ingest­ing new data into its tar­get­ing sys­tems, which pre­dict not only par­ti­san iden­ti­fi­ca­tion but also sen­ti­ments about issues such as abor­tion, tax­es, and health care. When I vis­it­ed the i360 office, an employ­ee gave me a demon­stra­tion, zoom­ing in on a map to focus on a par­tic­u­lar 66-year-old high school teacher who lives in an apart­ment com­plex in Alexan­dria, Vir­ginia. . . . Though the adver­tis­ing indus­try typ­i­cal­ly eschews address­ing any sin­gle individual—it’s not just inva­sive, it’s also inefficient—it is becom­ing com­mon­place to tar­get extreme­ly nar­row audi­ences. So the school­teacher, along with a few look-alikes, might see a tai­lored ad the next time she clicks on YouTube.

Sil­i­con Val­ley doesn’t just offer cam­paigns a neu­tral plat­form; it also works close­ly along­side polit­i­cal can­di­dates to the point that the biggest inter­net com­pa­nies have become an exten­sion of the Amer­i­can polit­i­cal sys­tem. As one recent study showed, tech com­pa­nies rou­tine­ly embed their employ­ees inside major polit­i­cal cam­paigns: “Face­book, Twit­ter, and Google go beyond pro­mot­ing their ser­vices and facil­i­tat­ing dig­i­tal adver­tis­ing buys, active­ly shap­ing cam­paign com­mu­ni­ca­tion through their close col­lab­o­ra­tion with polit­i­cal staffers . . . these firms serve as qua­si-dig­i­tal con­sul­tants to cam­paigns, shap­ing dig­i­tal strat­e­gy, con­tent, and exe­cu­tion.”
...
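The voter-list upload mentioned above deserves a closer look, because the mechanics are mundane: the campaign and the platform compare hashed contact details to link a voter file to user accounts. Here is a minimal sketch of how such list matching generally works; normalize-then-hash (commonly SHA-256) is standard practice for these services, but the matching index and account names below are hypothetical, not Facebook’s actual API:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """List-matching services generally require identifiers to be
    normalized (trimmed, lowercased) and hashed before upload."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# A campaign's voter file (invented example data).
voter_file = ["Jane.Doe@example.com ", "swing.voter@example.net"]
hashed_audience = [normalize_and_hash(e) for e in voter_file]

# The platform keeps the same hashes for its own users, so it can link
# the uploaded list to real accounts without exchanging raw addresses.
platform_index = {normalize_and_hash("jane.doe@example.com"): "user_1017"}

matched = [platform_index[h] for h in hashed_audience if h in platform_index]
print(matched)  # -> ['user_1017']
```

Once matched, the platform can both target the uploaded list directly and expand it to “look-alike” users, which is the step described above.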

And offering campaigns special services to manipulate voters isn’t just big business. It’s a largely unregulated business. If Facebook decides to covertly manipulate you by altering its newsfeed algorithms so it shows you more news articles from your conservative-leaning friends (or liberal-leaning friends), that’s totally legal. Because, again, subtly manipulating people is as American as apple pie:

...
Now, of course, every elec­tion is a Face­book Elec­tion. And why not? As Bloomberg News has not­ed, Sil­i­con Val­ley ranks elec­tions “along­side the Super Bowl and the Olympics in terms of events that draw block­buster ad dol­lars and boost engage­ment.” In 2016, $1 bil­lion was spent on dig­i­tal advertising—with the bulk going to Face­book, Twit­ter, and Google.

What’s inter­est­ing here is that because so much mon­ey is at stake, there are absolute­ly no rules that would restrict any­thing an unsa­vory polit­i­cal appa­ratchik or a Sil­i­con Val­ley oli­garch might want to foist on the unsus­pect­ing dig­i­tal pub­lic. Creep­i­ly, Facebook’s own inter­nal research divi­sion car­ried out exper­i­ments show­ing that the plat­form could influ­ence people’s emo­tion­al state in con­nec­tion to a cer­tain top­ic or event. Com­pa­ny engi­neers call this fea­ture “emo­tion­al con­ta­gion”—i.e., the abil­i­ty to viral­ly influ­ence people’s emo­tions and ideas just through the con­tent of sta­tus updates. In the twist­ed econ­o­my of emo­tion­al con­ta­gion, a neg­a­tive post by a user sup­press­es pos­i­tive posts by their friends, while a pos­i­tive post sup­press­es neg­a­tive posts. “When a Face­book user posts, the words they choose influ­ence the words cho­sen lat­er by their friends,” explained the company’s lead sci­en­tist on this study.

On a very basic lev­el, Facebook’s opaque con­trol of its feed algo­rithm means the plat­form has real pow­er over people’s ideas and actions dur­ing an elec­tion. This can be done by a data shift as sim­ple and sub­tle as imper­cep­ti­bly tweak­ing a person’s feed to show more posts from friends who are, say, sup­port­ers of a par­tic­u­lar polit­i­cal can­di­date or a spe­cif­ic polit­i­cal idea or event. As far as I know, there is no law pre­vent­ing Face­book from doing just that: it’s plain­ly able and will­ing to influ­ence a user’s feed based on polit­i­cal aims—whether done for inter­nal cor­po­rate objec­tives, or due to pay­ments from polit­i­cal groups, or by the per­son­al pref­er­ences of Mark Zucker­berg.
...
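To see how subtle the feed tweak Levine describes could be, consider a toy ranking function. This is not Facebook’s actual algorithm; it just demonstrates that one hidden multiplier, imperceptible post-by-post, is enough to change what rises to the top:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float      # the ordinary relevance signal
    pro_candidate: bool    # whatever attribute the operator wants boosted

def rank_feed(posts, political_boost=1.0):
    """Rank posts by engagement, optionally nudged by a hidden multiplier."""
    def score(post):
        s = post.engagement
        if post.pro_candidate:
            s *= political_boost  # tiny per post, decisive in aggregate
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("apolitical friend", engagement=1.00, pro_candidate=False),
    Post("candidate-aligned friend", engagement=0.92, pro_candidate=True),
]
print([p.author for p in rank_feed(posts)])                        # neutral order
print([p.author for p in rank_feed(posts, political_boost=1.15)])  # tilted order
```

A 15 percent nudge is invisible to any individual user, yet across millions of feeds it systematically changes whose voices get heard.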

And this con­tem­po­rary state of affairs did­n’t emerge spon­ta­neous­ly. As Levine cov­ers in Sur­veil­lance Val­ley, this is what the inter­net — back when it was the ARPANET mil­i­tary net­work — was all about from its very con­cep­tion:

...
There’s anoth­er, big­ger cul­tur­al issue with the way we’ve begun to exam­ine and dis­cuss Cam­bridge Analytica’s bat­tery of inter­net-based influ­ence ops. Peo­ple are still daz­zled by the idea that the inter­net, in its pure, untaint­ed form, is some kind of mag­ic machine dis­trib­ut­ing democ­ra­cy and egal­i­tar­i­an­ism across the globe with the touch of a few key­strokes. This is the gospel preached by a stal­wart cho­rus of Net prophets, from Jeff Jarvis and the late John Per­ry Bar­low to Clay Shirky and Kevin Kel­ly. These char­la­tans all feed on an hon­or­able demo­c­ra­t­ic impulse: peo­ple still want to des­per­ate­ly believe in the utopi­an promise of this technology—its abil­i­ty to equal­ize pow­er, end cor­rup­tion, top­ple cor­po­rate media monop­o­lies, and empow­er the indi­vid­ual.

This mythology—which is of course aggres­sive­ly con­fect­ed for mass con­sump­tion by Sil­i­con Val­ley mar­ket­ing and PR outfits—is deeply root­ed in our cul­ture; it helps explain why oth­er­wise seri­ous jour­nal­ists work­ing for main­stream news out­lets can uniron­i­cal­ly employ phras­es such as “infor­ma­tion wants to be free” and “Facebook’s engine of democ­ra­cy” and get away with it.

The truth is that the inter­net has nev­er been about egal­i­tar­i­an­ism or democ­ra­cy.

The ear­ly inter­net came out of a series of Viet­nam War coun­terin­sur­gency projects aimed at devel­op­ing com­put­er tech­nol­o­gy that would give the gov­ern­ment a way to man­age a com­plex series of glob­al com­mit­ments and to mon­i­tor and pre­vent polit­i­cal strife—both at home and abroad. The inter­net, going back to its first incar­na­tion as the ARPANET mil­i­tary net­work, was always about sur­veil­lance, pro­fil­ing, and tar­get­ing.

The influ­ence of U.S. coun­terin­sur­gency doc­trine on the devel­op­ment of mod­ern com­put­ers and the inter­net is not some­thing that many peo­ple know about. But it is a sub­ject that I explore at length in my book, Sur­veil­lance Val­ley. So what jumps out at me is how seam­less­ly the report­ed activ­i­ties of Cam­bridge Ana­lyt­i­ca fit into this his­tor­i­cal nar­ra­tive.
...

“The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.”

And one of the key figures behind this early ARPANET version of the internet, Ithiel de Sola Pool, got his start in this area in the 1950s, working at the Hoover Institution at Stanford University to understand the nature and causes of left-wing revolutions and distill them down to a mathematical formula. Pool, a virulent anti-Communist, also worked for JFK’s 1960 campaign and went on to start a private company, Simulmatics, offering services in modeling and manipulating human behavior based on large data sets on people:

...
Cam­bridge Ana­lyt­i­ca is a sub­sidiary of the SCL Group, a mil­i­tary con­trac­tor set up by a spooky huck­ster named Nigel Oakes that sells itself as a high-pow­ered con­clave of experts spe­cial­iz­ing in data-dri­ven coun­terin­sur­gency. It’s done work for the Pen­ta­gon, NATO, and the UK Min­istry of Defense in places like Afghanistan and Nepal, where it says it ran a “cam­paign to reduce and ulti­mate­ly stop the large num­bers of Maoist insur­gents in Nepal from break­ing into hous­es in remote areas to steal food, harass the home­own­ers and cause dis­rup­tion.”

In the grander scheme of high-tech coun­terin­sur­gency boon­dog­gles, which fea­tures such sto­ried psy-ops out­fits as Peter Thiel’s Palan­tir and Cold War dinosaurs like Lock­heed Mar­tin, the SCL Group appears to be a com­par­a­tive­ly minor play­er. Nev­er­the­less, its ambi­tious claims to recon­fig­ure the world order with some well-placed algo­rithms recalls one of the first major play­ers in the field: Simul­mat­ics, a 1960s coun­terin­sur­gency mil­i­tary con­trac­tor that pio­neered data-dri­ven elec­tion cam­paigns and whose founder, Ithiel de Sola Pool, helped shape the devel­op­ment of the ear­ly inter­net as a sur­veil­lance and coun­terin­sur­gency tech­nol­o­gy.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for him­self help­ing cal­i­brate the mes­sag­ing of John F. Kennedy’s 1960 pres­i­den­tial cam­paign. His idea was to mod­el the Amer­i­can elec­torate by decon­struct­ing each vot­er into 480 data points that defined every­thing from their reli­gious views to racial atti­tudes to socio-eco­nom­ic sta­tus. He would then use that data to run sim­u­la­tions on how they would respond to a par­tic­u­lar message—and those tri­al runs would per­mit major cam­paigns to fine-tune their mes­sages accord­ing­ly.

These new tar­get­ed mes­sag­ing tac­tics, enabled by rudi­men­ta­ry com­put­ers, had many fans in the per­ma­nent polit­i­cal class of Wash­ing­ton; their liveli­hoods, after all, were large­ly root­ed in their claims to ana­lyze and pre­dict polit­i­cal behav­ior. And so Pool lever­aged his research to launch Simul­mat­ics, a data ana­lyt­ics start­up that offered com­put­er sim­u­la­tion ser­vices to major Amer­i­can cor­po­ra­tions, help­ing them pre-test prod­ucts and con­struct adver­tis­ing cam­paigns.

Simul­mat­ics also did a brisk busi­ness as a mil­i­tary and intel­li­gence con­trac­tor. It ran sim­u­la­tions for Radio Lib­er­ty, the CIA’s covert anti-com­mu­nist radio sta­tion, help­ing the agency mod­el the Sovi­et Union’s inter­nal com­mu­ni­ca­tion sys­tem in order to pre­dict the effect that for­eign news broad­casts would have on the country’s polit­i­cal sys­tem. At the same time, Simul­mat­ics ana­lysts were doing coun­terin­sur­gency work under an ARPA con­tract in Viet­nam, con­duct­ing inter­views and gath­er­ing data to help mil­i­tary plan­ners under­stand why Viet­namese peas­ants rebelled and resist­ed Amer­i­can paci­fi­ca­tion efforts. Simulmatic’s work in Viet­nam was just one piece of a bru­tal Amer­i­can coun­terin­sur­gency pol­i­cy that involved covert pro­grams of assas­si­na­tions, ter­ror, and tor­ture that col­lec­tive­ly came to be known as the Phoenix Pro­gram.
...
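A rough sketch may help clarify the kind of simulation Pool’s 480-data-point scheme implies: represent each voter as a vector of traits, score candidate messages against those traits, and fine-tune toward whichever message simulates best. The traits and weights below are invented for illustration, not drawn from Simulmatics’ actual models:

```python
# Each voter reduced to data points (Pool used 480; three suffice for a toy).
voters = [
    {"union_household": 1, "churchgoing": 0, "suburban": 1},
    {"union_household": 0, "churchgoing": 1, "suburban": 1},
    {"union_household": 0, "churchgoing": 1, "suburban": 0},
]

# Hypothetical campaign messages, weighted by the traits they appeal to.
messages = {
    "jobs_message":  {"union_household": 0.8, "suburban": 0.2},
    "faith_message": {"churchgoing": 0.9},
}

def simulated_support(weights, voter):
    """Predicted response: a dot product of message weights and voter traits."""
    return sum(w * voter.get(trait, 0) for trait, w in weights.items())

for name, weights in messages.items():
    total = sum(simulated_support(weights, v) for v in voters)
    print(name, round(total, 2))
# The campaign then fine-tunes toward whichever message simulates best.
```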

And part of what drove Pool was a utopian belief that computers and massive amounts of data could be used to run society harmoniously. Left-wing revolutions were problems to be managed with Big Data. That’s pretty important historical context when thinking about the role Cambridge Analytica played in electing Donald Trump:

...
At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”
...

And guess what: the American public wasn’t enamored with Pool’s vision of a world managed by computing technology and Big Data models of society. When the public learned about these early versions of the internet, inspired by visions of a computer-managed world, in the ’60s and ’70s, it got scared:

...
With the advent of cheap­er com­put­er tech­nol­o­gy in the 1960s, cor­po­rate and gov­ern­ment data­bas­es were already mak­ing a good deal of Pool’s prophe­cy come to pass, via sophis­ti­cat­ed new modes of con­sumer track­ing and pre­dic­tive mod­el­ing. But rather than greet­ing such advances as the augurs of a new demo­c­ra­t­ic mir­a­cle, peo­ple at the time saw it as a threat. Crit­ics across the polit­i­cal spec­trum warned that the pro­lif­er­a­tion of these tech­nolo­gies would lead to cor­po­ra­tions and gov­ern­ments con­spir­ing to sur­veil, manip­u­late, and con­trol soci­ety.

This fear res­onat­ed with every part of the culture—from the new left to prag­mat­ic cen­trists and reac­tionary South­ern Democ­rats. It prompt­ed some high-pro­file exposés in papers like the New York Times and Wash­ing­ton Post. It was report­ed on in trade mag­a­zines of the nascent com­put­er indus­try like Com­put­er­World. And it com­mand­ed prime real estate in estab­lish­ment rags like The Atlantic.

Pool per­son­i­fied the prob­lem. His belief in the pow­er of com­put­ers to bend people’s will and man­age soci­ety was seen as a dan­ger. He was attacked and demo­nized by the anti­war left. He was also reviled by main­stream anti-com­mu­nist lib­er­als.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.
...

But that fear somehow disappeared in subsequent decades, only to be replaced with a faith in our benevolent techno-elite. And a faith that this mass public/private surveillance system is actually an empowering tool that will lead to a limitless future. And that is perhaps the biggest scandal here: The public didn’t just forget to keep an eye on the powerful. The public forgot to keep an eye on the people whose power is derived from keeping an eye on the public. We built a surveillance state at the same time we fell into a fog of civic and historical amnesia. And that has coincided with the rise of a plutocracy, the dominance of right-wing anti-government economic doctrines, and the larger failure of the American political and economic elites to deliver a society that actually works for average people. To put it another way, the rise of the modern surveillance state is one element of a massive, decades-long process of collectively ‘dropping the ball’. We screwed up massively, and Facebook and Google are just two of the consequences. And yet we still don’t view the Trump phenomenon within the context of that massive collective screw up, which means we’re still screwing up massively:

...
Yet the fear that Ithiel de Sola Pool and his tech­no­crat­ic world view inspired half a cen­tu­ry ago has been wiped from our cul­ture. For decades, we’ve been told that a cap­i­tal­ist soci­ety where no secrets could be kept from our benev­o­lent elite is not some­thing to fear—but some­thing to cheer and pro­mote.

Now, only after Don­ald Trump shocked the lib­er­al polit­i­cal class is this fear start­ing to resur­face. But it’s doing so in a twist­ed, nar­row way.

***

And that’s the big­ger issue with the Cam­bridge Ana­lyt­i­ca freak­out: it’s not just anti-his­tor­i­cal, it’s also pro­found­ly anti-polit­i­cal. Peo­ple are still try­ing to blame Don­ald Trump’s sur­prise 2016 elec­toral vic­to­ry on some­thing, anything—other than America’s degen­er­ate pol­i­tics and a polit­i­cal class that has presided over a stun­ning nation­al decline. The keep­ers of con­ven­tion­al wis­dom all insist in one way or anoth­er that Trump won because some­thing nov­el and unique hap­pened; that some­thing had to have gone hor­ri­bly wrong. And if you’re able to iden­ti­fy and iso­late this some­thing and get rid of it, every­thing will go back to normal—back to sta­tus quo, when every­thing was good.
...

So the biggest story here isn’t that Cambridge Analytica was engaged in a mass manipulation campaign. And the biggest story isn’t even that Cambridge Analytica was engaged in a cutting-edge commercial mass manipulation campaign. Because both of those stories are eclipsed by the story that even if Cambridge Analytica really was engaged in a cutting-edge commercial campaign, it probably wasn’t nearly as cutting edge as what Facebook and Google and the other data giants routinely engage in. And this situation has been building for decades, within the context of the much larger scandal of the rise of an oligarchy that more or less runs America by and for powerful interests. Powerful interests that are overwhelmingly dedicated to right-wing elitist doctrines that view the public as a resource to be controlled and exploited for private profit.

It’s all a reminder that, as with so many incredibly complex issues, creating very high quality government is the only feasible answer. A high quality government managed by a self-aware public. Some sort of ‘surveillance state’ is almost an inevitability as long as we have ubiquitous surveillance technology. Even the array of ‘crypto’ tools touted in recent years has consistently proven to be vulnerable, which isn’t necessarily a bad thing, since ubiquitous crypto-technology comes with its own suite of mega-collective headaches. National security and personal data insecurity really are intertwined in both mutually inclusive and exclusive ways. It’s not as if the national security hawk argument that “you can’t be free if you’re dead from [insert war, terror, random chaos things a national security state is supposed to deal with]” is invalid. But fears of Big Brother are also valid, as our present situation amply demonstrates. The path isn’t clear, which is why a national security state with a significant private sector component and access to ample intimate details is likely for the foreseeable future whether you like it or not. People err on the side of immediate safety. So we had better have very high quality government. Especially high quality regulations for the private sector components of that national security state.

And while digital giants like Google and Facebook will inevitably have access to troves of personal data that they need in order to offer the kinds of services people want, there’s no reason they can’t be regulated heavily so they don’t become personal data repositories for sale. Which is what they are now.

What do we do about services that people use to run their lives which, by definition, necessitate the collection of private data by a third party? How do we deal with these challenges? Well, again, it starts with being aware of them and actually trying to collectively grapple with them so some sort of general consensus can be arrived at. And that’s all why we need to recognize that it is imperative that the public surveil the surveillance state, along with surveilling the rest of the world going on around us too. A self-aware surveillance state composed of a self-aware populace of people who know what’s going on with their surveillance state and the world. In other words, part of the solution to ‘Big Data Big Brother’ really is a society of ‘Little Brothers and Sisters’ who are collectively very informed about what is going on in the world and politically capable of effecting changes to that surveillance state (and the rest of government or the private sector) when necessary change is identified. Put yet another way, the one ‘utopian’ solution we can’t afford to give up on is the utopia of a well-functioning democracy populated by a well-informed citizenry. A citizenry well-armed with relevant facts and wisdom (and an extensive understanding of the history and technique of fascism and other authoritarian movements). Because a clueless society will be an abusively surveilled society.

But the fact that this Cambridge Analytica scandal is a surprise, and is being covered largely in isolation from this broader historic and contemporary context, is a reminder that we are nowhere near that democratic ideal of a well-informed citizenry. Well, guess what would be a really valuable tool for surveilling the surveillance state and the rest of the world around us and becoming that well-informed citizenry: the internet! Specifically, we really do need to read and digest growing amounts of information to make sense of an increasingly complex world. But the internet is just the start. The goal needs to be the kind of functional, self-aware democracy where situations like the current one don’t develop in a fog of collective amnesia and can be pro-actively managed. To put it another way, we need an inverse of Ithiel de Sola Pool’s vision of a world where benevolent elites use computers and Big Data to manage the rabble and ward off political revolutions. Instead, we need a political revolution of the rabble, fueled by the knowledge of our history and world that the internet makes widely accessible. And one of the key goals of that political revolution needs to be to create a world where the knowledge the internet makes widely available is used to rein in our elites and build a world that works for everyone.

And yes, that implicitly implies a left-wing revolution, since left-wing democratic movements are the only kind that have everyone in mind. And yes, this implies an economic revolution that systematically frees up time for virtually everyone, so people actually have the time to inform themselves. Economic security and time security. We need to build a world that provides both to everyone.

So when we ask ourselves how we should respond to the growing Cambridge Analytica/Facebook scandal, don’t forget that one of the key lessons the story of Cambridge Analytica teaches us is that there is an immense amount of knowledge about ourselves (our history and contemporary context) that we needed to learn and didn’t. And that includes envisioning what a functional democratic society and economy that works for everyone would look like, and building it. Yes, the internet could be very helpful in that process; just don’t forget about everything else that will be required to build that functional democracy.

Discussion

43 comments for “The Cambridge Analytica Microcosm in Our Panoptic Macrocosm”

  1. Here’s a good example of how many of the problems with Facebook are facilitated by the many privacy problems with the rest of the tech sector: A number of Facebook users discovered a rather creepy privacy violation by Facebook. It turns out that Facebook was collecting metadata about the calls and texts people were sending from their smartphones via the Facebook app and Google’s Android operating system.

    And it also turns out that Facebook used a number of sleazy excuses to “get permission” to collect this data. First, Facebook had users agree to giving such data away by hiding it away in obtuse language in the user agreement. Second, the default setting for the Facebook app was to give this data away. Users could turn off this data sharing, but it was never obvious it was on.

    Third, it was based on exploiting how Android’s user permissions system encourages people to share vast amounts of data without realizing it. This is where this becomes a Google scandal too. If you had the Android operating system, the Facebook app would try to get permission to access your phone contact information. This was ostensibly to be used for Facebook’s friend recommendation algorithms. If you granted permission to read contacts during the Facebook app’s installation on older versions of Android (before version 4.1, “Jelly Bean”), giving an app permission to read contact information also granted it access to call and message logs by default. So this was just egregiously bad privacy design by Google, and Facebook egregiously exploited it (surprise!).

    And when this loose permissions system was fixed in later versions of Android, Facebook continued to use a loophole to keep grabbing the call and text metadata. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so Facebook’s app could continue to gain access to call and SMS data by specifying an earlier Android SDK version. In other words, upgrading the Android operating system didn’t guarantee that upgrades to user data privacy rules would actually take effect in the apps you already had installed. Which, again, is egregious. But that’s what Google’s Android operating system allowed, and Facebook totally exploited it until Google finally closed the loophole in October of 2017.
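    To restate that loophole in code form, here is a minimal Python sketch of the effective-permission rule described above. It’s a simplified model of the behavior the Ars Technica report describes, not Google’s actual implementation, and the function and constant names are invented for illustration:

    ```python
    JELLY_BEAN_API = 16  # Android 4.1, where call/SMS log access was split out

    def effective_permissions(declared, target_sdk):
        """Model the grandfathering rule: apps built against an SDK older
        than 16 that held READ_CONTACTS also received call and SMS logs,
        even when running on newer versions of Android."""
        granted = set(declared)
        if "READ_CONTACTS" in granted and target_sdk < JELLY_BEAN_API:
            granted |= {"READ_CALL_LOG", "READ_SMS"}
        return granted

    # An app pinned to an old SDK keeps the broad grant...
    print(effective_permissions({"READ_CONTACTS"}, target_sdk=15))
    # ...while the same declaration targeting SDK 16+ gets contacts only.
    print(effective_permissions({"READ_CONTACTS"}, target_sdk=16))
    ```

    The key detail is that the app, not the phone, chose the target SDK version, so until the October 2017 deprecation nothing forced an already-installed app off the old, broader grant.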

    Note that Apple’s iOS phones didn’t have this issue with the Facebook app, because the iOS operating system simply does not give apps access to that kind of information. So the permissions Google was giving out were bad even compared to its major competitor in the smartphone operating system space.

    It’s also quite analogous to what Facebook was doing with the “friends permissions” giveaway of Facebook profile information to app developers. In both cases we have a major platform with a giant privacy-violating loophole built in, one that developers know about but that the public doesn’t really realize it’s signing up for. That’s become much of the modern internet giant business model, and as we can see it’s a model that feeds on itself. Google and Facebook feed information to each other, indicating that the Big Data giants have determined that it’s more profitable to share their data on all of us than to keep it locked up and proprietary.

    Recall how Face­book whis­tle-blow­er Sandy Parak­i­las said he remem­bered Face­book exec­u­tives get­ting con­cerned that they were giv­ing so much of their infor­ma­tion on peo­ple away to app devel­op­ers that com­peti­tors would be able to cre­ate their own social net­works. That’s how much data Face­book was giv­ing away. And now we learn that Google’s oper­at­ing sys­tem made an egre­gious amount of data avail­able to app devel­op­ers — like meta­da­ta on calls and texts — if peo­ple gave an app “con­tact” per­mis­sions.

    And so we can see that Facebook and Google aren’t just in the ad space. They’re in the data brokerage space too. They’ve clearly determined that maximizing profits just might require handing over the kind of data people assumed these data giants carefully guarded. Instead, they’ve been carefully and steadily handing that data out. Presumably because it’s more profitable:

    Giz­mo­do

    Facebook’s Defense for Suck­ing Up Your Call and Text Data Entire­ly Miss­es the Point

    Rhett Jones
    3/26/18 2:00pm

    A num­ber of Face­book users dis­cov­ered over the past few days that the social media com­pa­ny had col­lect­ed a creepy lev­el of infor­ma­tion about their calls and texts. Many users claimed they nev­er gave Face­book per­mis­sion to gath­er this infor­ma­tion. How­ev­er, in response to the uproar, Face­book says the “fea­ture” is opt-in only. Basi­cal­ly, the company’s say­ing it’s your own fault if you don’t like it.

    To under­stand what Face­book is defend­ing requires a lot of explanation—and that’s the heart of the prob­lem.

    ...

    But as the com­pa­ny faces grow­ing scruti­ny over its data prac­tices, a num­ber of users began dig­ging around in their archives. Spurred by a tweet from devel­op­er Dylan McK­ay, social media users com­plained this week­end that Face­book had records of their con­tacts, as well as call and text meta­da­ta. Face­book has let users export their data since 2010.

    “Downloaded my facebook data as a ZIP file. Somehow it has my entire call history with my partner’s mum.” pic.twitter.com/CIRUguf4vD (Dylan McKay, @dylanmckaynz, March 21, 2018)

    Ars Technica spoke with numerous users who felt blindsided, and the publication’s staff did their own tests, finding SMS data and contacts data from an Android device they used in 2015 and 2016. From the report:

    Face­book uses phone-con­tact data as part of its friend rec­om­men­da­tion algo­rithm. And in recent ver­sions of the Mes­sen­ger appli­ca­tion for Android and Face­book Lite devices, a more explic­it request is made to users for access to call logs and SMS logs on Android and Face­book Lite devices. But even if users didn’t give that per­mis­sion to Mes­sen­ger, they may have giv­en it inad­ver­tent­ly for years through Facebook’s mobile apps—because of the way Android has han­dled per­mis­sions for access­ing call logs in the past.

    If you grant­ed per­mis­sion to read con­tacts dur­ing Facebook’s instal­la­tion on Android a few ver­sions ago—specifically before Android 4.1 (Jel­ly Bean)—that per­mis­sion also grant­ed Face­book access to call and mes­sage logs by default. The per­mis­sion struc­ture was changed in the Android API in ver­sion 16. But Android appli­ca­tions could bypass this change if they were writ­ten to ear­li­er ver­sions of the API, so Face­book API could con­tin­ue to gain access to call and SMS data by spec­i­fy­ing an ear­li­er Android SDK ver­sion. Google dep­re­cat­ed ver­sion 4.0 of the Android API in Octo­ber 2017—the point at which the lat­est call meta­da­ta in Face­book users’ data was found. Apple iOS has nev­er allowed silent access to call data.

    To put all of that into plain Eng­lish, Google’s Android OS has its own pri­va­cy issues, and cou­pled with Facebook’s apps, it could’ve made it pos­si­ble for Face­book users to opt-into the company’s sur­veil­lance pro­gram with­out real­iz­ing it.

    Face­book respond­ed on Sun­day with a “Fact Check” blog post claim­ing that any asser­tion that “Face­book has been log­ging people’s call and SMS (text) his­to­ry with­out their per­mis­sion” is false. As the unsigned blog reads, in part:

    Call and text his­to­ry log­ging is part of an opt-in fea­ture for peo­ple using Mes­sen­ger or Face­book Lite on Android. This helps you find and stay con­nect­ed with the peo­ple you care about, and pro­vides you with a bet­ter expe­ri­ence across Face­book. Peo­ple have to express­ly agree to use this fea­ture. If, at any time, they no longer wish to use this fea­ture they can turn it off in set­tings, or here for Face­book Lite users, and all pre­vi­ous­ly shared call and text his­to­ry shared via that app is delet­ed. While we receive cer­tain per­mis­sions from Android, upload­ing this infor­ma­tion has always been opt-in only.

    It’s true that Face­book, as far as we know, has always made SMS meta­da­ta col­lec­tion an opt-in part of the set­up process. But take a look at the dif­fer­ence between today’s opt-in screen and one users saw back in 2016.

    Today, Mes­sen­ger gives you the options to “turn on” meta­da­ta col­lec­tion, opt-out, or learn more. But before it faced crit­i­cism in 2016, the only options were “OK” or “set­tings.” So, it’s like­ly many peo­ple gave Face­book per­mis­sion at one time with­out real­iz­ing it.

    This is an excel­lent illus­tra­tion of the web that Face­book weaves. In the Cam­bridge Ana­lyt­i­ca scan­dal, Face­book allowed the per­son­al data of 50 mil­lion users to get into the hands of a third-par­ty app, in part because its poli­cies gave up the data of users’ friends based on per­mis­sion from a sin­gle user. When that third par­ty trans­ferred the infor­ma­tion to a polit­i­cal data analy­sis firm, which was a vio­la­tion of Facebook’s poli­cies, Face­book did noth­ing when it found out in 2015 but issue a stern warn­ing and make Cam­bridge Ana­lyt­i­ca sign a doc­u­ment promis­ing that the data was delet­ed. Now, Face­book says that it no longer shot­guns that data out to devel­op­ers based on a sin­gle per­mis­sion, so appar­ent­ly every­one should feel okay going for­ward.

    Explain­ing what’s going on shouldn’t be so dif­fi­cult or time-con­sum­ing. Face­book claims this is all designed to make things more con­ve­nient for you. But it doesn’t have to con­stant­ly track your text mes­sages and the dura­tion of your calls just to cap­ture your con­tacts list. That could be a one-time thing that you do when you set the ser­vice up, and Face­book could peri­od­i­cal­ly ask if you want to do anoth­er import a month lat­er.

    How­ev­er, Face­book has turned a con­ve­nience into an excuse for grab­bing more infor­ma­tion that it can com­bine with every­thing else to make a per­fect psy­cho­log­i­cal and social pro­file of you, the user. And it has demon­strat­ed that it can’t be trust­ed to keep that data to itself.

    Mark Zuckerberg told CNN last week that he was open to more regulations being applied to his platform. “You know, I think in general technology is an increasingly important trend in the world and I actually think the question is more what’s the right regulation rather than ‘yes’ or ‘no’ should it be regulated,” he said. This is foolish because government regulations will undoubtedly get screwed up and lead to unintended consequences.

    But if Mark insists, the gov­ern­ment could cre­ate strict terms of ser­vice require­ments for what a com­pa­ny explains to a user before they sign up. Those reg­u­la­tions could require clear exam­ples of how data might be used and even require users to com­plete a sim­ple quiz to show they under­stand before final­iz­ing the app’s set­up. Of course, that kind of bur­den­some activ­i­ty wouldn’t be nec­es­sary if Face­book would just make every­thing clear on its own. Unfor­tu­nate­ly, with con­gres­sion­al hear­ings sched­uled, and gov­ern­ment agency inves­ti­ga­tions under­way, it may be too late.

    ———-

    “Facebook’s Defense for Suck­ing Up Your Call and Text Data Entire­ly Miss­es the Point” by Rhett Jones; Giz­mo­do; 03/26/2018

    “To under­stand what Face­book is defend­ing requires a lot of explanation—and that’s the heart of the prob­lem.”

    It’s a key insight: It’s really a reflection of the heart of the problem that simply understanding what Facebook is defending requires a lot of explanation. When Facebook started collecting people’s call and text metadata through its app, it was exploiting the fact that Google’s Android system allowed it to do that in the first place when users gave “contact” permissions to an app (most people probably didn’t assume that giving an app contact permission was also giving away call and text metadata). And then, after Google changed the Android app permissions system and separated the permissions for contact information from the permissions for call and text metadata, Facebook relied on a loophole Google provided where apps that were already installed could continue collecting that data. And none of this was ever made clear to the millions of people using the Facebook app on their Android phones, because it was hidden in the dense text of user agreements that no one reads. The convolutedness of the act obscures the act.

    And keep in mind that Facebook is claiming that it merely wanted this call and text metadata for its friend recommendation algorithm. Which is, of course, absurd. That data was obviously going to go into the pool of data Facebook is compiling on everyone.

    ...
    But as the com­pa­ny faces grow­ing scruti­ny over its data prac­tices, a num­ber of users began dig­ging around in their archives. Spurred by a tweet from devel­op­er Dylan McK­ay, social media users com­plained this week­end that Face­book had records of their con­tacts, as well as call and text meta­da­ta. Face­book has let users export their data since 2010.

    “Downloaded my facebook data as a ZIP file. Somehow it has my entire call history with my partner’s mum.” pic.twitter.com/CIRUguf4vD (Dylan McKay, @dylanmckaynz, March 21, 2018)

    Ars Technica spoke with numerous users who felt blindsided, and the publication’s staff did their own tests, finding SMS data and contacts data from an Android device they used in 2015 and 2016. From the report:

    Face­book uses phone-con­tact data as part of its friend rec­om­men­da­tion algo­rithm. And in recent ver­sions of the Mes­sen­ger appli­ca­tion for Android and Face­book Lite devices, a more explic­it request is made to users for access to call logs and SMS logs on Android and Face­book Lite devices. But even if users didn’t give that per­mis­sion to Mes­sen­ger, they may have giv­en it inad­ver­tent­ly for years through Facebook’s mobile apps—because of the way Android has han­dled per­mis­sions for access­ing call logs in the past.

    If you grant­ed per­mis­sion to read con­tacts dur­ing Facebook’s instal­la­tion on Android a few ver­sions ago—specifically before Android 4.1 (Jel­ly Bean)—that per­mis­sion also grant­ed Face­book access to call and mes­sage logs by default. The per­mis­sion struc­ture was changed in the Android API in ver­sion 16. But Android appli­ca­tions could bypass this change if they were writ­ten to ear­li­er ver­sions of the API, so Face­book API could con­tin­ue to gain access to call and SMS data by spec­i­fy­ing an ear­li­er Android SDK ver­sion. Google dep­re­cat­ed ver­sion 4.0 of the Android API in Octo­ber 2017—the point at which the lat­est call meta­da­ta in Face­book users’ data was found. Apple iOS has nev­er allowed silent access to call data.

    To put all of that into plain Eng­lish, Google’s Android OS has its own pri­va­cy issues, and cou­pled with Facebook’s apps, it could’ve made it pos­si­ble for Face­book users to opt-into the company’s sur­veil­lance pro­gram with­out real­iz­ing it.
    ...

    “To put all of that into plain Eng­lish, Google’s Android OS has its own pri­va­cy issues, and cou­pled with Facebook’s apps, it could’ve made it pos­si­ble for Face­book users to opt-into the company’s sur­veil­lance pro­gram with­out real­iz­ing it.”

    Facebook and Google working together to share more of what they know about us with each other. That’s basically what happened. It was a team effort.

    And as the article notes, when Facebook claims that this was all fine because it was an opt-in option, it ignores the fact that the app used to make it very unclear that opting out was an option at all. The opt-out option was hidden in the settings, and opting in was the default setting people had selected when they installed the app. And it was like that as recently as 2016:

    ...
    Face­book respond­ed on Sun­day with a “Fact Check” blog post claim­ing that any asser­tion that “Face­book has been log­ging people’s call and SMS (text) his­to­ry with­out their per­mis­sion” is false. As the unsigned blog reads, in part:

    Call and text his­to­ry log­ging is part of an opt-in fea­ture for peo­ple using Mes­sen­ger or Face­book Lite on Android. This helps you find and stay con­nect­ed with the peo­ple you care about, and pro­vides you with a bet­ter expe­ri­ence across Face­book. Peo­ple have to express­ly agree to use this fea­ture. If, at any time, they no longer wish to use this fea­ture they can turn it off in set­tings, or here for Face­book Lite users, and all pre­vi­ous­ly shared call and text his­to­ry shared via that app is delet­ed. While we receive cer­tain per­mis­sions from Android, upload­ing this infor­ma­tion has always been opt-in only.

    It’s true that Face­book, as far as we know, has always made SMS meta­da­ta col­lec­tion an opt-in part of the set­up process. But take a look at the dif­fer­ence between today’s opt-in screen and one users saw back in 2016.

    Today, Mes­sen­ger gives you the options to “turn on” meta­da­ta col­lec­tion, opt-out, or learn more. But before it faced crit­i­cism in 2016, the only options were “OK” or “set­tings.” So, it’s like­ly many peo­ple gave Face­book per­mis­sion at one time with­out real­iz­ing it.
    ...

    And it’s also all an example of how the ostensibly helpful reasons to collect this personalized data (like making the friend recommendation algorithms better, in this case) are used as an excuse to engage in the personal information equivalent of a smash-and-grab ransacking:

    ...
    This is an excel­lent illus­tra­tion of the web that Face­book weaves. In the Cam­bridge Ana­lyt­i­ca scan­dal, Face­book allowed the per­son­al data of 50 mil­lion users to get into the hands of a third-par­ty app, in part because its poli­cies gave up the data of users’ friends based on per­mis­sion from a sin­gle user. When that third par­ty trans­ferred the infor­ma­tion to a polit­i­cal data analy­sis firm, which was a vio­la­tion of Facebook’s poli­cies, Face­book did noth­ing when it found out in 2015 but issue a stern warn­ing and make Cam­bridge Ana­lyt­i­ca sign a doc­u­ment promis­ing that the data was delet­ed. Now, Face­book says that it no longer shot­guns that data out to devel­op­ers based on a sin­gle per­mis­sion, so appar­ent­ly every­one should feel okay going for­ward.

    Explain­ing what’s going on shouldn’t be so dif­fi­cult or time-con­sum­ing. Face­book claims this is all designed to make things more con­ve­nient for you. But it doesn’t have to con­stant­ly track your text mes­sages and the dura­tion of your calls just to cap­ture your con­tacts list. That could be a one-time thing that you do when you set the ser­vice up, and Face­book could peri­od­i­cal­ly ask if you want to do anoth­er import a month lat­er.

    How­ev­er, Face­book has turned a con­ve­nience into an excuse for grab­bing more infor­ma­tion that it can com­bine with every­thing else to make a per­fect psy­cho­log­i­cal and social pro­file of you, the user. And it has demon­strat­ed that it can’t be trust­ed to keep that data to itself.
    ...

    “How­ev­er, Face­book has turned a con­ve­nience into an excuse for grab­bing more infor­ma­tion that it can com­bine with every­thing else to make a per­fect psy­cho­log­i­cal and social pro­file of you, the user. And it has demon­strat­ed that it can’t be trust­ed to keep that data to itself.”

    While Facebook may not have perfect psychological and social profiles of everyone, it probably has the best or nearly the best, with Google possibly knowing more about people. And it’s hard to imagine that this call and text metadata isn’t potentially pretty valuable information for putting together those personal profiles on everyone. So it’s worth noting that this is potentially the same kind of profile data that Facebook gave out to Cambridge Analytica and thousands of other app developers. In other words, this call and text metadata slurping scandal is potentially also part of the Cambridge Analytica scandal, in the sense that the insights Facebook gained from the call and text metadata could have shown up in those profiles Facebook was handing out to app developers like Cambridge Analytica.

    Which is a reminder that this new scan­dal of Google’s Android OS giv­ing Face­book this call and text meta­da­ta prob­a­bly involves a lot more than just Face­book col­lect­ing this kind of data. Who knows how many oth­er app devel­op­ers whose apps request­ed “con­tact” per­mis­sions also went ahead and grabbed all the call and text meta­da­ta?

    Also don’t for­get that this call and text meta­da­ta includes data about the peo­ple on the oth­er side of those calls and texts. So Face­book was grab­bing data on more peo­ple than just the app users. And any oth­er Android devel­op­ers were poten­tial­ly grab­bing that data too. It’s anoth­er par­al­lel with the Face­book “friends per­mis­sion” loop­hole exploit­ed by Cam­bridge Ana­lyt­i­ca and oth­er Face­book app devel­op­ers: you don’t have to down­load these pri­va­cy vio­lat­ing apps to be impact­ed. Sim­ply com­mu­ni­cat­ing with some­one who does have the pri­va­cy vio­lat­ing app will get your pri­va­cy vio­lat­ed too.

    So as we can see, Facebook doesn’t just have a scandal involving giving private data away. It also has a scandal involving collecting private data. A scandal that potentially any other Android app developer might be involved in too. Which means there’s probably a black market for this kind of data too. Because Google, like Facebook, apparently couldn’t resist making itself a data broker. And now all this data is potentially floating around out there. It was a wildly irresponsible act on Google’s part to make that kind of data available under the “contacts” permissions in the Android operating system, but that’s how far Google went in designing that system to prioritize data collection. Presumably to encourage more app developers to make Android apps. Access to our data is literally part of the incentive structure. It’s really quite stunning. And quite analogous to what Facebook is in trouble for with Cambridge Analytica.

    But at least those Face­book friend rec­om­men­da­tion algo­rithms are prob­a­bly very well pow­ered, so there’s that.

    Posted by Pterrafractyl | April 1, 2018, 11:15 pm
  2. We should prob­a­bly get ready for a lot more sto­ries like this: Face­book just issued a flur­ry of new updates to its data-shar­ing poli­cies. Some of these changes include new restric­tions on the data made avail­able to app devel­op­ers while oth­er changes are focused on clar­i­fy­ing the user agree­ments that dis­close what data is tak­en.

    And there’s a new esti­mate from Face­book on the num­ber of Face­book pro­files grabbed by Cam­bridge Ana­lyt­i­ca’s app. It’s gone from 50 mil­lion to 87 mil­lion pro­files:

    Asso­ci­at­ed Press

    Face­book: 87M Users May Have Had Data Breached By Cam­bridge Ana­lyt­i­ca

    By BARBARA ORTUTAY
    April 4, 2018 2:47 pm

    NEW YORK (AP) — Face­book revealed Wednes­day that tens of mil­lions more peo­ple might have been exposed in the Cam­bridge Ana­lyt­i­ca pri­va­cy scan­dal than pre­vi­ous­ly thought and said it will restrict the data it allows out­siders to access on its users.

    Those devel­op­ments came as con­gres­sion­al offi­cials said CEO Mark Zucker­berg will tes­ti­fy next week, while Face­book unveiled a new pri­va­cy pol­i­cy that aims to explain the data it gath­ers on users more clear­ly — but doesn’t actu­al­ly change what it col­lects and shares.

Face­book is fac­ing its worst pri­va­cy scan­dal in years fol­low­ing alle­ga­tions that a Trump-affil­i­at­ed data min­ing firm, Cam­bridge Ana­lyt­i­ca, used ill-got­ten data from mil­lions of users to try to influ­ence elec­tions. The com­pa­ny said Wednes­day that as many as 87 mil­lion peo­ple might have had their data accessed — an increase from the 50 mil­lion dis­closed in pub­lished reports.

    This Mon­day, all Face­book users will receive a notice on their Face­book feeds with a link to see what apps they use and what infor­ma­tion they have shared with those apps. They’ll have a chance to delete apps they no longer want. Users who might have had their data shared with Cam­bridge Ana­lyt­i­ca will be told of that. Face­book says most of the affect­ed users are in the U.S.

    ...

    Face­book is restrict­ing access that apps can get about users’ events, as well as infor­ma­tion about groups such as mem­ber lists and con­tent. In addi­tion, the com­pa­ny is also remov­ing the option to search for users by enter­ing a phone num­ber or an email address. While this was use­ful to peo­ple to find friends who may have a com­mon name, Face­book says mali­cious actors abused it by col­lect­ing people’s pro­file infor­ma­tion through phone or email lists they had access to.

    This comes on top of changes announced a few weeks ago. For exam­ple, Face­book has said it will remove devel­op­ers’ access to people’s data if the per­son has not used the app in three months.

    Ear­li­er Wednes­day, Face­book unveiled a new pri­va­cy pol­i­cy that seeks to clar­i­fy its data col­lec­tion and use.

    For instance, Face­book added a sec­tion explain­ing that it col­lects people’s con­tact infor­ma­tion if they choose to “upload, sync or import” this to the ser­vice. This may include users’ address books on their phones, as well as their call logs and text his­to­ries. The new pol­i­cy says Face­book may use this data to help “you and oth­ers find peo­ple you may know.”

    The pre­vi­ous pol­i­cy did not men­tion call logs or text his­to­ries. Sev­er­al users were sur­prised to learn recent­ly that Face­book had been col­lect­ing infor­ma­tion about whom they texted or called and for how long, though not the actu­al con­tents of text mes­sages. It seemed to have been done with­out explic­it con­sent, though Face­book says it col­lect­ed such data only from Android users who specif­i­cal­ly allowed it to do so — for instance, by agree­ing to per­mis­sions when installing Face­book.

    Face­book also added clar­i­fi­ca­tion that local laws could affect what it does with “sen­si­tive” data on peo­ple, such as infor­ma­tion about a user’s race or eth­nic­i­ty, health, polit­i­cal views or even trade union mem­ber­ship. This and oth­er infor­ma­tion, the new pol­i­cy states, “could be sub­ject to spe­cial pro­tec­tions under the laws of your coun­try.” But it means the com­pa­ny is unlike­ly to apply stricter pro­tec­tions to coun­tries with loos­er pri­va­cy laws — such as the U.S., for exam­ple. Face­book has always had region­al dif­fer­ences in poli­cies, and the new doc­u­ment makes that clear­er.

    The new pol­i­cy also makes it clear that What­sApp and Insta­gram are part of Face­book and that the com­pa­nies share infor­ma­tion about users. The two were not men­tioned in the pre­vi­ous pol­i­cy. While What­sApp still doesn’t show adver­tise­ments, and has its own pri­va­cy pol­i­cy, Insta­gram long has and its pol­i­cy is the same as Facebook’s. But the notice could be a sign of things to come for What­sApp as well.

    Oth­er changes incor­po­rate some of the restric­tions Face­book pre­vi­ous­ly announced on what third-par­ty apps can col­lect from users and their friends.

    Although Face­book says the changes aren’t prompt­ed by recent events or tighter pri­va­cy rules com­ing from the EU, it’s an oppor­tune time. It comes as Zucker­berg is set to appear April 11 before a House com­mit­tee — his first tes­ti­mo­ny before Con­gress.

    As Face­book evolved from a closed, Har­vard-only net­work with no ads to a giant cor­po­ra­tion with $40 bil­lion in adver­tis­ing rev­enue and huge sub­sidiaries like Insta­gram and What­sApp, its pri­va­cy pol­i­cy has also shift­ed — over and over.

    Almost always, crit­ics say, the changes meant a move away from pro­tect­ing user pri­va­cy toward push­ing open­ness and more shar­ing. On the oth­er hand, reg­u­la­to­ry and user pres­sure has some­times led Face­book to pull back on its data col­lec­tion and use and to explain things in plain­er lan­guage — in con­trast to dense legalese from many oth­er inter­net com­pa­nies.

    The pol­i­cy changes come a week after Face­book gave its pri­va­cy set­tings a makeover. The com­pa­ny tried to make it eas­i­er to nav­i­gate its com­plex and often con­fus­ing pri­va­cy and secu­ri­ty set­tings, though the makeover didn’t change what Face­book col­lects and shares either.

    Those who fol­lowed Facebook’s pri­va­cy gaffes over the years may feel a sense of famil­iar­i­ty. Over and over, the com­pa­ny — often Zucker­berg — owned up to mis­steps and promised changes.

    In 2009, the com­pa­ny announced that it was con­sol­i­dat­ing six pri­va­cy pages and more than 30 set­tings on to a sin­gle pri­va­cy page. Yet, some­how, the com­pa­ny said last week that users still had to go to 20 dif­fer­ent places to access all of their pri­va­cy con­trols and it was chang­ing this so the con­trols will be acces­si­ble from a sin­gle place.

    ———-

    “Face­book: 87M Users May Have Had Data Breached By Cam­bridge Ana­lyt­i­ca” by BARBARA ORTUTAY; Asso­ci­at­ed Press; 04/04/2018

“Face­book is fac­ing its worst pri­va­cy scan­dal in years fol­low­ing alle­ga­tions that a Trump-affil­i­at­ed data min­ing firm, Cam­bridge Ana­lyt­i­ca, used ill-got­ten data from mil­lions of users to try to influ­ence elec­tions. The com­pa­ny said Wednes­day that as many as 87 mil­lion peo­ple might have had their data accessed — an increase from the 50 mil­lion dis­closed in pub­lished reports.

From 50 million to 87 million. It's quite a jump. How high might it get by the time this is all over? We'll see.

    And beyond that update, Face­book also updat­ed their data-col­lec­tion dis­clo­sure poli­cies. Now they’re actu­al­ly men­tion­ing things like the grab­bing of call and text data off of your smart­phone, which they appar­ent­ly did­n’t feel the need to tell peo­ple about before:

    ...
    Ear­li­er Wednes­day, Face­book unveiled a new pri­va­cy pol­i­cy that seeks to clar­i­fy its data col­lec­tion and use.

    For instance, Face­book added a sec­tion explain­ing that it col­lects people’s con­tact infor­ma­tion if they choose to “upload, sync or import” this to the ser­vice. This may include users’ address books on their phones, as well as their call logs and text his­to­ries. The new pol­i­cy says Face­book may use this data to help “you and oth­ers find peo­ple you may know.”

    The pre­vi­ous pol­i­cy did not men­tion call logs or text his­to­ries. Sev­er­al users were sur­prised to learn recent­ly that Face­book had been col­lect­ing infor­ma­tion about whom they texted or called and for how long, though not the actu­al con­tents of text mes­sages. It seemed to have been done with­out explic­it con­sent, though Face­book says it col­lect­ed such data only from Android users who specif­i­cal­ly allowed it to do so — for instance, by agree­ing to per­mis­sions when installing Face­book.
    ...

    And note how Face­book’s update on how local pri­va­cy laws could affect its han­dling of “sen­si­tive” data implies that the absence of those local laws means the same “sen­si­tive” data isn’t going to be han­dled in a sen­si­tive man­ner. So if you were hop­ing the big new EU data pri­va­cy rules were going to impact Face­book’s poli­cies out­side the EU, nope:

    ...
    Face­book also added clar­i­fi­ca­tion that local laws could affect what it does with “sen­si­tive” data on peo­ple, such as infor­ma­tion about a user’s race or eth­nic­i­ty, health, polit­i­cal views or even trade union mem­ber­ship. This and oth­er infor­ma­tion, the new pol­i­cy states, “could be sub­ject to spe­cial pro­tec­tions under the laws of your coun­try.” But it means the com­pa­ny is unlike­ly to apply stricter pro­tec­tions to coun­tries with loos­er pri­va­cy laws — such as the U.S., for exam­ple. Face­book has always had region­al dif­fer­ences in poli­cies, and the new doc­u­ment makes that clear­er.
    ...

And that's just some of the updates Facebook issued today. And while a number of these updates are pretty notable, perhaps the most notable part of this flurry of updates is that they actually increase privacy protections, which is not how these updates have normally gone for Facebook in the past:

    ...
    As Face­book evolved from a closed, Har­vard-only net­work with no ads to a giant cor­po­ra­tion with $40 bil­lion in adver­tis­ing rev­enue and huge sub­sidiaries like Insta­gram and What­sApp, its pri­va­cy pol­i­cy has also shift­ed — over and over.

    Almost always, crit­ics say, the changes meant a move away from pro­tect­ing user pri­va­cy toward push­ing open­ness and more shar­ing. On the oth­er hand, reg­u­la­to­ry and user pres­sure has some­times led Face­book to pull back on its data col­lec­tion and use and to explain things in plain­er lan­guage — in con­trast to dense legalese from many oth­er inter­net com­pa­nies.
    ...

    And now let’s take a look at one of the oth­er dis­clo­sures Face­book made today: Remem­ber how Face­book whis­tle-blow­er Sandy Parak­i­las spec­u­lat­ed that a major­i­ty of Face­book users prob­a­bly had their Face­book pro­file infor­ma­tion scraped by app devel­op­ers using exact­ly the same tech­nique Cam­bridge Ana­lyt­i­ca used? Well, it looks like Face­book has very belat­ed­ly arrived at the same con­clu­sion:

    Wash­ing­ton Post

    Face­book said the per­son­al data of most of its 2 bil­lion users has been col­lect­ed and shared with out­siders

    by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin
    April 4, 2018 at 5:09 PM

    Face­book said Wednes­day that most of its 2 bil­lion users like­ly have had their pub­lic pro­files scraped by out­siders with­out the users’ explic­it per­mis­sion, dra­mat­i­cal­ly rais­ing the stakes in a pri­va­cy con­tro­ver­sy that has dogged the com­pa­ny for weeks, spurred inves­ti­ga­tions in the Unit­ed States and Europe, and sent the com­pa­ny’s stock price tum­bling.

    ...

    “We’re an ide­al­is­tic and opti­mistic com­pa­ny, and for the first decade, we were real­ly focused on all the good that con­nect­ing peo­ple brings,” Chief Exec­u­tive Mark Zucker­berg said on a call with reporters Wednes­day after­noon. “But it’s clear now that we didn’t focus enough on pre­vent­ing abuse and think­ing about how peo­ple could use these tools for harm as well.”

    As part of the dis­clo­sure, Face­book for the first time detailed the scale of the improp­er data col­lec­tion for Cam­bridge Ana­lyt­i­ca, a polit­i­cal data con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­can can­di­dates in the last two fed­er­al elec­tion cycles. The polit­i­cal con­sul­tan­cy gained access to Face­book infor­ma­tion on up to 87 mil­lion users, 71 mil­lion of whom are Amer­i­cans, Face­book said. Cam­bridge Ana­lyt­i­ca obtained the data to build “psy­cho­graph­ic” pro­files that would help deliv­er tar­get­ed mes­sages intend­ed to shape vot­er behav­ior in a wide range of U.S. elec­tions.

    But in research sparked by rev­e­la­tions from a Cam­bridge Ana­lyt­i­ca whistle­blow­er last month, Face­book deter­mined that the prob­lem of third-par­ty col­lec­tion of user data was far larg­er still and, with the com­pa­ny’s mas­sive user base, like­ly affect­ed a large cross-sec­tion of peo­ple in the devel­oped world.

    “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped,” the com­pa­ny wrote in its blog post.

    The scrap­ing by mali­cious actors typ­i­cal­ly involved gath­er­ing pub­lic pro­file infor­ma­tion — includ­ing names, email address­es and phone num­bers, accord­ing to Face­book — by using a “search and account recov­ery” func­tion that Face­book said it has now dis­abled. Face­book did­n’t make clear in its post exact­ly what data was col­lect­ed.

    The data obtained by Cam­bridge Ana­lyt­i­ca was more detailed and exten­sive, includ­ing the names, home towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.

    Face­book ini­tial­ly had sought to down­play the prob­lem, say­ing in March only that 270,000 peo­ple had respond­ed to a sur­vey on an app cre­at­ed by the researcher in 2014. That net­ted Cam­bridge Ana­lyt­i­ca the data on the friends of those who respond­ed to the sur­vey, with­out their per­mis­sion. But Face­book declined to say at the time how many oth­er users may have had their data col­lect­ed in the process. The whistle­blow­er, Christo­pher Wylie, a for­mer researcher for the com­pa­ny, said the real num­ber of affect­ed peo­ple was at least 50 mil­lion.

    Wylie tweet­ed on Wednes­day after­noon that Cam­bridge Ana­lyt­i­ca could have obtained even more than 87 mil­lion pro­files. “Could be more tbh,” he wrote, using an abbre­vi­a­tion for “to be hon­est.”

    Cam­bridge Ana­lyt­i­ca on Wednes­day respond­ed to Face­book’s announce­ment by say­ing that it had licensed data on 30 mil­lion users. Face­book banned Cam­bridge Ana­lyt­i­ca from its plat­form last month for obtain­ing the data under false pre­tens­es.

    Face­book’s announce­ment, made near the bot­tom of a blog post Wednes­day after­noon on plans to restrict access to data in the future, under­scores the sever­i­ty of a data mishap that appears to have affect­ed about one out of every four Amer­i­cans and sparked wide­spread out­rage at the care­less­ness of the com­pa­ny’s han­dling of infor­ma­tion on its users. Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    With its moves over the past week, Face­book is embark­ing on a major shift in its rela­tion­ship with third-par­ty app devel­op­ers that have used Facebook’s vast net­work to expand their busi­ness­es. What was large­ly an auto­mat­ed process will now involve devel­op­ers agree­ing to “strict require­ments,” the com­pa­ny said in its blog post Wednes­day. The 2015 pol­i­cy change cur­tailed devel­op­ers’ abil­i­ties to access data about people’s friends net­works but left open many loop­holes that the com­pa­ny tight­ened on Wednes­day.

    The news quick­ly rever­ber­at­ed on Capi­tol Hill, where law­mak­ers are set to grill Zucker­berg at a series of hear­ings next week.

    “The more we learn, the clear­er it is that this was an avalanche of pri­va­cy vio­la­tions that strike at the core of one of our most pre­cious Amer­i­can val­ues – the right to pri­va­cy,” said Sen. Ed Markey (D‑Mass.), who serves on the Sen­ate Com­merce Com­mit­tee, which has called on Zucker­berg to tes­ti­fy at a hear­ing next week.

    “This lat­est rev­e­la­tion is extreme­ly trou­bling and shows that Face­book still has a lot of work to do to deter­mine how big this breach actu­al­ly is,” said Rep. Frank Pal­lone Jr. (D‑N.J.), the top Demo­c­rat on the House Ener­gy and Com­merce Com­mit­tee, which will hear from Zucker­berg on Wednes­day.

    “I’m deeply con­cerned that Face­book only address­es con­cerns on its plat­form when it becomes a pub­lic cri­sis, and that is sim­ply not the way you run a com­pa­ny that is used by over 2 bil­lion peo­ple,” he said. “We need to know how they are going to fix this prob­lem next week at our hear­ing.”

    Face­book announced plans on Wednes­day to add new restric­tions to how out­siders can gain access to this data, the lat­est steps in a years-long process by the com­pa­ny to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion.

    Cam­bridge Ana­lyt­i­ca, which col­lect­ed this infor­ma­tion with the help of Cam­bridge Uni­ver­si­ty psy­chol­o­gist Alek­san­dr Kogan, was found­ed by a mul­ti­mil­lion-dol­lar invest­ment by hedge-fund bil­lion­aire Robert Mer­cer and head­ed by his daugh­ter, Rebekah Mer­cer, who was the com­pa­ny’s pres­i­dent, accord­ing to doc­u­ments pro­vid­ed by Wylie. Serv­ing as vice pres­i­dent was con­ser­v­a­tive strate­gist Stephen K. Ban­non, who also was the head of Bre­it­bart News. He has since left both jobs and also his post as top White House advis­er to Trump.

    Until Wednes­day, apps that let peo­ple input a Face­book event into their cal­en­dar could also auto­mat­i­cal­ly import lists of all the peo­ple who attend­ed that event, Face­book said. Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group. App devel­op­ers who want this access will now have to prove their activ­i­ties ben­e­fit the group. Face­book will now need to approve tools that busi­ness­es use to oper­ate Face­book pages. A busi­ness that uses an app to help it respond quick­ly to cus­tomer mes­sages, for exam­ple, will not be able to do so auto­mat­i­cal­ly. Devel­op­ers’ access to Insta­gram will also be severe­ly restrict­ed.

Face­book is now ban­ning apps from access­ing users’ infor­ma­tion about their reli­gious or polit­i­cal views, rela­tion­ship sta­tus, edu­ca­tion, work his­to­ry, fit­ness activ­i­ty, book read­ing habits, music lis­ten­ing and news read­ing activ­i­ty, video watch­ing and games. Data bro­kers and busi­ness­es col­lect this type of infor­ma­tion to build pro­files of their cus­tomers’ tastes.

    Face­book last week said it is also shut­ting down access to data bro­kers who use their own data to tar­get cus­tomers on Face­book.

    Facebook’s broad changes to how data is used apply most­ly to out­siders and third par­ties. Face­book is not lim­it­ing the data the com­pa­ny itself can col­lect, nor is it restrict­ing its abil­i­ty to pro­file users to enable adver­tis­ers to tar­get them with per­son­al­ized mes­sages. One piece of data Face­book said it would stop col­lect­ing was the time of phone calls, a response to out­rage from users of Facebook’s mes­sen­ger ser­vice who dis­cov­ered that allow­ing Face­book to access their phone con­tact list was giv­ing the com­pa­ny access to their call logs.

    ———-

    “Face­book said the per­son­al data of most of its 2 bil­lion users has been col­lect­ed and shared with out­siders” by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin; Wash­ing­ton Post; 04/04/2018

    “Face­book said Wednes­day that most of its 2 bil­lion users like­ly have had their pub­lic pro­files scraped by out­siders with­out the users’ explic­it per­mis­sion, dra­mat­i­cal­ly rais­ing the stakes in a pri­va­cy con­tro­ver­sy that has dogged the com­pa­ny for weeks, spurred inves­ti­ga­tions in the Unit­ed States and Europe, and sent the com­pa­ny’s stock price tum­bling.”

So a billion or so people probably had their Facebook profile data sucked away by app developers. Facebook apparently just discovered this. And while it's laughable to imagine that Facebook only now suddenly discovered this, recall how Sandy Parakilas also said executives had an "it's best not to know" attitude about how this data was used by third parties. So it's possible that Facebook technically didn't officially know this until now because they officially never looked before:

    ...
    As part of the dis­clo­sure, Face­book for the first time detailed the scale of the improp­er data col­lec­tion for Cam­bridge Ana­lyt­i­ca, a polit­i­cal data con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­can can­di­dates in the last two fed­er­al elec­tion cycles. The polit­i­cal con­sul­tan­cy gained access to Face­book infor­ma­tion on up to 87 mil­lion users, 71 mil­lion of whom are Amer­i­cans, Face­book said. Cam­bridge Ana­lyt­i­ca obtained the data to build “psy­cho­graph­ic” pro­files that would help deliv­er tar­get­ed mes­sages intend­ed to shape vot­er behav­ior in a wide range of U.S. elec­tions.

    But in research sparked by rev­e­la­tions from a Cam­bridge Ana­lyt­i­ca whistle­blow­er last month, Face­book deter­mined that the prob­lem of third-par­ty col­lec­tion of user data was far larg­er still and, with the com­pa­ny’s mas­sive user base, like­ly affect­ed a large cross-sec­tion of peo­ple in the devel­oped world.

    “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped,” the com­pa­ny wrote in its blog post.

    The scrap­ing by mali­cious actors typ­i­cal­ly involved gath­er­ing pub­lic pro­file infor­ma­tion — includ­ing names, email address­es and phone num­bers, accord­ing to Face­book — by using a “search and account recov­ery” func­tion that Face­book said it has now dis­abled. Face­book did­n’t make clear in its post exact­ly what data was col­lect­ed.

    The data obtained by Cam­bridge Ana­lyt­i­ca was more detailed and exten­sive, includ­ing the names, home towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.
    ...

    ““Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped,” the com­pa­ny wrote in its blog post.”

    LOL! They just dis­cov­ered this and knew noth­ing about how their mas­sive shar­ing of pro­file infor­ma­tion with app devel­op­ers might lead to a mas­sive release of pro­file data. That’s their sto­ry and they’re stick­ing to it. For now.

And notice how it's just casually acknowledged that "Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015," while Facebook is announcing all these new restrictions on the data app developers, or even data brokers, can access. And yet Facebook is acting like this is all some sort of revelation:

    ...
    Face­book’s announce­ment, made near the bot­tom of a blog post Wednes­day after­noon on plans to restrict access to data in the future, under­scores the sever­i­ty of a data mishap that appears to have affect­ed about one out of every four Amer­i­cans and sparked wide­spread out­rage at the care­less­ness of the com­pa­ny’s han­dling of infor­ma­tion on its users. Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    ...

    Face­book announced plans on Wednes­day to add new restric­tions to how out­siders can gain access to this data, the lat­est steps in a years-long process by the com­pa­ny to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion.

    Cam­bridge Ana­lyt­i­ca, which col­lect­ed this infor­ma­tion with the help of Cam­bridge Uni­ver­si­ty psy­chol­o­gist Alek­san­dr Kogan, was found­ed by a mul­ti­mil­lion-dol­lar invest­ment by hedge-fund bil­lion­aire Robert Mer­cer and head­ed by his daugh­ter, Rebekah Mer­cer, who was the com­pa­ny’s pres­i­dent, accord­ing to doc­u­ments pro­vid­ed by Wylie. Serv­ing as vice pres­i­dent was con­ser­v­a­tive strate­gist Stephen K. Ban­non, who also was the head of Bre­it­bart News. He has since left both jobs and also his post as top White House advis­er to Trump.

    Until Wednes­day, apps that let peo­ple input a Face­book event into their cal­en­dar could also auto­mat­i­cal­ly import lists of all the peo­ple who attend­ed that event, Face­book said. Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group. App devel­op­ers who want this access will now have to prove their activ­i­ties ben­e­fit the group. Face­book will now need to approve tools that busi­ness­es use to oper­ate Face­book pages. A busi­ness that uses an app to help it respond quick­ly to cus­tomer mes­sages, for exam­ple, will not be able to do so auto­mat­i­cal­ly. Devel­op­ers’ access to Insta­gram will also be severe­ly restrict­ed.

Face­book is now ban­ning apps from access­ing users’ infor­ma­tion about their reli­gious or polit­i­cal views, rela­tion­ship sta­tus, edu­ca­tion, work his­to­ry, fit­ness activ­i­ty, book read­ing habits, music lis­ten­ing and news read­ing activ­i­ty, video watch­ing and games. Data bro­kers and busi­ness­es col­lect this type of infor­ma­tion to build pro­files of their cus­tomers’ tastes.

    Face­book last week said it is also shut­ting down access to data bro­kers who use their own data to tar­get cus­tomers on Face­book.

    Facebook’s broad changes to how data is used apply most­ly to out­siders and third par­ties. Face­book is not lim­it­ing the data the com­pa­ny itself can col­lect, nor is it restrict­ing its abil­i­ty to pro­file users to enable adver­tis­ers to tar­get them with per­son­al­ized mes­sages. One piece of data Face­book said it would stop col­lect­ing was the time of phone calls, a response to out­rage from users of Facebook’s mes­sen­ger ser­vice who dis­cov­ered that allow­ing Face­book to access their phone con­tact list was giv­ing the com­pa­ny access to their call logs.
    ...

    And note how Cam­bridge Ana­lyt­i­ca whis­tle-blow­er Christo­pher Wylie has already tweet­ed out that the new 87 mil­lion esti­mate might not be high enough:

    ...
    Face­book ini­tial­ly had sought to down­play the prob­lem, say­ing in March only that 270,000 peo­ple had respond­ed to a sur­vey on an app cre­at­ed by the researcher in 2014. That net­ted Cam­bridge Ana­lyt­i­ca the data on the friends of those who respond­ed to the sur­vey, with­out their per­mis­sion. But Face­book declined to say at the time how many oth­er users may have had their data col­lect­ed in the process. The whistle­blow­er, Christo­pher Wylie, a for­mer researcher for the com­pa­ny, said the real num­ber of affect­ed peo­ple was at least 50 mil­lion.

    Wylie tweet­ed on Wednes­day after­noon that Cam­bridge Ana­lyt­i­ca could have obtained even more than 87 mil­lion pro­files. “Could be more tbh,” he wrote, using an abbre­vi­a­tion for “to be hon­est.”
    ...

    “Could be more tbh.” It’s a rather omi­nous tweet con­sid­er­ing the con­text.

And don't forget that the original count of people who used the Cambridge Analytica app, ~270,000, hasn't been updated. That's still just 270,000 people. So this scandal is giving us a sense of just how many people were likely getting their profile information grabbed by app developers using the "Friends Permission" feature. When it was 50 million people in total, that came out to roughly 185 friends getting their profiles grabbed for each person who actually downloaded the app. But if it's 87 million people, that makes it ~322 friends for each Cambridge Analytica app user on average.

Along those lines, it's worth noting that the average number of friends Facebook users have is 338, while the median number of friends is 200, according to a 2014 Pew research poll. So if that 87 million number keeps climbing, and therefore the assumed number of friends per user of the Cambridge Analytica app keeps climbing too, at some point we're going to get into suspicious territory and have to ask whether the users of that app were unusually popular or whether Cambridge Analytica was getting data from more than just that app.
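For what it's worth, that friends-per-user arithmetic is easy to check. Here's a quick back-of-the-envelope calculation using the rounded figures cited in these articles:

```python
# Back-of-the-envelope check of the friends-per-app-user ratio,
# using the rounded figures cited above.
app_users = 270_000

for total_profiles in (50_000_000, 87_000_000):
    ratio = total_profiles / app_users
    print(f"{total_profiles:,} profiles -> ~{ratio:.0f} friends per app user")

# Prints ~185 at 50 million and ~322 at 87 million -- the latter closing in
# on the 338-friend average from the Pew poll and well past the median of 200.
```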

After all, for all we know Cambridge Analytica may have simply purchased a bunch of data on the Facebook profile black market, something else Sandy Parakilas warned about. So how high might that 87 million number get if Cambridge Analytica was also buying this information from other app developers? Who knows, although at this point "a billion profiles" can no longer be ruled out, thanks to Facebook's very belated update today.

    Posted by Pterrafractyl | April 4, 2018, 3:33 pm
3. And the hits keep coming: Here's an article with some more information on the disclosure Facebook made on Wednesday that "malicious actors" may have been using a couple of 'features' Facebook provides to scrape public profile information from Facebook accounts and associate that information with email addresses and phone numbers. This is separate from the data collection technique used by the Cambridge Analytica app, and thousands of other app developers, to grab the private profile information of app users and their friends.

One technique used by these "malicious actors" was to simply feed phone numbers and email addresses into a Facebook "search" box that would return the Facebook profile associated with that email or phone number. All the public information on that profile could then be collected and associated with that email/phone data. Users had the option of turning off the ability for others to find their profile using this method, but it was turned on by default and apparently few people turned it off.

The second technique involved an account recovery tool: Facebook's recovery system served up names, profile pictures and links to the public profiles themselves to anyone pretending to be a Facebook user who had forgotten how to access their account.

And according to Facebook, this was being done by actors obtaining email addresses and phone numbers on the Dark Web and then setting up scripts to automate this process for large numbers of emails and phone numbers, "with few Facebook users likely escaping the scam." In other words, almost every Facebook user probably had their email and phone number associated with their Facebook account via this method. Also keep in mind that you don't need to go to the Dark Web to buy lists of email addresses and phone numbers. So placing an emphasis on the "Dark Web" as the source for this information is likely part of Facebook's ongoing attempt to ensure that this scandal doesn't turn into an educational experience for the public about how widespread the data brokerage industry really is and how much information on people is legally, commercially available. In other words, these "malicious actors" were probably, in many cases, operators in the commercial data brokerage market.
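As a rough illustration of how cheap this kind of scraping was to automate, here's a hypothetical sketch of the enumeration loop. The lookup_profile function is a stand-in for the now-disabled search-by-email/phone feature, not any real API, and the identifiers are invented:

```python
import time

def lookup_profile(identifier):
    # Hypothetical stand-in for the now-disabled search-by-phone/email
    # lookup. Here it just returns a placeholder so the sketch runs;
    # the real feature returned the matching public profile.
    return {"name": "<public name>", "photo": "<url>", "hometown": "<town>"}

# Raw input of the kind the article describes: contact lists pulled from
# breach dumps (or, just as easily, from ordinary commercial list sellers).
leaked_identifiers = ["alice@example.com", "+1-555-0142", "bob@example.com"]

enriched = {}
for ident in leaked_identifiers:
    profile = lookup_profile(ident)
    if profile:
        # An anonymous email or phone number is now a named person with
        # a face and a hometown: the identity-theft "starter kit."
        enriched[ident] = profile
    time.sleep(0.1)  # pace the requests to stay under rate limits

print(enriched)
```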

And as the article notes, pairing email and phone number information with the kind of information people made publicly available on their profiles yields exactly what identity thieves want as a starting point for stealing your identity.

The article also includes more information on just what kind of private profile information app developers like Cambridge Analytica were allowed to grab. Because it's important to note that we still don't have clarity on exactly what app developers were allowed to take from Facebook profiles. We've heard vague descriptions of what was available to them, like Facebook's 'profile' of you (presumably, what Facebook has learned or inferred about you) and the list of what you "liked". But it hasn't been clear whether app developers also had access to literally all of your private Facebook posts. Well, based on the following article, it does indeed sound like they potentially did. And a lot of that data is probably available on the Dark Web and other black markets too at this point, because why not? Facebook made it available and it's valuable, so why wouldn't we expect it to be up for sale?
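For context on what that grabbing looked like mechanically, here's a rough reconstruction, not the literal calls any particular app made, of a request under the pre-2015 Graph API v1.0. Under the old friends_* permissions, a token granted by one consenting app user could be used to read fields from that user's friends; the endpoint shape and specific fields below are assumptions for illustration:

```python
import requests

# Token obtained from a single consenting app user.
ACCESS_TOKEN = "<app user's token>"

# With the old friends_* permissions granted by that one user, an app
# could read profile fields of the user's friends, none of whom ever
# saw a consent prompt. Endpoint and field names are illustrative.
resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        "access_token": ACCESS_TOKEN,
        "fields": "name,hometown,likes.limit(100)",
        "limit": 500,
    },
)
for friend in resp.json().get("data", []):
    print(friend.get("name"), friend.get("hometown"))
```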

    And the arti­cle makes one more stun­ning rev­e­la­tion regard­ing the per­mis­sions app devel­op­ers had to scrape this pri­vate infor­ma­tion: Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group.

So while Facebook hasn't yet admitted that it made almost all the private information on people's Facebook profiles available to identity thieves and any other bad actors for years with little to no oversight, and that this data is probably floating around on the Dark Web for sale, it is getting much closer to admitting this given its latest round of admissions:

    The Wash­ing­ton Post

    Face­book: ‘Mali­cious actors’ used its tools to dis­cov­er iden­ti­ties and col­lect data on a mas­sive glob­al scale

    by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin
April 4, 2018 at 8:13 PM

    Face­book said Wednes­day that “mali­cious actors” took advan­tage of search tools on its plat­form, mak­ing it pos­si­ble for them to dis­cov­er the iden­ti­ties and col­lect infor­ma­tion on most of its 2 bil­lion users world­wide.

    The rev­e­la­tion came amid ris­ing acknowl­edge­ment by Face­book about its strug­gles to con­trol the data it gath­ers on users. Among the announce­ments Wednes­day was that Cam­bridge Ana­lyt­i­ca, a polit­i­cal con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­cans, had improp­er­ly gath­ered detailed Face­book infor­ma­tion on 87 mil­lion peo­ple, of whom 71 mil­lion were Amer­i­cans.

    But the abuse of Facebook’s search tools — now dis­abled — hap­pened far more broad­ly and over the course of sev­er­al years, with few Face­book users like­ly escap­ing the scam, com­pa­ny offi­cials acknowl­edged.

    The scam start­ed when mali­cious hack­ers har­vest­ed email address­es and phone num­bers on the so-called “Dark Web,” where crim­i­nals post infor­ma­tion stolen from data breach­es over the years. Then the hack­ers used auto­mat­ed com­put­er pro­grams to feed the num­bers and address­es into Facebook’s “search” box, allow­ing them to dis­cov­er the full names of peo­ple affil­i­at­ed with the phone num­bers or address­es, along with what­ev­er Face­book pro­file infor­ma­tion they chose to make pub­lic, often includ­ing their pro­file pho­tos and home­town.

    “We built this fea­ture, and it’s very use­ful. There were a lot of peo­ple using it up until we shut it down today,” Chief Exec­u­tive Mark Zucker­berg said in a call with reporters Wednes­day.

    Face­book said in a blog post Wednes­day, “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped.”

    Face­book users could have blocked this search func­tion, which was turned on by default, by tweak­ing their set­tings to restrict find­ing their iden­ti­ties by using phone num­bers or email address­es. But research has con­sis­tent­ly shown that users of online plat­forms rarely adjust default pri­va­cy set­tings and often fail to under­stand what infor­ma­tion they are shar­ing.

    Hack­ers also abused Facebook’s account recov­ery func­tion, by pre­tend­ing to be legit­i­mate users who had for­got­ten account details. Facebook’s recov­ery sys­tem served up names, pro­file pic­tures and links to the pub­lic pro­files them­selves. This tool could also be blocked in pri­va­cy set­tings.

    Names, phone num­bers, email address­es and oth­er per­son­al infor­ma­tion amount to crit­i­cal starter kits for iden­ti­ty theft and oth­er mali­cious online activ­i­ty, experts on Inter­net crime say. The Face­book hack allowed bad actors to tie raw data to people’s real iden­ti­ties and build fuller pro­files of them.

Pri­va­cy experts had issued warn­ings that the phone num­ber and email address lookup tool left Face­book users’ data exposed.

    Face­book didn’t dis­close who the mali­cious actors are, how the data might have been used, or exact­ly how many peo­ple were affect­ed.

    The rev­e­la­tions about the pri­va­cy mishaps come at a per­ilous time for Face­book, which since last month has wres­tled with the fall­out of how the data of tens of mil­lions of Amer­i­cans end­ed up in the hands of Cam­bridge Ana­lyt­i­ca. Those reports have spurred inves­ti­ga­tions in the Unit­ed States and Europe and sent the company’s stock price tum­bling.

    The news quick­ly rever­ber­at­ed on Capi­tol Hill, where law­mak­ers are set to grill Zucker­berg at a series of hear­ings next week.

    “The more we learn, the clear­er it is that this was an avalanche of pri­va­cy vio­la­tions that strike at the core of one of our most pre­cious Amer­i­can val­ues – the right to pri­va­cy,” said Sen. Ed Markey (D‑Mass.), who serves on the Sen­ate Com­merce Com­mit­tee, which has called on Zucker­berg to tes­ti­fy at a hear­ing next week.

    Per­haps the most urgent ques­tion for Face­book is whether its prac­tices ran afoul of a set­tle­ment it bro­kered with the Fed­er­al Trade Com­mis­sion in 2011 in response to pre­vi­ous con­tro­ver­sies over its han­dling of user data.

    At the time, the FTC fault­ed Face­book for mis­rep­re­sent­ing the pri­va­cy pro­tec­tions it afford­ed its users and required the com­pa­ny to main­tain a com­pre­hen­sive pri­va­cy pol­i­cy and ask per­mis­sion before shar­ing user data in new ways. Vio­lat­ing the terms could result in many mil­lions of dol­lars of fines.

The FTC said last week that it would open a new inves­ti­ga­tion in light of the Cam­bridge Ana­lyt­i­ca news, and Wednesday’s rev­e­la­tions are like­ly to com­pli­cate the legal sit­u­a­tion, said David Vladeck, a for­mer FTC direc­tor of con­sumer pro­tec­tion who over­saw the 2011 con­sent decree.

    “This is a com­pa­ny that is, in my view, like­ly gross­ly out of com­pli­ance with the FTC con­sent decree,” said Vladeck, now a George­town Uni­ver­si­ty Law pro­fes­sor. “I don’t think that after these rev­e­la­tions they have any defense at all.” He called the num­bers “just stag­ger­ing.”

    The data Cam­bridge Ana­lyt­i­ca obtained relied on dif­fer­ent tech­niques and was more detailed and exten­sive than what the hack­ers col­lect­ed using Facebook’s search func­tions. The Cam­bridge Ana­lyt­i­ca data set includ­ed user names, home­towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users and their friends, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.

    Face­book said it banned Cam­bridge Ana­lyt­i­ca last month because the data firm improp­er­ly obtained pro­file infor­ma­tion.

    Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    Face­book in March declined to say how much user data went to Cam­bridge Ana­lyt­i­ca, say­ing only that 270,000 peo­ple had respond­ed to a sur­vey app cre­at­ed by the researcher in 2014. The researcher was able to gath­er infor­ma­tion on the friends of the respon­dents with­out their per­mis­sion, vast­ly expand­ing the scope of his data. That researcher then passed the infor­ma­tion on to Cam­bridge Ana­lyt­i­ca.

    Face­book declined to say at the time how many oth­er users may have had their data col­lect­ed in the process. A Cam­bridge Ana­lyt­i­ca whistle­blow­er, for­mer researcher Christo­pher Wylie, said last month the real num­ber of affect­ed peo­ple was at least 50 mil­lion.

    ...

    With its moves over the past week, Face­book is embark­ing on a major shift in its rela­tion­ship with third-par­ty app devel­op­ers that have used Facebook’s vast net­work to expand their busi­ness­es. What was large­ly an auto­mat­ed process will now involve devel­op­ers agree­ing to “strict require­ments,” the com­pa­ny said in its blog post Wednes­day. The 2015 pol­i­cy change cur­tailed devel­op­ers’ abil­i­ties to access data about people’s friends net­works but left open many loop­holes that the com­pa­ny tight­ened on Wednes­day.

    “This lat­est rev­e­la­tion is extreme­ly trou­bling and shows that Face­book still has a lot of work to do to deter­mine how big this breach actu­al­ly is,” said Rep. Frank Pal­lone Jr. (D‑N.J.), the top Demo­c­rat on the House Ener­gy and Com­merce Com­mit­tee, which will hear from Zucker­berg next Wednes­day.

    “I’m deeply con­cerned that Face­book only address­es con­cerns on its plat­form when it becomes a pub­lic cri­sis, and that is sim­ply not the way you run a com­pa­ny that is used by over 2 bil­lion peo­ple,” he said.

    Face­book announced plans on Wednes­day to add new restric­tions to how app devel­op­ers, data bro­kers and oth­er third par­ties can gain access to this data, the lat­est steps in a years-long process to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion, Face­book said.

    Until Wednes­day, apps that let peo­ple input a Face­book event into their cal­en­dar could also auto­mat­i­cal­ly import lists of all the peo­ple who attend­ed that event, Face­book said. Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group. App devel­op­ers who want this access will now have to prove their activ­i­ties ben­e­fit the group. Face­book will now need to approve tools that busi­ness­es use to oper­ate Face­book pages. A busi­ness that uses an app to help it respond quick­ly to cus­tomer mes­sages, for exam­ple, will not be able to do so auto­mat­i­cal­ly. Devel­op­ers’ access to Insta­gram will also be severe­ly restrict­ed.

    Face­book is now ban­ning apps from access­ing users’ infor­ma­tion about their reli­gious or polit­i­cal views, rela­tion­ship sta­tus, edu­ca­tion, work his­to­ry, fit­ness activ­i­ty, book read­ing habits, music lis­ten­ing and news read­ing activ­i­ty, video watch­ing and games. Data bro­kers and busi­ness­es col­lect this type of infor­ma­tion to build pro­files of their cus­tomers’ tastes.

    ———-

    “Face­book: ‘Mali­cious actors’ used its tools to dis­cov­er iden­ti­ties and col­lect data on a mas­sive glob­al scale” by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin; The Wash­ing­ton Post; 04/04/2018

    “But the abuse of Facebook’s search tools — now dis­abled — hap­pened far more broad­ly and over the course of sev­er­al years, with few Face­book users like­ly escap­ing the scam, com­pa­ny offi­cials acknowl­edged.”

Few Facebook users likely escaped the "scam" of using a feature that Facebook turned on by default and that was an obvious, massive privacy violation. A "scam" that was also far less of a privacy violation than what Facebook made available to app developers, but still a scam that likely impacted almost all Facebook users. And the more information people made available on their public profiles, the more these "scammers" could collect about them:

    ...
    The scam start­ed when mali­cious hack­ers har­vest­ed email address­es and phone num­bers on the so-called “Dark Web,” where crim­i­nals post infor­ma­tion stolen from data breach­es over the years. Then the hack­ers used auto­mat­ed com­put­er pro­grams to feed the num­bers and address­es into Facebook’s “search” box, allow­ing them to dis­cov­er the full names of peo­ple affil­i­at­ed with the phone num­bers or address­es, along with what­ev­er Face­book pro­file infor­ma­tion they chose to make pub­lic, often includ­ing their pro­file pho­tos and home­town.

    “We built this fea­ture, and it’s very use­ful. There were a lot of peo­ple using it up until we shut it down today,” Chief Exec­u­tive Mark Zucker­berg said in a call with reporters Wednes­day.

    Face­book said in a blog post Wednes­day, “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped.”

    Face­book users could have blocked this search func­tion, which was turned on by default, by tweak­ing their set­tings to restrict find­ing their iden­ti­ties by using phone num­bers or email address­es. But research has con­sis­tent­ly shown that users of online plat­forms rarely adjust default pri­va­cy set­tings and often fail to under­stand what infor­ma­tion they are shar­ing.
    ...

    And then there was Face­book’s account recov­ery func­tion that Face­book also made easy to exploit:

    ...
    Hack­ers also abused Facebook’s account recov­ery func­tion, by pre­tend­ing to be legit­i­mate users who had for­got­ten account details. Facebook’s recov­ery sys­tem served up names, pro­file pic­tures and links to the pub­lic pro­files them­selves. This tool could also be blocked in pri­va­cy set­tings.
    ...

And, again, while this kind of information wasn't necessarily as extensive as the private information Facebook made available to app developers, it was still a very valuable starter kit for identity theft:

    ...
    Names, phone num­bers, email address­es and oth­er per­son­al infor­ma­tion amount to crit­i­cal starter kits for iden­ti­ty theft and oth­er mali­cious online activ­i­ty, experts on Inter­net crime say. The Face­book hack allowed bad actors to tie raw data to people’s real iden­ti­ties and build fuller pro­files of them.

Pri­va­cy experts had issued warn­ings that the phone num­ber and email address lookup tool left Face­book users’ data exposed.
    ...

And, of course, that 'identity theft starter kit' data — phone numbers and emails associated with real names and other publicly available information — could potentially be combined with the private information made available to app developers. Information that apparently included "people's relationship status, calendar events, private Facebook posts, and much more data":

    ...
    The data Cam­bridge Ana­lyt­i­ca obtained relied on dif­fer­ent tech­niques and was more detailed and exten­sive than what the hack­ers col­lect­ed using Facebook’s search func­tions. The Cam­bridge Ana­lyt­i­ca data set includ­ed user names, home­towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users and their friends, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.

    Face­book said it banned Cam­bridge Ana­lyt­i­ca last month because the data firm improp­er­ly obtained pro­file infor­ma­tion.

    Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    ...

    With its moves over the past week, Face­book is embark­ing on a major shift in its rela­tion­ship with third-par­ty app devel­op­ers that have used Facebook’s vast net­work to expand their busi­ness­es. What was large­ly an auto­mat­ed process will now involve devel­op­ers agree­ing to “strict require­ments,” the com­pa­ny said in its blog post Wednes­day. The 2015 pol­i­cy change cur­tailed devel­op­ers’ abil­i­ties to access data about people’s friends net­works but left open many loop­holes that the com­pa­ny tight­ened on Wednes­day.

    ...

    Face­book announced plans on Wednes­day to add new restric­tions to how app devel­op­ers, data bro­kers and oth­er third par­ties can gain access to this data, the lat­est steps in a years-long process to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion, Face­book said.
    ...

    So if “people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data” was made avail­able to app devel­op­ers, it rais­es the ques­tion: what was­n’t made avail­able?

    It’s all a reminder that there is indeed a “mali­cious actor” who took pos­ses­sion of all your pri­vate data and its name is Face­book.

    Posted by Pterrafractyl | April 5, 2018, 1:53 pm
4. Here's a series of articles that serve as a reminder that Facebook isn't just an ever-growing vault of personal data profiles on almost everyone (albeit a very leaky data vault). It's also a medium through which other ever-growing vaults of personal data, in particular those of the data brokerage giants like Acxiom, can be merged with Facebook's vault, ostensibly for the purpose of making Facebook's targeted ads even more targeted.

This third-party sharing is done through Facebook's "Partner Categories" program: Facebook advertisers have the option of filtering their Facebook ad targeting based on, for instance, the group of people who purchased cereal, using data from Acxiom's consumer spending database. As such, data broker giants that are potentially Facebook's biggest competitors become Facebook's biggest partners.
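Schematically, a Partner Categories segment slotted into an ad's targeting spec right alongside Facebook's own demographic filters. The sketch below is illustrative only; the segment id and name are invented, though the general shape mirrors Facebook's Marketing API targeting specs:

```python
# Illustrative (not literal) targeting spec. The partner segment's id and
# name are invented; the point is that broker-sourced data and Facebook's
# own demographic data were combined in a single filter.
targeting_spec = {
    "geo_locations": {"countries": ["US"]},  # Facebook's own data
    "age_min": 25,
    "age_max": 54,
    "behaviors": [
        # Partner Categories segment sourced from a broker like Acxiom
        {"id": 6002000000000, "name": "Household buys breakfast cereal"},
    ],
}
print(targeting_spec)
```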

    Not sur­pris­ing­ly, merg­ing Face­book’s exten­sive per­son­al data pro­files with the already very exten­sive per­son­al data pro­files held by the data bro­ker­age indus­try rais­es a num­ber of pri­va­cy con­cerns. Pri­va­cy con­cerns that are hit­ting a peak in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. So, also not sur­pris­ing­ly, Face­book just announced the end of the Part­ner Cat­e­gories pro­gram over the next six months as part of its post-Cam­bridge Ana­lyt­i­ca pub­lic rela­tions cam­paign:

    Recode

    Face­book is cut­ting third-par­ty data providers out of ad tar­get­ing to clean up its act
Face­book says it’s going to stop using data from third-par­ty data providers like Exper­ian and Acx­iom.

    By Kurt Wag­n­er
    Mar 28, 2018, 6:11pm EDT

    Face­book is going to lim­it how much data it makes avail­able to adver­tis­ers buy­ing hyper-tar­get­ed ads on the social net­work.

    More specif­i­cal­ly, Face­book says it will stop using data from third-par­ty data aggre­ga­tors — com­pa­nies like Exper­ian and Acx­iom — to help sup­ple­ment its own data set for ad tar­get­ing.

    Face­book pre­vi­ous­ly let adver­tis­ers tar­get peo­ple using data from a num­ber of sources:

    * Data from Face­book, which the com­pa­ny col­lects from user activ­i­ty and pro­files.
    * Data from the adver­tis­er itself, like cus­tomer emails they’ve col­lect­ed on their own.
    * Data from third-par­ty ser­vices like Exper­ian, which can col­lect offline data such as pur­chas­ing activ­i­ty, that Face­book uses to help sup­ple­ment its own data set. When mar­keters use this data to tar­get ads on Face­book, the social giant gives some of the ad mon­ey from that sale to the data provider.

    This third data set is pri­mar­i­ly help­ful to adver­tis­ers that might not have their own cus­tomer data, like small busi­ness­es or con­sumer pack­aged goods com­pa­nies that sell their prod­ucts through brick-and-mor­tar retail­ers.

    But now Face­book is chang­ing its rela­tion­ship with these third par­ties as part of a broad­er effort to clean up its data prac­tices fol­low­ing the recent Cam­bridge Ana­lyt­i­ca pri­va­cy scan­dal. (Face­book still uses these com­pa­nies to help with ad mea­sure­ment, though a source says that the com­pa­ny is reeval­u­at­ing that prac­tice, too.)

    The think­ing is that Face­book has less con­trol over where and how these firms col­lect their data, which makes using it more of a risk. Appar­ent­ly it’s not impor­tant enough to Facebook’s rev­enue stream to deal with a poten­tial headache if some­thing goes wrong.

    Face­book con­firmed the move in a state­ment attrib­ut­able to Gra­ham Mudd, a prod­uct mar­ket­ing direc­tor at the com­pa­ny.

    ”We want to let adver­tis­ers know that we will be shut­ting down Part­ner Cat­e­gories,” Mudd said in the state­ment. “This prod­uct enables third-par­ty data providers to offer their tar­get­ing direct­ly on Face­book. While this is com­mon indus­try prac­tice, we believe this step, wind­ing down over the next six months, will help improve people’s pri­va­cy on Face­book.”

    Had it been made ear­li­er, Facebook’s deci­sion to stop using third-par­ty data providers for tar­get­ing would not have impact­ed the out­come of the Cam­bridge Ana­lyt­i­ca scan­dal, in which the out­side firm col­lect­ed the per­son­al data of some 50 mil­lion Face­book users with­out their per­mis­sion.

    ...

    ———

    “Face­book is cut­ting third-par­ty data providers out of ad tar­get­ing to clean up its act” by Kurt Wag­n­er; Recode; 03/28/2018

    “More specif­i­cal­ly, Face­book says it will stop using data from third-par­ty data aggre­ga­tors — com­pa­nies like Exper­ian and Acx­iom — to help sup­ple­ment its own data set for ad tar­get­ing.”

    As we can see, Face­book isn’t just promis­ing to cut off the per­son­al data leak­ing out of its plat­forms to address pri­va­cy con­cerns. It’s also promis­ing to cut off some of the data flow­ing into its plat­forms. Data from the data bro­ker­age giants flow­ing into Face­book in exchange for some of the ad mon­ey when that data results in a sale:

    ...
    Face­book pre­vi­ous­ly let adver­tis­ers tar­get peo­ple using data from a num­ber of sources:

    * Data from Face­book, which the com­pa­ny col­lects from user activ­i­ty and pro­files.
    * Data from the adver­tis­er itself, like cus­tomer emails they’ve col­lect­ed on their own.
    * Data from third-par­ty ser­vices like Exper­ian, which can col­lect offline data such as pur­chas­ing activ­i­ty, that Face­book uses to help sup­ple­ment its own data set. When mar­keters use this data to tar­get ads on Face­book, the social giant gives some of the ad mon­ey from that sale to the data provider.

    This third data set is pri­mar­i­ly help­ful to adver­tis­ers that might not have their own cus­tomer data, like small busi­ness­es or con­sumer pack­aged goods com­pa­nies that sell their prod­ucts through brick-and-mor­tar retail­ers.
    ...

And while the public explanation for this move is that it's being done to address privacy concerns, there's also the suspicion that Facebook is willing to make it simply because Facebook doesn't actually need this third-party data to make its ads more effective. So while cutting out the data-brokerage data is a potential loss for Facebook, that loss might be outweighed by the growing privacy headache that comes from directly incorporating third-party data into its ad algorithms when it can't control whether these third-party data brokerages obtained their own data sets ethically. In other words, the headache isn't worth the extra profit this data-sharing arrangement yields:

    ...
    The think­ing is that Face­book has less con­trol over where and how these firms col­lect their data, which makes using it more of a risk. Appar­ent­ly it’s not impor­tant enough to Facebook’s rev­enue stream to deal with a poten­tial headache if some­thing goes wrong.
    ...

    So is it the case that Face­book is using this Cam­bridge Ana­lyt­i­ca scan­dal as an excuse to cut these data bro­kers that Face­book does­n’t actu­al­ly need out of the loop? Well, as the fol­low­ing arti­cle notes, it’s not like Face­book does­n’t have the option of buy­ing that data from the data bro­kers them­selves and just incor­po­rat­ing the data into their inter­nal ad tar­get­ing mod­els. But Face­book always had that option and still chose to go ahead with this Part­ner Cat­e­gories pro­gram, so it’s pre­sum­ably the case that pay­ing out­right for that bro­ker­age data is more expen­sive than set­ting up the Part­ner Cat­e­gories pro­gram and giv­ing the bro­ker­ages a cut of the ad sales.
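    To make that trade-off concrete, here's a toy comparison, in Python, of the two ways a platform could pay for broker data. The fee, the attributed sales, and the 15 percent cut are all invented for illustration; none of these are Facebook's or the brokers' actual terms:

        # Toy comparison of two ways a platform could pay for broker data.
        # All numbers are hypothetical; none reflect actual Facebook terms.

        def flat_license_cost(annual_fee):
            """Broker charges a fixed fee for bulk access to its data."""
            return annual_fee

        def revenue_share_cost(attributed_ad_sales, broker_cut):
            """Broker gets a percentage of the ad sales its data helped close."""
            return attributed_ad_sales * broker_cut

        # Hypothetical year: $200M in ad sales attributed to broker-assisted
        # targeting, with the broker taking a 15% cut of those sales.
        share = revenue_share_cost(200e6, 0.15)  # $30M, and it scales with sales
        flat = flat_license_cost(50e6)           # $50M no matter what

        print(f"revenue share: ${share/1e6:.0f}M vs flat license: ${flat/1e6:.0f}M")

    Under these made-up numbers the revenue share is the cheaper option and shifts the risk onto the broker, which is consistent with the guess that buying the brokerage data outright would have cost Facebook more than the Partner Categories arrangement did.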

    As the following article also notes, advertisers will still be able to get that data brokerage information for the purpose of further targeting Facebook users. How so? Recall the second data source listed in the article above: data sets from the advertisers themselves, like lists of the email addresses of the people they want to target. It's the same Custom Audiences tool that was used extensively by the Trump campaign for its "A/B testing on steroids" psychological profiling techniques. So there's nothing stopping advertisers from getting a list of email addresses from a data broker and then feeding it into Facebook, effectively leaving the same arrangement in place, just in a less direct manner. But it's less convenient, and presumably less profitable, if advertisers have to do this themselves. It's a reminder that partnering means more profits in the business Facebook is in.
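    As a sketch of the mechanics: list-upload tools like Custom Audiences generally expect identifiers to be normalized and hashed (SHA-256) on the advertiser's side before upload. The exact normalization rules and the example addresses below are assumptions for illustration, but the point stands: a broker-supplied email list converts into a targeting audience with a few lines of Python:

        import hashlib

        def normalize_and_hash(email: str) -> str:
            """Trim, lowercase, then SHA-256 hash an email address, the way
            list-upload tools typically require before an audience upload."""
            return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

        # A list an advertiser might have bought from a data broker
        # (fake example addresses).
        broker_list = ["Jane.Doe@example.com ", "new.mom.1984@example.net"]

        hashed_audience = [normalize_and_hash(e) for e in broker_list]
        print(hashed_audience)  # ready to upload as a custom audience

    Uploading those hashes effectively recreates broker-driven targeting without the Partner Categories program ever touching the data directly.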

    Finally, as digital privacy expert Frank Pasquale also points out in the following article, there's no real reason to assume Facebook is actually going to stand by this pledge to shut down the Partner Categories program over the next six months. It might just quietly start it up again in some other form, or simply reverse the decision after the public's attention shifts away.

    So while there are valid ques­tions as to why Face­book is mak­ing this pol­i­cy change, there are unfor­tu­nate­ly also valid ques­tions over whether or not this pol­i­cy change will make any dif­fer­ence and whether or not Face­book will even make this pol­i­cy change at all:

    The Wash­ing­ton Post

    Face­book, long­time friend of data bro­kers, becomes their stiffest com­pe­ti­tion

    by Drew Har­well
    March 29, 2018

    Face­book was for years a best friend to the data bro­kers who make hun­dreds of mil­lions of dol­lars a year gath­er­ing and sell­ing Amer­i­cans’ per­son­al infor­ma­tion. Now, the world’s largest social net­work is sour­ing that rela­tion­ship — a sign that the com­pa­ny believes it has over­shad­owed their data-gath­er­ing machine.

    Face­book said late Wednes­day that it would stop data bro­kers from help­ing adver­tis­ers tar­get peo­ple with ads, sev­er­ing one of the key meth­ods mar­keters used to link users’ Face­book data about their friends and lifestyle with their offline data about their fam­i­lies, finances and health.

    The data bro­kers have for years served a silent but crit­i­cal role in direct­ing users’ atten­tion to Face­book’s ads. They also, crit­ics say, stealth­ily con­tributed to the seem­ing­ly all-know­ing creepi­ness of users see­ing ads for things they nev­er men­tioned on their Face­book pages. A mar­keter who want­ed to tar­get new moth­ers, for instance, could use the data bro­kers’ infor­ma­tion to send Face­book ads to all women who bought baby for­mu­la with a store rewards card.

    Acx­iom, Exper­ian and oth­er data bro­kers once had a prized seat at Face­book’s table, through a pro­gram called Part­ner Cat­e­gories, that allowed adver­tis­ers to tap into the shad­ow pro­files craft­ed with data from Face­book and the bro­kers to drill down on their tar­get audi­ence. The data bro­kers got a cut of the mon­ey when the ads they helped place turned into a sale, and Face­book also shared some data with the bro­kers to help gauge how well its ads per­formed.

    A Face­book direc­tor said in a state­ment that the com­pa­ny will wind down that pro­gram over the next six months, which “will help improve people’s pri­va­cy on Face­book.” But pri­va­cy experts saw the move as an asser­tion of dom­i­nance from the social net­work, which in recent years has con­sol­i­dat­ed its pow­er over an increas­ing­ly inti­mate lev­el of detail about its users’ lives — and wants adver­tis­ers to pay for its exper­tise.

    “Face­book is offi­cial­ly in the data-min­ing busi­ness,” said Joel Win­ston, a pri­va­cy lawyer in Pitts­burgh. “It’s a defin­i­tive sig­nal that Face­book’s data cap­ture and iden­ti­ty-tar­get­ing tech­nol­o­gy is light-years ahead of its com­peti­tors’.”

    The move comes as Face­book bat­tles a major pri­va­cy scan­dal in the wake of rev­e­la­tions that a polit­i­cal data firm, Cam­bridge Ana­lyt­i­ca, took advan­tage of the site’s loose pri­va­cy rules and improp­er­ly obtained data on more than 30 mil­lion Face­book users. The com­pa­ny has in recent days out­lined steps show­ing how users can see and lim­it what Face­book knows about them, fol­low­ing what chief exec­u­tive Mark Zucker­berg called a “major breach of trust.”

    In 2015, Face­book restrict­ed the kinds of data that out­side devel­op­ers, includ­ing the researcher who fed data to Cam­bridge Ana­lyt­i­ca, could gath­er from users and their friends. Christo­pher Wylie, Cam­bridge Ana­lyt­i­ca’s whistle­blow­er, told The Wash­ing­ton Post that Cam­bridge Ana­lyt­i­ca had paired Face­book data with infor­ma­tion from data bro­kers to build out its vot­er pro­files.

    But the social net­work con­tin­ued to strength­en its ties with the data bro­kers who gath­er and repack­age user infor­ma­tion. That year, Acx­iom said its involve­ment in Part­ner Cat­e­gories helped its adver­tis­ing clients use Face­book “to bet­ter con­nect with peo­ple more inclined to buy cer­tain prod­ucts or ser­vices,” adding that its clients includ­ed most of the coun­try’s top 10 insur­ers, retail­ers, automak­ers, hotels, telecom­mu­ni­ca­tions giants and banks. One year ear­li­er, in 2014, the Fed­er­al Trade Com­mis­sion issued a report find­ing that data bro­kers had col­lect­ed infor­ma­tion on near­ly every Amer­i­can and say­ing that the bro­kers “oper­ate with a fun­da­men­tal lack of trans­paren­cy.”

    While Face­book gath­ers much of its 2 bil­lion users’ online infor­ma­tion, the data bro­kers attempt to scoop up every­thing else, includ­ing bil­lions of bits of infor­ma­tion from vot­er rolls, prop­er­ty records, pur­chase his­to­ries, loy­al­ty card pro­grams, con­sumer sur­veys, car deal­er­ship records and oth­er data­bas­es.

    The bro­kers use that raw data to build mod­els pre­dict­ing (with vary­ing suc­cess) many hun­dreds of details about a cus­tomer’s behav­ior, finances and per­son­al­i­ty: age, fam­i­ly sta­tus, house­hold income, whether she likes cross­word puz­zles, inter­est in buy­ing a house­hold pet, like­li­hood of hav­ing a funer­al plan. The data bro­kers then sell those con­sumer pro­files to mar­keters and major con­glom­er­ates seek­ing a vast and tar­get­ed cus­tomer base — includ­ing on Face­book, which now accounts for a fifth of the world’s online ads.

    Acx­iom, the Arkansas-based bro­ker that has worked with Face­book since 2013 and report­ed more than $880 mil­lion in rev­enue last year, esti­mat­ed Face­book’s ditch­ing of its data-shar­ing pro­gram would carve as much as $25 mil­lion from the com­pa­ny’s rev­enue and prof­it. In a state­ment late Wednes­day, Acx­iom said Face­book had alert­ed it that day to the news. “Today, more than ever, it is impor­tant for busi­ness­es to be able to rely upon com­pa­nies that under­stand the crit­i­cal impor­tance of eth­i­cal­ly sourced data and strong data gov­er­nance. These are among Acxiom’s core strengths,” chief exec­u­tive Scott Howe said. Its stock plunged more than 30 per­cent Thurs­day morn­ing.

    Rep­re­sen­ta­tives for data bro­ker Exper­ian did not respond to ques­tions, and data bro­ker Ora­cle Data Cloud declined to com­ment. Exper­ian stock moved down­ward slight­ly, while Ora­cle shares trad­ed up about 1 per­cent. Face­book shares climbed about 3 per­cent, help­ing punc­ture weeks of loss­es.

    Data bro­kers’ mod­els are often intri­cate­ly and odd­ly detailed: Acx­iom has cat­e­go­rized peo­ple into one of 70 “house­hold life stage clus­ters,” includ­ing “Career-Cen­tered Sin­gles,” “Soc­cer and SUVs,” “Apple Pie Fam­i­lies” and “Rolling Stones.” But adver­tis­ers want­i­ng more infor­ma­tion — served straight from the source, in the per­son­’s own words — have increas­ing­ly turned to Face­book, where they can grab first-par­ty data from the actu­al cus­tomer, and not just third-par­ty data gath­ered and ana­lyzed from afar.

    Face­book and the data bro­kers have often dealt in the same kinds of per­son­al infor­ma­tion adver­tis­ers find impos­si­ble to resist. Exper­ian, for instance, runs a New­born Net­work that sells adver­tis­ers detailed infor­ma­tion, gleaned from per­son­al spend­ing and demo­graph­ic data, of women they pre­dict are new and expec­tant moth­ers; the com­pa­ny says it “cap­tures more than 80 per­cent of all U.S. births.” But Face­book users also freely share baby pho­tos and mark their life events — a more direct way of relay­ing the same infor­ma­tion to sell­ers of baby for­mu­la, cribs and mater­ni­ty clothes.

    Adver­tis­ers will still be able to work with the data bro­kers to gath­er infor­ma­tion and tar­get cus­tomers; they’ll just have to do it out­side Face­book. Crit­ics point­ed to a few ways, such as Face­book’s Cus­tom Audi­ences tool, that will allow adver­tis­ers to still tar­get cus­tomers en masse based on finan­cial and oth­er data they’ve pulled from across the Web.

    Some pri­va­cy experts cheered Face­book’s data-bro­ker move as a step toward pre­serv­ing user pri­va­cy. “It’s long over­due that Face­book owned up to the seri­ous ero­sion of con­sumer pri­va­cy made pos­si­ble by its alliance with pow­er­ful data bro­kers,” said Jef­frey Chester, exec­u­tive direc­tor of the Wash­ing­ton pri­va­cy-rights non­prof­it Cen­ter for Dig­i­tal Democ­ra­cy.

    Chris Speran­dio, a prod­uct head of pri­va­cy at the mar­ket­ing-data start-up Seg­ment, said the move also helps Face­book dodge grow­ing ques­tions over the source of its user infor­ma­tion. That is quick­ly becom­ing a high-stakes legal issue: A sweep­ing pri­va­cy rule com­ing to Europe in May, the Gen­er­al Data Pro­tec­tion Reg­u­la­tion, will make the com­pa­ny more liable and account­able for know­ing where its data comes from.

    ...

    But some crit­ics ques­tioned what effect the move would have in a site that counts sell­ing access to its users’ infor­ma­tion as its biggest mon­ey­mak­er. Face­book, pri­va­cy experts said, nets a vast range of real-time infor­ma­tion — friend­ships, pho­tos, work his­to­ries, inter­ests and con­sumer tastes, as well as mobile, loca­tion and facial-recog­ni­tion data — that adver­tis­ers view as more cur­rent and accu­rate than the bro­ker infor­ma­tion inferred from old receipts and gov­ern­ment logs. What, they ask, would adver­tis­ers need to pay data bro­kers for?

    “We don’t know enough about Face­book’s data trove to know whether their aban­don­ment of Part­ner Cat­e­gories helps users avoid pri­va­cy inva­sions,” said Frank Pasquale, a Uni­ver­si­ty of Mary­land pro­fes­sor who spe­cial­izes in algo­rithms and pri­va­cy. “Even if we did have that knowl­edge, we have lit­tle rea­son to trust Face­book to actu­al­ly fol­low through on it. It may well change course once media atten­tion has gone else­where.”

    ———-

    “Face­book, long­time friend of data bro­kers, becomes their stiffest com­pe­ti­tion” by Drew Har­well; The Wash­ing­ton Post; 03/29/2018

    “Face­book said late Wednes­day that it would stop data bro­kers from help­ing adver­tis­ers tar­get peo­ple with ads, sev­er­ing one of the key meth­ods mar­keters used to link users’ Face­book data about their friends and lifestyle with their offline data about their fam­i­lies, finances and health.”

    Yep, one of the key meth­ods mar­keters used to link Face­book data with all the offline data that these data bro­ker­ages were able to col­lect just might get sev­ered. It’s poten­tial­ly a big deal for Face­book and the adver­tis­ing indus­try. Or poten­tial­ly not. That’s part of what makes this such a fas­ci­nat­ing move by Face­book: It’s poten­tial­ly quite sig­nif­i­cant and poten­tial­ly incon­se­quen­tial:

    ...
    The data bro­kers have for years served a silent but crit­i­cal role in direct­ing users’ atten­tion to Face­book’s ads. They also, crit­ics say, stealth­ily con­tributed to the seem­ing­ly all-know­ing creepi­ness of users see­ing ads for things they nev­er men­tioned on their Face­book pages. A mar­keter who want­ed to tar­get new moth­ers, for instance, could use the data bro­kers’ infor­ma­tion to send Face­book ads to all women who bought baby for­mu­la with a store rewards card.

    Acx­iom, Exper­ian and oth­er data bro­kers once had a prized seat at Face­book’s table, through a pro­gram called Part­ner Cat­e­gories, that allowed adver­tis­ers to tap into the shad­ow pro­files craft­ed with data from Face­book and the bro­kers to drill down on their tar­get audi­ence. The data bro­kers got a cut of the mon­ey when the ads they helped place turned into a sale, and Face­book also shared some data with the bro­kers to help gauge how well its ads per­formed.
    ...

    And note how this cooperation with the brokerages was only growing during the same period that Facebook cut off the "friends permissions" privacy loophole exploited by Cambridge Analytica's app and thousands of other apps in 2015. It's a reminder that even when Facebook is getting better in some ways, it's probably getting worse in others:

    ...
    In 2015, Face­book restrict­ed the kinds of data that out­side devel­op­ers, includ­ing the researcher who fed data to Cam­bridge Ana­lyt­i­ca, could gath­er from users and their friends. Christo­pher Wylie, Cam­bridge Ana­lyt­i­ca’s whistle­blow­er, told The Wash­ing­ton Post that Cam­bridge Ana­lyt­i­ca had paired Face­book data with infor­ma­tion from data bro­kers to build out its vot­er pro­files.

    But the social net­work con­tin­ued to strength­en its ties with the data bro­kers who gath­er and repack­age user infor­ma­tion. That year, Acx­iom said its involve­ment in Part­ner Cat­e­gories helped its adver­tis­ing clients use Face­book “to bet­ter con­nect with peo­ple more inclined to buy cer­tain prod­ucts or ser­vices,” adding that its clients includ­ed most of the coun­try’s top 10 insur­ers, retail­ers, automak­ers, hotels, telecom­mu­ni­ca­tions giants and banks. One year ear­li­er, in 2014, the Fed­er­al Trade Com­mis­sion issued a report find­ing that data bro­kers had col­lect­ed infor­ma­tion on near­ly every Amer­i­can and say­ing that the bro­kers “oper­ate with a fun­da­men­tal lack of trans­paren­cy.”
    ...

    And while some of the data gathered by the data brokerages inevitably overlaps with what Facebook also gathers on people, there are quite a few categories of 'offline' data these brokers systematically gather that Facebook can't collect without seeming super extra creepy. Data brokers gather data from places like voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, and car dealership records. Imagine how incredibly creepy it would be if Facebook had its own 'offline data collection' division directly gathering that kind of information about everyone instead of buying it from the brokerages or setting up arrangements like the Partner Categories program. It's a reminder that Facebook and the data brokers really are engaged in a joint 'online'/'offline' data gathering and aggregation effort. "Partner Categories" is an appropriate name because it's a real partnership, and one that matters to both parties: it would be a far bigger PR nightmare if Facebook had to collect all this offline data itself:

    ...
    While Face­book gath­ers much of its 2 bil­lion users’ online infor­ma­tion, the data bro­kers attempt to scoop up every­thing else, includ­ing bil­lions of bits of infor­ma­tion from vot­er rolls, prop­er­ty records, pur­chase his­to­ries, loy­al­ty card pro­grams, con­sumer sur­veys, car deal­er­ship records and oth­er data­bas­es.

    The bro­kers use that raw data to build mod­els pre­dict­ing (with vary­ing suc­cess) many hun­dreds of details about a cus­tomer’s behav­ior, finances and per­son­al­i­ty: age, fam­i­ly sta­tus, house­hold income, whether she likes cross­word puz­zles, inter­est in buy­ing a house­hold pet, like­li­hood of hav­ing a funer­al plan. The data bro­kers then sell those con­sumer pro­files to mar­keters and major con­glom­er­ates seek­ing a vast and tar­get­ed cus­tomer base — includ­ing on Face­book, which now accounts for a fifth of the world’s online ads.
    ...

    And, of course, the Custom Audiences tool that lets advertisers feed in lists of things like email addresses to target specific audiences, used extensively by the 2016 Trump campaign, might make the decision to end the Partner Categories program moot:

    ...
    Adver­tis­ers will still be able to work with the data bro­kers to gath­er infor­ma­tion and tar­get cus­tomers; they’ll just have to do it out­side Face­book. Crit­ics point­ed to a few ways, such as Face­book’s Cus­tom Audi­ences tool, that will allow adver­tis­ers to still tar­get cus­tomers en masse based on finan­cial and oth­er data they’ve pulled from across the Web.
    ...

    And as Frank Pasquale points out, we also don't know enough about what Facebook knows about us to know how much of an impact ending the Partner Categories program will have on the privacy violations built into Facebook's whole business model. It's entirely possible this change will make fusing data broker data with Facebook data less convenient and less profitable, yet leave privacy just as violated, both because the present-day setup can be replicated indirectly (by Facebook advertisers coordinating with the data brokers separately) and because Facebook might already know almost everything the data brokers know just from its own data collection methods. In other words, this could be largely cosmetic. And, as Pasquale also pointed out, Facebook might simply change its mind and not end the program once public attention wanes:

    ...
    Some pri­va­cy experts cheered Face­book’s data-bro­ker move as a step toward pre­serv­ing user pri­va­cy. “It’s long over­due that Face­book owned up to the seri­ous ero­sion of con­sumer pri­va­cy made pos­si­ble by its alliance with pow­er­ful data bro­kers,” said Jef­frey Chester, exec­u­tive direc­tor of the Wash­ing­ton pri­va­cy-rights non­prof­it Cen­ter for Dig­i­tal Democ­ra­cy.

    ...

    But some crit­ics ques­tioned what effect the move would have in a site that counts sell­ing access to its users’ infor­ma­tion as its biggest mon­ey­mak­er. Face­book, pri­va­cy experts said, nets a vast range of real-time infor­ma­tion — friend­ships, pho­tos, work his­to­ries, inter­ests and con­sumer tastes, as well as mobile, loca­tion and facial-recog­ni­tion data — that adver­tis­ers view as more cur­rent and accu­rate than the bro­ker infor­ma­tion inferred from old receipts and gov­ern­ment logs. What, they ask, would adver­tis­ers need to pay data bro­kers for?

    “We don’t know enough about Face­book’s data trove to know whether their aban­don­ment of Part­ner Cat­e­gories helps users avoid pri­va­cy inva­sions,” said Frank Pasquale, a Uni­ver­si­ty of Mary­land pro­fes­sor who spe­cial­izes in algo­rithms and pri­va­cy. “Even if we did have that knowl­edge, we have lit­tle rea­son to trust Face­book to actu­al­ly fol­low through on it. It may well change course once media atten­tion has gone else­where.”

    So is this announced policy change going to happen? Will it matter if it does? Those are pretty significant questions, and not easy ones to answer given that Facebook's algorithms are largely a black box.

    That said, Josh Marshall might have a significant data point for us on how important the current third-party data sharing arrangement with the data brokerage giants really is to the performance of Facebook's ad targeting: starting in early March, advertisers noticed a significant drop-off in the targeting quality of Facebook's ads. Facebook's ad targeting simply got worse for some reason. And this was early March, before the Cambridge Analytica story hit in mid-March but possibly after Facebook knew the story was coming. So the timing of this observation is interesting, and Marshall has a hunch: Facebook was already experimenting with how its internal advertising algorithm would operate without direct access to the data brokerages, and potentially without access to a lot of other data sources, in anticipation of the new EU regulations and possible new regulations from the US Congress. In other words, Facebook already saw the writing on the wall before the recent wave of Cambridge Analytica revelations went public, has already started the shift to an in-house ad targeting algorithm, and it shows.
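    For a sense of what "noticing a drop-off" looks like from the advertiser's side, here's a minimal monitoring sketch with made-up numbers: track a campaign's daily conversion rate and flag days that fall well below the trailing baseline. The window and threshold are arbitrary choices for illustration:

        from statistics import mean

        # Hypothetical daily conversions per 1,000 impressions for one campaign.
        daily_rate = [52, 49, 51, 50, 53, 48, 51,   # a stable baseline week
                      41, 38, 36, 39, 35, 37, 34]   # an early-March-style drop

        WINDOW, THRESHOLD = 7, 0.85  # flag days below 85% of the trailing-week mean

        for day in range(WINDOW, len(daily_rate)):
            baseline = mean(daily_rate[day - WINDOW:day])
            if daily_rate[day] < THRESHOLD * baseline:
                print(f"day {day}: {daily_rate[day]} vs baseline {baseline:.1f} -> degraded")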

    Now, it's possible that Josh Marshall is correct that Facebook has already started implementing an internal-only ad targeting algorithm, that it's noticeably worse now, but that it gets better in the long run as Facebook improves its third-party-limited algorithm and the advertisers and brokers adapt to a new, less direct data-sharing arrangement. Maybe everyone will adapt and get back up to par. Time will tell.

    But if not, and if the loss of these data-sharing arrangements makes Facebook's ads less effective in the long run (maybe because directly funneling the broker data and a whole bunch of other third-party data into Facebook is far more efficient than anything the indirect methods can replicate), then it's worth noting that this downgrade in Facebook's ad targeting quality would reflect a real form of privacy enhancement and should generally be cheered. It's also a statement on the public utility of the overall data brokerage industry, an industry dedicated to collecting, aggregating, and selling personal data profiles. There's a lot of negative utility in that industry, and this wave of Facebook scandals is just one facet of it. So if Marshall's guess is correct, and this observable drop-off in Facebook ad quality reflects a decision by Facebook to preemptively take third-party data out of its ad targeting algorithms in anticipation of the new EU data privacy laws and future congressional action in the US, let's hope the drop-off is sustained, for our privacy's sake:

    Talk­ing Points Memo
    Edi­tor’s Blog

    Is Face­book In More Trou­ble Than Peo­ple Think?

    By Josh Mar­shall | April 5, 2018 12:45 pm

    For more than a year, Face­book has faced a rolling pub­lic rela­tions deba­cle. Part of this is the Amer­i­can public’s shift­ing atti­tudes toward Big Tech and plat­forms in gen­er­al. But the dri­ving prob­lem has been the way the plat­form was tied up with and per­haps impli­cat­ed in Russia’s attempt to influ­ence the 2016 pres­i­den­tial elec­tion. Users’ trust in the plat­form has been shak­en, politi­cians are threat­en­ing scruti­ny and pos­si­ble reg­u­la­tion, and there’s even a cam­paign to get peo­ple to delete their Face­book accounts. All of this is wide­ly known and we hear more about it every day. But most users, most peo­ple in tech and also Wall Street (which is the source of Facebook’s gar­gan­tu­an val­u­a­tion) don’t yet get the full pic­ture. We know about Facebook’s rep­u­ta­tion­al cri­sis. But peo­ple aren’t ful­ly inter­nal­iz­ing that the cur­rent cri­sis pos­es a poten­tial­ly dire threat to Facebook’s core busi­ness mod­el, its core adver­tis­ing busi­ness.

    Face­book is fun­da­men­tal­ly an adver­tis­ing busi­ness. Almost all of the company’s rev­enue comes from adver­tis­ing that it tar­gets with unpar­al­leled effi­cien­cy to its bil­lions of users. In a media world in which adver­tis­ing rates face almost uni­ver­sal down­ward pres­sure, Facebook’s rates have con­sis­tent­ly risen. Monop­oly pow­er may dri­ve some of that growth. But the key dri­ver is effi­cien­cy. If old-fash­ioned adver­tis­ing shows my adver­tise­ment to 100 peo­ple for every actu­al buy­er and oth­er dig­i­tal plat­forms show it to 30 peo­ple and Face­book shows it to 5 peo­ple, Facebook’s ads are just worth a lot more.

    As long as the rates bear some rela­tion­ship to that effi­cien­cy (those num­bers above are just for illus­tra­tion), I’ll be hap­py to pay it. Because it’s objec­tive­ly worth more. Indeed, as the prices have gone up, Face­book has actu­al­ly got­ten more effi­cient. As one dig­i­tal ad agency exec­u­tive recent­ly told me, even if Face­book jacked up the prices a lot more, his firm would like­ly keep using them just as much because on this cost to effi­cien­cy basis it’s still cheap. This is the basis of Facebook’s astro­nom­i­cal mar­ket cap­i­tal­iza­tion which today rates at over $450 bil­lion, even after some recent revers­es.

    So the mon­ey comes from the adver­tis­ing. And the adver­tis­ing comes from the data and the arti­fi­cial intel­li­gence that crunch­es it and mod­els it into pre­dic­tive effi­cien­cy. But what if there’s a break­down in the data?

    Start­ing in a ear­ly March, a num­ber of mar­keters run­ning sub­stan­tial sums on the Face­book ad engine, who’ve spo­ken to TPM, start­ed notic­ing a new lev­el of plat­form insta­bil­i­ty and reduc­tions in tar­get­ing effi­cien­cy. To under­stand what this means, think about it like how an effi­cient debt or equi­ty mar­ket oper­ates. If there is rel­a­tive­ly accu­rate infor­ma­tion, no big exter­nal shocks and enough buy­ers and sell­ers, pric­ing should have rel­a­tive sta­bil­i­ty and oper­ate with­in cer­tain bands. Account­ing for some rea­son­able amount of bumpi­ness that’s what Facebook’s ad engine has looked like for a few years. But start­ing in March, if you’re down in the trench­es work­ing with the gran­u­lar num­bers, some­thing start­ed to look weird: price oscil­la­tions, reduced tar­get­ing effi­cien­cy and even glitch­es.

    We’ve talked to a num­ber of adver­tis­ers who’ve report­ed this. We’ve also talk to oth­ers who haven’t. But the ones who have tend to be the ones more tight­ly tied to the num­bers and in mar­ket­ing oper­a­tions with tighter ROI (return on invest­ment). Where we’ve seen the most of this is with so-called DTC (direct to con­sumer) mar­keters. Face­book is an amaz­ing­ly large ecosys­tem. And it’s all a black box. So there’s no way for us to talk to a rep­re­sen­ta­tive sam­ple of adver­tis­ers. But some­thing is going on in at least sub­stan­tial sec­tors of Facebook’s ad engine. What is it? Mar­keters who’ve asked main­ly get told it’s their cre­ativ­i­ty. In oth­er words, the ad you’re run­ning isn’t work­ing. Come up with anoth­er ad. Here at TPM, we oper­ate in a dif­fer­ent part of the pro­gram­mat­ic ad uni­verse. You hear com­pa­ra­ble things like that a lot. And it’s hard to ignore. But we’ve talked to peo­ple with dif­fer­ent peo­ple with (by def­i­n­i­tion) dif­fer­ent ads in total­ly dif­fer­ent indus­tries. So that’s not it. Some­thing is hap­pen­ing.

    So what’s up?

    One thing is already being dis­cussed wide­ly in the trade press. In response to the rolling pub­lic rela­tions deba­cle Face­book has already dra­mat­i­cal­ly reduced or has announced that it will reduce adver­tis­ers’ abil­i­ty to use third par­ty ad data on the Face­book plat­form. That is a big deal. What’s that mean?

    ...

    As you know, through your activ­i­ty on Face­book, Face­book col­lects lots of data about you that it then uses to tar­get ads. That’s “Face­book data” (or it’s yours, but you know what I mean). Face­book also allows adver­tis­ers to upload “1st par­ty data”. What’s that? That’s if my book pub­lish­ing com­pa­ny has a list of 50,000 emails, I can upload those emails to Face­book and run ads to those peo­ple. Then there’s “3rd par­ty data”. That’s if the adver­tis­er or Face­book itself goes to anoth­er per­son­al data bro­ker, buys access to that data and pours it into the Face­book ecosys­tem for more effi­cient tar­get­ing.

    If you’re not versed in the world of data and dig­i­tal adver­tis­ing, there’s a ton here to keep up with. But here’s the key. How reliant is Facebook’s adver­tis­ing cash cow on third par­ty data? Not just the third par­ty data that Face­book allows adver­tis­ers to put into its ecosys­tem for bet­ter tar­get­ing (which is now being phased out) but 3rd par­ty data Face­book uses itself to improve its ad tar­get­ing? As one data indus­try exec­u­tive put it to me, sure Face­book can crunch its own data to find out all sorts of things about you. But in a lot of cas­es it may be eas­i­er, cheap­er and in some cas­es sim­ply more effec­tive to buy that data from oth­er sources. We don’t real­ly know – and no one out­side Face­book real­ly knows – how good Facebook’s AI real­ly is at mod­el­ing user data entire­ly on its own with­out oth­er sorts of data mixed in. It’s a black box. It mat­ters a lot in terms of Facebook’s core rev­enue stream.

    Here’s anoth­er ques­tion: when you con­sid­er Facebook’s own data, how reliant is Face­book on ways it col­lects and process­es Face­book data which it may not be able to do any longer either because of new reg­u­la­tions that come into effect lat­er this year for the EU or because of new reg­u­la­tions Con­gress may put into effect as it puts new scruti­ny on Facebook’s behav­ior?

    My hunch is that the answer to most or all of these ques­tions is “a lot more than most peo­ple real­ize.” We already know that Face­book is mak­ing a lot of changes to how it uses data, espe­cial­ly third par­ty data and how it allows adver­tis­ers to use data. Some of this is already pub­lic. Indeed, it’s get­ting dis­cussed a lot in the trade press – par­tic­u­lar how Face­book will imple­ment with and cope with the new regs from the Euro­pean Union. So why all the chop­pi­ness in Facebook’s adver­tis­ing and tar­get­ing met­rics? I sus­pect that Face­book is try­ing to rejig­ger its algo­rithm on the fly more than peo­ple real­ize in order to see if they can get it to work as effec­tive­ly for ads with­out a lot of data sources or data uses they real­ly aren’t sup­posed to be doing or which they sus­pect they’ll lose access to in com­ing reg­u­la­tion. That is the most log­i­cal expla­na­tion of the insta­bil­i­ty in their report­ing.

    If you talk to ad indus­try peo­ple, they treat it as a giv­en that Face­book is already hav­ing to “rebuild their plat­form basi­cal­ly from the ground up” as one top agency exec­u­tive told me, in response to “fake news”, pro­pa­gan­da cam­paigns, pri­va­cy scruti­ny, etc. – all the stuff we’ve read about over recent months. But it’s Face­book. They’ll work it out, is what these peo­ple fig­ure. And they’re prob­a­bly right. Face­book is huge, has mas­sive resources and access to the world’s largest audi­ence for any­thing ever. They have oceans of data and a mas­sive leg up on every­one. Down at the more gran­u­lar lev­el though, even in the indus­try press, it is treat­ed as a giv­en that the already pub­licly announced new restric­tions on third par­ty data will like­ly lead to at least some migra­tion of adver­tis­ers to new plat­forms. Gin­ny Mar­vin, a top trade press reporter work­ing at the gran­u­lar ad tech and mar­ket­ing lev­el rather than up in tech big think land, tweet­ed this on March 30th: “FB remov­ing 3P [3rd par­ty] data is a big change for adver­tis­ers. But at FB’s scale, you’re not going to see advts sharply piv­ot else­where en masse. This will look more like a slow mov­ing ship of bud­gets divert­ing to oth­er media if they don’t get per­for­mance they want from FB.”

    For now, as Mar­vin notes, Facebook’s adver­tis­er lock-in, mar­ket pow­er and sim­ple price val­ue make it high­ly unlike­ly that there’s going to be any dra­mat­ic near-term move from Face­book even in the worse case sce­nario. But Face­book isn’t just mak­ing mon­ey hand over fist. It’s mar­ket val­u­a­tion rests on the assump­tion that it will keep mak­ing that amount of mon­ey hand over fist and indeed keep increas­ing the amount of mon­ey it makes hand over fist. Any break­down or sig­nif­i­cant slow­down in that growth and con­sis­ten­cy is a big prob­lem. Years ago, every­one count­ed Face­book out as a true prof­it plat­form, until it exceed­ed everyone’s expec­ta­tions. Now, even with all the bad press, most fig­ure that it’s prof­itable for­ev­er. Both con­ven­tion­al wis­doms were wrong. For now, keep in mind that Face­book isn’t just deal­ing with a rep­u­ta­tion­al cri­sis. It’s hav­ing to clean up the rep­u­ta­tion­al mess by rejig­ger­ing parts of its core rev­enue stream it’s not clear it real­ly knows how to do. That cre­ates a lot of unpre­dictabil­i­ty. More than most peo­ple seem to real­ize.

    ———-

    “Is Face­book In More Trou­ble Than Peo­ple Think?” by Josh Mar­shall; Talk­ing Points Memo; 04/05/2018

    “For more than a year, Face­book has faced a rolling pub­lic rela­tions deba­cle. Part of this is the Amer­i­can public’s shift­ing atti­tudes toward Big Tech and plat­forms in gen­er­al. But the dri­ving prob­lem has been the way the plat­form was tied up with and per­haps impli­cat­ed in Russia’s attempt to influ­ence the 2016 pres­i­den­tial elec­tion. Users’ trust in the plat­form has been shak­en, politi­cians are threat­en­ing scruti­ny and pos­si­ble reg­u­la­tion, and there’s even a cam­paign to get peo­ple to delete their Face­book accounts. All of this is wide­ly known and we hear more about it every day. But most users, most peo­ple in tech and also Wall Street (which is the source of Facebook’s gar­gan­tu­an val­u­a­tion) don’t yet get the full pic­ture. We know about Facebook’s rep­u­ta­tion­al cri­sis. But peo­ple aren’t ful­ly inter­nal­iz­ing that the cur­rent cri­sis pos­es a poten­tial­ly dire threat to Facebook’s core busi­ness mod­el, its core adver­tis­ing busi­ness.”

    As Josh Mar­shall points out, if Face­book real­ly does have to turn off the third-par­ty data spig­ot, the ques­tion of what this will actu­al­ly do to the qual­i­ty of its ad tar­get­ing is a mas­sive ques­tion. The impor­tance of the direct third-par­ty data shar­ing arrange­ment is one of the big ques­tions swirling around Face­book for both Face­book’s investors (from a price per share stand­point) and the pub­lic (from a pub­lic pri­va­cy stand­point). The fact that the EU’s new data pri­va­cy rules are hit­ting Face­book in Europe right when the Cam­bridge Ana­lyt­i­ca scan­dal starts play­ing out in the US and threat­ens to snow­ball into a larg­er scan­dal about Face­book’s busi­ness mod­el in gen­er­al just makes it a big­ger ques­tion for Face­book.

    And it's a crisis for Facebook that will be numerically reflected in one key measure pointed out by Marshall: the number of advertisements that need to be shown to trigger a sale on Facebook compared to other platforms. In his illustration it's a 5-to-1 ratio for Facebook vs. a 30-to-1 ratio for other digital platforms and 100-to-1 for traditional ads. Facebook really is much better at targeting its ads than even its digital peers. So when Facebook gets worse at targeting its ads, that amounts to real privacy gains, because Facebook is one of the biggest and most cutting-edge ad targeting platforms around. This is why Facebook is worth over $450 billion:

    ...
    Face­book is fun­da­men­tal­ly an adver­tis­ing busi­ness. Almost all of the company’s rev­enue comes from adver­tis­ing that it tar­gets with unpar­al­leled effi­cien­cy to its bil­lions of users. In a media world in which adver­tis­ing rates face almost uni­ver­sal down­ward pres­sure, Facebook’s rates have con­sis­tent­ly risen. Monop­oly pow­er may dri­ve some of that growth. But the key dri­ver is effi­cien­cy. If old-fash­ioned adver­tis­ing shows my adver­tise­ment to 100 peo­ple for every actu­al buy­er and oth­er dig­i­tal plat­forms show it to 30 peo­ple and Face­book shows it to 5 peo­ple, Facebook’s ads are just worth a lot more.

    As long as the rates bear some rela­tion­ship to that effi­cien­cy (those num­bers above are just for illus­tra­tion), I’ll be hap­py to pay it. Because it’s objec­tive­ly worth more. Indeed, as the prices have gone up, Face­book has actu­al­ly got­ten more effi­cient. As one dig­i­tal ad agency exec­u­tive recent­ly told me, even if Face­book jacked up the prices a lot more, his firm would like­ly keep using them just as much because on this cost to effi­cien­cy basis it’s still cheap. This is the basis of Facebook’s astro­nom­i­cal mar­ket cap­i­tal­iza­tion which today rates at over $450 bil­lion, even after some recent revers­es.
    ...

    “If old-fash­ioned adver­tis­ing shows my adver­tise­ment to 100 peo­ple for every actu­al buy­er and oth­er dig­i­tal plat­forms show it to 30 peo­ple and Face­book shows it to 5 peo­ple, Facebook’s ads are just worth a lot more”

    And that's why this is a pretty big story if there's a real drop in Facebook's ad targeting quality. Facebook is wildly ahead of almost all of its competition. Only Google and governments are going to compete with what Facebook knows about us all. So if Facebook effectively knows less about us, as reflected in the drop in ad targeting observed starting in early March, that reflects a real de facto increase in public privacy. And it's also a big story from a business standpoint because it's not just about Facebook: it's about the entire data brokerage industry. There's a large part of the modern US economy potentially tied into this Facebook scandal, a scandal that now extends beyond the Cambridge Analytica app situation and has led to Facebook declaring the phaseout of its Partner Categories program. Is this ushering in a sea change in the data brokerage industry? If so, that's big.
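    Marshall's illustrative ratios translate directly into what an advertiser can afford to pay per impression, which is where the valuation story comes from. A quick worked example using his numbers and a hypothetical $50 profit per sale:

        # Impressions needed per sale, per Marshall's illustrative ratios.
        impressions_per_sale = {"traditional": 100, "other digital": 30, "Facebook": 5}
        profit_per_sale = 50.0  # hypothetical margin on one converted customer

        for channel, n in impressions_per_sale.items():
            # Break-even CPM: the most you could pay per 1,000 impressions
            # before the channel stops paying for itself.
            breakeven_cpm = profit_per_sale / n * 1000
            print(f"{channel}: break-even at ${breakeven_cpm:,.0f} CPM")

    With these made-up margins, a Facebook impression is worth twenty times a traditional one to the same advertiser ($10,000 vs. $500 per thousand), which is exactly why a sustained drop in targeting efficiency would be a valuation problem and not just a PR problem.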

    Face­book was going to have a sea change in how it did busi­ness in the EU thanks to the new data pri­va­cy laws, but it’s this Cam­bridge Ana­lyt­i­ca scan­dal that appears to be dri­ving the like­li­hood of sea change in the US mar­ket too. And that’s part of why it’s notable if Face­book real­ly did start rejig­ger­ing its algo­rithms with­out that third-par­ty data in ear­ly March, poten­tial­ly in antic­i­pa­tion of this flur­ry of bad press, and then the ad tar­get­ing sud­den­ly got worse. Because if it turns out that the loss of the third-par­ty data makes Face­book’s ad tar­get­ing worse, we should note that. And ask our­selves whether or not mak­ing Face­book even worse at tar­get­ing ads would be desir­able from a pub­lic pri­va­cy per­spec­tive. The more Face­book sucks at ads the bet­ter Face­book is for every­one from a pri­va­cy per­spec­tive. It’s one of the fun­da­men­tal con­tra­dic­tions of Face­book’s busi­ness mod­el that this Cam­bridge Ana­lyt­i­ca scan­dal risks expos­ing to the pub­lic:

    ...
    So the mon­ey comes from the adver­tis­ing. And the adver­tis­ing comes from the data and the arti­fi­cial intel­li­gence that crunch­es it and mod­els it into pre­dic­tive effi­cien­cy. But what if there’s a break­down in the data?

    Start­ing in a ear­ly March, a num­ber of mar­keters run­ning sub­stan­tial sums on the Face­book ad engine, who’ve spo­ken to TPM, start­ed notic­ing a new lev­el of plat­form insta­bil­i­ty and reduc­tions in tar­get­ing effi­cien­cy. To under­stand what this means, think about it like how an effi­cient debt or equi­ty mar­ket oper­ates. If there is rel­a­tive­ly accu­rate infor­ma­tion, no big exter­nal shocks and enough buy­ers and sell­ers, pric­ing should have rel­a­tive sta­bil­i­ty and oper­ate with­in cer­tain bands. Account­ing for some rea­son­able amount of bumpi­ness that’s what Facebook’s ad engine has looked like for a few years. But start­ing in March, if you’re down in the trench­es work­ing with the gran­u­lar num­bers, some­thing start­ed to look weird: price oscil­la­tions, reduced tar­get­ing effi­cien­cy and even glitch­es.

    We’ve talked to a num­ber of adver­tis­ers who’ve report­ed this. We’ve also talk to oth­ers who haven’t. But the ones who have tend to be the ones more tight­ly tied to the num­bers and in mar­ket­ing oper­a­tions with tighter ROI (return on invest­ment). Where we’ve seen the most of this is with so-called DTC (direct to con­sumer) mar­keters. Face­book is an amaz­ing­ly large ecosys­tem. And it’s all a black box. So there’s no way for us to talk to a rep­re­sen­ta­tive sam­ple of adver­tis­ers. But some­thing is going on in at least sub­stan­tial sec­tors of Facebook’s ad engine. What is it? Mar­keters who’ve asked main­ly get told it’s their cre­ativ­i­ty. In oth­er words, the ad you’re run­ning isn’t work­ing. Come up with anoth­er ad. Here at TPM, we oper­ate in a dif­fer­ent part of the pro­gram­mat­ic ad uni­verse. You hear com­pa­ra­ble things like that a lot. And it’s hard to ignore. But we’ve talked to peo­ple with dif­fer­ent peo­ple with (by def­i­n­i­tion) dif­fer­ent ads in total­ly dif­fer­ent indus­tries. So that’s not it. Some­thing is hap­pen­ing.

    So what’s up?

    One thing is already being dis­cussed wide­ly in the trade press. In response to the rolling pub­lic rela­tions deba­cle Face­book has already dra­mat­i­cal­ly reduced or has announced that it will reduce adver­tis­ers’ abil­i­ty to use third par­ty ad data on the Face­book plat­form. That is a big deal. What’s that mean?
    ...

    And as Josh Marshall points out, the impact of losing this third-party data on Facebook's ad targeting algorithms is largely speculative, because we know so little about what Facebook knows about us without that third-party data. Facebook is a black box:

    ...
    As you know, through your activ­i­ty on Face­book, Face­book col­lects lots of data about you that it then uses to tar­get ads. That’s “Face­book data” (or it’s yours, but you know what I mean). Face­book also allows adver­tis­ers to upload “1st par­ty data”. What’s that? That’s if my book pub­lish­ing com­pa­ny has a list of 50,000 emails, I can upload those emails to Face­book and run ads to those peo­ple. Then there’s “3rd par­ty data”. That’s if the adver­tis­er or Face­book itself goes to anoth­er per­son­al data bro­ker, buys access to that data and pours it into the Face­book ecosys­tem for more effi­cient tar­get­ing.

    If you’re not versed in the world of data and dig­i­tal adver­tis­ing, there’s a ton here to keep up with. But here’s the key. How reliant is Facebook’s adver­tis­ing cash cow on third par­ty data? Not just the third par­ty data that Face­book allows adver­tis­ers to put into its ecosys­tem for bet­ter tar­get­ing (which is now being phased out) but 3rd par­ty data Face­book uses itself to improve its ad tar­get­ing? As one data indus­try exec­u­tive put it to me, sure Face­book can crunch its own data to find out all sorts of things about you. But in a lot of cas­es it may be eas­i­er, cheap­er and in some cas­es sim­ply more effec­tive to buy that data from oth­er sources. We don’t real­ly know – and no one out­side Face­book real­ly knows – how good Facebook’s AI real­ly is at mod­el­ing user data entire­ly on its own with­out oth­er sorts of data mixed in. It’s a black box. It mat­ters a lot in terms of Facebook’s core rev­enue stream.
    ...

    But we might get an answer to the question of whether Facebook needs that third-party data to achieve the ad targeting proficiency it has today, thanks to those new EU regulations and the real possibility of some sort of congressional action resulting from the Cambridge Analytica scandal. And that, of course, is why Josh Marshall suspects the reported drop in Facebook's ad targeting reflects Facebook already preparing for coming regulation:

    ...
    Here’s anoth­er ques­tion: when you con­sid­er Facebook’s own data, how reliant is Face­book on ways it col­lects and process­es Face­book data which it may not be able to do any longer either because of new reg­u­la­tions that come into effect lat­er this year for the EU or because of new reg­u­la­tions Con­gress may put into effect as it puts new scruti­ny on Facebook’s behav­ior?

    My hunch is that the answer to most or all of these ques­tions is “a lot more than most peo­ple real­ize.” We already know that Face­book is mak­ing a lot of changes to how it uses data, espe­cial­ly third par­ty data and how it allows adver­tis­ers to use data. Some of this is already pub­lic. Indeed, it’s get­ting dis­cussed a lot in the trade press – par­tic­u­lar how Face­book will imple­ment with and cope with the new regs from the Euro­pean Union. So why all the chop­pi­ness in Facebook’s adver­tis­ing and tar­get­ing met­rics? I sus­pect that Face­book is try­ing to rejig­ger its algo­rithm on the fly more than peo­ple real­ize in order to see if they can get it to work as effec­tive­ly for ads with­out a lot of data sources or data uses they real­ly aren’t sup­posed to be doing or which they sus­pect they’ll lose access to in com­ing reg­u­la­tion. That is the most log­i­cal expla­na­tion of the insta­bil­i­ty in their report­ing.
    ...

    And if Josh Marshall's hunch is correct and Facebook really did start rejiggering its ad targeting algorithms in anticipation of coming congressional regulation (which suggests Facebook anticipated a very negative public response to the yet-to-be-released Cambridge Analytica story), we have to wonder just how many other privacy-violating schemes Facebook has been pursuing with third parties beyond the data brokerage giants like Acxiom or Experian. What other classes of third-party providers might Facebook be incorporating into its algorithms?

    Well, here's a chilling example of the kind of third-party data-sharing partnership Facebook might be interested in: hospital record metadata. Data like what diseases people have, the medications they're on, and when they visited the hospital. From several major hospitals, including Stanford Medical School's.

    Facebook says it would be for research purposes only, conducted by the medical community, but Facebook would have been able to deanonymize the data. And it's kind of obscene, because Facebook says the plan for protecting everyone's privacy is to use "hashing", where each patient is assigned a seemingly random number derived mathematically from something like the patient's name, and to restrict the anonymized data to the medical research community, so supposedly no one's privacy is at risk. But applying the same hashing scheme to both the Facebook data set and the hospital data set means Facebook can match the hospital records up with its own users. Facebook is effectively trying to get deanonymizable patient health data from hospitals. It's a disturbing example of the kind of third-party data Facebook is interested in.
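    To see why "hashing" offers no real protection here, consider a minimal sketch with fake records. Because a hash is deterministic, both sides compute the same token for the same person, and the tokens join the "anonymized" records right back together for anyone who also holds the underlying identifiers:

        import hashlib

        def pseudonym(identifier: str) -> str:
            """The 'hashing' step: deterministic, so both parties derive
            the same token from the same underlying identifier."""
            return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

        # Hospital side: 'anonymized' records keyed by hashed identifier.
        hospital = {pseudonym("jane.doe@example.com"):
                    {"condition": "heart disease", "hospital_visits": 3}}

        # Platform side: it holds the raw identifiers, so it can compute
        # the very same tokens from its own user table.
        platform_users = {"jane.doe@example.com": {"age": 50, "married": True}}

        for email, profile in platform_users.items():
            token = pseudonym(email)
            if token in hospital:
                # The join re-identifies the patient.
                print(email, "->", {**profile, **hospital[token]})

    Hashing hides the identifier from outsiders, but it is explicitly designed to let the two partners link their records, which is the whole privacy problem in miniature.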

    And there's no real reason to believe Facebook wouldn't wildly abuse the data, probably turning the patients of those hospitals into focus groups for algorithmic testing that uses their medical records to pitch ads. Which would probably freak those people out. Facebook + hospital data = yikes.

    And this plan was being pur­sued last month. The Cam­bridge Ana­lyt­i­ca scan­dal dis­rupt­ed active talks. The plan was “put on pause” by Face­book last week in response to the Cam­bridge Ana­lyt­i­ca out­rage. Still, that’s just “on pause”. So it sounds like the plan is still “on” and we should expect a con­tin­ued push into the med­ical record space by Face­book.

    Facebook's pitch was to combine a health system's data on its patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with Facebook's data on the same person (such as: user is age 50, married with 3 kids, English isn't a primary language, actively engages with the community by sending a lot of messages). The research project would then try to use this combined information to improve patient care in some way, with an initial focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn't have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.
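    A hedged sketch of what such a decision-support rule might look like once the two records are joined. Every field name and threshold below is invented for illustration; nothing here is from Facebook's actual proposal beyond the general idea of flagging weak social support after a hospital stay:

        # Hypothetical joined record combining the two data sets described above.
        patient = {
            "age": 50, "condition": "heart disease", "medications": 2,
            "hospital_visits": 3,        # health-system fields
            "nearby_close_friends": 1,   # Facebook-derived fields
            "messages_per_week": 4,
        }

        def flag_for_followup(record, min_friends=3, min_messages=10):
            """Toy version of the 'send a nurse to check in' idea: flag
            patients whose social-support signals look weak."""
            low_support = (record["nearby_close_friends"] < min_friends
                           and record["messages_per_week"] < min_messages)
            return low_support and record["hospital_visits"] >= 2

        print(flag_for_followup(patient))  # True -> schedule a follow-up visit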

    In other words, Facebook was setting up a research project dedicated to developing hospital decision-making support that utilizes Facebook's pool of personalized data on people. Which is a path to plugging Facebook into the hospital system. Yikes:

    CNBC

    Face­book sent a doc­tor on a secret mis­sion to ask hos­pi­tals to share patient data

    * Face­book was in talks with top hos­pi­tals and oth­er med­ical groups as recent­ly as last month about a pro­pos­al to share data about the social net­works of their most vul­ner­a­ble patients.
    * The idea was to build pro­files of peo­ple that includ­ed their med­ical con­di­tions, infor­ma­tion that health sys­tems have, as well as social and eco­nom­ic fac­tors gleaned from Face­book.
    * Face­book said the project is on hia­tus so it can focus on “oth­er impor­tant work, includ­ing doing a bet­ter job of pro­tect­ing peo­ple’s data.”

    Christi­na Farr | @chrissyfarr
    Pub­lished 2:01 PM ET Thu, 5 April 2018 Updat­ed 11:46 AM ET Fri, 6 April 2018

    Face­book has asked sev­er­al major U.S. hos­pi­tals to share anonymized data about their patients, such as ill­ness­es and pre­scrip­tion info, for a pro­posed research project. Face­book was intend­ing to match it up with user data it had col­lect­ed, and help the hos­pi­tals fig­ure out which patients might need spe­cial care or treat­ment.

    The pro­pos­al nev­er went past the plan­ning phas­es and has been put on pause after the Cam­bridge Ana­lyt­i­ca data leak scan­dal raised pub­lic con­cerns over how Face­book and oth­ers col­lect and use detailed infor­ma­tion about Face­book users.

    “This work has not pro­gressed past the plan­ning phase, and we have not received, shared, or ana­lyzed any­one’s data,” a Face­book spokesper­son told CNBC.

    But as recent­ly as last month, the com­pa­ny was talk­ing to sev­er­al health orga­ni­za­tions, includ­ing Stan­ford Med­ical School and Amer­i­can Col­lege of Car­di­ol­o­gy, about sign­ing the data-shar­ing agree­ment.

    While the data shared would obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as the patien­t’s name, Face­book pro­posed using a com­mon com­put­er sci­ence tech­nique called “hash­ing” to match indi­vid­u­als who exist­ed in both sets. Face­book says the data would have been used only for research con­duct­ed by the med­ical com­mu­ni­ty.

    The project could have raised new con­cerns about the mas­sive amount of data Face­book col­lects about its users, and how this data can be used in ways users nev­er expect­ed.

    ...

    Led out of Build­ing 8

    The explorato­ry effort to share med­ical-relat­ed data was led by an inter­ven­tion­al car­di­ol­o­gist called Fred­dy Abnousi, who describes his role on LinkedIn as “lead­ing top-secret projects.” It was under the purview of Regi­na Dugan, the head of Face­book’s “Build­ing 8” exper­i­ment projects group, before she left in Octo­ber 2017.

    Face­book’s pitch, accord­ing to two peo­ple who heard it and one who is famil­iar with the project, was to com­bine what a health sys­tem knows about its patients (such as: per­son has heart dis­ease, is age 50, takes 2 med­ica­tions and made 3 trips to the hos­pi­tal this year) with what Face­book knows (such as: user is age 50, mar­ried with 3 kids, Eng­lish isn’t a pri­ma­ry lan­guage, active­ly engages with the com­mu­ni­ty by send­ing a lot of mes­sages).

    The project would then fig­ure out if this com­bined infor­ma­tion could improve patient care, ini­tial­ly with a focus on car­dio­vas­cu­lar health. For instance, if Face­book could deter­mine that an elder­ly patient does­n’t have many near­by close friends or much com­mu­ni­ty sup­port, the health sys­tem might decide to send over a nurse to check in after a major surgery.

    The peo­ple declined to be named as they were asked to sign con­fi­den­tial­i­ty agree­ments.

    Face­book pro­vid­ed a quote from Cath­leen Gates, the inter­im CEO of the Amer­i­can Col­lege of Car­di­ol­o­gy, explain­ing the pos­si­ble ben­e­fits of the plan:

    “For the first time in his­to­ry, peo­ple are shar­ing infor­ma­tion about them­selves online in ways that may help deter­mine how to improve their health. As part of its mis­sion to trans­form car­dio­vas­cu­lar care and improve heart health, the Amer­i­can Col­lege of Car­di­ol­o­gy has been engaged in dis­cus­sions with Face­book around the use of anonymized Face­book data, cou­pled with anonymized ACC data, to fur­ther sci­en­tif­ic research on the ways social media can aid in the pre­ven­tion and treat­ment of heart disease—the #1 cause of death in the world. This part­ner­ship is in the very ear­ly phas­es as we work on both sides to ensure pri­va­cy, trans­paren­cy and sci­en­tif­ic rig­or. No data has been shared between any par­ties.”

    Health sys­tems are noto­ri­ous­ly care­ful about shar­ing patient health infor­ma­tion, in part because of state and fed­er­al patient pri­va­cy laws that are designed to ensure that peo­ple’s sen­si­tive med­ical infor­ma­tion does­n’t end up in the wrong hands.

    To address these pri­va­cy laws and con­cerns, Face­book pro­posed to obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as names, in the data being shared by both sides.

    How­ev­er, the com­pa­ny pro­posed using a com­mon cryp­to­graph­ic tech­nique called hash­ing to match indi­vid­u­als who were in both data sets. That way, both par­ties would be able to tell when a spe­cif­ic set of Face­book data matched up with a spe­cif­ic set of patient data.

    The issue of patient con­sent did not come up in the ear­ly dis­cus­sions, one of the peo­ple said. Crit­ics have attacked Face­book in the past for doing research on users with­out their per­mis­sion. Notably, in 2014, Face­book manip­u­lat­ed hun­dreds of thou­sands of peo­ple’s news feeds to study whether cer­tain types of con­tent made peo­ple hap­pi­er or sad­der. Face­book lat­er apol­o­gized for the study.

    Health pol­i­cy experts say that this health ini­tia­tive would be prob­lem­at­ic if Face­book did not think through the pri­va­cy impli­ca­tions.

    “Con­sumers would­n’t have assumed their data would be used in this way,” said Aneesh Chopra, pres­i­dent of a health soft­ware com­pa­ny spe­cial­iz­ing in patient data called Care­Jour­ney and the for­mer White House chief tech­nol­o­gy offi­cer.

    “If Face­book moves ahead (with its plans), I would be wary of efforts that repur­pose user data with­out explic­it con­sent.”

    When asked about the plans, Face­book pro­vid­ed the fol­low­ing state­ment:

    “The med­ical indus­try has long under­stood that there are gen­er­al health ben­e­fits to hav­ing a close-knit cir­cle of fam­i­ly and friends. But deep­er research into this link is need­ed to help med­ical pro­fes­sion­als devel­op spe­cif­ic treat­ment and inter­ven­tion plans that take social con­nec­tion into account.”

    “With this in mind, last year Face­book began dis­cus­sions with lead­ing med­ical insti­tu­tions, includ­ing the Amer­i­can Col­lege of Car­di­ol­o­gy and the Stan­ford Uni­ver­si­ty School of Med­i­cine, to explore whether sci­en­tif­ic research using anonymized Face­book data could help the med­ical com­mu­ni­ty advance our under­stand­ing in this area. This work has not pro­gressed past the plan­ning phase, and we have not received, shared, or ana­lyzed any­one’s data.”

    “Last month we decided that we should pause these discussions so we can focus on other important work, including doing a better job of protecting people's data and being clearer with them about how that data is used in our products and services.”

    ...

    ———-

    “Face­book sent a doc­tor on a secret mis­sion to ask hos­pi­tals to share patient data” by Christi­na Farr; CNBC; 04/05/2018

    “Face­book has asked sev­er­al major U.S. hos­pi­tals to share anonymized data about their patients, such as ill­ness­es and pre­scrip­tion info, for a pro­posed research project. Face­book was intend­ing to match it up with user data it had col­lect­ed, and help the hos­pi­tals fig­ure out which patients might need spe­cial care or treat­ment.”

    Patient data from hospitals. It's Facebook's brave new third-party data frontier. For now it's under the auspices of medical research, but it's research for the purpose of showing Facebook's utility in medical decision-support, which is to say research to demonstrate the utility of sharing patient information with Facebook. That was the general pitch Facebook was making to several major US hospitals, including Stanford. And it's a plan that, according to Facebook, was still being pursued last month and has merely been “put on pause” in the wake of the Cambridge Analytica scandal:

    ...
    The pro­pos­al nev­er went past the plan­ning phas­es and has been put on pause after the Cam­bridge Ana­lyt­i­ca data leak scan­dal raised pub­lic con­cerns over how Face­book and oth­ers col­lect and use detailed infor­ma­tion about Face­book users.

    “This work has not pro­gressed past the plan­ning phase, and we have not received, shared, or ana­lyzed any­one’s data,” a Face­book spokesper­son told CNBC.

    But as recent­ly as last month, the com­pa­ny was talk­ing to sev­er­al health orga­ni­za­tions, includ­ing Stan­ford Med­ical School and Amer­i­can Col­lege of Car­di­ol­o­gy, about sign­ing the data-shar­ing agree­ment.
    ...

    The way Face­book pitched it, the anonymized data from Face­book and the anonymized data from the hos­pi­tals would be com­bined and used for med­ical com­mu­ni­ty research (research into Face­book as a patient care deci­sion-sup­port part­ner):

    ...
    Face­book’s pitch, accord­ing to two peo­ple who heard it and one who is famil­iar with the project, was to com­bine what a health sys­tem knows about its patients (such as: per­son has heart dis­ease, is age 50, takes 2 med­ica­tions and made 3 trips to the hos­pi­tal this year) with what Face­book knows (such as: user is age 50, mar­ried with 3 kids, Eng­lish isn’t a pri­ma­ry lan­guage, active­ly engages with the com­mu­ni­ty by send­ing a lot of mes­sages).

    The project would then fig­ure out if this com­bined infor­ma­tion could improve patient care, ini­tial­ly with a focus on car­dio­vas­cu­lar health. For instance, if Face­book could deter­mine that an elder­ly patient does­n’t have many near­by close friends or much com­mu­ni­ty sup­port, the health sys­tem might decide to send over a nurse to check in after a major surgery.
    ...

    But what Facebook doesn't acknowledge in that pitch is that the technique it proposes for anonymizing the data only anonymizes it to everyone except the hospital and Facebook. Facebook can easily deanonymize the hospital data once it gets its hands on it. The medical researchers aren't the privacy threat. For them the data really is anonymized, because they don't know the patients or the Facebook profiles; they're just hashed IDs. But Facebook sure as hell is a privacy threat, because it's Facebook with its hands on the deanonymizable data:

    ...
    While the data shared would obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as the patien­t’s name, Face­book pro­posed using a com­mon com­put­er sci­ence tech­nique called “hash­ing” to match indi­vid­u­als who exist­ed in both sets. Face­book says the data would have been used only for research con­duct­ed by the med­ical com­mu­ni­ty.

    The project could have raised new con­cerns about the mas­sive amount of data Face­book col­lects about its users, and how this data can be used in ways users nev­er expect­ed.

    ...

    Health sys­tems are noto­ri­ous­ly care­ful about shar­ing patient health infor­ma­tion, in part because of state and fed­er­al patient pri­va­cy laws that are designed to ensure that peo­ple’s sen­si­tive med­ical infor­ma­tion does­n’t end up in the wrong hands.

    To address these pri­va­cy laws and con­cerns, Face­book pro­posed to obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as names, in the data being shared by both sides.

    How­ev­er, the com­pa­ny pro­posed using a com­mon cryp­to­graph­ic tech­nique called hash­ing to match indi­vid­u­als who were in both data sets. That way, both par­ties would be able to tell when a spe­cif­ic set of Face­book data matched up with a spe­cif­ic set of patient data.
    ...
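
    To make that asymmetry concrete, here's a minimal sketch of how hash-based record linkage works in general (in Python; the salt, names, and record fields are purely illustrative assumptions, not details from the reporting). Both sides hash the same identifier, matching hashes join the two “anonymized” records, and yet any party that already holds the raw identifiers can re-identify every hash by simply re-hashing the people it knows:

    import hashlib

    def hash_id(identifier: str, salt: str = "shared-salt") -> str:
        # Hash a personal identifier (e.g. name + birth date) so the raw
        # value is never exchanged between the two parties.
        return hashlib.sha256((salt + identifier.lower()).encode()).hexdigest()

    # Hypothetical hospital-side records, keyed by hashed identity.
    hospital = {hash_id("jane doe 1968-01-02"): {"dx": "heart disease", "meds": 2}}

    # Hypothetical Facebook-side records, hashed the same way.
    social = {hash_id("jane doe 1968-01-02"): {"close_friends_nearby": 1, "msgs_per_day": 40}}

    # The "anonymized" join: any hash present in both data sets links the records.
    for h in hospital.keys() & social.keys():
        print("matched", h[:12], {**hospital[h], **social[h]})

    # The catch: a party that already knows the raw identifiers can reverse
    # the hashes with a simple dictionary attack, by re-hashing known people.
    known_users = ["john roe 1970-05-06", "jane doe 1968-01-02"]
    reverse = {hash_id(u): u for u in known_users}
    for h in hospital:
        print("re-identified:", reverse.get(h, "unknown"))

    The hashes only protect the data from outsiders who don't hold an identifier list. Facebook, which holds its own user records, is not such an outsider.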

    And note how the issue of patient con­sent did­n’t come up in these ear­ly dis­cus­sions, sug­gest­ing that Face­book is try­ing to work out a sit­u­a­tion where peo­ple don’t know their patient record data was hand­ed over to Face­book:

    ...
    The issue of patient con­sent did not come up in the ear­ly dis­cus­sions, one of the peo­ple said. Crit­ics have attacked Face­book in the past for doing research on users with­out their per­mis­sion. Notably, in 2014, Face­book manip­u­lat­ed hun­dreds of thou­sands of peo­ple’s news feeds to study whether cer­tain types of con­tent made peo­ple hap­pi­er or sad­der. Face­book lat­er apol­o­gized for the study.

    Health pol­i­cy experts say that this health ini­tia­tive would be prob­lem­at­ic if Face­book did not think through the pri­va­cy impli­ca­tions.

    “Con­sumers would­n’t have assumed their data would be used in this way,” said Aneesh Chopra, pres­i­dent of a health soft­ware com­pa­ny spe­cial­iz­ing in patient data called Care­Jour­ney and the for­mer White House chief tech­nol­o­gy offi­cer.

    “If Face­book moves ahead (with its plans), I would be wary of efforts that repur­pose user data with­out explic­it con­sent.”
    ...

    And, of course, it's Facebook's mad science “Building 8” R&D group that is behind this proposal. The same group behind projects like the mind-reading human-to-computer interface technology (so Facebook can literally data mine your brain activity). And the same R&D group that was until recently led by former DARPA chief Regina Dugan, who left last year with a cryptic message about stepping away to be “purposeful about what's next, thoughtful about new ways to contribute in times of disruption.” This is next-generation Facebook stuff:

    ...
    The explorato­ry effort to share med­ical-relat­ed data was led by an inter­ven­tion­al car­di­ol­o­gist called Fred­dy Abnousi, who describes his role on LinkedIn as “lead­ing top-secret projects.” It was under the purview of Regi­na Dugan, the head of Face­book’s “Build­ing 8” exper­i­ment projects group, before she left in Octo­ber 2017.
    ...

    It's a reminder that Facebook's R&D teams are probably working on all sorts of new ways to tap into data-rich third-party sources. Hospitals are merely one particularly data-rich example of the problem.

    And if Facebook really does cut out third-party data brokers from its algorithms, let's not forget that Facebook will probably use that as an excuse, and an imperative, to reach out to all sorts of niche third-party data providers for direct access. Like hospitals. Don't forget that the above plan was merely “put on pause”. They want to do more stuff like this going forward. And why not, if they can get hospitals, and any other kind of institution they can convince to hand over our data, to give this kind of data out. This is how Facebook can go “offline”: with direct data-sharing services, like patient care decision-support services, one field of institutions at a time. Hospitals are just one example.

    So given that Facebook faces potential congressional action and new regulations, it's going to be important to keep in mind that those regulations will have to cover more than just data brokerage giants like Experian. Because Facebook is interested in what you tell your doctor too. And presumably lots of other “services” where it fuses its data about you with another data source for combined decision-making support. The more Facebook promises to cut out third-party data, the more Facebook is going to try to directly collect “offline” data by fusing itself with other facets of our lives. It's really quite disturbing.

    And who knows who else in the data brokerage industry might try to follow Facebook's lead. Will Google also want to get into the patient care decision-support market? Third-party data-brokerage decision-making support could potentially be applied to a lot more than just the medical sector. It's a creepy new profit frontier.

    Beyond that, how else might Face­book attempt to replace the “offline” third-par­ty data it’s pledg­ing to phase out over the next six months? We’ll see, but we can be sure that Face­book is work­ing on some­thing.

    Posted by Pterrafractyl | April 8, 2018, 1:01 am
  5. Here's a reminder that the proposal to combine Facebook data with patient hospital data (ostensibly for patient care decision-support purposes, but also likely so Facebook can get its hands on patient medical record information) isn't the only project Facebook has put ‘on pause' (but not canceled) in the wake of the Cambridge Analytica scandal. For example, there's a new hardware product for your home that Facebook is planning on rolling out later this year.

    It's a “smart speaker” like the kind Amazon and Google already have on sale. A smart speaker that will sit in your home, listen to everything, answer questions, and schedule things. Potentially with cameras. Your personal home assistant. That's the market Facebook is getting into later this year. But thanks to the public relations nightmare Facebook is experiencing at the moment, the announcement of this new smart speaker at its developers conference in May has been cancelled. The rollout, though, still sounds like it's planned for this fall. So that smart speaker is a useful reminder to the US public and regulators of the future direction Facebook is planning on heading: in-home “offline” data collection using internet-connected smart devices:

    Bloomberg Tech­nol­o­gy

    Face­book Delays Home-Speak­er Unveil Amid Data Cri­sis

    By Sarah Frier
    March 27, 2018, 7:34 PM CDT

    * Social net­work had hoped to show off devices at F8 in May
    * Com­pa­ny still plans to launch prod­ucts lat­er this year

    Face­book Inc. has decid­ed not to unveil new home prod­ucts at its major devel­op­er con­fer­ence in May, in part because the pub­lic is cur­rent­ly so out­raged about the social network’s data-pri­va­cy prac­tices, accord­ing to peo­ple famil­iar with the mat­ter.

    The company’s new hard­ware prod­ucts, con­nect­ed speak­ers with dig­i­tal-assis­tant and video-chat capa­bil­i­ties, are under­go­ing a deep­er review to ensure that they make the right trade-offs regard­ing user data, the peo­ple said. While the hard­ware wasn’t expect­ed to be avail­able until the fall, the com­pa­ny had hoped to pre­view the devices at the largest annu­al gath­er­ing of Face­book devel­op­ers, said the peo­ple, who asked not to be named dis­cussing inter­nal plans.

    The devices are part of Facebook’s plan to become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers. As con­cerns esca­late about Facebook’s col­lec­tion and use of per­son­al data, now may be the wrong time to ask con­sumers to trust it with even more infor­ma­tion by plac­ing a con­nect­ed device in their homes. A Face­book spokes­woman declined to com­ment.

    ...

    The social-media com­pa­ny had already found in focus-group test­ing that users were con­cerned about a Face­book-brand­ed device in their liv­ing rooms, giv­en how much inti­mate data the social net­work col­lects. Face­book still plans to launch the devices lat­er this year.

    At the devel­op­er con­fer­ence, set for May 1, the com­pa­ny will also need to explain new, more restric­tive rules around what kinds of infor­ma­tion app mak­ers can col­lect on their users via Facebook’s ser­vice. The Men­lo Park, Cal­i­for­nia-based com­pa­ny said in a blog post this week that for devel­op­ers, the changes “are not easy,” but are impor­tant to “mit­i­gate any breach of trust with the broad­er devel­op­er ecosys­tem.”

    ———-

    “Face­book Delays Home-Speak­er Unveil Amid Data Cri­sis” by Sarah Frier; Bloomberg Tech­nol­o­gy; 03/27/2018

    “Face­book Inc. has decid­ed not to unveil new home prod­ucts at its major devel­op­er con­fer­ence in May, in part because the pub­lic is cur­rent­ly so out­raged about the social network’s data-pri­va­cy prac­tices, accord­ing to peo­ple famil­iar with the mat­ter.”

    Yeah, it’s under­stand­able that pub­lic out­rage over years of decep­tive and sys­temic mass pri­va­cy vio­la­tions might com­pli­cate the roll out of your new in-home “smart speak­ers” which will be lis­ten­ing to every­thing hap­pen­ing in your home and send­ing that infor­ma­tion back to Face­book. A pause on that grand unveil­ing does seem pru­dent.

    And yet Face­book still plans to actu­al­ly launch its new smart speak­ers lat­er this year:

    ...
    The social-media com­pa­ny had already found in focus-group test­ing that users were con­cerned about a Face­book-brand­ed device in their liv­ing rooms, giv­en how much inti­mate data the social net­work col­lects. Face­book still plans to launch the devices lat­er this year.
    ...

    And that planned roll out of these smart speak­ers lat­er this year is just one ele­ment of Face­book’s plan to “become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers”:

    ...
    The company’s new hard­ware prod­ucts, con­nect­ed speak­ers with dig­i­tal-assis­tant and video-chat capa­bil­i­ties, are under­go­ing a deep­er review to ensure that they make the right trade-offs regard­ing user data, the peo­ple said. While the hard­ware wasn’t expect­ed to be avail­able until the fall, the com­pa­ny had hoped to pre­view the devices at the largest annu­al gath­er­ing of Face­book devel­op­ers, said the peo­ple, who asked not to be named dis­cussing inter­nal plans.

    The devices are part of Facebook’s plan to become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers. As con­cerns esca­late about Facebook’s col­lec­tion and use of per­son­al data, now may be the wrong time to ask con­sumers to trust it with even more infor­ma­tion by plac­ing a con­nect­ed device in their homes. A Face­book spokes­woman declined to com­ment.
    ...

    “The devices are part of Facebook’s plan to become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers.”

    Yep, Face­book has all sorts of plans to become more inti­mate­ly involved with your every­day life. Using arti­fi­cial intel­li­gence. And smart speak­ers. And no pri­va­cy con­cerns, of course.

    And in fairness, this move to sell consumer devices that monitor you for the purpose of offering useful services with the data they collect (and for selling you ads and profiling you) merely follows in the footsteps of companies like Google and Amazon with their wildly popular smart speakers. As the following article notes, a recent Gallup poll found that 22 percent of Americans use “home personal assistants” like Google Home or Amazon Echo. That is a huge percentage of the American public that's already handing out exactly the kind of data Facebook is trying to collect with its new smart speaker.

    And as the following article also notes, if the creepy patents Google and Amazon have already filed are any indication of what we can expect from Facebook, we should expect Facebook to work on things like incorporating smart speakers into smart-home AI systems for monitoring children, with whisper-detection capabilities and the ability to issue verbal commands at the kids. The smart home would replace the television as the technological parent of today's kids, and one of the mega-corporations selling this technology would get audio and visual access to your home. And yes, the existing Google and Amazon patents would incorporate visual data too, since these smart speakers tend to have cameras.

    One patent even involved a scenario where the camera on a smart speaker recognized a t-shirt on the floor, recognized a picture of Will Smith on the shirt, tied that to a database of the person's browsing history to see if they had looked up Will Smith content online, and then served up targeted ads if it found a Will Smith hit. That's a real patent from Google, and that's the kind of Orwellian patent race that Facebook is quietly getting ready to join later this year:

    The New York Times

    Hey, Alexa, What Can You Hear? And What Will You Do With It?

    By SAPNA MAHESHWARI
    MARCH 31, 2018

    Ama­zon ran a com­mer­cial on this year’s Super Bowl that pre­tend­ed its dig­i­tal assis­tant Alexa had tem­porar­i­ly lost her voice. It fea­tured celebri­ties like Rebel Wil­son, Car­di B and even the company’s chief exec­u­tive, Jeff Bezos.

    While the ad riffed on what Alexa can say to users, the more intrigu­ing ques­tion may be what she and oth­er dig­i­tal assis­tants can hear — espe­cial­ly as more peo­ple bring smart speak­ers into their homes.

    Ama­zon and Google, the lead­ing sell­ers of such devices, say the assis­tants record and process audio only after users trig­ger them by push­ing a but­ton or utter­ing a phrase like “Hey, Alexa” or “O.K., Google.” But each com­pa­ny has filed patent appli­ca­tions, many of them still under con­sid­er­a­tion, that out­line an array of pos­si­bil­i­ties for how devices like these could mon­i­tor more of what users say and do. That infor­ma­tion could then be used to iden­ti­fy a person’s desires or inter­ests, which could be mined for ads and prod­uct rec­om­men­da­tions.

    In one set of patent applications, Amazon describes how a “voice sniffer algorithm” could be used on an array of devices, like tablets and e-book readers, to analyze audio almost in real time when it hears words like “love,” “bought” or “dislike.” A diagram included with the application illustrated how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a Wine of the Month Club membership.

    Some patent appli­ca­tions from Google, which also owns the smart home prod­uct mak­er Nest Labs, describe how audio and visu­al sig­nals could be used in the con­text of elab­o­rate smart home setups.

    One appli­ca­tion details how audio mon­i­tor­ing could help detect that a child is engag­ing in “mis­chief” at home by first using speech pat­terns and pitch to iden­ti­fy a child’s pres­ence, one fil­ing said. A device could then try to sense move­ment while lis­ten­ing for whis­pers or silence, and even pro­gram a smart speak­er to “pro­vide a ver­bal warn­ing.”

    A sep­a­rate appli­ca­tion regard­ing per­son­al­iz­ing con­tent for peo­ple while respect­ing their pri­va­cy not­ed that voic­es could be used to deter­mine a speaker’s mood using the “vol­ume of the user’s voice, detect­ed breath­ing rate, cry­ing and so forth,” and med­ical con­di­tion “based on detect­ed cough­ing, sneez­ing and so forth.”

    The same appli­ca­tion out­lines how a device could “rec­og­nize a T‑shirt on a floor of the user’s clos­et” bear­ing Will Smith’s face and com­bine that with a brows­er his­to­ry that shows search­es for Mr. Smith “to pro­vide a movie rec­om­men­da­tion that dis­plays, ‘You seem to like Will Smith. His new movie is play­ing in a the­ater near you.’”

    In a state­ment, Ama­zon said the com­pa­ny took “pri­va­cy seri­ous­ly” and did “not use cus­tomers’ voice record­ings for tar­get­ed adver­tis­ing.” Ama­zon said that it filed “a num­ber of for­ward-look­ing patent appli­ca­tions that explore the full pos­si­bil­i­ties of new tech­nol­o­gy,” and that they “take mul­ti­ple years to receive and do not nec­es­sar­i­ly reflect cur­rent devel­op­ments to prod­ucts and ser­vices.”

    Google said it did not “use raw audio to extrap­o­late moods, med­ical con­di­tions or demo­graph­ic infor­ma­tion.” The com­pa­ny added, “All devices that come with the Google Assis­tant, includ­ing Google Home, are designed with user pri­va­cy in mind.”

    Tech com­pa­nies apply for a dizzy­ing num­ber of patents every year, many of which are nev­er used and are years from even being pos­si­ble.

    Still, Jamie Court, the pres­i­dent of Con­sumer Watch­dog, a non­prof­it advo­ca­cy group in San­ta Mon­i­ca, Calif., which pub­lished a study of some of the patent appli­ca­tions in Decem­ber, said, “When you read parts of the appli­ca­tions, it’s real­ly clear that this is spy­ware and a sur­veil­lance sys­tem meant to serve you up to adver­tis­ers.”

    The com­pa­nies, Mr. Court added, are “basi­cal­ly going to be find­ing out what our home life is like in qual­i­ta­tive ways.”

    Google called Con­sumer Watchdog’s claims “unfound­ed,” and said, “Prospec­tive prod­uct announce­ments should not nec­es­sar­i­ly be inferred from our patent appli­ca­tions.”

    A recent Gallup poll found that 22 percent of Americans used devices like Google Home or Amazon Echo. The growing adoption of smart speakers means that gadgets, some of which contain up to eight microphones and a camera, are being placed in kitchens and bedrooms and used to answer questions, control appliances and make phone calls. Apple recently introduced its own version, called the HomePod.

    ...

    Both Amazon and Google have emphasized that devices with Alexa and Google Assistant store voice recordings from users only after they are intentionally triggered. Amazon's Echo and its newer smart speakers with screens use lights to show when they are streaming audio to the cloud, and consumers can view and delete their recordings on the Alexa smartphone app or on Amazon's website (though they are warned online that doing so “may degrade” their experience). Google Home also has a light that indicates when it is recording, and users can similarly see and delete that audio online.

    Ama­zon says voice record­ings may help ful­fill requests and improve its ser­vices, while Google says the data helps it learn over time to pro­vide bet­ter, more per­son­al­ized respons­es.

    But the ecosys­tem around voice data is still evolv­ing.

    Take the thou­sands of third-par­ty apps devel­oped for Alexa called “skills,” which can be used to play games, dim lights or pro­vide clean­ing advice. While Ama­zon said it didn’t share users’ actu­al record­ings with third par­ties, its terms of use for Alexa say it may share the con­tent of their requests or infor­ma­tion like their ZIP codes. Google says it will “gen­er­al­ly” not pro­vide audio record­ings to third-par­ty ser­vice providers, but may send tran­scrip­tions of what peo­ple say.

    And some devices have already shown that they are capa­ble of record­ing more than what users expect. Google faced some embar­rass­ment last fall when a batch of Google Home Min­is that it dis­trib­uted at com­pa­ny events and to jour­nal­ists were almost con­stant­ly record­ing.

    In a stark­er exam­ple, detec­tives inves­ti­gat­ing a death at an Arkansas home sought access to audio on an Echo device in 2016. Ama­zon resist­ed, but the record­ings were ulti­mate­ly shared with the per­mis­sion of the defen­dant, James Bates. (A judge lat­er dis­missed Mr. Bates’s first-degree mur­der charge based on sep­a­rate evi­dence.)

    Kath­leen Zell­ner, his lawyer, said in an inter­view that the Echo had been record­ing more than it was sup­posed to. Mr. Bates told her that it had been reg­u­lar­ly light­ing up with­out being prompt­ed, and had logged con­ver­sa­tions that were unre­lat­ed to Alexa com­mands, includ­ing a con­ver­sa­tion about foot­ball in a sep­a­rate room, she said.

    “It was just extreme­ly slop­py the way the acti­va­tion occurred,” Ms. Zell­ner said.

    The Elec­tron­ic Pri­va­cy Infor­ma­tion Cen­ter has rec­om­mend­ed more robust dis­clo­sure rules for inter­net-con­nect­ed devices, includ­ing an “algo­rith­mic trans­paren­cy require­ment” that would help peo­ple under­stand how their data was being used and what auto­mat­ed deci­sions were then being made about them.

    Sam Lester, the center’s con­sumer pri­va­cy fel­low, said he believed that the abil­i­ties of new smart home devices high­light­ed the need for Unit­ed States reg­u­la­tors to get more involved with how con­sumer data was col­lect­ed and used.

    “A lot of these tech­no­log­i­cal inno­va­tions can be very good for con­sumers,” he said. “But it’s not the respon­si­bil­i­ty of con­sumers to pro­tect them­selves from these prod­ucts any more than it’s their respon­si­bil­i­ty to pro­tect them­selves from the safe­ty risks in food and drugs. It’s why we estab­lished a Food and Drug Admin­is­tra­tion years ago.”

    ———–

    “Hey, Alexa, What Can You Hear? And What Will You Do With It?” by SAPNA MAHESHWARI; The New York Times; 03/31/2018

    “While the ad riffed on what Alexa can say to users, the more intrigu­ing ques­tion may be what she and oth­er dig­i­tal assis­tants can hear — espe­cial­ly as more peo­ple bring smart speak­ers into their homes.”

    It’s one of the conun­drums of the smart speak­er busi­ness mod­el: it’s obvi­ous these smart speak­er man­u­fac­tur­ers would love to just col­lect all the infor­ma­tion they can about what peo­ple are say­ing and doing, but they need to main­tain the pre­tense of not doing that in order to get peo­ple to buy their devices. So it’s no sur­prise that Google and Ama­zon rou­tine­ly make it clear that their devices are only record­ing infor­ma­tion after they’ve been acti­vat­ed by the users. But as these patents make clear, there are all sorts of home life sur­veil­lance appli­ca­tions that these com­pa­nies have in mind. Like the smart home child mon­i­tor­ing sys­tem, with whis­per detec­tion capa­bil­i­ties and mis­chief-detect­ing AI capa­bil­i­ties:

    ...
    Ama­zon and Google, the lead­ing sell­ers of such devices, say the assis­tants record and process audio only after users trig­ger them by push­ing a but­ton or utter­ing a phrase like “Hey, Alexa” or “O.K., Google.” But each com­pa­ny has filed patent appli­ca­tions, many of them still under con­sid­er­a­tion, that out­line an array of pos­si­bil­i­ties for how devices like these could mon­i­tor more of what users say and do. That infor­ma­tion could then be used to iden­ti­fy a person’s desires or inter­ests, which could be mined for ads and prod­uct rec­om­men­da­tions.

    In one set of patent applications, Amazon describes how a “voice sniffer algorithm” could be used on an array of devices, like tablets and e-book readers, to analyze audio almost in real time when it hears words like “love,” “bought” or “dislike.” A diagram included with the application illustrated how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a Wine of the Month Club membership.

    Some patent appli­ca­tions from Google, which also owns the smart home prod­uct mak­er Nest Labs, describe how audio and visu­al sig­nals could be used in the con­text of elab­o­rate smart home setups.

    One appli­ca­tion details how audio mon­i­tor­ing could help detect that a child is engag­ing in “mis­chief” at home by first using speech pat­terns and pitch to iden­ti­fy a child’s pres­ence, one fil­ing said. A device could then try to sense move­ment while lis­ten­ing for whis­pers or silence, and even pro­gram a smart speak­er to “pro­vide a ver­bal warn­ing.”
    ...

    “One appli­ca­tion details how audio mon­i­tor­ing could help detect that a child is engag­ing in “mis­chief” at home by first using speech pat­terns and pitch to iden­ti­fy a child’s pres­ence, one fil­ing said. A device could then try to sense move­ment while lis­ten­ing for whis­pers or silence, and even pro­gram a smart speak­er to “pro­vide a ver­bal warn­ing.”

    Listening for the mischievous whispers of children and issuing a verbal warning. Those are the kinds of capabilities companies like Google, Amazon, and now Facebook are going to be investing in. And it will probably be very popular, because smart-home systems that literally watch the kids would be a very handy tool for parents to have. But it's going to come at the cost of opening up our homes to monitoring by one of these data giants. And that's insane, right?

    Another patent noted how the smart speakers could detect medical conditions from your voice, by detecting coughing, sneezing, and breathing rate. And that's just a sample of the kind of personal data these devices are clearly capable of gathering, and they're only going to get better at it:

    ...
    A sep­a­rate appli­ca­tion regard­ing per­son­al­iz­ing con­tent for peo­ple while respect­ing their pri­va­cy not­ed that voic­es could be used to deter­mine a speaker’s mood using the “vol­ume of the user’s voice, detect­ed breath­ing rate, cry­ing and so forth,” and med­ical con­di­tion “based on detect­ed cough­ing, sneez­ing and so forth.”

    The same appli­ca­tion out­lines how a device could “rec­og­nize a T‑shirt on a floor of the user’s clos­et” bear­ing Will Smith’s face and com­bine that with a brows­er his­to­ry that shows search­es for Mr. Smith “to pro­vide a movie rec­om­men­da­tion that dis­plays, ‘You seem to like Will Smith. His new movie is play­ing in a the­ater near you.’”
    ...

    “The same appli­ca­tion out­lines how a device could “rec­og­nize a T‑shirt on a floor of the user’s clos­et” bear­ing Will Smith’s face and com­bine that with a brows­er his­to­ry that shows search­es for Mr. Smith “to pro­vide a movie rec­om­men­da­tion that dis­plays, ‘You seem to like Will Smith. His new movie is play­ing in a the­ater near you.’””

    The smart speaker camera is going to cross-reference the things it sees in your home with your browser history. For ad targeting. That's a patent.
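
    As a toy illustration of the data fusion that patent describes (everything here is hypothetical: the labels, the history, and the matching rule are stand-ins, not details from the actual filing), the logic amounts to joining a vision model's output against stored browsing history and emitting a targeted recommendation on a hit:

    # Stand-in for labels a vision model extracted from a camera frame.
    detected_labels = {"t-shirt", "will smith"}

    # Stand-in for the user's stored browsing history.
    browsing_history = ["will smith interview", "san diego zoo tickets"]

    def targeted_recommendation(labels, history):
        # Join what the camera saw against what the user searched for.
        for label in labels:
            if any(label in entry for entry in history):
                return f"You seem to like {label.title()}. His new movie is playing near you."
        return None

    print(targeted_recommendation(detected_labels, browsing_history))
    # -> You seem to like Will Smith. His new movie is playing near you.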

    It's why the warnings from Consumer Watchdog's Jamie Court, that these consumer home devices are really just home-life spyware, should be heeded. Because it's pretty obvious that the plan is to turn these things into home-activity monitoring devices. And with 22 percent of Americans saying they use a “home personal assistant” in a recent Gallup poll, that really does make the coming era of smart-device home monitoring a public privacy nightmare:

    ...
    Still, Jamie Court, the pres­i­dent of Con­sumer Watch­dog, a non­prof­it advo­ca­cy group in San­ta Mon­i­ca, Calif., which pub­lished a study of some of the patent appli­ca­tions in Decem­ber, said, “When you read parts of the appli­ca­tions, it’s real­ly clear that this is spy­ware and a sur­veil­lance sys­tem meant to serve you up to adver­tis­ers.”

    The com­pa­nies, Mr. Court added, are “basi­cal­ly going to be find­ing out what our home life is like in qual­i­ta­tive ways.”

    Google called Con­sumer Watchdog’s claims “unfound­ed,” and said, “Prospec­tive prod­uct announce­ments should not nec­es­sar­i­ly be inferred from our patent appli­ca­tions.”

    A recent Gallup poll found that 22 percent of Americans used devices like Google Home or Amazon Echo. The growing adoption of smart speakers means that gadgets, some of which contain up to eight microphones and a camera, are being placed in kitchens and bedrooms and used to answer questions, control appliances and make phone calls. Apple recently introduced its own version, called the HomePod.
    ...

    Of course, both Google and Ama­zon assure us that their devices are only record­ing audio after they’re trig­gered. And it’s only being used to improve the user expe­ri­ence and make it more per­son­al­ized:

    ...
    Both Amazon and Google have emphasized that devices with Alexa and Google Assistant store voice recordings from users only after they are intentionally triggered. Amazon's Echo and its newer smart speakers with screens use lights to show when they are streaming audio to the cloud, and consumers can view and delete their recordings on the Alexa smartphone app or on Amazon's website (though they are warned online that doing so “may degrade” their experience). Google Home also has a light that indicates when it is recording, and users can similarly see and delete that audio online.

    Ama­zon says voice record­ings may help ful­fill requests and improve its ser­vices, while Google says the data helps it learn over time to pro­vide bet­ter, more per­son­al­ized respons­es.
    ...
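
    Architecturally, that claim amounts to a gate: the microphone is always sampling so the device can spot the trigger phrase locally, but (per the companies) nothing is retained or uploaded until the trigger fires. Here's a minimal sketch of that design, with a fake transcript stream standing in for the microphone and invented trigger phrases:

    WAKE_PHRASES = ("hey alexa", "ok google")  # illustrative trigger phrases

    def transcript_stream():
        # Stand-in for continuous on-device audio transcription.
        yield from ["chatting about football", "ok google", "set a timer", "thanks"]

    def gated_recorder():
        retained, armed = [], False
        for chunk in transcript_stream():
            if armed:
                retained.append(chunk)  # only post-trigger audio is kept/uploaded
                armed = False           # one request per trigger, for simplicity
            elif chunk in WAKE_PHRASES:
                armed = True            # trigger heard; start retaining audio
            # everything else is heard but, per the stated design, never stored
        return retained

    print(gated_recorder())  # -> ['set a timer']

    The documented failures described below are what it looks like when that gate arms itself without a real trigger.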

    And while Google assures us those voice recordings will only be used to personalize the experience, Google's user agreement includes the possibility of sending transcripts of what people say to third-party service providers. And it only “generally” won't send audio samples to those third-party providers. It's an example of how little audio and visual snippets of people's home lives are becoming the new “mouse click” of consumer data, collected and sold in exchange for a digital service:

    ...
    Take the thou­sands of third-par­ty apps devel­oped for Alexa called “skills,” which can be used to play games, dim lights or pro­vide clean­ing advice. While Ama­zon said it didn’t share users’ actu­al record­ings with third par­ties, its terms of use for Alexa say it may share the con­tent of their requests or infor­ma­tion like their ZIP codes. Google says it will “gen­er­al­ly” not pro­vide audio record­ings to third-par­ty ser­vice providers, but may send tran­scrip­tions of what peo­ple say.
    ...

    And these patents aren't necessarily just future privacy nightmares. They're potentially present privacy nightmares if these devices are actually collecting data all the time in secret. And in a number of documented cases that's exactly what happened, including a murder case partially solved by an Amazon Echo with a propensity to start recording randomly:

    ...
    And some devices have already shown that they are capa­ble of record­ing more than what users expect. Google faced some embar­rass­ment last fall when a batch of Google Home Min­is that it dis­trib­uted at com­pa­ny events and to jour­nal­ists were almost con­stant­ly record­ing.

    In a stark­er exam­ple, detec­tives inves­ti­gat­ing a death at an Arkansas home sought access to audio on an Echo device in 2016. Ama­zon resist­ed, but the record­ings were ulti­mate­ly shared with the per­mis­sion of the defen­dant, James Bates. (A judge lat­er dis­missed Mr. Bates’s first-degree mur­der charge based on sep­a­rate evi­dence.)

    Kath­leen Zell­ner, his lawyer, said in an inter­view that the Echo had been record­ing more than it was sup­posed to. Mr. Bates told her that it had been reg­u­lar­ly light­ing up with­out being prompt­ed, and had logged con­ver­sa­tions that were unre­lat­ed to Alexa com­mands, includ­ing a con­ver­sa­tion about foot­ball in a sep­a­rate room, she said.

    “It was just extreme­ly slop­py the way the acti­va­tion occurred,” Ms. Zell­ner said.
    ...

    And that's all why better consumer regulation in this area really is called for, because there's no way consumers can realistically navigate this technological landscape on their own:

    ...
    Sam Lester, the center’s con­sumer pri­va­cy fel­low, said he believed that the abil­i­ties of new smart home devices high­light­ed the need for Unit­ed States reg­u­la­tors to get more involved with how con­sumer data was col­lect­ed and used.

    “A lot of these tech­no­log­i­cal inno­va­tions can be very good for con­sumers,” he said. “But it’s not the respon­si­bil­i­ty of con­sumers to pro­tect them­selves from these prod­ucts any more than it’s their respon­si­bil­i­ty to pro­tect them­selves from the safe­ty risks in food and drugs. It’s why we estab­lished a Food and Drug Admin­is­tra­tion years ago.”
    ...

    And that's one of the big questions that really should be asked in the wake of the Cambridge Analytica scandal: does the US need something like a Food and Drug Administration for data privacy? Something far more substantial than the regulatory infrastructure that exists today, dedicated to ensuring transparency of data collection practices? It seems like the answer is obviously yes. And if the Cambridge Analytica scandal isn't enough evidence, those Orwellian patents should suffice.

    And as the Cam­bridge Ana­lyt­i­ca scan­dal also reminds us, we can either wait for the data abus­es to hap­pen and only belat­ed­ly deal with the prob­lem or we can deal with it proac­tive­ly. And deal­ing with it proac­tive­ly real­is­ti­cal­ly involves some­thing like an FDA for data pri­va­cy.

    But as we also just saw with those creepy patents, espe­cial­ly the child monitoring/scolding patent, con­sumers have much more than data pri­va­cy con­cerns with the world of smart devices Google and Face­book and Ama­zon have in mind. That future is going to involve devices that are lit­er­al­ly rais­ing the kids. Move over tele­vi­sion, it’s par­ent­ing brought to you by smart home AIs and Sil­i­con Val­ley.

    And let’s also not for­get one of the oth­er lessons that we can take from the Cam­bridge Ana­lyt­i­ca scan­dal: the data col­lect­ed by these smart devices isn’t just going to be col­lect­ed by Google and Face­book and Ama­zon. Some of that data is going to be col­lect­ed by all the third-par­ty app devel­op­ers too. Home life, brought to you by Google/Facebook/Amazon. That’s going to be a thing.

    At the same time, it's undeniable that there will be very positive applications for this kind of technology. And that's why it's such a shame that companies with the track records of Facebook and Google and Amazon are the ones leading this kind of technological revolution: like much technology, consumer smart-home devices rely heavily on trust in the manufacturer, and trust that the manufacturer won't screw things up and turn the device into a privacy nightmare. That's not the kind of situation where you want Google, Facebook, and Amazon leading the way.

    So that’s all some­thing to keep in mind when Face­book does­n’t talk about its upcom­ing smart speak­ers at its annu­al devel­op­ers con­fer­ence next month.

    Posted by Pterrafractyl | April 8, 2018, 9:32 pm
  6. Here’s a fas­ci­nat­ing angle to the Cam­bridge Ana­lyt­i­ca scan­dal that involves an East­ern Ukrain­ian politi­cian with pro-EU lean­ings and ties to Yulia Tymoshenko and the Azov Bat­tal­ion:

    It turns out Cam­bridge Ana­lyt­i­ca out­sourced the pro­duc­tion of its “Ripon” psy­cho­log­i­cal pro­fil­ing soft­ware to a sep­a­rate com­pa­ny, Aggre­gateIQ (AIQ). AIQ was found­ed by Cam­bridge Ana­lyt­i­ca co-founder/whis­tle-blow­er Christo­pher Wylie, so it’s basi­cal­ly a sub­sidiary of Cam­bridge Ana­lyt­i­ca. But they were tech­ni­cal­ly sep­a­rate com­pa­nies and it turns out that AIQ could end up play­ing a big role in an inves­ti­ga­tion into whether or not UK elec­tion laws were vio­lat­ed by the “Vote Leave” camp dur­ing the lead up to the Brex­it vote.

    It looks like the “Vote Leave” camp basically secretly spent more than it legally could, using AIQ as the vehicle for doing so. Here's how it worked: there was the official “leave” political campaign, but there were also third-party pro-leave campaigns. One of those was Leave.EU. In 2016, Robert Mercer offered Leave.EU the services of Cambridge Analytica for free, and Leave.EU relied on Cambridge Analytica's services for its voter influence campaign.

    The official Vote Leave campaign, on the other hand, relied on AIQ for its data analytics services. Vote Leave eventually paid AIQ roughly 40 percent of its £7 million campaign budget. Here's where the illegality came in: Vote Leave also ended up gathering more cash than British law legally allowed it to spend. Vote Leave could legally donate that cash to other campaigns, but it couldn't then coordinate with those campaigns. And yet that's exactly what it looks like Vote Leave did. About a week before the EU referendum, Vote Leave inexplicably donated £625,000 to Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. Grimes then immediately gave a substantial amount of the cash he received to AIQ. Vote Leave also donated £100,000 to another “leave” campaign called Veterans for Britain, which then paid AIQ precisely that amount. So Vote Leave was basically using these small “leave” groups as campaign money-laundering vehicles, with AIQ as the final destination of the money.

    That's all why AIQ is now the focus of British investigators. AIQ's role in this came to light in part through thousands of pages of code discovered by a cybersecurity researcher at UpGuard on the web page of a developer named Ali Yassine, who worked for SCL Group. Within the code are notes showing that SCL had requested the code be turned over by AIQ's lead developer, Koji Hamid Pourseyed.

    AIQ's contract with SCL stipulates that SCL is the sole owner of “Ripon”, Cambridge Analytica's campaign platform. The documents also include an internal wiki where AIQ developers discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” It's a reminder that the story of Cambridge Analytica isn't just a story about the Trump campaign or the Brexit vote. It's also about the Republican Party's political analytics in general.
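
    For a sense of what “integrating, obtaining, and normalizing data from disparate sources” means in practice, here's a hypothetical sketch (every source, field, and ID scheme below is invented for illustration; the actual Database of Truth schema isn't public): records keyed differently in each source are normalized to one schema and merged per voter:

    # Invented sources: a voter file, commercial consumer data, and survey data,
    # each keyed slightly differently.
    voter_file = {"TX-001": {"name": "JANE DOE", "party_score": 0.8}}
    consumer_data = [{"voter_id": "TX-001", "magazine_subs": ["golf"]}]
    surveys = [{"id": "TX-001", "gun_rights": "strongly agree"}]

    def normalize():
        merged = {}
        for vid, rec in voter_file.items():
            merged[vid] = {"name": rec["name"].title(), "party_score": rec["party_score"]}
        for row in consumer_data:
            merged.setdefault(row["voter_id"], {})["subs"] = row["magazine_subs"]
        for row in surveys:
            merged.setdefault(row["id"], {})["gun_rights"] = row["gun_rights"]
        return merged

    # One normalized record per voter, ready for modeling and targeting.
    print(normalize())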

    Also included in the discovered AIQ files were notes related to active projects for Cruz, Abbott, and a Ukrainian oligarch, Sergei Taruta.

    So who is Sergei Taruta? Well, he's a Ukrainian billionaire and co-founder of the Industrial Union of Donbass, one of the largest companies in Ukraine. He was appointed governor of the Donetsk Oblast in Eastern Ukraine in March of 2014, under the post-Maidan interim government, and was fired by Petro Poroshenko in October of 2014.

    Taruta went on to get elected to parliament, where he remains today. He recently co-founded the “Osnova” political party, which describes itself as populist and a promoter of “liberal conservatism” (presumably “liberal” in the libertarian sense). It's suspected by some that Rinat Akhmetov, Ukraine's wealthiest oligarch and another Eastern Ukrainian who straddles the line between backing the Kiev government and maintaining friendly ties with the pro-Russian segments of Eastern Ukraine, is also one of the party's backers. Akhmetov was a significant backer of Viktor Yanukovych's Party of Regions and is a dominant figure in the Opposition Bloc today. It was Akhmetov who initially hired Paul Manafort back in 2005 to act as a political consultant.

    It's reportedly pretty clear that Taruta's Osnova party is designed to splinter away ex-supporters of Viktor Yanukovych's Party of Regions, based on the politicians who have already declared they are going to join it. And yet as a politician Taruta is characterized as having never really tried to cozy up to the pro-Russian side, and he has a history of supporting pro-EU politicians. In 2006 he supported Viktor Yuschenko over Viktor Yanukovych. In 2010 he backed Yulia Tymoshenko over Viktor Yanukovych.

    So Taruta is a pro-EU Eastern Ukrainian politician, which is notable because he's not the only pro-EU Eastern Ukrainian politician to be involved with entities and figures in the #TrumpRussia orbit. Don't forget about Andreii Artemenko, the Ukrainian politician who was involved in that “peace plan” proposal with Michael Cohen and Felix Sater (a proposal that may have been part of a broader offer made to Russia covering not just Ukraine but also Syria and Iran), and how Artemenko was a pro-EU member of the far right “Radical Party” who also has ties to Right Sector. Artemenko headed up the Kiev department of Yulia Tymoshenko's Batkivshchyna Party back in 2006 and was serving in a coalition headed by Tymoshenko.

    Also recall that the figure who appears to have arranged the initial contact between Andreii Artemenko and Michael Cohen and Felix Sater was Alexander Oronov, the father-in-law of Michael Cohen's brother. And Oronov himself co-owned an ethanol plant with Viktor Topolov, another Ukrainian oligarch who was Viktor Yuschenko's coal minister and who became an assassination target of Semion Mogilevych's mafia organization. One of Topolov's partners who was also targeted by Mogilevych, Slava Konstantinovsky, ended up forming and joining one of the “volunteer battalions” fighting the separatists in the East.

    So now we learn that AIQ (so, basically Cambridge Analytica) has been doing some sort of work for Sergei Taruta, putting another Eastern Ukrainian oligarch politician with pro-EU leanings in the orbit of this #TrumpRussia scandal.

    So what kind of work did AIQ do for Taruta? That's unclear, but it seems reasonable to assume it's work involving Taruta's new party in Ukraine and its attempts to splinter off former Party of Regions voters.

    But as we're also going to see, Sergei Taruta has been doing some lobbying work in Washington DC. Rather curious lobbying work: it turns out Taruta was at the center of a bizarre ‘congressional hearing' that took place in the US Capitol last September. The hearing focused on corruption allegations Taruta has been promoting for over a year against the National Bank of Ukraine, the country's central bank.

    There were two Ukrainian television stations covering the event and pretending it was a real congressional hearing. Former CIA director James Woolsey, who was briefly part of the Trump campaign, was also at the event, along with former Republican House member Connie Mack, who is now a lobbyist. Mack was basically pretending to speak on behalf of the US Congress, expressing outrage over Taruta's corruption allegations for the Ukrainian television audiences while declaring his resolve to investigate them. It was Rep. Ron Estes, a freshman Republican, who booked the room in the US Capitol for Mack and the lobbyists in the first place. Estes's office later said it won't happen again.

    And there's another twist to this strange attack on the National Bank of Ukraine: according to Vox Ukraine, a number of the criticisms Taruta brings against the bank are based on distortions and half-truths. In other words, it doesn't appear to be a genuine anti-corruption campaign. So what is Taruta's motivation? Well, it's notable that his criticism of the National Bank of Ukraine extends back to the actions of its previous chair, Valeriya Gontareva (Hontareva). Gontareva was appointed chairman of the bank in June of 2014, and one of her biggest moves was the government takeover of Ukraine's biggest commercial bank, Privatbank. Privatbank was co-founded by Ihor Kolomoisky, another Eastern Ukrainian oligarch.

    Ihor Kolomoisky was appointed governor of the Eastern oblast of Dnipropetrovsk at the same time Taruta was appointed governor of Donetsk. Kolomoisky has supported the Kiev government in the civil war by financially backing a number of the volunteer battalions, including directly creating the large private Dnipro Battalion. As we'll see, both Kolomoisky and Taruta reportedly supported the neo-Nazi Azov Battalion, according to a 2015 Reuters report. In other words, Kolomoisky is an Eastern Ukrainian oligarch with ties to the far right, kind of like Andreii Artemenko.

    Kolomoisky wasn't happy about the takeover of Privatbank. When Gontareva presided over the bank's nationalization, its accounts were missing more than $5 billion, in large part because the bank had lent so much money to people with connections to Kolomoisky. After the bank takeover, Gontareva received numerous threats, and on April 10, 2017, she announced at a press conference that she was resigning from her post.

    So it looks like Sergei Taruta might be waging an international PR battle against the National Bank of Ukraine as part of a counter-move on behalf of Ihor Kolomoisky and the Privatbank investors.

    And then there's the person who actually organized this fake congressional hearing. A little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt. Motkin is the founder and president of StrategEast, a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.”

    That's all some new context to factor into the analysis of Cambridge Analytica and the forces it was working for: one of its clients is a pro-EU Eastern Ukrainian oligarch who just set up a political party designed to appeal to former Yanukovych supporters.

    Ok, so first, let's look at the story of Cambridge Analytica and AggregateIQ (AIQ), the Cambridge Analytica offshoot that was used both to develop the GOP's “Ripon” analytics software and to act as the analytics firm for the Vote Leave campaign. The work AIQ was doing for Vote Leave was apparently so valuable that Vote Leave secretly laundered almost a million pounds through two smaller “leave” groups in order to get that money to AIQ and secretly exceed the legal spending caps. And that's why the discovery of thousands of AIQ documents by a cybersecurity firm is so politically significant in the UK right now. But as those documents also reveal, AIQ was doing work for other clients: Texas Governor Greg Abbott, Texas Senator Ted Cruz, and Ukrainian oligarch Sergei Taruta:

    Giz­mo­do

    Aggre­gateIQ Cre­at­ed Cam­bridge Ana­lyt­i­ca’s Elec­tion Soft­ware, and Here’s the Proof

    Dell Cameron
    3/26/18 12:50pm

    A lit­tle-known Cana­di­an data firm ensnared by an inter­na­tion­al inves­ti­ga­tion into alleged wrong­do­ing dur­ing the Brex­it cam­paign cre­at­ed an elec­tion soft­ware plat­form mar­ket­ed by Cam­bridge Ana­lyt­i­ca, accord­ing to a batch of inter­nal files obtained exclu­sive­ly by Giz­mo­do.

    Dis­cov­ered by a secu­ri­ty researcher last week, the files con­firm that Aggre­gateIQ, a British Colum­bia-based data firm, devel­oped the tech­nol­o­gy Cam­bridge Ana­lyt­i­ca sold to clients for mil­lions of dol­lars dur­ing the 2016 US pres­i­den­tial elec­tion. Hun­dreds if not thou­sands of pages of code, as well as detailed notes signed by Aggre­gateIQ staff, whol­ly sub­stan­ti­ate recent reports that Cam­bridge Analytica’s soft­ware plat­form was not its own cre­ation.

    What's more, the files reveal that AggregateIQ—also known as “AIQ”—is the developer behind campaign apps created for Texas Senator Ted Cruz and Texas Governor Greg Abbott, as well as a Ukrainian steel magnate named Serhiy Taruta, head of the country's newly formed Osnova party.

    Other records show the firm once pitched an app to Breitbart News, the far-right website funded by hedge-fund billionaire Robert Mercer—Cambridge Analytica's principal investor—and is currently contracted by WPA Intelligence, a US-based consultancy founded by Republican pollster Chris Wilson, who was director of digital strategy for Cruz's 2016 presidential campaign.

    The files were unearthed last week by Chris Vick­ery, research direc­tor at UpGuard, a Cal­i­for­nia-based cyber risk firm. On Sun­day night, after Giz­mo­do reached out to Jeff Sil­vester, co-founder of AIQ, the files were quick­ly tak­en offline.

    The files are like­ly to draw the inter­est of inves­ti­ga­tors on both sides of the Atlantic. Cana­di­an and British reg­u­la­tors are cur­rent­ly pur­su­ing leads to estab­lish whether mul­ti­ple “Leave” cam­paigns ille­gal­ly coor­di­nat­ed dur­ing the 2016 EU ref­er­en­dum.

    Ties between AIQ and Cambridge Analytica—the focus of recent widespread furor over the misuse of data pulled from 50 million Facebook accounts—have likewise drawn the interest of US and British authorities. Cambridge CEO Alexander Nix was suspended by his company last week after British reporters published covertly recorded footage showing Nix boasting about bribing and blackmailing political rivals.

    Cam­bridge Ana­lyt­i­ca did not respond to a request for com­ment.

    AIQ is bound by a non-disclosure agreement the company signed in 2014 to take on former client SCL Group, Cambridge Analytica's parent company, according to a source with direct knowledge of the contract.

    In an inter­view over the week­end with London’s The Observ­er, Christo­pher Wylie, the for­mer Cam­bridge Ana­lyt­i­ca employ­ee turned whistle­blow­er, claimed that he helped estab­lish AIQ years ago in an effort to help SCL Group expand its data oper­a­tions. Sil­vester denied that Wylie was ever involved on that lev­el, but admits that Wylie helped AIQ land its first big con­tract.

    “We did some work with SCL and had a con­tract with them in 2014 for some cus­tom soft­ware devel­op­ment,” Sil­vester told Giz­mo­do. “We last worked with SCL in 2016 and have not worked with them since.”

    Data Exposed

    UpGuard first dis­cov­ered code belong­ing to AIQ last Thurs­day on the web page of a devel­op­er named Ali Yas­sine who worked for SCL Group. With­in the code—uploaded to the web­site GitHub in August 2016—are notes that show SCL had request­ed that code be turned over by AIQ’s lead devel­op­er, Koji Hamid Pourseyed.

    AIQ’s con­tract with SCL, a por­tion of which was pub­lished by The Guardian last year, stip­u­lates that SCL is the sole own­er of the intel­lec­tu­al prop­er­ty per­tain­ing to the contract—namely, the devel­op­ment of Ripon, Cam­bridge Analytica’s cam­paign plat­form.

    The find led UpGuard to unearth a code repos­i­to­ry on AIQ’s web­site. With­in it were count­less files link­ing AIQ to the Ripon pro­gram, as well as notes relat­ed to active projects for Cruz, Abbott, and the Ukrain­ian oli­garch.

    In an internal wiki, AIQ developers also discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” (RNC Data Trust is the Republican party’s primary voter file provider.) “The primary data source will be combined with state voter files, consumer data, third party data providers, historical WPA survey and projects and customer data.”

    The Data­base of Truth, accord­ing to the wiki, is a project under devel­op­ment for WPA Intel­li­gence.

    Wil­son told Giz­mo­do on Mon­day that he has almost no knowl­edge of the con­tro­ver­sy unfold­ing over AIQ’s role in the UK. “I would nev­er work with a firm that I felt had done some­thing ille­gal or even uneth­i­cal,” he said. AIQ’s work for WPA fol­lowed a com­pet­i­tive bid process, he said. “They offered us the best capa­bil­i­ties for the best price.”

    Vapor­ware

    Until recent­ly, Cam­bridge Ana­lyt­i­ca oper­at­ed large­ly in the shad­ows. For years, it planned to tar­get right-lean­ing vot­ers for a host of high-pro­file polit­i­cal cam­paigns, work­ing for both Cruz and Pres­i­dent Don­ald Trump. With its bil­lion­aire back­ing, the firm promised to lever­age oceans of data col­lect­ed about voters—which we now know was acquired from sources both legal and unau­tho­rized.

    Known as Project Ripon, Cam­bridge Analytica’s goal was to fur­nish Repub­li­can can­di­dates with a tech­nol­o­gy plat­form capa­ble of reach­ing vot­ers through the use of psy­cho­log­i­cal pro­fil­ing. (SCL Group has long used behav­ioral research to con­duct “influ­ence oper­a­tions” on behalf of mil­i­tary and polit­i­cal clients world­wide.)

    Cam­bridge Ana­lyt­i­ca, which even­tu­al­ly chose AIQ to help build its plat­form, once boast­ed that it pos­sessed files on as many as 230 mil­lion Amer­i­cans com­piled from thou­sands of data points, includ­ing psy­cho­log­i­cal data har­vest­ed from social media, as well as com­mer­cial data avail­able to vir­tu­al­ly any­one who can afford it. The com­pa­ny intend­ed to clas­si­fy vot­ers by select per­son­al­i­ty types, apply­ing its sys­tem to craft mes­sages, online ads, and mail­ers that, it believed, would res­onate dis­tinc­tive­ly with vot­ers of each group.

    Sources with­in the Cruz cam­paign, which large­ly fund­ed Ripon’s devel­op­ment, claim the soft­ware nev­er actu­al­ly func­tioned. One for­mer staffer told Giz­mo­do the prod­uct was noth­ing but “vapor­ware.”

    AIQ’s inter­nal files show the com­pa­ny had unlim­it­ed access to the Ripon code, and a source with­in the Cruz cam­paign con­firmed to Giz­mo­do that AIQ was sole­ly respon­si­ble for the software’s devel­op­ment.

    The cam­paign even­tu­al­ly dumped more than $5.8 mil­lion into Ripon’s development—which is only about half the amount Robert Mer­cer, Cam­bridge Analytica’s prin­ci­pal investor, poured into Cruz’s White House bid. (After Trump took the nom­i­na­tion, Mer­cer con­tributed more than $15.5 mil­lion to his cam­paign, $5 mil­lion of which end­ed up back in Cam­bridge Analytica’s pock­ets.)

    A for­mer Cruz aide, who request­ed anonymi­ty to dis­cuss their work for the cam­paign, told Giz­mo­do that even at the high­est lev­els, no one knew that Cam­bridge Ana­lyt­i­ca had out­sourced Ripon’s devel­op­ment. “Ulti­mate­ly, I found out through some emails that they’re not even doing this work,” the source said. “It was being out­sourced to AIQ.”

    Accord­ing to the aide, when Cruz’s staff began to ques­tion AIQ over whether it was behind Ripon’s devel­op­ment, AIQ con­firmed that it was, but said it was nev­er sup­posed to dis­cuss its work.

    The Brex­it

    In 2016, Mercer reportedly offered up Cambridge Analytica’s services for free to Leave.EU, one of several groups urging the UK to depart the European Union, according to The Guardian. Leave.EU was not, however, the official “Leave” group representing the Brexit campaign. Instead, a separate group, known as Vote Leave, was formally chosen by election officials to lead the referendum.

    Where­as Leave.EU relied on Cam­bridge to influ­ence vot­ers through its use of data ana­lyt­ics, Vote Leave turned to AIQ, even­tu­al­ly pay­ing the firm rough­ly 40 per­cent of its £7 mil­lion cam­paign bud­get, accord­ing to The Guardian. Over time, how­ev­er, Vote Leave amassed more cash than it was legal­ly allowed to spend. While UK elec­tion laws per­mit­ted Vote Leave to gift its remain­ing funds to oth­er cam­paigns, fur­ther coor­di­na­tion between them was express­ly for­bid­den.

    Rough­ly a week before the EU ref­er­en­dum, Vote Leave inex­plic­a­bly donat­ed £625,000 to a young fash­ion design stu­dent named Dar­ren Grimes, the founder of a small, unof­fi­cial Brex­it cam­paign called BeLeave. Accord­ing to a Buz­zFeed inves­ti­ga­tion, Grimes imme­di­ate­ly gave a “sub­stan­tial amount” of the cash he received from Vote Leave to AIQ. Vote Leave also donat­ed £100,000 to anoth­er Leave cam­paign called Vet­er­ans for Britain, which, accord­ing to The Guardian, then paid AIQ pre­cise­ly that amount.

    A review of the AIQ files by UpGuard’s Chris Vick­ery revealed sev­er­al men­tions of Vote Leave and at least one men­tion of Vet­er­ans for Britain, appar­ent­ly relat­ed to web­site devel­op­ment.

    In an interview on Monday, Shahmir Sanni, a former volunteer for the Vote Leave campaign, told The Globe and Mail that he had “first-hand knowledge about the alleged wrongdoing in the Brexit campaign.” Sanni, who was 22 when he worked for Vote Leave, said he was “encouraged to spin out” another campaign, but that he had “no control” over the £625,000 that was immediately spent on AIQ’s services.

    British authorities are pursuing leads to establish whether BeLeave and Veterans for Britain were merely conduits through which Vote Leave sought to direct additional funds to AIQ. While the UK Electoral Commission took no action in early 2017, in November it claimed that “new information” had “come to light,” giving the commission “reasonable grounds to suspect an offence may have been committed.”

    In an email, the UK Electoral Commission told Gizmodo its investigation into Vote Leave payments was ongoing.

    ...

    ———-

    “Aggre­gateIQ Cre­at­ed Cam­bridge Ana­lyt­i­ca’s Elec­tion Soft­ware, and Here’s the Proof” by Dell Cameron; Giz­mo­do; 3/26/2018

    “A lit­tle-known Cana­di­an data firm ensnared by an inter­na­tion­al inves­ti­ga­tion into alleged wrong­do­ing dur­ing the Brex­it cam­paign cre­at­ed an elec­tion soft­ware plat­form mar­ket­ed by Cam­bridge Ana­lyt­i­ca, accord­ing to a batch of inter­nal files obtained exclu­sive­ly by Giz­mo­do.”

    As we can see, AIQ was the under-the-radar SCL affiliate that actually created “Ripon”, the political modeling software Cambridge Analytica was offering to clients. Cambridge Analytica co-founder-turned-whistle-blower Christopher Wylie claims he helped SCL found the firm. AIQ co-founder Jeff Silvester admits that Wylie helped AIQ land its first big contract, but asserts that Wylie was never closely involved with the company. Silvester also admits that the company had a contract with SCL in 2014 but says it hasn’t worked with SCL since 2016. So AIQ is officially acting like it’s not really an SCL offshoot at this point:

    ...
    The files were unearthed last week by Chris Vick­ery, research direc­tor at UpGuard, a Cal­i­for­nia-based cyber risk firm. On Sun­day night, after Giz­mo­do reached out to Jeff Sil­vester, co-founder of AIQ, the files were quick­ly tak­en offline.

    ...

    In an inter­view over the week­end with London’s The Observ­er, Christo­pher Wylie, the for­mer Cam­bridge Ana­lyt­i­ca employ­ee turned whistle­blow­er, claimed that he helped estab­lish AIQ years ago in an effort to help SCL Group expand its data oper­a­tions. Sil­vester denied that Wylie was ever involved on that lev­el, but admits that Wylie helped AIQ land its first big con­tract.

    “We did some work with SCL and had a con­tract with them in 2014 for some cus­tom soft­ware devel­op­ment,” Sil­vester told Giz­mo­do. “We last worked with SCL in 2016 and have not worked with them since.”
    ...

    And based on AIQ’s contract with SCL, we have a better idea of when exactly AIQ’s work with SCL ended in 2016: the code found by UpGuard was uploaded to the code-repository website GitHub in August of 2016. That suggests that was the point when the code was effectively handed off from AIQ to SCL. And August of 2016, it’s important to recall, is the same month that Steve Bannon, a Cambridge Analytica company officer — and “the boss” according to Wylie — went to work as campaign manager of the Trump campaign. So you have to wonder if that’s a coincidence or a reflection of concerns over this SCL/Cambridge Analytica/AIQ nexus getting some unwanted attention:

    ...
    Data Exposed

    UpGuard first dis­cov­ered code belong­ing to AIQ last Thurs­day on the web page of a devel­op­er named Ali Yas­sine who worked for SCL Group. With­in the code—uploaded to the web­site GitHub in August 2016—are notes that show SCL had request­ed that code be turned over by AIQ’s lead devel­op­er, Koji Hamid Pourseyed.

    AIQ’s con­tract with SCL, a por­tion of which was pub­lished by The Guardian last year, stip­u­lates that SCL is the sole own­er of the intel­lec­tu­al prop­er­ty per­tain­ing to the contract—namely, the devel­op­ment of Ripon, Cam­bridge Analytica’s cam­paign plat­form.
    ...

    And in those dis­cov­ered AIQ doc­u­ments are notes on projects AIQ was doing for Cruz, Abbott and Taru­ta. Along with notes on a project for the GOP called The Data­base of Truth:

    ...
    The find led UpGuard to unearth a code repos­i­to­ry on AIQ’s web­site. With­in it were count­less files link­ing AIQ to the Ripon pro­gram, as well as notes relat­ed to active projects for Cruz, Abbott, and the Ukrain­ian oli­garch.

    In an internal wiki, AIQ developers also discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” (RNC Data Trust is the Republican party’s primary voter file provider.) “The primary data source will be combined with state voter files, consumer data, third party data providers, historical WPA survey and projects and customer data.”

    The Data­base of Truth, accord­ing to the wiki, is a project under devel­op­ment for WPA Intel­li­gence.
    ...

    AIQ is mak­ing the GOP a “Data­base of Truth”. Great.
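
    To make that wiki description a bit more concrete: “integrates, obtains, and normalizes data from disparate sources” describes a fairly standard record-linkage pipeline, where one authoritative voter file is treated as the spine and the other feeds are normalized into a common schema and layered on top of it. Below is a minimal illustrative sketch of that general pattern in Python. To be clear, this is not AIQ’s code; every field name, record, and precedence rule here is a hypothetical stand-in.

    # Illustrative sketch only: hypothetical fields and records, not AIQ's code.
    def normalize(record, source):
        """Map a source-specific record onto a common schema."""
        return {
            "voter_id": record.get("id") or record.get("voter_id"),
            "name": (record.get("name") or "").strip().upper(),
            "state": (record.get("state") or "").upper(),
            "source": source,
            **{k: v for k, v in record.items()
               if k not in ("id", "voter_id", "name", "state")},
        }

    def integrate(primary, *feeds):
        """Build the merged file: start from the primary voter file, then
        let each extra feed fill in missing fields only (one plausible
        precedence rule among many)."""
        merged = {}
        for rec in primary:
            row = normalize(rec, "primary")
            merged[row["voter_id"]] = row
        for feed_name, feed in feeds:
            for rec in feed:
                row = normalize(rec, feed_name)
                target = merged.setdefault(row["voter_id"], row)
                for key, value in row.items():
                    target.setdefault(key, value)  # never overwrite primary
        return merged

    # Toy inputs standing in for the feeds named in the wiki quote.
    primary_file = [{"id": "TX-001", "name": "Jane Roe", "state": "tx"}]
    state_file = [{"voter_id": "TX-001", "name": "JANE ROE",
                   "state": "TX", "precinct": "114"}]
    consumer_data = [{"voter_id": "TX-001", "magazine_subs": ["Field & Stream"]}]

    db = integrate(primary_file,
                   ("state_voter_file", state_file),
                   ("consumer_data", consumer_data))
    print(db["TX-001"])  # one merged, normalized record per voter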

    And that sounds like a separate system from Ripon. The Database of Truth appears to focus on the kind of data found in data brokerages — state voter files, consumer data, third party data providers, etc. — whereas the Ripon software appeared to be specifically focused on the kind of psychological profiling Cambridge Analytica was specializing in:

    ...
    Vapor­ware

    Until recent­ly, Cam­bridge Ana­lyt­i­ca oper­at­ed large­ly in the shad­ows. For years, it planned to tar­get right-lean­ing vot­ers for a host of high-pro­file polit­i­cal cam­paigns, work­ing for both Cruz and Pres­i­dent Don­ald Trump. With its bil­lion­aire back­ing, the firm promised to lever­age oceans of data col­lect­ed about voters—which we now know was acquired from sources both legal and unau­tho­rized.

    Known as Project Ripon, Cam­bridge Analytica’s goal was to fur­nish Repub­li­can can­di­dates with a tech­nol­o­gy plat­form capa­ble of reach­ing vot­ers through the use of psy­cho­log­i­cal pro­fil­ing. (SCL Group has long used behav­ioral research to con­duct “influ­ence oper­a­tions” on behalf of mil­i­tary and polit­i­cal clients world­wide.)

    Cam­bridge Ana­lyt­i­ca, which even­tu­al­ly chose AIQ to help build its plat­form, once boast­ed that it pos­sessed files on as many as 230 mil­lion Amer­i­cans com­piled from thou­sands of data points, includ­ing psy­cho­log­i­cal data har­vest­ed from social media, as well as com­mer­cial data avail­able to vir­tu­al­ly any­one who can afford it. The com­pa­ny intend­ed to clas­si­fy vot­ers by select per­son­al­i­ty types, apply­ing its sys­tem to craft mes­sages, online ads, and mail­ers that, it believed, would res­onate dis­tinc­tive­ly with vot­ers of each group.
    ...
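
    That quoted passage is essentially the Ripon pitch in one paragraph: score each voter on a handful of personality dimensions, bucket voters by dominant trait, and serve each bucket the message variant crafted for it. Here is a minimal hedged sketch of that general technique. The OCEAN-style traits are what Cambridge Analytica publicly claimed to score on; the scores, threshold, and ad copy below are invented purely for illustration and are not recovered from any Ripon code.

    # Illustrative sketch of personality-bucketed messaging; all numbers
    # and ad copy are invented, not Cambridge Analytica's actual system.

    # Hypothetical per-voter trait scores in [0, 1], assumed to come from
    # an upstream model (the "psychographic profiling" step).
    voters = {
        "TX-001": {"openness": 0.21, "neuroticism": 0.77},
        "TX-002": {"openness": 0.68, "neuroticism": 0.30},
    }

    # One message variant per coarse personality bucket.
    MESSAGES = {
        "fearful": "Crime is rising. Candidate X will keep your family safe.",
        "curious": "Candidate X has a bold plan to modernize the economy.",
        "default": "Candidate X shares your values.",
    }

    def bucket(traits, threshold=0.6):
        """Assign a voter to a coarse personality bucket by dominant trait."""
        if traits.get("neuroticism", 0) >= threshold:
            return "fearful"   # anxiety-leaning voters get threat-framed copy
        if traits.get("openness", 0) >= threshold:
            return "curious"   # high-openness voters get novelty-framed copy
        return "default"

    for voter_id, traits in voters.items():
        print(voter_id, "->", MESSAGES[bucket(traits)])

    Note that the downstream bucketing step is trivial; the disputed part, per the “vaporware” claims below, is whether the upstream scoring ever actually worked.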

    And just as the Trump campaign asserted that the Cambridge Analytica software wasn’t actually very useful, the Cruz campaign is calling the Ripon software just “vaporware”. Denials of the effectiveness of Cambridge Analytica’s psychological profiling methods have been one of the across-the-board assertions we’ve seen from the people involved with this story:

    ...
    Sources with­in the Cruz cam­paign, which large­ly fund­ed Ripon’s devel­op­ment, claim the soft­ware nev­er actu­al­ly func­tioned. One for­mer staffer told Giz­mo­do the prod­uct was noth­ing but “vapor­ware.”

    AIQ’s inter­nal files show the com­pa­ny had unlim­it­ed access to the Ripon code, and a source with­in the Cruz cam­paign con­firmed to Giz­mo­do that AIQ was sole­ly respon­si­ble for the software’s devel­op­ment.

    The cam­paign even­tu­al­ly dumped more than $5.8 mil­lion into Ripon’s development—which is only about half the amount Robert Mer­cer, Cam­bridge Analytica’s prin­ci­pal investor, poured into Cruz’s White House bid. (After Trump took the nom­i­na­tion, Mer­cer con­tributed more than $15.5 mil­lion to his cam­paign, $5 mil­lion of which end­ed up back in Cam­bridge Analytica’s pock­ets.)
    ...

    And while everyone involved with Cambridge Analytica has been claiming it was largely useless, it’s hard to ignore the Brexit scandal, in which Vote Leave used two outside groups to launder roughly £725,000 to AIQ for its analytics services, in excess of the legal spending caps (a rough tally follows the excerpt below). That’s quite a vote of confidence by Vote Leave:

    ...
    The Brex­it

    In 2016, Mercer reportedly offered up Cambridge Analytica’s services for free to Leave.EU, one of several groups urging the UK to depart the European Union, according to The Guardian. Leave.EU was not, however, the official “Leave” group representing the Brexit campaign. Instead, a separate group, known as Vote Leave, was formally chosen by election officials to lead the referendum.

    Where­as Leave.EU relied on Cam­bridge to influ­ence vot­ers through its use of data ana­lyt­ics, Vote Leave turned to AIQ, even­tu­al­ly pay­ing the firm rough­ly 40 per­cent of its £7 mil­lion cam­paign bud­get, accord­ing to The Guardian. Over time, how­ev­er, Vote Leave amassed more cash than it was legal­ly allowed to spend. While UK elec­tion laws per­mit­ted Vote Leave to gift its remain­ing funds to oth­er cam­paigns, fur­ther coor­di­na­tion between them was express­ly for­bid­den.

    Rough­ly a week before the EU ref­er­en­dum, Vote Leave inex­plic­a­bly donat­ed £625,000 to a young fash­ion design stu­dent named Dar­ren Grimes, the founder of a small, unof­fi­cial Brex­it cam­paign called BeLeave. Accord­ing to a Buz­zFeed inves­ti­ga­tion, Grimes imme­di­ate­ly gave a “sub­stan­tial amount” of the cash he received from Vote Leave to AIQ. Vote Leave also donat­ed £100,000 to anoth­er Leave cam­paign called Vet­er­ans for Britain, which, accord­ing to The Guardian, then paid AIQ pre­cise­ly that amount.

    A review of the AIQ files by UpGuard’s Chris Vick­ery revealed sev­er­al men­tions of Vote Leave and at least one men­tion of Vet­er­ans for Britain, appar­ent­ly relat­ed to web­site devel­op­ment.

    In an interview on Monday, Shahmir Sanni, a former volunteer for the Vote Leave campaign, told The Globe and Mail that he had “first-hand knowledge about the alleged wrongdoing in the Brexit campaign.” Sanni, who was 22 when he worked for Vote Leave, said he was “encouraged to spin out” another campaign, but that he had “no control” over the £625,000 that was immediately spent on AIQ’s services.
    ...
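
    For a rough tally of the reported payments, using only the figures cited above (and noting that BuzzFeed’s “substantial amount” means the BeLeave figure is an upper bound):

    # Rough tally of reported Vote Leave-linked payments to AIQ, using
    # only the figures cited in the excerpts above.
    direct = 0.40 * 7_000_000   # ~40% of Vote Leave's £7M budget (The Guardian)
    via_beleave = 625_000       # to BeLeave; a "substantial amount" went to AIQ
    via_veterans = 100_000      # to Veterans for Britain; paid to AIQ in full
    print(f"direct: £{direct:,.0f}")                             # £2,800,000
    print(f"via other groups: £{via_beleave + via_veterans:,}")  # up to £725,000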

    As we can see, AIQ is an important entity for understanding the broader scope of the work this SCL/Cambridge Analytica/Bannon/Mercer political influence operation was undertaking. AIQ is critical for understanding the extent of the network’s role in the Brexit vote, but it also reveals the other kinds of clients the network was taking on. Like Serhiy (Sergei) Taruta.

    Now let’s take a closer look at Taruta with this Ukrainian Week profile from October about the creation of Taruta’s new Osnova political party. Many suspect that Rinat Akhmetov, a major backer of the Opposition Bloc, is behind Taruta’s new party. There is no evidence of that yet, but the party so far appears designed to appeal to former Party of Regions voters, many of whom are now Opposition Bloc voters. So questions about Akhmetov’s involvement remain open, but it’s clear that Osnova is trying to appeal to Akhmetov’s political constituency.

    As the article also notes, Taruta has a history of supporting pro-EU politicians, including Viktor Yushchenko and Yulia Tymoshenko. And he’s never cozied up to the pro-Russian groups.

    But Taruta does have one very notable Kremlin connection: in 2010, 50%+2 shares of Taruta’s industrial conglomerate, the Industrial Union of Donbas (IUD), were bought up by Russia’s Vneshekonombank, the foreign trade bank. That bank is 100% state-owned, and Russian Premier Dmitry Medvedev is the chair of its supervisory board. So Taruta does have a notable direct business tie with the Russian government. But as the article notes, there are no indications that Taruta or his new party are taking Russian money. And based on his political history it would be surprising if he were, because he’s clearly part of the pro-European branch of Ukraine’s politics.

    So we have AIQ doing some sort of work for Serhiy Taruta. Is that work data analytics for Osnova? We don’t know. It probably involves Taruta’s campaign against the National Bank of Ukraine, because Taruta is clearly very interested in waging that political fight. So interested that he staged a fake congressional hearing at the US Capitol that was broadcast on two Ukrainian television channels and sent the message that the US Congress was going to investigate Taruta’s claims about corruption at Ukraine’s central bank. So it’s possible AIQ was involved in that kind of political work too, especially given what we know about Cambridge Analytica and SCL and their reliance on psychological warfare methods to change public opinion. A fake congressional hearing, made possible with the help of a Republican congressman, Rep. Estes, who scheduled the room at the US Capitol, seems like exactly the kind of advice we should expect from the Cambridge Analytica people.
    The question of what exactly AIQ has been doing for Taruta would be a pretty big question given the scandal and mystery swirling around Cambridge Analytica and SCL. The fake congressional hearing makes it a much weirder question about the ultimate goals and agenda of the people behind Cambridge Analytica:

    Ukrain­ian Week

    Osno­va: Taruta’s polit­i­cal foun­da­tion
    Found­ed this fall, Donet­sk oli­garch Ser­hiy Taruta’s Osno­va or Foun­da­tion par­ty has already start­ed cam­paign­ing although the next Verk­hov­na Rada elec­tion is two years away

    Denys Kazan­skyi
    18 Octo­ber, 2017

    Dozens of bill­boards with his por­trait and the party’s name and slo­gan have popped up in Kyiv and in the south­east­ern oblasts of Ukraine. Infor­ma­tion about the new par­ty is not read­i­ly avail­able, how­ev­er, as it is still most­ly just on paper. But any oli­garchic project stands a good chance of meet­ing the thresh­old require­ment for gain­ing seats in the Rada based on a sol­id adver­tis­ing bud­get, as past expe­ri­ence has shown.

    Short on ide­ol­o­gy

    The Osno­va site states that the party’s ide­ol­o­gy is based on the prin­ci­ples of lib­er­al con­ser­vatism. In Ukrain­ian pol­i­tics, how­ev­er, these words typ­i­cal­ly mean very lit­tle. What kind of con­ser­vatism are we talk­ing about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s oth­er pop­ulists, all of whom count on a fair­ly unde­mand­ing elec­toral base. In some ways, he resem­bles Ser­hiy Tihip­ko, who tried over and over again to enter pol­i­tics as a “new face,” although he had been in pol­i­tics since his days in the Dnipropetro­vsk Oblast Kom­so­mol Exec­u­tive.

    Who will join the Taru­ta team? Whose inter­ests will the par­ty pro­mote and who will be its allies? Where will its mon­ey come from? Taru­ta him­self is a very ambigu­ous fig­ure. For a long time he was seen as an untyp­i­cal Donet­sk home­boy: a high-pro­file busi­ness­man with an intel­li­gent demeanor with­out any known crim­i­nal back­ground. He also dif­fered from the oth­er Donet­sk politi­cians in his polit­i­cal posi­tions. He nev­er played up to pro-Russ­ian par­ties and move­ments, sup­port­ing, instead, pro-Ukrain­ian forces that were nev­er very pop­u­lar in Don­bas.

    For instance, in a 2006 inter­view in Ukrain­s­ka Prav­da, the tycoon admit­ted that in 2004 he had cast his bal­lot for Vik­tor Yushchenko. “My posi­tion was the Euro­pean choice,” he empha­sized. In that same inter­view he also men­tioned that he liked Yulia Tymoshenko.

    In 2010, Taru­ta, in fact, sup­port­ed Tymoshenko in her bid for the pres­i­den­cy. “Of the two can­di­dates run­ning today, only Yulia Tymoshenko will be able to effec­tive­ly defend busi­ness inter­ests and over­come cor­rup­tion,” he said in Feb­ru­ary 2010. “She rep­re­sents polit­i­cal and eco­nom­ic sta­bil­i­ty in Ukraine and will work in the country’s inter­ests, not the inter­ests of some par­tic­u­lar busi­ness clan. Besides, Ms. Tymoshenko has well-deserved author­i­ty in the eyes of lead­ers in Rus­sia and Europe, which means she will always be able to work out a deal in favor of Ukrain­ian busi­ness. Only with Pres­i­dent Tymoshenko will it be pos­si­ble for Ukraine to see all those promis­ing growth plans that we have out­lined with our new Russ­ian part­ners.”

    Pos­i­tive image, poor per­for­mance

    And so, when Taru­ta was appoint­ed Gov­er­nor of Donet­sk Oblast in 2014, just as the anti-Ukrain­ian putsch began there, Ukraini­ans by and large saw this as some­thing pos­i­tive. Taru­ta seemed to be just the right can­di­date with the strength to resolve the sit­u­a­tion: a local oli­garch who under­stood the local men­tal­i­ty well and was ori­ent­ed towards Ukraine. But it was not to be. Taru­ta proved to be a weak politi­cian and was unable to get con­trol over the sit­u­a­tion. The local police and SBU kept sab­o­tag­ing orders from above and had lit­tle inter­est in defend­ing the Oblast State Admin­is­tra­tion. Unlike Ihor Kolo­moyskiy in neigh­bor­ing Dnipropetro­vsk Oblast, Taru­ta either did not dare or did not want to put togeth­er pro-Ukrain­ian Self-Defense squads. And so the Don­bas Bat­tal­ion was actu­al­ly formed in Dnipro, and not in the Don­bas. Mean­while in Donet­sk Oblast, the advan­tage went to the mil­i­tants almost from the start.

    After he resigned as gov­er­nor, Taru­ta was elect­ed to the Verk­hov­na Rada. Even­tu­al­ly, he announced the for­ma­tion of his own polit­i­cal par­ty. Based on infor­ma­tion leaked in the press, it was clear from the begin­ning that this new par­ty was intend­ed to pick up the elec­torate of the now-defunct Par­ty of the Regions, most­ly in Ukraine’s south­ern and east­ern oblasts. This cer­tain­ly makes sense, but the prob­lem is that there are sev­er­al sim­i­lar par­ties already busy work­ing to win over this same elec­torate. The monop­oly enjoyed by PR has long since col­lapsed. Now, vot­ers in those regions have the Oppo­si­tion Bloc or Opobloc, Vadym Rabinovych’s Za Zhyt­tia [For Life] Par­ty, the for­ev­er-lurk­ing Vidrodzhen­nia [Revival] found­ed in 2004, and Nash Krai [Our Coun­try]. Osno­va will make five in this clus­ter and can only hope that yet anoth­er project along the lines of the also-defunct Social­ist Par­ty doesn’t make an appear­ance in the run-up to the 2019 elec­tion. In this kind of sit­u­a­tion, the chances of Osno­va suc­ceed­ing with­out form­ing an alliance with any of the more pop­u­lar polit­i­cal par­ties are very low.

    There were rumors at one point that Taruta’s party was being supported by Rinat Akhmetov, but this is hard to confirm, one way or another, especially since relations between the two Donetsk tycoons were always strained. The chances of this being true are at most 50–50. One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed. If this is true, however, then Osnova is pretty much guaranteed a spot in the next Rada, because Akhmetov has both the money and the administrative leverage in Donetsk, Zaporizhzhia and Dnipropetrovsk Oblasts, where his businesses are located, to make sure of this.

    Fill­ing Osno­va’s ranks

    So far, it’s not obvi­ous that Akhme­tov is behind this new par­ty of Taruta’s. Of those who have already con­firmed that they will join Osno­va, Akhmetov’s peo­ple are not espe­cial­ly evi­dent. Right now, the par­ty appears to be draw­ing peo­ple who are not espe­cial­ly known in Ukrain­ian pol­i­tics. Indeed, judg­ing from the party’s Face­book page, there are only three or four spokesper­sons oth­er than Taru­ta.

    ...

    The PR-Rus­sia con­nec­tion

    Why Ser­hiy Taru­ta decid­ed to put his faith in peo­ple relat­ed to the Yanukovych regime is not entire­ly under­stand­able. Is this the per­son­al ini­tia­tive of the oli­garch him­self or is it at the request of some silent investor? It’s not clear who actu­al­ly is fund­ing the par­ty, but it seems unlike­ly that Taru­ta is putting up his own mon­ey. Although this oligarch’s worth was esti­mat­ed at over US $2 bil­lion back in 2008, he claims today that his wealth has shrunk a thou­sand-fold. In an inter­view with Hard Talk in 2015, he announced that he had pre­served only 0.1% of his for­mer wealth.

    Which brings the sto­ry around to Taruta’s busi­ness inter­ests. In 2010, 50%+2 shares of the Indus­tri­al Union of Don­bas (IUD), found­ed by the oli­garch, was bought up by Russia’s Vneshekonom­bank, the for­eign trade bank. That means that Taru­ta and the bank are part­ners. Taru­ta him­self holds only 24.999% of IUD, while the bank is 100% state-owned and Russ­ian Pre­mier Dmit­ry Medvedev is the chair of its super­vi­so­ry board. And so, whether he intend­ed it to be so or not, Ser­hiy Taru­ta is busi­ness part­ners with the Krem­lin.

    What kind of influ­ence the Krem­lin has over the Donet­sk oli­garch and his par­ty is not entire­ly clear and, so far, there is no evi­dence. Nor is there evi­dence that Osno­va is being financed by Russ­ian mon­ey. Giv­en the polit­i­cal his­to­ries of the party’s spokesper­sons, how­ev­er, and the nature of Taruta’s busi­ness inter­ests, it’s worth get­ting a good glimpse into its inner work­ings. It’s entire­ly pos­si­ble that, under the aegis of a pro-Euro­pean politi­cian, some more agents of influ­ence from an ene­my state could find their way to seats in the Rada.

    In the base­ment of the Capi­tol

    Anna Kor­but

    On September 25, NewsOne reported on Serhiy Taruta’s event in Washington, “The highest level in the US, the Special Congressional Committee for Financial Issues [sic], will find out about the corruption at the NBU. Only thanks to the systematic work of the team that collected evidence about the corruption of the top officials at the National Bank of Ukraine, will the strongest in the world find out about this.” At the event, Taruta and Oleksandr Zavadetskiy, a one-time director of the NBU Department for Monitoring individuals connected to banks, were planning to report on the deals that by-then-departed NBU Governor Valeria Hontareva had cut. The event did take place… in a tiny basement room at the Capitol where the Congress meets, with a very small audience—and NewsOne cameras.

    The speak­ers at the event were intro­duced, not with­out some prob­lems in pro­nun­ci­a­tion, by Con­nie Mack IV, a Repub­li­can mem­ber of the US House of Rep­re­sen­ta­tives from 2005 to 2013. Since leav­ing his Con­gres­sion­al career behind, Mack has been work­ing as a lob­by­ist and con­sul­tant. Over 2015–2016, his name often came up as a lob­by­ist for Hungary’s Vik­tor Orban Admin­is­tra­tion in the US.

    For­mer CIA direc­tor James Woolsey Jr. offered a few gen­er­al­ized com­ments about cor­rup­tion. In addi­tion to being the CIA boss in 1993–1995 under the first Clin­ton Admin­is­tra­tion, Woolsey held high posts under oth­er US pres­i­dents as well and was involved in nego­ti­a­tions with the USSR over arms treaties in the 1980s.

    Interestingly, there were no current elected American officials in attendance at the event. Moreover, there is no such creature as a “Special Congressional Committee for Financial Issues” in the US Congress. The Congress has a Financial Services Committee and the Senate has a Finance Committee. Among the joint Congressional committees there is none that specializes specifically in financial issues. The Senate Finance Committee met on September 25 but the agenda included only propositions from a number of senators on how to reform the Affordable Care Act. Pretty much the only reaction to Taruta’s US event was an article by JP Carroll in the Weekly Standard under the headline, “The mother of all fake news items: How a windowless room in the basement of the Capitol was set up to look like a fake [sic] Congressional hearing.” And some angry tweets in response.

    Lat­er on, in fact, some ques­tions did arise, such as the valid­i­ty of infor­ma­tion pub­lished in a pam­phlet enti­tled: “Hontare­va: a threat to Ukraine’s eco­nom­ic secu­ri­ty,” which was hand­ed out to par­tic­i­pants. Yet, this very brochure had been chal­lenged near­ly a year ear­li­er, in Octo­ber 2016, by reporters at Vox Ukraine, who ana­lyzed the infor­ma­tion pre­sent­ed. In an arti­cle enti­tled “Vox­Check of the Year. How Much Truth There Is in Ser­hiy Taruta’s Pam­phlet about the Head of Ukraine’s Cen­tral Bank,” jour­nal­ists came to the con­clu­sion that, while the data in the text was large­ly accu­rate, it had been com­plete­ly manip­u­lat­ed. Some­what lat­er, they did a fol­low-up analy­sis of what Ms. Hontare­va actu­al­ly did wrong as NBU Chair.

    Trans­lat­ed by Lidia Wolan­skyj

    ———-
    “Osno­va: Taruta’s polit­i­cal foun­da­tion” by Denys Kazan­skyi; Ukrain­ian Week; 10/18/2017

    “The Osno­va site states that the party’s ide­ol­o­gy is based on the prin­ci­ples of lib­er­al con­ser­vatism. In Ukrain­ian pol­i­tics, how­ev­er, these words typ­i­cal­ly mean very lit­tle. What kind of con­ser­vatism are we talk­ing about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s oth­er pop­ulists, all of whom count on a fair­ly unde­mand­ing elec­toral base. In some ways, he resem­bles Ser­hiy Tihip­ko, who tried over and over again to enter pol­i­tics as a “new face,” although he had been in pol­i­tics since his days in the Dnipropetro­vsk Oblast Kom­so­mol Exec­u­tive.”

    A party based on the principles of liberal conservatism. So a vague party for a vague cause. That seems like an appropriate fit for Serhiy Taruta, an intriguingly vague figure. But he’s a notable figure from Donetsk, the heartland of the separatists, because he never played up to the pro-Russian parties and movements and was consistently a supporter of the pro-Kiev forces. That included voting for Viktor Yushchenko in 2004 and supporting Yulia Tymoshenko in 2010:

    ...
    Who will join the Taru­ta team? Whose inter­ests will the par­ty pro­mote and who will be its allies? Where will its mon­ey come from? Taru­ta him­self is a very ambigu­ous fig­ure. For a long time he was seen as an untyp­i­cal Donet­sk home­boy: a high-pro­file busi­ness­man with an intel­li­gent demeanor with­out any known crim­i­nal back­ground. He also dif­fered from the oth­er Donet­sk politi­cians in his polit­i­cal posi­tions. He nev­er played up to pro-Russ­ian par­ties and move­ments, sup­port­ing, instead, pro-Ukrain­ian forces that were nev­er very pop­u­lar in Don­bas.

    For instance, in a 2006 inter­view in Ukrain­s­ka Prav­da, the tycoon admit­ted that in 2004 he had cast his bal­lot for Vik­tor Yushchenko. “My posi­tion was the Euro­pean choice,” he empha­sized. In that same inter­view he also men­tioned that he liked Yulia Tymoshenko.

    In 2010, Taru­ta, in fact, sup­port­ed Tymoshenko in her bid for the pres­i­den­cy. “Of the two can­di­dates run­ning today, only Yulia Tymoshenko will be able to effec­tive­ly defend busi­ness inter­ests and over­come cor­rup­tion,” he said in Feb­ru­ary 2010. “She rep­re­sents polit­i­cal and eco­nom­ic sta­bil­i­ty in Ukraine and will work in the country’s inter­ests, not the inter­ests of some par­tic­u­lar busi­ness clan. Besides, Ms. Tymoshenko has well-deserved author­i­ty in the eyes of lead­ers in Rus­sia and Europe, which means she will always be able to work out a deal in favor of Ukrain­ian busi­ness. Only with Pres­i­dent Tymoshenko will it be pos­si­ble for Ukraine to see all those promis­ing growth plans that we have out­lined with our new Russ­ian part­ners.”
    ...

    And Taruta’s pro-Kiev orientation is no doubt a big reason he was appointed governor of Donetsk Oblast in March of 2014 following the post-Maidan collapse of the Yanukovych government. But he didn’t last long, resigning in October of 2014. That was partly attributed to his limited support for the volunteer militias when compared to Ihor Kolomoisky, the appointed governor of the neighboring Dnipropetrovsk Oblast (note that, as we’ll see in a following article, both Taruta and Kolomoisky reportedly supported the Azov Battalion):

    ...
    Pos­i­tive image, poor per­for­mance

    And so, when Taru­ta was appoint­ed Gov­er­nor of Donet­sk Oblast in 2014, just as the anti-Ukrain­ian putsch began there, Ukraini­ans by and large saw this as some­thing pos­i­tive. Taru­ta seemed to be just the right can­di­date with the strength to resolve the sit­u­a­tion: a local oli­garch who under­stood the local men­tal­i­ty well and was ori­ent­ed towards Ukraine. But it was not to be. Taru­ta proved to be a weak politi­cian and was unable to get con­trol over the sit­u­a­tion. The local police and SBU kept sab­o­tag­ing orders from above and had lit­tle inter­est in defend­ing the Oblast State Admin­is­tra­tion. Unlike Ihor Kolo­moyskiy in neigh­bor­ing Dnipropetro­vsk Oblast, Taru­ta either did not dare or did not want to put togeth­er pro-Ukrain­ian Self-Defense squads. And so the Don­bas Bat­tal­ion was actu­al­ly formed in Dnipro, and not in the Don­bas. Mean­while in Donet­sk Oblast, the advan­tage went to the mil­i­tants almost from the start.
    ...

    After resigning as governor, Taruta was elected to the Verkhovna Rada. And now he has a new party, Osnova, which is characterized as clearly designed to pick up the electorate of the now-defunct Party of Regions:

    ...
    After he resigned as gov­er­nor, Taru­ta was elect­ed to the Verk­hov­na Rada. Even­tu­al­ly, he announced the for­ma­tion of his own polit­i­cal par­ty. Based on infor­ma­tion leaked in the press, it was clear from the begin­ning that this new par­ty was intend­ed to pick up the elec­torate of the now-defunct Par­ty of the Regions, most­ly in Ukraine’s south­ern and east­ern oblasts. This cer­tain­ly makes sense, but the prob­lem is that there are sev­er­al sim­i­lar par­ties already busy work­ing to win over this same elec­torate. The monop­oly enjoyed by PR has long since col­lapsed. Now, vot­ers in those regions have the Oppo­si­tion Bloc or Opobloc, Vadym Rabinovych’s Za Zhyt­tia [For Life] Par­ty, the for­ev­er-lurk­ing Vidrodzhen­nia [Revival] found­ed in 2004, and Nash Krai [Our Coun­try]. Osno­va will make five in this clus­ter and can only hope that yet anoth­er project along the lines of the also-defunct Social­ist Par­ty doesn’t make an appear­ance in the run-up to the 2019 elec­tion. In this kind of sit­u­a­tion, the chances of Osno­va suc­ceed­ing with­out form­ing an alliance with any of the more pop­u­lar polit­i­cal par­ties are very low.
    ...

    And there has been speculation that Rinat Akhmetov, a top oligarch and one of the primary backers of the “Opposition Bloc”, may be behind Taruta’s Osnova initiative. But there’s no evidence of this, and if true it would put Osnova in competition for Akhmetov’s Opposition Bloc voters. Also, people close to Akhmetov aren’t found in Osnova’s leadership:

    ...
    There were rumors at one point that Taruta’s party was being supported by Rinat Akhmetov, but this is hard to confirm, one way or another, especially since relations between the two Donetsk tycoons were always strained. The chances of this being true are at most 50–50. One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed. If this is true, however, then Osnova is pretty much guaranteed a spot in the next Rada, because Akhmetov has both the money and the administrative leverage in Donetsk, Zaporizhzhia and Dnipropetrovsk Oblasts, where his businesses are located, to make sure of this.

    Fill­ing Osno­va’s ranks

    So far, it’s not obvi­ous that Akhme­tov is behind this new par­ty of Taruta’s. Of those who have already con­firmed that they will join Osno­va, Akhmetov’s peo­ple are not espe­cial­ly evi­dent. Right now, the par­ty appears to be draw­ing peo­ple who are not espe­cial­ly known in Ukrain­ian pol­i­tics. Indeed, judg­ing from the party’s Face­book page, there are only three or four spokesper­sons oth­er than Taru­ta.
    ...

    But while Taruta is clearly a pro-Kiev/pro-EU kind of Ukrainian politician, he does have one notable tie to the Kremlin: a majority stake in his industrial conglomerate was sold to a Russian state-owned bank in 2010:

    ...
    The PR-Rus­sia con­nec­tion

    Why Ser­hiy Taru­ta decid­ed to put his faith in peo­ple relat­ed to the Yanukovych regime is not entire­ly under­stand­able. Is this the per­son­al ini­tia­tive of the oli­garch him­self or is it at the request of some silent investor? It’s not clear who actu­al­ly is fund­ing the par­ty, but it seems unlike­ly that Taru­ta is putting up his own mon­ey. Although this oligarch’s worth was esti­mat­ed at over US $2 bil­lion back in 2008, he claims today that his wealth has shrunk a thou­sand-fold. In an inter­view with Hard Talk in 2015, he announced that he had pre­served only 0.1% of his for­mer wealth.

    Which brings the sto­ry around to Taruta’s busi­ness inter­ests. In 2010, 50%+2 shares of the Indus­tri­al Union of Don­bas (IUD), found­ed by the oli­garch, was bought up by Russia’s Vneshekonom­bank, the for­eign trade bank. That means that Taru­ta and the bank are part­ners. Taru­ta him­self holds only 24.999% of IUD, while the bank is 100% state-owned and Russ­ian Pre­mier Dmit­ry Medvedev is the chair of its super­vi­so­ry board. And so, whether he intend­ed it to be so or not, Ser­hiy Taru­ta is busi­ness part­ners with the Krem­lin.

    What kind of influ­ence the Krem­lin has over the Donet­sk oli­garch and his par­ty is not entire­ly clear and, so far, there is no evi­dence. Nor is there evi­dence that Osno­va is being financed by Russ­ian mon­ey. Giv­en the polit­i­cal his­to­ries of the party’s spokesper­sons, how­ev­er, and the nature of Taruta’s busi­ness inter­ests, it’s worth get­ting a good glimpse into its inner work­ings. It’s entire­ly pos­si­ble that, under the aegis of a pro-Euro­pean politi­cian, some more agents of influ­ence from an ene­my state could find their way to seats in the Rada.
    ...

    And beyond building his mysterious new Osnova party, Taruta is also busy lobbying the US about his pet project of outing alleged corruption at Ukraine’s central bank. Or at least he’s busy making it look like he’s lobbying the US about this. And he’s willing to go to enormous lengths to create those appearances, like the September 25, 2017 fake congressional hearing at the US Capitol where an ex-congressman, Connie Mack, pretended to express congressional outrage over Taruta’s allegations and an ex-CIA chief, James Woolsey, gave words of support for the ‘anti-corruption drive’. And this was all televised in Ukraine and treated like a real US political event:

    ...
    On September 25, NewsOne reported on Serhiy Taruta’s event in Washington, “The highest level in the US, the Special Congressional Committee for Financial Issues [sic], will find out about the corruption at the NBU. Only thanks to the systematic work of the team that collected evidence about the corruption of the top officials at the National Bank of Ukraine, will the strongest in the world find out about this.” At the event, Taruta and Oleksandr Zavadetskiy, a one-time director of the NBU Department for Monitoring individuals connected to banks, were planning to report on the deals that by-then-departed NBU Governor Valeria Hontareva had cut. The event did take place… in a tiny basement room at the Capitol where the Congress meets, with a very small audience—and NewsOne cameras.

    The speak­ers at the event were intro­duced, not with­out some prob­lems in pro­nun­ci­a­tion, by Con­nie Mack IV, a Repub­li­can mem­ber of the US House of Rep­re­sen­ta­tives from 2005 to 2013. Since leav­ing his Con­gres­sion­al career behind, Mack has been work­ing as a lob­by­ist and con­sul­tant. Over 2015–2016, his name often came up as a lob­by­ist for Hungary’s Vik­tor Orban Admin­is­tra­tion in the US.

    For­mer CIA direc­tor James Woolsey Jr. offered a few gen­er­al­ized com­ments about cor­rup­tion. In addi­tion to being the CIA boss in 1993–1995 under the first Clin­ton Admin­is­tra­tion, Woolsey held high posts under oth­er US pres­i­dents as well and was involved in nego­ti­a­tions with the USSR over arms treaties in the 1980s.

    Interestingly, there were no current elected American officials in attendance at the event. Moreover, there is no such creature as a “Special Congressional Committee for Financial Issues” in the US Congress. The Congress has a Financial Services Committee and the Senate has a Finance Committee. Among the joint Congressional committees there is none that specializes specifically in financial issues. The Senate Finance Committee met on September 25 but the agenda included only propositions from a number of senators on how to reform the Affordable Care Act. Pretty much the only reaction to Taruta’s US event was an article by JP Carroll in the Weekly Standard under the headline, “The mother of all fake news items: How a windowless room in the basement of the Capitol was set up to look like a fake [sic] Congressional hearing.” And some angry tweets in response.

    Lat­er on, in fact, some ques­tions did arise, such as the valid­i­ty of infor­ma­tion pub­lished in a pam­phlet enti­tled: “Hontare­va: a threat to Ukraine’s eco­nom­ic secu­ri­ty,” which was hand­ed out to par­tic­i­pants. Yet, this very brochure had been chal­lenged near­ly a year ear­li­er, in Octo­ber 2016, by reporters at Vox Ukraine, who ana­lyzed the infor­ma­tion pre­sent­ed. In an arti­cle enti­tled “Vox­Check of the Year. How Much Truth There Is in Ser­hiy Taruta’s Pam­phlet about the Head of Ukraine’s Cen­tral Bank,” jour­nal­ists came to the con­clu­sion that, while the data in the text was large­ly accu­rate, it had been com­plete­ly manip­u­lat­ed. Some­what lat­er, they did a fol­low-up analy­sis of what Ms. Hontare­va actu­al­ly did wrong as NBU Chair.
    ...

    So now let’s take a look at a report on this bizarre fake event written by the one American reporter who was invited to attend. As the article notes, the event was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” No current members of Congress were there. Instead, it was a private panel discussion hosted by former Rep. Connie Mack IV (R‑FL) and Matt Keelen, a veteran political fundraiser and operative. It was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter. Mack was wearing his old congressional pin on his lapel.

    Much of the event was spent crit­i­ciz­ing Ukraine’s for­mer cen­tral banker Valeriya Hontare­va (Gontare­va). The “HONTAREVA report” is the prod­uct of Taru­ta, and he has been out pro­mot­ing it since late 2016. Accord­ing to Vox­Check, a Ukrain­ian fact check­ing web­site, “the data [in the report], though most­ly cor­rect, are manip­u­lat­ed in almost all occa­sions.” Vox­Check also notes that the report has split Ukrain­ian politi­cians.

    James Woolsey, the for­mer CIA direc­tor and for­mer Trump cam­paign advis­er, was also at the event and briefly spoke. Woolsey talked about how “sweet” Rus­sia was in the ear­ly years after the fall of the Berlin Wall and the need to find a way to make Rus­sia “sweet” like that again.

    One Senate aide described Woolsey’s appearance there as part of a strange, strange event and an “inter-oligarch dispute”: “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for a inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.” And that seems like a good way to summarize this: a strange, strange event that’s one element of a broad inter-oligarch dispute. A dispute that’s giving us some insights into the kind of figures in Ukraine that Cambridge Analytica and AIQ want to work for:

    The Week­ly Stan­dard

    The Moth­er of All Fake News
    How a win­dow­less room in the base­ment of the Capi­tol was set up to look like a fake con­gres­sion­al hear­ing.

    1:12 PM, Sep 29, 2017 | By J.P. CARROLL

    Watch­ers of Ukraine’s New­sOne tele­vi­sion chan­nel on Sep­tem­ber 25 were treat­ed to what was sug­gest­ed to be a con­gres­sion­al hear­ing in Wash­ing­ton about cor­rup­tion in the Nation­al Bank of Ukraine (the NBU), which is the Ukrain­ian equiv­a­lent of the Fed­er­al Reserve Board.

    The event, which took place in the base­ment of the U.S. Capi­tol, Room HC 8, was billed by the Ukrain­ian tele­vi­sion chan­nel as a meet­ing of the “US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.” New­sOne teased it this way:

    The high­est lev­els of cor­rup­tion in the NBU are known by the US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.

    Only thanks to the sys­tem­at­ic work of the team that col­lect­ed evi­dence of cor­rup­tions of the most impor­tant offi­cials of the Nation­al Bank, the strongest of the world will find out about it.

    Shock­ing details and res­o­nant details—live stream­ing on New­sOne! Turn on at 21:00—live from Wash­ing­ton DC

    Except, what was broad­cast was not a hear­ing of any com­mit­tee of Con­gress. No cur­rent mem­bers of Con­gress were even there. What was this odd event? A pri­vate pan­el dis­cus­sion host­ed by for­mer Rep. Con­nie Mack IV (R‑FL), along with vet­er­an polit­i­cal fundrais­er and oper­a­tive, Matt Kee­len. But unlike an actu­al con­gres­sion­al hear­ing, this pri­vate event was open only to invit­ed guests (includ­ing con­gres­sion­al staffers), two Ukrain­ian reporters (from New­sOne), and one Amer­i­can reporter (me).

    Handed out to attendees was a report titled “HONTAREVA: Combatting Corruption in the National Bank of Ukraine.” The report’s subject is Valeriya Hontareva, who resigned as governor of the NBU in April in the wake of death threats after she reformed Ukraine’s banking system, including nationalizing the largest bank, PrivatBank. Hontareva is an ally of Ukrainian President Petro Poroshenko.

    Join­ing Mack and Kee­len at the front of the room were two pan­elists: Sergiy Taru­ta, a bil­lion­aire mem­ber of the Ukrain­ian par­lia­ment who pre­vi­ous­ly served as gov­er­nor of Donet­sk in east­ern Ukraine, and Olek­san­dr Zavadet­skyi, who for­mer­ly worked at the NBU and claimed to have been fired after ask­ing inap­pro­pri­ate ques­tions regard­ing bank nation­al­iza­tion pro­ce­dures while Hontare­va was in charge.

    The HONTAREVA report is the prod­uct of Sergiy Taru­ta, and he has been out flog­ging it for near­ly a year. Vox­Check, a Ukrain­ian fact check­ing web­site, ana­lyzed Taruta’s report in late 2016 and says of the report: “Vox­Check has checked most of the facts from the Taruta’s brochure and has dis­cov­ered that the data, though most­ly cor­rect, are manip­u­lat­ed in almost all occa­sions.”

    Vox­Check reports that the effect of Taruta’s “pam­phlet” has been a “split [between] politi­cians and experts into two oppos­ing camps, those who sup­port Taru­ta and those who sup­port Valeriya Hontare­va.” (Vox­Check was sim­i­lar­ly crit­i­cal of Hontareva’s rebut­tal.)

    Much of the event was spent crit­i­ciz­ing Hontare­va. Mack wore his old con­gres­sion­al pin on his lapel through­out. He opened by mus­ing about his time on the House For­eign Affairs Com­mit­tee. “It was always impor­tant for us as a com­mit­tee and as a Con­gress to under­stand what’s hap­pen­ing around the world, and the top­ic of cor­rup­tion would always come up,” he said.

    Curi­ous­ly, James Woolsey, the for­mer Clin­ton Admin­is­tra­tion CIA direc­tor and for­mer Trump cam­paign advis­er, also attend­ed and briefly spoke dur­ing the event.

    Mack identified Woolsey as “a special guest with us today.” Woolsey got up from his seat in the sparse audience and recalled the time years ago when he helped negotiate a conventional arms treaty in Europe. He mentioned Ukraine in that context, but did not talk about corruption. Woolsey said in part that after the fall of the Berlin Wall, “For the next three to four years, the Russians were very easy to get along with. They were sweethearts.” The former CIA director went on to say, “I would love to see the international events work out in such a way that we end up being able to do two things. One, is to deal with the existence of corruption in the way that you referred to and that many people here are experts on. And the other is to keep Ukraine and other states in the region, such as Poland, from feeling that they are constantly under pressure from Russia to do the wrong thing. Resuscitate the days of friendly Russia in the early ‘90s.”

    When asked as to why he host­ed this event, for­mer Con­gress­man Mack told this reporter, “I rep­re­sent a group that is inter­est­ed in high­light­ing cor­rup­tion, not just in Ukraine, but all over: from Cen­tral to South Amer­i­ca, to East­ern Europe.” Mack acknowl­edged before­hand that the event was on the record, but when I asked Woolsey about his atten­dance after the event, he sug­gest­ed that his remarks were off the record, despite the event being record­ed and broad­cast on Ukraine’s New­sOne.

    Whether inten­tion­al or not, the nature and loca­tion of the event gave Ukrain­ian jour­nal­ists the pre­text to mis­lead­ing­ly sug­gest the event was action by the Unit­ed States Con­gress.

    In an inter­view after the event con­clud­ed, Taru­ta told New­sOne: “the fact that we’re here is exact­ly proof that the Amer­i­can gov­ern­ment, the Amer­i­can Con­gress, are not indif­fer­ent to the cor­rup­tion that is today at the high­est ech­e­lons of power/government.”

    Ukrain­ian offi­cials derid­ed Mack’s pan­el as fake news. Via a press release, the NBU’s web­site respond­ed this way:

    Ser­hii Taru­ta spreads false infor­ma­tion about an alleged hear­ing in the Con­gress of the Unit­ed States of Amer­i­ca ded­i­cat­ed to Ukrain­ian author­i­ties and the NBU.

    As far as the NBU is informed, the US Con­gress held no offi­cial hear­ing or meet­ing on the sub­jects indi­cat­ed in Mr Taruta’s mes­sage either today or any oth­er day. In real­i­ty, an infor­mal meet­ing host­ing less than 20 per­sons was held in a room tak­en on lease; the orga­niz­er and mod­er­a­tor was a rep­re­sen­ta­tive of the lob­by­ing com­pa­ny Lib­er­ty­In­ter­na­tion­al­Group, and the speak­ers were Mr Ser­hii Taru­ta and Mr Olek­san­dr Zavadet­skyi, an NBU’s for­mer employ­ee. No offi­cials from the US Admin­is­tra­tion or Con­gress attend­ed the events.

    In an email, Dmytro Shymkiv, the deputy head of Pres­i­den­tial Admin­is­tra­tion of Ukraine, said: “The event on Capi­tol Hill about the Nation­al Bank of Ukraine was not a con­gres­sion­al hear­ing . . . The dis­cus­sion was held with­out pub­lic scruti­ny and was spon­sored by a secret source. It just hap­pened to be con­vened in a room on Capi­tol Hill by an Amer­i­can who was once, years ago, a con­gress­man.” Mack, who is now a reg­is­tered lob­by­ist, was last in Con­gress in 2013 after being defeat­ed in a race for a U.S. Sen­ate seat.

    It is unclear whether the event was “sponsored” in the sense that money was exchanged for use of the room. Meeting rooms—like HC‑8—are typically used in conjunction with official congressional activity, but current members of Congress are able to sponsor use of such rooms for constituent groups, provided they attend. If they cannot attend, one of their aides is required to attend. The room reservation form from the speaker’s office, which controls reservations, warns congressional offices that these rooms cannot be used for: “Commercial, profit-making, fundraising, advertising, political or lobbying purposes, nor for entertaining tour groups.”

    An inquiry to Speak­er Ryan’s office about the use of the space was not returned.

    Mack is reg­is­tered to lob­by on behalf of Inter­con­nec­tion Com­merce S.A. to try to raise aware­ness of “cor­rup­tion with­in the Nation­al Bank of Ukraine.” POLITICO Influ­ence reports that “It’s unclear who Inter­con­nec­tion S.A. rep­re­sents. The firm lists an address in the British Vir­gin Islands and shows up in the Pana­ma Papers leaks but oth­er­wise has no online pres­ence.”

    A Sen­ate aide with knowl­edge of the event said, “It was a strange, strange event. Even by Ukrain­ian stan­dards, that was an odd one. . . . I mean, why would a for­mer CIA direc­tor be in the base­ment of the Capi­tol for a inter-oli­garch dis­pute? [For­mer] CIA direc­tors don’t just go to events and say, how much we could get along with the Rus­sians. They don’t do that with­out a rea­son.”

    ...

    ———-

    “The Moth­er of All Fake News” by J.P. CARROLL; The Week­ly Stan­dard; 09/29/2017

    The HONTAREVA report is the product of Sergiy Taruta, and he has been out flogging it for nearly a year. VoxCheck, a Ukrainian fact-checking website, analyzed Taruta’s report in late 2016 and says of it: “VoxCheck has checked most of the facts from the Taruta’s brochure and has discovered that the data, though mostly correct, are manipulated in almost all occasions.”

    The fake congressional hearing is a sign of how badly Taruta wants to publicize his report on the corruption at Ukraine’s central bank. But it’s also a sign that Taruta’s primary audience for this fake hearing was Ukrainians. And Taruta and his NewsOne Ukrainian media partners were more than happy to maintain the pretense that this was a real congressional event for that Ukrainian audience. It was a private hoax designed to look like a public event:

    ...
    The event, which took place in the base­ment of the U.S. Capi­tol, Room HC 8, was billed by the Ukrain­ian tele­vi­sion chan­nel as a meet­ing of the “US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.” New­sOne teased it this way:

    The high­est lev­els of cor­rup­tion in the NBU are known by the US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.

    Only thanks to the sys­tem­at­ic work of the team that col­lect­ed evi­dence of cor­rup­tions of the most impor­tant offi­cials of the Nation­al Bank, the strongest of the world will find out about it.

    Shock­ing details and res­o­nant details—live stream­ing on New­sOne! Turn on at 21:00—live from Wash­ing­ton DC

    Except, what was broad­cast was not a hear­ing of any com­mit­tee of Con­gress. No cur­rent mem­bers of Con­gress were even there. What was this odd event? A pri­vate pan­el dis­cus­sion host­ed by for­mer Rep. Con­nie Mack IV (R‑FL), along with vet­er­an polit­i­cal fundrais­er and oper­a­tive, Matt Kee­len. But unlike an actu­al con­gres­sion­al hear­ing, this pri­vate event was open only to invit­ed guests (includ­ing con­gres­sion­al staffers), two Ukrain­ian reporters (from New­sOne), and one Amer­i­can reporter (me).
    ...

    Adding to the bizarreness was the speech by former CIA director James Woolsey about what sweethearts the Russians were after the fall of the Berlin Wall and the need to return to that point:

    ...
    Curi­ous­ly, James Woolsey, the for­mer Clin­ton Admin­is­tra­tion CIA direc­tor and for­mer Trump cam­paign advis­er, also attend­ed and briefly spoke dur­ing the event.

    Mack identified Woolsey as “a special guest with us today.” Woolsey got up from his seat in the sparse audience and recalled the time years ago when he helped negotiate a conventional arms treaty in Europe. He mentioned Ukraine in that context, but did not talk about corruption. Woolsey said in part that after the fall of the Berlin Wall, “For the next three to four years, the Russians were very easy to get along with. They were sweethearts.” The former CIA director went on to say, “I would love to see the international events work out in such a way that we end up being able to do two things. One, is to deal with the existence of corruption in the way that you referred to and that many people here are experts on. And the other is to keep Ukraine and other states in the region, such as Poland, from feeling that they are constantly under pressure from Russia to do the wrong thing. Resuscitate the days of friendly Russia in the early ‘90s.”
    ...

    And that’s why one Senate aide called it a strange, strange event: a former CIA director showing up at a hoax event that’s part of a larger inter-oligarch dispute:

    ...
    A Senate aide with knowledge of the event said, “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for an inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.”
    ...

    So let’s now take a clos­er look at that inter-oli­garch dis­pute to get a bet­ter sense of who Taru­ta is aligned with in Ukraine. And in this case he’s clear­ly aligned with Ihor Kolo­moisky, co-founder of the nation­al­ized Pri­vat­bank.

    As the article also notes, when Taruta was selling the majority stake in the industrial conglomerate he co-founded, Industrial Union of Donbass, in 2010, he was a close ally of Yulia Tymoshenko. And according to leaked cables, Tymoshenko wanted him to keep the sale a secret over fears that she would be attacked for selling out Ukraine. It’s another indication of Taruta’s political pedigree.

    The arti­cle also has an expla­na­tion from James Woolsey on why he attend­ed that event: he was duped. He agreed to show up in the audi­ence and then was asked on the spot to make some remarks. That’s the line he’s going with.

    And the arti­cle iden­ti­fies the per­son who has come for­ward to claim respon­si­bil­i­ty for arrang­ing the event: Ana­toly Motkin, a one-time aide to a Geor­gian oli­garch. Motkin found­ed the StrategEast con­sult­ing firm that describes itself as “a strate­gic cen­ter for polit­i­cal and diplo­mat­ic solu­tions whose mis­sion is to guide and assist elites of the post-Sovi­et region into clos­er work­ing rela­tion­ships with the USA and West­ern Europe.” Motkin claims that he decid­ed to fund the event because Taru­ta brought the alle­ga­tions about Gontare­va to his atten­tion.

    So that gives us a few more data points about Taruta: he was close to Tymoshenko, he’s doing Ihor Kolomoisky’s bidding in waging this fight against the nationalization of Privatbank, and the person who actually set up the event runs a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:

    The Dai­ly Beast

    The Alleged­ly Mur­der­ous Oli­garch, the Duped CIA Chief, and the Trump­kin
    Who was behind a mys­te­ri­ous fake hear­ing in the base­ment of the U.S. Capi­tol?

    Bet­sy Woodruff
    03.27.18 5:04 AM ET

    On Sept. 25, 2017, a win­dow­less room in the base­ment of the Capi­tol Build­ing became the site of one of Washington’s more mys­te­ri­ous recent events.

    On hand: an investor who was once unsuc­cess­ful­ly sued for alleged­ly help­ing mur­der his own boss, a for­mer con­gress­man from the Flori­da pan­han­dle, and a for­mer Trump cam­paign staffer. One of two Ukrain­ian media out­lets to cov­er the event is owned by an old asso­ciate of Paul Manafort’s—a man who fed­er­al pros­e­cu­tors allege to be an “upper-ech­e­lon asso­ciate of Russ­ian orga­nized crime.”

    Oh, and the for­mer direc­tor of the CIA was involved.

    The for­mer CIA direc­tor told The Dai­ly Beast he wouldn’t have got­ten involved if he had known what was going on. One of the Amer­i­can lob­by­ists said the event was used for pro­pa­gan­da. The guy who got sued over his boss’s death? He now takes cred­it for the whole she­bang.

    THE BANK TAKEOVER

    This sto­ry starts in Kyiv, Ukraine, on June 19, 2014. That’s when a woman named Valeriya Gontare­va became the chair of the country’s pow­er­ful cen­tral bank. Ukrain­ian pol­i­tics is rife with cor­rup­tion, espe­cial­ly by Amer­i­can stan­dards, and is dom­i­nat­ed by the country’s pow­er­ful oli­garchs. As chair of the nation­al bank, Gontare­va made a host of changes to the country’s finan­cial system—and some pow­er­ful ene­mies.

    One of the biggest changes she over­saw was a gov­ern­ment takeover of the country’s biggest com­mer­cial bank, Pri­vat­bank. The oli­garch Ihor Kolo­moisky (who The Wall Street Jour­nal once described as “feisty”) co-found­ed it. When Gontare­va presided over the bank’s nation­al­iza­tion, its accounts were miss­ing more than $5 bil­lion, accord­ing to the Finan­cial Times, in large part because the bank lent so much mon­ey to peo­ple with con­nec­tions to Kolo­moisky.

    “Inter­na­tion­al finan­cial insti­tu­tions applaud­ed the state takeover,” wrote FT. “It has been wide­ly seen as the cul­mi­na­tion of Ukraine’s efforts since 2014 to clean up a dys­func­tion­al bank­ing sec­tor dom­i­nat­ed by oli­garch-owned banks.”

    The bank’s founders weren’t pleased.

    After the bank takeover, Gontareva received numerous threats. One protester put a coffin outside her door, according to Reuters. On April 10, 2017, she announced at a press conference that she was resigning from her post. She touted her accomplishments at the event, but cautioned that in her absence the country’s financial sector could face greater troubles.

    “I believe that resis­tance to changes and reforms will grow stronger now,” she said.

    THE FAKE HEARING

    Five months lat­er, in Wash­ing­ton D.C., some­thing odd hap­pened: Amer­i­can lob­by­ists host­ed an event, osten­si­bly on anti-cor­rup­tion issues, in the base­ment of the Capi­tol Build­ing. The event vil­i­fied Gontare­va. Orga­niz­ers dis­trib­uted lit­er­a­ture fea­tur­ing a grim close-up of her face, call­ing her a threat to Ukraine’s eco­nom­ic secu­ri­ty, and ask­ing if she was “CINDERELLA OR WICKED STEPMOTHER?”

    Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.”

    The event’s orga­niz­ers are adamant that they did not plan for it to look like a fake con­gres­sion­al hear­ing. But Ukrain­ian reporters who attend­ed the event cov­ered it that way. For­mer Rep. Con­nie Mack, one of the Amer­i­can lob­by­ists who orga­nized the event, sport­ed the pin that mem­bers of Con­gress wear. James Woolsey, for­mer CIA direc­tor, attend­ed and spoke briefly to the group.

    Woolsey’s spokesper­son, Jonathan Franks, lat­er said he was duped.

    “Ambas­sador Woolsey was delib­er­ate­ly mis­led about the nature of this event when he agreed to attend,” Franks told The Dai­ly Beast. “He expect­ed to be a mem­ber of the audi­ence for a seri­ous dis­cus­sion of issues fac­ing the Ukraine, an area he’s been inter­est­ed in for decades. He didn’t agree to be iden­ti­fied a ‘spe­cial guest’ nor did he agree to speak. Per­haps he was guilty of being old fash­ioned, but it nev­er occurred to him the orga­niz­ers would lure him to an event in the Capi­tol in order to make him an invol­un­tary par­tic­i­pant in a sham.”

    Rep. Ron Estes, a fresh­man from Kansas, booked the room for Mack and Co. His office lat­er told The Dai­ly Beast this won’t hap­pen again.

    Mack and Matt Kee­len, a lob­by­ist whose firm’s web­site boasts of his “well fos­tered rela­tion­ships” in the Trump admin­is­tra­tion, both dis­closed in fed­er­al reg­is­tra­tion forms that they put on the event for a shell com­pa­ny based in the British Vir­gin Islands called Inter­con­nec­tion Com­merce SA.

    “I nev­er por­trayed this as a hear­ing,” Mack told The Dai­ly Beast. “We didn’t do any­thing to make it look like a hear­ing. It was in a very stale room in the base­ment, no mark­ings of a con­gres­sion­al hear­ing at all.”

    At the event, Mack used the term “we” when refer­ring to Con­gress, and was emphat­ic that mem­bers should inves­ti­gate Gontare­va.

    “One thing is clear: that we, the Con­gress of the Unit­ed States—and there are tax­pay­er dol­lars at risk, and there are alle­ga­tions, sug­ges­tions, and evidence—should inves­ti­gate,” he said, accord­ing to an audio record­ing of the event.

    Mack blamed BGR Group, a lob­by­ing firm that works for Ukraine’s cur­rent pres­i­dent, Petro Poroshenko, for push­ing the nar­ra­tive that he and Kee­len put on a fake hear­ing.

    Two Ukrain­ian news out­lets cov­ered the event. One of those out­lets, Chan­nelOne, described it as a hear­ing of the nonex­is­tent “U.S. Con­gres­sion­al Com­mit­tee on Finan­cial Issues.”

    “That was pure pro­pa­gan­da on their part,” Mack said. “Who­ev­er those news out­lets are, it real­ly is fake news. They had to go a long way to try to make it look like a hear­ing.”

    The oth­er Ukrain­ian news out­let that cov­ered the event was UkraNews, which—accord­ing to the Objec­tive Project, which mon­i­tors media own­er­ship in Ukraine—belongs to Dmit­ry Fir­tash.

    That name should ring a bell, if you’ve been fol­low­ing the far-flung dra­ma into for­eign influ­ence on the 2016 elec­tion. Fed­er­al pros­e­cu­tors in Chica­go are seek­ing Firtash’s extra­di­tion to the Unit­ed States to put him on tri­al for rack­e­teer­ing. Man­afort, for­mer Man­afort deputy Rick Gates, and Fir­tash worked on a deal in 2008 to buy New York’s Drake Hotel—for a cool $850 million—but the deal fell through.

    Lan­ny Davis—a for­mer spe­cial coun­sel in Bill Clinton’s White House who today rep­re­sents Firtash—said his client had noth­ing to do with the hear­ing.

    “Mr. Fir­tash had and has no knowl­edge of, no posi­tion on, and no involve­ment what­so­ev­er in the con­gres­sion­al brief­ing that occurred and takes no posi­tion and has no inter­est in the issues dis­cussed,” Davis said.

    THE MYSTERY MAN

    So who dreamed up this fake hear­ing? And who paid for it? For months, the backer of this so-called sham was a mys­tery. But when The Dai­ly Beast start­ed ask­ing who paid for the event, a lit­tle-known fig­ure came for­ward to take full respon­si­bil­i­ty: Ana­toly Motkin, a one-time aide to a Geor­gian oli­garch accused of lead­ing a coup attempt.

    A spokesper­son for Motkin, for­mer­ly an asso­ciate to the now-deceased Badri Patarkat­sishvili, told The Dai­ly Beast that he paid for the entire event. Ali­son Patch, a spokesper­son for Motkin, said Motkin paid for the event him­self in his per­son­al capac­i­ty.

    Motkin was an aide to Patarkat­sishvili when he report­ed­ly tried to foment a coup in Geor­gia. After Patarkat­sishvili died, Motkin found him­self embroiled in a legal bat­tle with Patarkatsishvili’s cousin. The cousin alleged in doc­u­ments filed as part of a civ­il suit in New York state court that Motkin was part of a plot to kill Patarkat­sishvili (PDF).

    A spokesper­son for Motkin said he decid­ed to fund the event because Taru­ta, the Ukrain­ian bil­lion­aire, brought the alle­ga­tions about Gontare­va to his atten­tion.

    “Although this report was entire­ly brought by Mr. Taruta’s ini­tia­tive, for many years Mr. Motkin has worked on pro­mot­ing demo­c­ra­t­ic val­ues amongst com­mu­ni­ties close to the for­mer Sovi­et Union,” said Patch. “Know­ing of his inter­est in sup­port­ing anti-cor­rup­tion efforts, Mr. Taru­ta shared the infor­ma­tion about his report. Mr. Motkin found the evi­dence pre­sent­ed com­pelling and decid­ed that if he could help get the issues in front of peo­ple who may make a dif­fer­ence, he would.”

    Anders Aslund of the Atlantic Coun­cil, an expert on oli­garchs’ pol­i­tick­ing, didn’t quite believe it. Aslund said he believes the dri­ving force behind the event was Ihor Kolomoisky—the Ukrain­ian oli­garch whose cronies lost all that mon­ey when Pri­vat­bank was nation­al­ized. Kolomisky would have mil­lions of rea­sons to detest Gontare­va, the object of the fake hearing’s ire, accord­ing to Aslund.

    “This was entire­ly Kolo­moisky,” he said. “Kolo­moisky is crooked and clever. He is a per­son who makes busi­ness by doing bank­rupt­cy rather than mak­ing prof­its.”

    Kolo­moisky has faced alle­ga­tions of involve­ment in con­tract killings, which he denies. An attor­ney for Kolo­moisky did not respond to mul­ti­ple requests for com­ment.

    ...

    ———-

    “The Alleged­ly Mur­der­ous Oli­garch, the Duped CIA Chief, and the Trump­kin” by Bet­sy Woodruff; The Dai­ly Beast; 03/27/2018

    “Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.””

    That’s a key obser­va­tion: Taru­ta was seen as a close Tymoshenko ally.

    But he’s also a Kolomoisky ally, since this inter-oligarch dispute is Kolomoisky’s dispute and Taruta is fighting Kolomoisky’s fight:

    ...
    This sto­ry starts in Kyiv, Ukraine, on June 19, 2014. That’s when a woman named Valeriya Gontare­va became the chair of the country’s pow­er­ful cen­tral bank. Ukrain­ian pol­i­tics is rife with cor­rup­tion, espe­cial­ly by Amer­i­can stan­dards, and is dom­i­nat­ed by the country’s pow­er­ful oli­garchs. As chair of the nation­al bank, Gontare­va made a host of changes to the country’s finan­cial system—and some pow­er­ful ene­mies.

    One of the biggest changes she over­saw was a gov­ern­ment takeover of the country’s biggest com­mer­cial bank, Pri­vat­bank. The oli­garch Ihor Kolo­moisky (who The Wall Street Jour­nal once described as “feisty”) co-found­ed it. When Gontare­va presided over the bank’s nation­al­iza­tion, its accounts were miss­ing more than $5 bil­lion, accord­ing to the Finan­cial Times, in large part because the bank lent so much mon­ey to peo­ple with con­nec­tions to Kolo­moisky.

    “Inter­na­tion­al finan­cial insti­tu­tions applaud­ed the state takeover,” wrote FT. “It has been wide­ly seen as the cul­mi­na­tion of Ukraine’s efforts since 2014 to clean up a dys­func­tion­al bank­ing sec­tor dom­i­nat­ed by oli­garch-owned banks.”

    The bank’s founders weren’t pleased.

    After the bank takeover, Gontareva received numerous threats. One protester put a coffin outside her door, according to Reuters. On April 10, 2017, she announced at a press conference that she was resigning from her post. She touted her accomplishments at the event, but cautioned that in her absence the country’s financial sector could face greater troubles.
    ...

    But what about James Woolsey? What’s his excuse for fighting Kolomoisky’s fight? He was tricked. That was his excuse:

    ...
    Woolsey’s spokesper­son, Jonathan Franks, lat­er said he was duped.

    “Ambas­sador Woolsey was delib­er­ate­ly mis­led about the nature of this event when he agreed to attend,” Franks told The Dai­ly Beast. “He expect­ed to be a mem­ber of the audi­ence for a seri­ous dis­cus­sion of issues fac­ing the Ukraine, an area he’s been inter­est­ed in for decades. He didn’t agree to be iden­ti­fied a ‘spe­cial guest’ nor did he agree to speak. Per­haps he was guilty of being old fash­ioned, but it nev­er occurred to him the orga­niz­ers would lure him to an event in the Capi­tol in order to make him an invol­un­tary par­tic­i­pant in a sham.”
    ...

    And what about Rep. Estes, the con­gress­man who made this offi­cial room avail­able for the stunt? Well, he assures us that it won’t hap­pen again. It’s sort of an expla­na­tion:

    ...
    Rep. Ron Estes, a fresh­man from Kansas, booked the room for Mack and Co. His office lat­er told The Dai­ly Beast this won’t hap­pen again.
    ...

    And note the two Ukrain­ian media com­pa­nies that cov­ered this. There was Chan­nelOne, which is owned by 1+1 Media, Ihor Kolo­moisky’s media group. And also UkraNews, which belongs to Dmit­ry Fir­tash:

    ...
    Two Ukrain­ian news out­lets cov­ered the event. One of those out­lets, Chan­nelOne, described it as a hear­ing of the nonex­is­tent “U.S. Con­gres­sion­al Com­mit­tee on Finan­cial Issues.”

    “That was pure pro­pa­gan­da on their part,” Mack said. “Who­ev­er those news out­lets are, it real­ly is fake news. They had to go a long way to try to make it look like a hear­ing.”

    The oth­er Ukrain­ian news out­let that cov­ered the event was UkraNews, which—accord­ing to the Objec­tive Project, which mon­i­tors media own­er­ship in Ukraine—belongs to Dmit­ry Fir­tash.

    That name should ring a bell, if you’ve been fol­low­ing the far-flung dra­ma into for­eign influ­ence on the 2016 elec­tion. Fed­er­al pros­e­cu­tors in Chica­go are seek­ing Firtash’s extra­di­tion to the Unit­ed States to put him on tri­al for rack­e­teer­ing. Man­afort, for­mer Man­afort deputy Rick Gates, and Fir­tash worked on a deal in 2008 to buy New York’s Drake Hotel—for a cool $850 million—but the deal fell through.
    ...

    And recall what we saw in the above Ukraine Week piece about the makeup of the Opposition Bloc and the unproven speculation that Rinat Akhmetov could be behind Osnova: “One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed”. That sure sounds like Firtash represents a faction of the Opposition Bloc that would like to see Poroshenko go (recall that Andreii Artemenko’s peace plan proposal involved the collapse of the Poroshenko government under a wave of scandal revelations, with Artemenko providing the scandal evidence). So it’s notable that we have Firtash’s news channel promoting Taruta’s fake congressional hearing along with Kolomoisky’s ChannelOne.

    And look who has come forward as the event organizer: Anatoly Motkin, a one-time aide to a Georgian oligarch:

    ...
    THE MYSTERY MAN

    So who dreamed up this fake hear­ing? And who paid for it? For months, the backer of this so-called sham was a mys­tery. But when The Dai­ly Beast start­ed ask­ing who paid for the event, a lit­tle-known fig­ure came for­ward to take full respon­si­bil­i­ty: Ana­toly Motkin, a one-time aide to a Geor­gian oli­garch accused of lead­ing a coup attempt.

    A spokesper­son for Motkin, for­mer­ly an asso­ciate to the now-deceased Badri Patarkat­sishvili, told The Dai­ly Beast that he paid for the entire event. Ali­son Patch, a spokesper­son for Motkin, said Motkin paid for the event him­self in his per­son­al capac­i­ty.

    Motkin was an aide to Patarkat­sishvili when he report­ed­ly tried to foment a coup in Geor­gia. After Patarkat­sishvili died, Motkin found him­self embroiled in a legal bat­tle with Patarkatsishvili’s cousin. The cousin alleged in doc­u­ments filed as part of a civ­il suit in New York state court that Motkin was part of a plot to kill Patarkat­sishvili (PDF).

    A spokesper­son for Motkin said he decid­ed to fund the event because Taru­ta, the Ukrain­ian bil­lion­aire, brought the alle­ga­tions about Gontare­va to his atten­tion.

    “Although this report was entire­ly brought by Mr. Taruta’s ini­tia­tive, for many years Mr. Motkin has worked on pro­mot­ing demo­c­ra­t­ic val­ues amongst com­mu­ni­ties close to the for­mer Sovi­et Union,” said Patch. “Know­ing of his inter­est in sup­port­ing anti-cor­rup­tion efforts, Mr. Taru­ta shared the infor­ma­tion about his report. Mr. Motkin found the evi­dence pre­sent­ed com­pelling and decid­ed that if he could help get the issues in front of peo­ple who may make a dif­fer­ence, he would.”
    ...

    And when we look at how Motk­in’s lob­by­ing firm describes itself, it’s “a strate­gic cen­ter for polit­i­cal and diplo­mat­ic solu­tions whose mis­sion is to guide and assist elites of the post-Sovi­et region into clos­er work­ing rela­tion­ships with the USA and West­ern Europe”:

    Strategeast

    About US

    Ana­toly Motkin
    Founder and Pres­i­dent

    Ana­toly Motkin is founder and pres­i­dent of StrategEast, a strate­gic cen­ter for polit­i­cal and diplo­mat­ic solu­tions whose mis­sion is to guide and assist elites of the post-Sovi­et region into clos­er work­ing rela­tion­ships with the USA and West­ern Europe. In this role, Mr. Motkin uses his two decades of involve­ment in the devel­op­ment of media and polit­i­cal projects in the post-Sovi­et region to sup­port var­i­ous pro­grams and com­bat cor­rup­tion in the region.

    Mr. Motkin has devot­ed much of his career to assist­ing the process­es of West­ern­iza­tion in post-Sovi­et states through the launch­ing of a vari­ety of media, polit­i­cal and busi­ness ini­tia­tives aimed to dri­ve social aware­ness and con­nect com­mu­ni­ties. He has suc­cess­ful­ly invest­ed in mul­ti­ple tech­nol­o­gy star­tups, such as one of the most pop­u­lar mes­sag­ing apps and the rideshar­ing ser­vice app Juno, which was recent­ly acquired by on-demand ride ser­vice Gett.

    Mr. Motkin has also cre­at­ed and pro­duced sev­er­al suc­cess­ful Russ­ian-lan­guage media projects in his native Israel, as well as in Latvia, Belarus and Geor­gia.

    Projects estab­lished by Mr. Motkin include a part­ner­ship with Yedio­th Ahronoth pub­lish­ing group, the strongest media house in Israel, to pro­duce an enter­tain­ment mag­a­zine “Tele-Boom”, Time Out – Israel and “7:40” – a prime­time show on Chan­nel 9 – the only Israeli TV broad­cast chan­nel in Russ­ian. He is also the founder of Cur­sor­in­fo, one of the old­est Russ­ian-lan­guage news web­sites and one of the most cit­ed sources for infor­ma­tion on cur­rent events in Israel.

    Mr. Motkin began his career as a polit­i­cal con­sul­tant advis­ing the Israeli Gov­ern­ment on the country’s Russ­ian-speak­ing sec­tor. Dur­ing this time, Mr. Motkin served as the head of the Russ­ian-speak­ing vot­ers cam­paign for the Shinui par­ty, assist­ing to triple the num­ber of votes for the par­ty and assist­ing the Shinui in win­ning 15 seats in the 2003 Knes­set elec­tion.

    ...

    ———-

    “Strategeast: About US: Ana­toly Motkin”; Strategeast.org; 04/07/2018

    “Mr. Motkin has devoted much of his career to assisting the processes of Westernization in post-Soviet states through the launching of a variety of media, political and business initiatives aimed to drive social awareness and connect communities. He has successfully invested in multiple technology startups, such as one of the most popular messaging apps and the ridesharing service app Juno, which was recently acquired by on-demand ride service Gett.”

    And the involvement of someone like Motkin in arranging the theatrics of what amounts to an inter-oligarch dispute over Ihor Kolomoisky’s nationalized bank points to one of the key observations in this situation: it appears to be an inter-oligarch fight between different factions of pro-Western Ukrainian oligarchs. And Sergei Taruta appears to be squarely in the camp of the faction that doesn’t support the separatists but also doesn’t support Poroshenko. As we’ve seen, Taruta has historic ties to Yulia Tymoshenko’s power base, but he also appears to be working with fellow East Ukrainian oligarch Ihor Kolomoisky.

    So, finally, let’s note something important about Taruta and Kolomoisky from this 2015 report by Joshua Cohen, who has done a lot of good reporting on the neo-Nazi threat in Ukraine. It’s a report that would explain some of the animosity between Kolomoisky and the Poroshenko government: it describes the use of privately financed militias that are, in effect, private armies controlled by their Ukrainian oligarch financiers, with Ihor Kolomoisky being one of the biggest militia financiers. This actually led to Kolomoisky’s firing as governor of Dnipropetrovsk in 2015, after Kolomoisky sent one of these private armies to seize control of the headquarters of the state-owned oil company, UkrTransNafta, when Kiev fired the company’s chief executive officer, who happened to be a Kolomoisky ally. So that, in addition to the Privatbank nationalization, is no doubt part of why Kolomoisky might not be super enthusiastic about the Poroshenko government.

    Given the ongoing tensions between the neo-Nazi groups in Ukraine and the Kiev government, and the ongoing threats from groups like the Azov Battalion to ‘march on Kiev’ and take over, it’s noteworthy that one of their biggest financial backers, Ihor Kolomoisky, has so much animosity towards the Poroshenko government. And in our look at Sergei Taruta it’s also pretty noteworthy that, as the article notes, both Kolomoisky and Taruta were partially financing the neo-Nazi Azov Battalion:

    Reuters

    The Great Debate

    In the bat­tle between Ukraine and Russ­ian sep­a­ratists, shady pri­vate armies take the field

    By Josh Cohen
    May 5, 2015

    While the cease­fire agree­ment between the Ukrain­ian gov­ern­ment and sep­a­ratist rebels in the east­ern part of the coun­try seems large­ly to be hold­ing, a recent show­down in Kiev between a Ukrain­ian oli­garch and the gov­ern­ment revealed one of the country’s ongo­ing chal­lenges: pri­vate mil­i­tary bat­tal­ions that do not always oper­ate under the cen­tral government’s con­trol.

    In March, mem­bers of the pri­vate army backed by tycoon Ihor Kolo­moisky showed up at the head­quar­ters of the state-owned oil com­pa­ny, Ukr­TransNaf­ta. The stand­off occurred after Kiev fired the company’s chief exec­u­tive offi­cer — an ally of Kolomoisky’s. Kolo­moisky said that he was try­ing to pro­tect the com­pa­ny from an ille­gal takeover.

    More than 30 of these pri­vate bat­tal­ions, com­prised most­ly of vol­un­teer sol­diers, exist through­out Ukraine. Although all have been brought under the author­i­ty of the mil­i­tary or the Nation­al Guard, the post-Maid­an gov­ern­ment is still strug­gling to con­trol them.

    Ukraine’s mil­i­tary is so weak that after the Russ­ian Fed­er­a­tion seized Crimea, Russ­ian-spon­sored sep­a­ratists were able to take over large swathes of east­ern Ukraine. Pri­vate bat­tal­ions, fund­ed par­tial­ly by Ukrain­ian oli­garchs, stepped into this vac­u­um and played a key role in stop­ping the sep­a­ratists’ advance.

    By sup­ply­ing weapons to the bat­tal­ions and in some cas­es pay­ing recruits, Ukraine’s rich­est men are defend­ing their coun­try — and also pro­tect­ing their own eco­nom­ic inter­ests. Many of the oli­garchs amassed great wealth by using their polit­i­cal con­nec­tions to pur­chase gov­ern­ment assets at knock­down prices, siphon off prof­its from state-owned com­pa­nies and bribe Ukrain­ian offi­cials to win state con­tracts.

    When the Maid­an pro­test­ers over­threw for­mer Pres­i­dent Vik­tor Yanukovich, they demand­ed that the new gov­ern­ment clamp down on the oli­garchs’ abuse of pow­er. Instead, many became even more pow­er­ful: Kiev hand­ed Kolo­moisky and min­ing tycoon Ser­hiy Taru­ta gov­er­nor posts in impor­tant east­ern regions of Ukraine, for exam­ple.

    Many of these para­mil­i­tary groups are accused of abus­ing the cit­i­zens they are charged with pro­tect­ing. Amnesty Inter­na­tion­al has report­ed that the Aidar bat­tal­ion — also par­tial­ly fund­ed by Kolo­moisky — com­mit­ted war crimes, includ­ing ille­gal abduc­tions, unlaw­ful deten­tion, rob­bery, extor­tion and even pos­si­ble exe­cu­tions.

    Oth­er pro-Kiev pri­vate bat­tal­ions have starved civil­ians as a form of war­fare, pre­vent­ing aid con­voys from reach­ing sep­a­ratist-con­trolled areas of east­ern Ukraine, accord­ing to the Amnesty report.

    Some of Ukraine’s pri­vate bat­tal­ions have black­ened the country’s inter­na­tion­al rep­u­ta­tion with their extrem­ist views. The Azov bat­tal­ion, par­tial­ly fund­ed by Taru­ta and Kolo­moisky, uses the Nazi Wolf­san­gel sym­bol as its logo, and many of its mem­bers open­ly espouse neo-Nazi, anti-Semit­ic views. The bat­tal­ion mem­bers have spo­ken about “bring­ing the war to Kiev,” and said that Ukraine needs “a strong dic­ta­tor to come to pow­er who could shed plen­ty of blood but unite the nation in the process.”

    Ukraine’s Pres­i­dent Petro Poroshenko has made clear his inten­tion to rein in Ukraine’s vol­un­teer war­riors. Days after Kolomoisky’s sol­diers appeared at Ukr­TransNaf­ta, he said that he would not tol­er­ate oli­garchs with “pock­et armies” and then fired Kolo­moisky from his perch as the gov­er­nor of Dnipropetro­vsk.

    By bring­ing the pri­vate vol­un­teers under Kiev’s full con­trol, Ukraine will ben­e­fit in a num­ber of ways. The vol­un­teer bat­tal­ions will receive the same train­ing as the mil­i­tary, which should help them to bet­ter inte­grate their tac­tics. They’ll qual­i­fy for reg­u­lar mil­i­tary ben­e­fits and pen­sions. Final­ly, they will be sub­ject to mil­i­tary law, which allows the gov­ern­ment to bet­ter deal with any crim­i­nal or human rights vio­la­tions that they com­mit.

    ...

    ———-

    “In the bat­tle between Ukraine and Russ­ian sep­a­ratists, shady pri­vate armies take the field” by Josh Cohen; Reuters; 05/05/2015

    “Ukraine’s Pres­i­dent Petro Poroshenko has made clear his inten­tion to rein in Ukraine’s vol­un­teer war­riors. Days after Kolomoisky’s sol­diers appeared at Ukr­TransNaf­ta, he said that he would not tol­er­ate oli­garchs with “pock­et armies” and then fired Kolo­moisky from his perch as the gov­er­nor of Dnipropetro­vsk.”

    Yep, it was the use of a private army to seize state assets in a business dispute that got Ihor Kolomoisky fired as governor of Dnipropetrovsk in March of 2015. And that was just one example of how these neo-Nazi militias pose a threat to Ukrainian society. There’s also the obvious risk that they act on their own initiative and try to seize control.

    But the great­est threat these neo-Nazi mili­tias pose clear­ly involves work­ing in coor­di­na­tion with a team of Ukrain­ian oli­garchs. And that’s part of what makes an under­stand­ing of the opaque Ukrain­ian oli­garchic fault lines so impor­tant, because there’s always the chance that these inter-oli­garch dis­putes will result in these pri­vate armies get­ting used for a coup or some­thing along those lines.

    And that’s a big part of why it’s notable that Taruta and Kolomoisky have a history of financing groups like the Azov Battalion:

    ...
    Some of Ukraine’s private battalions have blackened the country’s international reputation with their extremist views. The Azov battalion, partially funded by Taruta and Kolomoisky, uses the Nazi Wolfsangel symbol as its logo, and many of its members openly espouse neo-Nazi, anti-Semitic views. The battalion members have spoken about “bringing the war to Kiev,” and said that Ukraine needs “a strong dictator to come to power who could shed plenty of blood but unite the nation in the process.”
    ...

    And that’s also why it’s so notable if a company like AIQ is offering political services to someone like Taruta: because Taruta appears to be allied with the pro-Western faction of Ukrainian oligarchs who want to replace the current Ukrainian government with their own faction. Much like Andreii Artemenko and his ‘peace plan’ proposal, which also appeared to come from a pro-Western, anti-Poroshenko faction of Ukrainian oligarchs.

    In other words, the story about Sergei Taruta and the bizarre fake congressional hearing appears to be one element of a much larger, very real inter-oligarch dispute involving some very powerful oligarchs. And Cambridge Analytica/AIQ/SCL appears to be working for one of those sides: the side currently out of power and trying to reverse that situation.

    Posted by Pterrafractyl | April 9, 2018, 4:27 pm
  7. So you know that creepy feeling you get when you Google something and ads creepily related to what you just browsed start following you around on the internet? Rejoice! At least, rejoice if you enjoy that creepy feeling. Because you’ll get to experience that creepy feeling watching broadcast TV too, thanks to the next generation of televisions and the ATSC 3.0 broadcast format technology that was just offered to the American public for the first time on KFPH UniMás 35 in Phoenix, Arizona, with more market rollouts planned soon.

    So how is the ATSC 3.0 broadcast format going to allow creepily personalized ads to follow you onto television too? The new format basically combines over-the-air TV with internet streaming. So part of what you’ll see on the screen will be content sent over the internet, which can obviously be personalized. And that’s going to include ads.

    But it won’t just be delivering personalized content. The technology will also allow for tracking of user behavior. And there are no privacy standards at all: that will be up to the individual broadcasters, who will each design their own app to deliver the personalized content. Which obviously means there are going to be lots of broadcasters tracking your television viewing habits, recreating the kind of nightmare privacy situation we’ve already seen with platforms like Facebook and its app developers. The ATSC 3.0 broadcast format is like a new giant platform that everyone in the US will share, but with no privacy standards for the app developers, which might make it even worse than Facebook.
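
    To make concrete what that per-broadcaster tracking could look like, here is a minimal sketch, in TypeScript, of the kind of viewing-beacon logic a broadcaster’s HTML5 app could run. To be clear, this is purely illustrative: the event fields, the 60-second interval, and the example-broadcaster.test endpoint are all assumptions, since the ATSC 3.0 standard mandates none of it and leaves the entire question to each broadcaster (which is exactly the problem):

      // Hypothetical viewing beacon inside a broadcaster's ATSC 3.0 HTML5 app.
      // Every name here (fields, endpoint, interval) is an illustrative assumption;
      // the standard mandates none of it, and no privacy policy either.
      interface ViewingEvent {
        deviceId: string;       // persistent identifier the app assigns to this TV
        channel: string;        // which of the broadcaster's channels is tuned in
        program: string;        // what is currently airing
        secondsWatched: number; // cumulative watch time this session
        timestamp: number;      // when this report was sent
      }

      class ViewerTracker {
        private readonly watchStart = Date.now();

        constructor(
          private readonly deviceId: string,
          private readonly channel: string,
          private readonly program: string,
        ) {
          // Phone home once a minute for as long as the app is on screen.
          setInterval(() => this.report(), 60_000);
        }

        private report(): void {
          const event: ViewingEvent = {
            deviceId: this.deviceId,
            channel: this.channel,
            program: this.program,
            secondsWatched: Math.round((Date.now() - this.watchStart) / 1000),
            timestamp: Date.now(),
          };
          // Sent over the TV's internet connection to the broadcaster's servers,
          // where it can be joined with whatever other data they hold on the viewer.
          fetch("https://example-broadcaster.test/beacon", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(event),
          }).catch(() => { /* fail silently; the viewer never sees any of this */ });
        }
      }

    Nothing in the format requires a broadcaster to disclose that anything like this is running, or to offer an opt-out; that is entirely up to each station.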

    So that’s coming with the next generation of televisions. As one might imagine, given that this new technology threatens to turn the TV into the next consumer privacy nightmare, it was a major focus of several tech demonstrations at the recent National Association of Broadcasters (NAB) conference in Las Vegas. And as one might also imagine, the industry hasn’t had much to say about the privacy aspect of the nightmare it’s about to unleash:

    Tech­Hive

    Next-gen TV to ush­er in view­er track­ing and per­son­al­ized ads along with 4K broad­casts
    More of your life will be lost to adver­tis­ers when TV sta­tions switch to a new dig­i­tal for­mat

    Mar­tyn Williams By Mar­tyn Williams

    Senior Cor­re­spon­dent, Tech­Hive
    Apr 13, 2018 3:00 AM PT

    On Mon­day a lit­tle bit of U.S. tele­vi­sion his­to­ry was made when KFPH UniMás 35 became the first sta­tion to go on air using the new ATSC 3.0 broad­cast for­mat in Phoenix, Ari­zona. Over the com­ing weeks, sev­er­al more broad­cast­ers will fol­low and the first wide-scale test of the new for­mat will be under­way.

    The for­mat attempts to blend over-the-air TV with inter­net stream­ing, can sup­port 4K broad­cast­ing and local­ized emer­gency alerts, and should be more robust for city recep­tion; but it also gives TV sta­tions the chance to start serv­ing per­son­al­ized adver­tis­ing.

    Broad­cast­ers haven’t talked much about the adver­tis­ing aspect, and they’ve said even less about the poten­tial pri­va­cy impli­ca­tions, but it was a major focus of sev­er­al tech demon­stra­tions at the Nation­al Asso­ci­a­tion of Broad­cast­ers (NAB) con­fer­ence in Las Vegas this week.

    At the event, about 300 miles to the north of Phoenix, it was clear that TV sta­tions are keen to use the new for­mat to track more close­ly what view­ers are watch­ing and serve up the same kind of tar­get­ed ads that are com­mon on the Inter­net.

    When view­ers tune into an ATSC 3.0, the TV sta­tion has the abil­i­ty to serve them an appli­ca­tion that will run inside a brows­er on their TV. View­ers won’t see a tra­di­tion­al brows­er win­dow, it will look some­thing like the images above, and because it’s writ­ten in HTML5 it will work across all TVs.

    But the style of the app and the fea­tures it offers will be down to each indi­vid­ual broad­cast­er. Some might offer quick links to news clips and the weath­er and access to a catch-up ser­vice (i.e., video on demand that would let you watch pre­vi­ous­ly aired pro­gram­ming you’d missed the first time), while small­er sta­tions might just pro­vide a TV guide.

    One thing many are like­ly to do is track exact­ly what you’re watch­ing and for how long.

    The ATSC 3.0 for­mat does­n’t define a pri­va­cy pol­i­cy. It’s down to each TV sta­tion so there is no guar­an­tee they will all be uni­form.

    In a demon­stra­tion app on dis­play at NAB, the TV tracked what a view­er watched and for how long. The pay-off for the view­er would be free or exclu­sive access to con­tent. So, for exam­ple, imag­ine a future where a TV sta­tion gives you free access to pre­mi­um con­tent in return for being loy­al to its news­casts.

    But the TV sta­tion would be get­ting more than loy­al­ty. The data would be used to build a pro­file of the view­er and serve them per­son­al­ized ads, deliv­ered over the inter­net to their TV.

    That will be a lucra­tive new ad mod­el for TV broadcasters–and that’s why the TV indus­try is so excit­ed about ATSC 3.0.

    ...

    Can you imag­ine being a mid­dle-of-the-road vot­er in a swing state when the elec­tion rolls around? If you thought polit­i­cal adver­tis­ing was bad now, just wait until the cam­paigns get their teeth into tar­get­ing on this per­son­al­ized lev­el. It might be bet­ter to leave the TV off for six months.

    In the demon­stra­tions I saw this week, apps were capa­ble of track­ing only what a user did inside the app in ques­tion. One sta­tion won’t be able to see what you watch on a rival, but that gets blur­ri­er in mar­kets where a sin­gle own­er oper­ates sev­er­al chan­nels.

    It’s worth remem­ber­ing that ATSC 3.0 does­n’t inevitably mean a loss in pri­va­cy. None of this mat­ters if you don’t hook up a TV to the inter­net, but then you forego addi­tion­al ser­vices like catch-up.

    ———-

    “Next-gen TV to ush­er in view­er track­ing and per­son­al­ized ads along with 4K broad­casts” By Mar­tyn Williams; Tech­Hive; 04/13/2018

    “Broad­cast­ers haven’t talked much about the adver­tis­ing aspect, and they’ve said even less about the poten­tial pri­va­cy impli­ca­tions, but it was a major focus of sev­er­al tech demon­stra­tions at the Nation­al Asso­ci­a­tion of Broad­cast­ers (NAB) con­fer­ence in Las Vegas this week.”

    Mum’s the word on the poten­tial pri­va­cy impli­ca­tions for Amer­i­can tele­vi­sion view­ers. Poten­tial pri­va­cy impli­ca­tions that could be com­ing to a media mar­ket near you soon:

    ...
    On Monday a little bit of U.S. television history was made when KFPH UniMás 35 became the first station to go on air using the new ATSC 3.0 broadcast format in Phoenix, Arizona. Over the coming weeks, several more broadcasters will follow and the first wide-scale test of the new format will be underway.
    ...

    And while the broad­cast­ing indus­try may not want to talk about poten­tial pri­va­cy vio­la­tions, they sure are excit­ed to talk about col­lect­ing view­er data for the pur­pose of serv­ing up per­son­al­ized ads:

    ...
    The for­mat attempts to blend over-the-air TV with inter­net stream­ing, can sup­port 4K broad­cast­ing and local­ized emer­gency alerts, and should be more robust for city recep­tion; but it also gives TV sta­tions the chance to start serv­ing per­son­al­ized adver­tis­ing.

    ...

    At the event, about 300 miles to the north of Phoenix, it was clear that TV sta­tions are keen to use the new for­mat to track more close­ly what view­ers are watch­ing and serve up the same kind of tar­get­ed ads that are com­mon on the Inter­net.
    ...

    And in this new app-based model for personalized broadcast television each broadcaster develops their own app, meaning there are going to be a lot of different apps/broadcasters potentially tracking what you do with those next-generation TVs:

    ...
    When view­ers tune into an ATSC 3.0, the TV sta­tion has the abil­i­ty to serve them an appli­ca­tion that will run inside a brows­er on their TV. View­ers won’t see a tra­di­tion­al brows­er win­dow, it will look some­thing like the images above, and because it’s writ­ten in HTML5 it will work across all TVs.

    But the style of the app and the fea­tures it offers will be down to each indi­vid­ual broad­cast­er. Some might offer quick links to news clips and the weath­er and access to a catch-up ser­vice (i.e., video on demand that would let you watch pre­vi­ous­ly aired pro­gram­ming you’d missed the first time), while small­er sta­tions might just pro­vide a TV guide.

    One thing many are like­ly to do is track exact­ly what you’re watch­ing and for how long.

    The ATSC 3.0 for­mat does­n’t define a pri­va­cy pol­i­cy. It’s down to each TV sta­tion so there is no guar­an­tee they will all be uni­form.

    In a demon­stra­tion app on dis­play at NAB, the TV tracked what a view­er watched and for how long. The pay-off for the view­er would be free or exclu­sive access to con­tent. So, for exam­ple, imag­ine a future where a TV sta­tion gives you free access to pre­mi­um con­tent in return for being loy­al to its news­casts.

    But the TV sta­tion would be get­ting more than loy­al­ty. The data would be used to build a pro­file of the view­er and serve them per­son­al­ized ads, deliv­ered over the inter­net to their TV.

    That will be a lucra­tive new ad mod­el for TV broadcasters–and that’s why the TV indus­try is so excit­ed about ATSC 3.0.
    ...

    Although it’s worth noting that the demonstration apps shown to the author of that TechHive article weren’t capable of tracking what you do in other apps. So each broadcaster would, in theory, only get to see what you do with their app and not other broadcasters’ apps. But, of course, a lot of broadcasters are going to own multiple channels in a market. Or they might just decide to share the data with each other:

    ...
    In the demon­stra­tions I saw this week, apps were capa­ble of track­ing only what a user did inside the app in ques­tion. One sta­tion won’t be able to see what you watch on a rival, but that gets blur­ri­er in mar­kets where a sin­gle own­er oper­ates sev­er­al chan­nels.
    ...

    Also keep in mind that there are still significant potential privacy violations even if apps can’t read the activity of other apps. For instance, if an app is capable of simply detecting when you turn the TV off or on, that gives away information about your day-to-day living schedule. It’s one of the generic privacy violations that come with the “internet-of-things”.
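
    As a toy illustration of how little data that takes, here is a hypothetical sketch (same caveats as the earlier one: nothing here is a real broadcaster API) that turns a bare log of power-on timestamps into a picture of a household’s daily rhythm:

      // Toy sketch: inferring a household's daily rhythm from power-on timestamps alone.
      // powerOnTimes holds epoch milliseconds logged by a hypothetical TV app.
      function viewingHoursHistogram(powerOnTimes: number[]): Map<number, number> {
        const countsByHour = new Map<number, number>();
        for (const t of powerOnTimes) {
          const hour = new Date(t).getHours(); // local hour of day, 0-23
          countsByHour.set(hour, (countsByHour.get(hour) ?? 0) + 1);
        }
        // Peaks in the histogram reveal when someone is reliably home and awake.
        return countsByHour;
      }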

    And then there are the possible privacy violations that come with next-generation televisions with built-in microphones. Imagine how many apps will ask for permission to listen to everything you say in order to better personalize the service. Remember those stories about the CIA hacking into Samsung smart TVs with built-in microphones? That’s probably going to be the standard app behavior if people allow it.

    And, final­ly, the arti­cle notes that this means the night­mare of micro-tar­get­ed per­son­al­ized polit­i­cal ads is com­ing to broad­cast tele­vi­sion:

    ...
    Can you imag­ine being a mid­dle-of-the-road vot­er in a swing state when the elec­tion rolls around? If you thought polit­i­cal adver­tis­ing was bad now, just wait until the cam­paigns get their teeth into tar­get­ing on this per­son­al­ized lev­el. It might be bet­ter to leave the TV off for six months.
    ...

    Yep, just wait for Cambridge Analytica-style personalized psychological profiling of you: a profile that incorporates all the information already gathered about you from the existing sources — Facebook, Google, data broker giants like Acxiom — and combines it with the knowledge obtained through your smart television. Then get ready for the next-generation onslaught of the full spectrum of personalized political ads designed to inflame you and polarize the country. The “A/B testing on steroids” advertising experiments employed by the Trump team on social media are coming to television.

    It’ll be a gold­en age for tele­vi­sion com­mer­cial actors because they’re going to have to shoot all the dif­fer­ent cus­tomized ver­sions of the same com­mer­cials used to micro-tar­get the audi­ence’s psy­cho­log­i­cal pro­files.

    Of course, there is going to be one option for next-generation television owners to avoid the data privacy nightmare of personalized TV: unplug it from the internet and just watch TV the soon-to-be-old-fashioned way:

    ...
    It’s worth remembering that ATSC 3.0 doesn’t inevitably mean a loss in privacy. None of this matters if you don’t hook up a TV to the internet, but then you forego additional services like catch-up.
    ...

    And that points towards both the glaring problem and the only real solution in this situation: the only choice American television consumers are going to have is to either navigate a data privacy nightmare landscape, where each app can have its own privacy standards and there are almost no rules, or unplug their smart TVs from the internet and forgo the internet-based services. That’s because spying on consumers in exchange for services and enhanced profits is the fundamental business model of the internet, and this new data privacy nightmare for smart TVs is merely the logical extension of that model. It’s a fundamental problem with the future of television ads and with the internet-of-things in general: mass commercial spying is simply assumed in America. It’s the model for the internet in America, and there is no alternative. And that model is coming to broadcast television, since it is clearly enshrined in the new ATSC 3.0 broadcast format, a format that lets each app developer make up their own privacy standards. It’s a ‘prepare-for-the-worst, hope-for-the-best’ model that literally prepares the way for the worst-case scenario for consumer privacy and then just hopes it won’t be abused. Like the internet.

    And in the case of this next-generation internet-connected television, there isn’t even the possibility of competition that we find with Facebook, where a rival platform could at least theoretically emerge. There’s only one national broadcast format for smart TVs, and for nations that use the ATSC 3.0 standard, it’s going to let each app maker make up their own privacy rules. Note that the ATSC 3.0 standard doesn’t just apply to the US. It was created by the Advanced Television Systems Committee, whose standards are shared by the US, Canada, Mexico, South Korea, and Honduras. So this is a multinational television standard, and it’s a standard governments approve, so it’s not as if competition will fix it. This is as good as the privacy standards are going to get for North American and South Korean internet-connected TV consumers: it’s up to the app developers, i.e. no privacy standards.

    And there are no standards on the exploitation of all the data collected on us to deliver highly persuasive micro-targeted ad campaigns. Cambridge Analytica-style micro-targeted psychological operations for TV. That’s coming to all elections.

    So just FYI, your next smart tele­vi­sion is going to be very per­sua­sive.

    Posted by Pterrafractyl | April 15, 2018, 7:41 pm
  8. This was more or less inevitable: it sounds like the ’87 mil­lion’ fig­ure — the num­ber of Face­book pro­files that had their data scraped by Cam­bridge Ana­lyt­i­ca — is set to be raised again. Recall that it was ini­tial­ly a 50 mil­lion fig­ure before Cam­bridge Ana­lyt­i­ca whis­tle-blow­er Christo­pher Wylie raised the esti­mate to 87 mil­lion, while hint­ing that the fig­ure could be more.

    Also recall that the 87 million figure, ostensibly derived from the 270,000 people who downloaded the Cambridge Analytica Facebook app and their many friends, corresponds to ~322 friends for each app user on average, which is very close to the average of 338 friends Facebook users had in 2014. In other words, the 87 million figure is roughly what we should expect if you start with 270,000 app users and scrape the profile information of each of their ~338 friends on average. So if that 87 million figure were to rise significantly, it would raise the question of where else Cambridge Analytica got its data.
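
    For reference, the back-of-the-envelope arithmetic behind that sanity check is simply:

      87,000,000 scraped profiles ÷ 270,000 app users ≈ 322 profiles per app user

    which lands just under the 2014 average of 338 friends, i.e. roughly what the friends-permission route alone would be expected to produce.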

    Well, we have a new Cambridge Analytica whistle-blower: Brittany Kaiser, who worked full-time for SCL, Cambridge Analytica’s parent company, as director of business development between February 2015 and January of 2018. And according to Kaiser, the true number is indeed “much greater” than 87 million users. Kaiser also has a possible explanation for how Cambridge Analytica got data on all these additional users: it had more than one app scraping Facebook profile data.

    And the way Kaiser puts it, it sounds like there were quite a few dif­fer­ent apps used by Cam­bridge Ana­lyt­i­ca. Includ­ing one she calls the “sex com­pass quiz”. So, yes, the Trump team was appar­ent­ly explor­ing the sex­u­al predilec­tions of the Amer­i­can elec­torate.

    Additionally, Kaiser makes references to Cambridge Analytica’s “partners”. As she puts it, “I am aware in a general sense of a wide range of surveys which were done by CA or its partners, usually with a Facebook login–for example, the ‘sex compass’ quiz.” So is that reference to Cambridge Analytica’s “partners” a reference to SCL or Aleksandr Kogan’s Global Science Research (GSR) company? Or were there other third-party firms also feeding information into Cambridge Analytica? The Republican National Committee, perhaps?

    Along those lines, Kaiser makes another remarkable claim: that the office culture was like the “Wild West” and that personal data was “being scraped, resold and modeled willy-nilly.” So Kaiser is asserting that Cambridge Analytica resold the data too? It sure sounds like it.

    These are the kinds of ques­tions raised by Brit­tany Kaiser’s new claims. Along with the open ques­tion of exact­ly how many peo­ple Cam­bridge Ana­lyt­i­ca was col­lect­ing this kind of Face­book data on. We know it’s “much greater” than 87 mil­lion, accord­ing to Kaiser, but we have no idea how much greater it is:

    Newsweek

    Who Is Brit­tany Kaiser? Face­book Leak ‘Much Greater’ Than 87M Accounts Warns Ex-Cam­bridge Ana­lyt­i­ca Direc­tor

    By Jason Mur­dock
    On 4/17/18 at 12:30 PM

    Cam­bridge Ana­lyt­i­ca, the Lon­don-based polit­i­cal analy­sis firm that worked on the pres­i­den­tial elec­tion cam­paign of Don­ald Trump, used mul­ti­ple apps to har­vest Face­book data—and the true scope of the abuse is like­ly “much greater” than 87 mil­lion accounts, a for­mer staffer-turned-whistle­blow­er has claimed.

    Brit­tany Kaiser, who worked full-time for the SCL Group, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, as direc­tor of busi­ness devel­op­ment between Feb­ru­ary 2015 and Jan­u­ary this year, told a U.K. gov­ern­ment com­mit­tee on Tues­day the firm had used Face­book data it pre­vi­ous­ly claimed to have delet­ed.

    Face­book has faced an unprece­dent­ed back­lash after user data was alleged­ly abused by a researcher called Alek­san­dr Kogan. Kogan has been accused of using a per­son­al­i­ty test app to obtain data linked to mil­lions of accounts.

    Kaiser, who released a num­ber of new doc­u­ments into the pub­lic domain alleg­ing to show how the com­pa­ny worked on pro­pos­als for the U.K. “Brex­it” cam­paign, wrote in a tes­ti­mo­ny sub­mit­ted to the government’s enquiry into fake news: “I am aware in a gen­er­al sense of a wide range of sur­veys which were done by CA or its part­ners, usu­al­ly with a Face­book login–for exam­ple, the ‘sex com­pass’ quiz.

    “I do not know the specifics of these sur­veys or how the data was acquired or processed. But I believe it is almost cer­tain that the num­ber of Face­book users whose data was com­pro­mised through routes sim­i­lar to that used by Kogan is much greater than 87 mil­lion; and that both Cam­bridge Ana­lyt­i­ca and oth­er uncon­nect­ed com­pa­nies and cam­paigns were involved in these activ­i­ties.”

    Face­book’s founder and CEO, Mark Zucker­berg, has said Kogan broke the website’s poli­cies and stressed a full audit is cur­rent­ly tak­ing place to find out if oth­er apps were using sim­i­lar tac­tics. It is believed that Kogan—who is alleged to have sold the infor­ma­tion to Cam­bridge Analytica—designed the sys­tem so users’ social media activ­i­ty could be used for inten­sive polit­i­cal pro­fil­ing.

    Zucker­berg him­self has warned all users were at risk of data scrap­ing.

    Accord­ing to Kaiser, a U.S. cit­i­zen who, along­side for­mer Cam­bridge Ana­lyt­i­ca staffer Christo­pher Wylie, is now con­sid­ered a whistle­blow­er, her for­mer employ­er used the Face­book data dur­ing sales pitch­es to poten­tial clients.

    She alleged it had links to the Lon­don bureau of far-right news web­site Bre­it­bart and sig­nif­i­cant time dur­ing the hear­ing was ded­i­cat­ed to its sus­pect­ed work with Leave.EU, a cam­paign push­ing for Britain to exit the Euro­pean Union (EU). In a series of updates via Twit­ter, Cam­bridge Ana­lyt­i­ca denied links to Leave.EU.

    In a state­ment to Newsweek, Cam­bridge Ana­lyt­i­ca said:

    “In the past Cam­bridge Ana­lyt­i­ca has designed and run quizzes for inter­nal research projects. This has includ­ed a fair­ly con­ven­tion­al per­son­al­i­ty quiz as well as broad­er quizzes such as one that probed peo­ple’s music pref­er­ences.

    “Data col­lect­ed from these quizzes were always col­lect­ed under a clear state­ment of con­sent. When mem­bers of the pub­lic logged into a quiz with their Face­book details, only their pub­lic pro­file infor­ma­tion was col­lect­ed. The vol­umes of users who took the quizzes num­bered in the tens of thou­sands: any sug­ges­tion that we col­lect­ed data on the scale of [Glob­al Sci­ence Research Lim­it­ed] is incor­rect.

    “We no longer run such quizzes or hold data that was col­lect­ed in this way.”

    Who is Brit­tany Kaiser?

    Accord­ing to her writ­ten tes­ti­mo­ny, Kaiser was born in Hous­ton, Texas, and grew up in Chica­go. She was a part of Barack Obama’s media team dur­ing the pres­i­den­tial cam­paign in 2007 and has also worked for Amnesty Inter­na­tion­al as a lob­by­ist appeal­ing for an end to crimes against human­i­ty. This month, Kaiser start­ed a Face­book cam­paign appeal­ing for trans­paren­cy called #OwnY­our­Da­ta.

    Dur­ing her time at Cam­bridge Ana­lyt­i­ca she worked on sales pro­pos­als and liaised with clients. She worked under senior man­age­ment includ­ing CEO Alexan­der Nix, who this week declined to appear before the same fake news enquiry.

    Kaiser claimed that the office cul­ture was like the “Wild West” and alleged that cit­i­zens’ data was “being scraped, resold and mod­eled willy-nil­ly.”

    “Pri­va­cy has become a myth, and track­ing people’s behav­ior has become an essen­tial part of using social media and the inter­net itself; tools that were meant to free our minds and make us more con­nect­ed, with faster access to infor­ma­tion than ever before,” she wrote in her tes­ti­mo­ny.

    “Instead of con­nect­ing us, these tools have divid­ed us. It’s time to expose their abus­es, so we can have an hon­est con­ver­sa­tion about how we build a bet­ter way for­ward,” Kaiser added.

    ———-

    “Who Is Brit­tany Kaiser? Face­book Leak ‘Much Greater’ Than 87M Accounts Warns Ex-Cam­bridge Ana­lyt­i­ca Direc­tor” by Jason Mur­dock; Newsweek; 04/17/2018

    “Kaiser claimed that the office cul­ture was like the “Wild West” and alleged that cit­i­zens’ data was “being scraped, resold and mod­eled willy-nil­ly.””

    That’s right, Cambridge Analytica wasn’t just scraping Facebook users’ data. They were apparently reselling it too. These are the claims of Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January of this year, during her testimony to a UK government committee:

    ...
    Brit­tany Kaiser, who worked full-time for the SCL Group, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, as direc­tor of busi­ness devel­op­ment between Feb­ru­ary 2015 and Jan­u­ary this year, told a U.K. gov­ern­ment com­mit­tee on Tues­day the firm had used Face­book data it pre­vi­ous­ly claimed to have delet­ed.
    ...

    And accord­ing to Kaiser, the addi­tion­al apps used by Cam­bridge Ana­lyt­i­ca include a “sex com­pass” quiz.

    ...
    Kaiser, who released a num­ber of new doc­u­ments into the pub­lic domain alleg­ing to show how the com­pa­ny worked on pro­pos­als for the U.K. “Brex­it” cam­paign, wrote in a tes­ti­mo­ny sub­mit­ted to the government’s enquiry into fake news: “I am aware in a gen­er­al sense of a wide range of sur­veys which were done by CA or its part­ners, usu­al­ly with a Face­book login–for exam­ple, the ‘sex com­pass’ quiz.
    ...

    And keep in mind that the use of this sex compass quiz probably worked a lot like Aleksandr Kogan’s psychological profiling app: you use the data collected on the people taking the quiz as the “training set” to develop algorithms for inferring Facebook users’ sexual preferences from their Facebook profile data. Then Cambridge Analytica uses those algorithms to make educated guesses about the ‘sexual compass’ of all the other Facebook users it has profile data on. We don’t know for certain that this is what Cambridge Analytica did with the ‘sex compass’ app, but it’s probably what they did, because that is the business they are in.
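
    To make that train-then-infer pattern concrete, here’s a minimal, purely illustrative sketch. Everything in it is hypothetical (Cambridge Analytica’s actual code and data aren’t public): the quiz answers act as training labels, and profile features, such as page likes encoded as 0/1 columns, act as predictors.

    ```python
    # Hypothetical sketch of the train-then-infer pattern described above.
    # All data here is random stand-in data, purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Training set: the small group who actually took the quiz.
    X_quiz_takers = rng.integers(0, 2, size=(1000, 50))  # profile features
    y_quiz_answers = rng.integers(0, 2, size=1000)       # trait from quiz

    model = LogisticRegression(max_iter=1000)
    model.fit(X_quiz_takers, y_quiz_answers)

    # Inference: everyone whose profile was scraped but who never took
    # the quiz. The model guesses their trait from profile features alone.
    X_scraped_friends = rng.integers(0, 2, size=(100_000, 50))
    predicted_trait = model.predict_proba(X_scraped_friends)[:, 1]
    print(predicted_trait[:5])  # per-person probability estimates
    ```

    The asymmetry is the whole point: only a small group ever consents to the quiz, but the resulting model gets applied to the far larger pool of scraped profiles.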

    And it’s the use of all these addi­tion­al apps that Kaiser saw Cam­bridge Ana­lyt­i­ca employ that appears to be the basis for her con­clu­sion that the num­ber of Face­book pro­files scraped by Cam­bridge Ana­lyt­i­ca is “much greater than 87 mil­lion”. And she also asserts, quite rea­son­ably, that Cam­bridge Ana­lyt­i­ca was­n’t the only enti­ty engaged in this kind of activ­i­ty:

    ...
    “I do not know the specifics of these sur­veys or how the data was acquired or processed. But I believe it is almost cer­tain that the num­ber of Face­book users whose data was com­pro­mised through routes sim­i­lar to that used by Kogan is much greater than 87 mil­lion; and that both Cam­bridge Ana­lyt­i­ca and oth­er uncon­nect­ed com­pa­nies and cam­paigns were involved in these activ­i­ties.”
    ...

    So how much high­er is that 87 mil­lion fig­ure going to go? Well, there’s one oth­er high­ly sig­nif­i­cant num­ber we should keep in mind when try­ing to under­stand what kind of data Cam­bridge Ana­lyt­i­ca acquired: The com­pa­ny claimed to have up to 5,000 data points on 220 mil­lion Amer­i­cans. Also keep in mind that 220 mil­lion is greater than the total num­ber of Face­book users in the US (~214 mil­lion in 2018).

    So if we’re wondering how high that 87 million figure might go, the answer might be something along the lines of “almost all of the Facebook users in the US in 2014–2015”. Whatever that number happens to be is probably the answer.
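
    For a rough sense of scale, here’s that arithmetic spelled out (a minimal sketch; the numbers are the ones cited in this post, not independent data):

    ```python
    # Rough sense of scale, using only the figures cited in this post.

    claimed_covered = 220_000_000    # Americans CA claimed to have data on
    us_fb_users_2018 = 214_000_000   # approximate US Facebook users in 2018
    confirmed_scraped = 87_000_000   # revised Facebook scrape estimate

    print(f"Claimed coverage vs. US Facebook base: "
          f"{claimed_covered / us_fb_users_2018:.2f}x")  # ~1.03x
    print(f"Share of the claimed coverage the 87M scrape explains: "
          f"{confirmed_scraped / claimed_covered:.0%}")  # ~40%

    # 220 million exceeds the entire US Facebook user base, and the
    # confirmed scrape accounts for well under half of it, hence the
    # open question of how much higher the figure goes.
    ```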

    Posted by Pterrafractyl | April 17, 2018, 3:43 pm
  9. Here’s a set of arti­cles on one of the fig­ures who co-found­ed both Cam­bridge Ana­lyt­i­ca and its par­ent com­pa­ny SCL Group: Nigel Oakes.

    While Cam­bridge Ana­lyt­i­ca’s for­mer-CEO Alexan­der Nix has received much of the atten­tion direct­ed at Cam­bridge Ana­lyt­i­ca, espe­cial­ly fol­low­ing the shock­ing hid­den-cam­era footage of Nix talk­ing to an under­cov­er reporter he thought was a client, the sto­ry of Cam­bridge Ana­lyt­i­ca ulti­mate­ly leads to Oakes accord­ing to mul­ti­ple sources.

    So who is Nigel Oakes? Well, as the fol­low­ing arti­cle notes, Oakes got his start in the busi­ness of influ­enc­ing peo­ple in the field of “mar­ket­ing aro­mat­ics,” or the use of smells to make con­sumers spend more mon­ey. He also dat­ed Lady Helen Wind­sor when he was younger, which made him a some­what pub­licly known per­son in the UK.

    In 1993, Oakes co-found­ed Strate­gic Com­mu­ni­ca­tion Lab­o­ra­to­ries, the pre­de­ces­sor to SCL Group. In 2005, he co-found­ed SCL Group which, at the time, made head­lines when it billed itself at a glob­al arms fair in Lon­don as the first pri­vate com­pa­ny to pro­vide psy­cho­log­i­cal war­fare ser­vices. Oakes said he was con­fi­dent that psy­ops could short­en mil­i­tary con­flicts. As he put it, “We used to be in the busi­ness of mind bend­ing for polit­i­cal pur­pos­es, but now we are in the busi­ness of sav­ing lives.”

    SCL sold the same psy­cho­log­i­cal war­fare prod­ucts in the US. Ser­vices includ­ed manip­u­la­tion of elec­tions and “per­cep­tion man­age­ment,” or the inten­tion­al spread of fake news. And the US State Depart­ment remains a client and con­firmed that it retains SCL Group on a con­tract to “pro­vide research and ana­lyt­i­cal sup­port in con­nec­tion with our mis­sion to counter ter­ror­ist pro­pa­gan­da and dis­in­for­ma­tion over­seas.”

    So Nigel Oakes has quite an interesting history. A history that he unwittingly encapsulated with a now-notorious quote he gave in 1992:
    “We use the same tech­niques as Aris­to­tle and Hitler...We appeal to peo­ple on an emo­tion­al lev­el to get them to agree on a func­tion­al lev­el.”:

    Politi­co

    Cam­bridge Ana­lyt­i­ca boss went from ‘aro­mat­ics’ to psy­ops to Trump’s cam­paign

    While Alexan­der Nix draws head­lines for his role in the Trump 2016 dig­i­tal oper­a­tion, his col­or­ful busi­ness part­ner Nigel Oakes may be an equal­ly impor­tant fig­ure.

    By Josh Mey­er

    3/22/18, 10:15 AM CET

    Updat­ed 3/23/18, 4:17 AM CET

    WASHINGTON — Long before the polit­i­cal data firm he over­sees, Cam­bridge Ana­lyt­i­ca, helped Don­ald Trump become pres­i­dent, Nigel Oakes tried a very dif­fer­ent form of influ­enc­ing human behav­ior. It was called “mar­ket­ing aro­mat­ics,” or the use of smells to make con­sumers spend more mon­ey.

    In the decades since, the Eton-edu­cat­ed British busi­ness­man has styled him­self as an expert on a wide vari­ety of “mind-bend­ing” tech­niques — from scents to psy­cho­log­i­cal war­fare to cam­paign pol­i­tics.

    But some 25 years after his for­ay into aro­mat­ics, a bad odor has arisen around his use of data to influ­ence vot­er behav­ior. Oakes and his part­ners, who include Cam­bridge Ana­lyt­i­ca CEO Alexan­der Nix, are under intense scruti­ny over their meth­ods in the 2016 cam­paign, includ­ing the alleged improp­er use of Face­book data. Some news reports have also found links to Rus­sia that the com­pa­ny has down­played.

    Oakes and the com­pa­ny he co-found­ed in 2005 along with Nix, SCL Group, have now drawn the inter­est of con­gres­sion­al offi­cials. Three Repub­li­can sen­a­tors wrote Oakes a let­ter this week request­ing infor­ma­tion and a brief­ing relat­ed to Facebook’s sud­den sus­pen­sion last Fri­day of Cam­bridge Ana­lyt­i­ca, which is a close­ly affil­i­at­ed sub­sidiary of SCL.

    The request — from Sen­ate com­merce com­mit­tee mem­bers John Thune (R‑S.D.), Roger Wick­er (R‑Miss.) and Jer­ry Moran, (R‑Kan.) — came after recent alle­ga­tions that Cam­bridge Ana­lyt­i­ca used inap­pro­pri­ate­ly har­vest­ed pri­vate Face­book data on near­ly 50 mil­lion users and exploit­ed the infor­ma­tion to assist Pres­i­dent Don­ald Trump’s 2016 cam­paign.

    But that has trig­gered wider ques­tions about whether Cam­bridge Ana­lyt­i­ca, whose board once includ­ed for­mer Trump polit­i­cal strate­gist Steve Ban­non, could have played some role in the Kremlin’s scheme to manip­u­late U.S. social media in 2016. The com­pa­ny denies that.

    Cap­tured on an under­cov­er video by Britain’s Chan­nel 4 News, Nix boast­ed that the firm “did all the research, all the data, all the ana­lyt­ics, all the tar­get­ing,” for the Trump cam­paign, adding that “our data informed all the strat­e­gy.” (Trump offi­cials call that an exag­ger­a­tion.)

    Adding to the concern is the role of Aleksandr Kogan, a Russian-born researcher at Cambridge University who collected the Facebook data without disclosing that it would be used commercially, and who was also working for a university in St. Petersburg, Russia at the time. Cambridge Analytica also reportedly discussed a business relationship in 2014 and 2015 with the Kremlin-connected Russian oil giant Lukoil, which expressed interest in how data is used to target American voters, according to the New York Times.

    The recent flur­ry of cov­er­age has bare­ly men­tioned the 55-year-old Oakes, a vir­tu­al unknown in the U.S. but more famil­iar in Great Britain, in part because of his rela­tion­ship in the 1990s with a mem­ber of the roy­al Wind­sor fam­i­ly.

    But data ana­lyt­ics experts described Oakes as a hid­den hand run­ning both SCL and Cam­bridge Ana­lyt­i­ca.

    “Any­one right now that is focus­ing on the prob­lems with Cam­bridge Ana­lyt­i­ca should be back­track­ing to the source, which is Nigel Oakes,” said Sam Wool­ley, research direc­tor of the Dig­i­tal Intel­li­gence Lab at the Sil­i­con Val­ley-based Insti­tute for the Future.

    “My research has shown that Cam­bridge Ana­lyt­i­ca is the tip of the ice­berg of Nigel Oakes’ empire of psy­ops and infor­ma­tion ops around the world,” said Wool­ley, whose research aims to help pro­tect democ­ra­cy from the nefar­i­ous use of rapid­ly evolv­ing com­mu­ni­ca­tions tech­nol­o­gy. “As you start to dig in to that, you find out a lot of very con­cern­ing things.”

    Woolley said he attended a Cambridge Analytica “meet-up” in April 2016 during the presidential primary in New York. At the time, the company was working for another candidate, Senator Ted Cruz (R‑Texas), and gave a wide-ranging overview of their activities, Woolley said.

    It was clear from the ses­sion that the two com­pa­nies are com­plete­ly inter­twined, Wool­ley said. He recalled that Cam­bridge Ana­lyt­i­ca lead­ers “con­flat­ed all of their work with SCL’s work,” includ­ing in sev­er­al over­seas elec­tions. Based on his ongo­ing research, he described the two firms as sell­ing “polit­i­cal mar­ket­ing to the high­est bid­der, whether you’re in gov­ern­ment, the mil­i­tary or pol­i­tics, even author­i­tar­i­an” regimes.

    Oakes and SCL Group did not return calls seek­ing com­ment through a spokesper­son.

    SCL Group — like its pre­de­ces­sor, Strate­gic Com­mu­ni­ca­tion Lab­o­ra­to­ries, which Oakes co-found­ed in 1993 — is no stranger to con­tro­ver­sies relat­ed to for­eign elec­tions, includ­ing in con­nec­tion with alleged dirty tricks it has alleged­ly employed on behalf of polit­i­cal clients from Europe to Africa and Asia.

    The com­pa­ny also made head­lines in 2005 when it billed itself at a glob­al arms fair in Lon­don as the first pri­vate com­pa­ny to pro­vide psy­cho­log­i­cal war­fare ser­vices, or “psy­ops,” to the British mil­i­tary.

    At the time, Oakes, as chief exec­u­tive, said he was con­fi­dent that psy­ops could short­en mil­i­tary con­flicts, and that gov­ern­ments would buy such a ser­vice, which SCL had pro­vid­ed com­mer­cial­ly.

    “We used to be in the busi­ness of mind bend­ing for polit­i­cal pur­pos­es,” he told a reporter, “but now we are in the busi­ness of sav­ing lives.”

    Those who know Oakes, or know of him, are some­what skep­ti­cal.

    One pri­vate inves­ti­ga­tor said the com­pa­ny is known to have done exten­sive work for the U.S. mil­i­tary and oth­er gov­ern­ment agen­cies against tar­gets includ­ing Iran. SCL got its start in the U.S. by sell­ing the same psy­cho­log­i­cal war­fare prod­uct as it did to the British, includ­ing manip­u­la­tion of elec­tions and “per­cep­tion man­age­ment,” or the inten­tion­al spread of fake news.

    The State Depart­ment con­firmed to Defense One this week that it retains SCL Group on a con­tract to “pro­vide research and ana­lyt­i­cal sup­port in con­nec­tion with our mis­sion to counter ter­ror­ist pro­pa­gan­da and dis­in­for­ma­tion over­seas.”

    Com­pa­ny lit­er­a­ture describes some of SCL’s ser­vices, besides “psy­cho­log­i­cal war­fare,” as “influ­ence oper­a­tions” and “pub­lic diplo­ma­cy.”

    Absent from such descrip­tions is some of the more bom­bas­tic rhetoric of Oakes’ youth.

    “We use the same tech­niques as Aris­to­tle and Hitler,” he told an inter­view­er in 1992. “We appeal to peo­ple on an emo­tion­al lev­el to get them to agree on a func­tion­al lev­el.”

    On its web­site, SCL Group does not high­light its con­nec­tions to Cam­bridge Ana­lyt­i­ca.

    “Our vision is to be the pre­mier provider of data ana­lyt­ics and strat­e­gy for behav­ior change,” the web­site says.

    “Our mis­sion is to cre­ate behav­ior change through research, data, ana­lyt­ics, and strat­e­gy for both domes­tic and inter­na­tion­al gov­ern­ment clients.”

    But Oakes and his com­pa­ny have a his­to­ry of secre­cy, mak­ing the hid­den-cam­era footage of Nix all the more shock­ing. In the footage aired by Chan­nel 4, Nix appears to tell a jour­nal­ist pos­ing as a poten­tial client that the com­pa­ny could, for instance, send Ukrain­ian sex work­ers to an opponent’s house to sab­o­tage him.

    SCL Group said it has sus­pend­ed Nix while it inves­ti­gates, and sev­er­al U.S. law­mak­ers cit­ed the reports in say­ing that they want to call him back before com­mit­tees inves­ti­gat­ing Russ­ian med­dling to answer more ques­tions.

    One British jour­nal­ist who has inves­ti­gat­ed the two com­pa­nies and their lead­ers also sug­gest­ed that the real trail of ques­tions leads to Oakes.

    “Alexan­der Nix has been sus­pend­ed from a shell com­pa­ny that has no employ­ees and no assets,” said Car­ole Cad­wal­ladr of the Observ­er, who authored last weekend’s expose, and oth­ers. “If you think this ends here, think again.”

    The let­ter from the three sen­a­tors — which they also sent to Face­book CEO Mark Zucker­berg — asks Oakes whether he acknowl­edges the con­duct described in Facebook’s state­ment announc­ing the sus­pen­sion of Nix’s account, and those of both SCL Group and Cam­bridge Ana­lyt­i­ca.

    It also asks him to pro­vide infor­ma­tion about whether he was aware of oth­er activ­i­ty by Cam­bridge that Face­book said led to the sus­pen­sion, includ­ing how it accessed the data in ques­tion and whether it false­ly cer­ti­fied that it had destroyed it at Facebook’s request.

    “Con­sumers rely on app devel­op­ers to be trans­par­ent and truth­ful in their terms of ser­vice so con­sumers can make informed deci­sions about whether to con­sent to the shar­ing and use of their data,” the sen­a­tors wrote. “There­fore, the alle­ga­tion that SCL was not forth­com­ing with Face­book or trans­par­ent with con­sumers is trou­bling.”

    The sen­a­tors remind­ed Oakes that their com­mit­tee has juris­dic­tion over the inter­net and com­mu­ni­ca­tions tech­nolo­gies gen­er­al­ly, as well as over con­sumer pro­tec­tion and data pri­va­cy issues.

    Meanwhile, Democrats who have been investigating Russian election interference and suspected collusion between the Kremlin and the Trump campaign are expressing heightened interest in Oakes’s company, though for now their focus is primarily on Nix.

    Representative Adam Schiff, the top Democrat on the House intelligence committee, said on MSNBC Wednesday that he was particularly concerned about Nix’s comments, captured by Channel 4, about how he got off easy during his interview with Congress.

    “The Repub­li­cans asked three ques­tions. Five min­utes, done,” Nix said. And while the Democ­rats asked two hours of ques­tions, Nix said he didn’t have to answer them because “it’s vol­un­tary.”

    ...

    ———-

    “Cam­bridge Ana­lyt­i­ca boss went from ‘aro­mat­ics’ to psy­ops to Trump’s cam­paign” by Josh Mey­er; Politi­co; 03/22/2018

    ““Any­one right now that is focus­ing on the prob­lems with Cam­bridge Ana­lyt­i­ca should be back­track­ing to the source, which is Nigel Oakes,” said Sam Wool­ley, research direc­tor of the Dig­i­tal Intel­li­gence Lab at the Sil­i­con Val­ley-based Insti­tute for the Future.”

    Nigel Oakes is seen as “the source” of Cam­bridge Ana­lyt­i­ca. And Cam­bridge Ana­lyt­i­ca is seen as mere­ly “the tip of the ice­berg of Nigel Oakes’ empire of psy­ops and infor­ma­tion ops around the world”:

    ...
    “My research has shown that Cambridge Analytica is the tip of the iceberg of Nigel Oakes’ empire of psyops and information ops around the world,” said Woolley, whose research aims to help protect democracy from the nefarious use of rapidly evolving communications technology. “As you start to dig in to that, you find out a lot of very concerning things.”
    ...

    And that’s how British journalist Carole Cadwalladr, who has done extensive reporting on Cambridge Analytica over the last year, also sees it: the questions about Cambridge Analytica lead to Oakes:

    ...
    One British jour­nal­ist who has inves­ti­gat­ed the two com­pa­nies and their lead­ers also sug­gest­ed that the real trail of ques­tions leads to Oakes.

    “Alexan­der Nix has been sus­pend­ed from a shell com­pa­ny that has no employ­ees and no assets,” said Car­ole Cad­wal­ladr of the Observ­er, who authored last weekend’s expose, and oth­ers. “If you think this ends here, think again.”
    ...

    And it’s no surprise that the Cambridge Analytica questions lead to Oakes. He helped co-found the company, along with co-founding SCL Group in 2005 and Strategic Communication Laboratories in 1993:

    ...
    Oakes and the com­pa­ny he co-found­ed in 2005 along with Nix, SCL Group, have now drawn the inter­est of con­gres­sion­al offi­cials. Three Repub­li­can sen­a­tors wrote Oakes a let­ter this week request­ing infor­ma­tion and a brief­ing relat­ed to Facebook’s sud­den sus­pen­sion last Fri­day of Cam­bridge Ana­lyt­i­ca, which is a close­ly affil­i­at­ed sub­sidiary of SCL.

    ...

    SCL Group — like its pre­de­ces­sor, Strate­gic Com­mu­ni­ca­tion Lab­o­ra­to­ries, which Oakes co-found­ed in 1993 — is no stranger to con­tro­ver­sies relat­ed to for­eign elec­tions, includ­ing in con­nec­tion with alleged dirty tricks it has alleged­ly employed on behalf of polit­i­cal clients from Europe to Africa and Asia.
    ...

    And Oakes has been pitching SCL Group as a private psychological warfare service provider for years. So if we’re exploring how Cambridge Analytica got into the business of mass manipulation, the fact that SCL had been providing those services to the US and UK governments for years is a pretty big factor in that story. When Cambridge Analytica was formed in 2013, its team was already quite experienced in these kinds of matters:

    ...
    The com­pa­ny also made head­lines in 2005 when it billed itself at a glob­al arms fair in Lon­don as the first pri­vate com­pa­ny to pro­vide psy­cho­log­i­cal war­fare ser­vices, or “psy­ops,” to the British mil­i­tary.

    At the time, Oakes, as chief exec­u­tive, said he was con­fi­dent that psy­ops could short­en mil­i­tary con­flicts, and that gov­ern­ments would buy such a ser­vice, which SCL had pro­vid­ed com­mer­cial­ly.

    “We used to be in the busi­ness of mind bend­ing for polit­i­cal pur­pos­es,” he told a reporter, “but now we are in the busi­ness of sav­ing lives.”

    Those who know Oakes, or know of him, are some­what skep­ti­cal.

    One pri­vate inves­ti­ga­tor said the com­pa­ny is known to have done exten­sive work for the U.S. mil­i­tary and oth­er gov­ern­ment agen­cies against tar­gets includ­ing Iran. SCL got its start in the U.S. by sell­ing the same psy­cho­log­i­cal war­fare prod­uct as it did to the British, includ­ing manip­u­la­tion of elec­tions and “per­cep­tion man­age­ment,” or the inten­tion­al spread of fake news.

    The State Depart­ment con­firmed to Defense One this week that it retains SCL Group on a con­tract to “pro­vide research and ana­lyt­i­cal sup­port in con­nec­tion with our mis­sion to counter ter­ror­ist pro­pa­gan­da and dis­in­for­ma­tion over­seas.”

    Com­pa­ny lit­er­a­ture describes some of SCL’s ser­vices, besides “psy­cho­log­i­cal war­fare,” as “influ­ence oper­a­tions” and “pub­lic diplo­ma­cy.”
    ...

    And as the hid­den-cam­era footage of Alexan­der Nix showed the world, those mass manip­u­la­tion ser­vices include dirty tricks. Like send­ing Ukrain­ian sex work­ers to an opponent’s house to sab­o­tage him. It’s an indi­ca­tor of the amoral char­ac­ter of the peo­ple behind Cam­bridge Ana­lyt­i­ca and its SCL Group par­ent:

    ...
    But Oakes and his com­pa­ny have a his­to­ry of secre­cy, mak­ing the hid­den-cam­era footage of Nix all the more shock­ing. In the footage aired by Chan­nel 4, Nix appears to tell a jour­nal­ist pos­ing as a poten­tial client that the com­pa­ny could, for instance, send Ukrain­ian sex work­ers to an opponent’s house to sab­o­tage him.
    ...

    And that amoral­i­ty is per­fect­ly encap­su­lat­ed in a now-noto­ri­ous 1992 quote from Oakes, where he favor­ably com­pares his work in psy­cho­log­i­cal manip­u­la­tion with the tech­niques employed by Hitler:

    ...
    Absent from such descrip­tions is some of the more bom­bas­tic rhetoric of Oakes’ youth.

    “We use the same tech­niques as Aris­to­tle and Hitler,” he told an inter­view­er in 1992. “We appeal to peo­ple on an emo­tion­al lev­el to get them to agree on a func­tion­al lev­el.”
    ...

    And that 1992 quote wasn’t the only ‘we use the same techniques as Hitler!’ quote Oakes has made over the years. As the following article notes, Oakes made the same admission last year in reference to the techniques employed by Cambridge Analytica for the Trump campaign:

    The Huff­in­g­ton Post

    Cam­bridge Ana­lyt­i­ca Founder Once Com­pared Trump To Hitler
    Trump vil­i­fied Mus­lims the same way Hitler vil­i­fied Jews, Nigel Oakes said.

    By Willa Frej
    04/17/2018 12:32 pm ET Updat­ed

    Nigel Oakes, who runs the group that found­ed data min­ing firm Cam­bridge Ana­lyt­i­ca, admit­ted in an inter­view last year that Pres­i­dent Don­ald Trump’s con­tro­ver­sial pro­pa­gan­da tac­tics mir­ror those of Adolf Hitler.

    Both Hitler and Trump have suc­cess­ful­ly attacked anoth­er group, turn­ing it into an “arti­fi­cial ene­my,” in order to fos­ter greater sup­port among loy­al­ists, Oakes, the CEO of SCL Group, Cam­bridge Analytica’s par­ent com­pa­ny, said last Novem­ber.

    He made the com­ments as part of a series of inter­views that Emma Bri­ant, a Uni­ver­si­ty of Essex lec­tur­er, con­duct­ed with peo­ple involved in Britain’s cam­paign to leave the Euro­pean Union about pro­pa­gan­da used dur­ing the Brex­it ref­er­en­dum. Britain’s Par­lia­ment released the inter­view tran­scripts Mon­day.

    “Hitler, got to be very care­ful about say­ing so, must nev­er prob­a­bly say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a prob­lem with the Jews at all, but the peo­ple didn’t like the Jews,” Oakes said. “So if the peo­ple… He could just use them to say… So he just lever­age an arti­fi­cial ene­my. Well that’s exact­ly what Trump did. He lever­aged a Mus­lim- I mean, you know, it’s- It was a real ene­my. ISIS is a real, but how big a threat is ISIS real­ly to Amer­i­ca? Real­ly, I mean, we are still talk­ing about 9/11, well 9/11 is a long time ago.”

    Anoth­er one of the inter­vie­wees, for­mer com­mu­ni­ca­tions direc­tor for Leave.EU Andy Wig­more, com­pared the campaign’s own strat­e­gy to Hitler’s “very clever” pro­pa­gan­da machine.

    “In its pure mar­ket­ing sense, you can see the log­ic of what they were say­ing, why they were say­ing it, and how they pre­sent­ed things, and the imagery,” he said of the Nazis. “And look­ing at that now, in hind­sight, hav­ing been on the sharp end of this cam­paign, you think: crikey, this is not new, and it’s just … using the tools that you have at the time.”

    Cam­bridge Ana­lyt­i­ca said Oakes nev­er worked for the com­pa­ny or the Trump cam­paign and said he was instead “speak­ing in a per­son­al capac­i­ty about the his­tor­i­cal use of pro­pa­gan­da to an aca­d­e­m­ic he knew well from her work in the defence sphere,” accord­ing to a spokesper­son.

    ...

    ———-

    “Cam­bridge Ana­lyt­i­ca Founder Once Com­pared Trump To Hitler” by Willa Frej; The Huff­in­g­ton Post; 04/17/2018

    ““Hitler, got to be very care­ful about say­ing so, must nev­er prob­a­bly say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a prob­lem with the Jews at all, but the peo­ple didn’t like the Jews,” Oakes said. “So if the peo­ple… He could just use them to say… So he just lever­age an arti­fi­cial ene­my. Well that’s exact­ly what Trump did. He lever­aged a Mus­lim- I mean, you know, it’s- It was a real ene­my. ISIS is a real, but how big a threat is ISIS real­ly to Amer­i­ca? Real­ly, I mean, we are still talk­ing about 9/11, well 9/11 is a long time ago.””

    And that’s Nigel Oakes in his own words: he saw Trump’s sys­tem­at­ic fear mon­ger­ing about vir­tu­al­ly all Mus­lims as more or less the same cyn­i­cal tech­nique employed by Hitler.

    And when you look at the full quote pro­vid­ed to the UK par­lia­ment it sounds even worse because he’s fram­ing the use of these demo­niza­tion tech­niques as sim­ply a way to fire up “your group” (your tar­get base of sup­port­ers) by demo­niz­ing a dif­fer­ent group that you don’t expect to vote for your can­di­date:

    Clip 8 — Nigel Oakes: Nazi meth­ods of pro­pa­gan­da

    Emma Bri­ant: It didn’t mat­ter with the rest of what he’s [Don­ald Trump] say­ing, it didn’t mat­ter if he is alien­at­ing all of the lib­er­al women, actu­al­ly, and I think he was nev­er going to get them any­way.

    Nigel Oakes: That’s right

    Emma Bri­ant: You’ve got to think about what would res­onate with as many as pos­si­ble.

    Nigel Oakes: And often, as you right­ly say, it’s the things that res­onate, some­times to attack the oth­er group and know that you are going to lose them is going to rein­force and res­onate your group. Which is why, you know, Hitler, got to be very care­ful about say­ing so, must nev­er prob­a­bly say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a prob­lem with the Jews at all, but the peo­ple didn’t like the Jews. So if the peo­ple... He could just use them to say... So he just lever­age an arti­fi­cial ene­my. Well that’s exact­ly what Trump did. He lever­aged a Mus­lim — I mean, you know, it’s — It was a real ene­my. ISIS is a real, but how big a threat is ISIS real­ly to Amer­i­ca? Real­ly, I mean, we are still talk­ing about 9/11, well 9/11 is a long time ago.

    This inter­view was con­duct­ed by Dr Emma L Bri­ant, Uni­ver­si­ty of Essex, both for the upcom­ing book “What’s wrong with the Democ­rats? Media Bias, Inequal­i­ty and the rise of Don­ald Trump”, and for oth­er upcom­ing pub­li­ca­tions.

    “And often, as you right­ly say, it’s the things that res­onate, some­times to attack the oth­er group and know that you are going to lose them is going to rein­force and res­onate your group.”

    Attacking “the other group and know that you are going to lose” in order to “reinforce and resonate your group.” That’s how Nigel Oakes matter-of-factly framed the use of these mass manipulation techniques: generate an emotional appeal to a target political demographic, an emotional appeal that happens to be based on demonizing a group of people your target demographic already generally dislikes. In other words, find the existing areas of hatred and inflame them.

    And strategically inflaming those passions is a service Nigel Oakes has been openly offering clients for decades. That’s all part of why Nigel Oakes is described as the real force behind Cambridge Analytica.

    At the same time, let’s not forget the previous reports about Cambridge Analytica whistle-blower Christopher Wylie and Wylie’s characterization of Steve Bannon as Alexander Nix’s real boss at Cambridge Analytica, despite Bannon technically serving as the company’s vice president and secretary. So while Nigel Oakes is clearly a critically important figure behind Cambridge Analytica, who was really in charge of the Cambridge Analytica operation for the Trump team remains an open question. Although it was likely more of a Hitler-inspired group effort.

    Posted by Pterrafractyl | April 18, 2018, 3:28 pm
  10. Here’s an ominous article about Palantir (as if there aren’t ominous articles about Palantir) that highlights both the challenges the company faces in selling its surveillance services and its plans for overcoming those challenges: it turns out the services Palantir offers its clients are pretty labor-intensive, potentially requiring a large number of on-site Palantir employees. One notable example is JP Morgan, which hired Palantir to monitor the bank’s employees for the purpose of detecting miscreant behaviors. That service involved as many as 120 “forward-deployed engineers” from Palantir working at JP Morgan, each one costing the bank as much as $3,000 a day. So from a price standpoint that’s obviously going to be an issue, even for a financial giant like JP Morgan. Although at JP Morgan it sounds like the bigger issue was that the executives learned their own emails and activity were potentially caught up in Palantir’s data dragnet too. But the overall cost of these “forward-deployed engineer” Palantir contractors is reportedly an issue for a number of other corporate clients that recently dropped Palantir, including Hershey Co., Coca-Cola, Nasdaq, American Express, and Home Depot.
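
    For a sense of what that pricing means, here’s the cost arithmetic (a quick sketch using only the figures in the article; the working-days assumption is mine, purely for illustration):

    ```python
    # Quick cost arithmetic from the figures above. The 250 working-days
    # figure is an assumption for illustration, not from the article.

    engineers = 120      # "forward-deployed engineers" at JP Morgan
    day_rate = 3_000     # dollars per engineer per day (upper figure)
    working_days = 250   # assumed working days per year

    daily_cost = engineers * day_rate
    annual_cost = daily_cost * working_days
    print(f"Up to ${daily_cost:,} per day, ~${annual_cost:,} per year")
    # Up to $360,000 per day, ~$90,000,000 per year: easy to see why
    # even large corporate clients balked at the labor-intensive model.
    ```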

    So how is Palantir planning to address the labor-intensive nature of its services and attract more clients? Automation, of course. Automation is already part of the new product Palantir is offering clients, called Foundry, which is in use by Airbus SE and Merck KGaA. In other words, the automation of Palantir’s corporate surveillance services is almost here, and that means a lot more corporate clients are probably going to be hiring Palantir. So, yeah, that’s rather ominous.

    The arti­cle also includes a few more Palan­tir fun facts. For instance, while there are 2,000 engi­neers at the com­pa­ny, the Pri­va­cy and Civ­il Lib­er­ties Team only con­sists of 10 peo­ple.

    A second fun fact is about Peter Thiel. Apparently he’s planning on moving to Los Angeles and starting up a right-wing media empire. Oh goodie.

    The article also contains a couple of fun facts relating to the questions about Palantir and Cambridge Analytica raised by the revelation that a Palantir employee was working with Cambridge Analytica to develop its psychological profiling algorithms: First, Palantir claims that the company turned down offers to work with Cambridge Analytica and that its employee, Alfredas Chmieliauskas, was working purely on his own. As the following article notes, that’s the same explanation Palantir gave when it was caught planning an orchestrated disinformation campaign against Wikileaks and Anonymous. So the “lone employee” explanation appears to be a Palantir favorite.

    Additionally, the article notes that Palantir doesn’t advertise its services and instead relies purely on word-of-mouth. And that’s interesting in relation to the mystery of how it was that Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, just happened to stop by Cambridge Analytica’s London headquarters in mid-2013 to push the idea that the company should start working with Palantir. Now, it’s important to recall that part of what made Sophie Schmidt’s seemingly random visit in mid-2013 so curious is that Cambridge Analytica and Palantir had already started talking in early 2013. Still, it’s noteworthy that Palantir relies only on word-of-mouth referrals and Sophie Schmidt appeared to provide exactly that kind of referral, seemingly randomly and spontaneously.

    So that’s some of the new information we learn about Palantir in the following article. New information that’s all ominous, of course:

    Bloomberg Busi­ness­week

    Peter Thiel’s data-min­ing com­pa­ny is using War on Ter­ror tools to track Amer­i­can cit­i­zens. The scary thing? Palan­tir is des­per­ate for new cus­tomers.

    By Peter Wald­man, Lizette Chap­man, and Jor­dan Robert­son
    April 19, 2018

    High above the Hud­son Riv­er in down­town Jer­sey City, a for­mer U.S. Secret Ser­vice agent named Peter Cav­ic­chia III ran spe­cial ops for JPMor­gan Chase & Co. His insid­er threat group—most large finan­cial insti­tu­tions have one—used com­put­er algo­rithms to mon­i­tor the bank’s employ­ees, osten­si­bly to pro­tect against per­fid­i­ous traders and oth­er mis­cre­ants.

    Aid­ed by as many as 120 “for­ward-deployed engi­neers” from the data min­ing com­pa­ny Palan­tir Tech­nolo­gies Inc., which JPMor­gan engaged in 2009, Cavicchia’s group vac­u­umed up emails and brows­er his­to­ries, GPS loca­tions from com­pa­ny-issued smart­phones, print­er and down­load activ­i­ty, and tran­scripts of dig­i­tal­ly record­ed phone con­ver­sa­tions. Palantir’s soft­ware aggre­gat­ed, searched, sort­ed, and ana­lyzed these records, sur­fac­ing key­words and pat­terns of behav­ior that Cavicchia’s team had flagged for poten­tial abuse of cor­po­rate assets. Palantir’s algo­rithm, for exam­ple, alert­ed the insid­er threat team when an employ­ee start­ed badg­ing into work lat­er than usu­al, a sign of poten­tial dis­gruntle­ment. That would trig­ger fur­ther scruti­ny and pos­si­bly phys­i­cal sur­veil­lance after hours by bank secu­ri­ty per­son­nel.

    Over time, how­ev­er, Cav­ic­chia him­self went rogue. For­mer JPMor­gan col­leagues describe the envi­ron­ment as Wall Street meets Apoc­a­lypse Now, with Cav­ic­chia as Colonel Kurtz, ensconced upriv­er in his office suite eight floors above the rest of the bank’s secu­ri­ty team. Peo­ple in the depart­ment were shocked that no one from the bank or Palan­tir set any real lim­its. They dark­ly joked that Cav­ic­chia was lis­ten­ing to their calls, read­ing their emails, watch­ing them come and go. Some plant­ed fake infor­ma­tion in their com­mu­ni­ca­tions to see if Cav­ic­chia would men­tion it at meet­ings, which he did.

    It all end­ed when the bank’s senior exec­u­tives learned that they, too, were being watched, and what began as a promis­ing mar­riage of mas­ters of big data and glob­al finance descend­ed into a spy­ing scan­dal. The mis­ad­ven­ture, which has nev­er been report­ed, also marked an omi­nous turn for Palan­tir, one of the most rich­ly val­ued star­tups in Sil­i­con Val­ley. An intel­li­gence plat­form designed for the glob­al War on Ter­ror was weaponized against ordi­nary Amer­i­cans at home.

    Found­ed in 2004 by Peter Thiel and some fel­low Pay­Pal alum­ni, Palan­tir cut its teeth work­ing for the Pen­ta­gon and the CIA in Afghanistan and Iraq. The company’s engi­neers and prod­ucts don’t do any spy­ing them­selves; they’re more like a spy’s brain, col­lect­ing and ana­lyz­ing infor­ma­tion that’s fed in from the hands, eyes, nose, and ears. The soft­ware combs through dis­parate data sources—financial doc­u­ments, air­line reser­va­tions, cell­phone records, social media postings—and search­es for con­nec­tions that human ana­lysts might miss. It then presents the link­ages in col­or­ful, easy-to-inter­pret graph­ics that look like spi­der webs. U.S. spies and spe­cial forces loved it imme­di­ate­ly; they deployed Palan­tir to syn­the­size and sort the bliz­zard of bat­tle­field intel­li­gence. It helped plan­ners avoid road­side bombs, track insur­gents for assas­si­na­tion, even hunt down Osama bin Laden. The mil­i­tary suc­cess led to fed­er­al con­tracts on the civil­ian side. The U.S. Depart­ment of Health and Human Ser­vices uses Palan­tir to detect Medicare fraud. The FBI uses it in crim­i­nal probes. The Depart­ment of Home­land Secu­ri­ty deploys it to screen air trav­el­ers and keep tabs on immi­grants.

    Police and sheriff’s depart­ments in New York, New Orleans, Chica­go, and Los Ange­les have also used it, fre­quent­ly ensnar­ing in the dig­i­tal drag­net peo­ple who aren’t sus­pect­ed of com­mit­ting any crime. Peo­ple and objects pop up on the Palan­tir screen inside box­es con­nect­ed to oth­er box­es by radi­at­ing lines labeled with the rela­tion­ship: “Col­league of,” “Lives with,” “Oper­a­tor of [cell num­ber],” “Own­er of [vehi­cle],” “Sib­ling of,” even “Lover of.” If the author­i­ties have a pic­ture, the rest is easy. Tap­ping data­bas­es of driver’s license and ID pho­tos, law enforce­ment agen­cies can now iden­ti­fy more than half the pop­u­la­tion of U.S. adults.

    JPMor­gan was effec­tive­ly Palantir’s R&D lab and test bed for a for­ay into the finan­cial sec­tor, via a prod­uct called Metrop­o­lis. The two com­pa­nies made an odd cou­ple. Palantir’s soft­ware engi­neers showed up at the bank on skate­boards. Neck­ties and hair­cuts were too much to ask, but JPMor­gan drew the line at T‑shirts. The pro­gram­mers had to agree to wear shirts with col­lars, tucked in when pos­si­ble.

    As Metrop­o­lis was installed and refined, JPMor­gan made an equi­ty invest­ment in Palan­tir and induct­ed the com­pa­ny into its Hall of Inno­va­tion, while its exec­u­tives raved about Palan­tir in the press. The soft­ware turned “data land­fills into gold mines,” Guy Chiarel­lo, who was then JPMorgan’s chief infor­ma­tion offi­cer, told Bloomberg Busi­ness­week in 2011.

    Cav­ic­chia was in charge of foren­sic inves­ti­ga­tions at the bank. Through Palan­tir, he gained admin­is­tra­tive access to a full range of cor­po­rate secu­ri­ty data­bas­es that had pre­vi­ous­ly required sep­a­rate autho­riza­tions and a spe­cif­ic busi­ness jus­ti­fi­ca­tion to use. He had unprece­dent­ed access to every­thing, all at once, all the time, on one ana­lyt­ic plat­form. He was a one-man Nation­al Secu­ri­ty Agency, sur­round­ed by the Palan­tir engi­neers, each one cost­ing the bank as much as $3,000 a day.

    Senior inves­ti­ga­tors stum­bled onto the full extent of the spy­ing by acci­dent. In May 2013 the bank’s lead­er­ship ordered an inter­nal probe into who had leaked a doc­u­ment to the New York Times about a fed­er­al inves­ti­ga­tion of JPMor­gan for pos­si­bly manip­u­lat­ing U.S. elec­tric­i­ty mar­kets. Evi­dence indi­cat­ed the leak­er could have been Frank Bisig­nano, who’d recent­ly resigned as JPMorgan’s co-chief oper­at­ing offi­cer to become CEO of First Data Corp., the big pay­ments proces­sor. Cav­ic­chia had used Metrop­o­lis to gain access to emails about the leak investigation—some writ­ten by top executives—and the bank believed he shared the con­tents of those emails and oth­er com­mu­ni­ca­tions with Bisig­nano after Bisig­nano had left the bank. (Inside JPMor­gan, Bisig­nano was con­sid­ered Cavicchia’s patron—a senior exec­u­tive who pro­tect­ed and pro­mot­ed him.)

    JPMor­gan offi­cials debat­ed whether to file a sus­pi­cious activ­i­ty report with fed­er­al reg­u­la­tors about the inter­nal secu­ri­ty breach, as required by law when­ev­er banks sus­pect reg­u­la­to­ry vio­la­tions. They decid­ed not to—a con­tro­ver­sial deci­sion inter­nal­ly, accord­ing to mul­ti­ple sources with the bank. Cav­ic­chia nego­ti­at­ed a sev­er­ance agree­ment and was forced to resign. He joined Bisig­nano at First Data, where he’s now a senior vice pres­i­dent. Chiarel­lo also went to First Data, as pres­i­dent. After their depar­tures, JPMor­gan dras­ti­cal­ly cur­tailed its Palan­tir use, in part because “it nev­er lived up to its promised poten­tial,” says one JPMor­gan exec­u­tive who insist­ed on anonymi­ty to dis­cuss the deci­sion.

    The bank, First Data, and Bisig­nano, Chiarel­lo, and Cav­ic­chia didn’t respond to sep­a­rate­ly emailed ques­tions for this arti­cle. Palan­tir, in a state­ment respond­ing to ques­tions about how JPMor­gan and oth­ers have used its soft­ware, declined to answer spe­cif­ic ques­tions. “We are aware that pow­er­ful tech­nol­o­gy can be abused and we spend a lot of time and ener­gy mak­ing sure our prod­ucts are used for the forces of good,” the state­ment said.

    Much depends on how the com­pa­ny choos­es to define good. In March a for­mer com­put­er engi­neer for Cam­bridge Ana­lyt­i­ca, the polit­i­cal con­sult­ing firm that worked for Don­ald Trump’s 2016 pres­i­den­tial cam­paign, tes­ti­fied in the British Par­lia­ment that a Palan­tir employ­ee had helped Cam­bridge Ana­lyt­i­ca use the per­son­al data of up to 87 mil­lion Face­book users to devel­op psy­cho­graph­ic pro­files of indi­vid­ual vot­ers. Palan­tir said it has a strict pol­i­cy against work­ing on polit­i­cal issues, includ­ing cam­paigns, and showed Bloomberg emails in which it turned down Cambridge’s request to work with Palan­tir on mul­ti­ple occa­sions. The employ­ee, Palan­tir said, worked with Cam­bridge Ana­lyt­i­ca on his own time. Still, there was no mis­tak­ing the impli­ca­tions of the inci­dent: All human rela­tions are a mat­ter of record, ready to be revealed by a clever algo­rithm. Every­one is a spi­der­gram now.

    Thiel, who turned 50 in Octo­ber, long rev­eled as the lib­er­tar­i­an black sheep in left-lean­ing Sil­i­con Val­ley. He con­tributed $1.25 mil­lion to Trump’s pres­i­den­tial vic­to­ry, spoke at the Repub­li­can con­ven­tion, and has dined with Trump at the White House. But Thiel has told friends he’s had enough of the Bay Area’s “mono­cul­tur­al” lib­er­al­ism. He’s ditch­ing his long­time base in San Fran­cis­co and mov­ing his per­son­al invest­ment firms this year to Los Ange­les, where he plans to estab­lish his next project, a con­ser­v­a­tive media empire.

    As Thiel’s wealth has grown, he’s got­ten more stri­dent. In a 2009 essay for the Cato Insti­tute, he railed against tax­es, ­gov­ern­ment, women, poor peo­ple, and society’s acqui­es­cence to the inevitabil­i­ty of death. (Thiel doesn’t accept death as inex­orable.) He wrote that he’d reached some rad­i­cal con­clu­sions: “Most impor­tant­ly, I no longer believe that free­dom and democ­ra­cy are com­pat­i­ble.” The 1920s was the last time one could feel “gen­uine­ly opti­mistic” about Amer­i­can democ­ra­cy, he said; since then, “the vast increase in wel­fare ben­e­fi­cia­ries and the exten­sion of the fran­chise to women—two con­stituen­cies that are noto­ri­ous­ly tough for libertarians—have ren­dered the notion of ‘cap­i­tal­ist democ­ra­cy’ into an oxy­moron.”

    Thiel went into tech after miss­ing a prized Supreme Court clerk­ship fol­low­ing his grad­u­a­tion from Stan­ford Law School. He co-found­ed Pay­Pal and then par­layed his win­nings from its 2002 sale to EBay Inc. into a career in ven­ture invest­ing. He made an ear­ly bet on Face­book Inc. (where he’s still on the board), which accounts for most of his $3.3 bil­lion for­tune, as esti­mat­ed by Bloomberg, and launched his career as a backer of big ideas—things like pri­vate space trav­el (through an invest­ment in SpaceX), hotel alter­na­tives (Airbnb), and float­ing island nations (the Seast­eading Insti­tute).

    He start­ed Palantir—named after the omni­scient crys­tal balls in J.R.R. Tolkien’s Lord of the Rings trilogy—three years after the attacks of Sept. 11, 2001. The CIA’s invest­ment arm, In-Q-Tel, was a seed investor. For the role of chief exec­u­tive offi­cer, he chose an old law school friend and self-described neo-Marx­ist, Alex Karp. Thiel told Bloomberg in 2011 that civ­il lib­er­tar­i­ans ought to embrace Palan­tir, because data min­ing is less repres­sive than the “crazy abus­es and dra­con­ian poli­cies” pro­posed after Sept. 11. The best way to pre­vent anoth­er cat­a­stroph­ic attack with­out becom­ing a police state, he argued, was to give the gov­ern­ment the best sur­veil­lance tools pos­si­ble, while build­ing in safe­guards against their abuse.

    Leg­end has it that Stephen Cohen, one of Thiel’s co-founders, pro­grammed the ini­tial pro­to­type for Palantir’s soft­ware in two weeks. It took years, how­ev­er, to coax cus­tomers away from the long­time leader in the intel­li­gence ana­lyt­ics mar­ket, a soft­ware com­pa­ny called I2 Inc.

    In one adven­ture miss­ing from the glow­ing accounts of Palantir’s ear­ly rise, I2 accused Palan­tir of mis­ap­pro­pri­at­ing its intel­lec­tu­al prop­er­ty through a Flori­da shell com­pa­ny reg­is­tered to the fam­i­ly of a Palan­tir exec­u­tive. A com­pa­ny claim­ing to be a pri­vate eye firm had been licens­ing I2 soft­ware and devel­op­ment tools and spir­it­ing them to Palan­tir for more than four years. I2 said the cutout was reg­is­tered to the fam­i­ly of Shyam Sankar, Palantir’s direc­tor of busi­ness devel­op­ment.

    I2 sued Palan­tir in fed­er­al court, alleg­ing fraud, con­spir­a­cy, and copy­right infringe­ment. In its legal response, Palan­tir argued it had the right to appro­pri­ate I2’s code for the greater good. “What’s at stake here is the abil­i­ty of crit­i­cal nation­al secu­ri­ty, defense and intel­li­gence agen­cies to access their own data and use it inter­op­er­a­bly in whichev­er plat­form they choose in order to most effec­tive­ly pro­tect the cit­i­zen­ry,” Palan­tir said in its motion to dis­miss I2’s suit.

    The motion was denied. Palan­tir agreed to pay I2 about $10 mil­lion to set­tle the suit. I2 was sold to IBM in 2011.

    Sankar, Palan­tir employ­ee No.13 and now one of the company’s top exec­u­tives, also showed up in anoth­er Palan­tir scan­dal: the company’s 2010 pro­pos­al for the U.S. Cham­ber of Com­merce to run a secret sab­o­tage cam­paign against the group’s lib­er­al oppo­nents. Hacked emails released by the group Anony­mous indi­cat­ed that Palan­tir and two oth­er defense con­trac­tors pitched out­side lawyers for the orga­ni­za­tion on a plan to snoop on the fam­i­lies of pro­gres­sive activists, cre­ate fake iden­ti­ties to infil­trate left-lean­ing groups, scrape social media with bots, and plant false infor­ma­tion with lib­er­al groups to sub­se­quent­ly dis­cred­it them.

    After the emails emerged in the press, Palan­tir offered an expla­na­tion sim­i­lar to the one it pro­vid­ed in March for its U.K.-based employee’s assis­tance to Cam­bridge Ana­lyt­i­ca: It was the work of a sin­gle rogue employ­ee. The com­pa­ny nev­er explained Sankar’s involve­ment. Karp issued a pub­lic apol­o­gy and said he and Palan­tir were deeply com­mit­ted to pro­gres­sive caus­es. Palan­tir set up an advi­so­ry pan­el on pri­va­cy and civ­il lib­er­ties, head­ed by a for­mer CIA attor­ney, and beefed up an engi­neer­ing group it calls the Pri­va­cy and Civ­il Lib­er­ties Team. The com­pa­ny now has about 10 PCL engi­neers on call to help vet clients’ requests for access to data troves and pitch in with per­ti­nent thoughts about law, moral­i­ty, and machines.

    Dur­ing its 14 years in start­up mode, Palan­tir has cul­ti­vat­ed a mys­tique as a haven for bril­liant engi­neers who want to solve big prob­lems such as ter­ror­ism and human traf­fick­ing, unfet­tered by pedes­tri­an con­cerns such as mak­ing mon­ey. Palan­tir exec­u­tives boast of not employ­ing a sin­gle sales­person, rely­ing instead on word-of-mouth refer­rals.

    The company’s ear­ly data min­ing daz­zled ven­ture investors, who val­ued it at $20 bil­lion in 2015. But Palan­tir has nev­er report­ed a prof­it. It oper­ates less like a con­ven­tion­al soft­ware com­pa­ny than like a con­sul­tan­cy, deploy­ing rough­ly half its 2,000 engi­neers to client sites. That works at well-fund­ed gov­ern­ment spy agen­cies seek­ing spe­cial­ized appli­ca­tions but has pro­duced mixed results with cor­po­rate clients. Palantir’s high instal­la­tion and main­te­nance costs repelled cus­tomers such as Her­shey Co., which trum­pet­ed a Palan­tir part­ner­ship in 2015 only to walk away two years lat­er. Coca-Cola, Nas­daq, Amer­i­can Express, and Home Depot have also dumped Palan­tir.

    Karp rec­og­nized the high-touch mod­el was prob­lem­at­ic ear­ly in the company’s push into the cor­po­rate mar­ket, but solu­tions have been elu­sive. “We didn’t want to be a ser­vices com­pa­ny. We want­ed to do some­thing that was cost-effi­cient,” he con­fessed at a Euro­pean con­fer­ence in 2010, in one of sev­er­al unguard­ed com­ments cap­tured in videos post­ed online. “Of course, what we didn’t rec­og­nize was that this would be much, much hard­er than we real­ized.”

    Palantir’s newest prod­uct, Foundry, aims to final­ly break through the prof­itabil­i­ty bar­ri­er with more automa­tion and less need for on-site engi­neers. Air­bus SE, the big Euro­pean plane mak­er, uses Foundry to crunch air­line data about spe­cif­ic onboard com­po­nents to track usage and main­te­nance and antic­i­pate repair prob­lems. Mer­ck KGaA, the phar­ma­ceu­ti­cal giant, has a long-term Palan­tir con­tract to use Foundry in drug devel­op­ment and sup­ply chain man­age­ment.

    Deep­er adop­tion of Foundry in the com­mer­cial mar­ket is cru­cial to Palantir’s hopes of a big pay­day. Some investors are weary and have already writ­ten down their Palan­tir stakes. Mor­gan Stan­ley now val­ues the com­pa­ny at $6 bil­lion. Fred Alger Man­age­ment Inc., which has owned stock since at least 2006, reval­ued Palan­tir in Decem­ber at about $10 bil­lion, accord­ing to Bloomberg Hold­ings. One frus­trat­ed investor, Marc Abramowitz, recent­ly won a court order for Palan­tir to show him its books, as part of a law­suit he filed alleg­ing the com­pa­ny sab­o­taged his attempt to find a buy­er for the Palan­tir shares he has owned for more than a decade.

    As shown in the pri­va­cy breach­es at Face­book and Cam­bridge Analytica—with Thiel and Palan­tir linked to both sides of the equation—the pres­sure to mon­e­tize data at tech com­pa­nies is cease­less. Face­book didn’t grow from a web­site con­nect­ing col­lege kids into a pur­vey­or of user pro­files and predilec­tions worth $478 bil­lion by walling off per­son­al data. Palan­tir says its Pri­va­cy and Civ­il Lib­er­ties Team watch­es out for inap­pro­pri­ate data demands, but it con­sists of just 10 peo­ple in a com­pa­ny of 2,000 engi­neers. No one said no to JPMor­gan, or to whomev­er at Palan­tir vol­un­teered to help Cam­bridge Analytica—or to anoth­er orga­ni­za­tion keen­ly inter­est­ed in state-of-the-art data sci­ence, the Los Ange­les Police Depart­ment.

    Palan­tir began work with the LAPD in 2009. The impe­tus was fed­er­al fund­ing. After sev­er­al Sept. 11 post­mortems called for more intel­li­gence shar­ing at all lev­els of law enforce­ment, mon­ey start­ed flow­ing to Palan­tir to help build data inte­gra­tion sys­tems for so-called fusion cen­ters, start­ing in L.A. There are now more than 1,300 trained Palan­tir users at more than a half-dozen law enforce­ment agen­cies in South­ern Cal­i­for­nia, includ­ing local police and sheriff’s depart­ments and the Bureau of Alco­hol, Tobac­co, Firearms and Explo­sives.

    The LAPD uses Palantir’s Gotham prod­uct for Oper­a­tion Laser, a pro­gram to iden­ti­fy and deter peo­ple like­ly to com­mit crimes. Infor­ma­tion from rap sheets, parole reports, police inter­views, and oth­er sources is fed into the sys­tem to gen­er­ate a list of peo­ple the depart­ment defines as chron­ic offend­ers, says Craig Uchi­da, whose con­sult­ing firm, Jus­tice & Secu­ri­ty Strate­gies Inc., designed the Laser sys­tem. The list is dis­trib­uted to patrol­men, with orders to mon­i­tor and stop the pre-crime sus­pects as often as pos­si­ble, using excus­es such as jay­walk­ing or fix-it tick­ets. At each con­tact, offi­cers fill out a field inter­view card with names, address­es, vehi­cles, phys­i­cal descrip­tions, any neigh­bor­hood intel­li­gence the per­son offers, and the officer’s own obser­va­tions on the sub­ject.

    The cards are dig­i­tized in the Palan­tir sys­tem, adding to a con­stant­ly expand­ing sur­veil­lance data­base that’s ful­ly acces­si­ble with­out a war­rant. Tomorrow’s data points are auto­mat­i­cal­ly linked to today’s, with the goal of gen­er­at­ing inves­tiga­tive leads. Say a chron­ic offend­er is tagged as a pas­sen­ger in a car that’s pulled over for a bro­ken tail­light. Two years lat­er, that same car is spot­ted by an auto­mat­ic license plate read­er near a crime scene 200 miles across the state. As soon as the plate hits the sys­tem, Palan­tir alerts the offi­cer who made the orig­i­nal stop that a car once linked to the chron­ic offend­er was spot­ted near a crime scene.

    The plat­form is sup­ple­ment­ed with what soci­ol­o­gist Sarah Brayne calls the sec­ondary sur­veil­lance net­work: the web of who is relat­ed to, friends with, or sleep­ing with whom. One woman in the sys­tem, for exam­ple, who wasn’t sus­pect­ed of com­mit­ting any crime, was iden­ti­fied as hav­ing mul­ti­ple boyfriends with­in the same net­work of asso­ciates, says Brayne, who spent two and a half years embed­ded with the LAPD while research­ing her dis­ser­ta­tion on big-data polic­ing at Prince­ton Uni­ver­si­ty and who’s now an asso­ciate pro­fes­sor at the Uni­ver­si­ty of Texas at Austin. “Any­body who logs into the sys­tem can see all these inti­mate ties,” she says. To widen the scope of pos­si­ble con­nec­tions, she adds, the LAPD has also explored pur­chas­ing pri­vate data, includ­ing social media, fore­clo­sure, and toll road infor­ma­tion, cam­era feeds from hos­pi­tals, park­ing lots, and uni­ver­si­ties, and deliv­ery infor­ma­tion from Papa John’s Inter­na­tion­al Inc. and Piz­za Hut LLC.

    The LAPD declined to com­ment for this sto­ry. Palan­tir sent Bloomberg a state­ment about its work with law enforce­ment: “Our [for­ward-deployed engi­neers] and [pri­va­cy and civ­il lib­er­ties] engi­neers work with the law enforce­ment cus­tomers (includ­ing LAPD) to ensure that the imple­men­ta­tion of our soft­ware and inte­gra­tion of their source sys­tems with the soft­ware is con­sis­tent with the Department’s legal and pol­i­cy oblig­a­tions, as well as pri­va­cy and civ­il lib­er­ties con­sid­er­a­tions that may not cur­rent­ly be leg­is­lat­ed but are on the hori­zon. We as a com­pa­ny deter­mine the types of engage­ments and gen­er­al appli­ca­tions of our soft­ware with respect to those over­ar­ch­ing con­sid­er­a­tions. Police Agen­cies have inter­nal respon­si­bil­i­ty for ensur­ing that their infor­ma­tion sys­tems are used in a man­ner con­sis­tent with their poli­cies and pro­ce­dures.”

    Oper­a­tion Laser has made L.A. cops more surgical—and, accord­ing to com­mu­ni­ty activists, unre­lent­ing. Once tar­gets are enmeshed in a spi­der­gram, they’re stuck.

    ...

    Palan­tir is twice the age most star­tups are when they cash out in a sale or ini­tial pub­lic offer­ing. The com­pa­ny needs to fig­ure out how to be reward­ed on Wall Street with­out creep­ing out Main Street. It might not be pos­si­ble. For all of Palantir’s pro­fessed con­cern for indi­vid­u­als’ pri­va­cy, the sin­gle most impor­tant safe­guard against abuse is the one it’s try­ing des­per­ate­ly to reduce through automa­tion: human judg­ment.

    As Palan­tir tries to court cor­po­rate cus­tomers as a more con­ven­tion­al soft­ware com­pa­ny, few­er for­ward-deployed engi­neers will mean few­er human deci­sions. Sen­si­tive ques­tions, such as how deeply to pry into people’s lives, will be answered increas­ing­ly by arti­fi­cial intel­li­gence and machine-learn­ing algo­rithms. The small team of Pri­va­cy and Civ­il Lib­er­ties engi­neers could find them­selves even less influ­en­tial, as the urge for omnipo­tence among clients over­whelms any self-imposed restraints.

    Com­put­ers don’t ask moral ques­tions; peo­ple do, says John Grant, one of Palantir’s top PCL engi­neers and a force­ful advo­cate for manda­to­ry ethics edu­ca­tion for engi­neers. “At a com­pa­ny like ours with mil­lions of lines of code, every tiny deci­sion could have huge impli­ca­tions,” Grant told a pri­va­cy con­fer­ence in Berke­ley last year.

    JPMorgan’s expe­ri­ence remains instruc­tive. “The world changed when it became clear every­one could be tar­get­ed using Palan­tir,” says a for­mer JPMor­gan cyber expert who worked with Cav­ic­chia at one point on the insid­er threat team. “Nefar­i­ous ideas became triv­ial to imple­ment; everyone’s a sus­pect, so we mon­i­tored every­thing. It was a pret­ty ter­ri­ble feel­ing.”
    ———–

    “Peter Thiel’s data-min­ing com­pa­ny is using War on Ter­ror tools to track Amer­i­can cit­i­zens. The scary thing? Palan­tir is des­per­ate for new cus­tomers.” by Peter Wald­man, Lizette Chap­man, and Jor­dan Robert­son; Bloomberg Busi­ness­week; 04/19/2018

    “High above the Hud­son Riv­er in down­town Jer­sey City, a for­mer U.S. Secret Ser­vice agent named Peter Cav­ic­chia III ran spe­cial ops for JPMor­gan Chase & Co. His insid­er threat group—most large finan­cial insti­tu­tions have one—used com­put­er algo­rithms to mon­i­tor the bank’s employ­ees, osten­si­bly to pro­tect against per­fid­i­ous traders and oth­er mis­cre­ants.”

    Insider threat services. That appears to be one of the primary services Palantir is trying to offer to corporate clients. It’s the kind of service that gives Palantir access to almost everything employees are doing in a company and basically turns it into a Big Brother-for-hire entity. And when JP Morgan hired Palantir to provide these services, the bank ended up dropping them after executives learned that the program was too Big Brother-ish and was watching over the executives too:

    ...
    Aid­ed by as many as 120 “for­ward-deployed engi­neers” from the data min­ing com­pa­ny Palan­tir Tech­nolo­gies Inc., which JPMor­gan engaged in 2009, Cavicchia’s group vac­u­umed up emails and brows­er his­to­ries, GPS loca­tions from com­pa­ny-issued smart­phones, print­er and down­load activ­i­ty, and tran­scripts of dig­i­tal­ly record­ed phone con­ver­sa­tions. Palantir’s soft­ware aggre­gat­ed, searched, sort­ed, and ana­lyzed these records, sur­fac­ing key­words and pat­terns of behav­ior that Cavicchia’s team had flagged for poten­tial abuse of cor­po­rate assets. Palantir’s algo­rithm, for exam­ple, alert­ed the insid­er threat team when an employ­ee start­ed badg­ing into work lat­er than usu­al, a sign of poten­tial dis­gruntle­ment. That would trig­ger fur­ther scruti­ny and pos­si­bly phys­i­cal sur­veil­lance after hours by bank secu­ri­ty per­son­nel.

    Over time, how­ev­er, Cav­ic­chia him­self went rogue. For­mer JPMor­gan col­leagues describe the envi­ron­ment as Wall Street meets Apoc­a­lypse Now, with Cav­ic­chia as Colonel Kurtz, ensconced upriv­er in his office suite eight floors above the rest of the bank’s secu­ri­ty team. Peo­ple in the depart­ment were shocked that no one from the bank or Palan­tir set any real lim­its. They dark­ly joked that Cav­ic­chia was lis­ten­ing to their calls, read­ing their emails, watch­ing them come and go. Some plant­ed fake infor­ma­tion in their com­mu­ni­ca­tions to see if Cav­ic­chia would men­tion it at meet­ings, which he did.

    It all end­ed when the bank’s senior exec­u­tives learned that they, too, were being watched, and what began as a promis­ing mar­riage of mas­ters of big data and glob­al finance descend­ed into a spy­ing scan­dal. The mis­ad­ven­ture, which has nev­er been report­ed, also marked an omi­nous turn for Palan­tir, one of the most rich­ly val­ued star­tups in Sil­i­con Val­ley. An intel­li­gence plat­form designed for the glob­al War on Ter­ror was weaponized against ordi­nary Amer­i­cans at home.
    ...
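
    To make that pattern-flagging concrete, here’s a minimal, purely hypothetical sketch of the kind of rule-based alert described above (the “badging into work later than usual” flag). To be clear, this is not Palantir’s code; the function names, data layout, and 45-minute threshold are all invented for illustration:

    import statistics
    from datetime import time

    def badge_minutes(t: time) -> int:
        """Convert a badge-in time to minutes after midnight."""
        return t.hour * 60 + t.minute

    # Hypothetical rule: flag an employee whose recent average badge-in
    # time has drifted well past their historical baseline.
    def flag_late_badging(history, recent, threshold_minutes=45):
        baseline = statistics.mean(badge_minutes(t) for t in history)
        current = statistics.mean(badge_minutes(t) for t in recent)
        return current - baseline > threshold_minutes

    # Example: a ~9:00 a.m. baseline vs. a recent slide toward ~10:15 a.m.
    history = [time(8, 55), time(9, 5), time(9, 0), time(8, 50)]
    recent = [time(10, 10), time(10, 20), time(10, 5)]
    print(flag_late_badging(history, recent))  # True -> "potential disgruntlement"

    The point of the sketch is how little it takes: one simple rule over routinely collected badge data is enough to route someone into “further scrutiny and possibly physical surveillance.”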

    And this project at JP Morgan was basically the test lab for a new service Palantir is trying to offer the financial sector, called Metropolis:

    ...
    JPMor­gan was effec­tive­ly Palantir’s R&D lab and test bed for a for­ay into the finan­cial sec­tor, via a prod­uct called Metrop­o­lis. The two com­pa­nies made an odd cou­ple. Palantir’s soft­ware engi­neers showed up at the bank on skate­boards. Neck­ties and hair­cuts were too much to ask, but JPMor­gan drew the line at T‑shirts. The pro­gram­mers had to agree to wear shirts with col­lars, tucked in when pos­si­ble.

    As Metrop­o­lis was installed and refined, JPMor­gan made an equi­ty invest­ment in Palan­tir and induct­ed the com­pa­ny into its Hall of Inno­va­tion, while its exec­u­tives raved about Palan­tir in the press. The soft­ware turned “data land­fills into gold mines,” Guy Chiarel­lo, who was then JPMorgan’s chief infor­ma­tion offi­cer, told Bloomberg Busi­ness­week in 2011.

    Cav­ic­chia was in charge of foren­sic inves­ti­ga­tions at the bank. Through Palan­tir, he gained admin­is­tra­tive access to a full range of cor­po­rate secu­ri­ty data­bas­es that had pre­vi­ous­ly required sep­a­rate autho­riza­tions and a spe­cif­ic busi­ness jus­ti­fi­ca­tion to use. He had unprece­dent­ed access to every­thing, all at once, all the time, on one ana­lyt­ic plat­form. He was a one-man Nation­al Secu­ri­ty Agency, sur­round­ed by the Palan­tir engi­neers, each one cost­ing the bank as much as $3,000 a day.
    ...

    And through this JP Morgan test bed for Metropolis, Peter Cavicchia’s insider threat group was given access to “a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use”, along with a team of Palantir engineers to help him use that data. This is the business model Palantir was trying to test so it could sell to other banks: using Palantir to give bank employees unprecedented access to the bank’s internal data (which, of course, means Palantir likely has access to that data too):

    ...
    Cav­ic­chia was in charge of foren­sic inves­ti­ga­tions at the bank. Through Palan­tir, he gained admin­is­tra­tive access to a full range of cor­po­rate secu­ri­ty data­bas­es that had pre­vi­ous­ly required sep­a­rate autho­riza­tions and a spe­cif­ic busi­ness jus­ti­fi­ca­tion to use. He had unprece­dent­ed access to every­thing, all at once, all the time, on one ana­lyt­ic plat­form. He was a one-man Nation­al Secu­ri­ty Agency, sur­round­ed by the Palan­tir engi­neers, each one cost­ing the bank as much as $3,000 a day.
    ...

    But Palantir’s test bed at JP Morgan ultimately turned into a failed experiment when JP Morgan’s leadership learned that Cavicchia had apparently used his unprecedented access to internal documents to spy on JP Morgan executives who were investigating a leak to the New York Times. The leak appeared to have come from an executive who had just left the company, Frank Bisignano, who also happened to be Cavicchia’s patron. And the leak investigation appeared to show that Cavicchia accessed executive emails about the leak and passed them along to Bisignano. In other words, JP Morgan learned that the guy they made their corporate Big Brother abused that power (shocker):

    ...
    Senior inves­ti­ga­tors stum­bled onto the full extent of the spy­ing by acci­dent. In May 2013 the bank’s lead­er­ship ordered an inter­nal probe into who had leaked a doc­u­ment to the New York Times about a fed­er­al inves­ti­ga­tion of JPMor­gan for pos­si­bly manip­u­lat­ing U.S. elec­tric­i­ty mar­kets. Evi­dence indi­cat­ed the leak­er could have been Frank Bisig­nano, who’d recent­ly resigned as JPMorgan’s co-chief oper­at­ing offi­cer to become CEO of First Data Corp., the big pay­ments proces­sor. Cav­ic­chia had used Metrop­o­lis to gain access to emails about the leak investigation—some writ­ten by top executives—and the bank believed he shared the con­tents of those emails and oth­er com­mu­ni­ca­tions with Bisig­nano after Bisig­nano had left the bank. (Inside JPMor­gan, Bisig­nano was con­sid­ered Cavicchia’s patron—a senior exec­u­tive who pro­tect­ed and pro­mot­ed him.)

    JPMor­gan offi­cials debat­ed whether to file a sus­pi­cious activ­i­ty report with fed­er­al reg­u­la­tors about the inter­nal secu­ri­ty breach, as required by law when­ev­er banks sus­pect reg­u­la­to­ry vio­la­tions. They decid­ed not to—a con­tro­ver­sial deci­sion inter­nal­ly, accord­ing to mul­ti­ple sources with the bank. Cav­ic­chia nego­ti­at­ed a sev­er­ance agree­ment and was forced to resign. He joined Bisig­nano at First Data, where he’s now a senior vice pres­i­dent. Chiarel­lo also went to First Data, as pres­i­dent. After their depar­tures, JPMor­gan dras­ti­cal­ly cur­tailed its Palan­tir use, in part because “it nev­er lived up to its promised poten­tial,” says one JPMor­gan exec­u­tive who insist­ed on anonymi­ty to dis­cuss the deci­sion.
    ...

    Thus ended Palantir’s test run of Metropolis, highlighting the fact that the extensive manpower associated with Palantir’s services isn’t the only factor that might keep corporate clients away. The way Palantir’s services concentrate unprecedented access to a company’s internal documents in a few individuals might also drive clients away. After all, threat assessment groups are intended to mitigate risk, not exacerbate it.

    But the cost of all those on-site Palantir engineers is still an obstacle to wider adoption of Palantir’s services. As the article notes, roughly half of Palantir’s 2,000 engineers are working on client sites:

    ...
    The company’s ear­ly data min­ing daz­zled ven­ture investors, who val­ued it at $20 bil­lion in 2015. But Palan­tir has nev­er report­ed a prof­it. It oper­ates less like a con­ven­tion­al soft­ware com­pa­ny than like a con­sul­tan­cy, deploy­ing rough­ly half its 2,000 engi­neers to client sites. That works at well-fund­ed gov­ern­ment spy agen­cies seek­ing spe­cial­ized appli­ca­tions but has pro­duced mixed results with cor­po­rate clients. Palantir’s high instal­la­tion and main­te­nance costs repelled cus­tomers such as Her­shey Co., which trum­pet­ed a Palan­tir part­ner­ship in 2015 only to walk away two years lat­er. Coca-Cola, Nas­daq, Amer­i­can Express, and Home Depot have also dumped Palan­tir.
    ...

    And that’s what Palantir’s newest product, Foundry, is designed to address, by increasingly automating the corporate surveillance process:

    ...
    Palantir’s newest prod­uct, Foundry, aims to final­ly break through the prof­itabil­i­ty bar­ri­er with more automa­tion and less need for on-site engi­neers. Air­bus SE, the big Euro­pean plane mak­er, uses Foundry to crunch air­line data about spe­cif­ic onboard com­po­nents to track usage and main­te­nance and antic­i­pate repair prob­lems. Mer­ck KGaA, the phar­ma­ceu­ti­cal giant, has a long-term Palan­tir con­tract to use Foundry in drug devel­op­ment and sup­ply chain man­age­ment.

    Deep­er adop­tion of Foundry in the com­mer­cial mar­ket is cru­cial to Palantir’s hopes of a big pay­day. Some investors are weary and have already writ­ten down their Palan­tir stakes. Mor­gan Stan­ley now val­ues the com­pa­ny at $6 bil­lion. Fred Alger Man­age­ment Inc., which has owned stock since at least 2006, reval­ued Palan­tir in Decem­ber at about $10 bil­lion, accord­ing to Bloomberg Hold­ings. One frus­trat­ed investor, Marc Abramowitz, recent­ly won a court order for Palan­tir to show him its books, as part of a law­suit he filed alleg­ing the com­pa­ny sab­o­taged his attempt to find a buy­er for the Palan­tir shares he has owned for more than a decade.
    ...

    “Deep­er adop­tion of Foundry in the com­mer­cial mar­ket is cru­cial to Palantir’s hopes of a big pay­day.”

    And that appears to be the direction Palantir is heading: automated corporate surveillance that will allow the company to offer its services more cheaply and to more clients. So if Palantir succeeds, we just might see A LOT more companies hiring Palantir’s services, which means A LOT more employees are going to have Palantir’s software watching and analyzing their every keystroke and email. It really is pretty ominous. Especially given the fact that the company’s Privacy and Civil Liberties Team consists of a whole 10 people:

    ...
    As shown in the pri­va­cy breach­es at Face­book and Cam­bridge Analytica—with Thiel and Palan­tir linked to both sides of the equation—the pres­sure to mon­e­tize data at tech com­pa­nies is cease­less. Face­book didn’t grow from a web­site con­nect­ing col­lege kids into a pur­vey­or of user pro­files and predilec­tions worth $478 bil­lion by walling off per­son­al data. Palan­tir says its Pri­va­cy and Civ­il Lib­er­ties Team watch­es out for inap­pro­pri­ate data demands, but it con­sists of just 10 peo­ple in a com­pa­ny of 2,000 engi­neers. No one said no to JPMor­gan, or to whomev­er at Palan­tir vol­un­teered to help Cam­bridge Analytica—or to anoth­er orga­ni­za­tion keen­ly inter­est­ed in state-of-the-art data sci­ence, the Los Ange­les Police Depart­ment.
    ...

    So that’s an overview of the current status of Palantir’s Big Brother-for-hire services: they’ve hit some obstacles, but if they can succeed in overcoming those obstacles Palantir could become the go-to corporate surveillance firm. It’s more than a little ominous.

    And then there are the two fun facts from this article that relate to questions about Palantir’s ties to Cambridge Analytica. First, just as Palantir claimed that its employee found to be working with Cambridge Analytica, Alfredas Chmieliauskas, was acting on his own, that’s the same excuse Palantir gave when it was caught pitching a project to the US Chamber of Commerce to run a secret campaign to spy on and sabotage the Chamber’s critics: it was just a lone employee:

    ...
    Sankar, Palan­tir employ­ee No.13 and now one of the company’s top exec­u­tives, also showed up in anoth­er Palan­tir scan­dal: the company’s 2010 pro­pos­al for the U.S. Cham­ber of Com­merce to run a secret sab­o­tage cam­paign against the group’s lib­er­al oppo­nents. Hacked emails released by the group Anony­mous indi­cat­ed that Palan­tir and two oth­er defense con­trac­tors pitched out­side lawyers for the orga­ni­za­tion on a plan to snoop on the fam­i­lies of pro­gres­sive activists, cre­ate fake iden­ti­ties to infil­trate left-lean­ing groups, scrape social media with bots, and plant false infor­ma­tion with lib­er­al groups to sub­se­quent­ly dis­cred­it them.

    After the emails emerged in the press, Palan­tir offered an expla­na­tion sim­i­lar to the one it pro­vid­ed in March for its U.K.-based employee’s assis­tance to Cam­bridge Ana­lyt­i­ca: It was the work of a sin­gle rogue employ­ee. The com­pa­ny nev­er explained Sankar’s involve­ment. Karp issued a pub­lic apol­o­gy and said he and Palan­tir were deeply com­mit­ted to pro­gres­sive caus­es. Palan­tir set up an advi­so­ry pan­el on pri­va­cy and civ­il lib­er­ties, head­ed by a for­mer CIA attor­ney, and beefed up an engi­neer­ing group it calls the Pri­va­cy and Civ­il Lib­er­ties Team. The com­pa­ny now has about 10 PCL engi­neers on call to help vet clients’ requests for access to data troves and pitch in with per­ti­nent thoughts about law, moral­i­ty, and machines.
    ...

    Finally, there’s the interesting fact that Palantir executives boast of not employing a single salesperson, relying instead on word of mouth:

    ...
    Dur­ing its 14 years in start­up mode, Palan­tir has cul­ti­vat­ed a mys­tique as a haven for bril­liant engi­neers who want to solve big prob­lems such as ter­ror­ism and human traf­fick­ing, unfet­tered by pedes­tri­an con­cerns such as mak­ing mon­ey. Palan­tir exec­u­tives boast of not employ­ing a sin­gle sales­person, rely­ing instead on word-of-mouth refer­rals.
    ...

    And Sophie Schmidt, Google CEO Eric Schmidt’s daughter and a former Cambridge Analytica intern, provided exactly that in June of 2013: a word-of-mouth endorsement of Palantir. So did Sophie Schmidt make this word-of-mouth pitch independently and coincidentally? That remains an unanswered question, but it’s hard to ignore that Schmidt’s pitch fits exactly the way Palantir markets itself.

    So we’ll see what hap­pens with Palan­tir and its dri­ve to use auto­mat­ed cor­po­rate sur­veil­lance to cut costs and sell its Big Broth­er-for-hire ser­vices to even more large employ­ers. But it does seem like just a mat­ter of time before Palan­tir suc­ceeds in cut­ting those costs, which means “word of mouth” isn’t just going to be Palan­tir’s approach to mar­ket­ing. Word of mouth is also going to be the only way employ­ees in the future will be able to say some­thing to each oth­er with­out Palan­tir know­ing about it.

    Posted by Pterrafractyl | April 19, 2018, 7:43 pm
  11. Here’s an update on how Facebook is planning on addressing the new scrutiny it’s receiving from the US Congress as the Cambridge Analytica scandal continues to play out: Facebook’s head of policy in the United States, Erin Egan, was just replaced. It’s a notable position, politically speaking, because it’s based in Washington DC, so Facebook basically just replaced one of its top DC lobbyists.

    So who replaced Egan? Kevin Mar­tin, Face­book’s vice pres­i­dent of mobile and glob­al access pol­i­cy. Oh, and Mar­tin was also a for­mer Repub­li­can chair­man of the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion. Sur­prise!

    Mar­tin will report to vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan. Oh, and Mar­tin and Kaplan worked togeth­er in the George W. Bush White House and on Bush’s 2000 pres­i­den­tial cam­paign. Sur­prise again! There’s a dis­tinct ‘K Street’ feel to it all.

    Facebook is spinning this by emphasizing that Egan will remain chief privacy officer. The company is acting as if it made this move in order to have someone with Egan’s credentials focused on rebuilding trust, and not so it could replace her with a Republican.

    And that appears to be Face­book’s strat­e­gy for deal­ing with Con­gress: task­ing Repub­li­cans to lob­by their fel­low Repub­li­cans:

    The New York Times

    Face­book Replaces Lob­by­ing Exec­u­tive Amid Reg­u­la­to­ry Scruti­ny

    By Cecil­ia Kang
    April 24, 2018

    WASHINGTON — Face­book on Tues­day replaced its head of pol­i­cy in the Unit­ed States, Erin Egan, as the social net­work scram­bles to respond to intense scruti­ny from fed­er­al reg­u­la­tors and law­mak­ers.

    Ms. Egan, who is also Facebook’s chief pri­va­cy offi­cer, was respon­si­ble for lob­by­ing and gov­ern­ment rela­tions as head of pol­i­cy for the last two years. She will be replaced by Kevin Mar­tin on an inter­im basis, the com­pa­ny said. Mr. Mar­tin has been Facebook’s vice pres­i­dent of mobile and glob­al access pol­i­cy and is a for­mer Repub­li­can chair­man of the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion.

    Ms. Egan will remain chief pri­va­cy offi­cer and focus on pri­va­cy poli­cies across the globe, Andy Stone, a Face­book spokesman, said.

    Elliot Schrage, Facebook’s vice pres­i­dent of com­mu­ni­ca­tions and pub­lic pol­i­cy, said in a state­ment on Wednes­day: “We need to focus our best peo­ple on our most impor­tant pri­or­i­ties. We are com­mit­ted to rebuild­ing people’s trust in how we han­dle their infor­ma­tion, and Erin is the best per­son to part­ner with our prod­uct teams on that task.”

    The exec­u­tive reshuf­fling in Facebook’s Wash­ing­ton offices fol­lowed a peri­od of tumult for the com­pa­ny, which has put it increas­ing­ly in the spot­light on Capi­tol Hill. Last month, The New York Times and oth­ers report­ed that the data of mil­lions of Face­book users had been har­vest­ed by the British polit­i­cal research firm Cam­bridge Ana­lyt­i­ca. The ensu­ing out­cry led Facebook’s chief exec­u­tive, Mark Zucker­berg, to tes­ti­fy at two con­gres­sion­al hear­ings this month.

    Since the rev­e­la­tions about Cam­bridge Ana­lyt­i­ca, the Fed­er­al Trade Com­mis­sion has start­ed an inves­ti­ga­tion of whether Face­book vio­lat­ed promis­es it made in 2011 to pro­tect the pri­va­cy of users, mak­ing it hard­er for the com­pa­ny to share data with third par­ties.

    At the same time, Face­book is grap­pling with increased pri­va­cy reg­u­la­tions out­side the Unit­ed States. Sweep­ing new pri­va­cy laws called the Gen­er­al Data Pro­tec­tion Reg­u­la­tion are set to take effect in Europe next month. And Face­book has been called to talk to reg­u­la­tors in sev­er­al coun­tries, includ­ing Ire­land, Ger­many and Indone­sia, about its han­dling of user data.

    Mr. Zuckerberg told Congress this month that Facebook had grown too fast and that he hadn’t foreseen the problems the platform would confront.

    “Face­book is an ide­al­is­tic and opti­mistic com­pa­ny,” he said. “For most of our exis­tence, we focused on all the good that con­nect­ing peo­ple can bring.”

    The exec­u­tive shifts put two Repub­li­can men in charge of Facebook’s Wash­ing­ton offices. Mr. Mar­tin will report to Joel Kaplan, vice pres­i­dent of glob­al pub­lic pol­i­cy. Mr. Mar­tin and Mr. Kaplan worked togeth­er in the George W. Bush White House and on Mr. Bush’s 2000 pres­i­den­tial cam­paign.

    Face­book hired Ms. Egan in 2011; she is a fre­quent head­lin­er at tech pol­i­cy events in Wash­ing­ton. Before join­ing Face­book, she spent 15 years as a part­ner at the law firm Cov­ing­ton & Burl­ing as co-chair­woman of the glob­al pri­va­cy and secu­ri­ty group.

    ...

    ———-

    “Face­book Replaces Lob­by­ing Exec­u­tive Amid Reg­u­la­to­ry Scruti­ny” by Cecil­ia Kang; The New York Times; 04/24/2018

    “Ms. Egan, who is also Facebook’s chief privacy officer, was responsible for lobbying and government relations as head of policy for the last two years. She will be replaced by Kevin Martin on an interim basis, the company said. Mr. Martin has been Facebook’s vice president of mobile and global access policy and is a former Republican chairman of the Federal Communications Commission.”

    When you’re a company as big as Facebook, that’s who you bring in to lead your lobbying effort: the former Republican chairman of the FCC.

    And this means two Repub­li­cans will be in charge of Face­book’s Wash­ing­ton offices (which are pret­ty much there to lob­by):

    ...
    The exec­u­tive shifts put two Repub­li­can men in charge of Facebook’s Wash­ing­ton offices. Mr. Mar­tin will report to Joel Kaplan, vice pres­i­dent of glob­al pub­lic pol­i­cy. Mr. Mar­tin and Mr. Kaplan worked togeth­er in the George W. Bush White House and on Mr. Bush’s 2000 pres­i­den­tial cam­paign.
    ...

    But the way Face­book would pre­fer us to look at it, this was real­ly all about free­ing up Erin Egan to work on rebuild­ing trust over pri­va­cy con­cerns:

    ...
    Ms. Egan will remain chief pri­va­cy offi­cer and focus on pri­va­cy poli­cies across the globe, Andy Stone, a Face­book spokesman, said.

    Elliot Schrage, Facebook’s vice pres­i­dent of com­mu­ni­ca­tions and pub­lic pol­i­cy, said in a state­ment on Wednes­day: “We need to focus our best peo­ple on our most impor­tant pri­or­i­ties. We are com­mit­ted to rebuild­ing people’s trust in how we han­dle their infor­ma­tion, and Erin is the best per­son to part­ner with our prod­uct teams on that task.”
    ...

    And this move is happening at the same time Facebook is staring down a new EU data privacy regime, the GDPR:

    ...
    At the same time, Face­book is grap­pling with increased pri­va­cy reg­u­la­tions out­side the Unit­ed States. Sweep­ing new pri­va­cy laws called the Gen­er­al Data Pro­tec­tion Reg­u­la­tion are set to take effect in Europe next month. And Face­book has been called to talk to reg­u­la­tors in sev­er­al coun­tries, includ­ing Ire­land, Ger­many and Indone­sia, about its han­dling of user data.
    ...

    And those new EU GDPR rules don’t just potentially impact how Facebook handles its European users going forward. They potentially impact the policies governing all of Facebook’s users outside of the US.

    Why? Because Facebook’s customers outside the US and Canada are handled by Facebook’s operations in Ireland and are therefore under EU rules. That’s just how Facebook decided to structure itself internationally (largely due to Ireland’s status as a corporate tax haven).

    So does this mean Face­book’s US users will be oper­at­ing in a data pri­va­cy reg­u­la­to­ry envi­ron­ment man­aged by the GOP while almost every­one else in the world oper­ates under the EU’s new rules? Nope, because Face­book just moved its inter­na­tion­al oper­a­tions out of Ire­land and back to its US head­quar­ters in Cal­i­for­nia. And that means the rules Face­book is lob­by­ing for in DC will apply to all Face­book users glob­al­ly out­side the EU:

    Reuters

    Exclu­sive: Face­book to put 1.5 bil­lion users out of reach of new EU pri­va­cy law

    David Ingram
    April 18, 2018 / 7:13 PM

    SAN FRANCISCO (Reuters) — If a new Euro­pean law restrict­ing what com­pa­nies can do with people’s online data went into effect tomor­row, almost 1.9 bil­lion Face­book Inc users around the world would be pro­tect­ed by it. The online social net­work is mak­ing changes that ensure the num­ber will be much small­er.

    Face­book mem­bers out­side the Unit­ed States and Cana­da, whether they know it or not, are cur­rent­ly gov­erned by terms of ser­vice agreed with the company’s inter­na­tion­al head­quar­ters in Ire­land.

    Next month, Face­book is plan­ning to make that the case for only Euro­pean users, mean­ing 1.5 bil­lion mem­bers in Africa, Asia, Aus­tralia and Latin Amer­i­ca will not fall under the Euro­pean Union’s Gen­er­al Data Pro­tec­tion Reg­u­la­tion (GDPR), which takes effect on May 25.

    The pre­vi­ous­ly unre­port­ed move, which Face­book con­firmed to Reuters on Tues­day, shows the world’s largest online social net­work is keen to reduce its expo­sure to GDPR, which allows Euro­pean reg­u­la­tors to fine com­pa­nies for col­lect­ing or using per­son­al data with­out users’ con­sent.

    That removes a huge poten­tial lia­bil­i­ty for Face­book, as the new EU law allows for fines of up to 4 per­cent of glob­al annu­al rev­enue for infrac­tions, which in Facebook’s case could mean bil­lions of dol­lars.

    The change comes as Face­book is under scruti­ny from reg­u­la­tors and law­mak­ers around the world since dis­clos­ing last month that the per­son­al infor­ma­tion of mil­lions of users wrong­ly end­ed up in the hands of polit­i­cal con­sul­tan­cy Cam­bridge Ana­lyt­i­ca, set­ting off wider con­cerns about how it han­dles user data.

    The change affects more than 70 per­cent of Facebook’s 2 bil­lion-plus mem­bers. As of Decem­ber, Face­book had 239 mil­lion users in the Unit­ed States and Cana­da, 370 mil­lion in Europe and 1.52 bil­lion users else­where.

    Face­book, like many oth­er U.S. tech­nol­o­gy com­pa­nies, estab­lished an Irish sub­sidiary in 2008 and took advan­tage of the country’s low cor­po­rate tax rates, rout­ing through it rev­enue from some adver­tis­ers out­side North Amer­i­ca. The unit is sub­ject to reg­u­la­tions applied by the 28-nation Euro­pean Union.

    Face­book said the lat­est change does not have tax impli­ca­tions.

    ‘IN SPIRIT’

    In a state­ment giv­en to Reuters, Face­book played down the impor­tance of the terms of ser­vice change, say­ing it plans to make the pri­va­cy con­trols and set­tings that Europe will get under GDPR avail­able to the rest of the world.

    “We apply the same pri­va­cy pro­tec­tions every­where, regard­less of whether your agree­ment is with Face­book Inc or Face­book Ire­land,” the com­pa­ny said.

    Ear­li­er this month, Face­book Chief Exec­u­tive Mark Zucker­berg told Reuters in an inter­view that his com­pa­ny would apply the EU law glob­al­ly “in spir­it,” but stopped short of com­mit­ting to it as the stan­dard for the social net­work across the world.

    In prac­tice, the change means the 1.5 bil­lion affect­ed users will not be able to file com­plaints with Ireland’s Data Pro­tec­tion Com­mis­sion­er or in Irish courts. Instead they will be gov­erned by more lenient U.S. pri­va­cy laws, said Michael Veale, a tech­nol­o­gy pol­i­cy researcher at Uni­ver­si­ty Col­lege Lon­don.

    Face­book will have more lee­way in how it han­dles data about those users, Veale said. Cer­tain types of data such as brows­ing his­to­ry, for instance, are con­sid­ered per­son­al data under EU law but are not as pro­tect­ed in the Unit­ed States, he said.

    The com­pa­ny said its ratio­nale for the change was relat­ed to the Euro­pean Union’s man­dat­ed pri­va­cy notices, “because EU law requires spe­cif­ic lan­guage.” For exam­ple, the com­pa­ny said, the new EU law requires spe­cif­ic legal ter­mi­nol­o­gy about the legal basis for pro­cess­ing data which does not exist in U.S. law.

    NO WARNING

    Ire­land was unaware of the change. One Irish offi­cial, speak­ing on con­di­tion of anonymi­ty, said he did not know of any plans by Face­book to trans­fer respon­si­bil­i­ties whole­sale to the Unit­ed States or to decrease Facebook’s pres­ence in Ire­land, where the social net­work is seek­ing to recruit more than 100 new staff.

    Face­book released a revised terms of ser­vice in draft form two weeks ago, and they are sched­uled to take effect next month.

    Oth­er multi­na­tion­al com­pa­nies are also plan­ning changes. LinkedIn, a unit of Microsoft Corp, tells users in its exist­ing terms of ser­vice that if they are out­side the Unit­ed States, they have a con­tract with LinkedIn Ire­land. New terms that take effect May 8 move non-Euro­peans to con­tracts with U.S.-based LinkedIn Corp.

    ...

    ———-
    “Exclu­sive: Face­book to put 1.5 bil­lion users out of reach of new EU pri­va­cy law” by David Ingram; Reuters; 04/18/2018

    “Face­book mem­bers out­side the Unit­ed States and Cana­da, whether they know it or not, are cur­rent­ly gov­erned by terms of ser­vice agreed with the company’s inter­na­tion­al head­quar­ters in Ire­land.”

    Yep, for Facebook and quite a few other major internet companies with international headquarters in Ireland, it’s the EU’s rules that govern most of their global customer base. But not anymore for Facebook:

    ...
    Next month, Face­book is plan­ning to make that the case for only Euro­pean users, mean­ing 1.5 bil­lion mem­bers in Africa, Asia, Aus­tralia and Latin Amer­i­ca will not fall under the Euro­pean Union’s Gen­er­al Data Pro­tec­tion Reg­u­la­tion (GDPR), which takes effect on May 25.

    The pre­vi­ous­ly unre­port­ed move, which Face­book con­firmed to Reuters on Tues­day, shows the world’s largest online social net­work is keen to reduce its expo­sure to GDPR, which allows Euro­pean reg­u­la­tors to fine com­pa­nies for col­lect­ing or using per­son­al data with­out users’ con­sent.

    That removes a huge poten­tial lia­bil­i­ty for Face­book, as the new EU law allows for fines of up to 4 per­cent of glob­al annu­al rev­enue for infrac­tions, which in Facebook’s case could mean bil­lions of dol­lars.
    ...

    And that move from Ire­land to Cal­i­for­nia will impact the ~1.5 bil­lion users Face­book has out­side of the US, Cana­da, and EU:

    ...
    The change affects more than 70 per­cent of Facebook’s 2 bil­lion-plus mem­bers. As of Decem­ber, Face­book had 239 mil­lion users in the Unit­ed States and Cana­da, 370 mil­lion in Europe and 1.52 bil­lion users else­where.

    Facebook, like many other U.S. technology companies, established an Irish subsidiary in 2008 and took advantage of the country’s low corporate tax rates, routing through it revenue from some advertisers outside North America. The unit is subject to regulations applied by the 28-nation European Union.

    Face­book said the lat­est change does not have tax impli­ca­tions.
    ...
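
    To put rough numbers on both figures above, here’s a back-of-the-envelope check. The user counts are from the article; the ~$40.7 billion figure is Facebook’s reported 2017 revenue, which the article doesn’t cite and is assumed here just for scale:

    # User counts from the Reuters article, as of December 2017
    us_canada = 239_000_000
    europe = 370_000_000
    elsewhere = 1_520_000_000

    total = us_canada + europe + elsewhere
    print(f"share moved out of GDPR's reach: {elsewhere / total:.0%}")  # ~71%

    # GDPR allows fines of up to 4 percent of global annual revenue
    annual_revenue = 40.7e9  # USD; Facebook's reported 2017 revenue (assumed here for scale)
    print(f"maximum GDPR fine: ${0.04 * annual_revenue / 1e9:.1f} billion")  # ~$1.6 billion

    That’s the “more than 70 percent” of members the article describes, and the “billions of dollars” of fine exposure per infraction that Facebook just moved out of Irish jurisdiction.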

    But Facebook wants to assure everyone that this move will have no meaningful impact on anyone’s privacy, because it’s committed to having ALL of its users globally follow the same rules as laid out by the EU’s new GDPR. At least ‘in spirit’. That’s right, Facebook is telling the world that it’s going to implement the GDPR globally at the very same time it moves most of its users out of the GDPR’s jurisdiction. That’s not suspicious or anything:

    ...
    ‘IN SPIRIT’

    In a state­ment giv­en to Reuters, Face­book played down the impor­tance of the terms of ser­vice change, say­ing it plans to make the pri­va­cy con­trols and set­tings that Europe will get under GDPR avail­able to the rest of the world.

    “We apply the same pri­va­cy pro­tec­tions every­where, regard­less of whether your agree­ment is with Face­book Inc or Face­book Ire­land,” the com­pa­ny said.

    Ear­li­er this month, Face­book Chief Exec­u­tive Mark Zucker­berg told Reuters in an inter­view that his com­pa­ny would apply the EU law glob­al­ly “in spir­it,” but stopped short of com­mit­ting to it as the stan­dard for the social net­work across the world.

    In prac­tice, the change means the 1.5 bil­lion affect­ed users will not be able to file com­plaints with Ireland’s Data Pro­tec­tion Com­mis­sion­er or in Irish courts. Instead they will be gov­erned by more lenient U.S. pri­va­cy laws, said Michael Veale, a tech­nol­o­gy pol­i­cy researcher at Uni­ver­si­ty Col­lege Lon­don.

    Face­book will have more lee­way in how it han­dles data about those users, Veale said. Cer­tain types of data such as brows­ing his­to­ry, for instance, are con­sid­ered per­son­al data under EU law but are not as pro­tect­ed in the Unit­ed States, he said.
    ...

    So why did Face­book make the move if it’s pledg­ing to imple­ment the GDPR ‘in spir­it’ for every­one? Well, accord­ing to Face­book, it’s “because EU law requires spe­cif­ic lan­guage.” That’s not dubi­ous or any­thing:

    ...
    The com­pa­ny said its ratio­nale for the change was relat­ed to the Euro­pean Union’s man­dat­ed pri­va­cy notices, “because EU law requires spe­cif­ic lan­guage.” For exam­ple, the com­pa­ny said, the new EU law requires spe­cif­ic legal ter­mi­nol­o­gy about the legal basis for pro­cess­ing data which does not exist in U.S. law.
    ...

    And, of course, Face­book isn’t the only multi­na­tion­al inter­net firm look­ing to move out of Ire­land. Microsoft­’s LinkedIn is mak­ing the same move, under a sim­i­lar­ly laugh­able pre­tense:

    ...
    Oth­er multi­na­tion­al com­pa­nies are also plan­ning changes. LinkedIn, a unit of Microsoft Corp, tells users in its exist­ing terms of ser­vice that if they are out­side the Unit­ed States, they have a con­tract with LinkedIn Ire­land. New terms that take effect May 8 move non-Euro­peans to con­tracts with U.S.-based LinkedIn Corp.

    LinkedIn said in a statement on Wednesday that all users are entitled to the same privacy protections. “We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data,” the company said.
    ...

    “We’ve sim­ply stream­lined the con­tract loca­tion to ensure all mem­bers under­stand the LinkedIn enti­ty respon­si­ble for their per­son­al data”

    Yeah, LinkedIn is making the move so users won’t be confused about whether the US or the EU LinkedIn entity is responsible for their personal data. LOL! We’ll no doubt get similarly laughable explanations from all the other multinational firms making similar moves.

    Also don’t forget that these moves mean the US’s data privacy rules are going to be even more important for the internet giants, because now those rules are going to apply to users everywhere but the EU. And that means the lobbying of US lawmakers and regulators is going to be even more important going forward. The more companies that relocate to the US to escape the EU’s GDPR for their international customer bases, the greater the incentives for undermining US data privacy laws. In other words, it’s a really great time to be a Republican data privacy lobbyist.

    Posted by Pterrafractyl | April 30, 2018, 5:30 pm
  12. Here’s a pair of stories that relate to both Cambridge Analytica and the bizarre collection of stories surrounding the ‘Seychelles backchannel’ #TrumpRussia story (like George Nader’s participation in the ‘backchannel’ or Nader’s hiring of GOP money man Elliot Broidy to lobby on behalf of the UAE and the Saudis). And the connecting element is none other than Erik Prince:

    So long Cam­bridge Ana­lyt­i­ca! Yep, Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

    Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca? Of course not. They’re just rebrand­ing under a new com­pa­ny, Emer­da­ta. It’s kind of like when Black­wa­ter renamed itself Xe, and then Acad­e­mi. And intrigu­ing­ly, Cam­bridge Ana­lyt­i­ca’s trans­for­ma­tion into Emer­da­ta intro­duces anoth­er asso­ci­a­tion with Black­wa­ter: Emerdata’s direc­tors include John­son Ko Chun Shun, a Hong Kong financier and busi­ness part­ner of Erik Prince:

    The New York Times

    Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data

    By Nicholas Con­fes­sore and Matthew Rosen­berg
    May 2, 2018

    The embat­tled polit­i­cal con­sult­ing firm Cam­bridge Ana­lyt­i­ca announced on Wednes­day that it would cease most oper­a­tions and file for bank­rupt­cy amid grow­ing legal and polit­i­cal scruti­ny of its busi­ness prac­tices and work for Don­ald J. Trump’s pres­i­den­tial cam­paign.

    The deci­sion was made less than two months after Cam­bridge Ana­lyt­i­ca and Face­book became embroiled in a data-har­vest­ing scan­dal that com­pro­mised the per­son­al infor­ma­tion of up to 87 mil­lion peo­ple. Rev­e­la­tions about the mis­use of data, pub­lished in March by The New York Times and The Observ­er of Lon­don, plunged Face­book into cri­sis and prompt­ed reg­u­la­tors and law­mak­ers to open inves­ti­ga­tions into Cam­bridge Ana­lyt­i­ca.

    In a state­ment post­ed to its web­site, Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.

    But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices.

    “Over the past sev­er­al months, Cam­bridge Ana­lyt­i­ca has been the sub­ject of numer­ous unfound­ed accu­sa­tions and, despite the company’s efforts to cor­rect the record, has been vil­i­fied for activ­i­ties that are not only legal, but also wide­ly accept­ed as a stan­dard com­po­nent of online adver­tis­ing in both the polit­i­cal and com­mer­cial are­nas,” the company’s state­ment said.

    Cam­bridge Ana­lyt­i­ca also said the results of an inde­pen­dent inves­ti­ga­tion it had com­mis­sioned, which it released on Wednes­day, con­tra­dict­ed asser­tions made by for­mer employ­ees and con­trac­tors about its acqui­si­tion of Face­book data. The report played down the role of a con­trac­tor turned whis­tle-blow­er, Christo­pher Wylie, who helped the com­pa­ny acquire Face­book data, call­ing it “very mod­est.”

    Cam­bridge Ana­lyt­i­ca did not reply to requests for com­ment. The news of Cam­bridge ceas­ing oper­a­tions was ear­li­er report­ed by The Wall Street Jour­nal and Giz­mo­do.

    The com­pa­ny, bankrolled by Robert Mer­cer, a wealthy Repub­li­can donor who invest­ed at least $15 mil­lion, offered tools that it claimed could iden­ti­fy the per­son­al­i­ties of Amer­i­can vot­ers and influ­ence their behav­ior. Those mod­el­ing tech­niques under­pinned Cam­bridge Analytica’s work for the Trump cam­paign and for oth­er can­di­dates in 2014 and 2016.

    But Cam­bridge Ana­lyt­i­ca came under scruti­ny over the past year, first for its pur­port­ed meth­ods of pro­fil­ing vot­ers and then over alle­ga­tions that it improp­er­ly har­vest­ed pri­vate data from Face­book users. Last year, the com­pa­ny was drawn into the spe­cial coun­sel inves­ti­ga­tion of Russ­ian inter­fer­ence in the 2016 elec­tion.

    The com­pa­ny was also forced to sus­pend its chief exec­u­tive, Alexan­der Nix, after a British tele­vi­sion chan­nel released an under­cov­er video. In it, Mr. Nix sug­gest­ed that the com­pa­ny had used seduc­tion and bribery to entrap politi­cians and influ­ence for­eign elec­tions.

    Face­book has since announced changes to its poli­cies for col­lect­ing and han­dling user data. Its chief exec­u­tive, Mark Zucker­berg, tes­ti­fied last month before Con­gress, where he faced crit­i­cism for fail­ing to pro­tect users’ data.

    The con­tro­ver­sy dealt a major blow to Cam­bridge Analytica’s ambi­tions of expand­ing its com­mer­cial busi­ness in the Unit­ed States, while also bring­ing unwant­ed atten­tion to the Amer­i­can gov­ern­ment con­tracts sought by SCL Group, an intel­li­gence con­trac­tor.

    Besides work­ing for the Trump cam­paign, Cam­bridge Ana­lyt­i­ca was pre­vi­ous­ly hired by the polit­i­cal action com­mit­tee found­ed by John R. Bolton, the nation­al secu­ri­ty advis­er. It had also worked for the 2016 pres­i­den­tial cam­paigns of Ben Car­son and Sen­a­tor Ted Cruz.

    But no can­di­dates for fed­er­al office in the Unit­ed States have dis­closed pay­ing Cam­bridge Ana­lyt­i­ca dur­ing the 2018 cycle. A Repub­li­can con­gres­sion­al can­di­date in Cal­i­for­nia did report void­ing a $10,000 trans­ac­tion with the com­pa­ny in ear­ly March, accord­ing to fed­er­al elec­tion records.

    The com­pa­ny also unsuc­cess­ful­ly tried to court some major com­mer­cial clients in the last year, includ­ing Mer­cedes-Benz and Anheuser-Busch InBev, the glob­al brew­er, accord­ing to one for­mer employ­ee. Cam­bridge pitched AB InBev by claim­ing that it could posi­tion Bud Light as the beer for the young par­ty crowd and Bud­weis­er for old-school con­ser­v­a­tives, accord­ing to the for­mer employ­ee, who asked not to be named because the per­son was restrict­ed from speak­ing about the company’s busi­ness.

    In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

    Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

    An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. Efforts to reach him by phone on Wednes­day were unsuc­cess­ful.

    ...

    ———

    “Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data” by Nicholas Con­fes­sore and Matthew Rosen­berg; The New York Times; 05/02/2018

    “In a state­ment post­ed to its web­site, Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.”

    So Cam­bridge Ana­lyt­i­ca is going away and the SCL Group is get­ting out of the elec­tions busi­ness. At least on the sur­face. But there’s still an open ques­tion of who is going to retain the rights to all the infor­ma­tion held by Cam­bridge Ana­lyt­i­ca, includ­ing all those psy­cho­graph­ic vot­er pro­files that are pre­sum­ably worth quite a bit of mon­ey:

    ...
    But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices.
    ...

    And that ques­tion over who is going to own the rights to all that data is par­tic­u­lar­ly rel­e­vant giv­en that exec­u­tives at Cam­bridge Ana­lyt­i­ca and SCL Group and the Mer­cers recent­ly formed a new com­pa­ny: Emer­da­ta. And look who hap­pens to be one of Emer­data’s direc­tors: John­son Ko Chun Shun, a Hong Kong financier and busi­ness part­ner of Erik Prince:

    ...
    In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

    Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

    An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. Efforts to reach him by phone on Wednes­day were unsuc­cess­ful.
    ...

    “Cam­bridge and SCL offi­cials pri­vate­ly raised the pos­si­bil­i­ty that Emer­da­ta could be used for a Black­wa­ter-style rebrand­ing of Cam­bridge Ana­lyt­i­ca and the SCL Group.”

    LOL! Yeah, the pos­si­bil­i­ty for a “Black­wa­ter-style rebrand­ing” is look­ing more like a real­i­ty at this point. Although we’ll see how many clients this new com­pa­ny gets.

    And that brings us to the following piece. It’s a fascinating piece that summarizes all of the various things we’ve learned about Erik Prince, the #TrumpRussia investigation, and the UAE. And as the article notes, at the same time Emerdata was being formed in 2017 (August 11, 2017, was the incorporation date), the UAE was already paying SCL to run a social media campaign against Qatar as part of the UAE’s #BoycottQatar campaign. And as the article also notes, the name “Emerdata” sure sounds like a shortened version of “Emirati-Data”.

    So given the presence of Erik Prince’s business partner on the board of directors of Emerdata, and given Prince’s extensive ties to the UAE, we have to ask whether Cambridge Analytica is about to become the new plaything of the UAE:

    Medi­um

    From the Sey­chelles to the White House to Cam­bridge Ana­lyt­i­ca, Erik Prince and the UAE are key parts of the Trump sto­ry

    Wendy Siegel­man
    Apr 8, 2018

    In Jan­u­ary 2017 Erik Prince attend­ed a meet­ing in the Sey­chelles with the Unit­ed Arab Emirate’s Crown Prince Mohammed bin Zayed Al-Nahyan, the CEO of the Russ­ian Direct Invest­ment Fund Kir­ill Dmitriev, and George Nad­er, a for­mer con­sul­tant for Erik Prince’s com­pa­ny Black­wa­ter.

    While Erik Prince tes­ti­fied in Novem­ber 2017 to the House Intel­li­gence Com­mit­tee that the meet­ing with Dmitriev was unplanned, news broke last week that Mueller has evi­dence that Prince’s meet­ing with Putin’s ally Dmitriev may not have been a chance encounter, con­tra­dict­ing Prince’s sworn tes­ti­mo­ny. And, accord­ing to George Nad­er, a main pur­pose of the meet­ing was to set up a com­mu­ni­ca­tion chan­nel between the Russ­ian gov­ern­ment and the incom­ing Trump admin­is­tra­tion.

    Ear­li­er this year, a seem­ing­ly unre­lat­ed scan­dal erupt­ed after a sto­ry broke about Trump’s data com­pa­ny Cam­bridge Ana­lyt­i­ca har­vest­ing Face­book data on tens of mil­lions of peo­ple. Short­ly after that a Chan­nel 4 News inves­ti­ga­tion revealed under­cov­er film of Cam­bridge Ana­lyt­i­ca exec­u­tives brag­ging about the dirty tricks they use to influ­ence elec­tions.

    As the Cam­bridge Ana­lyt­i­ca scan­dal was unfold­ing, I broke the sto­ry about a new com­pa­ny Emer­da­ta Lim­it­ed, cre­at­ed by Cam­bridge Ana­lyt­i­ca exec­u­tives, that in ear­ly 2018 added new board mem­bers Rebekah and Jen­nifer Mer­cer, Cheng Peng, Chun Shun Ko John­son, who is a busi­ness part­ner of Erik Prince, and Ahmed Al Khat­ib, a ‘Cit­i­zen of Sey­chelles’.

    In 2017 as Cam­bridge Ana­lyt­i­ca exec­u­tives cre­at­ed Emer­da­ta, they were also work­ing on behalf of the UAE through SCL Social, which had a $330,000 con­tract to run a social media cam­paign for the UAE against Qatar, fea­tur­ing the theme #Boy­cottQatar. One of the Emer­da­ta direc­tors may have ties to the UAE and the com­pa­ny name, coin­ci­den­tal­ly, sounds like a play on Emirati-Data…Emerdata.

    The Unit­ed Arab Emi­rates and peo­ple advo­cat­ing for the inter­ests of the UAE—including Prince, Nad­er, and Trump fundrais­er Elliot Broidy who has done large busi­ness deals with the UAE—have start­ed to appear fre­quent­ly in news relat­ed to Mueller’s inves­ti­ga­tion. Erik Prince, the broth­er of the U.S. Sec­re­tary of Edu­ca­tion Bet­sy DeVos, lived in the UAE, attend­ed the Sey­chelles meet­ing with the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, is busi­ness part­ners with Chun Shun Ko who just joined the board of the new Cam­bridge Analytica/SCL com­pa­ny Emer­da­ta, and SCL had a large con­tract to work on behalf of the UAE.

    To bet­ter under­stand the role Erik Prince and the UAE have played in the Trump-Rus­sia story—and in the much broad­er sto­ry of glob­al polit­i­cal influ­ence, and often corruption—below is a time­line track­ing some key events. Not all events are relat­ed, but review­ing the infor­ma­tion chrono­log­i­cal­ly may help answer a few ques­tions, includ­ing: Why was the UAE involved in a meet­ing with Erik Prince to set up a com­mu­ni­ca­tion chan­nel with Rus­sia? Is the UAE involved with Cam­bridge Analytica’s new com­pa­ny Emer­da­ta? Does Erik Prince have any con­nec­tion to Cam­bridge Ana­lyt­i­ca, even if only indi­rect­ly through Chun Shun Ko, Steve Ban­non, or the Mer­cers? And what role has the UAE had in influ­enc­ing the Trump admin­is­tra­tion?

    Note: this time­line may be updat­ed peri­od­i­cal­ly to include new per­ti­nent infor­ma­tion. Each event below includes the source data link. Name vari­a­tions (e.g. Chun Shun Ko John­son vs John­son Ko Chun-shun) reflect how names are pre­sent­ed by each source.

    2010

    * In a depo­si­tion Erik Prince said he had pre­vi­ous­ly hired George Nad­er to help Black­wa­ter as a “busi­ness devel­op­ment con­sul­tant that we retained in Iraq” because the com­pa­ny was look­ing for con­tracts with the Iraqi gov­ern­ment. New York Times
    * After a series of civ­il law­suits, crim­i­nal charges and Con­gres­sion­al inves­ti­ga­tions against Erik Prince’s com­pa­ny Black­wa­ter and its for­mer exec­u­tives, Prince moved to the Unit­ed Arab Emi­rates. New York Times.

    2011

    * Sheik Mohamed bin Zayed al-Nahyan of Abu Dhabi hired Erik Prince to build a fight­ing force, pay­ing $529 mil­lion to build an army. Addi­tion­al­ly, Prince “worked with the Emi­rati gov­ern­ment on var­i­ous ventures…including an oper­a­tion using South African mer­ce­nar­ies to train Soma­lis to fight pirates.” New York Times
    * A movie called “The Project,” about Erik Prince’s UAE-fund­ed pri­vate army in Soma­lia, was paid for by the Mov­ing Pic­ture Insti­tute where Rebekah Mer­cer is on the board of Trustees. Gawk­er Web­site

    2012

    * Erik Prince, who works and lives in Abu Dhabi in the Unit­ed Arab Emi­rates, cre­at­ed Fron­tier Resource Group, an Africa-ded­i­cat­ed invest­ment firm part­nered with major Chi­nese enter­pris­es. South Chi­na Morn­ing Post

    2013

    * The Russ­ian Direct Invest­ment Fund led by CEO Kir­ill Dmitriev, and the UAE’s Mubadala Devel­op­ment Com­pa­ny based in Abu Dhabi, launched a $2 bil­lion co-invest­ment fund to pur­sue oppor­tu­ni­ties in Rus­sia. PR Newswire

    2014

    * Jan­u­ary: Erik Prince was named Chair­man of DVN Hold­ings, con­trolled by Hong Kong busi­ness­man John­son Ko Chun-shun and Chi­nese state-owned Citic Group. DVN’s board pro­posed that the firm be renamed Fron­tier Ser­vices Group. South Chi­na Morn­ing Post.
    * Jan­u­ary: Erik Prince’s busi­ness part­ner, Dori­an Barak, became a Non-Exec­u­tive Direc­tor of Reori­ent Group Lim­it­ed, an invest­ment com­pa­ny where Ko Chun Shun John­son was Chair­man and Exec­u­tive Direc­tor, and had done a $350 mil­lion deal with Jack Ma. 2014 Annu­al Report. Forbes
    * Erik Prince’s busi­ness part­ner Dori­an Barak joined the board of Alu­fur Min­ing, “an inde­pen­dent min­er­al explo­ration and devel­op­ment com­pa­ny with sig­nif­i­cant baux­ite inter­ests in the Repub­lic of Guinea.” (Prince would lat­er tes­ti­fy that the pur­pose of his Sey­chelles trip was to dis­cuss min­er­als and ‘baux­ite’ with the UAE’s Mohammed bin Zayed). Alu­fur web­site
    * August: The John Bolton Super PAC found­ed by John Bolton, Pres­i­dent Trump’s incom­ing nation­al secu­ri­ty advis­er, hired Cam­bridge Ana­lyt­i­ca months after the firm was found­ed and while it was still har­vest­ing Face­book data. In the two years that fol­lowed, Bolton’s super PAC spent near­ly $1.2 mil­lion pri­mar­i­ly for “sur­vey research” and “behav­ioral micro­tar­get­ing with psy­cho­graph­ic mes­sag­ing” using Face­book data. New York Times

    2016

    * August/September: Erik Prince donat­ed $150,000 to Make Amer­i­ca Num­ber 1, a pro-Trump PAC for which Robert Mer­cer has been the largest fun­der. Open Secrets
    * Octo­ber: Erik Prince donat­ed $100,000 to the Trump Vic­to­ry fund and $33,400 to the Repub­li­can Nation­al Com­mit­tee. FEC
    * October 11: Erik Prince did an interview with Breitbart News Daily describing Hillary Clinton's "demonstrable links to Russia," particularly her complicity in "selling 20 percent of the United States's uranium supply to a Russian state company." Breitbart
    * November 4: Erik Prince told Breitbart News Daily that "The NYPD wanted to do a press conference announcing the warrants and the additional arrests they were making" in the Anthony Weiner investigation, but received "huge pushback" from the Justice Department. Prince described criminal culpability in emails from Weiner's laptop related to "money laundering, underage sex, pay-for-play." Breitbart
    * December: The United Arab Emirates' crown prince of Abu Dhabi, Sheikh Mohamed bin Zayed al-Nahyan, visited Trump Tower and met with Jared Kushner, Michael Flynn, and Steve Bannon. In an unusual breach of protocol, the Obama administration was not notified about the visit. Washington Post
    * Erik Prince told the House Intel­li­gence Com­mit­tee that Steve Ban­non informed him about the Decem­ber Trump Tow­er meet­ing with Mohamed bin Zayed al-Nahyan. Prince also said he had sent Ban­non unso­licit­ed pol­i­cy papers dur­ing the cam­paign. CNN

    2017

    Jan­u­ary 2017

    * One week pri­or to the meet­ing in the Sey­chelles, sources report­ed that George Nad­er met with Erik Prince and lat­er sent him infor­ma­tion on Kir­ill Dmitriev, the CEO of the Russ­ian Direct Invest­ment Fund, con­tra­dict­ing Prince’s sworn tes­ti­mo­ny to the House Intel­li­gence Com­mit­tee that the meet­ing with Kir­ill Dmitriev in the Sey­chelles was unex­pect­ed. ABC News
    * Jan­u­ary 11: A meet­ing was held in the Sey­chelles with Erik Prince, the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, Kir­ill Dmitriev, and George Nad­er, who had pre­vi­ous­ly con­sult­ed for Prince’s Black­wa­ter. Accord­ing to Nad­er the meet­ing was to dis­cuss for­eign pol­i­cy and to estab­lish a line of com­mu­ni­ca­tion between the Russ­ian gov­ern­ment and the incom­ing Trump admin­is­tra­tion. ABC News

    Feb­ru­ary 2017

    * “After decades of close polit­i­cal and defense prox­im­i­ty with the Unit­ed States, the Unit­ed Arab Emi­rates have con­clud­ed three major agree­ments with Rus­sia which could lead to its air force being ulti­mate­ly re-equipped with Russ­ian com­bat air­craft.” Defense Aero­space

    March 2017

    * Elliott Broidy, a top GOP and Trump fundrais­er with hun­dreds of mil­lions of dol­lars in busi­ness deals with the UAE, sent George Nad­er a spread­sheet out­lin­ing a pro­posed $12.7 mil­lion cam­paign against Qatar and the Mus­lim Broth­er­hood. Broidy also sent an email to George Nad­er refer­ring to Secure Amer­i­ca Now as a group he worked with. New York Times
    * The largest fun­der of Secure Amer­i­ca Now, a secre­tive group that cre­ates anti-Mus­lim adver­tis­ing, is Robert Mer­cer, who is also the largest fun­der of Cam­bridge Ana­lyt­i­ca. Open Secrets

    April 2017

    * Jared Kushner’s father Charles Kush­n­er met with Qatar’s min­is­ter of finance Ali Sharif Al Ema­di to dis­cuss financ­ing of Kush­n­er Com­pa­nies’ 666 Fifth Avenue build­ing. Inter­cept
    * “Tom Bar­rack, a Trump friend who had sug­gest­ed that Thani con­sid­er invest­ing in the Kush­n­er prop­er­ty, has said Charles Kush­n­er was “crushed” when his son got the White House job because that prompt­ed the Qataris to pull out.” Wash­ing­ton Post

    May 2017

    * May 20–21: Don­ald Trump made his first over­seas trip to Riyadh, Sau­di Ara­bia, accom­pa­nied by Jared Kush­n­er, Steve Ban­non and oth­ers. On his first day there Trump signed a joint “strate­gic vision” that includ­ed $110 bil­lion in Amer­i­can arms sales and oth­er invest­ments. Wash­ing­ton Post
    * May 23: Per U.S. offi­cials, the UAE gov­ern­ment dis­cussed plans to hack Qatar. Wash­ing­ton Post
    * May 24: Per U.S. offi­cials, the UAE orches­trat­ed the hack of Qatari gov­ern­ment news and social media sites in order to post incen­di­ary false quotes attrib­uted to Qatar’s emir, Sheikh Tamim Bin Hamad al-Thani. Wash­ing­ton Post
    * Late May: Fol­low­ing the hack, Sau­di Ara­bia, UAE, Bahrain and Egypt banned Qatari media, broke rela­tions and declared a trade and diplo­mat­ic boy­cott. Wash­ing­ton Post

    June 2017

    * June 5: Gulf Cooperation Council members Saudi Arabia, the United Arab Emirates, and Bahrain, joined by Egypt, released coordinated statements accusing Qatar of supporting terrorist groups and saying that as a result they were cutting links to the country by land, sea and air. Washington Post
    * June 6: Trump tweet­ed his sup­port for the block­ade against Qatar, while Rex Tiller­son and James Mat­tis called for calm. The Guardian
    * June 7: U.S. inves­ti­ga­tors from the FBI, who sent a team to Doha to help the Qatari gov­ern­ment inves­ti­gate the alleged hack­ing inci­dent, sus­pect­ed Russ­ian hack­ers plant­ed the fake news behind the Qatar cri­sis. CNN
    * June 27: Rex Tiller­son “reaf­firmed his strong sup­port for Kuwait’s efforts to medi­ate the dis­pute between Qatar and Sau­di Ara­bia, the UAE, Bahrain, and Egypt” and “lead­ers reaf­firmed the need for all par­ties to exer­cise restraint to allow for pro­duc­tive diplo­mat­ic dis­cus­sions.” U.S. Depart­ment of State Read­out
    * An aide to Tillerson was convinced Trump's support for the UAE came from the UAE Ambassador Yousef Al Otaiba, a close friend of Jared Kushner, known to speak to Kushner on a weekly basis. The American Conservative

    July 2017

    * U.S. intel­li­gence offi­cials con­firmed that the UAE orches­trat­ed the hack of Qatari gov­ern­ment news and social media sites with incen­di­ary false quotes attrib­uted to Qatar’s emir, Sheikh Tamim Bin Hamad al-Thani. Wash­ing­ton Post

    August 2017

    * Emer­da­ta Lim­it­ed was incor­po­rat­ed in the UK with Cam­bridge Analytica’s Chair­man Julian Wheat­land and Chief Data Offi­cer Alexan­der Tayler as sig­nif­i­cant own­ers. Com­pa­ny fil­ing

    Sep­tem­ber 2017

    * Cambridge Analytica CEO Alexander Nix and Steve Bannon were both present at the CLSA Investors' Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Erik Prince and Ko Chun Shun Johnson's Frontier Services Group. Bloomberg Tweet

    Octo­ber 2017

    * October 6: Elliott Broidy, whose company Circinus has had hundreds of millions of dollars in contracts with the UAE, met Trump and suggested Trump meet with the UAE's Mohammed bin Zayed al-Nahyan. Broidy said Trump thought it was a good idea. Broidy also "personally urged Mr. Trump to fire Mr. Tillerson, whom the Saudis and Emiratis saw as insufficiently tough on Iran and Qatar." New York Times
    * October 6: SCL Social Limited, part of SCL Group/Cambridge Analytica, was hired by UK company Project Associates for approximately $330,000 to implement a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. FARA filing
    * October 23: Steve Bannon spoke at a Hudson Institute event on "Countering Violent Extremism: Qatar, Iran, and the Muslim Brotherhood," and called the Qatar blockade "the single most important thing that's happening in the world." Bannon "bragged that the president's trip to Saudi Arabia in May gave the Saudis the gumption to lead a blockade against Doha." The National Interest
    * Octo­ber 29: Jared Kush­n­er returned from an unan­nounced trip to Sau­di Ara­bia to dis­cuss Mid­dle East peace. Tom Bar­rack, a long­time friend and close Trump con­fi­dant said “The key to solv­ing (the Israel-Pales­tin­ian dis­pute) is Egypt. And the key to Egypt is Abu Dhabi and Sau­di Ara­bia.” Politi­co

    Novem­ber 2017

    * Novem­ber 4: A week after Kushner’s vis­it Sau­di prince Mohammed bin Salman, launched an anti-cor­rup­tion crack­down and arrest­ed dozens of mem­bers of the Sau­di roy­al fam­i­ly. Per three sources “Crown Prince Mohammed told con­fi­dants that Kush­n­er had dis­cussed the names of Saud­is dis­loy­al to the crown prince.” How­ev­er, “Kush­n­er, through his attorney’s spokesper­son, denies hav­ing done so.” Anoth­er source said Mohammed bin Salman told UAE Crown Prince Mohammed bin Zayed that Kush­n­er was “in his pock­et.” The Inter­cept
    * November 15: Dmitry Rybolovlev sold his Leonardo da Vinci painting 'Salvator Mundi' for $450.3 million, setting a new record for the highest priced painting sold at auction. Rybolovlev, who had purchased a Florida home from Trump in 2008 for $95 million, more than $50 million above Trump's purchase price, sold the Da Vinci painting for $322 million above his purchase price. The painting was bought by Saudi Prince Bader bin Abdullah on behalf of crown prince Mohammad Bin Salman. The price was driven up by a bidding war with the UAE's Mohammed bin Zayed al-Nahyan, as both bidders feared losing the painting to the Qatari ruling family. After the purchase came under criticism, the Da Vinci painting was swapped with the UAE Ministry of Culture in exchange for an equally valued yacht. Daily Mail
    * Novem­ber 20: Erik Prince tes­ti­fied before the U.S. House of Rep­re­sen­ta­tives Intel­li­gence Com­mit­tee and said that he arranged his trip to Sey­chelles with peo­ple who worked for Mohammed bin Zayed to dis­cuss “secu­ri­ty issues and min­er­al issues and even baux­ite.” Prince then described how some­one, maybe one of Mohammed bin Zayed’s broth­ers, told Prince he should meet with Kir­ill Dim­itriev, describ­ing him as “a Russ­ian guy that we’ve done some busi­ness with in the past.” Erik Prince Tran­script

    Decem­ber 2017

    * Buz­zfeed broke a sto­ry on how Erik Prince had pitched the Trump admin­is­tra­tion on a plan to hire him to pri­va­tize the Afghan war and mine Afghanistan’s valu­able min­er­als. A slide pre­sen­ta­tion of Prince’s pitch described Afghanistan’s rich deposits of min­er­als with an esti­mat­ed val­ue of $1 tril­lion, and described his plan as “a strate­gic min­er­al resource extrac­tion fund­ed effort.” Buz­zfeed

    2018
    Jan­u­ary 2018

    * George Nad­er emailed a request to Elliott Broidy say­ing the leader of the UAE asked Trump to call the crown prince of Sau­di Ara­bia to smooth over poten­tial bad feel­ings cre­at­ed by the book “Fire and Fury.” Nad­er also reit­er­at­ed to Broidy the desire of the ruler of the UAE to meet alone with Trump. Days lat­er as Nad­er went to meet Broidy at Mar-a-lago to cel­e­brate the anniver­sary of the inau­gu­ra­tion, he was met at Dulles Air­port by FBI agents work­ing for Mueller. New York Times

    Jan­u­ary to March 2018

    * Emer­da­ta Lim­it­ed added new direc­tors Alexan­der Nix, John­son Chun Shun Ko, Cheng Peng, Ahmad Al Khat­ib, Rebekah Mer­cer, and Jen­nifer Mer­cer. John­son Chun Shun Ko is the busi­ness part­ner of Erik Prince. Ahmad Al Khat­ib is iden­ti­fied as ‘Cit­i­zen of Sey­chelles’. Shares are issued val­ued at 1,912,512 GBP. Emer­da­ta arti­cle
    * SCL/Cambridge Ana­lyt­i­ca founder Nigel Oakes told Chan­nel 4 News it was his under­stand­ing that Emer­da­ta was set up a year ago to acquire all of Cam­bridge Ana­lyt­i­ca and all of SCL. Chan­nel 4 News
    * A web­site for a com­pa­ny called Coinagelabs.org dis­played Cam­bridge Ana­lyt­i­ca as a part­ner. The team was led by CEO Sandy Peng, who pre­vi­ous­ly worked at Reori­ent Cap­i­tal, part of Reori­ent Group where Chun Shun Ko had been exec­u­tive chair­man. The site is no longer avail­able. Twit­ter Thread

    March 2018

    * March 13: A key goal of UAE political advisor George Nader and of Elliott Broidy, who had hundreds of millions of dollars of business with the UAE, was accomplished when Rex Tillerson was fired. New York Times
    * March 19: Russia-friendly California representative Dana Rohrabacher, who has criticized the Magnitsky Act and for whom Prince had interned in the 1990s, attended a fundraiser hosted for him by Erik Prince and Oliver North at Prince's home in Virginia. The Intercept

    April 2018

    * Sources reported that "Special Counsel Robert Mueller has obtained evidence that calls into question Congressional testimony given by Trump supporter and Blackwater founder Erik Prince last year, when he described a meeting in Seychelles with a Russian financier close to Vladimir Putin as a casual chance encounter 'over a beer.'" ABC News
    * John Bolton is set to begin as Don­ald Trump’s new Nation­al Secu­ri­ty Advi­sor, replac­ing Lt. Gen. H.R. McMas­ter, who had opposed Erik Prince’s plans to pri­va­tize the war in Afghanistan. Wash­ing­ton Post
    * Robert Mer­cer, the largest fun­der of Cam­bridge Ana­lyt­i­ca, has giv­en $5 mil­lion to Bolton’s super PAC since 2013. He was the Bolton super PAC’s largest donor dur­ing the 2016 elec­tion cycle, and so far, is also the largest donor for the 2018 elec­tion cycle, accord­ing to fed­er­al cam­paign finance fil­ings. The Cen­ter for Pub­lic Integri­ty
    * A source close to Erik Prince said, "now that McMaster will be replaced by neocon favorite John Bolton, and Tillerson with CIA director Mike Pompeo, who once ran an aerospace supplier, the dynamics have changed." Bolton's selection, particularly, is "going to take us in a really positive direction." Forbes
    * According to an SEC filing, Kushner Companies appears to have reached a deal to buy out its partner, Vornado Realty Trust, in the troubled 666 Fifth Avenue property. The Kushners had previously negotiated unsuccessfully with Chinese company Anbang Insurance Group, whose chairman has since been prosecuted and which has been seized by regulators. The Kushners also negotiated unsuccessfully with the former prime minister of Qatar, and shortly afterwards Qatar was hacked and blockaded by the UAE, Saudi Arabia, Bahrain and Egypt. It is not clear who will provide the financing for the deal. New York Times

    ...

    ———-

    “From the Sey­chelles to the White House to Cam­bridge Ana­lyt­i­ca, Erik Prince and the UAE are key parts of the Trump sto­ry” by Wendy Siegel­man; Medi­um; 04/08/2018

    In 2017 as Cam­bridge Ana­lyt­i­ca exec­u­tives cre­at­ed Emer­da­ta, they were also work­ing on behalf of the UAE through SCL Social, which had a $330,000 con­tract to run a social media cam­paign for the UAE against Qatar, fea­tur­ing the theme #Boy­cottQatar. One of the Emer­da­ta direc­tors may have ties to the UAE and the com­pa­ny name, coin­ci­den­tal­ly, sounds like a play on Emirati-Data…Emerdata.

    Emirati-Data = Emerdata. Is that the play on words we're seeing in this name? It does sound like a reasonable inference. Especially given Erik Prince's close association with both Emerdata's board of directors and the UAE:

    ...
    As the Cam­bridge Ana­lyt­i­ca scan­dal was unfold­ing, I broke the sto­ry about a new com­pa­ny Emer­da­ta Lim­it­ed, cre­at­ed by Cam­bridge Ana­lyt­i­ca exec­u­tives, that in ear­ly 2018 added new board mem­bers Rebekah and Jen­nifer Mer­cer, Cheng Peng, Chun Shun Ko John­son, who is a busi­ness part­ner of Erik Prince, and Ahmed Al Khat­ib, a ‘Cit­i­zen of Sey­chelles’.

    ...

    The Unit­ed Arab Emi­rates and peo­ple advo­cat­ing for the inter­ests of the UAE—including Prince, Nad­er, and Trump fundrais­er Elliot Broidy who has done large busi­ness deals with the UAE—have start­ed to appear fre­quent­ly in news relat­ed to Mueller’s inves­ti­ga­tion. Erik Prince, the broth­er of the U.S. Sec­re­tary of Edu­ca­tion Bet­sy DeVos, lived in the UAE, attend­ed the Sey­chelles meet­ing with the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, is busi­ness part­ners with Chun Shun Ko who just joined the board of the new Cam­bridge Analytica/SCL com­pa­ny Emer­da­ta, and SCL had a large con­tract to work on behalf of the UAE.
    ...

    So let's take a closer look at Prince's ties to the UAE and his partners in Hong Kong: He moves to the UAE in 2010, and gets hired by Sheik Mohamed bin Zayed al-Nahyan to build a fighting force in 2011. In 2012, while still living in the UAE, Prince creates the Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises:

    ...
    2010

    * In a depo­si­tion Erik Prince said he had pre­vi­ous­ly hired George Nad­er to help Black­wa­ter as a “busi­ness devel­op­ment con­sul­tant that we retained in Iraq” because the com­pa­ny was look­ing for con­tracts with the Iraqi gov­ern­ment. New York Times
    * After a series of civ­il law­suits, crim­i­nal charges and Con­gres­sion­al inves­ti­ga­tions against Erik Prince’s com­pa­ny Black­wa­ter and its for­mer exec­u­tives, Prince moved to the Unit­ed Arab Emi­rates. New York Times.

    2011

    * Sheik Mohamed bin Zayed al-Nahyan of Abu Dhabi hired Erik Prince to build a fight­ing force, pay­ing $529 mil­lion to build an army. Addi­tion­al­ly, Prince “worked with the Emi­rati gov­ern­ment on var­i­ous ventures…including an oper­a­tion using South African mer­ce­nar­ies to train Soma­lis to fight pirates.” New York Times
    * A movie called “The Project,” about Erik Prince’s UAE-fund­ed pri­vate army in Soma­lia, was paid for by the Mov­ing Pic­ture Insti­tute where Rebekah Mer­cer is on the board of Trustees. Gawk­er Web­site

    2012

    * Erik Prince, who works and lives in Abu Dhabi in the Unit­ed Arab Emi­rates, cre­at­ed Fron­tier Resource Group, an Africa-ded­i­cat­ed invest­ment firm part­nered with major Chi­nese enter­pris­es. South Chi­na Morn­ing Post

    2013

    * The Russ­ian Direct Invest­ment Fund led by CEO Kir­ill Dmitriev, and the UAE’s Mubadala Devel­op­ment Com­pa­ny based in Abu Dhabi, launched a $2 bil­lion co-invest­ment fund to pur­sue oppor­tu­ni­ties in Rus­sia. PR Newswire
    ...

    Then, in 2014, Prince gets named as Chair­man of DVN Hold­ings, con­trolled by Hong Kong busi­ness­man John­son Ko Chun-shun (who sits on the board of Emer­da­ta) and Chi­nese state-owned Citic Group:

    ...
    2014

    * Jan­u­ary: Erik Prince was named Chair­man of DVN Hold­ings, con­trolled by Hong Kong busi­ness­man John­son Ko Chun-shun and Chi­nese state-owned Citic Group. DVN’s board pro­posed that the firm be renamed Fron­tier Ser­vices Group. South Chi­na Morn­ing Post.
    * Jan­u­ary: Erik Prince’s busi­ness part­ner, Dori­an Barak, became a Non-Exec­u­tive Direc­tor of Reori­ent Group Lim­it­ed, an invest­ment com­pa­ny where Ko Chun Shun John­son was Chair­man and Exec­u­tive Direc­tor, and had done a $350 mil­lion deal with Jack Ma. 2014 Annu­al Report. Forbes
    * Erik Prince’s busi­ness part­ner Dori­an Barak joined the board of Alu­fur Min­ing, “an inde­pen­dent min­er­al explo­ration and devel­op­ment com­pa­ny with sig­nif­i­cant baux­ite inter­ests in the Repub­lic of Guinea.” (Prince would lat­er tes­ti­fy that the pur­pose of his Sey­chelles trip was to dis­cuss min­er­als and ‘baux­ite’ with the UAE’s Mohammed bin Zayed). Alu­fur web­site
    ...

    Then there’s all the shenani­gans involv­ing the Sey­chelles ‘backchan­nel’ (that inex­plic­a­bly involves the UAE) and GOP mon­ey-man Elliott Broidy:

    ...
    * December: The United Arab Emirates' crown prince of Abu Dhabi, Sheikh Mohamed bin Zayed al-Nahyan, visited Trump Tower and met with Jared Kushner, Michael Flynn, and Steve Bannon. In an unusual breach of protocol, the Obama administration was not notified about the visit. Washington Post
    * Erik Prince told the House Intel­li­gence Com­mit­tee that Steve Ban­non informed him about the Decem­ber Trump Tow­er meet­ing with Mohamed bin Zayed al-Nahyan. Prince also said he had sent Ban­non unso­licit­ed pol­i­cy papers dur­ing the cam­paign. CNN

    2017

    Jan­u­ary 2017

    * One week pri­or to the meet­ing in the Sey­chelles, sources report­ed that George Nad­er met with Erik Prince and lat­er sent him infor­ma­tion on Kir­ill Dmitriev, the CEO of the Russ­ian Direct Invest­ment Fund, con­tra­dict­ing Prince’s sworn tes­ti­mo­ny to the House Intel­li­gence Com­mit­tee that the meet­ing with Kir­ill Dmitriev in the Sey­chelles was unex­pect­ed. ABC News
    * Jan­u­ary 11: A meet­ing was held in the Sey­chelles with Erik Prince, the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, Kir­ill Dmitriev, and George Nad­er, who had pre­vi­ous­ly con­sult­ed for Prince’s Black­wa­ter. Accord­ing to Nad­er the meet­ing was to dis­cuss for­eign pol­i­cy and to estab­lish a line of com­mu­ni­ca­tion between the Russ­ian gov­ern­ment and the incom­ing Trump admin­is­tra­tion. ABC News

    Feb­ru­ary 2017

    * “After decades of close polit­i­cal and defense prox­im­i­ty with the Unit­ed States, the Unit­ed Arab Emi­rates have con­clud­ed three major agree­ments with Rus­sia which could lead to its air force being ulti­mate­ly re-equipped with Russ­ian com­bat air­craft.” Defense Aero­space

    March 2017

    * Elliott Broidy, a top GOP and Trump fundrais­er with hun­dreds of mil­lions of dol­lars in busi­ness deals with the UAE, sent George Nad­er a spread­sheet out­lin­ing a pro­posed $12.7 mil­lion cam­paign against Qatar and the Mus­lim Broth­er­hood. Broidy also sent an email to George Nad­er refer­ring to Secure Amer­i­ca Now as a group he worked with. New York Times
    * The largest fun­der of Secure Amer­i­ca Now, a secre­tive group that cre­ates anti-Mus­lim adver­tis­ing, is Robert Mer­cer, who is also the largest fun­der of Cam­bridge Ana­lyt­i­ca. Open Secrets
    ...

    Then Emerdata gets formed in August of 2017. The next month, Steve Bannon and Alexander Nix attend the CLSA Investors' Forum in Hong Kong. CLSA is part of Citic Group, the majority owner of Prince's Frontier Services Group:

    ...
    August 2017

    * Emer­da­ta Lim­it­ed was incor­po­rat­ed in the UK with Cam­bridge Analytica’s Chair­man Julian Wheat­land and Chief Data Offi­cer Alexan­der Tayler as sig­nif­i­cant own­ers. Com­pa­ny fil­ing

    Sep­tem­ber 2017

    * Cambridge Analytica CEO Alexander Nix and Steve Bannon were both present at the CLSA Investors' Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Erik Prince and Ko Chun Shun Johnson's Frontier Services Group. Bloomberg Tweet
    ...

    Then in Octo­ber of 2017, we have a con­tin­u­a­tion of Elliot Broidy’s lob­by­ing the Trump admin­is­tra­tion on behalf of the UAE at the same time the SCL Group gets hired to imple­ment a social media cam­paign for the UAE against Qatar:

    ...
    Octo­ber 2017

    * October 6: Elliott Broidy, whose company Circinus has had hundreds of millions of dollars in contracts with the UAE, met Trump and suggested Trump meet with the UAE's Mohammed bin Zayed al-Nahyan. Broidy said Trump thought it was a good idea. Broidy also "personally urged Mr. Trump to fire Mr. Tillerson, whom the Saudis and Emiratis saw as insufficiently tough on Iran and Qatar." New York Times
    * October 6: SCL Social Limited, part of SCL Group/Cambridge Analytica, was hired by UK company Project Associates for approximately $330,000 to implement a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. FARA filing
    * October 23: Steve Bannon spoke at a Hudson Institute event on "Countering Violent Extremism: Qatar, Iran, and the Muslim Brotherhood," and called the Qatar blockade "the single most important thing that's happening in the world." Bannon "bragged that the president's trip to Saudi Arabia in May gave the Saudis the gumption to lead a blockade against Doha." The National Interest
    * Octo­ber 29: Jared Kush­n­er returned from an unan­nounced trip to Sau­di Ara­bia to dis­cuss Mid­dle East peace. Tom Bar­rack, a long­time friend and close Trump con­fi­dant said “The key to solv­ing (the Israel-Pales­tin­ian dis­pute) is Egypt. And the key to Egypt is Abu Dhabi and Sau­di Ara­bia.” Politi­co
    ...

    Final­ly, in ear­ly 2018 we find Emer­da­ta adding Alexan­der Nix, John­son Chun Shun Ko (Prince’s part­ner at Fron­tier Ser­vices Group), Cheng Peng, Ahmad Al Khat­ib, Rebekah Mer­cer, and Jen­nifer Mer­cer to the board of direc­tors:

    ...
    Jan­u­ary to March 2018

    * Emer­da­ta Lim­it­ed added new direc­tors Alexan­der Nix, John­son Chun Shun Ko, Cheng Peng, Ahmad Al Khat­ib, Rebekah Mer­cer, and Jen­nifer Mer­cer. John­son Chun Shun Ko is the busi­ness part­ner of Erik Prince. Ahmad Al Khat­ib is iden­ti­fied as ‘Cit­i­zen of Sey­chelles’. Shares are issued val­ued at 1,912,512 GBP. Emer­da­ta arti­cle
    ...

    So it sure looks a lot like the new incar­na­tion of Cam­bridge Ana­lyt­i­ca is basi­cal­ly going to be apply­ing Cam­bridge Ana­lyt­i­ca’s psy­cho­log­i­cal war­fare meth­ods on behalf of the UAE, among oth­ers. The Chi­nese investors will also pre­sum­ably be inter­est­ed in these kinds of ser­vices. And any­one else who might want to hire a psy­cho­log­i­cal war­fare ser­vice provider run by a bunch of far right lumi­nar­ies.

    Posted by Pterrafractyl | May 3, 2018, 3:56 pm
  13. Oh look at that: Remem­ber how Alek­san­dr Kogan, the Uni­ver­si­ty of Cam­bridge pro­fes­sor who built the app used by Cam­bridge Ana­lyt­i­ca, claimed that what he was doing was rather typ­i­cal? Well, Face­book’s audit of the thou­sands of apps used on its plat­form appears to be prov­ing Kogan right. Face­book just announced that it has already found and sus­pend­ed 200 apps that appear to be mis­us­ing user data.

    Face­book won’t say which apps were sus­pend­ed, how many users were involved, or what the red flags were that trig­gered the sus­pen­sion, so we’re large­ly left in the dark in terms of the scope of the prob­lem.

    But there is one particular problem app that's been revealed, although it wasn't revealed by Facebook. It's the myPersonality app, which was also developed by Cambridge University professors at the Cambridge Psychometrics Center. Recall how Cambridge Analytica ended up working with Aleksandr Kogan only after first being rebuffed by the Cambridge Psychometrics Center. And as we're going to see in the second article below, Kogan actually worked on the myPersonality app until 2014 (when he went to work for Cambridge Analytica). So the one app of the 200 recently suspended apps that we get to know about at this point is an app Kogan helped develop. And the other 199 apps remain a mystery for now:

    The Wash­ing­ton Post

    Face­book sus­pends 200 apps fol­low­ing Cam­bridge Ana­lyt­i­ca scan­dal

    by Drew Har­well and Tony Romm
    May 14, 2018

    Face­book said Mon­day morn­ing that it had sus­pend­ed rough­ly 200 apps amid an ongo­ing inves­ti­ga­tion prompt­ed by the Cam­bridge Ana­lyt­i­ca scan­dal into whether ser­vices on the site had improp­er­ly used or col­lect­ed users’ per­son­al data.

    The com­pa­ny said in an update, its first since the social net­work announced the inter­nal audit in March, that the apps would under­go a “thor­ough inves­ti­ga­tion” into whether they had mis­used user data.

    Face­book declined to pro­vide more detail on which apps were sus­pend­ed, how many peo­ple had used them or what red flags had led them to sus­pect those apps of mis­use.

    CEO Mark Zucker­berg has said the com­pa­ny will exam­ine tens of thou­sands of apps that could have accessed or col­lect­ed large amounts of users’ per­son­al infor­ma­tion before the site’s more restric­tive data rules for third-par­ty devel­op­ers took effect in 2015.

    The com­pa­ny said teams of inter­nal and exter­nal experts will con­duct inter­views and lead on-site inspec­tions of cer­tain apps dur­ing its ongo­ing audit. Thou­sands of apps have been inves­ti­gat­ed so far, the com­pa­ny said, adding that any app that refus­es to coop­er­ate or failed the audit would be banned from the site.

    The sus­pen­sions sup­port a long-run­ning defense of Alek­san­dr Kogan, the researcher who pro­vid­ed Face­book data to Cam­bridge Ana­lyt­i­ca, that many apps besides his had gath­ered vast amounts of user infor­ma­tion under Face­book’s pre­vi­ous­ly lax data-pri­va­cy rules.

    One of the 200 apps, the per­son­al­i­ty quiz myPer­son­al­i­ty, was sus­pend­ed in ear­ly April and is under inves­ti­ga­tion, Face­book offi­cials said. Researchers at the Uni­ver­si­ty of Cam­bridge had set up the app to col­lect per­son­al infor­ma­tion about Face­book users and inform aca­d­e­m­ic research. But its data may not have been prop­er­ly secured, as first report­ed by New Sci­en­tist, which found login cre­den­tials for the app’s data­base avail­able online.

    “This is clear­ly a breach of the terms that aca­d­e­mics agree to when request­ing a col­lab­o­ra­tion with myPer­son­al­i­ty,” the Uni­ver­si­ty of Cam­bridge said in a state­ment Mon­day. “Once we learned of this, we took imme­di­ate steps to stop access to the account and to stop fur­ther data shar­ing.”

    The researchers added that aca­d­e­mics who used the tool had to ver­i­fy their iden­ti­ties and the nature of their research and agree to terms of ser­vice that pro­hib­it­ed them from shar­ing Face­book data “out­side of their research group.”

    A dif­fer­ent quiz app, devel­oped by Kogan and tapped by Cam­bridge Ana­lyt­i­ca, a polit­i­cal con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­cans, was able to pull detailed data on 87 mil­lion peo­ple, includ­ing from the app’s direct users and their friends, who had not overt­ly con­sent­ed to the app’s use.

    The announce­ment comes ahead of a Wednes­day hear­ing on Capi­tol Hill focused on Cam­bridge Ana­lyt­i­ca and data pri­va­cy. Law­mak­ers on the Sen­ate Judi­cia­ry Com­mit­tee said they would ques­tion Christo­pher Wylie, a for­mer employ­ee at the firm who brought its busi­ness prac­tices to light ear­li­er this year, along with oth­er aca­d­e­mics.

    In the Unit­ed States, the Fed­er­al Trade Com­mis­sion is inves­ti­gat­ing whether Facebook’s entan­gle­ment with Cam­bridge Ana­lyt­i­ca vio­lates its 2011 set­tle­ment with the U.S. gov­ern­ment over anoth­er series of pri­va­cy mishaps. Such vio­la­tions could car­ry sky-high fines.

    ...

    Face­book said users will be able to go to this page to see if they had used one of the sus­pect­ed apps once the com­pa­ny reveals which apps are under inves­ti­ga­tion. Com­pa­ny offi­cials would not pro­vide an esti­mat­ed time­line for that dis­clo­sure.

    ———-

    “Face­book sus­pends 200 apps fol­low­ing Cam­bridge Ana­lyt­i­ca scan­dal” by Drew Har­well and Tony Romm; The Wash­ing­ton Post; 05/14/2018

    “Face­book declined to pro­vide more detail on which apps were sus­pend­ed, how many peo­ple had used them or what red flags had led them to sus­pect those apps of mis­use.”

    Did you hap­pen to use one of the 200 sus­pend­ed apps? Who knows, although Face­book says it will noti­fy peo­ple of the names of sus­pend­ed apps even­tu­al­ly. No time­line for that dis­clo­sure is giv­en:

    ...
    Face­book said users will be able to go to this page to see if they had used one of the sus­pect­ed apps once the com­pa­ny reveals which apps are under inves­ti­ga­tion. Com­pa­ny offi­cials would not pro­vide an esti­mat­ed time­line for that dis­clo­sure.
    ...

    And, again, this is exact­ly what Kogan warned us about:

    ...
    The sus­pen­sions sup­port a long-run­ning defense of Alek­san­dr Kogan, the researcher who pro­vid­ed Face­book data to Cam­bridge Ana­lyt­i­ca, that many apps besides his had gath­ered vast amounts of user infor­ma­tion under Face­book’s pre­vi­ous­ly lax data-pri­va­cy rules.
    ...

    And note how Facebook is specifically saying it's reviewing "tens of thousands of apps that could have accessed or collected large amounts of users' personal information before the site's more restrictive data rules for third-party developers took effect in 2015". In other words, Facebook isn't reviewing all of its apps. Only those that existed before the policy change that stopped apps from exploiting the "friends permission" feature that let app developers scrape the information of Facebook users and their friends. So it sounds like this review process isn't looking for data privacy abuses under the current set of rules. Just abuses under the old set of rules:

    ...
    CEO Mark Zucker­berg has said the com­pa­ny will exam­ine tens of thou­sands of apps that could have accessed or col­lect­ed large amounts of users’ per­son­al infor­ma­tion before the site’s more restric­tive data rules for third-par­ty devel­op­ers took effect in 2015.

    The com­pa­ny said teams of inter­nal and exter­nal experts will con­duct inter­views and lead on-site inspec­tions of cer­tain apps dur­ing its ongo­ing audit. Thou­sands of apps have been inves­ti­gat­ed so far, the com­pa­ny said, adding that any app that refus­es to coop­er­ate or failed the audit would be banned from the site.
    ...

    And that appar­ent focus on abus­es from the old “friends per­mis­sion” rules sug­gests that cur­rent data use prob­lems might go unde­tect­ed. And the one app we’ve learned about, the myPer­son­al­i­ty app, is a per­fect exam­ple of the kind of app that would have been vio­lat­ing Face­book’s cur­rent data pri­va­cy rules. Because as peo­ple recent­ly learned, the Face­book data gath­ered by the app was avail­able online for the pur­pose of shar­ing with oth­er researchers, but it was so poor­ly secured that any­one could have poten­tial­ly accessed it:

    ...
    One of the 200 apps, the per­son­al­i­ty quiz myPer­son­al­i­ty, was sus­pend­ed in ear­ly April and is under inves­ti­ga­tion, Face­book offi­cials said. Researchers at the Uni­ver­si­ty of Cam­bridge had set up the app to col­lect per­son­al infor­ma­tion about Face­book users and inform aca­d­e­m­ic research. But its data may not have been prop­er­ly secured, as first report­ed by New Sci­en­tist, which found login cre­den­tials for the app’s data­base avail­able online.

    “This is clear­ly a breach of the terms that aca­d­e­mics agree to when request­ing a col­lab­o­ra­tion with myPer­son­al­i­ty,” the Uni­ver­si­ty of Cam­bridge said in a state­ment Mon­day. “Once we learned of this, we took imme­di­ate steps to stop access to the account and to stop fur­ther data shar­ing.”

    The researchers added that aca­d­e­mics who used the tool had to ver­i­fy their iden­ti­ties and the nature of their research and agree to terms of ser­vice that pro­hib­it­ed them from shar­ing Face­book data “out­side of their research group.”
    ...

    But it gets worse. Because as the following New Scientist article that revealed the myPersonality app's privacy issues points out, the data on some 6 million Facebook users was anonymized, but it was such a shoddy anonymization scheme that someone could have easily deanonymized the data in an automated fashion. And access to this database was potentially available to anyone for the past four years. So almost anyone could have grabbed this anonymized data on 6 million Facebook users and deanonymized it with relative ease.

    And putting aside the possible unofficial access of this data, the people and institutions that got official access are also concerning: more than 280 people from nearly 150 institutions accessed this database, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo. Yep, researchers at Facebook were apparently accessing this database of poorly anonymized data.

    So it should come as no surprise that, just as Aleksandr Kogan defended himself by asserting that lots of other apps did the same thing as his Cambridge Analytica app and that Facebook was well aware of how his app was being used, we're getting the exact same defense from the team behind myPersonality:

    New Sci­en­tist

    Huge new Face­book data leak exposed inti­mate details of 3m users

    By Phee Water­field and Tim­o­thy Rev­ell
    14 May 2018, updat­ed 15 May 2018

    Data from mil­lions of Face­book users who used a pop­u­lar per­son­al­i­ty app, includ­ing their answers to inti­mate ques­tion­naires, was left exposed online for any­one to access, a New Sci­en­tist inves­ti­ga­tion has found.

    Aca­d­e­mics at the Uni­ver­si­ty of Cam­bridge dis­trib­uted the data from the per­son­al­i­ty quiz app myPer­son­al­i­ty to hun­dreds of researchers via a web­site with insuf­fi­cient secu­ri­ty pro­vi­sions, which led to it being left vul­ner­a­ble to access for four years. Gain­ing access illic­it­ly was rel­a­tive­ly easy.

    The data was high­ly sen­si­tive, reveal­ing per­son­al details of Face­book users, such as the results of psy­cho­log­i­cal tests. It was meant to be stored and shared anony­mous­ly, how­ev­er such poor pre­cau­tions were tak­en that deanonymis­ing would not be hard.

    “This type of data is very pow­er­ful and there is real poten­tial for mis­use,” says Chris Sum­n­er at the Online Pri­va­cy Foun­da­tion. The UK’s data watch­dog, the Infor­ma­tion Commissioner’s Office, has told New Sci­en­tist that it is inves­ti­gat­ing.

    The data sets were con­trolled by David Still­well and Michal Kosin­s­ki at the Uni­ver­si­ty of Cambridge’s The Psy­cho­met­rics Cen­tre. Alexan­dr Kogan, at the cen­tre of the Cam­bridge Ana­lyt­i­ca alle­ga­tions, was list­ed as a col­lab­o­ra­tor on the myPer­son­al­i­ty project until the sum­mer of 2014.

    Face­book sus­pend­ed myPer­son­al­i­ty from its plat­form on 7 April say­ing the app may have vio­lat­ed its poli­cies due to the lan­guage used in the app and on its web­site to describe how data is shared.

    More than 6 mil­lion peo­ple com­plet­ed the tests on the myPer­son­al­i­ty app and near­ly half agreed to share data from their Face­book pro­files with the project. All of this data was then scooped up and the names removed before it was put on a web­site to share with oth­er researchers. The terms allow the myPer­son­al­i­ty team to use and dis­trib­ute the data “in an anony­mous man­ner such that the infor­ma­tion can­not be traced back to the indi­vid­ual user”.

    To get access to the full data set peo­ple had to reg­is­ter as a col­lab­o­ra­tor to the project. More than 280 peo­ple from near­ly 150 insti­tu­tions did this, includ­ing researchers at uni­ver­si­ties and at com­pa­nies like Face­book, Google, Microsoft and Yahoo.

    Easy back­door

    How­ev­er, for those who were not enti­tled to access the data set because they didn’t have a per­ma­nent aca­d­e­m­ic con­tract, for exam­ple, there was an easy workaround. For the last four years, a work­ing user­name and pass­word has been avail­able online that could be found from a sin­gle web search. Any­one who want­ed access to the data set could have found the key to down­load it in less than a minute.

    The pub­licly avail­able user­name and pass­word were sit­ting on the code-shar­ing web­site GitHub. They had been passed from a uni­ver­si­ty lec­tur­er to some stu­dents for a course project on cre­at­ing a tool for pro­cess­ing Face­book data. Upload­ing code to GitHub is very com­mon in com­put­er sci­ence as it allows oth­ers to reuse parts of your work, but the stu­dents includ­ed the work­ing login cre­den­tials too.

    myPer­son­al­i­ty wasn’t mere­ly an aca­d­e­m­ic project; researchers from com­mer­cial com­pa­nies were also enti­tled to access the data so long as they agreed to abide by strict data pro­tec­tion pro­ce­dures and didn’t direct­ly earn mon­ey from it.

    Still­well and Kosin­s­ki were both part of a spin-out com­pa­ny called Cam­bridge Per­son­al­i­ty Research, which sold access to a tool for tar­get­ing adverts based on per­son­al­i­ty types, built on the back of the myPer­son­al­i­ty data sets. The firm’s web­site described it as the tool that “mind-reads audi­ences”.

    Face­book start­ed inves­ti­gat­ing myPer­son­al­i­ty as part of a wider inves­ti­ga­tion into apps using the plat­form. This was start­ed by the alle­ga­tions sur­round­ing how Cam­bridge Ana­lyt­i­ca accessed data from an app called This Is Your Dig­i­tal Life devel­oped by Kogan.

    Today it announced it has suspended around 200 apps as part of its investigation into apps that had access to large amounts of information on users.

    Cam­bridge Ana­lyt­i­ca had approached the myPer­son­al­i­ty app team in 2013 to get access to the data, but was turned down because of its polit­i­cal ambi­tions, accord­ing to Still­well.

    “We are cur­rent­ly inves­ti­gat­ing the app, and if myPer­son­al­i­ty refus­es to coop­er­ate or fails our audit, we will ban it,” says Ime Archi­bong, Facebook’s vice pres­i­dent of Prod­uct Part­ner­ships.

    The myPer­son­al­i­ty app web­site has now been tak­en down, the pub­licly avail­able cre­den­tials no longer work, and Stillwell’s web­site and Twit­ter account have gone offline.

    “We are aware of an inci­dent relat­ed to the My Per­son­al­i­ty app and are mak­ing enquiries,” a spokesper­son for the Infor­ma­tion Commissioner’s Office told New Sci­en­tist.

    Per­son­al infor­ma­tion exposed

    The cre­den­tials gave access to the “Big Five” per­son­al­i­ty scores of 3.1 mil­lion users. These scores are used in psy­chol­o­gy to assess people’s char­ac­ter­is­tics, such as con­sci­en­tious­ness, agree­able­ness and neu­roti­cism. The cre­den­tials also allowed access to 22 mil­lion sta­tus updates from over 150,000 users, along­side details such as age, gen­der and rela­tion­ship sta­tus from 4.3 mil­lion peo­ple.

    “If at any time a user­name and pass­word for any files that were sup­posed to be restrict­ed were made pub­lic, it would be a con­se­quen­tial and seri­ous issue,” says Pam Dixon at the World Pri­va­cy Forum. “Not only is it a bad secu­ri­ty prac­tice, it is a pro­found eth­i­cal vio­la­tion to allow strangers to access files.”

    Beyond the pass­word leak and dis­trib­ut­ing the data to hun­dreds of researchers, there are seri­ous con­cerns with the way the anonymi­sa­tion process was per­formed.

    Each user in the data set was giv­en a unique ID, which tied togeth­er data such as their age, gen­der, loca­tion, sta­tus updates, results on the per­son­al­i­ty quiz and more. With that much infor­ma­tion, de-anonymis­ing the data can be done very eas­i­ly. “You could re-iden­ti­fy some­one online from a sta­tus update, gen­der and date,” says Dixon.

    This process could be auto­mat­ed, quick­ly reveal­ing the iden­ti­ties of the mil­lions of peo­ple in the data sets, and tying them to the results of inti­mate per­son­al­i­ty tests.

    “Any data set that has enough attrib­ut­es is extreme­ly hard to anonymise,” says Yves-Alexan­dre de Mon­tjoye at Impe­r­i­al Col­lege Lon­don. So instead of dis­trib­ut­ing actu­al data sets, the best approach is to pro­vide a way for researchers to run tests on the data. That way they get aggre­gat­ed results and nev­er access to indi­vid­u­als. “The use of the data can’t be at the expense of people’s pri­va­cy,” he says.

    The Uni­ver­si­ty of Cam­bridge says it was alert­ed to the issues sur­round­ing myPer­son­al­i­ty by the Infor­ma­tion Commissioner’s Office. It says that, as the app was cre­at­ed by Still­well before he joined the uni­ver­si­ty, “it did not go through our eth­i­cal approval process­es”. It also says “the Uni­ver­si­ty of Cam­bridge does not own or con­trol the app or data”.

    ...

    When approached, Still­well says that through­out the nine years of the project there has only been one data breach, and that researchers giv­en access to the data set must agree not to de-anonymise the data. “We believe that aca­d­e­m­ic research ben­e­fits from prop­er­ly con­trolled shar­ing of anonymised data among the research com­mu­ni­ty,” he told New Sci­en­tist.

    He also says that Face­book has long been aware of the myPer­son­al­i­ty project, hold­ing meet­ings with him­self and Kosin­s­ki going back as far as 2011. “It is there­fore a lit­tle odd that Face­book should sud­den­ly now pro­fess itself to have been unaware of the myPer­son­al­i­ty research and to believe that the use of the data was a breach of its terms,” he says.

    The inves­ti­ga­tions by Face­book and the Infor­ma­tion Commissioner’s Office should try to deter­mine who accessed the myPer­son­al­i­ty data and what it was used for. How­ev­er, as it was shared with so many dif­fer­ent peo­ple, track­ing every­one who has a copy and what they did with it will prove very dif­fi­cult. We will nev­er know exact­ly who did what with this data set. “This is the tip of the ice­berg,” says Dixon. “Who else has this data?”

    ———–

    “Huge new Face­book data leak exposed inti­mate details of 3m users” by Phee Water­field and Tim­o­thy Rev­ell; New Sci­en­tist; 05/14/2018

    “Aca­d­e­mics at the Uni­ver­si­ty of Cam­bridge dis­trib­uted the data from the per­son­al­i­ty quiz app myPer­son­al­i­ty to hun­dreds of researchers via a web­site with insuf­fi­cient secu­ri­ty pro­vi­sions, which led to it being left vul­ner­a­ble to access for four years. Gain­ing access illic­it­ly was rel­a­tive­ly easy.”

    Yep, an online data­base of high­ly sen­si­tive Face­book + psy­cho­log­i­cal pro­file data was made acces­si­ble to hun­dreds of researchers. But it was also poten­tial­ly acces­si­ble to any­one due to poor secu­ri­ty. For four years.

    And those that were giv­en offi­cial access to the data includ­ed com­pa­nies like Microsoft, Google, Yahoo, and Face­book:

    ...
    To get access to the full data set peo­ple had to reg­is­ter as a col­lab­o­ra­tor to the project. More than 280 peo­ple from near­ly 150 insti­tu­tions did this, includ­ing researchers at uni­ver­si­ties and at com­pa­nies like Face­book, Google, Microsoft and Yahoo.
    ...

    While the Face­book researchers could plau­si­bly claim that they had no idea the serv­er host­ing this data had insuf­fi­cient secu­ri­ty, it would be a lot hard­er for them to claim they had no idea the anonymiza­tion scheme was high­ly inad­e­quate:

    ...
    The data was high­ly sen­si­tive, reveal­ing per­son­al details of Face­book users, such as the results of psy­cho­log­i­cal tests. It was meant to be stored and shared anony­mous­ly, how­ev­er such poor pre­cau­tions were tak­en that deanonymis­ing would not be hard.

    “This type of data is very pow­er­ful and there is real poten­tial for mis­use,” says Chris Sum­n­er at the Online Pri­va­cy Foun­da­tion. The UK’s data watch­dog, the Infor­ma­tion Commissioner’s Office, has told New Sci­en­tist that it is inves­ti­gat­ing.
    ...

    And the only thing the myPer­son­al­i­ty team appeared to do to anonymize the data was replace names with a num­ber. THAT’S IT! And when that’s the only anonymiza­tion step employed in a data set with large amounts of data on each indi­vid­ual, includ­ing sta­tus updates, it’s going to be triv­ial to auto­mate the deanonymiza­tion of these peo­ple, espe­cial­ly for com­pa­nies like Google, Yahoo, Microsoft and Face­book:

    ...
    Per­son­al infor­ma­tion exposed

    The cre­den­tials gave access to the “Big Five” per­son­al­i­ty scores of 3.1 mil­lion users. These scores are used in psy­chol­o­gy to assess people’s char­ac­ter­is­tics, such as con­sci­en­tious­ness, agree­able­ness and neu­roti­cism. The cre­den­tials also allowed access to 22 mil­lion sta­tus updates from over 150,000 users, along­side details such as age, gen­der and rela­tion­ship sta­tus from 4.3 mil­lion peo­ple.

    ...

    Each user in the data set was giv­en a unique ID, which tied togeth­er data such as their age, gen­der, loca­tion, sta­tus updates, results on the per­son­al­i­ty quiz and more. With that much infor­ma­tion, de-anonymis­ing the data can be done very eas­i­ly. “You could re-iden­ti­fy some­one online from a sta­tus update, gen­der and date,” says Dixon.

    This process could be auto­mat­ed, quick­ly reveal­ing the iden­ti­ties of the mil­lions of peo­ple in the data sets, and tying them to the results of inti­mate per­son­al­i­ty tests.

    “Any data set that has enough attrib­ut­es is extreme­ly hard to anonymise,” says Yves-Alexan­dre de Mon­tjoye at Impe­r­i­al Col­lege Lon­don. So instead of dis­trib­ut­ing actu­al data sets, the best approach is to pro­vide a way for researchers to run tests on the data. That way they get aggre­gat­ed results and nev­er access to indi­vid­u­als. “The use of the data can’t be at the expense of people’s pri­va­cy,” he says.
    ...
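    To make concrete how mechanical that kind of re-identification can be, here is a minimal sketch of the linkage technique Dixon describes, written in Python. Everything in it is invented for illustration (toy records and made-up field names, not the actual myPersonality schema): the attack is simply matching the quasi-identifiers left in the "anonymized" records against publicly visible, named posts.

    # Minimal linkage-attack sketch. All data and field names below are
    # hypothetical; this illustrates the general technique, not the real
    # myPersonality database.

    # "Anonymized" research records: the name was replaced by an opaque ID,
    # but status updates, gender and dates were kept intact.
    anonymized = [
        {"uid": 48213, "gender": "F", "date": "2014-03-02",
         "status": "First marathon done. Never again. OK, maybe again."},
    ]

    # Publicly visible posts (names attached), e.g. scraped from profiles.
    public_posts = [
        {"name": "Jane Doe", "gender": "F", "date": "2014-03-02",
         "status": "First marathon done. Never again. OK, maybe again."},
        {"name": "John Roe", "gender": "M", "date": "2014-03-02",
         "status": "Monday again."},
    ]

    # Index the public posts by the quasi-identifier triple, then resolve
    # each "anonymized" record with a plain dictionary lookup.
    index = {(p["status"], p["gender"], p["date"]): p["name"] for p in public_posts}

    for record in anonymized:
        key = (record["status"], record["gender"], record["date"])
        name = index.get(key)
        if name:
            print(f"uid {record['uid']} re-identified as {name}")

    Scaled up to millions of records, that dictionary lookup still runs in seconds, which is why a unique ID tied to status updates, gender and dates offers essentially no anonymity.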

    Not surprisingly, two of the academics in charge of this project were part of a spin-off company that sold tools for targeting ads based on personality types. So it wasn't just that commercial companies like Google and Yahoo got access to this data; the whole enterprise appeared to be commercial in nature:

    ...
    myPer­son­al­i­ty wasn’t mere­ly an aca­d­e­m­ic project; researchers from com­mer­cial com­pa­nies were also enti­tled to access the data so long as they agreed to abide by strict data pro­tec­tion pro­ce­dures and didn’t direct­ly earn mon­ey from it.

    Still­well and Kosin­s­ki were both part of a spin-out com­pa­ny called Cam­bridge Per­son­al­i­ty Research, which sold access to a tool for tar­get­ing adverts based on per­son­al­i­ty types, built on the back of the myPer­son­al­i­ty data sets. The firm’s web­site described it as the tool that “mind-reads audi­ences”.
    ...

    And, of course, Alek­san­dr Kogan was part of this project before he went to work for Cam­bridge Ana­lyt­i­ca:

    ...
    The data sets were con­trolled by David Still­well and Michal Kosin­s­ki at the Uni­ver­si­ty of Cambridge’s The Psy­cho­met­rics Cen­tre. Alexan­dr Kogan, at the cen­tre of the Cam­bridge Ana­lyt­i­ca alle­ga­tions, was list­ed as a col­lab­o­ra­tor on the myPer­son­al­i­ty project until the sum­mer of 2014.
    ...

    And note how Facebook only suspended this app on April 7th of this year, four years after Facebook ended the notorious "friends permission" feature that has received most of the attention in the Cambridge Analytica scandal. It's a big reminder that data privacy abuses via Facebook apps aren't limited to that "friends permissions" feature. It's an ongoing problem, which is why it's troubling to hear that Facebook is only reviewing the tens of thousands of apps that may have abused its pre-2015 data use policies:

    ...
    Face­book sus­pend­ed myPer­son­al­i­ty from its plat­form on 7 April say­ing the app may have vio­lat­ed its poli­cies due to the lan­guage used in the app and on its web­site to describe how data is shared.

    More than 6 mil­lion peo­ple com­plet­ed the tests on the myPer­son­al­i­ty app and near­ly half agreed to share data from their Face­book pro­files with the project. All of this data was then scooped up and the names removed before it was put on a web­site to share with oth­er researchers. The terms allow the myPer­son­al­i­ty team to use and dis­trib­ute the data “in an anony­mous man­ner such that the infor­ma­tion can­not be traced back to the indi­vid­ual user”.
    ...

    But beyond the trou­bling half-assed anonymiza­tion scheme, there’s the issue of all this data being inad­ver­tent­ly made avail­able to the world due to the user cre­den­tials for the data­base get­ting uploaded into some code on GitHub, an online cod­ing repos­i­to­ry:

    ...
    Easy back­door

    How­ev­er, for those who were not enti­tled to access the data set because they didn’t have a per­ma­nent aca­d­e­m­ic con­tract, for exam­ple, there was an easy workaround. For the last four years, a work­ing user­name and pass­word has been avail­able online that could be found from a sin­gle web search. Any­one who want­ed access to the data set could have found the key to down­load it in less than a minute.

    The pub­licly avail­able user­name and pass­word were sit­ting on the code-shar­ing web­site GitHub. They had been passed from a uni­ver­si­ty lec­tur­er to some stu­dents for a course project on cre­at­ing a tool for pro­cess­ing Face­book data. Upload­ing code to GitHub is very com­mon in com­put­er sci­ence as it allows oth­ers to reuse parts of your work, but the stu­dents includ­ed the work­ing login cre­den­tials too.
    ...

    It's important to keep in mind that the accidental release of those credentials by some students is probably the most understandable aspect of this data privacy nightmare. It's the equivalent of writing a bug in code: a common careless accident. Everything else associated with this data privacy nightmare is far less understandable, because it wasn't a mistake; it was by design.
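    For what it's worth, the class of mistake, and the standard safeguard against it, are easy to sketch. The snippet below is hypothetical (invented variable names and values, not the actual leaked credentials or course code):

    import os

    # The hazard: a line like the following, committed to a public GitHub
    # repository, hands the database to anyone who finds it with a web search.
    #     DB_PASSWORD = "course-project-password"   # (invented example value)

    # One standard safeguard: keep secrets out of source control entirely
    # and read them from the environment (or a secrets manager) at runtime,
    # so that publishing the code never publishes the keys.
    db_user = os.environ.get("DB_USER")
    db_password = os.environ.get("DB_PASSWORD")
    if not db_user or not db_password:
        raise SystemExit("Database credentials not configured in this environment")

    None of that excuses the rest of the arrangement, of course; it just shows how small the step is from a working course project to a public backdoor.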

    And as we should expect at this point, the designers of the myPersonality app are expressing dismay at Facebook's dismay. After all, Facebook has long been aware of the project and even held meetings with the team as far back as 2011:

    ...
    When approached, Still­well says that through­out the nine years of the project there has only been one data breach, and that researchers giv­en access to the data set must agree not to de-anonymise the data. “We believe that aca­d­e­m­ic research ben­e­fits from prop­er­ly con­trolled shar­ing of anonymised data among the research com­mu­ni­ty,” he told New Sci­en­tist.

    He also says that Face­book has long been aware of the myPer­son­al­i­ty project, hold­ing meet­ings with him­self and Kosin­s­ki going back as far as 2011. “It is there­fore a lit­tle odd that Face­book should sud­den­ly now pro­fess itself to have been unaware of the myPer­son­al­i­ty research and to believe that the use of the data was a breach of its terms,” he says.
    ...

    And don’t for­get, Face­book researchers were among the users of this data. So Face­book was obvi­ous­ly pret­ty famil­iar with the app.

    And in the end, we’ll like­ly nev­er know who accessed the data and what they did with it. It’s just the tip of the ice­berg:

    ...
    The inves­ti­ga­tions by Face­book and the Infor­ma­tion Commissioner’s Office should try to deter­mine who accessed the myPer­son­al­i­ty data and what it was used for. How­ev­er, as it was shared with so many dif­fer­ent peo­ple, track­ing every­one who has a copy and what they did with it will prove very dif­fi­cult. We will nev­er know exact­ly who did what with this data set. “This is the tip of the ice­berg,” says Dixon. “Who else has this data?”

    And note one of the other chilling implications of this story: Recall how the ~270,000 users of the Cambridge Analytica app resulted in Cambridge Analytica harvesting data on ~87 million people using the “friends permissions” option. Well, if this myPersonality app has been operating for 9 years, that means it also had access to the “friends permissions” option, and for much longer than the Cambridge Analytica app. And 6 million people apparently downloaded this app! So how many of those 6 million people were using this app in the pre-2015 period when the “friends permissions” option was still available, and how many friends of those 6 million people had their profiles harvested too?
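
    To put rough numbers on that question, here’s a back-of-envelope calculation in Python using only the publicly reported figures. The assumption that myPersonality’s users carried a similar friend-multiplier is exactly that, an assumption, not a reported fact:

        # Publicly reported figures:
        ca_installers = 270_000    # users who installed the Cambridge Analytica app
        ca_harvested = 87_000_000  # profiles reportedly reached via "friends permissions"

        # Implied reach per installer:
        multiplier = ca_harvested / ca_installers
        print(round(multiplier))   # ~322 profiles per installer

        # ASSUMPTION: pre-2015 myPersonality users carried a similar multiplier.
        myp_users = 6_000_000
        print(f"{myp_users * multiplier / 1e9:.1f} billion")  # ~1.9 billion, the naive ceiling

        # Overlapping friend networks and Facebook's total user count would cap the
        # real figure far below that naive ceiling, but the point stands: the
        # exposure could plausibly have dwarfed the 6 million direct users.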

    So it’s entirely possible the people at myPersonality grabbed information on far more than the 6 million people who used their app, and we have no idea what they did with the data. What we know now is just the tip of the iceberg of this story.

    And this sto­ry of myPer­son­al­i­ty is just cov­er­ing one of the 200 apps that Face­book just sus­pend­ed. In oth­er words, this ice­berg of a sto­ry is just the tip of a much, much larg­er ice­berg.

    Posted by Pterrafractyl | May 17, 2018, 10:54 pm
  14. Here’s a story about an explosive new lawsuit against Facebook that could end up being a major headache for the company, and Mark Zuckerberg in particular: The lawsuit is being brought by Six4Three, a former app developer startup. Six4Three claims that, in 2012, Facebook was facing a large crisis with its advertising business model due to the rapid adoption of smartphones and the fact that Facebook’s ads were primarily focused on desktops. Facing a large drop in revenue, Facebook allegedly forced developers to buy expensive ads on the new, underused Facebook mobile service or risk having their access to data at the core of their business cut off.

    The way Six4Three describes it, Face­book first got devel­op­ers to build their busi­ness mod­els around access to that data, and then engaged in what amounts to a shake­down of those devel­op­ers, threat­en­ing to take that access away unless expen­sive mobile ads were pur­chased.

    But beyond that, Six4Three alleges that Facebook incentivized developers to create apps for its system by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. Don’t forget that the Facebook friends data (accessed via the “friends permissions” feature) is the information at the heart of the Cambridge Analytica scandal.
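
    As a reminder of what that access looked like in practice, here is a schematic sketch of the kind of Graph API v1.0 request an app could make once its installer granted the old friends_* permissions. The endpoint and permission names existed in the retired v1.0 API, but the exact field list here is illustrative rather than authoritative:

        import requests

        # Token granted by the one user who installed the app (placeholder value).
        ACCESS_TOKEN = "USER_TOKEN_GRANTED_TO_THE_APP"

        # Under Graph API v1.0, friends_* permissions (friends_likes,
        # friends_birthday, etc.) let an app read those fields for every
        # friend of the installer -- people who never saw a consent dialog.
        resp = requests.get(
            "https://graph.facebook.com/v1.0/me/friends",
            params={"fields": "id,name,birthday,likes", "access_token": ACCESS_TOKEN},
        )
        for friend in resp.json().get("data", []):
            print(friend.get("name"), friend.get("birthday"))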

    So Facebook was apparently offering long-term access to “friends permissions” data back in 2012 as a means of incentivizing developers to create apps at the same time it was threatening to cut off developer access to this data unless they purchased expensive mobile ads. And then, of course, that “friends permissions” feature was wound down in 2015, which was undoubtedly a good thing for the privacy of Facebook users, but as we can see the developers weren’t so happy about this, in part because they were apparently told by Facebook to expect long-term access to that data. Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook.

    It’s worth noting that Six4Three developed an app called Pikinis that searched through the photos of your friends for pictures of them in swimwear. So losing access to friends’ data more or less broke Six4Three’s app.

    Beyond that, Six4Three also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. This is also note­wor­thy with respect to the Cam­bridge Ana­lyt­i­ca scan­dal since it appeared to be the case that Alek­san­dr Kogan’s psy­cho­log­i­cal pro­fil­ing app was allowed to access the “friends per­mis­sion” fea­ture lat­er than oth­er apps. In oth­er words, the Cam­bridge Ana­lyt­i­ca app did actu­al­ly appear to get pref­er­en­tial treat­ment from Face­book.

    But Six4Three’s allegations go further, suggesting that Facebook’s executives observed which apps were the most successful and plotted to either extract money from them, co-opt them or destroy them, using the threat of cutting off access to the user data as leverage.

    So, basically, Facebook is getting sued by this app developer for acting like the mafia, with access to all that user data as the key enforcement tool:

    The Guardian

    Zucker­berg set up fraud­u­lent scheme to ‘weaponise’ data, court case alleges

    Face­book CEO exploit­ed abil­i­ty to access data from any user’s friend net­work, US case claims

    Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son

    Thu 24 May 2018 08.01 EDT

    Mark Zucker­berg faces alle­ga­tions that he devel­oped a “mali­cious and fraud­u­lent scheme” to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness.

    A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive “weaponised” the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.

    A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.

    Face­book rejects all claims, and has made a motion to have the case dis­missed using a free speech defence.

    It claims the first amend­ment pro­tects its right to make “edi­to­r­i­al deci­sions” as it sees fit. Zucker­berg and oth­er senior exec­u­tives have assert­ed that Face­book is a plat­form not a pub­lish­er, most recent­ly in tes­ti­mo­ny to Con­gress.

    Heather Whit­ney, a legal schol­ar who has writ­ten about social media com­pa­nies for the Knight First Amend­ment Insti­tute at Colum­bia Uni­ver­si­ty, said, in her opin­ion, this exposed a poten­tial ten­sion for Face­book.

    “Facebook’s claims in court that it is an edi­tor for first amend­ment pur­pos­es and thus free to cen­sor and alter the con­tent avail­able on its site is in ten­sion with their, espe­cial­ly recent, claims before the pub­lic and US Con­gress to be neu­tral plat­forms.”

    The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

    The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

    It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours.

    Face­book said the claims had no mer­it and the com­pa­ny would “con­tin­ue to defend our­selves vig­or­ous­ly”.

    Six4Three lodged its orig­i­nal case in 2015 short­ly after Face­book removed devel­op­ers’ access to friends’ data. The com­pa­ny said it had invest­ed $250,000 in devel­op­ing an app called Piki­nis that fil­tered users’ friends pho­tos to find any of them in swimwear. Its launch was met with con­tro­ver­sy.

    The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

    The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.

    “The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

    The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.

    Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

    “Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.

    In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends.

    Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

    Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

    The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

    It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments.

    A Face­book spokesman said: “When we changed our pol­i­cy in 2015, we gave all third-par­ty devel­op­ers ample notice of mate­r­i­al plat­form changes that could have impact­ed their appli­ca­tions.”

    Facebook’s sub­mis­sion to the court, an “anti-Slapp motion” under Cal­i­forn­ian leg­is­la­tion designed to pro­tect free­dom of speech, said: “Six4Three is tak­ing its fifth shot at an ever expand­ing set of claims and all of its claims turn on one deci­sion, which is absolute­ly pro­tect­ed: Facebook’s edi­to­r­i­al deci­sion to stop pub­lish­ing cer­tain user-gen­er­at­ed con­tent via its Plat­form to third-par­ty app devel­op­ers.”

    David God­kin, Six4Three’s lead coun­sel said: “We believe the pub­lic has a right to see the evi­dence and are con­fi­dent the evi­dence clear­ly demon­strates the truth of our alle­ga­tions, and much more.”

    Sandy Parak­i­las, a for­mer Face­book employ­ee turned whistle­blow­er who has tes­ti­fied to the UK par­lia­ment about its busi­ness prac­tices, said the alle­ga­tions were a “bomb­shell”. He claimed to MPs Facebook’s senior exec­u­tives were aware of abus­es of friends’ data back in 2011-12 and he was warned not to look into the issue.

    “They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.”

    ...

    ———-

    “Zucker­berg set up fraud­u­lent scheme to ‘weaponise’ data, court case alleges” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 05/24/2018

    “A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.”

    It was Mark Zucker­berg who per­son­al­ly led this shake­down oper­a­tion, accord­ing to the law­suit. So what’s the evi­dence? Well, that appears to be in the form of thou­sands of cur­rent­ly redact­ed inter­nal emails. It’s unclear how those emails were obtained:

    ...
    The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

    The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

    It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours.
    ...

    Note this isn’t a new lawsuit by Six4Three. They first filed a case in 2015, shortly after Facebook removed developers’ access to the “friends permissions” data feature, where app developers could grab extensive information from ALL the Facebook friends of the users who downloaded their apps. And when you look at how the Six4Three app works, it’s pretty clear why they would have been very upset about losing access to the friends data: their “Pikinis” app is based on scanning your friends’ pictures for shots of them in swimwear:

    ...
    Six4Three lodged its orig­i­nal case in 2015 short­ly after Face­book removed devel­op­ers’ access to friends’ data. The com­pa­ny said it had invest­ed $250,000 in devel­op­ing an app called Piki­nis that fil­tered users’ friends pho­tos to find any of them in swimwear. Its launch was met with con­tro­ver­sy.
    ...

    And it’s a rather fascinating lawsuit by Six4Three because it’s basically complaining that Facebook suddenly threatened to remove access to this personal data after previously implying that developers would have long-term access to it, and that Facebook used that power to extort developers. And in order to make that case, Six4Three also asserts that Facebook was well aware of the privacy implications of its data sharing policies, because access to that data was both the carrot and the stick for developers. So this case, if proven, would utterly destroy Facebook’s portrayal of itself as a victim of Cambridge Analytica’s misuse of its data:

    ...
    The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

    The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.
    ...

    And the initial motive for all this was Facebook’s realization in 2012 that it had failed to anticipate the speed of consumer adoption of smartphones, a failure that damaged its lucrative advertising business, which was focused on desktop ads:

    ...
    “The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

    The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.
    ...

    So Facebook responded to this sudden threat to its core business in multiple scandalous ways, according to the lawsuit. First, Facebook began forcing app developers to buy expensive mobile ads on its new, underused mobile service, or risk having their access to data at the core of their business cut off. It’s an example of how important selling access to that user data to third parties was to Facebook’s business model:

    ...
    Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

    “Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.
    ...

    But beyond that, Six4Three alleges that Facebook was simultaneously trying to entice developers to make apps for its systems by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. So the “friends permissions” feature for developers that Facebook wound down in 2014–2015 was apparently being peddled to developers as a long-term feature back in 2012:

    ...
    In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends.
    ...

    And, accord­ing to Six4Three, once a busi­ness became hooked on Face­book’s user data, Face­book would then look for par­tic­u­lar­ly lucra­tive apps and try to find ways to extract more mon­ey out of them. And that would appar­ent­ly include threat­en­ing to cut off access to that user data to either force com­pa­nies out of busi­ness or coerce app own­ers into sell­ing at below mar­ket prices. Up to 40,000 com­pa­nies were poten­tial­ly defraud­ed in this way and it was Face­book’s senior exec­u­tives who per­son­al­ly devised and man­aged the scheme, includ­ing Zucker­berg:

    ...
    Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

    Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

    The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

    It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments.

    A Face­book spokesman said: “When we changed our pol­i­cy in 2015, we gave all third-par­ty devel­op­ers ample notice of mate­r­i­al plat­form changes that could have impact­ed their appli­ca­tions.”
    ...

    Not surprisingly, Sandy Parakilas, the former Facebook employee turned whistleblower who previously revealed that Facebook executives were consciously negligent in how user data was used (or abused), views this lawsuit and the revelations contained in those emails as a “bombshell” that more or less backs up what he’s been saying all along:

    ...
    Sandy Parak­i­las, a for­mer Face­book employ­ee turned whistle­blow­er who has tes­ti­fied to the UK par­lia­ment about its busi­ness prac­tices, said the alle­ga­tions were a “bomb­shell”. He claimed to MPs Facebook’s senior exec­u­tives were aware of abus­es of friends’ data back in 2011-12 and he was warned not to look into the issue.

    “They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.”

    So was Mark Zuckerberg effectively acting like the top mobster in a shakedown scheme involving app developers? A scheme where Facebook selectively threatened to rescind access to its core data in order to extort ad buys from developers, buy their apps at below market prices, or straight up drive app developers out of business? We’ll see, but this is going to be a lawsuit to keep an eye on.

    “That’s a nice app you got there...it would be a shame if some­thing hap­pened to your access to user data...”

    Posted by Pterrafractyl | May 24, 2018, 12:09 pm
  15. Here’s a fascinating twist to the already fascinating story of Psy Group, the Israeli-owned private intelligence firm that was apparently pushed on the Trump team during the August 3, 2016, Trump Tower meeting. That’s the newly discovered meeting where Erik Prince and George Nader met with Donald Trump, Jr. and Stephen Miller to inform the Trump team that the crown princes of Saudi Arabia and the UAE were “eager” to help Trump win the election. And Psy Group, an Israeli private intelligence firm that offers many of the same psychological warfare services as Cambridge Analytica, presented a pitch at that meeting for a social media manipulation campaign involving thousands of fake accounts. And this meeting happened a couple weeks before Steve Bannon replaced Paul Manafort and brought Cambridge Analytica into prominence in the Trump team’s electoral machinations.

    So here’s the new twist to this Psy Group/Cambridge Analytica story: now we learn that Psy Group formed a business alliance with Cambridge Analytica after Trump’s victory to try to win U.S. government work. This alliance reportedly happened after Cambridge Analytica and Psy Group signed a mutual non-disclosure agreement.

    Intriguingly, the agreement was signed on December 14, 2016, according to documents seen by Bloomberg. And December 14, 2016, just happens to be one day before the Crown Prince of the UAE secretly traveled to the US, against diplomatic protocol, and met with the Trump transition team at Trump Tower (including Michael Flynn, Jared Kushner, and Steve Bannon) to help arrange the eventual meeting in the Seychelles between Erik Prince, George Nader, and Kirill Dmitriev.

    So you have to won­der if the sign­ing of that non-dis­clo­sure agree­ment was part of all the schem­ing asso­ci­at­ed with the Sey­chelles. Don’t for­get that the Sey­chelles meet­ing appears to cen­ter around what amounts to a lucra­tive offer to Rus­sia to realign itself away from the gov­ern­ments of Iran and Syr­ia, which implic­it­ly sug­gests plans for ongo­ing regime change oper­a­tions in Syr­ia and a major new regime change oper­a­tion in Iran. And based on what we know about the ser­vices offered by both Psy Group and Cam­bridge Ana­lyt­i­ca — psy­cho­log­i­cal war­fare ser­vices designed to change the atti­tudes of entire nations — the two firms sound like exact­ly the kinds of com­pa­nies that might have been major con­trac­tors for those planned regime change oper­a­tions.

    Grant­ed, there would have been no short­age of poten­tial US gov­ern­ment con­tracts Cam­bridge Ana­lyt­i­ca and Psy Group would have been mutu­al­ly inter­est­ed in pur­su­ing that have noth­ing to do with the Sey­chelles scheme. But the tim­ing sure is inter­est­ing giv­en the heavy over­lap of char­ac­ters involved.

    And while the non-disclosure documents don’t indicate precisely which government contracts the two companies were initially planning on jointly bidding on (which makes sense if they were initially planning on working on something involving a Seychelles/regime-change scheme), there is some information on one of the contracts they did end up jointly bidding on, which happened to focus on psychological warfare services in the Middle East. Specifically, they made a joint proposal to the State Department’s Global Engagement Center for a project focused on disrupting the recruitment and radicalization of ISIS members. It sounds like the proposal focused heavily on creating fake online personas, so it’s basically a different application of the same fake-persona services Psy Group and Cambridge Analytica offer in the political arena.

    And it turns out the State Department’s Global Engagement Center did indeed sign a contract with Cambridge Analytica’s parent company, SCL Group, last year. Additionally, one of the proposals Psy Group and Cambridge Analytica jointly submitted to the US State Department also included SCL. Although it’s unclear if the contract SCL won involved Cambridge Analytica, because it didn’t include provisions for subcontractors, didn’t involve social media, and was focused on in-person interviews. So while we don’t know how successful Cambridge Analytica and Psy Group were in their mutual hunt for government contracts, SCL was successful. And if SCL was getting lots of other contracts, who knows how many of them also involved Cambridge Analytica and/or Psy Group.

    We’re also learning that Psy Group appears to have shut itself down in February of 2018, shortly after George Nader was interviewed by Robert Mueller’s grand jury. But it doesn’t appear to be a real shutdown, and it sounds like Psy Group has quietly reopened under the new name “WhiteKnight”. Let’s not forget that Cambridge Analytica appears to have already done the same thing, shutting down only to quietly reopen as “Emerdata”. So for all we know there’s already a new WhiteKnight/Emerdata non-disclosure agreement in place for the purpose of further joint bidding on government contracts. But as the following story makes clear, one thing we do know for sure at this point is that if Cambridge Analytica and/or Psy Group end up getting government contracts, they’re going to go to great lengths to hide it:

    Bloomberg

    Mueller Asked About Mon­ey Flows to Israeli Social-Media Firm, Source Says

    * PSY Group’s work includ­ed fake per­sonas, firm’s doc­u­ments show
    * Founder is report­ed to have met with Don­ald Trump Jr. in 2016

    By Michael Riley and Lau­ren Etter
    May 22, 2018, 12:35 PM CDT

    Spe­cial Coun­sel Robert Mueller’s team has asked about flows of mon­ey into the Cyprus bank account of a com­pa­ny that spe­cial­ized in social-media manip­u­la­tion and whose founder report­ed­ly met with Don­ald Trump Jr. in August 2016, accord­ing to a per­son famil­iar with the inves­ti­ga­tion.

    The inquiry is draw­ing atten­tion to PSY Group, an Israeli firm that pitched its ser­vices to super-PACs and oth­er enti­ties dur­ing the 2016 elec­tion. Those ser­vices includ­ed infil­trat­ing tar­get audi­ences with elab­o­rate­ly craft­ed social-media per­sonas and spread­ing mis­lead­ing infor­ma­tion through web­sites meant to mim­ic news por­tals, accord­ing to inter­views and PSY Group doc­u­ments seen by Bloomberg News.

    The per­son doesn’t believe any of those pitch­es was suc­cess­ful, and it’s ille­gal for for­eign enti­ties to con­tribute any­thing of val­ue or to play deci­sion-mak­ing roles in U.S. polit­i­cal cam­paigns.

    One of PSY Group’s founders, Joel Zamel, met in August 2016 at Trump Tow­er with Don­ald Trump Jr. and an emis­sary to Sau­di Ara­bia and the Unit­ed Arab Emi­rates to dis­cuss how PSY Group could help Trump win, the New York Times report­ed on Sat­ur­day.

    Marc Mukasey, a lawyer for Zamel, said his client “offered noth­ing to the Trump cam­paign, received noth­ing from the Trump cam­paign, deliv­ered noth­ing to the Trump cam­paign and was not solicit­ed by, or asked to do any­thing for, the Trump cam­paign.” He also said reports that Zamel’s com­pa­nies engage in social-media manip­u­la­tion are mis­guid­ed and that the firms “har­vest pub­licly avail­able infor­ma­tion for law­ful use.”

    Don­ald Trump Jr. recalls a meet­ing at which he was pitched “on a social media plat­form or mar­ket­ing strat­e­gy,” said his attor­ney, Alan Futer­fas, in an emailed state­ment. “He was not inter­est­ed and that was the end of it.”

    Fol­low­ing Trump’s vic­to­ry, PSY Group formed an alliance with Cam­bridge Ana­lyt­i­ca, the Trump campaign’s pri­ma­ry social-media con­sul­tants, to try to win U.S. gov­ern­ment work, accord­ing to doc­u­ments obtained by Bloomberg News.

    FBI agents work­ing with Mueller’s team inter­viewed peo­ple asso­ci­at­ed with PSY Group’s U.S. oper­a­tions in Feb­ru­ary, and Mueller sub­poe­naed bank records for pay­ments made to the firm’s Cyprus bank accounts, accord­ing to a per­son who has seen one of the sub­poe­nas. Though PSY Group is based in Israel, it’s tech­ni­cal­ly head­quar­tered in Cyprus, the small Mediter­ranean island famous for its bank­ing secre­cy.

    Short­ly after those inter­views, on Feb. 25, PSY Group Chief Exec­u­tive Offi­cer Royi Burstien informed employ­ees in Tel Aviv that the com­pa­ny was clos­ing down. Burstien is a for­mer com­man­der of an Israeli psy­cho­log­i­cal war­fare unit, accord­ing to two peo­ple famil­iar with the com­pa­ny. He didn’t respond to requests for com­ment.

    ...

    ‘Poi­son­ing the Well’

    Tac­tics deployed by PSY Group in for­eign elec­tions includ­ed inflam­ing divi­sions in oppo­si­tion groups and play­ing on deep-seat­ed cul­tur­al and eth­nic con­flicts, some­thing the firm called “poi­son­ing the well,” accord­ing to the peo­ple.

    In a con­tract­ing pro­pos­al for the U.S. State Depart­ment that PSY Group pre­pared with Cam­bridge Ana­lyt­i­ca and SCL Group, Cambridge’s U.K. affil­i­ate, the firm said that it “has con­duct­ed messaging/influence oper­a­tions in well over a dozen lan­guages and dialects” and that it employs “an elite group of high-rank­ing for­mer offi­cers from some of the world’s most renowned intel­li­gence units.”

    Although the pro­pos­al says that the com­pa­ny is legal­ly bound not to reveal its clients, it also boasts that “PSY has suc­ceed­ed in plac­ing the results of its intel­li­gence activ­i­ties in top-tier pub­li­ca­tions across the globe in order to advance the inter­ests of its clients.”

    That pro­pos­al was the result of a col­lab­o­ra­tion that gelled after Trump’s vic­to­ry — a mutu­al non-dis­clo­sure agree­ment between Cam­bridge and PSY Group is dat­ed Dec. 14, 2016 — but the doc­u­ments don’t indi­cate how the com­pa­nies ini­tial­ly con­nect­ed or why they decid­ed to work togeth­er.

    Com­pa­nies Shut Down

    Cam­bridge Ana­lyt­i­ca and the elec­tions divi­sion of SCL shut down this month fol­low­ing scruti­ny of the com­pa­nies’ busi­ness prac­tices, includ­ing the release of a secret­ly record­ed inter­view of Cam­bridge CEO Alexan­der Nix say­ing he could entrap politi­cians in com­pro­mis­ing sit­u­a­tions.

    The joint pro­pos­al for the State Department’s Glob­al Engage­ment Cen­ter was for a project to inter­rupt the recruit­ment and rad­i­cal­iza­tion of ISIS mem­bers, and it pro­vides insight into PSY Group’s use of fake social-media per­sonas.

    The com­pa­ny spent months prepar­ing for the pro­pos­al by devel­op­ing a per­sona for “an aver­age Chica­go teenag­er” named Madi­son who con­vert­ed from Chris­tian­i­ty to Islam and became alien­at­ed from her par­ents. Over a peri­od of many weeks, Madi­son inter­act­ed with an ISIS recruiter, received instruc­tions for send­ing mon­ey to fight­ers in Syr­ia, and began an extend­ed flir­ta­tion with a fight­er in Raqqa, Syr­ia.

    Among the long-term objec­tives of Madison’s per­sona were obtain­ing names and con­tacts of “rad­i­cal Turk­ish Islam­ic ele­ments” and obtain­ing bank accounts and rout­ing num­bers for donat­ing to ISIS, accord­ing to the pro­pos­al seen by Bloomberg News.

    The State Department’s Glob­al Engage­ment Cen­ter entered into a con­tract with SCL Group last year, but it didn’t include pro­vi­sions for work to be per­formed by any sub­con­trac­tors, accord­ing to a depart­ment spokesman. That con­tract didn’t involve social media and was focused on in-per­son inter­views, accord­ing to an ear­li­er depart­ment brief­ing.

    Tow­er Meet­ing

    The Trump Tow­er meet­ing in August 2016 includ­ed Zamel, the PSY Group founder, and George Nad­er, an advis­er to the rul­ing fam­i­lies of Sau­di Ara­bia and the Unit­ed Arab Emi­rates, accord­ing to the New York Times report. PSY Group’s deci­sion to shut down appears to have come the same week that Nad­er tes­ti­fied before the grand jury work­ing with Mueller, accord­ing to the tim­ing of that tes­ti­mo­ny pre­vi­ous­ly report­ed in the Times.

    Fol­low­ing the elec­tion, Nad­er hired a dif­fer­ent com­pa­ny of Zamel’s called WhiteKnight, which spe­cial­izes in open-source social media research and is based in the Caribbean, accord­ing to a per­son famil­iar with the trans­ac­tion.

    The per­son described WhiteKnight as a high-end busi­ness con­sult­ing firm owned in part by Zamel that com­plet­ed a post-elec­tion analy­sis for Nad­er that exam­ined the role that social media played in the 2016 elec­tion.

    There is lit­tle pub­lic infor­ma­tion about WhiteKnight or its prod­ucts, and the com­pa­ny does not appear to have a web­site.

    Anoth­er per­son famil­iar with PSY Group’s oper­a­tions said that months ago, there was dis­cus­sion about rebrand­ing the firm under a dif­fer­ent name.

    The name being dis­cussed inter­nal­ly, accord­ing to the per­son, was WhiteKnight.

    ———-

    “Mueller Asked About Mon­ey Flows to Israeli Social-Media Firm, Source Says” by Michael Riley and Lau­ren Etter; Bloomberg; 05/22/2018

    “Spe­cial Coun­sel Robert Mueller’s team has asked about flows of mon­ey into the Cyprus bank account of a com­pa­ny that spe­cial­ized in social-media manip­u­la­tion and whose founder report­ed­ly met with Don­ald Trump Jr. in August 2016, accord­ing to a per­son famil­iar with the inves­ti­ga­tion.”

    So the Mueller probe is looking into money flows into Psy Group’s Cyprus bank account, along with the activities of George Nader (who pitched Psy Group to the Trump team in August 2016), and this interest from Mueller appears to have led to the sudden shutdown of the company a few months ago:

    ...
    FBI agents work­ing with Mueller’s team inter­viewed peo­ple asso­ci­at­ed with PSY Group’s U.S. oper­a­tions in Feb­ru­ary, and Mueller sub­poe­naed bank records for pay­ments made to the firm’s Cyprus bank accounts, accord­ing to a per­son who has seen one of the sub­poe­nas. Though PSY Group is based in Israel, it’s tech­ni­cal­ly head­quar­tered in Cyprus, the small Mediter­ranean island famous for its bank­ing secre­cy.

    Short­ly after those inter­views, on Feb. 25, PSY Group Chief Exec­u­tive Offi­cer Royi Burstien informed employ­ees in Tel Aviv that the com­pa­ny was clos­ing down. Burstien is a for­mer com­man­der of an Israeli psy­cho­log­i­cal war­fare unit, accord­ing to two peo­ple famil­iar with the com­pa­ny. He didn’t respond to requests for com­ment.

    ...

    Tow­er Meet­ing

    The Trump Tow­er meet­ing in August 2016 includ­ed Zamel, the PSY Group founder, and George Nad­er, an advis­er to the rul­ing fam­i­lies of Sau­di Ara­bia and the Unit­ed Arab Emi­rates, accord­ing to the New York Times report. PSY Group’s deci­sion to shut down appears to have come the same week that Nad­er tes­ti­fied before the grand jury work­ing with Mueller, accord­ing to the tim­ing of that tes­ti­mo­ny pre­vi­ous­ly report­ed in the Times.
    ...

    Although the sudden shutdown of Psy Group appears to really be a secret rebranding. Psy Group is apparently now WhiteKnight, a rebranding the company has been working on for a while, it seems, since WhiteKnight was hired by Nader to do a post-election analysis on the role social media played in the 2016 election:

    ...
    Fol­low­ing the elec­tion, Nad­er hired a dif­fer­ent com­pa­ny of Zamel’s called WhiteKnight, which spe­cial­izes in open-source social media research and is based in the Caribbean, accord­ing to a per­son famil­iar with the trans­ac­tion.

    The per­son described WhiteKnight as a high-end busi­ness con­sult­ing firm owned in part by Zamel that com­plet­ed a post-elec­tion analy­sis for Nad­er that exam­ined the role that social media played in the 2016 elec­tion.

    There is lit­tle pub­lic infor­ma­tion about WhiteKnight or its prod­ucts, and the com­pa­ny does not appear to have a web­site.

    Anoth­er per­son famil­iar with PSY Group’s oper­a­tions said that months ago, there was dis­cus­sion about rebrand­ing the firm under a dif­fer­ent name.

    The name being dis­cussed inter­nal­ly, accord­ing to the per­son, was WhiteKnight.

    Just imagine how fascinating WhiteKnight’s post-election analysis on the role social media played must have been, since it was basically conducted by Psy Group, a social media manipulation firm that either executed much of the most egregious (and effective) social media manipulation itself or worked directly with the worst perpetrators like Cambridge Analytica. There are probably quite a few insights in that report that wouldn’t be available to other firms.

    So what kinds of secrets is Psy Group hoping to keep hidden with its shutdown/rebranding move? Well, some of those secrets presumably involve the alliance Psy Group created with Cambridge Analytica shortly after Trump’s victory, culminating in the December 14, 2016, mutual non-disclosure agreement (one day before the Trump Tower meeting with the crown prince of the UAE to set up the Seychelles meeting). And note how the proposal in which Psy Group boasted that it “has conducted messaging/influence operations in well over a dozen languages and dialects” was also submitted with Cambridge Analytica’s parent company SCL. So Psy Group’s alliance with Cambridge Analytica was probably really an alliance with Cambridge Analytica’s parent company too:

    ...
    Fol­low­ing Trump’s vic­to­ry, PSY Group formed an alliance with Cam­bridge Ana­lyt­i­ca, the Trump campaign’s pri­ma­ry social-media con­sul­tants, to try to win U.S. gov­ern­ment work, accord­ing to doc­u­ments obtained by Bloomberg News.

    ...

    PSY Group devel­oped elab­o­rate infor­ma­tion oper­a­tions for com­mer­cial clients and polit­i­cal can­di­dates around the world, the peo­ple said.

    ‘Poi­son­ing the Well’

    Tac­tics deployed by PSY Group in for­eign elec­tions includ­ed inflam­ing divi­sions in oppo­si­tion groups and play­ing on deep-seat­ed cul­tur­al and eth­nic con­flicts, some­thing the firm called “poi­son­ing the well,” accord­ing to the peo­ple.

    In a con­tract­ing pro­pos­al for the U.S. State Depart­ment that PSY Group pre­pared with Cam­bridge Ana­lyt­i­ca and SCL Group, Cambridge’s U.K. affil­i­ate, the firm said that it “has con­duct­ed messaging/influence oper­a­tions in well over a dozen lan­guages and dialects” and that it employs “an elite group of high-rank­ing for­mer offi­cers from some of the world’s most renowned intel­li­gence units.”

    Although the pro­pos­al says that the com­pa­ny is legal­ly bound not to reveal its clients, it also boasts that “PSY has suc­ceed­ed in plac­ing the results of its intel­li­gence activ­i­ties in top-tier pub­li­ca­tions across the globe in order to advance the inter­ests of its clients.”

    That pro­pos­al was the result of a col­lab­o­ra­tion that gelled after Trump’s vic­to­ry — a mutu­al non-dis­clo­sure agree­ment between Cam­bridge and PSY Group is dat­ed Dec. 14, 2016 — but the doc­u­ments don’t indi­cate how the com­pa­nies ini­tial­ly con­nect­ed or why they decid­ed to work togeth­er.
    ...

    Anoth­er point to keep in mind regard­ing the tim­ing of that Decem­ber 14, 2016, mutu­al non-dis­clo­sure agree­ment: the Sey­chelles meet­ing appears to be a giant pitch designed to realign Rus­sia, indi­cat­ing the UAE was clear­ly very inter­est­ed in exploit­ing Trump’s vic­to­ry in a big way. They were ‘cash­ing in’, metaphor­i­cal­ly. So it seems rea­son­able to sus­pect that Psy Group, which is close­ly affil­i­at­ed with the UAE’s crown prince, would also be quite inter­est­ed in lit­er­al­ly ‘cash­ing in’ in a very big way too dur­ing that Decem­ber 2016 tran­si­tion peri­od. In oth­er words, while we don’t know what Psy Group and Cam­bridge Ana­lyt­i­ca decid­ed to not dis­close with their non-dis­clo­sure agree­ment, we can be pret­ty sure it was extreme­ly ambi­tious at the time.

    But at this point, the only pro­pos­als for US gov­ern­ment con­tracts that we do know about were for an anti-ISIS social media oper­a­tion for the US State Department’s Glob­al Engage­ment Cen­ter:

    ...
    The joint pro­pos­al for the State Department’s Glob­al Engage­ment Cen­ter was for a project to inter­rupt the recruit­ment and rad­i­cal­iza­tion of ISIS mem­bers, and it pro­vides insight into PSY Group’s use of fake social-media per­sonas.

    The com­pa­ny spent months prepar­ing for the pro­pos­al by devel­op­ing a per­sona for “an aver­age Chica­go teenag­er” named Madi­son who con­vert­ed from Chris­tian­i­ty to Islam and became alien­at­ed from her par­ents. Over a peri­od of many weeks, Madi­son inter­act­ed with an ISIS recruiter, received instruc­tions for send­ing mon­ey to fight­ers in Syr­ia, and began an extend­ed flir­ta­tion with a fight­er in Raqqa, Syr­ia.

    Among the long-term objec­tives of Madison’s per­sona were obtain­ing names and con­tacts of “rad­i­cal Turk­ish Islam­ic ele­ments” and obtain­ing bank accounts and rout­ing num­bers for donat­ing to ISIS, accord­ing to the pro­pos­al seen by Bloomberg News.
    ...

    And one contract we do know about at this point that was awarded to this network of companies was actually awarded to Cambridge Analytica’s parent company, SCL:

    ...
    The State Department’s Glob­al Engage­ment Cen­ter entered into a con­tract with SCL Group last year, but it didn’t include pro­vi­sions for work to be per­formed by any sub­con­trac­tors, accord­ing to a depart­ment spokesman. That con­tract didn’t involve social media and was focused on in-per­son inter­views, accord­ing to an ear­li­er depart­ment brief­ing.
    ...

    So there’s one government contract that SCL won following Trump’s election, one that Psy Group/Cambridge Analytica may or may not have been involved with.

    And that’s all we know about the work Psy Group may or may not have done for the US government following Trump’s victory at this point. Except we also know that Psy Group and Cambridge Analytica weren’t competing, so whatever contract Psy Group got, Cambridge Analytica may have received too. And that indicates, at a minimum, a willingness for these two companies to work VERY closely together. So closely that they risked revealing internal secrets to each other. Don’t forget, Psy Group and Cambridge Analytica are ostensibly competitors offering similar services to the same types of clients. And shortly after the election they were willing to sign an agreement to jointly compete for contracts that they would work on together. Don’t forget that one of the massive questions looming over this whole story is whether or not Psy Group and Cambridge Analytica — two direct competitors — were not just on the same team but actually working closely together during the 2016 election to help elect Trump. And thanks to these recent revelations we now know Psy Group and Cambridge Analytica were at least willing to work extremely closely with each other immediately after the election on a variety of different government contracts. That seems like a relevant clue in this whole mess.

    Posted by Pterrafractyl | May 29, 2018, 8:14 pm
  16. Oh look, a new scary Cambridge Analytica operation was just discovered. Or rather, it’s a scary new story about AggregateIQ (AIQ), the Cambridge Analytica offshoot that Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software to, and which also played a key role in the pro-Brexit campaign and later assisted the West-leaning East Ukrainian politician Sergei Taruta. It’s like these companies can’t go a week without a new scary story. Which is extra scary.

    For scary starters, the article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, a security researcher just found an app still available on Facebook that’s registered to AIQ and bluntly named “AIQ Johnny Scraper”, prompting Facebook to suspend it along with 13 other apps that appear to be linked to AIQ. So if Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard.

    Another part of what makes the following article scary is that it’s a reminder that you don’t necessarily need to have downloaded a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.

    Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. A service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made. That’s what AIQ was offering, and the newly discovered database contained the info for that.

    In this case, the Financial Times has somehow gotten its hands on a bunch of Facebook-related data held internally by AIQ. It turns out that AIQ stored a list of 759,934 Facebook users in a table that included home addresses, phone numbers and email addresses for some profiles. Additionally, the files contain those people’s political Facebook posts and likes. It all appears to be part of a software package AIQ was developing for a client that would allow them to search the political posts and “Likes” people made on Facebook. A personal political browser that could give a far more detailed peek into someone’s politics than traditionally available sources like political donation records and party affiliation.
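
    Under the hood, a “personal political browser” like that can be as simple as an index keyed by person. Here’s a toy sketch; the field names and records are invented, since AIQ’s actual schema is known only from the FT’s description of the files:

        from collections import defaultdict

        # Invented rows shaped like the FT's description: contact details plus
        # political posts and likes tied to each profile.
        rows = [
            {"email": "jane@example.com", "name": "Jane Roe",
             "kind": "post", "item": "Like if you agree that government is the problem"},
            {"email": "jane@example.com", "name": "Jane Roe",
             "kind": "like", "item": "Candidate X for Senate"},
            {"email": "john@example.com", "name": "John Doe",
             "kind": "post", "item": "Proud union household"},
        ]

        # Build the per-person index a client would search against.
        index = defaultdict(list)
        for row in rows:
            index[row["email"]].append((row["kind"], row["item"]))

        # "Type a person into the search engine" and get their political trail:
        for kind, item in index["jane@example.com"]:
            print(kind, "->", item)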

    Also keep in mind that we already know Cambridge Analytica collected large amounts of information on 87 million Facebook accounts. So the 759,934 number should not be seen as the total number of people AIQ has similar files on. It could just be a particular batch selected by that client: a batch of 759,934 people the client just happens to want to make personalized political searches on.

    It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees. A goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. And an ideological purge of the State Department is reportedly already underway. So it will be interesting to learn if this AIQ service is being used for such purposes.

    It’s unclear if the data in these files was col­lect­ed through a Face­book app devel­oped by AIQ — in which case the peo­ple in the file at least had to click the “I accept” part of installing the app — or if the data was col­lect­ed sim­ply from scrap­ing pub­licly avail­able Face­book posts. Again, it’s a reminder that pret­ty much ANYTHING you do on a pub­licly acces­si­ble Face­book post, even a ‘Like’, is prob­a­bly get­ting col­lect­ed by some­one, aggre­gat­ed, and resold. Includ­ing, per­haps, by Aggre­gateIQ:

    Finan­cial Times

    Aggre­gateIQ had data of thou­sands of Face­book users
    Linked app found by secu­ri­ty researcher rais­es ques­tions on social network’s polic­ing

    Aliya Ram in Lon­don and Han­nah Kuch­ler in San Fran­cis­co
    June 1, 2018, 2:21 PM

    Aggre­gateIQ, a Cana­di­an con­sul­tan­cy alleged to have links to Cam­bridge Ana­lyt­i­ca, col­lect­ed and stored the data of hun­dreds of thou­sands of Face­book users, accord­ing to redact­ed com­put­er files seen by the Finan­cial Times.

    The social net­work banned Aggre­gateIQ, a data com­pa­ny, from its plat­form as part of a clean-up oper­a­tion fol­low­ing the Cam­bridge Ana­lyt­i­ca scan­dal, on sus­pi­cion that the com­pa­ny could have been improp­er­ly access­ing user infor­ma­tion. How­ev­er, Chris Vick­ery, a secu­ri­ty researcher, this week found an app on the plat­form called “AIQ John­ny Scraper” reg­is­tered to the com­pa­ny, rais­ing fresh ques­tions about the effec­tive­ness of Facebook’s polic­ing efforts.

    The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

    Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

    Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

    Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

    “I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

    He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.

    Aggre­gateIQ has sought to dis­tance itself from an inter­na­tion­al pri­va­cy scan­dal engulf­ing Face­book and Cam­bridge Ana­lyt­i­ca, despite alle­ga­tions from Christo­pher Wylie, a whistle­blow­er at the now-defunct UK firm, that it had act­ed as the Cana­di­an branch of the organ­i­sa­tion.

    The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.

    The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said.

    AggregateIQ’s tech­nol­o­gy was used in the US for Ted Cruz’s cam­paign for the Repub­li­can nom­i­na­tion in 2016, and the com­pa­ny has also received mil­lions of pounds of fund­ing from British groups. These include Vote Leave, the main pro-Brex­it cam­paign front­ed by for­eign sec­re­tary Boris John­son.

    “The over­all theme of these com­pa­nies and the way their tools work is that every­thing is reliant on every­thing else, but has enough inde­pen­dent oper­abil­i­ty to pre­serve deni­a­bil­i­ty,” said Mr Vick­ery. “But when you com­bine all these dif­fer­ent data sources togeth­er it becomes some­thing else.”

    ...

    ———-

    “Aggre­gateIQ had data of thou­sands of Face­book users” by Aliya Ram and Han­nah Kuch­ler; Finan­cial Times; 06/01/2018

    ““The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.””

    As security researcher Chris Vickery put it, the whole is greater than the sum of its parts when you look at the synergistic way the various tools developed by companies like Cambridge Analytica and AIQ work together. Synergy in the service of building a mass manipulation operation with personalized micro-targeting capabilities.

    And that synergistic mass manipulation is part of why it’s disturbing to hear that Vickery just discovered an AIQ app still operating on Facebook, prompting Facebook to suspend it along with 13 others, long after Cambridge Analytica was banned and had caused Facebook so much bad publicity. The fact that there were still Cambridge Analytica-affiliated apps on the platform suggests Facebook either really, really, really likes Cambridge Analytica or it’s just really, really bad at app oversight:

    ...
    The social net­work banned Aggre­gateIQ, a data com­pa­ny, from its plat­form as part of a clean-up oper­a­tion fol­low­ing the Cam­bridge Ana­lyt­i­ca scan­dal, on sus­pi­cion that the com­pa­ny could have been improp­er­ly access­ing user infor­ma­tion. How­ev­er, Chris Vick­ery, a secu­ri­ty researcher, this week found an app on the plat­form called “AIQ John­ny Scraper” reg­is­tered to the com­pa­ny, rais­ing fresh ques­tions about the effec­tive­ness of Facebook’s polic­ing efforts.

    The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

    Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

    ...

    He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.
    ...

    “How­ev­er, Chris Vick­ery, a secu­ri­ty researcher, this week found an app on the plat­form called “AIQ John­ny Scraper” reg­is­tered to the com­pa­ny, rais­ing fresh ques­tions about the effec­tive­ness of Facebook’s polic­ing efforts.”

    “AIQ John­ny Scraper”. They weren’t even hid­ing it. But at least the John­ny Scraper app sounds rel­a­tive­ly innocu­ous.

    The personal political post search engine service, on the other hand, sounds far from innocuous. A database of 759,934 Facebook users created by AIQ software that tracked which users liked a particular page or were posting positive or negative comments. So, software that interprets what people write about politics on Facebook and aggregates that data into a search engine for clients. You have to wonder how sophisticated that automated interpretation software is at this point. Whatever the answer, AIQ’s text interpretation software is only going to get more sophisticated. That’s a given.
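
    To make the mechanics concrete, here is a minimal sketch of what “tag the comments, aggregate them, make it searchable” could look like. Everything in it is hypothetical: the keyword lists, field names and records are illustrative assumptions, not AIQ’s actual code. The point is how little sophistication is needed to turn raw comments into a searchable table of political sentiment:

    # Minimal hypothetical sketch: tag comments as positive/negative with a
    # crude keyword heuristic and aggregate them into a per-user table that
    # a client could then search. All names and records are invented.

    POSITIVE = {"agree", "love", "support", "great"}
    NEGATIVE = {"disagree", "hate", "oppose", "terrible"}

    def score(comment: str) -> str:
        words = set(comment.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        return "positive" if pos > neg else "negative" if neg > pos else "neutral"

    comments = [
        ("user_1", "I agree with Reagan that government is the problem"),
        ("user_2", "What a terrible idea, I oppose it completely"),
    ]

    profile = {}  # user id -> list of (sentiment, comment) rows
    for user, text in comments:
        profile.setdefault(user, []).append((score(text), text))

    # The "search engine" over the aggregate is then just a filter:
    negative_users = [u for u, rows in profile.items()
                      if any(s == "negative" for s, _ in rows)]
    print(negative_users)  # ['user_2']

    A real system would use trained text classifiers rather than keyword lists, but the aggregate-then-query structure would be the same.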

    Someday that software will probably be able to write its own synopsis of a person better than a human could. Who knows when that kind of software will arrive, but someday it will, and companies like AIQ will be there to exploit it if that’s legal. That’s also a given.

    And this 759,934-person database of political Likes and written political comments was what AIQ provided for just one client (a toy sketch of the matching process follows the excerpt below):

    ...
    Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

    Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

    “I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

    ...

    The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said.

    AggregateIQ’s tech­nol­o­gy was used in the US for Ted Cruz’s cam­paign for the Repub­li­can nom­i­na­tion in 2016, and the com­pa­ny has also received mil­lions of pounds of fund­ing from British groups. These include Vote Leave, the main pro-Brex­it cam­paign front­ed by for­eign sec­re­tary Boris John­son.
    ...
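
    The voter-file matching Silvester describes is, mechanically, just a record-linkage join. Here is a minimal hypothetical sketch, with invented records, of how matching page-likers against a voter file on a shared key like an email address might work (real voter-file tools also fuzzy-match on names and addresses, which this toy version skips):

    # Hypothetical sketch of matching Facebook page-likers to a voter file.
    # All records are invented for illustration.

    likers = [
        {"fb_id": "100234", "email": "jane@example.com"},
        {"fb_id": "100987", "email": "sam@example.com"},
    ]

    voter_file = [
        {"voter_id": "V-001", "email": "jane@example.com", "address": "12 Elm St"},
        {"voter_id": "V-002", "email": "alex@example.com", "address": "9 Oak Ave"},
    ]

    by_email = {v["email"]: v for v in voter_file}  # index the voter file

    matched = [
        {"fb_id": p["fb_id"],
         "voter_id": by_email[p["email"]]["voter_id"],
         "address": by_email[p["email"]]["address"]}
        for p in likers if p["email"] in by_email
    ]
    print(matched)  # one hit: fb_id 100234 <-> V-001, 12 Elm St

    Once that join exists, the merged row is exactly the kind of record the Financial Times saw: a Facebook user ID sitting next to a home address and phone number.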

    And for all we know, AIQ’s database could have been curated from publicly available posts rather than from AIQ app users, highlighting how anything done publicly on Facebook, even a Like, is going to be collected by someone and probably sold:

    ...
    The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.
    ...

    You are what you Like in this commercial space. And we’re all in this commercial space to some extent. There really is a commercially available profile of you. It’s just distributed among the many different data brokers offering slices of it.

    Another key dynamic in all this is that Facebook’s business model appears to be a combination of exploiting the vast information monopoly it possesses and the opposing model of effectively selling off little chunks of that data by making it available to app developers. There’s an obvious tension in exploiting a data monopoly while selling it off, but that appears to be the most profitable path forward, which is probably why it was also the business model AIQ pursued with the data it collected from Facebook: analyze the Facebook data gathered through apps and public scraping, categorize it (political or non-political comments, say, and whether they’re positive or negative), and then sell slices of that vast internal AIQ-curated content to clients.

    Aggregate as much data as possible. Analyze it. And offer pieces of that curated data pile to clients. That appears to be a business model of choice in this commercial big data arena, which is why we should assume AIQ and Cambridge Analytica were offering similar services and shouldn’t assume this particular database of 759,934 Facebook accounts is the only one of its kind. Especially given the 87 million profiles they already scraped.

    And this is a business model that’s going to apply to far more than just Facebook content. The whole spectrum of information collected on everyone is going to be part of this commercial space. And that’s part of what’s so scary: the data that gets fed into these independent Big Data repositories like the AIQ/Cambridge Analytica database is increasingly going to be the curated data provided by other Big Data providers in the same business. Everyone is collecting and analyzing the curated data everyone else is regurgitating out. Just as Cambridge Analytica and AIQ offer a slew of separate interoperable services to clients with a ‘whole is greater than the sum’ synergistic quality, the entire Big Data industry is going to have a similar quality. It’s a competitive, cooperative division of labor. Cambridge Analytica and AIQ are just the extra scary team members in a synergistic industry-wide team effort in the service of maximizing the profits to be made from exploiting everyone’s data for sale.

    Posted by Pterrafractyl | June 3, 2018, 9:47 pm
  17. It’s that time again. Time to learn how the Cambridge Analytica/Facebook scandal just got worse. So what’s the new low? Well, it turns out Facebook hasn’t just been sharing egregious amounts of Facebook user data with app developers. Device makers, like Apple and Samsung, have also been given similar access to user data. At least 60 device makers are known thus far.

    Except, of course, it’s worse: these device makers have actually been given EVEN MORE data than Facebook app developers received. For example, Facebook allowed the device makers access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. And some device makers could access personal information from users’ friends who thought they had turned off any sharing. So the “friends permissions” option that allowed Cambridge Analytica’s app to collect data on 87 million Facebook users even though just 300,000 people used the app remained available to device manufacturers even after Facebook phased out the friends permissions option for app developers in 2014–2015.

    Beyond that, the New York Times examined the kind of information gathered from a BlackBerry device owned by one of its reporters and found that it wasn’t just collecting identifying information on all the reporter’s friends. It was also grabbing identifying information on those friends’ friends. That single BlackBerry was able to retrieve identifying information on nearly 295,000 people!
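
    The amplification at work here is easy to put numbers on. A quick back-of-envelope calculation using only the figures reported in these stories:

    # Back-of-envelope arithmetic from the reported figures.

    # Cambridge Analytica: ~300,000 app users yielded ~87 million profiles.
    print(87_000_000 / 300_000)  # ~290 profiles harvested per consenting user

    # NYT BlackBerry test: one account with ~550 friends exposed ~295,000 people.
    print(295_000 / 550)         # ~536 second-degree contacts per friend

    In other words, every person who connects an app or device brings a few hundred non-consenting people along, and friends-of-friends access multiplies that reach again.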

    Facebook justifies all this by arguing that the device makers are basically an extension of Facebook. The company also asserts that there were strict agreements on how the data could be used. But the main loophole it cites is that Facebook viewed its hardware partners as “service providers,” like a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. And by categorizing these device makers as service providers, Facebook is able to get around the 2011 consent decree it signed with the US Federal Trade Commission over previous privacy violations. According to that consent decree, Facebook does not need to seek additional permission to share friend data with service providers.

    So it’s not just Cam­bridge Ana­lyt­i­ca and the thou­sands of app devel­op­ers who have been scoop­ing up moun­tains of Face­book user data with­out peo­ple real­iz­ing it. The device mak­ers have been doing it too. More so. Much, much more so:

    The New York Times

    Face­book Gave Device Mak­ers Deep Access to Data on Users and Friends

    The com­pa­ny formed data-shar­ing part­ner­ships with Apple, Sam­sung and
    dozens of oth­er device mak­ers, rais­ing new con­cerns about its pri­va­cy pro­tec­tions.

    By GABRIEL J.X. DANCE, NICHOLAS CONFESSORE and MICHAEL LaFOR­GIA
    JUNE 3, 2018

    As Face­book sought to become the world’s dom­i­nant social media ser­vice, it struck agree­ments allow­ing phone and oth­er device mak­ers access to vast amounts of its users’ per­son­al infor­ma­tion.

    Face­book has reached data-shar­ing part­ner­ships with at least 60 device mak­ers — includ­ing Apple, Ama­zon, Black­Ber­ry, Microsoft and Sam­sung — over the last decade, start­ing before Face­book apps were wide­ly avail­able on smart­phones, com­pa­ny offi­cials said. The deals allowed Face­book to expand its reach and let device mak­ers offer cus­tomers pop­u­lar fea­tures of the social net­work, such as mes­sag­ing, “like” but­tons and address books.

    But the part­ner­ships, whose scope has not pre­vi­ous­ly been report­ed, raise con­cerns about the company’s pri­va­cy pro­tec­tions and com­pli­ance with a 2011 con­sent decree with the Fed­er­al Trade Com­mis­sion. Face­book allowed the device com­pa­nies access to the data of users’ friends with­out their explic­it con­sent, even after declar­ing that it would no longer share such infor­ma­tion with out­siders. Some device mak­ers could retrieve per­son­al infor­ma­tion even from users’ friends who believed they had barred any shar­ing, The New York Times found.

    Most of the part­ner­ships remain in effect, though Face­book began wind­ing them down in April. The com­pa­ny came under inten­si­fy­ing scruti­ny by law­mak­ers and reg­u­la­tors after news reports in March that a polit­i­cal con­sult­ing firm, Cam­bridge Ana­lyt­i­ca, mis­used the pri­vate infor­ma­tion of tens of mil­lions of Face­book users.

    In the furor that fol­lowed, Facebook’s lead­ers said that the kind of access exploit­ed by Cam­bridge in 2014 was cut off by the next year, when Face­book pro­hib­it­ed devel­op­ers from col­lect­ing infor­ma­tion from users’ friends. But the com­pa­ny offi­cials did not dis­close that Face­book had exempt­ed the mak­ers of cell­phones, tablets and oth­er hard­ware from such restric­tions.

    “You might think that Face­book or the device man­u­fac­tur­er is trust­wor­thy,” said Serge Egel­man, a pri­va­cy researcher at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, who stud­ies the secu­ri­ty of mobile apps. “But the prob­lem is that as more and more data is col­lect­ed on the device — and if it can be accessed by apps on the device — it cre­ates seri­ous pri­va­cy and secu­ri­ty risks.”

    In inter­views, Face­book offi­cials defend­ed the data shar­ing as con­sis­tent with its pri­va­cy poli­cies, the F.T.C. agree­ment and pledges to users. They said its part­ner­ships were gov­erned by con­tracts that strict­ly lim­it­ed use of the data, includ­ing any stored on part­ners’ servers. The offi­cials added that they knew of no cas­es where the infor­ma­tion had been mis­used.

    The com­pa­ny views its device part­ners as exten­sions of Face­book, serv­ing its more than two bil­lion users, the offi­cials said.

    “These part­ner­ships work very dif­fer­ent­ly from the way in which app devel­op­ers use our plat­form,” said Ime Archi­bong, a Face­book vice pres­i­dent. Unlike devel­op­ers that pro­vide games and ser­vices to Face­book users, the device part­ners can use Face­book data only to pro­vide ver­sions of “the Face­book expe­ri­ence,” the offi­cials said.

    Some device part­ners can retrieve Face­book users’ rela­tion­ship sta­tus, reli­gion, polit­i­cal lean­ing and upcom­ing events, among oth­er data. Tests by The Times showed that the part­ners request­ed and received data in the same way oth­er third par­ties did.

    Facebook’s view that the device mak­ers are not out­siders lets the part­ners go even fur­ther, The Times found: They can obtain data about a user’s Face­book friends, even those who have denied Face­book per­mis­sion to share infor­ma­tion with any third par­ties.

    In inter­views, sev­er­al for­mer Face­book soft­ware engi­neers and secu­ri­ty experts said they were sur­prised at the abil­i­ty to over­ride shar­ing restric­tions.

    “It’s like hav­ing door locks installed, only to find out that the lock­smith also gave keys to all of his friends so they can come in and rifle through your stuff with­out hav­ing to ask you for per­mis­sion,” said Ashkan Soltani, a research and pri­va­cy con­sul­tant who for­mer­ly served as the F.T.C.’s chief tech­nol­o­gist.

    Details of Facebook’s part­ner­ships have emerged amid a reck­on­ing in Sil­i­con Val­ley over the vol­ume of per­son­al infor­ma­tion col­lect­ed on the inter­net and mon­e­tized by the tech indus­try. The per­va­sive col­lec­tion of data, while large­ly unreg­u­lat­ed in the Unit­ed States, has come under grow­ing crit­i­cism from elect­ed offi­cials at home and over­seas and pro­voked con­cern among con­sumers about how freely their infor­ma­tion is shared.

    In a tense appearance before Congress in March, Facebook’s chief executive, Mark Zuckerberg, emphasized what he said was a company priority for Facebook users. “Every piece of content that you share on Facebook you own,” he testified. “You have complete control over who sees it and how you share it.”

    But the device part­ner­ships pro­voked dis­cus­sion even with­in Face­book as ear­ly as 2012, accord­ing to Sandy Parak­i­las, who at the time led third-par­ty adver­tis­ing and pri­va­cy com­pli­ance for Facebook’s plat­form.

    “This was flagged inter­nal­ly as a pri­va­cy issue,” said Mr. Parak­i­las, who left Face­book that year and has recent­ly emerged as a harsh crit­ic of the com­pa­ny. “It is shock­ing that this prac­tice may still con­tin­ue six years lat­er, and it appears to con­tra­dict Facebook’s tes­ti­mo­ny to Con­gress that all friend per­mis­sions were dis­abled.”

    The part­ner­ships were briefly men­tioned in doc­u­ments sub­mit­ted to Ger­man law­mak­ers inves­ti­gat­ing the social media giant’s pri­va­cy prac­tices and released by Face­book in mid-May. But Face­book pro­vid­ed the law­mak­ers with the name of only one part­ner — Black­Ber­ry, mak­er of the once-ubiq­ui­tous mobile device — and lit­tle infor­ma­tion about how the agree­ments worked.

    The sub­mis­sion fol­lowed tes­ti­mo­ny by Joel Kaplan, Facebook’s vice pres­i­dent for glob­al pub­lic pol­i­cy, dur­ing a closed-door Ger­man par­lia­men­tary hear­ing in April. Elis­a­beth Winkelmeier-Beck­er, one of the law­mak­ers who ques­tioned Mr. Kaplan, said in an inter­view that she believed the data part­ner­ships dis­closed by Face­book vio­lat­ed users’ pri­va­cy rights.

    “What we have been try­ing to deter­mine is whether Face­book has know­ing­ly hand­ed over user data else­where with­out explic­it con­sent,” Ms. Winkelmeier-Beck­er said. “I would nev­er have imag­ined that this might even be hap­pen­ing secret­ly via deals with device mak­ers. Black­Ber­ry users seem to have been turned into data deal­ers, unknow­ing­ly and unwill­ing­ly.”

    In inter­views with The Times, Face­book iden­ti­fied oth­er part­ners: Apple and Sam­sung, the world’s two biggest smart­phone mak­ers, and Ama­zon, which sells tablets.

    An Apple spokesman said the com­pa­ny relied on pri­vate access to Face­book data for fea­tures that enabled users to post pho­tos to the social net­work with­out open­ing the Face­book app, among oth­er things. Apple said its phones no longer had such access to Face­book as of last Sep­tem­ber.

    ...

    Ush­er Lieber­man, a Black­Ber­ry spokesman, said in a state­ment that the com­pa­ny used Face­book data only to give its own cus­tomers access to their Face­book net­works and mes­sages. Mr. Lieber­man said that the com­pa­ny “did not col­lect or mine the Face­book data of our cus­tomers,” adding that “Black­Ber­ry has always been in the busi­ness of pro­tect­ing, not mon­e­tiz­ing, cus­tomer data.”

    Microsoft entered a part­ner­ship with Face­book in 2008 that allowed Microsoft-pow­ered devices to do things like add con­tacts and friends and receive noti­fi­ca­tions, accord­ing to a spokesman. He added that the data was stored local­ly on the phone and was not synced to Microsoft’s servers.

    Face­book acknowl­edged that some part­ners did store users’ data — includ­ing friends’ data — on their own servers. A Face­book offi­cial said that regard­less of where the data was kept, it was gov­erned by strict agree­ments between the com­pa­nies.

    “I am dumb­found­ed by the atti­tude that any­body in Facebook’s cor­po­rate office would think allow­ing third par­ties access to data would be a good idea,” said Hen­ning Schulzrinne, a com­put­er sci­ence pro­fes­sor at Colum­bia Uni­ver­si­ty who spe­cial­izes in net­work secu­ri­ty and mobile sys­tems.

    The Cam­bridge Ana­lyt­i­ca scan­dal revealed how loose­ly Face­book had policed the bustling ecosys­tem of devel­op­ers build­ing apps on its plat­form. They ranged from well-known play­ers like Zyn­ga, the mak­er of the Far­mVille game, to small­er ones, like a Cam­bridge con­trac­tor who used a quiz tak­en by about 300,000 Face­book users to gain access to the pro­files of as many as 87 mil­lion of their friends.

    Those devel­op­ers relied on Facebook’s pub­lic data chan­nels, known as appli­ca­tion pro­gram­ming inter­faces, or APIs. But start­ing in 2007, the com­pa­ny also estab­lished pri­vate data chan­nels for device man­u­fac­tur­ers.

    At the time, mobile phones were less pow­er­ful, and rel­a­tive­ly few of them could run stand-alone Face­book apps like those now com­mon on smart­phones. The com­pa­ny con­tin­ued to build new pri­vate APIs for device mak­ers through 2014, spread­ing user data through tens of mil­lions of mobile devices, game con­soles, tele­vi­sions and oth­er sys­tems out­side Facebook’s direct con­trol.

    Face­book began mov­ing to wind down the part­ner­ships in April, after assess­ing its pri­va­cy and data prac­tices in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. Mr. Archi­bong said the com­pa­ny had con­clud­ed that the part­ner­ships were no longer need­ed to serve Face­book users. About 22 of them have been shut down.

    The broad access Face­book pro­vid­ed to device mak­ers rais­es ques­tions about its com­pli­ance with a 2011 con­sent decree with the F.T.C.

    The decree barred Face­book from over­rid­ing users’ pri­va­cy set­tings with­out first get­ting explic­it con­sent. That agree­ment stemmed from an inves­ti­ga­tion that found Face­book had allowed app devel­op­ers and oth­er third par­ties to col­lect per­son­al details about users’ friends, even when those friends had asked that their infor­ma­tion remain pri­vate.

    After the Cam­bridge Ana­lyt­i­ca rev­e­la­tions, the F.T.C. began an inves­ti­ga­tion into whether Facebook’s con­tin­ued shar­ing of data after 2011 vio­lat­ed the decree, poten­tial­ly expos­ing the com­pa­ny to fines.

    Face­book offi­cials said the pri­vate data chan­nels did not vio­late the decree because the com­pa­ny viewed its hard­ware part­ners as “ser­vice providers,” akin to a cloud com­put­ing ser­vice paid to store Face­book data or a com­pa­ny con­tract­ed to process cred­it card trans­ac­tions. Accord­ing to the con­sent decree, Face­book does not need to seek addi­tion­al per­mis­sion to share friend data with ser­vice providers.

    “These con­tracts and part­ner­ships are entire­ly con­sis­tent with Facebook’s F.T.C. con­sent decree,” Mr. Archi­bong, the Face­book offi­cial, said.

    But Jes­si­ca Rich, a for­mer F.T.C. offi­cial who helped lead the commission’s ear­li­er Face­book inves­ti­ga­tion, dis­agreed with that assess­ment.

    “Under Facebook’s inter­pre­ta­tion, the excep­tion swal­lows the rule,” said Ms. Rich, now with the Con­sumers Union. “They could argue that any shar­ing of data with third par­ties is part of the Face­book expe­ri­ence. And this is not at all how the pub­lic inter­pret­ed their 2014 announce­ment that they would lim­it third-par­ty app access to friend data.”

    To test one partner’s access to Facebook’s pri­vate data chan­nels, The Times used a reporter’s Face­book account — with about 550 friends — and a 2013 Black­Ber­ry device, mon­i­tor­ing what data the device request­ed and received. (More recent Black­Ber­ry devices, which run Google’s Android oper­at­ing sys­tem, do not use the same pri­vate chan­nels, Black­Ber­ry offi­cials said.)

    Imme­di­ate­ly after the reporter con­nect­ed the device to his Face­book account, it request­ed some of his pro­file data, includ­ing user ID, name, pic­ture, “about” infor­ma­tion, loca­tion, email and cell­phone num­ber. The device then retrieved the reporter’s pri­vate mes­sages and the respons­es to them, along with the name and user ID of each per­son with whom he was com­mu­ni­cat­ing.

    The data flowed to a Black­Ber­ry app known as the Hub, which was designed to let Black­Ber­ry users view all of their mes­sages and social media accounts in one place.

    The Hub also request­ed — and received — data that Facebook’s pol­i­cy appears to pro­hib­it. Since 2015, Face­book has said that apps can request only the names of friends using the same app. But the Black­Ber­ry app had access to all of the reporter’s Face­book friends and, for most of them, returned infor­ma­tion such as user ID, birth­day, work and edu­ca­tion his­to­ry and whether they were cur­rent­ly online.

    The Black­Ber­ry device was also able to retrieve iden­ti­fy­ing infor­ma­tion for near­ly 295,000 Face­book users. Most of them were sec­ond-degree Face­book friends of the reporter, or friends of friends.

    In all, Face­book empow­ers Black­Ber­ry devices to access more than 50 types of infor­ma­tion about users and their friends, The Times found.

    ———-

    “Face­book Gave Device Mak­ers Deep Access to Data on Users and Friends” by GABRIEL J.X. DANCE, NICHOLAS CONFESSORE and MICHAEL LaFOR­GIA; The New York Times; 06/03/2018

    “Face­book has reached data-shar­ing part­ner­ships with at least 60 device mak­ers — includ­ing Apple, Ama­zon, Black­Ber­ry, Microsoft and Sam­sung — over the last decade, start­ing before Face­book apps were wide­ly avail­able on smart­phones, com­pa­ny offi­cials said. The deals allowed Face­book to expand its reach and let device mak­ers offer cus­tomers pop­u­lar fea­tures of the social net­work, such as mes­sag­ing, “like” but­tons and address books.”

    At least 60 device makers are sitting on A LOT of Facebook data. Note how NONE of them acknowledged this before this report came out, even as the Cambridge Analytica scandal was unfolding. It’s one of those quiet lessons in how the world unfortunately works.

    And these 60+ device mak­ers were able to access data of users’ friends with­out their con­sent even when those friends changed their pri­va­cy set­ting to bar any shar­ing:

    ...
    But the part­ner­ships, whose scope has not pre­vi­ous­ly been report­ed, raise con­cerns about the company’s pri­va­cy pro­tec­tions and com­pli­ance with a 2011 con­sent decree with the Fed­er­al Trade Com­mis­sion. Face­book allowed the device com­pa­nies access to the data of users’ friends with­out their explic­it con­sent, even after declar­ing that it would no longer share such infor­ma­tion with out­siders. Some device mak­ers could retrieve per­son­al infor­ma­tion even from users’ friends who believed they had barred any shar­ing, The New York Times found.

    Most of the part­ner­ships remain in effect, though Face­book began wind­ing them down in April. The com­pa­ny came under inten­si­fy­ing scruti­ny by law­mak­ers and reg­u­la­tors after news reports in March that a polit­i­cal con­sult­ing firm, Cam­bridge Ana­lyt­i­ca, mis­used the pri­vate infor­ma­tion of tens of mil­lions of Face­book users.

    In the furor that fol­lowed, Facebook’s lead­ers said that the kind of access exploit­ed by Cam­bridge in 2014 was cut off by the next year, when Face­book pro­hib­it­ed devel­op­ers from col­lect­ing infor­ma­tion from users’ friends. But the com­pa­ny offi­cials did not dis­close that Face­book had exempt­ed the mak­ers of cell­phones, tablets and oth­er hard­ware from such restric­tions.
    ...

    “Most of the partnerships remain in effect, though Facebook began winding them down in April.”

    Yep, these data sharing partnerships largely remain in effect and didn’t end in 2014–2015 when app developers lost access to this kind of data. It’s only now, as the Cambridge Analytica scandal unfolds, that these partnerships are being ended.

    This was all done despite a 2011 consent decree that barred Facebook from overriding users’ privacy settings without first getting explicit consent. Facebook simply categorized the device makers as “service providers,” exploiting a “service provider” loophole in the decree:

    ...
    The broad access Face­book pro­vid­ed to device mak­ers rais­es ques­tions about its com­pli­ance with a 2011 con­sent decree with the F.T.C.

    The decree barred Face­book from over­rid­ing users’ pri­va­cy set­tings with­out first get­ting explic­it con­sent. That agree­ment stemmed from an inves­ti­ga­tion that found Face­book had allowed app devel­op­ers and oth­er third par­ties to col­lect per­son­al details about users’ friends, even when those friends had asked that their infor­ma­tion remain pri­vate.

    After the Cam­bridge Ana­lyt­i­ca rev­e­la­tions, the F.T.C. began an inves­ti­ga­tion into whether Facebook’s con­tin­ued shar­ing of data after 2011 vio­lat­ed the decree, poten­tial­ly expos­ing the com­pa­ny to fines.

    Face­book offi­cials said the pri­vate data chan­nels did not vio­late the decree because the com­pa­ny viewed its hard­ware part­ners as “ser­vice providers,” akin to a cloud com­put­ing ser­vice paid to store Face­book data or a com­pa­ny con­tract­ed to process cred­it card trans­ac­tions. Accord­ing to the con­sent decree, Face­book does not need to seek addi­tion­al per­mis­sion to share friend data with ser­vice providers.

    “These con­tracts and part­ner­ships are entire­ly con­sis­tent with Facebook’s F.T.C. con­sent decree,” Mr. Archi­bong, the Face­book offi­cial, said.

    But Jes­si­ca Rich, a for­mer F.T.C. offi­cial who helped lead the commission’s ear­li­er Face­book inves­ti­ga­tion, dis­agreed with that assess­ment.

    “Under Facebook’s inter­pre­ta­tion, the excep­tion swal­lows the rule,” said Ms. Rich, now with the Con­sumers Union. “They could argue that any shar­ing of data with third par­ties is part of the Face­book expe­ri­ence. And this is not at all how the pub­lic inter­pret­ed their 2014 announce­ment that they would lim­it third-par­ty app access to friend data.”
    ...

    It’s also worth recalling that Facebook made similar excuses for allowing app developers to grab user friends’ data, claiming that the data was solely going to be used for “improving user experiences.” Which makes Facebook’s explanation for how the device maker data sharing program was very different from the app developer data sharing program rather amusing because, according to Facebook, the device partners can use Facebook data only to provide versions of “the Facebook experience” (which implicitly admits that app developers were using that data for a lot more than just improving user experiences):

    ...
    “You might think that Face­book or the device man­u­fac­tur­er is trust­wor­thy,” said Serge Egel­man, a pri­va­cy researcher at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, who stud­ies the secu­ri­ty of mobile apps. “But the prob­lem is that as more and more data is col­lect­ed on the device — and if it can be accessed by apps on the device — it cre­ates seri­ous pri­va­cy and secu­ri­ty risks.”

    In inter­views, Face­book offi­cials defend­ed the data shar­ing as con­sis­tent with its pri­va­cy poli­cies, the F.T.C. agree­ment and pledges to users. They said its part­ner­ships were gov­erned by con­tracts that strict­ly lim­it­ed use of the data, includ­ing any stored on part­ners’ servers. The offi­cials added that they knew of no cas­es where the infor­ma­tion had been mis­used.

    ...

    “These part­ner­ships work very dif­fer­ent­ly from the way in which app devel­op­ers use our plat­form,” said Ime Archi­bong, a Face­book vice pres­i­dent. Unlike devel­op­ers that pro­vide games and ser­vices to Face­book users, the device part­ners can use Face­book data only to pro­vide ver­sions of “the Face­book expe­ri­ence,” the offi­cials said.
    ...

    ““These part­ner­ships work very dif­fer­ent­ly from the way in which app devel­op­ers use our plat­form,” said Ime Archi­bong, a Face­book vice pres­i­dent. Unlike devel­op­ers that pro­vide games and ser­vices to Face­book users, the device part­ners can use Face­book data only to pro­vide ver­sions of “the Face­book expe­ri­ence,” the offi­cials said.” LOL!

    Of course, it’s basi­cal­ly impos­si­ble for Face­book to know what device mak­ers were doing with this data because, just like with app devel­op­ers, these device man­u­fac­tur­ers had the option of keep­ing this Face­book data on their own servers:

    ...
    In inter­views with The Times, Face­book iden­ti­fied oth­er part­ners: Apple and Sam­sung, the world’s two biggest smart­phone mak­ers, and Ama­zon, which sells tablets.

    An Apple spokesman said the com­pa­ny relied on pri­vate access to Face­book data for fea­tures that enabled users to post pho­tos to the social net­work with­out open­ing the Face­book app, among oth­er things. Apple said its phones no longer had such access to Face­book as of last Sep­tem­ber.

    ...

    Ush­er Lieber­man, a Black­Ber­ry spokesman, said in a state­ment that the com­pa­ny used Face­book data only to give its own cus­tomers access to their Face­book net­works and mes­sages. Mr. Lieber­man said that the com­pa­ny “did not col­lect or mine the Face­book data of our cus­tomers,” adding that “Black­Ber­ry has always been in the busi­ness of pro­tect­ing, not mon­e­tiz­ing, cus­tomer data.”

    Microsoft entered a part­ner­ship with Face­book in 2008 that allowed Microsoft-pow­ered devices to do things like add con­tacts and friends and receive noti­fi­ca­tions, accord­ing to a spokesman. He added that the data was stored local­ly on the phone and was not synced to Microsoft’s servers.

    Face­book acknowl­edged that some part­ners did store users’ data — includ­ing friends’ data — on their own servers. A Face­book offi­cial said that regard­less of where the data was kept, it was gov­erned by strict agree­ments between the com­pa­nies.

    “I am dumb­found­ed by the atti­tude that any­body in Facebook’s cor­po­rate office would think allow­ing third par­ties access to data would be a good idea,” said Hen­ning Schulzrinne, a com­put­er sci­ence pro­fes­sor at Colum­bia Uni­ver­si­ty who spe­cial­izes in net­work secu­ri­ty and mobile sys­tems.
    ...

    And this data pri­va­cy night­mare sit­u­a­tion appar­ent­ly all start­ed in 2007, when Face­book began build­ing pri­vate APIs for device mak­ers:

    ...
    The Cam­bridge Ana­lyt­i­ca scan­dal revealed how loose­ly Face­book had policed the bustling ecosys­tem of devel­op­ers build­ing apps on its plat­form. They ranged from well-known play­ers like Zyn­ga, the mak­er of the Far­mVille game, to small­er ones, like a Cam­bridge con­trac­tor who used a quiz tak­en by about 300,000 Face­book users to gain access to the pro­files of as many as 87 mil­lion of their friends.

    Those devel­op­ers relied on Facebook’s pub­lic data chan­nels, known as appli­ca­tion pro­gram­ming inter­faces, or APIs. But start­ing in 2007, the com­pa­ny also estab­lished pri­vate data chan­nels for device man­u­fac­tur­ers.

    At the time, mobile phones were less pow­er­ful, and rel­a­tive­ly few of them could run stand-alone Face­book apps like those now com­mon on smart­phones. The com­pa­ny con­tin­ued to build new pri­vate APIs for device mak­ers through 2014, spread­ing user data through tens of mil­lions of mobile devices, game con­soles, tele­vi­sions and oth­er sys­tems out­side Facebook’s direct con­trol.

    Face­book began mov­ing to wind down the part­ner­ships in April, after assess­ing its pri­va­cy and data prac­tices in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. Mr. Archi­bong said the com­pa­ny had con­clud­ed that the part­ner­ships were no longer need­ed to serve Face­book users. About 22 of them have been shut down.
    ...

    So what kind of data are device manufacturers actually collecting? Well, it’s unclear if all device makers get the same level of access. But BlackBerry, for example, can access more than 50 types of information on users and their friends. Information like Facebook users’ relationship status, religion, political leaning and upcoming events:

    ...
    Some device part­ners can retrieve Face­book users’ rela­tion­ship sta­tus, reli­gion, polit­i­cal lean­ing and upcom­ing events, among oth­er data. Tests by The Times showed that the part­ners request­ed and received data in the same way oth­er third par­ties did.

    Facebook’s view that the device mak­ers are not out­siders lets the part­ners go even fur­ther, The Times found: They can obtain data about a user’s Face­book friends, even those who have denied Face­book per­mis­sion to share infor­ma­tion with any third par­ties.

    In inter­views, sev­er­al for­mer Face­book soft­ware engi­neers and secu­ri­ty experts said they were sur­prised at the abil­i­ty to over­ride shar­ing restric­tions.

    “It’s like hav­ing door locks installed, only to find out that the lock­smith also gave keys to all of his friends so they can come in and rifle through your stuff with­out hav­ing to ask you for per­mis­sion,” said Ashkan Soltani, a research and pri­va­cy con­sul­tant who for­mer­ly served as the F.T.C.’s chief tech­nol­o­gist.

    ...

    In all, Face­book empow­ers Black­Ber­ry devices to access more than 50 types of infor­ma­tion about users and their friends, The Times found.

    And as the New York Times discovered after testing a reporter’s BlackBerry device, BlackBerry was able to grab information on friends of friends, allowing the one device they tested to collect identifying information on nearly 295,000 Facebook users:

    ...
    The Black­Ber­ry device was also able to retrieve iden­ti­fy­ing infor­ma­tion for near­ly 295,000 Face­book users. Most of them were sec­ond-degree Face­book friends of the reporter, or friends of friends.
    ...

    And this information was collected and sent to the BlackBerry “Hub” app immediately after the reporter connected the device to his Facebook account:

    ...
    To test one partner’s access to Facebook’s pri­vate data chan­nels, The Times used a reporter’s Face­book account — with about 550 friends — and a 2013 Black­Ber­ry device, mon­i­tor­ing what data the device request­ed and received. (More recent Black­Ber­ry devices, which run Google’s Android oper­at­ing sys­tem, do not use the same pri­vate chan­nels, Black­Ber­ry offi­cials said.)

    Imme­di­ate­ly after the reporter con­nect­ed the device to his Face­book account, it request­ed some of his pro­file data, includ­ing user ID, name, pic­ture, “about” infor­ma­tion, loca­tion, email and cell­phone num­ber. The device then retrieved the reporter’s pri­vate mes­sages and the respons­es to them, along with the name and user ID of each per­son with whom he was com­mu­ni­cat­ing.

    The data flowed to a Black­Ber­ry app known as the Hub, which was designed to let Black­Ber­ry users view all of their mes­sages and social media accounts in one place.

    The Hub also request­ed — and received — data that Facebook’s pol­i­cy appears to pro­hib­it. Since 2015, Face­book has said that apps can request only the names of friends using the same app. But the Black­Ber­ry app had access to all of the reporter’s Face­book friends and, for most of them, returned infor­ma­tion such as user ID, birth­day, work and edu­ca­tion his­to­ry and whether they were cur­rent­ly online.

    ...
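
    The pattern the Times describes, a request naming an account and a list of profile fields, maps onto the familiar field-expansion style of Facebook’s public Graph API. The private device channels were never publicly documented, so the sketch below is an assumption modeled on the public API’s conventions rather than BlackBerry’s actual calls; the endpoint, token and field names are placeholders:

    # Sketch of a field-style friends request in the spirit of what the NYT
    # observed the BlackBerry Hub receiving. Modeled on Facebook's public
    # Graph API conventions; the private device APIs were undocumented, so
    # treat the endpoint and field names as illustrative assumptions.
    import requests

    resp = requests.get(
        "https://graph.facebook.com/me/friends",  # public-API analogue
        params={
            "access_token": "DEVICE_TOKEN_PLACEHOLDER",
            "fields": "id,name,birthday,work,education",
        },
    )
    for friend in resp.json().get("data", []):
        print(friend.get("id"), friend.get("name"), friend.get("birthday"))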

    Not surprisingly, Facebook whistle-blower Sandy Parakilas, who left the company in 2012, recalls this data sharing arrangement triggering discussions within Facebook as early as 2012. So Facebook has had internal concerns about this kind of data sharing for the past six years. Concerns that were apparently ignored:

    ...
    But the device part­ner­ships pro­voked dis­cus­sion even with­in Face­book as ear­ly as 2012, accord­ing to Sandy Parak­i­las, who at the time led third-par­ty adver­tis­ing and pri­va­cy com­pli­ance for Facebook’s plat­form.

    “This was flagged inter­nal­ly as a pri­va­cy issue,” said Mr. Parak­i­las, who left Face­book that year and has recent­ly emerged as a harsh crit­ic of the com­pa­ny. “It is shock­ing that this prac­tice may still con­tin­ue six years lat­er, and it appears to con­tra­dict Facebook’s tes­ti­mo­ny to Con­gress that all friend per­mis­sions were dis­abled.”
    ...

    Also keep in mind that the main concern Sandy Parakilas recalls hearing Facebook executives express over the app developer data sharing back in 2012 was that these developers were collecting so much information that they were going to be able to create their own social networks. As Parakilas put it, “They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people... They were worried that they were going to build their own social networks.”

    Well, the major device mak­ers have undoubt­ed­ly been gath­er­ing far more infor­ma­tion than major app devel­op­ers, espe­cial­ly when you fac­tor in the “friends of friends” option and the fact that they’ve appar­ent­ly had access to this kind of data up until now. And that means these device mak­ers must already pos­sess remark­ably detailed social net­works of their own at this point.
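
    And composing those responses into a social graph of their own would be trivial. A minimal sketch, with invented IDs, of how per-user friend lists (as a device channel might return them) stack up into a second-degree graph:

    # Hypothetical sketch: fold per-user friend lists into a symmetric
    # social graph and count everyone within two hops. IDs are invented.

    friend_lists = {
        "me":    ["alice", "bob"],
        "alice": ["me", "bob", "carol"],
        "bob":   ["me", "alice", "dave"],
    }

    graph = {}  # user -> set of direct connections
    for user, friends in friend_lists.items():
        graph.setdefault(user, set()).update(friends)
        for f in friends:  # record the reverse edge too
            graph.setdefault(f, set()).add(user)

    # Everyone reachable within two hops of "me":
    reach = set(graph["me"])
    for friend in list(graph["me"]):
        reach |= graph.get(friend, set())
    reach.discard("me")
    print(sorted(reach))  # ['alice', 'bob', 'carol', 'dave']

    Scale the toy numbers up to 550 friends averaging a few hundred friends each and you land right at the ~295,000-person reach the Times measured from a single handset.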

    So when you hear Face­book exec­u­tives char­ac­ter­iz­ing these device man­u­fac­tur­ers as “exten­sions of Face­book”...

    ...
    The com­pa­ny views its device part­ners as exten­sions of Face­book, serv­ing its more than two bil­lion users, the offi­cials said.
    ...

    ...it’s prob­a­bly the most hon­est thing Face­book has said about this entire scan­dal.

    Posted by Pterrafractyl | June 7, 2018, 10:41 pm
  18. Here’s an angle to the Facebook data privacy scandal that has received surprisingly little attention because, when it comes to privacy violations, this just might be the worst one we’ve seen: it turns out one of the types of data that Facebook gave app developers permission to access is the contents of users’ private inboxes.

    Yep, it’s not just your Face­book ‘pro­file’ of data points Face­book has col­lect­ed on you. Or all the things you ‘liked’. App devel­op­ers appar­ent­ly also could gain access to the pri­vate mes­sages you received. And much like the ‘friends per­mis­sion’ option exploit­ed by Cam­bridge Ana­lyt­i­ca to get pro­file infor­ma­tion on all of the friends of app users with­out the per­mis­sions of those friends, this abil­i­ty to access the con­tents of your inbox is obvi­ous­ly a pri­va­cy vio­la­tion of the peo­ple send­ing you those mes­sages.

    The one pos­i­tive aspect of this whole sto­ry is that at least app devel­op­ers had to let users know that they were giv­ing access to the inbox. So users pre­sum­ably had to agree some­how. And Face­book states that users had to explic­it­ly give per­mis­sion for this. So at least this was­n’t a default app per­mis­sion.

    But when asked about the language used in this notification, Facebook had no response. So we can’t assume that all people who used Facebook apps were giving developers access to their private inbox messages, but we also have no idea how many people were tricked into it with deceptive language during the permissions notifications.

    Of course, one of the big questions is whether this inbox permissions feature got exploited by Cambridge Analytica. Yes, and that’s actually how we learned about its existence: when Facebook started sending out notifications to users that they may have been impacted by the Cambridge Analytica data collection (which impacted 87 million users) via the “This Is Your Digital Life” app created by Aleksandr Kogan, it sent the following notification informing people that they may have had their personal messages collected:

    A small num­ber of peo­ple who logged into “This Is Your Dig­i­tal Life” also shared their own News Feed, time­line, posts and mes­sages which may have includ­ed post and mes­sages from you. They may also have shared your home­town.

    So Facebook casually informed users that only a “small number of people” who used the Cambridge Analytica “This Is Your Digital Life” app may have given access to “messages from you”. Did they actually give developers access to messages from you? That’s left a mystery.

    And notice that the language in that Facebook notification says user posts were also made available to developers. That’s been one of the things that’s never been entirely clear in the reporting on this topic: were developers given access to the actual private posts people make? The language of that notification is ambiguous as to whether apps could access private posts or only public posts, but given the way everything else has played out in this story, it seems highly likely that private posts were included.

    The inbox permission was phased out in 2014 along with the “friends permission” option and many of the other permissions Facebook used to grant to app developers. There was a one year grace period for app developers to adjust to the new rules that took effect in April of 2015. But as the article notes, developers actually retained access to the inbox permission until October 6 of 2015. And that’s well into the US 2016 election cycle, which raises the fascinating possibility that this ‘feature’ could actually have been used to spy on the US political campaigns. Or the UK Brexit campaign. Or any other political campaign around the world at that time. Or anything else of importance across the world from 2010 to 2015, when these mailbox reading options were available to app developers.
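
    For a sense of how low the technical bar was, under v1.0 of the Graph API an app that had been granted the read_mailbox permission could pull a user’s message threads with a single authenticated HTTP request. The permission name and its October 2015 deprecation come from the changelog cited in the article below; the exact response fields here should be treated as an approximation, and the token is a placeholder:

    # Sketch of a Graph API v1.0 inbox fetch, as the read_mailbox permission
    # allowed before its deprecation on 6 October 2015. The token is a
    # placeholder and this endpoint has long since been shut off.
    import requests

    ACCESS_TOKEN = "USER_TOKEN_WITH_read_mailbox"  # placeholder

    resp = requests.get(
        "https://graph.facebook.com/v1.0/me/inbox",
        params={"access_token": ACCESS_TOKEN},
    )

    # Each thread carried both sides of the conversation, which is why one
    # user's consent exposed every correspondent's messages as well.
    for thread in resp.json().get("data", []):
        for msg in thread.get("comments", {}).get("data", []):
            sender = msg.get("from", {}).get("name")
            print(sender, ":", msg.get("message"))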

    And that’s what makes it so amaz­ing that this par­tic­u­lar sto­ry was­n’t big­ger: back in April Face­book acknowl­edged that it gave almost any­one the poten­tial capac­i­ty to spy on pri­vate Face­book mes­sages and users had almost no idea this was going on. That seems like a pret­ty mas­sive scan­dal:

    The Reg­is­ter

    Face­book admits: Apps were giv­en users’ per­mis­sion to go into their inbox­es
    Only the inbox own­er had to con­sent to it, though... not the peo­ple they con­versed with

    By Rebec­ca Hill
    11 Apr 2018 at 12:24

    Face­book has admit­ted that some apps had access to users’ pri­vate mes­sages, thanks to a pol­i­cy that allowed devs to request mail­box per­mis­sions.

    The rev­e­la­tion came as cur­rent Face­book users found out whether they or their friends had used the “This Is Your Dig­i­tal Life” app that allowed aca­d­e­m­ic Alek­san­dr Kogan to col­lect data on users and their friends.

    Users whose friends had been suck­ered in by the quiz were told that as a result, their pub­lic pro­file, Page likes, birth­day and cur­rent city were “like­ly shared” with the app.

    So far, so expect­ed. But, the noti­fi­ca­tion went on:

    A small num­ber of peo­ple who logged into “This Is Your Dig­i­tal Life” also shared their own News Feed, time­line, posts and mes­sages which may have includ­ed post and mes­sages from you. They may also have shared your home­town.

    That’s because, back in 2014 when the app was in use, devel­op­ers using Facebook’s Graph API to get data off the plat­form could ask for read_mailbox per­mis­sion, allow­ing them access to a person’s inbox.

    That was just one of a series of extend­ed per­mis­sions grant­ed to devs under v1.0 of the Graph API, which was first intro­duced in 2010.

    Fol­low­ing pres­sure from pri­va­cy activists – but much to the dis­ap­point­ment of devel­op­ers – Face­book shut that tap off for most per­mis­sions in April 2015, although the changel­og shows that read_mailbox wasn’t dep­re­cat­ed until 6 Octo­ber 2015.

    Face­book con­firmed to The Reg­is­ter that this access had been request­ed by the app and that a small num­ber of peo­ple had grant­ed it per­mis­sion.

    “In 2014, Facebook’s plat­form pol­i­cy allowed devel­op­ers to request mail­box per­mis­sions but only if the per­son explic­it­ly gave con­sent for this to hap­pen,” a spokes­borg told us.

    “Accord­ing to our records only a very small num­ber of peo­ple explic­it­ly opt­ed into shar­ing this infor­ma­tion. The fea­ture was turned off in 2015.”

    Face­book tried to down­play the sig­nif­i­cance of the eye­brow-rais­ing rev­e­la­tion, say­ing it was at a time when mail­box­es were “more of an inbox”, and claimed it was main­ly used for apps offer­ing a com­bined mes­sag­ing ser­vice.

    “At the time when peo­ple pro­vid­ed access to their mail­box­es – when Face­book mes­sages were more of an inbox and less of a real-time mes­sag­ing ser­vice – this enabled things like desk­top apps that com­bined Face­book mes­sages with mes­sages from oth­er ser­vices like SMS so that a per­son could access their mes­sages all in one place,” the spokesper­son said.

    Pre­sum­ably the aim is to imply users were well aware of the per­mis­sions they were grant­i­ng, but it’s not clear how those requests would have been phrased for each app.

    We asked Face­book what form this would have tak­en – for instance if users could have been faced with a list of pre-ticked box­es, one of which gave per­mis­sion for inbox-surf­ing – but got no response.

    Although Face­book has indi­cat­ed Kogan’s app did request mail­box per­mis­sions, Cam­bridge Ana­lyt­i­ca – which licensed the user data from Kogan – denied it received any con­tent of any pri­vate mes­sages from his firm, GSR.

    GSR did not share the con­tent of any pri­vate mes­sages with Cam­bridge Ana­lyt­i­ca or SCL Elec­tions. Nei­ther com­pa­ny has ever han­dled such data.— Cam­bridge Ana­lyt­i­ca (@CamAnalytica) April 10, 2018

    But this is about more than GSR, Cam­bridge and SCL Elec­tions: for years, Facebook’s pol­i­cy allowed all devel­op­ers to request access to users’ inbox­es.

    That it was done with only one user’s per­mis­sion – the indi­vid­u­als “Friends” weren’t alert­ed to the fact mes­sages they had every right to believe were pri­vate, were not – is yet more evi­dence of just how blasé Face­book has been about users’ pri­va­cy.

    Mean­while, the firm has yet to offer details of a full audit of all the apps that asked for sim­i­lar amounts of infor­ma­tion as Kogan’s app did – although it has shut down some.

    And it is only offer­ing cur­rent users a sim­ple way to find out if they were affect­ed by the CA scan­dal; those who have since deac­ti­vat­ed or delet­ed their accounts have yet to be noti­fied. We’ve asked the firm how it plans to offer this infor­ma­tion, but it has yet to respond.

    Amid increased scruti­ny, Face­book is try­ing to sell the idea that it’s sor­ry, that it has learned from its mis­takes and that it is putting users first.

    But it’s going to be a tough sell: just last night, Mark Zucker­berg revealed that, when the firm first found out about GSR hand­ing data over to Cam­bridge Ana­lyt­i­ca in 2015, it chose not to tell users because it felt that ask­ing the firm to delete the data meant it was a “closed case”.

    ...

    ———–

    “Facebook admits: Apps were given users’ permission to go into their inboxes” by Rebecca Hill; The Register; 04/11/2018