The Cambridge Analytica Microcosm in Our Panoptic Macrocosm

Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook. It would be a well deserved unfriending after the scandalous revelations in a recent series of articles centered on the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with the far right goals and questionable actions of the firm.

And it turns out those questionable actions by Cambridge Analytica involve a far larger and more scandalous Facebook policy, brought to light by another whistle-blower: Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012.

So here’s a rough breakdown of what’s been learned so far:

According to Christopher Wylie, Cambridge Analytica was “harvesting” massive amounts of data from Facebook profiles of people who never gave their permission, by utilizing a Facebook loophole. This “friends permissions” loophole allowed app developers to scrape information not just from the Facebook profiles of the people who agreed to use their apps but also from their friends’ profiles. In other words, if your Facebook friend downloaded Cambridge Analytica’s app, Cambridge Analytica was allowed to grab private information from your Facebook profile without your permission. And you would never know it.
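
To make the mechanics of that loophole concrete, here is a minimal sketch of roughly how an app could walk from one consenting user to that user’s entire friend list under the early Graph API, which offered extended friends_* permissions (e.g., friends_likes). This is purely illustrative: it is not Cambridge Analytica’s actual code, the endpoint and field names follow the long-since-removed v1.0-era API as best remembered, and the token is a placeholder.

```python
import requests

GRAPH = "https://graph.facebook.com"  # v1.0-era endpoint; friend permissions are long gone

# Placeholder token, standing in for what an app received after ONE user
# authorized it with extended permissions such as "friends_likes" and
# "friends_location".
USER_TOKEN = "EAAB..."

def harvest_friend_profiles(token):
    """Fetch the authorizing user's friend list, then pull each friend's
    profile fields -- data those friends were never asked about."""
    friends = requests.get(
        f"{GRAPH}/me/friends", params={"access_token": token}
    ).json().get("data", [])

    profiles = {}
    for friend in friends:
        # Under the old friends_* permissions, only the app user consented;
        # the friends whose data is fetched here never saw a prompt.
        resp = requests.get(
            f"{GRAPH}/{friend['id']}",
            params={"fields": "name,likes,location", "access_token": token},
        )
        profiles[friend["id"]] = resp.json()
    return profiles
```

One consenting user could thus fan out into hundreds of scraped profiles, which is exactly the ~185-to-1 ratio described below.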

So how many profiles was Cambridge Analytica allowed to “harvest” utilizing this “friends permission” feature? About 50 million, and only a tiny fraction (~270,000) of those 50 million people actually agreed to use Cambridge Analytica’s app. The rest were all their friends. Facebook literally used the connectivity of its users against them.

Keep in mind that this isn’t a new revelation. There were reports last year about how Cambridge Analytica paid ~100,000 people a dollar or two (via Amazon’s Mechanical Turk micro-task platform) to take an online survey. But the only way they could be paid was to download an app that gave Cambridge Analytica access to the profiles of all their Facebook friends, eventually yielding ~30 million “harvested” profiles. According to these new reports, that number is closer to 50 million profiles.

Before that, there was also a report from December of 2015 about Cambridge Analytica’s building of “psychographic profiles” for the Ted Cruz campaign. And that report also included the fact that this involved Facebook data harvested largely without users’ permission.

So the fact that Cambridge Analytica was secretly harvesting private Facebook user data without permission isn’t the big revelation here. What’s new is the revelation that what Cambridge Analytica did was integral to Facebook’s business model for years, and very widespread.

This is where Sandy Parakilas comes into the picture. According to Parakilas, the profile-scraping loophole that Cambridge Analytica exploited with its app was routinely exploited by possibly hundreds of thousands of other app developers for years. Yep. It turns out that Facebook had an arrangement going back to 2007 under which the company took a 30 percent cut of the money app developers made off their Facebook apps, and in exchange these developers were given the ability to scrape the profiles of not just the people who used their apps but also their friends. In other words, Facebook was essentially selling the private information of its users to app developers. Secretly. Well, except it wasn’t a secret to all those app developers. That’s also part of this scandal.

This “friends permission” feature started getting phased out around 2012, although it turns out Cambridge Analytica was one of the very last apps allowed to use it, up into 2014.

Facebook has tried to defend itself by asserting that it was only making this data available for things like academic research and that Cambridge Analytica was therefore misusing the data. And academic research was in fact the cover story Cambridge Analytica used. Cambridge Analytica actually set up a shell company, Global Science Research (GSR), that was run by a Cambridge University academic, Aleksandr Kogan, and claimed to be purely interested in using that Facebook data for academic research. The collected data was then sent off to Cambridge Analytica. But according to Parakilas, Facebook was allowing developers to utilize this “friends permissions” feature for reasons as vague as “improving user experiences”. Parakilas saw plenty of apps harvesting this data for commercial purposes. Even worse, both Parakilas and Wylie paint a picture of Facebook releasing this data and then doing almost nothing to ensure it wasn’t misused.

So we’ve learned that Facebook was allowing app developers to “harvest” private data on Facebook users without their permission from 2007–2014, and now we get to perhaps the most chilling part: according to Parakilas, this data is almost certainly floating around on the black market. And it was so easy to set up an app and start collecting this kind of data that anyone with basic app-development skills could start trawling Facebook for data. A majority of Facebook users probably had their profiles secretly “harvested” during this period. If true, that means there’s likely a massive black market of Facebook user profiles just floating around out there, and Facebook has done little to nothing to address it.

Parakilas, whose job it was to police data breaches by third-party software developers from 2011–2012, understandably grew quite concerned over the risks to user data inherent in this business model. So what did Facebook’s leadership do when he raised these concerns? They essentially responded with a “do you really want to know how this data is being used?” attitude and actively discouraged him from investigating how the data might be abused. Intentionally not knowing about abuses was another part of the business model. Crackdowns on “rogue developers” were very rare, and the approval of Facebook CEO Mark Zuckerberg himself was required to get an app kicked off the platform.

Facebook has been publicly denying allegations like this for years. It was the public denials that led Parakilas to come forward.

And it gets worse. It turns out that Aleksandr Kogan, the University of Cambridge academic who ended up teaming up with Cambridge Analytica and built the app that harvested the data, had a remarkably close working relationship with Facebook. So close that Kogan co-authored an academic study published in 2015 with Facebook employees. In addition, one of Kogan’s partners in the data harvesting, Joseph Chancellor, was also an author on the study and went on to join Facebook a few months after it was published.

It also looks like Steve Bannon was overseeing this entire process, although he claims to know nothing.

Oh, and Palantir, the private intelligence firm with deep ties to the US national security state, owned by far right Facebook board member Peter Thiel, appears to have had an informal relationship with Cambridge Analytica this whole time, with Palantir employees reportedly traveling to Cambridge Analytica’s office to help build the psychological profiles. And this state of affairs is an extension of how the internet has been used from its very conception a half century ago.

And that’s all part of why the Great Unfriending of Facebook really is long overdue. It’s one really big reason to delete your Facebook account, comprised of many, many small egregious reasons.

So let’s start taking a look at those many small reasons to delete your Facebook account with a New York Times story about Christopher Wylie, his account of the origins of Cambridge Analytica, and the crucial role Facebook “harvesting” played in providing the company with the data it needed to carry out the goal of its chief financiers: waging the kind of “culture war” the billionaire far right Mercer family and Steve Bannon wanted to wage:

The New York Times

How Trump Con­sul­tants Exploit­ed the Face­book Data of Mil­lions

by Matthew Rosen­berg, Nicholas Con­fes­sore and Car­ole Cad­wal­ladr;
03/17/2018

As the upstart vot­er-pro­fil­ing com­pa­ny Cam­bridge Ana­lyt­i­ca pre­pared to wade into the 2014 Amer­i­can midterm elec­tions, it had a prob­lem.

The firm had secured a $15 mil­lion invest­ment from Robert Mer­cer, the wealthy Repub­li­can donor, and wooed his polit­i­cal advis­er, Stephen K. Ban­non, with the promise of tools that could iden­ti­fy the per­son­al­i­ties of Amer­i­can vot­ers and influ­ence their behav­ior. But it did not have the data to make its new prod­ucts work.

So the firm har­vest­ed pri­vate infor­ma­tion from the Face­book pro­files of more than 50 mil­lion users with­out their per­mis­sion, accord­ing to for­mer Cam­bridge employ­ees, asso­ciates and doc­u­ments, mak­ing it one of the largest data leaks in the social network’s his­to­ry. The breach allowed the com­pa­ny to exploit the pri­vate social media activ­i­ty of a huge swath of the Amer­i­can elec­torate, devel­op­ing tech­niques that under­pinned its work on Pres­i­dent Trump’s cam­paign in 2016.

An exam­i­na­tion by The New York Times and The Observ­er of Lon­don reveals how Cam­bridge Analytica’s dri­ve to bring to mar­ket a poten­tial­ly pow­er­ful new weapon put the firm — and wealthy con­ser­v­a­tive investors seek­ing to reshape pol­i­tics — under scruti­ny from inves­ti­ga­tors and law­mak­ers on both sides of the Atlantic.

Christo­pher Wylie, who helped found Cam­bridge and worked there until late 2014, said of its lead­ers: “Rules don’t mat­ter for them. For them, this is a war, and it’s all fair.”

“They want to fight a cul­ture war in Amer­i­ca,” he added. “Cam­bridge Ana­lyt­i­ca was sup­posed to be the arse­nal of weapons to fight that cul­ture war.”

Details of Cambridge’s acqui­si­tion and use of Face­book data have sur­faced in sev­er­al accounts since the busi­ness began work­ing on the 2016 cam­paign, set­ting off a furi­ous debate about the mer­its of the firm’s so-called psy­cho­graph­ic mod­el­ing tech­niques.

But the full scale of the data leak involv­ing Amer­i­cans has not been pre­vi­ous­ly dis­closed — and Face­book, until now, has not acknowl­edged it. Inter­views with a half-dozen for­mer employ­ees and con­trac­tors, and a review of the firm’s emails and doc­u­ments, have revealed that Cam­bridge not only relied on the pri­vate Face­book data but still pos­sess­es most or all of the trove.

Cam­bridge paid to acquire the per­son­al infor­ma­tion through an out­side researcher who, Face­book says, claimed to be col­lect­ing it for aca­d­e­m­ic pur­pos­es.

Dur­ing a week of inquiries from The Times, Face­book down­played the scope of the leak and ques­tioned whether any of the data still remained out of its con­trol. But on Fri­day, the com­pa­ny post­ed a state­ment express­ing alarm and promis­ing to take action.

“This was a scam — and a fraud,” Paul Gre­w­al, a vice pres­i­dent and deputy gen­er­al coun­sel at the social net­work, said in a state­ment to The Times ear­li­er on Fri­day. He added that the com­pa­ny was sus­pend­ing Cam­bridge Ana­lyt­i­ca, Mr. Wylie and the researcher, Alek­san­dr Kogan, a Russ­ian-Amer­i­can aca­d­e­m­ic, from Face­book. “We will take what­ev­er steps are required to see that the data in ques­tion is delet­ed once and for all — and take action against all offend­ing par­ties,” Mr. Gre­w­al said.

Alexan­der Nix, the chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and oth­er offi­cials had repeat­ed­ly denied obtain­ing or using Face­book data, most recent­ly dur­ing a par­lia­men­tary hear­ing last month. But in a state­ment to The Times, the com­pa­ny acknowl­edged that it had acquired the data, though it blamed Mr. Kogan for vio­lat­ing Facebook’s rules and said it had delet­ed the infor­ma­tion as soon as it learned of the prob­lem two years ago.

In Britain, Cam­bridge Ana­lyt­i­ca is fac­ing inter­twined inves­ti­ga­tions by Par­lia­ment and gov­ern­ment reg­u­la­tors into alle­ga­tions that it per­formed ille­gal work on the “Brex­it” cam­paign. The coun­try has strict pri­va­cy laws, and its infor­ma­tion com­mis­sion­er announced on Sat­ur­day that she was look­ing into whether the Face­book data was “ille­gal­ly acquired and used.”

In the Unit­ed States, Mr. Mercer’s daugh­ter, Rebekah, a board mem­ber, Mr. Ban­non and Mr. Nix received warn­ings from their lawyer that it was ille­gal to employ for­eign­ers in polit­i­cal cam­paigns, accord­ing to com­pa­ny doc­u­ments and for­mer employ­ees.

Con­gres­sion­al inves­ti­ga­tors have ques­tioned Mr. Nix about the company’s role in the Trump cam­paign. And the Jus­tice Department’s spe­cial coun­sel, Robert S. Mueller III, has demand­ed the emails of Cam­bridge Ana­lyt­i­ca employ­ees who worked for the Trump team as part of his inves­ti­ga­tion into Russ­ian inter­fer­ence in the elec­tion.

While the sub­stance of Mr. Mueller’s inter­est is a close­ly guard­ed secret, doc­u­ments viewed by The Times indi­cate that the firm’s British affil­i­ate claims to have worked in Rus­sia and Ukraine. And the Wik­iLeaks founder, Julian Assange, dis­closed in Octo­ber that Mr. Nix had reached out to him dur­ing the cam­paign in hopes of obtain­ing pri­vate emails belong­ing to Mr. Trump’s Demo­c­ra­t­ic oppo­nent, Hillary Clin­ton.

The doc­u­ments also raise new ques­tions about Face­book, which is already grap­pling with intense crit­i­cism over the spread of Russ­ian pro­pa­gan­da and fake news. The data Cam­bridge col­lect­ed from pro­files, a por­tion of which was viewed by The Times, includ­ed details on users’ iden­ti­ties, friend net­works and “likes.” Only a tiny frac­tion of the users had agreed to release their infor­ma­tion to a third par­ty.

“Pro­tect­ing people’s infor­ma­tion is at the heart of every­thing we do,” Mr. Gre­w­al said. “No sys­tems were infil­trat­ed, and no pass­words or sen­si­tive pieces of infor­ma­tion were stolen or hacked.”

Still, he added, “it’s a seri­ous abuse of our rules.”

Read­ing Vot­ers’ Minds

The Bor­deaux flowed freely as Mr. Nix and sev­er­al col­leagues sat down for din­ner at the Palace Hotel in Man­hat­tan in late 2013, Mr. Wylie recalled in an inter­view. They had much to cel­e­brate.

Mr. Nix, a brash sales­man, led the small elec­tions divi­sion at SCL Group, a polit­i­cal and defense con­trac­tor. He had spent much of the year try­ing to break into the lucra­tive new world of polit­i­cal data, recruit­ing Mr. Wylie, then a 24-year-old polit­i­cal oper­a­tive with ties to vet­er­ans of Pres­i­dent Obama’s cam­paigns. Mr. Wylie was inter­est­ed in using inher­ent psy­cho­log­i­cal traits to affect vot­ers’ behav­ior and had assem­bled a team of psy­chol­o­gists and data sci­en­tists, some of them affil­i­at­ed with Cam­bridge Uni­ver­si­ty.

The group exper­i­ment­ed abroad, includ­ing in the Caribbean and Africa, where pri­va­cy rules were lax or nonex­is­tent and politi­cians employ­ing SCL were hap­py to pro­vide gov­ern­ment-held data, for­mer employ­ees said.

Then a chance meet­ing brought Mr. Nix into con­tact with Mr. Ban­non, the Bre­it­bart News fire­brand who would lat­er become a Trump cam­paign and White House advis­er, and with Mr. Mer­cer, one of the rich­est men on earth.

Mr. Nix and his col­leagues court­ed Mr. Mer­cer, who believed a sophis­ti­cat­ed data com­pa­ny could make him a king­mak­er in Repub­li­can pol­i­tics, and his daugh­ter Rebekah, who shared his con­ser­v­a­tive views. Mr. Ban­non was intrigued by the pos­si­bil­i­ty of using per­son­al­i­ty pro­fil­ing to shift America’s cul­ture and rewire its pol­i­tics, recalled Mr. Wylie and oth­er for­mer employ­ees, who spoke on the con­di­tion of anonymi­ty because they had signed nondis­clo­sure agree­ments. Mr. Ban­non and the Mer­cers declined to com­ment.

Mr. Mer­cer agreed to help finance a $1.5 mil­lion pilot project to poll vot­ers and test psy­cho­graph­ic mes­sag­ing in Virginia’s guber­na­to­r­i­al race in Novem­ber 2013, where the Repub­li­can attor­ney gen­er­al, Ken Cuc­cinel­li, ran against Ter­ry McAu­li­ffe, the Demo­c­ra­t­ic fund-rais­er. Though Mr. Cuc­cinel­li lost, Mr. Mer­cer com­mit­ted to mov­ing for­ward.

The Mer­cers want­ed results quick­ly, and more busi­ness beck­oned. In ear­ly 2014, the investor Toby Neuge­bauer and oth­er wealthy con­ser­v­a­tives were prepar­ing to put tens of mil­lions of dol­lars behind a pres­i­den­tial cam­paign for Sen­a­tor Ted Cruz of Texas, work that Mr. Nix was eager to win.

...

Mr. Wylie’s team had a big­ger prob­lem. Build­ing psy­cho­graph­ic pro­files on a nation­al scale required data the com­pa­ny could not gath­er with­out huge expense. Tra­di­tion­al ana­lyt­ics firms used vot­ing records and con­sumer pur­chase his­to­ries to try to pre­dict polit­i­cal beliefs and vot­ing behav­ior.

But those kinds of records were use­less for fig­ur­ing out whether a par­tic­u­lar vot­er was, say, a neu­rot­ic intro­vert, a reli­gious extro­vert, a fair-mind­ed lib­er­al or a fan of the occult. Those were among the psy­cho­log­i­cal traits the firm claimed would pro­vide a unique­ly pow­er­ful means of design­ing polit­i­cal mes­sages.

Mr. Wylie found a solu­tion at Cam­bridge University’s Psy­cho­met­rics Cen­tre. Researchers there had devel­oped a tech­nique to map per­son­al­i­ty traits based on what peo­ple had liked on Face­book. The researchers paid users small sums to take a per­son­al­i­ty quiz and down­load an app, which would scrape some pri­vate infor­ma­tion from their pro­files and those of their friends, activ­i­ty that Face­book per­mit­ted at the time. The approach, the sci­en­tists said, could reveal more about a per­son than their par­ents or roman­tic part­ners knew — a claim that has been dis­put­ed.

When the Psy­cho­met­rics Cen­tre declined to work with the firm, Mr. Wylie found some­one who would: Dr. Kogan, who was then a psy­chol­o­gy pro­fes­sor at the uni­ver­si­ty and knew of the tech­niques. Dr. Kogan built his own app and in June 2014 began har­vest­ing data for Cam­bridge Ana­lyt­i­ca. The busi­ness cov­ered the costs — more than $800,000 — and allowed him to keep a copy for his own research, accord­ing to com­pa­ny emails and finan­cial records.

All he divulged to Face­book, and to users in fine print, was that he was col­lect­ing infor­ma­tion for aca­d­e­m­ic pur­pos­es, the social net­work said. It did not ver­i­fy his claim. Dr. Kogan declined to pro­vide details of what hap­pened, cit­ing nondis­clo­sure agree­ments with Face­book and Cam­bridge Ana­lyt­i­ca, though he main­tained that his pro­gram was “a very stan­dard vanil­la Face­book app.”

He ulti­mate­ly pro­vid­ed over 50 mil­lion raw pro­files to the firm, Mr. Wylie said, a num­ber con­firmed by a com­pa­ny email and a for­mer col­league. Of those, rough­ly 30 mil­lion — a num­ber pre­vi­ous­ly report­ed by The Inter­cept — con­tained enough infor­ma­tion, includ­ing places of res­i­dence, that the com­pa­ny could match users to oth­er records and build psy­cho­graph­ic pro­files. Only about 270,000 users — those who par­tic­i­pat­ed in the sur­vey — had con­sent­ed to hav­ing their data har­vest­ed.

Mr. Wylie said the Face­book data was “the sav­ing grace” that let his team deliv­er the mod­els it had promised the Mer­cers.

“We want­ed as much as we could get,” he acknowl­edged. “Where it came from, who said we could have it — we weren’t real­ly ask­ing.”

Mr. Nix tells a dif­fer­ent sto­ry. Appear­ing before a par­lia­men­tary com­mit­tee last month, he described Dr. Kogan’s con­tri­bu­tions as “fruit­less.”

An Inter­na­tion­al Effort

Just as Dr. Kogan’s efforts were get­ting under­way, Mr. Mer­cer agreed to invest $15 mil­lion in a joint ven­ture with SCL’s elec­tions divi­sion. The part­ners devised a con­vo­lut­ed cor­po­rate struc­ture, form­ing a new Amer­i­can com­pa­ny, owned almost entire­ly by Mr. Mer­cer, with a license to the psy­cho­graph­ics plat­form devel­oped by Mr. Wylie’s team, accord­ing to com­pa­ny doc­u­ments. Mr. Ban­non, who became a board mem­ber and investor, chose the name: Cam­bridge Ana­lyt­i­ca.

The firm was effec­tive­ly a shell. Accord­ing to the doc­u­ments and for­mer employ­ees, any con­tracts won by Cam­bridge, orig­i­nal­ly incor­po­rat­ed in Delaware, would be ser­viced by Lon­don-based SCL and over­seen by Mr. Nix, a British cit­i­zen who held dual appoint­ments at Cam­bridge Ana­lyt­i­ca and SCL. Most SCL employ­ees and con­trac­tors were Cana­di­an, like Mr. Wylie, or Euro­pean.

But in July 2014, an Amer­i­can elec­tion lawyer advis­ing the com­pa­ny, Lau­rence Levy, warned that the arrange­ment could vio­late laws lim­it­ing the involve­ment of for­eign nation­als in Amer­i­can elec­tions.

In a memo to Mr. Ban­non, Ms. Mer­cer and Mr. Nix, the lawyer, then at the firm Bracewell & Giu­liani, warned that Mr. Nix would have to recuse him­self “from sub­stan­tive man­age­ment” of any clients involved in Unit­ed States elec­tions. The data firm would also have to find Amer­i­can cit­i­zens or green card hold­ers, Mr. Levy wrote, “to man­age the work and deci­sion mak­ing func­tions, rel­a­tive to cam­paign mes­sag­ing and expen­di­tures.”

In sum­mer and fall 2014, Cam­bridge Ana­lyt­i­ca dived into the Amer­i­can midterm elec­tions, mobi­liz­ing SCL con­trac­tors and employ­ees around the coun­try. Few Amer­i­cans were involved in the work, which includ­ed polling, focus groups and mes­sage devel­op­ment for the John Bolton Super PAC, con­ser­v­a­tive groups in Col­orado and the cam­paign of Sen­a­tor Thom Tillis, the North Car­oli­na Repub­li­can.

Cam­bridge Ana­lyt­i­ca, in its state­ment to The Times, said that all “per­son­nel in strate­gic roles were U.S. nation­als or green card hold­ers.” Mr. Nix “nev­er had any strate­gic or oper­a­tional role” in an Amer­i­can elec­tion cam­paign, the com­pa­ny said.

Whether the company’s Amer­i­can ven­tures vio­lat­ed elec­tion laws would depend on for­eign employ­ees’ roles in each cam­paign, and on whether their work count­ed as strate­gic advice under Fed­er­al Elec­tion Com­mis­sion rules.

Cam­bridge Ana­lyt­i­ca appears to have exhib­it­ed a sim­i­lar pat­tern in the 2016 elec­tion cycle, when the com­pa­ny worked for the cam­paigns of Mr. Cruz and then Mr. Trump. While Cam­bridge hired more Amer­i­cans to work on the races that year, most of its data sci­en­tists were cit­i­zens of the Unit­ed King­dom or oth­er Euro­pean coun­tries, accord­ing to two for­mer employ­ees.

Under the guid­ance of Brad Parscale, Mr. Trump’s dig­i­tal direc­tor in 2016 and now the cam­paign man­ag­er for his 2020 re-elec­tion effort, Cam­bridge per­formed a vari­ety of ser­vices, for­mer cam­paign offi­cials said. That includ­ed design­ing tar­get audi­ences for dig­i­tal ads and fund-rais­ing appeals, mod­el­ing vot­er turnout, buy­ing $5 mil­lion in tele­vi­sion ads and deter­min­ing where Mr. Trump should trav­el to best drum up sup­port.

Cam­bridge exec­u­tives have offered con­flict­ing accounts about the use of psy­cho­graph­ic data on the cam­paign. Mr. Nix has said that the firm’s pro­files helped shape Mr. Trump’s strat­e­gy — state­ments dis­put­ed by oth­er cam­paign offi­cials — but also that Cam­bridge did not have enough time to com­pre­hen­sive­ly mod­el Trump vot­ers.

In a BBC inter­view last Decem­ber, Mr. Nix said that the Trump efforts drew on “lega­cy psy­cho­graph­ics” built for the Cruz cam­paign.

After the Leak

By ear­ly 2015, Mr. Wylie and more than half his orig­i­nal team of about a dozen peo­ple had left the com­pa­ny. Most were lib­er­al-lean­ing, and had grown dis­en­chant­ed with work­ing on behalf of the hard-right can­di­dates the Mer­cer fam­i­ly favored.

Cam­bridge Ana­lyt­i­ca, in its state­ment, said that Mr. Wylie had left to start a rival firm, and that it lat­er took legal action against him to enforce intel­lec­tu­al prop­er­ty claims. It char­ac­ter­ized Mr. Wylie and oth­er for­mer “con­trac­tors” as engag­ing in “what is clear­ly a mali­cious attempt to hurt the com­pa­ny.”

Near the end of that year, a report in The Guardian revealed that Cam­bridge Ana­lyt­i­ca was using pri­vate Face­book data on the Cruz cam­paign, send­ing Face­book scram­bling. In a state­ment at the time, Face­book promised that it was “care­ful­ly inves­ti­gat­ing this sit­u­a­tion” and would require any com­pa­ny mis­us­ing its data to destroy it.

Face­book ver­i­fied the leak and — with­out pub­licly acknowl­edg­ing it — sought to secure the infor­ma­tion, efforts that con­tin­ued as recent­ly as August 2016. That month, lawyers for the social net­work reached out to Cam­bridge Ana­lyt­i­ca con­trac­tors. “This data was obtained and used with­out per­mis­sion,” said a let­ter that was obtained by the Times. “It can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly.”

Mr. Gre­w­al, the Face­book deputy gen­er­al coun­sel, said in a state­ment that both Dr. Kogan and “SCL Group and Cam­bridge Ana­lyt­i­ca cer­ti­fied to us that they destroyed the data in ques­tion.”

But copies of the data still remain beyond Facebook’s con­trol. The Times viewed a set of raw data from the pro­files Cam­bridge Ana­lyt­i­ca obtained.

While Mr. Nix has told law­mak­ers that the com­pa­ny does not have Face­book data, a for­mer employ­ee said that he had recent­ly seen hun­dreds of giga­bytes on Cam­bridge servers, and that the files were not encrypt­ed.

Today, as Cam­bridge Ana­lyt­i­ca seeks to expand its busi­ness in the Unit­ed States and over­seas, Mr. Nix has men­tioned some ques­tion­able prac­tices. This Jan­u­ary, in under­cov­er footage filmed by Chan­nel 4 News in Britain and viewed by The Times, he boast­ed of employ­ing front com­pa­nies and for­mer spies on behalf of polit­i­cal clients around the world, and even sug­gest­ed ways to entrap politi­cians in com­pro­mis­ing sit­u­a­tions.

All the scruti­ny appears to have dam­aged Cam­bridge Analytica’s polit­i­cal busi­ness. No Amer­i­can cam­paigns or “super PACs” have yet report­ed pay­ing the com­pa­ny for work in the 2018 midterms, and it is unclear whether Cam­bridge will be asked to join Mr. Trump’s re-elec­tion cam­paign.

In the mean­time, Mr. Nix is seek­ing to take psy­cho­graph­ics to the com­mer­cial adver­tis­ing mar­ket. He has repo­si­tioned him­self as a guru for the dig­i­tal ad age — a “Math Man,” he puts it. In the Unit­ed States last year, a for­mer employ­ee said, Cam­bridge pitched Mer­cedes-Benz, MetLife and the brew­er AB InBev, but has not signed them on.

———-

“How Trump Con­sul­tants Exploit­ed the Face­book Data of Mil­lions” by Matthew Rosen­berg, Nicholas Con­fes­sore and Car­ole Cad­wal­ladr; The New York Times; 03/17/2018

“They want to fight a cul­ture war in Amer­i­ca,” he added. “Cam­bridge Ana­lyt­i­ca was sup­posed to be the arse­nal of weapons to fight that cul­ture war.”

Cambridge Analytica was supposed to be the arsenal of weapons to fight the culture war its leadership wanted to wage. But that arsenal couldn’t be built without data on what makes us ‘tick’. That’s where Facebook profile harvesting came in:

The firm had secured a $15 mil­lion invest­ment from Robert Mer­cer, the wealthy Repub­li­can donor, and wooed his polit­i­cal advis­er, Stephen K. Ban­non, with the promise of tools that could iden­ti­fy the per­son­al­i­ties of Amer­i­can vot­ers and influ­ence their behav­ior. But it did not have the data to make its new prod­ucts work.

So the firm har­vest­ed pri­vate infor­ma­tion from the Face­book pro­files of more than 50 mil­lion users with­out their per­mis­sion, accord­ing to for­mer Cam­bridge employ­ees, asso­ciates and doc­u­ments, mak­ing it one of the largest data leaks in the social network’s his­to­ry. The breach allowed the com­pa­ny to exploit the pri­vate social media activ­i­ty of a huge swath of the Amer­i­can elec­torate, devel­op­ing tech­niques that under­pinned its work on Pres­i­dent Trump’s cam­paign in 2016.

An exam­i­na­tion by The New York Times and The Observ­er of Lon­don reveals how Cam­bridge Analytica’s dri­ve to bring to mar­ket a poten­tial­ly pow­er­ful new weapon put the firm — and wealthy con­ser­v­a­tive investors seek­ing to reshape pol­i­tics — under scruti­ny from inves­ti­ga­tors and law­mak­ers on both sides of the Atlantic.

Christo­pher Wylie, who helped found Cam­bridge and worked there until late 2014, said of its lead­ers: “Rules don’t mat­ter for them. For them, this is a war, and it’s all fair.”
...

And the acquisition of those 50 million Facebook profiles was never acknowledged by Facebook, until now. And most, or perhaps all, of that data is still in the hands of Cambridge Analytica:

...
But the full scale of the data leak involv­ing Amer­i­cans has not been pre­vi­ous­ly dis­closed — and Face­book, until now, has not acknowl­edged it. Inter­views with a half-dozen for­mer employ­ees and con­trac­tors, and a review of the firm’s emails and doc­u­ments, have revealed that Cam­bridge not only relied on the pri­vate Face­book data but still pos­sess­es most or all of the trove.
...

And Facebook isn’t alone in suddenly discovering that its data was “harvested” by Cambridge Analytica. Cambridge Analytica itself wouldn’t admit this either. Until now. Now Cambridge Analytica admits it did indeed obtain Facebook data. But the company blames it all on Aleksandr Kogan, the Cambridge University academic who ran the front company that paid people to take the psychological profile surveys, for violating Facebook’s data usage rules. It also claims it deleted all the “harvested” information two years ago, as soon as it learned there was a problem. That’s Cambridge Analytica’s new story and it’s sticking to it. For now:

...
Alexan­der Nix, the chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and oth­er offi­cials had repeat­ed­ly denied obtain­ing or using Face­book data, most recent­ly dur­ing a par­lia­men­tary hear­ing last month. But in a state­ment to The Times, the com­pa­ny acknowl­edged that it had acquired the data, though it blamed Mr. Kogan for vio­lat­ing Facebook’s rules and said it had delet­ed the infor­ma­tion as soon as it learned of the prob­lem two years ago.
...

But Christopher Wylie has a very different recollection of events. In 2013, Wylie was a 24-year-old political operative with ties to veterans of President Obama’s campaigns who was interested in using psychological traits to affect voters’ behavior. He even had a team of psychologists and data scientists, some of them affiliated with Cambridge University (where Aleksandr Kogan was also working at the time). And that expertise in psychological profiling for political purposes is why Mr. Nix recruited Wylie and his team.

Then Nix had a chance meeting with Steve Bannon and Robert Mercer. Mercer showed interest in the company because he believed it could make him a Republican kingmaker, while Bannon was focused on the possibility of using personality profiling to shift America’s culture and rewire its politics. Mercer ended up financing a $1.5 million pilot project: polling voters and testing psychographic messaging in Virginia’s 2013 gubernatorial race:

...
The Bor­deaux flowed freely as Mr. Nix and sev­er­al col­leagues sat down for din­ner at the Palace Hotel in Man­hat­tan in late 2013, Mr. Wylie recalled in an inter­view. They had much to cel­e­brate.

Mr. Nix, a brash sales­man, led the small elec­tions divi­sion at SCL Group, a polit­i­cal and defense con­trac­tor. He had spent much of the year try­ing to break into the lucra­tive new world of polit­i­cal data, recruit­ing Mr. Wylie, then a 24-year-old polit­i­cal oper­a­tive with ties to vet­er­ans of Pres­i­dent Obama’s cam­paigns. Mr. Wylie was inter­est­ed in using inher­ent psy­cho­log­i­cal traits to affect vot­ers’ behav­ior and had assem­bled a team of psy­chol­o­gists and data sci­en­tists, some of them affil­i­at­ed with Cam­bridge Uni­ver­si­ty.

The group exper­i­ment­ed abroad, includ­ing in the Caribbean and Africa, where pri­va­cy rules were lax or nonex­is­tent and politi­cians employ­ing SCL were hap­py to pro­vide gov­ern­ment-held data, for­mer employ­ees said.

Then a chance meet­ing brought Mr. Nix into con­tact with Mr. Ban­non, the Bre­it­bart News fire­brand who would lat­er become a Trump cam­paign and White House advis­er, and with Mr. Mer­cer, one of the rich­est men on earth.

Mr. Nix and his col­leagues court­ed Mr. Mer­cer, who believed a sophis­ti­cat­ed data com­pa­ny could make him a king­mak­er in Repub­li­can pol­i­tics, and his daugh­ter Rebekah, who shared his con­ser­v­a­tive views. Mr. Ban­non was intrigued by the pos­si­bil­i­ty of using per­son­al­i­ty pro­fil­ing to shift America’s cul­ture and rewire its pol­i­tics, recalled Mr. Wylie and oth­er for­mer employ­ees, who spoke on the con­di­tion of anonymi­ty because they had signed nondis­clo­sure agree­ments. Mr. Ban­non and the Mer­cers declined to com­ment.

Mr. Mer­cer agreed to help finance a $1.5 mil­lion pilot project to poll vot­ers and test psy­cho­graph­ic mes­sag­ing in Virginia’s guber­na­to­r­i­al race in Novem­ber 2013, where the Repub­li­can attor­ney gen­er­al, Ken Cuc­cinel­li, ran against Ter­ry McAu­li­ffe, the Demo­c­ra­t­ic fund-rais­er. Though Mr. Cuc­cinel­li lost, Mr. Mer­cer com­mit­ted to mov­ing for­ward.
...

So the pilot project proceeded, but there was a problem: Wylie’s team simply did not have the data it needed. They only had the kind of data traditional analytics firms had: voting records and consumer purchase histories. And getting the kind of data they wanted, to gain insight into voter neuroticism and psychological traits, could be very expensive:

...
The Mer­cers want­ed results quick­ly, and more busi­ness beck­oned. In ear­ly 2014, the investor Toby Neuge­bauer and oth­er wealthy con­ser­v­a­tives were prepar­ing to put tens of mil­lions of dol­lars behind a pres­i­den­tial cam­paign for Sen­a­tor Ted Cruz of Texas, work that Mr. Nix was eager to win.

...

Mr. Wylie’s team had a big­ger prob­lem. Build­ing psy­cho­graph­ic pro­files on a nation­al scale required data the com­pa­ny could not gath­er with­out huge expense. Tra­di­tion­al ana­lyt­ics firms used vot­ing records and con­sumer pur­chase his­to­ries to try to pre­dict polit­i­cal beliefs and vot­ing behav­ior.

But those kinds of records were use­less for fig­ur­ing out whether a par­tic­u­lar vot­er was, say, a neu­rot­ic intro­vert, a reli­gious extro­vert, a fair-mind­ed lib­er­al or a fan of the occult. Those were among the psy­cho­log­i­cal traits the firm claimed would pro­vide a unique­ly pow­er­ful means of design­ing polit­i­cal mes­sages.
...

And that’s where Aleksandr Kogan enters the picture. First, Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of setup he needed. Researchers there claimed to have developed techniques for mapping personality traits based on what people “liked” on Facebook. Better yet, this team already paid users small sums to take a personality quiz and download an app, which would scrape private information from their Facebook profiles and from their friends’ Facebook profiles. In other words, Cambridge University’s Psychometrics Centre was already employing exactly the same kind of “harvesting” model Kogan and Cambridge Analytica eventually adopted.
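
To get a concrete sense of what “mapping personality traits based on likes” involves, here is a toy sketch of the general pipeline the Psychometrics Centre described in its published research: compress a sparse user-by-page “like” matrix into a handful of dense dimensions, then regress survey-measured trait scores onto those dimensions. The random data, the dimension count, and the model choice below are illustrative assumptions, not the Centre’s actual setup:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-in data: rows are users, columns are Facebook pages;
# 1 means the user "liked" that page. Real like-matrices are far larger.
likes = (rng.random((1000, 500)) < 0.05).astype(float)

# Survey-measured trait scores (e.g., "openness") for the consenting users.
openness = rng.normal(size=1000)

# Step 1: compress the sparse like-matrix into a few dense dimensions.
components = TruncatedSVD(n_components=40, random_state=0).fit_transform(likes)

# Step 2: fit a regression from those dimensions to the trait scores.
model = Ridge(alpha=1.0).fit(components, openness)

# Once trained, anyone whose likes are known -- consenting or not --
# can be assigned a predicted trait score.
print(model.predict(components[:5]))
```

The quiz-takers supply the labeled training data; the scraped friends supply the unlabeled profiles the model is then run against.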

But there was a problem for Wylie and his team: Cambridge University’s Psychometrics Centre declined to work with them:

...
Mr. Wylie found a solu­tion at Cam­bridge University’s Psy­cho­met­rics Cen­tre. Researchers there had devel­oped a tech­nique to map per­son­al­i­ty traits based on what peo­ple had liked on Face­book. The researchers paid users small sums to take a per­son­al­i­ty quiz and down­load an app, which would scrape some pri­vate infor­ma­tion from their pro­files and those of their friends, activ­i­ty that Face­book per­mit­ted at the time. The approach, the sci­en­tists said, could reveal more about a per­son than their par­ents or roman­tic part­ners knew — a claim that has been dis­put­ed.
...

But it wasn’t a particularly big problem, because Wylie found another Cambridge University psychologist who was familiar with the techniques and willing to do the job: Aleksandr Kogan. So Kogan built his own psychological profile app and began harvesting data for Cambridge Analytica in June 2014. Kogan was even allowed to keep the harvested data for his own research, according to his contract with Cambridge Analytica. According to Facebook, the only thing Kogan told it, and told the users of his app in the fine print, was that he was collecting information for academic purposes. And Facebook doesn’t appear to have ever attempted to verify that claim:

...
When the Psy­cho­met­rics Cen­tre declined to work with the firm, Mr. Wylie found some­one who would: Dr. Kogan, who was then a psy­chol­o­gy pro­fes­sor at the uni­ver­si­ty and knew of the tech­niques. Dr. Kogan built his own app and in June 2014 began har­vest­ing data for Cam­bridge Ana­lyt­i­ca. The busi­ness cov­ered the costs — more than $800,000 — and allowed him to keep a copy for his own research, accord­ing to com­pa­ny emails and finan­cial records.

All he divulged to Face­book, and to users in fine print, was that he was col­lect­ing infor­ma­tion for aca­d­e­m­ic pur­pos­es, the social net­work said. It did not ver­i­fy his claim. Dr. Kogan declined to pro­vide details of what hap­pened, cit­ing nondis­clo­sure agree­ments with Face­book and Cam­bridge Ana­lyt­i­ca, though he main­tained that his pro­gram was “a very stan­dard vanil­la Face­book app.”
...

In the end, Kogan’s app managed to “harvest” 50 million Facebook profiles based on a mere 270,000 people actually signing up for it. So for each person who signed up for the app, roughly 185 other people had their profiles sent to Kogan too.

And 30 million of those profiles contained information, like places of residence, that allowed the firm to match a Facebook profile with other records (presumably non-Facebook records) and build psychographic profiles, implying that those 30 million records were mapped to real-life people:

...
He ulti­mate­ly pro­vid­ed over 50 mil­lion raw pro­files to the firm, Mr. Wylie said, a num­ber con­firmed by a com­pa­ny email and a for­mer col­league. Of those, rough­ly 30 mil­lion — a num­ber pre­vi­ous­ly report­ed by The Inter­cept — con­tained enough infor­ma­tion, includ­ing places of res­i­dence, that the com­pa­ny could match users to oth­er records and build psy­cho­graph­ic pro­files. Only about 270,000 users — those who par­tic­i­pat­ed in the sur­vey — had con­sent­ed to hav­ing their data har­vest­ed.

Mr. Wylie said the Face­book data was “the sav­ing grace” that let his team deliv­er the mod­els it had promised the Mer­cers.
...

So the harvesting started in mid-2014, but by early 2015 Wylie and more than half his original team had left the firm to start a rival firm, although it sounds like concerns over the far right causes they were working for were also behind their departure:

...
By ear­ly 2015, Mr. Wylie and more than half his orig­i­nal team of about a dozen peo­ple had left the com­pa­ny. Most were lib­er­al-lean­ing, and had grown dis­en­chant­ed with work­ing on behalf of the hard-right can­di­dates the Mer­cer fam­i­ly favored.

Cam­bridge Ana­lyt­i­ca, in its state­ment, said that Mr. Wylie had left to start a rival firm, and that it lat­er took legal action against him to enforce intel­lec­tu­al prop­er­ty claims. It char­ac­ter­ized Mr. Wylie and oth­er for­mer “con­trac­tors” as engag­ing in “what is clear­ly a mali­cious attempt to hurt the com­pa­ny.”
...

Finally, the whole scandal went public. Well, at least partially: at the end of 2015, the Guardian reported on the Facebook profile collection scheme Cambridge Analytica was running for the Ted Cruz campaign. Facebook didn’t publicly acknowledge the truth of the report, but it did publicly state that it was “carefully investigating this situation.” Facebook also sent a letter to Cambridge Analytica demanding that it destroy the data...except the letter wasn’t sent until August of 2016.

...
Near the end of that year, a report in The Guardian revealed that Cam­bridge Ana­lyt­i­ca was using pri­vate Face­book data on the Cruz cam­paign, send­ing Face­book scram­bling. In a state­ment at the time, Face­book promised that it was “care­ful­ly inves­ti­gat­ing this sit­u­a­tion” and would require any com­pa­ny mis­us­ing its data to destroy it.

Face­book ver­i­fied the leak and — with­out pub­licly acknowl­edg­ing it — sought to secure the infor­ma­tion, efforts that con­tin­ued as recent­ly as August 2016. That month, lawyers for the social net­work reached out to Cam­bridge Ana­lyt­i­ca con­trac­tors. “This data was obtained and used with­out per­mis­sion,” said a let­ter that was obtained by the Times. “It can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly.”
...

Facebook now claims that both Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But, of course, this was a lie. The New York Times was shown sets of the raw data.

And even more disturbing, a former Cambridge Analytica employee says he recently saw hundreds of gigabytes of it on Cambridge Analytica’s servers. Unencrypted. Which means the data could potentially be grabbed by any Cambridge Analytica employee with access to those servers:

...
Mr. Gre­w­al, the Face­book deputy gen­er­al coun­sel, said in a state­ment that both Dr. Kogan and “SCL Group and Cam­bridge Ana­lyt­i­ca cer­ti­fied to us that they destroyed the data in ques­tion.”

But copies of the data still remain beyond Facebook’s con­trol. The Times viewed a set of raw data from the pro­files Cam­bridge Ana­lyt­i­ca obtained.

While Mr. Nix has told law­mak­ers that the com­pa­ny does not have Face­book data, a for­mer employ­ee said that he had recent­ly seen hun­dreds of giga­bytes on Cam­bridge servers, and that the files were not encrypt­ed.
...

So, to summarize the key points from this New York Times article:

1. In 2013, Cambridge Analytica is formed when Alexander Nix, then a salesman leading the small elections division at SCL Group, recruits Christopher Wylie and a team of psychologists to help develop a “political data” unit at the company, with an eye on the 2014 US midterms.

2. By chance, Nix and Wylie meet Steve Bannon and Robert Mercer, who are quickly sold on the idea of psychographic profiling for political purposes. Bannon was intrigued by the idea of using this data to wage the “culture war.” Mercer agrees to invest $1.5 million in a pilot project involving Virginia’s 2013 gubernatorial race. Their success is limited, as Wylie soon discovers that they don’t have the data they really need to carry out their psychographic profiling project. But Robert Mercer remained committed to the project.

3. Wylie found that Cambridge University’s Psychometrics Centre had exactly the kind of data they were seeking. Data that was being collected via an app administered through Facebook, where people were paid small amounts of money to take a survey, and in exchange the Psychometrics Centre was allowed to scrape their Facebook profile as well as the profiles of all their Facebook friends.

4. Cambridge University’s Psychometrics Centre rejected Wylie’s offer to work together, but another Cambridge University academic was willing to do so: Aleksandr Kogan. Kogan proceeded to start a company (as a front for Cambridge Analytica) and develop his own app, getting ~270,000 people to download it and give their permission for their profiles to be collected. But using the “friends permission” feature, Kogan’s app ended up collecting another ~50 million Facebook profiles from the friends of those 270,000 people. ~30 million of those profiles were matched to US voters.

5. By early 2015, Wylie and his left-leaning team members had left Cambridge Analytica and formed their own company, apparently due in part to concerns over the far right goals of the firm.

6. Cambridge Analytica goes on to work for the Ted Cruz campaign. In late 2015, it’s reported that Cambridge Analytica’s work for Cruz involved Facebook data from people who didn’t give their permission. Facebook issues a vague statement about how it’s going to investigate.

7. In August 2016, Facebook sends a letter to Cambridge Analytica asserting that the data was obtained and used without permission and must be deleted immediately. The New York Times was just shown copies of exactly that data in the course of writing its article. Hundreds of gigabytes of data that are completely outside Facebook’s control.

8. Cambridge Analytica CEO (now former CEO) Alexander Nix told lawmakers that the firm didn’t possess any Facebook data. So he was clearly lying.

9. Finally, a former Cambridge Analytica employee showed the New York Times hundreds of gigabytes of Facebook data. And it was unencrypted, so anyone with access to it could make a copy and give it to whomever they wanted.

And that’s what we learned from just the New York Times’s version of this story. The Observer was also talking with Christopher Wylie and other Cambridge Analytica whistle-blowers. And while its article largely covers the same story as the New York Times report, it contains some additional details:

1. For starters, the following article notes that Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app, and barred it from being sold on or used for advertising. That’s important to note because the stated use of the data grabbed by Aleksandr Kogan’s app was for research purposes. But “improving user experience in the app” is a far more generic reason for grabbing that data than academic research. And that hints at something we’re going to see below from a Facebook whistle-blower: all sorts of app developers were grabbing this kind of data using the ‘friends’ loophole for reasons that had absolutely nothing to do with academic purposes, and this was deemed fine by Facebook.

2. Facebook didn’t formally suspend Cambridge Analytica and Aleksandr Kogan from the platform until one day before the Observer article was published, more than two years after the initial reports in late 2015 about Cambridge Analytica misusing Facebook data for the Ted Cruz campaign. So if Facebook felt that Cambridge Analytica and Aleksandr Kogan were improperly obtaining and misusing its data, it sure tried hard not to let on until the very last moment.

3. Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.” Which, again, as we’re going to see, was a total lie according to a Facebook whistle-blower, because Facebook was routinely providing exactly the kind of data Kogan’s app collected to thousands of developers.

4. Aleksandr Kogan had a license from Facebook to collect profile data, but only for research purposes, so when he used the data for commercial purposes he was violating his agreement, according to the article. Kogan, for his part, maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps. And as we’re going to see in subsequent articles, it does indeed look like Kogan is correct: he was very open about using the data from the Cambridge Analytica app for commercial purposes, and Facebook had no problem with it.

5. In addition to being a Cambridge University academic, Aleksandr Kogan has links to a Russian university and took Russian grants for research. This will undoubtedly raise speculation that Kogan’s data was handed over to the Kremlin and used in the social-media influence campaign carried out by the Kremlin-linked Internet Research Agency. If so, it’s still important to keep in mind that, based on what we’re going to see from Facebook whistle-blower Sandy Parakilas, the Kremlin could have easily set up all sorts of Facebook apps for collecting this kind of data, because apparently anyone could do it as long as the data was for “improving the user experience”. That’s how obscene this situation is. Kogan was not at all needed to provide this data to the Kremlin, because it was so easy for anyone to obtain. In other words, we should assume all sorts of governments have this kind of data.

6. The legal letter Facebook sent to Cambridge Analytica in August 2016, demanding that it delete the data, went out just days before it was officially announced that Steve Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him. That sure makes it seem like Facebook knew about Bannon’s involvement with Cambridge Analytica and the fact that Bannon was about to become Trump’s campaign manager and bring Cambridge Analytica into the campaign.

7. Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.”

So as we can see, like the proverbial onion, the more layers you peel back on the story Cambridge Analytica and Facebook have been peddling about how this data was obtained and used, the more acrid and malodorous it gets. With a distinct tinge of BS:

The Guardian

Revealed: 50 mil­lion Face­book pro­files har­vest­ed for Cam­bridge Ana­lyt­i­ca in major data breach

Whistle­blow­er describes how firm linked to for­mer Trump advis­er Steve Ban­non com­piled user data to tar­get Amer­i­can vot­ers

Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son

Sat 17 Mar 2018 18.03 EDT

The data ana­lyt­ics firm that worked with Don­ald Trump’s elec­tion team and the win­ning Brex­it cam­paign har­vest­ed mil­lions of Face­book pro­files of US vot­ers, in one of the tech giant’s biggest ever data breach­es, and used them to build a pow­er­ful soft­ware pro­gram to pre­dict and influ­ence choic­es at the bal­lot box.

A whistle­blow­er has revealed to the Observ­er how Cam­bridge Ana­lyt­i­ca – a com­pa­ny owned by the hedge fund bil­lion­aire Robert Mer­cer, and head­ed at the time by Trump’s key advis­er Steve Ban­non – used per­son­al infor­ma­tion tak­en with­out autho­ri­sa­tion in ear­ly 2014 to build a sys­tem that could pro­file indi­vid­ual US vot­ers, in order to tar­get them with per­son­alised polit­i­cal adver­tise­ments.

Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

Doc­u­ments seen by the Observ­er, and con­firmed by a Face­book state­ment, show that by late 2015 the com­pa­ny had found out that infor­ma­tion had been har­vest­ed on an unprece­dent­ed scale. How­ev­er, at the time it failed to alert users and took only lim­it­ed steps to recov­er and secure the pri­vate infor­ma­tion of more than 50 mil­lion indi­vid­u­als.

The New York Times is report­ing that copies of the data har­vest­ed for Cam­bridge Ana­lyt­i­ca could still be found online; its report­ing team had viewed some of the raw data.

The data was col­lect­ed through an app called thi­sisy­our­dig­i­tal­life, built by aca­d­e­m­ic Alek­san­dr Kogan, sep­a­rate­ly from his work at Cam­bridge Uni­ver­si­ty. Through his com­pa­ny Glob­al Sci­ence Research (GSR), in col­lab­o­ra­tion with Cam­bridge Ana­lyt­i­ca, hun­dreds of thou­sands of users were paid to take a per­son­al­i­ty test and agreed to have their data col­lect­ed for aca­d­e­m­ic use.

How­ev­er, the app also col­lect­ed the infor­ma­tion of the test-tak­ers’ Face­book friends, lead­ing to the accu­mu­la­tion of a data pool tens of mil­lions-strong. Facebook’s “plat­form pol­i­cy” allowed only col­lec­tion of friends’ data to improve user expe­ri­ence in the app and barred it being sold on or used for adver­tis­ing. The dis­cov­ery of the unprece­dent­ed data har­vest­ing, and the use to which it was put, rais­es urgent new ques­tions about Facebook’s role in tar­get­ing vot­ers in the US pres­i­den­tial elec­tion. It comes only weeks after indict­ments of 13 Rus­sians by the spe­cial coun­sel Robert Mueller which stat­ed they had used the plat­form to per­pe­trate “infor­ma­tion war­fare” against the US.

Cam­bridge Ana­lyt­i­ca and Face­book are one focus of an inquiry into data and pol­i­tics by the British Infor­ma­tion Commissioner’s Office. Sep­a­rate­ly, the Elec­toral Com­mis­sion is also inves­ti­gat­ing what role Cam­bridge Ana­lyt­i­ca played in the EU ref­er­en­dum.

...

On Fri­day, four days after the Observ­er sought com­ment for this sto­ry, but more than two years after the data breach was first report­ed, Face­book announced that it was sus­pend­ing Cam­bridge Ana­lyt­i­ca and Kogan from the plat­form, pend­ing fur­ther infor­ma­tion over mis­use of data. Sep­a­rate­ly, Facebook’s exter­nal lawyers warned the Observ­er it was mak­ing “false and defam­a­to­ry” alle­ga­tions, and reserved Facebook’s legal posi­tion.

The rev­e­la­tions pro­voked wide­spread out­rage. The Mass­a­chu­setts Attor­ney Gen­er­al Mau­ra Healey announced that the state would be launch­ing an inves­ti­ga­tion. “Res­i­dents deserve answers imme­di­ate­ly from Face­book and Cam­bridge Ana­lyt­i­ca,” she said on Twit­ter.

The Demo­c­ra­t­ic sen­a­tor Mark Warn­er said the har­vest­ing of data on such a vast scale for polit­i­cal tar­get­ing under­lined the need for Con­gress to improve con­trols. He has pro­posed an Hon­est Ads Act to reg­u­late online polit­i­cal adver­tis­ing the same way as tele­vi­sion, radio and print. “This sto­ry is more evi­dence that the online polit­i­cal adver­tis­ing mar­ket is essen­tial­ly the Wild West. Whether it’s allow­ing Rus­sians to pur­chase polit­i­cal ads, or exten­sive micro-tar­get­ing based on ill-got­ten user data, it’s clear that, left unreg­u­lat­ed, this mar­ket will con­tin­ue to be prone to decep­tion and lack­ing in trans­paren­cy,” he said.

Last month both Face­book and the CEO of Cam­bridge Ana­lyt­i­ca, Alexan­der Nix, told a par­lia­men­tary inquiry on fake news: that the com­pa­ny did not have or use pri­vate Face­book data.

Simon Mil­ner, Facebook’s UK pol­i­cy direc­tor, when asked if Cam­bridge Ana­lyt­i­ca had Face­book data, told MPs: “They may have lots of data but it will not be Face­book user data. It may be data about peo­ple who are on Face­book that they have gath­ered them­selves, but it is not data that we have pro­vid­ed.”

Cam­bridge Analytica’s chief exec­u­tive, Alexan­der Nix, told the inquiry: “We do not work with Face­book data and we do not have Face­book data.”

Wylie, a Cana­di­an data ana­lyt­ics expert who worked with Cam­bridge Ana­lyt­i­ca and Kogan to devise and imple­ment the scheme, showed a dossier of evi­dence about the data mis­use to the Observ­er which appears to raise ques­tions about their tes­ti­mo­ny. He has passed it to the Nation­al Crime Agency’s cyber­crime unit and the Infor­ma­tion Commissioner’s Office. It includes emails, invoic­es, con­tracts and bank trans­fers that reveal more than 50 mil­lion pro­files – most­ly belong­ing to reg­is­tered US vot­ers – were har­vest­ed from the site in one of the largest-ever breach­es of Face­book data. Face­book on Fri­day said that it was also sus­pend­ing Wylie from access­ing the plat­form while it car­ried out its inves­ti­ga­tion, despite his role as a whistle­blow­er.

At the time of the data breach, Wylie was a Cam­bridge Ana­lyt­i­ca employ­ee, but Face­book described him as work­ing for Eunoia Tech­nolo­gies, a firm he set up on his own after leav­ing his for­mer employ­er in late 2014.

The evi­dence Wylie sup­plied to UK and US author­i­ties includes a let­ter from Facebook’s own lawyers sent to him in August 2016, ask­ing him to destroy any data he held that had been col­lect­ed by GSR, the com­pa­ny set up by Kogan to har­vest the pro­files.

That legal let­ter was sent sev­er­al months after the Guardian first report­ed the breach and days before it was offi­cial­ly announced that Ban­non was tak­ing over as cam­paign man­ag­er for Trump and bring­ing Cam­bridge Ana­lyt­i­ca with him.

“Because this data was obtained and used with­out per­mis­sion, and because GSR was not autho­rised to share or sell it to you, it can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly,” the let­ter said.

Face­book did not pur­sue a response when the let­ter ini­tial­ly went unan­swered for weeks because Wylie was trav­el­ling, nor did it fol­low up with foren­sic checks on his com­put­ers or stor­age, he said.

“That to me was the most aston­ish­ing thing. They wait­ed two years and did absolute­ly noth­ing to check that the data was delet­ed. All they asked me to do was tick a box on a form and post it back.”

Paul-Olivi­er Dehaye, a data pro­tec­tion spe­cial­ist, who spear­head­ed the inves­tiga­tive efforts into the tech giant, said: “Face­book has denied and denied and denied this. It has mis­led MPs and con­gres­sion­al inves­ti­ga­tors and it’s failed in its duties to respect the law.

“It has a legal oblig­a­tion to inform reg­u­la­tors and indi­vid­u­als about this data breach, and it hasn’t. It’s failed time and time again to be open and trans­par­ent.”

A major­i­ty of Amer­i­can states have laws requir­ing noti­fi­ca­tion in some cas­es of data breach, includ­ing Cal­i­for­nia, where Face­book is based.

Face­book denies that the har­vest­ing of tens of mil­lions of pro­files by GSR and Cam­bridge Ana­lyt­i­ca was a data breach. It said in a state­ment that Kogan “gained access to this infor­ma­tion in a legit­i­mate way and through the prop­er chan­nels” but “did not sub­se­quent­ly abide by our rules” because he passed the infor­ma­tion on to third par­ties.

Face­book said it removed the app in 2015 and required cer­ti­fi­ca­tion from every­one with copies that the data had been destroyed, although the let­ter to Wylie did not arrive until the sec­ond half of 2016. “We are com­mit­ted to vig­or­ous­ly enforc­ing our poli­cies to pro­tect people’s infor­ma­tion. We will take what­ev­er steps are required to see that this hap­pens,” Paul Gre­w­al, Facebook’s vice-pres­i­dent, said in a state­ment. The com­pa­ny is now inves­ti­gat­ing reports that not all data had been delet­ed.

Kogan, who has pre­vi­ous­ly unre­port­ed links to a Russ­ian uni­ver­si­ty and took Russ­ian grants for research, had a licence from Face­book to col­lect pro­file data, but it was for research pur­pos­es only. So when he hoovered up infor­ma­tion for the com­mer­cial ven­ture, he was vio­lat­ing the company’s terms. Kogan main­tains every­thing he did was legal, and says he had a “close work­ing rela­tion­ship” with Face­book, which had grant­ed him per­mis­sion for his apps.

The Observ­er has seen a con­tract dat­ed 4 June 2014, which con­firms SCL, an affil­i­ate of Cam­bridge Ana­lyt­i­ca, entered into a com­mer­cial arrange­ment with GSR, entire­ly premised on har­vest­ing and pro­cess­ing Face­book data. Cam­bridge Ana­lyt­i­ca spent near­ly $1m on data col­lec­tion, which yield­ed more than 50 mil­lion indi­vid­ual pro­files that could be matched to elec­toral rolls. It then used the test results and Face­book data to build an algo­rithm that could analyse indi­vid­ual Face­book pro­files and deter­mine per­son­al­i­ty traits linked to vot­ing behav­iour.

The algo­rithm and data­base togeth­er made a pow­er­ful polit­i­cal tool. It allowed a cam­paign to iden­ti­fy pos­si­ble swing vot­ers and craft mes­sages more like­ly to res­onate.

“The ulti­mate prod­uct of the train­ing set is cre­at­ing a ‘gold stan­dard’ of under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion,” the con­tract spec­i­fies. It promis­es to cre­ate a data­base of 2 mil­lion “matched” pro­files, iden­ti­fi­able and tied to elec­toral reg­is­ters, across 11 states, but with room to expand much fur­ther.

At the time, more than 50 mil­lion pro­files rep­re­sent­ed around a third of active North Amer­i­can Face­book users, and near­ly a quar­ter of poten­tial US vot­ers. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a rela­tion­ship with GSR. They did some research for us back in 2014. That research proved to be fruit­less and so the answer is no.”

Cam­bridge Ana­lyt­i­ca said that its con­tract with GSR stip­u­lat­ed that Kogan should seek informed con­sent for data col­lec­tion and it had no rea­son to believe he would not.

GSR was “led by a seem­ing­ly rep­utable aca­d­e­m­ic at an inter­na­tion­al­ly renowned insti­tu­tion who made explic­it con­trac­tu­al com­mit­ments to us regard­ing its legal author­i­ty to license data to SCL Elec­tions”, a com­pa­ny spokesman said.

SCL Elec­tions, an affil­i­ate, worked with Face­book over the peri­od to ensure it was sat­is­fied no terms had been “know­ing­ly breached” and pro­vid­ed a signed state­ment that all data and deriv­a­tives had been delet­ed, he said. Cam­bridge Ana­lyt­i­ca also said none of the data was used in the 2016 pres­i­den­tial elec­tion.

Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.

———-

“Revealed: 50 mil­lion Face­book pro­files har­vest­ed for Cam­bridge Ana­lyt­i­ca in major data breach” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 03/17/2018

“Christo­pher Wylie, who worked with a Cam­bridge Uni­ver­si­ty aca­d­e­m­ic to obtain the data, told the Observ­er: “We exploit­ed Face­book to har­vest mil­lions of people’s pro­files. And built mod­els to exploit what we knew about them and tar­get their inner demons. That was the basis the entire com­pa­ny was built on.””

Exploiting everyone’s inner demons. Yeah, that sounds like something Steve Bannon and Robert Mercer would be interested in. And it explains why Facebook data would have been potentially so useful for exploiting those demons. Recall that the original non-Facebook data that Christopher Wylie and the initial Cambridge Analytica team were working with in 2013 and 2014 wasn’t seen as effective. It didn’t have that inner-demon-influencing granularity. And then they discovered the Facebook data available through this app loophole and it was taken to a different level. Remember when Facebook ran that controversial experiment on users where it tried to manipulate their emotions by altering their news feeds? It sounds like that’s what Cambridge Analytica was basically trying to do using Facebook ads instead of the newsfeed, but perhaps in a more microtargeted way.

And that’s all because Facebook’s “plat­form pol­i­cy” allowed for the col­lec­tion of friends’ data to “improve user expe­ri­ence in the app” with the non-enforced request that the data not be sold on or used for adver­tis­ing:

...
The data was col­lect­ed through an app called thi­sisy­our­dig­i­tal­life, built by aca­d­e­m­ic Alek­san­dr Kogan, sep­a­rate­ly from his work at Cam­bridge Uni­ver­si­ty. Through his com­pa­ny Glob­al Sci­ence Research (GSR), in col­lab­o­ra­tion with Cam­bridge Ana­lyt­i­ca, hun­dreds of thou­sands of users were paid to take a per­son­al­i­ty test and agreed to have their data col­lect­ed for aca­d­e­m­ic use.

How­ev­er, the app also col­lect­ed the infor­ma­tion of the test-tak­ers’ Face­book friends, lead­ing to the accu­mu­la­tion of a data pool tens of mil­lions-strong. Facebook’s “plat­form pol­i­cy” allowed only col­lec­tion of friends’ data to improve user expe­ri­ence in the app and barred it being sold on or used for adver­tis­ing. The dis­cov­ery of the unprece­dent­ed data har­vest­ing, and the use to which it was put, rais­es urgent new ques­tions about Facebook’s role in tar­get­ing vot­ers in the US pres­i­den­tial elec­tion. It comes only weeks after indict­ments of 13 Rus­sians by the spe­cial coun­sel Robert Mueller which stat­ed they had used the plat­form to per­pe­trate “infor­ma­tion war­fare” against the US.
...

Just imagine how many app developers were using this over the 2007–2014 period when Facebook had this “platform policy” that allowed the capture of friends’ data “to improve user experience in the app”. It wasn’t just Cambridge Analytica that took advantage of this. That’s a big part of the story here.
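
To make that concrete, here’s a minimal sketch of what a third-party app’s harvesting code could have looked like against the old pre-v2.0 Facebook Graph API (retired in 2015). The endpoint shapes and field names are reconstructed from memory of the deprecated API and the token is a placeholder, so treat this as an illustration of the mechanics rather than working code:

import requests

# Illustrative sketch of the pre-2015 "friends permission" mechanics.
# An app token granted by ONE consenting user could be used to read
# profile fields of ALL of that user's friends, who never consented.
GRAPH = "https://graph.facebook.com"
ACCESS_TOKEN = "USER_GRANTED_APP_TOKEN"  # placeholder, not a real token

def get_friend_ids(token):
    """List the Facebook IDs of everyone the app user is friends with."""
    resp = requests.get(GRAPH + "/me/friends", params={"access_token": token})
    return [friend["id"] for friend in resp.json().get("data", [])]

def get_friend_profile(friend_id, token):
    """Pull profile fields for a friend who never installed the app."""
    resp = requests.get(
        GRAPH + "/" + friend_id,
        params={"fields": "name,hometown,likes", "access_token": token},
    )
    return resp.json()

# One app install fans out into hundreds of harvested profiles.
profiles = [get_friend_profile(fid, ACCESS_TOKEN)
            for fid in get_friend_ids(ACCESS_TOKEN)]

That one-to-many fan-out is the whole story of how ~270,000 app users became ~50 million harvested profiles.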

And yet when Simon Mil­ner, Facebook’s UK pol­i­cy direc­tor, was asked if Cam­bridge Ana­lyt­i­ca had Face­book data, he said, “They may have lots of data but it will not be Face­book user data. It may be data about peo­ple who are on Face­book that they have gath­ered them­selves, but it is not data that we have pro­vid­ed.”:

...
Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Mil­ner, Facebook’s UK pol­i­cy direc­tor, when asked if Cam­bridge Ana­lyt­i­ca had Face­book data, told MPs: “They may have lots of data but it will not be Face­book user data. It may be data about peo­ple who are on Face­book that they have gath­ered them­selves, but it is not data that we have pro­vid­ed.”

Cam­bridge Analytica’s chief exec­u­tive, Alexan­der Nix, told the inquiry: “We do not work with Face­book data and we do not have Face­book data.”
...

And note how the article appears to say the data Cambridge Analytica collected on Facebook users included “emails, invoices, contracts and bank transfers that reveal more than 50 million profiles.” It’s not clear if that’s a reference to emails, invoices, contracts and bank transfers involved in setting up Cambridge Analytica, or to emails, invoices, contracts and bank transfers from Facebook users. But if that material came from users it would be wildly scandalous:

...
Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.
...

So it will be inter­est­ing to see if that point of ambi­gu­i­ty is ever clar­i­fied some­where. Because wow would that be scan­dalous if emails, invoic­es, con­tracts and bank trans­fers of Face­book users were released through this “plat­form pol­i­cy”.

Either way, it looks unambiguously awful for Facebook. Especially now that we learn that the cease-and-destroy letter Facebook’s lawyers sent to Wylie in August of 2016 was suspiciously sent just days before Steve Bannon, a founder and officer of Cambridge Analytica, became Trump’s campaign manager and brought the company into the Trump campaign:

...
The evi­dence Wylie sup­plied to UK and US author­i­ties includes a let­ter from Facebook’s own lawyers sent to him in August 2016, ask­ing him to destroy any data he held that had been col­lect­ed by GSR, the com­pa­ny set up by Kogan to har­vest the pro­files.

That legal let­ter was sent sev­er­al months after the Guardian first report­ed the breach and days before it was offi­cial­ly announced that Ban­non was tak­ing over as cam­paign man­ag­er for Trump and bring­ing Cam­bridge Ana­lyt­i­ca with him.

“Because this data was obtained and used with­out per­mis­sion, and because GSR was not autho­rised to share or sell it to you, it can­not be used legit­i­mate­ly in the future and must be delet­ed imme­di­ate­ly,” the let­ter said.
...

And the only thing Facebook did to confirm that the Facebook data wasn’t misused, according to Christopher Wylie, was to ask him to tick a box on a form and post it back:

...
Face­book did not pur­sue a response when the let­ter ini­tial­ly went unan­swered for weeks because Wylie was trav­el­ling, nor did it fol­low up with foren­sic checks on his com­put­ers or stor­age, he said.

“That to me was the most aston­ish­ing thing. They wait­ed two years and did absolute­ly noth­ing to check that the data was delet­ed. All they asked me to do was tick a box on a form and post it back.”
...

And, again, Facebook denied its data was passed along to Cambridge Analytica when questioned by both the US Congress and UK Parliament:

...
Paul-Olivi­er Dehaye, a data pro­tec­tion spe­cial­ist, who spear­head­ed the inves­tiga­tive efforts into the tech giant, said: “Face­book has denied and denied and denied this. It has mis­led MPs and con­gres­sion­al inves­ti­ga­tors and it’s failed in its duties to respect the law.

“It has a legal oblig­a­tion to inform reg­u­la­tors and indi­vid­u­als about this data breach, and it hasn’t. It’s failed time and time again to be open and trans­par­ent.”

A major­i­ty of Amer­i­can states have laws requir­ing noti­fi­ca­tion in some cas­es of data breach, includ­ing Cal­i­for­nia, where Face­book is based.
...

And note how Facebook now admits Aleksandr Kogan did indeed get the data legally. It just wasn’t used properly. That’s why Facebook is saying it shouldn’t be called a “data breach”: the data was obtained through proper channels and was only misused afterwards:

...
Face­book denies that the har­vest­ing of tens of mil­lions of pro­files by GSR and Cam­bridge Ana­lyt­i­ca was a data breach. It said in a state­ment that Kogan “gained access to this infor­ma­tion in a legit­i­mate way and through the prop­er chan­nels” but “did not sub­se­quent­ly abide by our rules” because he passed the infor­ma­tion on to third par­ties.

Face­book said it removed the app in 2015 and required cer­ti­fi­ca­tion from every­one with copies that the data had been destroyed, although the let­ter to Wylie did not arrive until the sec­ond half of 2016. “We are com­mit­ted to vig­or­ous­ly enforc­ing our poli­cies to pro­tect people’s infor­ma­tion. We will take what­ev­er steps are required to see that this hap­pens,” Paul Gre­w­al, Facebook’s vice-pres­i­dent, said in a state­ment. The com­pa­ny is now inves­ti­gat­ing reports that not all data had been delet­ed.
...

But Aleksandr Kogan isn’t simply arguing that he did nothing wrong when he obtained that Facebook data via his app. Kogan also argues that he had a “close working relationship” with Facebook, which had granted him permission for his apps, and that everything he did with the data was legal. So Aleksandr Kogan’s story is quite notable because, again, as we’ll see below, there is evidence that his story is closest to the truth of all the stories we’re hearing: that Facebook was totally fine with Kogan’s apps obtaining the private data of millions of Facebook friends. And Facebook was perfectly fine with how that data was used, or was at least consciously trying not to know how the data might be misused. That’s the picture that’s going to emerge, so keep that in mind when Kogan asserts that he had a “close working relationship” with Facebook. He probably did, based on the available evidence:

...
Kogan, who has pre­vi­ous­ly unre­port­ed links to a Russ­ian uni­ver­si­ty and took Russ­ian grants for research, had a licence from Face­book to col­lect pro­file data, but it was for research pur­pos­es only. So when he hoovered up infor­ma­tion for the com­mer­cial ven­ture, he was vio­lat­ing the company’s terms. Kogan main­tains every­thing he did was legal, and says he had a “close work­ing rela­tion­ship” with Face­book, which had grant­ed him per­mis­sion for his apps.
...

Kogan main­tains every­thing he did was legal, and guess what? It prob­a­bly was legal. That’s part of the scan­dal here.

And regarding those testimonies by Cambridge Analytica’s now-former CEO Alexander Nix that the company never worked with Facebook data, note how the Observer got to see a copy of the contract Cambridge Analytica entered into with Kogan’s GSR, and the contract was entirely premised on harvesting and processing Facebook data. Which, again, hints at the likelihood that they thought what they were doing at the time (2014) was completely legal. They spelled it out in the contract:

...
The Observ­er has seen a con­tract dat­ed 4 June 2014, which con­firms SCL, an affil­i­ate of Cam­bridge Ana­lyt­i­ca, entered into a com­mer­cial arrange­ment with GSR, entire­ly premised on har­vest­ing and pro­cess­ing Face­book data. Cam­bridge Ana­lyt­i­ca spent near­ly $1m on data col­lec­tion, which yield­ed more than 50 mil­lion indi­vid­ual pro­files that could be matched to elec­toral rolls. It then used the test results and Face­book data to build an algo­rithm that could analyse indi­vid­ual Face­book pro­files and deter­mine per­son­al­i­ty traits linked to vot­ing behav­iour.

...

“The ulti­mate prod­uct of the train­ing set is cre­at­ing a ‘gold stan­dard’ of under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion,” the con­tract spec­i­fies. It promis­es to cre­ate a data­base of 2 mil­lion “matched” pro­files, iden­ti­fi­able and tied to elec­toral reg­is­ters, across 11 states, but with room to expand much fur­ther.

...

Cam­bridge Ana­lyt­i­ca said that its con­tract with GSR stip­u­lat­ed that Kogan should seek informed con­sent for data col­lec­tion and it had no rea­son to believe he would not.

GSR was “led by a seem­ing­ly rep­utable aca­d­e­m­ic at an inter­na­tion­al­ly renowned insti­tu­tion who made explic­it con­trac­tu­al com­mit­ments to us regard­ing its legal author­i­ty to license data to SCL Elec­tions”, a com­pa­ny spokesman said.
...

““The ulti­mate prod­uct of the train­ing set is cre­at­ing a ‘gold stan­dard’ of under­stand­ing per­son­al­i­ty from Face­book pro­file infor­ma­tion,” the con­tract spec­i­fies. It promis­es to cre­ate a data­base of 2 mil­lion “matched” pro­files, iden­ti­fi­able and tied to elec­toral reg­is­ters, across 11 states, but with room to expand much fur­ther.”

A contract to create a ‘gold standard’ of 2 million Facebook accounts ‘matched’ to real-life voters, all for “understanding personality from Facebook profile information.” That was the actual contract Kogan had with Cambridge Analytica. All in the service of developing a system that would allow Cambridge Analytica to infer your inner demons from your Facebook profile and then manipulate them.
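
As a toy illustration of what “matched” means here, consider joining harvested profiles to a voter file on crude keys like name and hometown. The field names and records below are hypothetical, and real voter-file matching uses far more signal (addresses, birthdates, fuzzy name matching), but the basic join looks something like this:

# Hypothetical sketch of "matching" harvested profiles to an electoral register.
profiles = [
    {"name": "Jane Doe", "hometown": "Columbus, Ohio", "likes": ["Page A", "Page B"]},
]
voter_roll = [
    {"name": "Jane Doe", "city": "Columbus, Ohio", "voter_id": "OH-0000001"},
]

def match_key(name, place):
    # Normalize the crude join keys so trivial formatting differences don't block a match.
    return (name.strip().lower(), place.strip().lower())

voters_by_key = {match_key(v["name"], v["city"]): v for v in voter_roll}

matched = []
for p in profiles:
    voter = voters_by_key.get(match_key(p["name"], p["hometown"]))
    if voter:  # the profile is now identifiable and tied to the register
        matched.append(dict(p, voter_id=voter["voter_id"]))

Once a profile carries a voter ID, everything inferred from it can be aimed at a specific, reachable voter.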

So it’s worth not­ing how the app per­mis­sions set­up Face­book allowed from 2007–2014 of let­ting app devel­op­ers col­lect Face­book pro­file infor­ma­tion of the peo­ple who use their apps and their friends cre­at­ed this amaz­ing arrange­ment where app devel­op­ers could gen­er­ate a ‘gold stan­dard’ of of peo­ple using apps and a test set from all their friends. If the goal was get­ting peo­ple to encour­age their friends to down­load an app that would have been a very use­ful data set. But it would of course also have been an incred­i­bly use­ful data set for any­one who want­ed to col­lect the pro­file infor­ma­tion of Face­book users. Because, again, as we’re going to see, a Face­book whis­tle-blow­er is claim­ing that Face­book user pro­file infor­ma­tion was rou­tine­ly hand­ed out to app devel­op­ers.

So if an app devel­op­er want­ed to exper­i­ment on, say, how to use that avail­able Face­book pro­file infor­ma­tion to manip­u­late peo­ple, get­ting a ‘gold stan­dard’ of peo­ple to take a psy­cho­log­i­cal pro­file sur­vey would be an impor­tant step in car­ry­ing out that exper­i­ment. Because those peo­ple who take your psy­cho­log­i­cal sur­vey form the data set you can use to train your algo­rithms that take Face­book pro­file infor­ma­tion as the input and cre­ate psy­cho­log­i­cal pro­file data as the out­put.

And that’s what Alek­san­dr Kogan’s app was doing: grab­bing psy­cho­log­i­cal infor­ma­tion from the sur­vey while simul­ta­ne­ous­ly grab­bing the Face­book pro­file data from the test-tak­ers, along with the Face­book pro­file data of all their friends. Kogan’s ‘gold stan­dard’ train­ing set was the peo­ple who actu­al­ly used his app and hand­ed over a bunch of per­son­al­i­ty infor­ma­tion from the sur­vey and the test set would have been the tens of mil­lions of friends whose data was also col­lect­ed. Since the goal of Cam­bridge Ana­lyt­i­ca was to infer per­son­al­i­ty char­ac­ter­is­tics from peo­ple’s Face­book pro­files, pair­ing the per­son­al­i­ty sur­veys from the ~270,000 peo­ple who took the app sur­vey to their Face­book pro­files allowed Cam­bridge Ana­lyt­i­ca to train their algo­rithms that guessed at per­son­al­i­ty char­ac­ter­is­tics from the Face­book pro­file infor­ma­tion. Then they had all the rest of the pro­file infor­ma­tion on the rest of the ~50 mil­lion peo­ple to apply those algo­rithms.

Recall how Trump’s 2016 campaign digital director, Brad Parscale, curiously downplayed the utility of Cambridge Analytica’s data during interviews where he was bragging about how they were using Facebook’s ad micro-targeting features to run “A/B testing on steroids” on micro-targeted audiences, i.e. strategically exposing micro-targeted Facebook audiences to sets of ads that differed in some specific way designed to probe a particular psychological dimension of that micro-audience. So it’s worth noting that the “A/B testing on steroids” Brad Parscale referred to was probably focused on the ~30 million of that ~50 million set of people whose harvested Facebook profiles could be matched back to real people. Those 30 million Facebook users that Cambridge Analytica had Facebook profile data on were the test set. And the algorithms designed to guess the psychological makeup of people from their Facebook profiles, refined on the training set of ~270,000 Facebook users who took the psychological surveys, were likely unleashed on that test set of ~30 million people.

So when we find out that the Cam­bridge Ana­lyt­i­ca con­tract with Alek­san­dr Kogan’s GSR com­pa­ny includ­ed lan­guage like build­ing a “gold stan­dard”, keep in mind that this implied that there was a lot of test­ing to do after the algo­rith­mic refine­ments based on that gold stan­dard. And the ~30–50 mil­lion pro­files they col­lect­ed from the friends of the ~270,000 peo­ple who down­loaded Kogan’s app made for quite a test set.
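
And for what “A/B testing on steroids” means mechanically, here’s the textbook two-proportion z-test applied to two hypothetical ad variants shown to the same micro-targeted audience. The numbers are invented and this obviously isn’t the campaign’s actual tooling, just the statistical skeleton of the technique:

from math import sqrt

# Hypothetical results: one micro-audience split between two ad variants.
clicks_a, shown_a = 420, 10_000   # variant A: click-throughs / impressions
clicks_b, shown_b = 510, 10_000   # variant B: click-throughs / impressions

p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)

# Two-proportion z-test: is B's higher click rate more than noise?
se = sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
z = (p_b - p_a) / se
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level

Run that comparison across thousands of micro-audiences and ad variations simultaneously and you get the “on steroids” part.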

Also keep in mind that the denials by former CEO Alexander Nix that Cambridge Analytica worked with Facebook data aren’t the only laughable denials from Cambridge Analytica’s officers. Any denials by Steve Bannon and his lawyers that he knew about Cambridge Analytica’s use of Facebook profile data should also be seen as laughable, starting with the denials from Bannon’s lawyers that he knows nothing about what Wylie and others are claiming:

...
Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.

Steve Ban­non: the Boss Who Knows Noth­ing (Or So He Says)

Steve Bannon “knows nothing about the claims being asserted.” LOL! Yeah, well, not according to Christopher Wylie, who, in the following article, makes some rather significant claims about the role Steve Bannon played in all this. According to Wylie:

1. Steve Ban­non was the per­son over­see­ing the acqui­si­tion of Face­book data by Cam­bridge Ana­lyt­i­ca. As Wylie put it, “We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss.” Now, when Wylie says Ban­non was Nix’s boss, note that Ban­non served as vice pres­i­dent and sec­re­tary of Cam­bridge Ana­lyt­i­ca from June 2014 to August 2016. And Nix was CEO dur­ing this peri­od. So tech­ni­cal­ly Nix was the boss. But it sounds like Ban­non was effec­tive­ly the boss, accord­ing to Wylie.

2. Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the Facebook data. But Wylie does say that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And Bannon “approved the data-collection scheme we were proposing”. So if Bannon and Mercer didn’t know the details of how the purchase of massive amounts of Facebook data took place, that would be pretty remarkable. Remarkably uncurious, given that acquiring this data was at the core of what the company was doing and they approved of the data-collection scheme. A scheme that involved having Aleksandr Kogan set up a separate company. That was the “scheme” Bannon and Mercer would have had to approve, so if they didn’t realize they were acquiring this Facebook data through the “friend sharing” feature Facebook made available to app developers, that would have been a significant oversight.

The arti­cle goes on to include a few more fun facts, like...

3. Cambridge Analytica was doing focus-group tests on voters in 2014 and identified many of the same underlying emotional sentiments in voters that formed the core message behind Donald Trump’s campaign. In focus groups for the 2014 midterms, the firm found that voters responded to calls for building a wall with Mexico, “draining the swamp” in Washington DC, and to thinly veiled forms of racism toward African Americans called “race realism”. The firm also tested voter attitudes towards Russian President Vladimir Putin and discovered that a lot of Americans really like the idea of a really strong authoritarian leader. Again, this was all discovered before Trump even jumped into the race.

4. The Trump cam­paign reject­ed ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, which sug­gests that Trump was actu­al­ly the top choice of the Mer­cers and Ban­non, ahead of Ted Cruz.

5. Cam­bridge Ana­lyt­i­ca CEO Alexan­der Nix was caught by Chan­nel 4 News in the UK boast­ing about the secre­cy of his firm, at one point stress­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing.”

So based on these allegations, Steve Bannon was closely involved in approving the various schemes to acquire Facebook data, and was probably using self-destructing emails in the process:

The Wash­ing­ton Post

Ban­non over­saw Cam­bridge Analytica’s col­lec­tion of Face­book data, accord­ing to for­mer employ­ee

By Craig Tim­berg, Kar­la Adam and Michael Kran­ish
March 20, 2018 at 7:53 PM

LONDON — Con­ser­v­a­tive strate­gist Stephen K. Ban­non over­saw Cam­bridge Analytica’s ear­ly efforts to col­lect troves of Face­book data as part of an ambi­tious pro­gram to build detailed pro­files of mil­lions of Amer­i­can vot­ers, a for­mer employ­ee of the data-sci­ence firm said Tues­day.

The 2014 effort was part of a high-tech form of vot­er per­sua­sion tout­ed by the com­pa­ny, which under Ban­non iden­ti­fied and test­ed the pow­er of anti-estab­lish­ment mes­sages that lat­er would emerge as cen­tral themes in Pres­i­dent Trump’s cam­paign speech­es, accord­ing to Chris Wylie, who left the com­pa­ny at the end of that year.

Among the mes­sages test­ed were “drain the swamp” and “deep state,” he said.

Cam­bridge Ana­lyt­i­ca, which worked for Trump’s 2016 cam­paign, is now fac­ing ques­tions about alleged uneth­i­cal prac­tices, includ­ing charges that the firm improp­er­ly han­dled the data of tens of mil­lions of Face­book users. On Tues­day, the company’s board announced that it was sus­pend­ing its chief exec­u­tive, Alexan­der Nix, after British tele­vi­sion released secret record­ings that appeared to show him talk­ing about entrap­ping polit­i­cal oppo­nents.

More than three years before he served as Trump’s chief polit­i­cal strate­gist, Ban­non helped launch Cam­bridge Ana­lyt­i­ca with the finan­cial back­ing of the wealthy Mer­cer fam­i­ly as part of a broad­er effort to cre­ate a pop­ulist pow­er base. Ear­li­er this year, the Mer­cers cut ties with Ban­non after he was quot­ed mak­ing incen­di­ary com­ments about Trump and his fam­i­ly.

In an inter­view Tues­day with The Wash­ing­ton Post at his lawyer’s Lon­don office, Wylie said that Ban­non — while he was a top exec­u­tive at Cam­bridge Ana­lyt­i­ca and head of Bre­it­bart News — was deeply involved in the company’s strat­e­gy and approved spend­ing near­ly $1 mil­lion to acquire data, includ­ing Face­book pro­files, in 2014.

“We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss,” said Wylie, who was Cam­bridge Analytica’s research direc­tor. “Alexan­der Nix didn’t have the author­i­ty to spend that much mon­ey with­out approval.”

Ban­non, who served on the company’s board, did not respond to a request for com­ment. He served as vice pres­i­dent and sec­re­tary of Cam­bridge Ana­lyt­i­ca from June 2014 to August 2016, when he became chief exec­u­tive of Trump’s cam­paign, accord­ing to his pub­licly filed finan­cial dis­clo­sure. In 2017, he joined Trump in the White House as his chief strate­gist.

Ban­non received more than $125,000 in con­sult­ing fees from Cam­bridge Ana­lyt­i­ca in 2016 and owned “mem­ber­ship units” in the com­pa­ny worth between $1 mil­lion and $5 mil­lion, accord­ing to his finan­cial dis­clo­sure.

...

It is unclear whether Ban­non knew how Cam­bridge Ana­lyt­i­ca was obtain­ing the data, which alleged­ly was col­lect­ed through an app that was por­trayed as a tool for psy­cho­log­i­cal research but was then trans­ferred to the com­pa­ny.

Face­book has said that infor­ma­tion was improp­er­ly shared and that it request­ed the dele­tion of the data in 2015. Cam­bridge Ana­lyt­i­ca offi­cials said that they had done so, but Face­book said it received reports sev­er­al days ago that the data was not delet­ed.

Wylie said that both Ban­non and Rebekah Mer­cer, whose father, Robert Mer­cer, financed the com­pa­ny, par­tic­i­pat­ed in con­fer­ence calls in 2014 in which plans to col­lect Face­book data were dis­cussed, although Wylie acknowl­edged that it was not clear they knew the details of how the col­lec­tion took place.

Ban­non “approved the data-col­lec­tion scheme we were propos­ing,” Wylie said.

...

The data and analy­ses that Cam­bridge Ana­lyt­i­ca gen­er­at­ed in this time pro­vid­ed dis­cov­er­ies that would lat­er form the emo­tion­al­ly charged core of Trump’s pres­i­den­tial plat­form, said Wylie, whose dis­clo­sures in news reports over the past sev­er­al days have rocked both his one­time employ­er and Face­book.

“Trump wasn’t in our con­scious­ness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or any­thing.”

The year before Trump announced his pres­i­den­tial bid, the data firm already had found a high lev­el of alien­ation among young, white Amer­i­cans with a con­ser­v­a­tive bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also test­ed views of Russ­ian Pres­i­dent Vladimir Putin.

“The only for­eign thing we test­ed was Putin,” he said. “It turns out, there’s a lot of Amer­i­cans who real­ly like this idea of a real­ly strong author­i­tar­i­an leader and peo­ple were quite defen­sive in focus groups of Putin’s inva­sion of Crimea.”

The controversy over Cambridge Analytica’s data collection erupted in recent days amid news reports that an app created by a Cambridge University psychologist, Aleksandr Kogan, accessed extensive personal data of 50 million Facebook users. The app, called thisisyourdigitallife, was downloaded by 270,000 users. Facebook’s policy, which has since changed, allowed Kogan to also collect data – including names, home towns, religious affiliations and likes – on all of the Facebook “friends” of those users. Kogan shared that data with Cambridge Analytica for its growing database on American voters.

Face­book on Fri­day banned the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, Kogan and Wylie for improp­er­ly shar­ing that data.

The Fed­er­al Trade Com­mis­sion has opened an inves­ti­ga­tion into Face­book to deter­mine whether the social media plat­form vio­lat­ed a 2011 con­sent decree gov­ern­ing its pri­va­cy poli­cies when it allowed the data col­lec­tion. And Wylie plans to tes­ti­fy to Democ­rats on the House Intel­li­gence Com­mit­tee as part of their inves­ti­ga­tion of Russ­ian inter­fer­ence in the elec­tion, includ­ing pos­si­ble ties to the Trump cam­paign.

Mean­while, Britain’s Chan­nel 4 News aired a video Tues­day in which Nix was shown boast­ing about his work for Trump. He seemed to high­light his firm’s secre­cy, at one point stress­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing.”

The com­pa­ny said in a state­ment that Nix’s com­ments “do not rep­re­sent the val­ues or oper­a­tions of the firm and his sus­pen­sion reflects the seri­ous­ness with which we view this vio­la­tion.”

Nix could not be reached for com­ment.

Cam­bridge Ana­lyt­i­ca was set up as a U.S. affil­i­ate of British-based SCL Group, which had a wide range of gov­ern­men­tal clients glob­al­ly, in addi­tion to its polit­i­cal work.

Wylie said that Ban­non and Nix first met in 2013, the same year that Wylie — a young data whiz with some polit­i­cal expe­ri­ence in Britain and Cana­da — was work­ing for SCL Group. Ban­non and Wylie met soon after and hit it off in con­ver­sa­tions about cul­ture, elec­tions and how to spread ideas using tech­nol­o­gy.

Ban­non, Wylie, Nix, Rebekah Mer­cer and Robert Mer­cer met in Rebekah Mercer’s Man­hat­tan apart­ment in the fall of 2013, strik­ing a deal in which Robert Mer­cer would fund the cre­ation of Cam­bridge Ana­lyt­i­ca with $10 mil­lion, with the hope of shap­ing the con­gres­sion­al elec­tions a year lat­er, accord­ing to Wylie. Robert Mer­cer, in par­tic­u­lar, seemed trans­fixed by the group’s plans to har­ness and ana­lyze data, he recalled.

The Mer­cers were keen to cre­ate a U.S.-based busi­ness to avoid bad optics and vio­lat­ing U.S. cam­paign finance rules, Wylie said. “They want­ed to cre­ate an Amer­i­can brand,” he said.

The young company struggled to quickly deliver on its promises, Wylie said. Widely available information from commercial data brokers provided people’s names, addresses, shopping habits and more, but failed to distinguish on more fine-grained matters of personality that might affect political views.

Cam­bridge Ana­lyt­i­ca ini­tial­ly worked for 2016 Repub­li­can can­di­date Sen. Ted Cruz (Tex.), who was backed by the Mer­cers. The Trump cam­paign had reject­ed ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, and Trump him­self said in May 2016 that he “always felt” that the use of vot­er data was “over­rat­ed.”

After Cruz fad­ed, the Mer­cers switched their alle­giance to Trump and pitched their ser­vices to Trump’s dig­i­tal direc­tor, Brad Parscale. The company’s hir­ing was approved by Trump’s son-in-law, Jared Kush­n­er, who was infor­mal­ly help­ing to man­age the cam­paign with a focus on dig­i­tal strat­e­gy.

Kush­n­er said in an inter­view with Forbes mag­a­zine that the cam­paign “found that Face­book and dig­i­tal tar­get­ing were the most effec­tive ways to reach the audi­ences. ...We brought in Cam­bridge Ana­lyt­i­ca.” Kush­n­er said he “built” a data hub for the cam­paign “which nobody knew about, until towards the end.”

Kushner’s spokesman and lawyer both declined to com­ment Tues­day.

Two weeks before Elec­tion Day, Nix told a Post reporter at the company’s New York City office that his com­pa­ny could “deter­mine the per­son­al­i­ty of every sin­gle adult in the Unit­ed States of Amer­i­ca.”

The claim was wide­ly ques­tioned, and the Trump cam­paign lat­er said that it didn’t rely on psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca. Instead, the cam­paign said that it used a vari­ety of oth­er dig­i­tal infor­ma­tion to iden­ti­fy prob­a­ble sup­port­ers.

Parscale said in a Post inter­view in Octo­ber 2016 that he had not “opened the hood” on Cam­bridge Analytica’s method­ol­o­gy, and said he got much of his data from the Repub­li­can Nation­al Com­mit­tee. Parscale declined to com­ment Tues­day. He has pre­vi­ous­ly said that the Trump cam­paign did not use any psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca.

Cam­bridge Analytica’s par­ent com­pa­ny, SCL Group, has an ongo­ing con­tract with the State Department’s Glob­al Engage­ment Cen­ter. The com­pa­ny was paid almost $500,000 to inter­view peo­ple over­seas to under­stand the mind-set of Islamist mil­i­tants as part of an effort to counter their online pro­pa­gan­da and block recruits.

Heather Nauert, the act­ing under­sec­re­tary for pub­lic diplo­ma­cy, said Tues­day that the con­tract was signed in Novem­ber 2016, under the Oba­ma admin­is­tra­tion, and has not expired yet. In pub­lic records, the con­tract is dat­ed in Feb­ru­ary 2017, and the rea­son for the dis­crep­an­cy was not clear. Nauert said that the State Depart­ment had signed oth­er con­tracts with SCL Group in the past.

———-

“Ban­non over­saw Cam­bridge Analytica’s col­lec­tion of Face­book data, accord­ing to for­mer employ­ee” by Craig Tim­berg, Kar­la Adam and Michael Kran­ish; The Wash­ing­ton Post; 03/20/2018

“Con­ser­v­a­tive strate­gist Stephen K. Ban­non over­saw Cam­bridge Analytica’s ear­ly efforts to col­lect troves of Face­book data as part of an ambi­tious pro­gram to build detailed pro­files of mil­lions of Amer­i­can vot­ers, a for­mer employ­ee of the data-sci­ence firm said Tues­day.”

Steve Bannon oversaw Cambridge Analytica’s early efforts to collect troves of Facebook data. That’s what Christopher Wylie claims, and given Bannon’s role as vice president of the company it’s not, on its face, an outlandish claim. And Bannon apparently approved the spending of nearly $1 million to acquire that Facebook data in 2014. Because, according to Wylie, Alexander Nix didn’t actually have permission to spend that kind of money without approval. Bannon, on the other hand, did have permission to make those kinds of expenditure approvals. That’s how high up Bannon was at that company, even though he was technically the vice president while Nix was the CEO:

...
In an inter­view Tues­day with The Wash­ing­ton Post at his lawyer’s Lon­don office, Wylie said that Ban­non — while he was a top exec­u­tive at Cam­bridge Ana­lyt­i­ca and head of Bre­it­bart News — was deeply involved in the company’s strat­e­gy and approved spend­ing near­ly $1 mil­lion to acquire data, includ­ing Face­book pro­files, in 2014.

“We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss,” said Wylie, who was Cam­bridge Analytica’s research direc­tor. “Alexan­der Nix didn’t have the author­i­ty to spend that much mon­ey with­out approval.”

Ban­non, who served on the company’s board, did not respond to a request for com­ment. He served as vice pres­i­dent and sec­re­tary of Cam­bridge Ana­lyt­i­ca from June 2014 to August 2016, when he became chief exec­u­tive of Trump’s cam­paign, accord­ing to his pub­licly filed finan­cial dis­clo­sure. In 2017, he joined Trump in the White House as his chief strate­gist.
...

“We had to get Ban­non to approve every­thing at this point. Ban­non was Alexan­der Nix’s boss...Alexander Nix didn’t have the author­i­ty to spend that much mon­ey with­out approval.””

And while Wylie acknowledges that it’s unclear whether Bannon knew how Cambridge Analytica was obtaining the data, Wylie does assert that both Bannon and Rebekah Mercer participated in conference calls in 2014 in which plans to collect Facebook data were discussed. And, generally speaking, if Bannon was approving $1 million expenditures on acquiring Facebook data, he probably sat in on at least one meeting where they described how they were planning on actually getting the data by spending that money. Don’t forget the scheme involved paying individuals small amounts of money to take the psychological survey on Kogan’s app, so at a minimum you would expect Bannon to know how these apps were going to result in the gathering of Facebook profile information:

...
It is unclear whether Ban­non knew how Cam­bridge Ana­lyt­i­ca was obtain­ing the data, which alleged­ly was col­lect­ed through an app that was por­trayed as a tool for psy­cho­log­i­cal research but was then trans­ferred to the com­pa­ny.

Face­book has said that infor­ma­tion was improp­er­ly shared and that it request­ed the dele­tion of the data in 2015. Cam­bridge Ana­lyt­i­ca offi­cials said that they had done so, but Face­book said it received reports sev­er­al days ago that the data was not delet­ed.

Wylie said that both Ban­non and Rebekah Mer­cer, whose father, Robert Mer­cer, financed the com­pa­ny, par­tic­i­pat­ed in con­fer­ence calls in 2014 in which plans to col­lect Face­book data were dis­cussed, although Wylie acknowl­edged that it was not clear they knew the details of how the col­lec­tion took place.

Ban­non “approved the data-col­lec­tion scheme we were propos­ing,” Wylie said.
...

What’s Ban­non hid­ing by claim­ing igno­rance? Well, that’s a good ques­tion after Britain’s Chan­nel 4 News aired a video Tues­day in which Nix was high­light­ing his firm’s secre­cy, includ­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing”:

...
Mean­while, Britain’s Chan­nel 4 News aired a video Tues­day in which Nix was shown boast­ing about his work for Trump. He seemed to high­light his firm’s secre­cy, at one point stress­ing the need to set up a spe­cial email account that self-destruc­ts all mes­sages so that “there’s no evi­dence, there’s no paper trail, there’s noth­ing.”

The com­pa­ny said in a state­ment that Nix’s com­ments “do not rep­re­sent the val­ues or oper­a­tions of the firm and his sus­pen­sion reflects the seri­ous­ness with which we view this vio­la­tion.”
...

Self-destruc­t­ing emails. That’s not sus­pi­cious or any­thing.

And note how Cam­bridge Ana­lyt­i­ca was appar­ent­ly already hon­ing in on a very ‘Trumpian’ mes­sage in 2014, long before Trump was on the radar:

...
The data and analy­ses that Cam­bridge Ana­lyt­i­ca gen­er­at­ed in this time pro­vid­ed dis­cov­er­ies that would lat­er form the emo­tion­al­ly charged core of Trump’s pres­i­den­tial plat­form, said Wylie, whose dis­clo­sures in news reports over the past sev­er­al days have rocked both his one­time employ­er and Face­book.

“Trump wasn’t in our con­scious­ness at that moment; this was well before he became a thing,” Wylie said. “He wasn’t a client or any­thing.”

The year before Trump announced his pres­i­den­tial bid, the data firm already had found a high lev­el of alien­ation among young, white Amer­i­cans with a con­ser­v­a­tive bent.

In focus groups arranged to test messages for the 2014 midterms, these voters responded to calls for building a new wall to block the entry of illegal immigrants, to reforms intended to “drain the swamp” of Washington’s entrenched political community and to thinly veiled forms of racism toward African Americans called “race realism,” he recounted.

The firm also test­ed views of Russ­ian Pres­i­dent Vladimir Putin.

“The only for­eign thing we test­ed was Putin,” he said. “It turns out, there’s a lot of Amer­i­cans who real­ly like this idea of a real­ly strong author­i­tar­i­an leader and peo­ple were quite defen­sive in focus groups of Putin’s inva­sion of Crimea.”
...

Intrigu­ing­ly, giv­en these ear­ly Trumpian find­ings in their 2014 vot­er research, it appears that the Trump cam­paign turned down ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, which sug­gests that Trump real­ly was the top pref­er­ence for Ban­non and the Mer­cers, not Ted Cruz:

...
Cam­bridge Ana­lyt­i­ca ini­tial­ly worked for 2016 Repub­li­can can­di­date Sen. Ted Cruz (Tex.), who was backed by the Mer­cers. The Trump cam­paign had reject­ed ear­ly over­tures to hire Cam­bridge Ana­lyt­i­ca, and Trump him­self said in May 2016 that he “always felt” that the use of vot­er data was “over­rat­ed.”
...

And as the arti­cle reminds us, the Trump cam­paign has com­plete­ly denied EVER using Cam­bridge Ana­lyt­i­ca’s data. Brad Parscale, Trump’s dig­i­tal direc­tor, claimed he got all the data they were work­ing with from the Repub­li­can Nation­al Com­mit­tee:

...
Two weeks before Elec­tion Day, Nix told a Post reporter at the company’s New York City office that his com­pa­ny could “deter­mine the per­son­al­i­ty of every sin­gle adult in the Unit­ed States of Amer­i­ca.”

The claim was wide­ly ques­tioned, and the Trump cam­paign lat­er said that it didn’t rely on psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca. Instead, the cam­paign said that it used a vari­ety of oth­er dig­i­tal infor­ma­tion to iden­ti­fy prob­a­ble sup­port­ers.

Parscale said in a Post inter­view in Octo­ber 2016 that he had not “opened the hood” on Cam­bridge Analytica’s method­ol­o­gy, and said he got much of his data from the Repub­li­can Nation­al Com­mit­tee. Parscale declined to com­ment Tues­day. He has pre­vi­ous­ly said that the Trump cam­paign did not use any psy­cho­graph­ic data from Cam­bridge Ana­lyt­i­ca.
...

And that denial by Parscale raises an obvious question: when Parscale claims they only used data from the RNC, it’s clearly very possible that he’s just straight-up lying. But it’s also possible that he’s lying while technically telling the truth. Because if Cambridge Analytica gave its data to the RNC, it’s possible the Trump campaign acquired the Cambridge Analytica data from the RNC at that point, giving the campaign a degree of deniability about the use of such scandalously acquired data if the story of it ever became public. Like now.

Don’t for­get that data of this nature would have been poten­tial­ly use­ful for EVERY 2016 race, not just the pres­i­den­tial cam­paign. So if Ban­non and Mer­cer were intent on help­ing Repub­li­cans win across the board, hand­ing that data over to the RNC would have just made sense.

Also don’t for­get that the New York Times was shown unen­crypt­ed copies of the Face­book data col­lect­ed by Cam­bridge Ana­lyt­i­ca. If the New York Times saw this data, odds are the RNC has too. And who knows who else.

Face­book’s Sandy Parak­i­las Blows an “Utter­ly Hor­ri­fy­ing” Whis­tle

It all raises the question of whether the Republican National Committee possesses all that Cambridge Analytica/Facebook data right now. And that brings us to perhaps the most scandalous article of all that we’re going to look at. It’s about Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, who is now a whistle-blower about exactly the kind of “friends permission” loophole Cambridge Analytica exploited. And as the following article makes horrifically clear:

1. It’s not just Cam­bridge Ana­lyt­i­ca or the RNC that might pos­sess this trea­sure trove of per­son­al infor­ma­tion. It’s the entire data bro­ker­age indus­try that prob­a­bly has thi­er hands on this data. Along with any­one who has picked it up through the black mar­ket.

2. It was relatively easy to write an app that could exploit this “friends permissions” feature and start trawling Facebook for the profile data of app users and their friends. Anyone with basic app coding skills could do it.

3. Parak­i­las esti­mates that per­haps hun­dreds of thou­sands of devel­op­ers like­ly exploit­ed exact­ly the same ‘for research pur­pos­es only’ loop­hole exploit­ed by Cam­bridge Ana­lyt­i­ca. And Face­book had no way of track­ing how this data was used by devel­op­ers once it left Face­book’s servers.

4. Parakilas suspects that this amount of data will inevitably end up in the black market, meaning there is probably a massive amount of personally identifiable Facebook data just floating around for the entire marketing industry and anyone else (like the GOP) to data-mine.

5. Parakilas knew of many commercial apps that were using the same “friends permission” feature to grab Facebook profile data and use it for commercial purposes.

6. Face­book’s pol­i­cy of giv­ing devel­op­ers access to Face­book users’ friends’ data was sanc­tioned in the small print in Facebook’s terms and con­di­tions, and users could block such data shar­ing by chang­ing their set­tings. That appears to be part of the legal pro­tec­tion Face­book employed when it had this pol­i­cy: don’t com­plain, it’s in the fine print.

7. Per­haps most scan­dalous of all, Face­book took a 30% cut of pay­ments made through apps in exchange for giv­ing these app devel­op­ers access to Face­book user data. Yep, Face­book was effec­tive­ly sell­ing user data, but by struc­tur­ing the sale of this data as a 30% share of the pay­ments made through the app Face­book also cre­at­ed an incen­tive to help devel­op­ers max­i­mize the prof­its they made through the app. So Face­book lit­er­al­ly set up a sys­tem that incen­tivized itself to help app devel­op­ers make as much mon­ey as pos­si­ble off of the user data they were hand­ing over.

8. Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users. So as of 2010, roughly 1 in 10 Facebook apps were using this loophole to grab information about both the users of the app and their friends.

9. While Cambridge Analytica was far from alone in exploiting this loophole, it was actually one of the very last firms given permission to do so. Which means the particular data set collected by Cambridge Analytica could be uniquely valuable simply by being larger and containing more recent data than most other data sets of this nature.

10. When Parak­i­las brought up these con­cerns to Face­book’s exec­u­tives and sug­gest­ed the com­pa­ny should proac­tive­ly “audit devel­op­ers direct­ly and see what’s going on with the data” he was dis­cour­aged from the approach. One Face­book exec­u­tive advised him against look­ing too deeply at how the data was being used, warn­ing him: “Do you real­ly want to see what you’ll find?” Parak­i­las said he inter­pret­ed the com­ment to mean that “Face­book was in a stronger legal posi­tion if it didn’t know about the abuse that was hap­pen­ing”

11. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters, Parak­i­las was told that any deci­sion to ban an app required the per­son­al approval of Mark Zucker­berg. Although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers. That said, rogue devel­op­ers were rarely dealt with.

12. When Face­book even­tu­al­ly phased out this “friends per­mis­sions” pol­i­cy for app devel­op­ers, it was like­ly done out of con­cerns over the com­mer­cial val­ue of all this data they were hand­ing out. Exec­u­tives were appar­ent­ly con­cerned that com­peti­tors were going to use this data to build their own social net­works.

So, as we can see, the entire saga of Cam­bridge Ana­lyt­i­ca’s scan­dalous acqui­si­tion of pri­vate Face­book pro­files on ~50 mil­lion Amer­i­cans is some­thing Face­book made rou­tine for devel­op­ers of all sorts from 2007–2014, which means this is far from a ‘Cam­bridge Ana­lyt­i­ca’ sto­ry. It’s a Face­book sto­ry about a mas­sive prob­lem Face­book cre­at­ed for itself (for its own prof­its):

The Guardian

‘Utter­ly hor­ri­fy­ing’: ex-Face­book insid­er says covert data har­vest­ing was rou­tine

Sandy Parak­i­las says numer­ous com­pa­nies deployed these tech­niques – like­ly affect­ing hun­dreds of mil­lions of users – and that Face­book looked the oth­er way

Paul Lewis in San Fran­cis­co
Tue 20 Mar 2018 07.46 EDT

Hun­dreds of mil­lions of Face­book users are like­ly to have had their pri­vate infor­ma­tion har­vest­ed by com­pa­nies that exploit­ed the same terms as the firm that col­lect­ed data and passed it on to Cam­bridge Ana­lyt­i­ca, accord­ing to a new whistle­blow­er.

Sandy Parak­i­las, the plat­form oper­a­tions man­ag­er at Face­book respon­si­ble for polic­ing data breach­es by third-par­ty soft­ware devel­op­ers between 2011 and 2012, told the Guardian he warned senior exec­u­tives at the com­pa­ny that its lax approach to data pro­tec­tion risked a major breach.

“My con­cerns were that all of the data that left Face­book servers to devel­op­ers could not be mon­i­tored by Face­book, so we had no idea what devel­op­ers were doing with the data,” he said.

Parak­i­las said Face­book had terms of ser­vice and set­tings that “peo­ple didn’t read or under­stand” and the com­pa­ny did not use its enforce­ment mech­a­nisms, includ­ing audits of exter­nal devel­op­ers, to ensure data was not being mis­used.

Parak­i­las, whose job was to inves­ti­gate data breach­es by devel­op­ers sim­i­lar to the one lat­er sus­pect­ed of Glob­al Sci­ence Research, which har­vest­ed tens of mil­lions of Face­book pro­files and pro­vid­ed the data to Cam­bridge Ana­lyt­i­ca, said the slew of recent dis­clo­sures had left him dis­ap­point­ed with his supe­ri­ors for not heed­ing his warn­ings.

“It has been painful watch­ing,” he said, “because I know that they could have pre­vent­ed it.”

Asked what kind of con­trol Face­book had over the data giv­en to out­side devel­op­ers, he replied: “Zero. Absolute­ly none. Once the data left Face­book servers there was not any con­trol, and there was no insight into what was going on.”

Parak­i­las said he “always assumed there was some­thing of a black mar­ket” for Face­book data that had been passed to exter­nal devel­op­ers. How­ev­er, he said that when he told oth­er exec­u­tives the com­pa­ny should proac­tive­ly “audit devel­op­ers direct­ly and see what’s going on with the data” he was dis­cour­aged from the approach.

He said one Face­book exec­u­tive advised him against look­ing too deeply at how the data was being used, warn­ing him: “Do you real­ly want to see what you’ll find?” Parak­i­las said he inter­pret­ed the com­ment to mean that “Face­book was in a stronger legal posi­tion if it didn’t know about the abuse that was hap­pen­ing”.

He added: “They felt that it was bet­ter not to know. I found that utter­ly shock­ing and hor­ri­fy­ing.”

...

Face­book did not respond to a request for com­ment on the infor­ma­tion sup­plied by Parak­i­las, but direct­ed the Guardian to a Novem­ber 2017 blog­post in which the com­pa­ny defend­ed its data shar­ing prac­tices, which it said had “sig­nif­i­cant­ly improved” over the last five years.

“While it’s fair to crit­i­cise how we enforced our devel­op­er poli­cies more than five years ago, it’s untrue to sug­gest we didn’t or don’t care about pri­va­cy,” that state­ment said. “The facts tell a dif­fer­ent sto­ry.”

‘A major­i­ty of Face­book users’

Parak­i­las, 38, who now works as a prod­uct man­ag­er for Uber, is par­tic­u­lar­ly crit­i­cal of Facebook’s pre­vi­ous pol­i­cy of allow­ing devel­op­ers to access the per­son­al data of friends of peo­ple who used apps on the plat­form, with­out the knowl­edge or express con­sent of those friends.

That fea­ture, called friends per­mis­sion, was a boon to out­side soft­ware devel­op­ers who, from 2007 onwards, were giv­en per­mis­sion by Face­book to build quizzes and games – like the wide­ly pop­u­lar Far­mVille – that were host­ed on the plat­form.

The apps pro­lif­er­at­ed on Face­book in the years lead­ing up to the company’s 2012 ini­tial pub­lic offer­ing, an era when most users were still access­ing the plat­form via lap­tops and com­put­ers rather than smart­phones.

Face­book took a 30% cut of pay­ments made through apps, but in return enabled their cre­ators to have access to Face­book user data.

Parak­i­las does not know how many com­pa­nies sought friends per­mis­sion data before such access was ter­mi­nat­ed around mid-2014. How­ev­er, he said he believes tens or maybe even hun­dreds of thou­sands of devel­op­ers may have done so.

Parak­i­las esti­mates that “a major­i­ty of Face­book users” could have had their data har­vest­ed by app devel­op­ers with­out their knowl­edge. The com­pa­ny now has stricter pro­to­cols around the degree of access third par­ties have to data.

Parak­i­las said that when he worked at Face­book it failed to take full advan­tage of its enforce­ment mech­a­nisms, such as a clause that enables the social media giant to audit exter­nal devel­op­ers who mis­use its data.

Legal action against rogue devel­op­ers or moves to ban them from Face­book were “extreme­ly rare”, he said, adding: “In the time I was there, I didn’t see them con­duct a sin­gle audit of a developer’s sys­tems.”

Face­book announced on Mon­day that it had hired a dig­i­tal foren­sics firm to con­duct an audit of Cam­bridge Ana­lyt­i­ca. The deci­sion comes more than two years after Face­book was made aware of the report­ed data breach.

Dur­ing the time he was at Face­book, Parak­i­las said the com­pa­ny was keen to encour­age more devel­op­ers to build apps for its plat­form and “one of the main ways to get devel­op­ers inter­est­ed in build­ing apps was through offer­ing them access to this data”. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters he was told that any deci­sion to ban an app required the per­son­al approval of the chief exec­u­tive, Mark Zucker­berg, although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers.

While the pre­vi­ous pol­i­cy of giv­ing devel­op­ers access to Face­book users’ friends’ data was sanc­tioned in the small print in Facebook’s terms and con­di­tions, and users could block such data shar­ing by chang­ing their set­tings, Parak­i­las said he believed the pol­i­cy was prob­lem­at­ic.

“It was well under­stood in the com­pa­ny that that pre­sent­ed a risk,” he said. “Face­book was giv­ing data of peo­ple who had not autho­rised the app them­selves, and was rely­ing on terms of ser­vice and set­tings that peo­ple didn’t read or under­stand.”

It was this fea­ture that was exploit­ed by Glob­al Sci­ence Research, and the data pro­vid­ed to Cam­bridge Ana­lyt­i­ca in 2014. GSR was run by the Cam­bridge Uni­ver­si­ty psy­chol­o­gist Alek­san­dr Kogan, who built an app that was a per­son­al­i­ty test for Face­book users.

The test auto­mat­i­cal­ly down­loaded the data of friends of peo­ple who took the quiz, osten­si­bly for aca­d­e­m­ic pur­pos­es. Cam­bridge Ana­lyt­i­ca has denied know­ing the data was obtained improp­er­ly, and Kogan main­tains he did noth­ing ille­gal and had a “close work­ing rela­tion­ship” with Face­book.

While Kogan’s app only attract­ed around 270,000 users (most of whom were paid to take the quiz), the com­pa­ny was then able to exploit the friends per­mis­sion fea­ture to quick­ly amass data per­tain­ing to more than 50 mil­lion Face­book users.

“Kogan’s app was one of the very last to have access to friend per­mis­sions,” Parak­i­las said, adding that many oth­er sim­i­lar apps had been har­vest­ing sim­i­lar quan­ti­ties of data for years for com­mer­cial pur­pos­es. Aca­d­e­m­ic research from 2010, based on an analy­sis of 1,800 Face­book apps, con­clud­ed that around 11% of third-par­ty devel­op­ers request­ed data belong­ing to friends of users.

If those fig­ures were extrap­o­lat­ed, tens of thou­sands of apps, if not more, were like­ly to have sys­tem­at­i­cal­ly culled “pri­vate and per­son­al­ly iden­ti­fi­able” data belong­ing to hun­dreds of mil­lions of users, Parak­i­las said.

The ease with which it was pos­si­ble for any­one with rel­a­tive­ly basic cod­ing skills to cre­ate apps and start trawl­ing for data was a par­tic­u­lar con­cern, he added.

Parak­i­las said he was unsure why Face­book stopped allow­ing devel­op­ers to access friends data around mid-2014, rough­ly two years after he left the com­pa­ny. How­ev­er, he said he believed one rea­son may have been that Face­book exec­u­tives were becom­ing aware that some of the largest apps were acquir­ing enor­mous troves of valu­able data.

He recalled con­ver­sa­tions with exec­u­tives who were ner­vous about the com­mer­cial val­ue of data being passed to oth­er com­pa­nies.

“They were wor­ried that the large app devel­op­ers were build­ing their own social graphs, mean­ing they could see all the con­nec­tions between these peo­ple,” he said. “They were wor­ried that they were going to build their own social net­works.”

‘They treat­ed it like a PR exer­cise’

Parak­i­las said he lob­bied inter­nal­ly at Face­book for “a more rig­or­ous approach” to enforc­ing data pro­tec­tion, but was offered lit­tle sup­port. His warn­ings includ­ed a Pow­er­Point pre­sen­ta­tion he said he deliv­ered to senior exec­u­tives in mid-2012 “that includ­ed a map of the vul­ner­a­bil­i­ties for user data on Facebook’s plat­form”.

“I includ­ed the pro­tec­tive mea­sures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do mali­cious things with the data,” he said. “On the list of bad actors I includ­ed for­eign state actors and data bro­kers.”

Frus­trat­ed at the lack of action, Parak­i­las left Face­book in late 2012. “I didn’t feel that the com­pa­ny treat­ed my con­cerns seri­ous­ly. I didn’t speak out pub­licly for years out of self-inter­est, to be frank.”

That changed, Parak­i­las said, when he heard the con­gres­sion­al tes­ti­mo­ny giv­en by Face­book lawyers to Sen­ate and House inves­ti­ga­tors in late 2017 about Russia’s attempt to sway the pres­i­den­tial elec­tion. “They treat­ed it like a PR exer­cise,” he said. “They seemed to be entire­ly focused on lim­it­ing their lia­bil­i­ty and expo­sure rather than help­ing the coun­try address a nation­al secu­ri­ty issue.”

It was at that point that Parak­i­las decid­ed to go pub­lic with his con­cerns, writ­ing an opin­ion arti­cle in the New York Times that said Face­book could not be trust­ed to reg­u­late itself. Since then, Parak­i­las has become an advis­er to the Cen­ter for Humane Tech­nol­o­gy, which is run by Tris­tan Har­ris, a for­mer Google employ­ee turned whistle­blow­er on the indus­try.

———-

“ ‘Utter­ly hor­ri­fy­ing’: ex-Face­book insid­er says covert data har­vest­ing was rou­tine” by Paul Lewis; The Guardian; 03/20/2018

“Sandy Parak­i­las, the plat­form oper­a­tions man­ag­er at Face­book respon­si­ble for polic­ing data breach­es by third-par­ty soft­ware devel­op­ers between 2011 and 2012, told the Guardian he warned senior exec­u­tives at the com­pa­ny that its lax approach to data pro­tec­tion risked a major breach.”

The plat­form oper­a­tions man­ag­er at Face­book respon­si­ble for polic­ing data breach­es by third-par­ty soft­ware devel­op­ers between 2011 and 2012: That’s who is mak­ing these claims. In oth­er words, Sandy Parak­i­las is indeed some­one who should be inti­mate­ly famil­iar with Face­book’s poli­cies of hand­ing user data over to app devel­op­ers because it was his job to ensure that data was­n’t breached.

And as Parakilas makes clear, he wasn't actually able to do his job. Once the data was handed over to app developers and left Facebook's servers, Facebook had no idea what the developers were doing with it, and apparently no interest in learning:

...
“My con­cerns were that all of the data that left Face­book servers to devel­op­ers could not be mon­i­tored by Face­book, so we had no idea what devel­op­ers were doing with the data,” he said.

Parak­i­las said Face­book had terms of ser­vice and set­tings that “peo­ple didn’t read or under­stand” and the com­pa­ny did not use its enforce­ment mech­a­nisms, includ­ing audits of exter­nal devel­op­ers, to ensure data was not being mis­used.

Parak­i­las, whose job was to inves­ti­gate data breach­es by devel­op­ers sim­i­lar to the one lat­er sus­pect­ed of Glob­al Sci­ence Research, which har­vest­ed tens of mil­lions of Face­book pro­files and pro­vid­ed the data to Cam­bridge Ana­lyt­i­ca, said the slew of recent dis­clo­sures had left him dis­ap­point­ed with his supe­ri­ors for not heed­ing his warn­ings.

“It has been painful watch­ing,” he said, “because I know that they could have pre­vent­ed it.”

Asked what kind of con­trol Face­book had over the data giv­en to out­side devel­op­ers, he replied: “Zero. Absolute­ly none. Once the data left Face­book servers there was not any con­trol, and there was no insight into what was going on.”
...

And this complete lack of oversight by Facebook led Parakilas to assume there was "something of a black market" for that Facebook data. But when he raised these concerns with fellow executives he was warned not to look. Not knowing how this data was being used was, ironically, part of Facebook's legal strategy, it seems:

...
Parak­i­las said he “always assumed there was some­thing of a black mar­ket” for Face­book data that had been passed to exter­nal devel­op­ers. How­ev­er, he said that when he told oth­er exec­u­tives the com­pa­ny should proac­tive­ly “audit devel­op­ers direct­ly and see what’s going on with the data” he was dis­cour­aged from the approach.

He said one Face­book exec­u­tive advised him against look­ing too deeply at how the data was being used, warn­ing him: “Do you real­ly want to see what you’ll find?” Parak­i­las said he inter­pret­ed the com­ment to mean that “Face­book was in a stronger legal posi­tion if it didn’t know about the abuse that was hap­pen­ing”.

He added: “They felt that it was bet­ter not to know. I found that utter­ly shock­ing and hor­ri­fy­ing.”
...

“They felt that it was bet­ter not to know. I found that utter­ly shock­ing and hor­ri­fy­ing.”

Well, at least one person at Facebook was utterly shocked and horrified by the "better not to know" policy towards handing private personal information over to developers. And that person, Parakilas, left the company and is now a whistle-blower.

And one of the things that made Parakilas particularly concerned that this practice was widespread among app developers was the fact that it was so easy to create apps that could then be released onto Facebook to trawl for profile data from users and their unwitting friends:

...
The ease with which it was pos­si­ble for any­one with rel­a­tive­ly basic cod­ing skills to cre­ate apps and start trawl­ing for data was a par­tic­u­lar con­cern, he added.
...
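To make that "ease" concrete, here is a minimal, hypothetical Python sketch of what trawling looked like in the pre-2014 Graph API era, when a single app user's access token could unlock their friends' profile fields. The endpoint shape and field names are illustrative reconstructions of the old friends-permissions arrangement, not a working example; Facebook removed this access in mid-2014.

```python
# Hypothetical sketch of a pre-2014 "friends permissions" data pull.
# Nothing here works against today's API; it illustrates the old flow.
import requests

GRAPH = "https://graph.facebook.com"  # v1.0-era base URL (illustrative)

def harvest_friends(user_access_token):
    """Pull profile fields for every friend of a single consenting app user."""
    resp = requests.get(
        f"{GRAPH}/me/friends",
        params={
            "access_token": user_access_token,
            # These fields were gated only by 'friends_*' permissions granted
            # by the app user -- the friends themselves were never asked.
            "fields": "id,name,location,likes",
        },
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# One app install exposed one consenting user -- plus their entire friend list:
# friends = harvest_friends("USER_ACCESS_TOKEN")  # hypothetical token
```

The asymmetry is the point: one person clicks "allow," and hundreds of non-consenting friends get scraped along with them.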

And while rogue app developers were at times dealt with, it was exceedingly rare: Parakilas didn't witness a single audit of a developer's systems during his time there.

Even more alarming is that Facebook was apparently quite keen on using access to this profile data as an incentive to encourage even more app development. Apps were seen as so important to Facebook that Mark Zuckerberg himself had to give his personal approval to ban an app. And while that policy was later relaxed so Zuckerberg's approval was no longer required, it doesn't sound like the change actually resulted in more apps getting banned:

...
Parak­i­las said that when he worked at Face­book it failed to take full advan­tage of its enforce­ment mech­a­nisms, such as a clause that enables the social media giant to audit exter­nal devel­op­ers who mis­use its data.

Legal action against rogue devel­op­ers or moves to ban them from Face­book were “extreme­ly rare”, he said, adding: “In the time I was there, I didn’t see them con­duct a sin­gle audit of a developer’s sys­tems.”

Dur­ing the time he was at Face­book, Parak­i­las said the com­pa­ny was keen to encour­age more devel­op­ers to build apps for its plat­form and “one of the main ways to get devel­op­ers inter­est­ed in build­ing apps was through offer­ing them access to this data”. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters he was told that any deci­sion to ban an app required the per­son­al approval of the chief exec­u­tive, Mark Zucker­berg, although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers.
...

So how many Facebook users likely had their private profile information harvested via this 'fine print' feature that allowed app developers to scrape the profiles of app users and their friends? According to Parakilas, probably a majority of Facebook users. So that black market of Facebook profiles probably includes a majority of Facebook users. But even more amazing is that Facebook handed out this personal user information to app developers in exchange for a 30 percent share of the money they made through the app. Facebook was basically selling private user data directly to developers, which is a big reason why Parakilas's estimate that a majority of Facebook users were impacted is likely true. Especially if, as Parakilas hints, the number of developers grabbing user profile information via these apps might be in the hundreds of thousands. That's a lot of developers potentially feeding into that black market:

...
‘A major­i­ty of Face­book users’

Parak­i­las, 38, who now works as a prod­uct man­ag­er for Uber, is par­tic­u­lar­ly crit­i­cal of Facebook’s pre­vi­ous pol­i­cy of allow­ing devel­op­ers to access the per­son­al data of friends of peo­ple who used apps on the plat­form, with­out the knowl­edge or express con­sent of those friends.

That fea­ture, called friends per­mis­sion, was a boon to out­side soft­ware devel­op­ers who, from 2007 onwards, were giv­en per­mis­sion by Face­book to build quizzes and games – like the wide­ly pop­u­lar Far­mVille – that were host­ed on the plat­form.

The apps pro­lif­er­at­ed on Face­book in the years lead­ing up to the company’s 2012 ini­tial pub­lic offer­ing, an era when most users were still access­ing the plat­form via lap­tops and com­put­ers rather than smart­phones.

Face­book took a 30% cut of pay­ments made through apps, but in return enabled their cre­ators to have access to Face­book user data.

Parak­i­las does not know how many com­pa­nies sought friends per­mis­sion data before such access was ter­mi­nat­ed around mid-2014. How­ev­er, he said he believes tens or maybe even hun­dreds of thou­sands of devel­op­ers may have done so.

Parak­i­las esti­mates that “a major­i­ty of Face­book users” could have had their data har­vest­ed by app devel­op­ers with­out their knowl­edge. The com­pa­ny now has stricter pro­to­cols around the degree of access third par­ties have to data.

...

Dur­ing the time he was at Face­book, Parak­i­las said the com­pa­ny was keen to encour­age more devel­op­ers to build apps for its plat­form and “one of the main ways to get devel­op­ers inter­est­ed in build­ing apps was through offer­ing them access to this data”. Short­ly after arriv­ing at the company’s Sil­i­con Val­ley head­quar­ters he was told that any deci­sion to ban an app required the per­son­al approval of the chief exec­u­tive, Mark Zucker­berg, although the pol­i­cy was lat­er relaxed to make it eas­i­er to deal with rogue devel­op­ers.
...

“Face­book took a 30% cut of pay­ments made through apps, but in return enabled their cre­ators to have access to Face­book user data.”

And that, right there, is perhaps the biggest scandal here: Facebook just handed user data away in exchange for revenue streams from app developers. And this was a key element of its business model during the 2007–2014 period. 'Read the fine print in the terms of service' was the excuse they used:

...
“It was well under­stood in the com­pa­ny that that pre­sent­ed a risk,” he said. “Face­book was giv­ing data of peo­ple who had not autho­rised the app them­selves, and was rely­ing on terms of ser­vice and set­tings that peo­ple didn’t read or under­stand.”

It was this fea­ture that was exploit­ed by Glob­al Sci­ence Research, and the data pro­vid­ed to Cam­bridge Ana­lyt­i­ca in 2014. GSR was run by the Cam­bridge Uni­ver­si­ty psy­chol­o­gist Alek­san­dr Kogan, who built an app that was a per­son­al­i­ty test for Face­book users.
...

And this is all why Aleksandr Kogan's assertions that he had a close working relationship with Facebook and did nothing technically wrong actually do seem to be backed up by Parakilas's whistle-blowing. Both because it's hard to see what Kogan did that wasn't part of Facebook's business model, and because it's hard to ignore that Kogan's GSR shell company was one of the very last apps to have permission to exploit the "friends permissions" loophole. That sure does suggest that Kogan really did have a "close working relationship" with Facebook. So close that he got seemingly favored treatment, and that's compared to the seemingly vast number of apps that were apparently using this "friends permissions" feature: 1 in 10 Facebook apps, according to a 2010 study:

...
The test auto­mat­i­cal­ly down­loaded the data of friends of peo­ple who took the quiz, osten­si­bly for aca­d­e­m­ic pur­pos­es. Cam­bridge Ana­lyt­i­ca has denied know­ing the data was obtained improp­er­ly, and Kogan main­tains he did noth­ing ille­gal and had a “close work­ing rela­tion­ship” with Face­book.

While Kogan’s app only attract­ed around 270,000 users (most of whom were paid to take the quiz), the com­pa­ny was then able to exploit the friends per­mis­sion fea­ture to quick­ly amass data per­tain­ing to more than 50 mil­lion Face­book users.

“Kogan’s app was one of the very last to have access to friend per­mis­sions,” Parak­i­las said, adding that many oth­er sim­i­lar apps had been har­vest­ing sim­i­lar quan­ti­ties of data for years for com­mer­cial pur­pos­es. Aca­d­e­m­ic research from 2010, based on an analy­sis of 1,800 Face­book apps, con­clud­ed that around 11% of third-par­ty devel­op­ers request­ed data belong­ing to friends of users.

If those fig­ures were extrap­o­lat­ed, tens of thou­sands of apps, if not more, were like­ly to have sys­tem­at­i­cal­ly culled “pri­vate and per­son­al­ly iden­ti­fi­able” data belong­ing to hun­dreds of mil­lions of users, Parak­i­las said.
...

““Kogan’s app was one of the very last to have access to friend per­mis­sions,” Parak­i­las said, adding that many oth­er sim­i­lar apps had been har­vest­ing sim­i­lar quan­ti­ties of data for years for com­mer­cial pur­pos­es. Aca­d­e­m­ic research from 2010, based on an analy­sis of 1,800 Face­book apps, con­clud­ed that around 11% of third-par­ty devel­op­ers request­ed data belong­ing to friends of users.”

As of 2010, around 11 per­cent of app devel­op­ers request­ed data belong­ing to friends of users. Keep that in mind when Face­book claims that Alek­san­dr Kogan improp­er­ly obtained data from the friends of the peo­ple who down­loaded Kogan’s app.
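As a quick sanity check on the scale involved, the figures quoted above can be run through some back-of-the-envelope arithmetic. Only the numbers reported in the articles are used, except the platform-wide app count, which is an assumption labeled as such purely for illustration:

```python
# Back-of-the-envelope arithmetic using the figures reported above.
app_users = 270_000              # people who actually installed Kogan's quiz
profiles_harvested = 50_000_000  # total profiles reportedly obtained

# Implied average number of friends exposed per consenting user:
print(profiles_harvested / app_users)  # ~185 friends each

# The 2010 study: ~11% of 1,800 sampled apps requested friends' data.
sampled_apps, friends_share = 1_800, 0.11
print(sampled_apps * friends_share)    # ~198 apps in the sample alone

# ASSUMPTION for illustration only: if the platform hosted on the order of
# hundreds of thousands of apps, that same share implies tens of thousands
# of apps pulling friends' data -- consistent with Parakilas's extrapolation.
platform_apps_assumed = 500_000
print(platform_apps_assumed * friends_share)  # ~55,000 apps
```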

So what made Facebook eventually end this "friends permissions" policy in mid-2014? While Parakilas had already left the company by then, he does recall conversations with executives who were nervous about competitors building their own social networks from all the data Facebook was giving away:

...
Parak­i­las said he was unsure why Face­book stopped allow­ing devel­op­ers to access friends data around mid-2014, rough­ly two years after he left the com­pa­ny. How­ev­er, he said he believed one rea­son may have been that Face­book exec­u­tives were becom­ing aware that some of the largest apps were acquir­ing enor­mous troves of valu­able data.

He recalled con­ver­sa­tions with exec­u­tives who were ner­vous about the com­mer­cial val­ue of data being passed to oth­er com­pa­nies.

“They were wor­ried that the large app devel­op­ers were build­ing their own social graphs, mean­ing they could see all the con­nec­tions between these peo­ple,” he said. “They were wor­ried that they were going to build their own social net­works.”
...

That’s how much data Face­book was hand­ing out to encour­age new app devel­op­ment: so much data that they were con­cerned about cre­at­ing com­peti­tors.
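That worry is easy to see in miniature. Below is a toy, hypothetical Python sketch (all names invented) of how a developer could fold harvested friend lists into a shadow copy of the social graph, including connections between people who never installed the app at all:

```python
# Toy illustration of the "shadow social graph" executives reportedly feared.
from collections import defaultdict

social_graph = defaultdict(set)  # user id -> set of friend ids

def ingest_install(user_id, friend_ids):
    """Fold one app user's harvested friend list into the shadow graph."""
    for fid in friend_ids:
        social_graph[user_id].add(fid)
        social_graph[fid].add(user_id)  # friendships are mutual

# Two installs already reveal connections between non-users:
ingest_install("alice", ["carol", "dave"])
ingest_install("bob", ["carol", "erin"])
print(social_graph["carol"])  # {'alice', 'bob'} -- carol never installed anything
```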

Finally, it's important to note that the picture painted by Parakilas only runs through the end of 2012, when he left in frustration. So we don't actually have testimony from Facebook insiders who, like Parakilas, were involved with policing app data breaches during the period when Cambridge Analytica was engaged in its mass data collection scheme:

...
Frus­trat­ed at the lack of action, Parak­i­las left Face­book in late 2012. “I didn’t feel that the com­pa­ny treat­ed my con­cerns seri­ous­ly. I didn’t speak out pub­licly for years out of self-inter­est, to be frank.”
...

Now, it seems like a safe bet that the problem only got worse after Parakilas left, given how the Cambridge Analytica situation played out, but we don't yet know just how bad it got by that point.

Alek­san­dr Kogan: Face­book’s Close Friend (Until He Belat­ed­ly Was­n’t)

So, factoring in what we just saw with Parakilas's claims about the extent to which Facebook was handing out private Facebook profile data (the internal profile that Facebook builds up about you) to app developers for widespread commercial applications, let's take a look at some of the claims Aleksandr Kogan has made about his relationship with Facebook. Because while Kogan makes some extraordinary claims, they are also consistent with Parakilas's claims, although in some cases Kogan's description actually goes much further than Parakilas's.

For instance, according to the following Guardian article ...

1. In an email to col­leagues at the Uni­ver­si­ty of Cam­bridge, Alek­san­dr Kogan said that he had cre­at­ed the Face­book app in 2013 for aca­d­e­m­ic pur­pos­es, and used it for “a num­ber of stud­ies”. After he found­ed GSR, Kogan wrote, he trans­ferred the app to the com­pa­ny and changed its name, logo, descrip­tion, and terms and con­di­tions.

2. Kogan also claims in that email that the contract his GSR company signed with Facebook in 2014 made it absolutely clear the data was going to be used for commercial applications and that app users were granting Kogan's company the right to license or resell the data. "We made clear the app was for commercial use – we never mentioned academic research nor the University of Cambridge," Kogan wrote. "We clearly stated that the users were granting us the right to use the data in broad scope, including selling and licensing the data. These changes were all made on the Facebook app platform and thus they had full ability to review the nature of the app and raise issues. Facebook at no point raised any concerns at all about any of these changes." So Kogan says he made it clear to Facebook and to users that the app was for commercial purposes and that the data might be resold, which sounds like the kind of situation Sandy Parakilas said he witnessed, except even more open (and which should be easily verifiable if the app code still exists).

3. Facebook didn't actually kick Kogan off its platform until March 16th of this year, just days before this story broke. That is consistent with Kogan's claims that he had a good working relationship with Facebook.

4. When Kogan found­ed Glob­al Sci­ence Research (GSR) in May 2014, he co-found­ed it with anoth­er Cam­bridge researcher, Joseph Chan­cel­lor. Chan­cel­lor is cur­rent­ly employed by Face­book.

5. Facebook provided Kogan's University of Cambridge lab with a dataset of "every friendship formed in 2011 in every country in the world at the national aggregate level": 57 billion Facebook relationships in all. The data was anonymized and aggregated, so it didn't literally include details on individual Facebook friendships; it was instead aggregate "friend" counts at the national level. The data was used to publish a study in Personality and Individual Differences in 2015, and two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. But it's still a sign that Kogan is indeed being honest when he says he had a close working relationship with Facebook. It's also a reminder that if Facebook really had been handing out data for "research purposes" only, it would have handed out anonymized, aggregated data like it did in this situation with Kogan.

6. That study co-authored by Kogan's team and Facebook didn't just use the anonymized, aggregated friendship data. The study also used non-anonymized Facebook data collected through Facebook apps using exactly the same techniques as the app Kogan built for Cambridge Analytica. This study was published in August of 2015. Again, it was a study co-authored by Facebook. GSR co-founder Joseph Chancellor left GSR a month later and joined Facebook as a user experience researcher in November 2015. Recall that it was a month after that, December 2015, when we saw the first news reports of Ted Cruz's campaign using Facebook data. Also recall that Facebook responded to that December 2015 report by saying it would look into the matter. Facebook finally sent Cambridge Analytica a letter in August of 2016, days before Steve Bannon became Trump's campaign manager, asking that Cambridge Analytica delete the data. So the fact that Facebook co-authored a paper with Kogan and Chancellor in August of 2015, and that Chancellor joined Facebook that November, is a pretty significant bit of context for looking into Facebook's behavior. Because Facebook didn't just know it was guilty of working closely with Kogan. It also knew it had just co-authored an academic paper using data gathered with the same technique Cambridge Analytica was charged with using.

7. Kogan does chal­lenge one of the claims by Christo­pher Wylie. Specif­i­cal­ly, Wylie claimed that Face­book became alarmed over the vol­ume of data Kogan’s app was scoop­ing up (50 mil­lion pro­files) but Kogan assuaged those con­cerns by say­ing it was all for research. Kogan says this is a fab­ri­ca­tion and Face­book nev­er actu­al­ly con­tact­ed him express­ing alarm.

So, accord­ing to Alek­san­dr Kogan, Face­book real­ly did have an excep­tion­al­ly close rela­tion­ship with Kogan and Face­book real­ly was total­ly on board with what Kogan and Cam­bridge Ana­lyt­i­ca were doing:

The Guardian

Face­book gave data about 57bn friend­ships to aca­d­e­m­ic
Vol­ume of data sug­gests trust­ed part­ner­ship with Alek­san­dr Kogan, says ana­lyst

Julia Car­rie Wong and Paul Lewis in San Fran­cis­co
Thu 22 Mar 2018 10.56 EDT
Last mod­i­fied on Sat 24 Mar 2018 22.56 EDT

Before Face­book sus­pend­ed Alek­san­dr Kogan from its plat­form for the data har­vest­ing “scam” at the cen­tre of the unfold­ing Cam­bridge Ana­lyt­i­ca scan­dal, the social media com­pa­ny enjoyed a close enough rela­tion­ship with the researcher that it pro­vid­ed him with an anonymised, aggre­gate dataset of 57bn Face­book friend­ships.

Face­book pro­vid­ed the dataset of “every friend­ship formed in 2011 in every coun­try in the world at the nation­al aggre­gate lev­el” to Kogan’s Uni­ver­si­ty of Cam­bridge lab­o­ra­to­ry for a study on inter­na­tion­al friend­ships pub­lished in Per­son­al­i­ty and Indi­vid­ual Dif­fer­ences in 2015. Two Face­book employ­ees were named as co-authors of the study, along­side researchers from Cam­bridge, Har­vard and the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley. Kogan was pub­lish­ing under the name Alek­san­dr Spec­tre at the time.

A Uni­ver­si­ty of Cam­bridge press release on the study’s pub­li­ca­tion not­ed that the paper was “the first out­put of ongo­ing research col­lab­o­ra­tions between Spectre’s lab in Cam­bridge and Face­book”. Face­book did not respond to queries about whether any oth­er col­lab­o­ra­tions occurred.

“The sheer vol­ume of the 57bn friend pairs implies a pre-exist­ing rela­tion­ship,” said Jonathan Albright, research direc­tor at the Tow Cen­ter for Dig­i­tal Jour­nal­ism at Colum­bia Uni­ver­si­ty. “It’s not com­mon for Face­book to share that kind of data. It sug­gests a trust­ed part­ner­ship between Alek­san­dr Kogan/Spectre and Face­book.”

Face­book down­played the sig­nif­i­cance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was lit­er­al­ly num­bers – num­bers of how many friend­ships were made between pairs of coun­tries – ie x num­ber of friend­ships made between the US and UK,” Face­book spokes­woman Chris­tine Chen said by email. “There was no per­son­al­ly iden­ti­fi­able infor­ma­tion includ­ed in this data.”

Facebook’s rela­tion­ship with Kogan has since soured.

“We end­ed our work­ing rela­tion­ship with Kogan alto­geth­er after we learned that he vio­lat­ed Facebook’s terms of ser­vice for his unre­lat­ed work as a Face­book app devel­op­er,” Chen said. Face­book has said that it learned of Kogan’s mis­use of the data in Decem­ber 2015, when the Guardian first report­ed that the data had been obtained by Cam­bridge Ana­lyt­i­ca.

“We start­ed to take steps to end the rela­tion­ship right after the Guardian report, and after inves­ti­ga­tion we end­ed the rela­tion­ship soon after, in 2016,” Chen said.

On Fri­day 16 March, in antic­i­pa­tion of the Observ­er’s report­ing that Kogan had improp­er­ly har­vest­ed and shared the data of more than 50 mil­lion Amer­i­cans, Face­book sus­pend­ed Kogan from the plat­form, issued a state­ment say­ing that he “lied” to the com­pa­ny, and char­ac­terised his activ­i­ties as “a scam – and a fraud”.

On Tues­day, Face­book went fur­ther, say­ing in a state­ment: “The entire com­pa­ny is out­raged we were deceived.” And on Wednes­day, in his first pub­lic state­ment on the scan­dal, its chief exec­u­tive, Mark Zucker­berg, called Kogan’s actions a “breach of trust”.

But Face­book has not explained how it came to have such a close rela­tion­ship with Kogan that it was co-author­ing research papers with him, nor why it took until this week – more than two years after the Guardian ini­tial­ly report­ed on Kogan’s data har­vest­ing activ­i­ties – for it to inform the users whose per­son­al infor­ma­tion was improp­er­ly shared.

And Kogan has offered a defence of his actions in an inter­view with the BBC and an email to his Cam­bridge col­leagues obtained by the Guardian. “My view is that I’m being basi­cal­ly used as a scape­goat by both Face­book and Cam­bridge Ana­lyt­i­ca,” Kogan said on Radio 4 on Wednes­day.

The data col­lec­tion that result­ed in Kogan’s sus­pen­sion by Face­book was under­tak­en by Glob­al Sci­ence Research (GSR), a com­pa­ny he found­ed in May 2014 with anoth­er Cam­bridge researcher, Joseph Chan­cel­lor. Chan­cel­lor is cur­rent­ly employed by Face­book.

Between June and August of that year, GSR paid approx­i­mate­ly 270,000 indi­vid­u­als to use a Face­book ques­tion­naire app that har­vest­ed data from their own Face­book pro­files, as well as from their friends, result­ing in a dataset of more than 50 mil­lion users. The data was sub­se­quent­ly giv­en to Cam­bridge Ana­lyt­i­ca, in what Face­book has said was a vio­la­tion of Kogan’s agree­ment to use the data sole­ly for aca­d­e­m­ic pur­pos­es.

In his email to col­leagues at Cam­bridge, Kogan said that he had cre­at­ed the Face­book app in 2013 for aca­d­e­m­ic pur­pos­es, and used it for “a num­ber of stud­ies”. After he found­ed GSR, Kogan wrote, he trans­ferred the app to the com­pa­ny and changed its name, logo, descrip­tion, and terms and con­di­tions. CNN first report­ed on the Cam­bridge email. Kogan did not respond to the Guardian’s request for com­ment on this arti­cle.

“We made clear the app was for com­mer­cial use – we nev­er men­tioned aca­d­e­m­ic research nor the Uni­ver­si­ty of Cam­bridge,” Kogan wrote. “We clear­ly stat­ed that the users were grant­i­ng us the right to use the data in broad scope, includ­ing sell­ing and licens­ing the data. These changes were all made on the Face­book app plat­form and thus they had full abil­i­ty to review the nature of the app and raise issues. Face­book at no point raised any con­cerns at all about any of these changes.”

Kogan is not alone in crit­i­cis­ing Facebook’s appar­ent efforts to place the blame on him.

“In my view, it’s Face­book that did most of the shar­ing,” said Albright, who ques­tioned why Face­book cre­at­ed a sys­tem for third par­ties to access so much per­son­al infor­ma­tion in the first place. That sys­tem “was designed to share their users’ data in mean­ing­ful ways in exchange for stock val­ue”, he added.

Whistle­blow­er Christo­pher Wylie told the Observ­er that Face­book was aware of the vol­ume of data being pulled by Kogan’s app. “Their secu­ri­ty pro­to­cols were trig­gered because Kogan’s apps were pulling this enor­mous amount of data, but appar­ent­ly Kogan told them it was for aca­d­e­m­ic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cam­bridge email, Kogan char­ac­terised this claim as a “fab­ri­ca­tion”, writ­ing: “There was no exchange with Face­book about it, and ... we nev­er claimed dur­ing the project that it was for aca­d­e­m­ic research. In fact, we did our absolute best not to have the project have any entan­gle­ments with the uni­ver­si­ty.”

The col­lab­o­ra­tion between Kogan and Face­book researchers which result­ed in the report pub­lished in 2015 also used data har­vest­ed by a Face­book app. The study analysed two datasets, the anony­mous macro-lev­el nation­al set of 57bn friend pairs pro­vid­ed by Face­book and a small­er dataset col­lect­ed by the Cam­bridge aca­d­e­mics.

For the small­er dataset, the research team used the same method of pay­ing peo­ple to use a Face­book app that har­vest­ed data about the indi­vid­u­als and their friends. Face­book was not involved in this part of the study. The study notes that the users signed a con­sent form about the research and that “no decep­tion was used”.

The paper was pub­lished in late August 2015. In Sep­tem­ber 2015, Chan­cel­lor left GSR, accord­ing to com­pa­ny records. In Novem­ber 2015, Chan­cel­lor was hired to work at Face­book as a user expe­ri­ence researcher.

...

———-

“Face­book gave data about 57bn friend­ships to aca­d­e­m­ic” by Julia Car­rie Wong and Paul Lewis; The Guardian; 03/22/2018

“Before Face­book sus­pend­ed Alek­san­dr Kogan from its plat­form for the data har­vest­ing “scam” at the cen­tre of the unfold­ing Cam­bridge Ana­lyt­i­ca scan­dal, the social media com­pa­ny enjoyed a close enough rela­tion­ship with the researcher that it pro­vid­ed him with an anonymised, aggre­gate dataset of 57bn Face­book friend­ships.”

An anonymized, aggre­gate dataset of 57bn Face­book friend­ships sure makes it a lot eas­i­er to take Kogan at his word when he claims a close work­ing rela­tion­ship with Face­book.

Now, keep in mind that the anonymized data was aggregated at the national level, so it's not as if Facebook gave Kogan a list of 57 billion individual Facebook friendships. And when you think about it, that aggregated, anonymized data is far less sensitive than the personal Facebook profile data Kogan and other app developers were routinely grabbing during this period. It's the fact that Facebook gave this data to Kogan in the first place that lends credence to his claims.

But the biggest factor lending credence to Kogan's claims is the fact that Facebook co-authored a study with Kogan and others at the University of Cambridge using that anonymized, aggregated data. Two Facebook employees were named as co-authors of the study. That is definitely a sign of a close working relationship:

...
Face­book pro­vid­ed the dataset of “every friend­ship formed in 2011 in every coun­try in the world at the nation­al aggre­gate lev­el” to Kogan’s Uni­ver­si­ty of Cam­bridge lab­o­ra­to­ry for a study on inter­na­tion­al friend­ships pub­lished in Per­son­al­i­ty and Indi­vid­ual Dif­fer­ences in 2015. Two Face­book employ­ees were named as co-authors of the study, along­side researchers from Cam­bridge, Har­vard and the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley. Kogan was pub­lish­ing under the name Alek­san­dr Spec­tre at the time.

A Uni­ver­si­ty of Cam­bridge press release on the study’s pub­li­ca­tion not­ed that the paper was “the first out­put of ongo­ing research col­lab­o­ra­tions between Spectre’s lab in Cam­bridge and Face­book”. Face­book did not respond to queries about whether any oth­er col­lab­o­ra­tions occurred.

“The sheer vol­ume of the 57bn friend pairs implies a pre-exist­ing rela­tion­ship,” said Jonathan Albright, research direc­tor at the Tow Cen­ter for Dig­i­tal Jour­nal­ism at Colum­bia Uni­ver­si­ty. “It’s not com­mon for Face­book to share that kind of data. It sug­gests a trust­ed part­ner­ship between Alek­san­dr Kogan/Spectre and Face­book.”
...

Even more damning for Facebook is that the research co-authored by Kogan, Facebook, and other researchers didn't just include the anonymized, aggregated data. It also included a second dataset of non-anonymized data that was harvested in exactly the same way Kogan's GSR app worked. And while Facebook apparently wasn't involved in that part of the study, that's beside the point. Facebook clearly knew about it if it co-authored the study:

...
The col­lab­o­ra­tion between Kogan and Face­book researchers which result­ed in the report pub­lished in 2015 also used data har­vest­ed by a Face­book app. The study analysed two datasets, the anony­mous macro-lev­el nation­al set of 57bn friend pairs pro­vid­ed by Face­book and a small­er dataset col­lect­ed by the Cam­bridge aca­d­e­mics.

For the small­er dataset, the research team used the same method of pay­ing peo­ple to use a Face­book app that har­vest­ed data about the indi­vid­u­als and their friends. Face­book was not involved in this part of the study. The study notes that the users signed a con­sent form about the research and that “no decep­tion was used”.

The paper was pub­lished in late August 2015. In Sep­tem­ber 2015, Chan­cel­lor left GSR, accord­ing to com­pa­ny records. In Novem­ber 2015, Chan­cel­lor was hired to work at Face­book as a user expe­ri­ence researcher.
...

But, alas, Kogan's relationship with Facebook has since soured, with Facebook now acting as if Kogan had totally violated its trust. And yet it's hard to ignore the fact that Kogan wasn't formally kicked off Facebook's platform until March 16th of this year, just a few days before all these stories about Kogan and Facebook were about to go public:

...
Facebook’s rela­tion­ship with Kogan has since soured.

“We end­ed our work­ing rela­tion­ship with Kogan alto­geth­er after we learned that he vio­lat­ed Facebook’s terms of ser­vice for his unre­lat­ed work as a Face­book app devel­op­er,” Chen said. Face­book has said that it learned of Kogan’s mis­use of the data in Decem­ber 2015, when the Guardian first report­ed that the data had been obtained by Cam­bridge Ana­lyt­i­ca.

“We start­ed to take steps to end the rela­tion­ship right after the Guardian report, and after inves­ti­ga­tion we end­ed the rela­tion­ship soon after, in 2016,” Chen said.

On Fri­day 16 March, in antic­i­pa­tion of the Observ­er’s report­ing that Kogan had improp­er­ly har­vest­ed and shared the data of more than 50 mil­lion Amer­i­cans, Face­book sus­pend­ed Kogan from the plat­form, issued a state­ment say­ing that he “lied” to the com­pa­ny, and char­ac­terised his activ­i­ties as “a scam – and a fraud”.

On Tues­day, Face­book went fur­ther, say­ing in a state­ment: “The entire com­pa­ny is out­raged we were deceived.” And on Wednes­day, in his first pub­lic state­ment on the scan­dal, its chief exec­u­tive, Mark Zucker­berg, called Kogan’s actions a “breach of trust”.
...

““The entire com­pa­ny is out­raged we were deceived.” And on Wednes­day, in his first pub­lic state­ment on the scan­dal, its chief exec­u­tive, Mark Zucker­berg, called Kogan’s actions a “breach of trust”.”

Mark Zucker­berg is com­plain­ing about a “breach of trust.” LOL!

And yet Facebook has yet to explain the nature of its relationship with Kogan or why it didn't kick him off the platform until only recently. But Kogan has an explanation: he's a scapegoat, and he wasn't doing anything Facebook didn't know he was doing. And when you notice that Kogan's GSR co-founder, Joseph Chancellor, is now a Facebook employee, it's hard not to take his claims seriously:

...
But Face­book has not explained how it came to have such a close rela­tion­ship with Kogan that it was co-author­ing research papers with him, nor why it took until this week – more than two years after the Guardian ini­tial­ly report­ed on Kogan’s data har­vest­ing activ­i­ties – for it to inform the users whose per­son­al infor­ma­tion was improp­er­ly shared.

And Kogan has offered a defence of his actions in an inter­view with the BBC and an email to his Cam­bridge col­leagues obtained by the Guardian. “My view is that I’m being basi­cal­ly used as a scape­goat by both Face­book and Cam­bridge Ana­lyt­i­ca,” Kogan said on Radio 4 on Wednes­day.

The data col­lec­tion that result­ed in Kogan’s sus­pen­sion by Face­book was under­tak­en by Glob­al Sci­ence Research (GSR), a com­pa­ny he found­ed in May 2014 with anoth­er Cam­bridge researcher, Joseph Chan­cel­lor. Chan­cel­lor is cur­rent­ly employed by Face­book.
...

But if Kogan's claims are to be taken seriously, we have a pretty serious scandal on our hands. Because Kogan claims that not only did he make it clear to Facebook and his app users that the data they were collecting was for commercial use (with no mention of academic research or the University of Cambridge), but he also made it clear that the data GSR was collecting could be licensed and resold. And Facebook at no point raised any concerns at all about any of this:

...
“We made clear the app was for com­mer­cial use – we nev­er men­tioned aca­d­e­m­ic research nor the Uni­ver­si­ty of Cam­bridge,” Kogan wrote. “We clear­ly stat­ed that the users were grant­i­ng us the right to use the data in broad scope, includ­ing sell­ing and licens­ing the data. These changes were all made on the Face­book app plat­form and thus they had full abil­i­ty to review the nature of the app and raise issues. Face­book at no point raised any con­cerns at all about any of these changes.”

Kogan is not alone in crit­i­cis­ing Facebook’s appar­ent efforts to place the blame on him.

“In my view, it’s Face­book that did most of the shar­ing,” said Albright, who ques­tioned why Face­book cre­at­ed a sys­tem for third par­ties to access so much per­son­al infor­ma­tion in the first place. That sys­tem “was designed to share their users’ data in mean­ing­ful ways in exchange for stock val­ue”, he added.
...

Now, it's worth noting that this casual acceptance of the commercial use of the data collected over these Facebook apps, and of the potential licensing and reselling of that data, is actually a far more serious situation than the one Sandy Parakilas described during his time at Facebook. Recall that, according to Parakilas, all app developers had to tell Facebook was that they were going to use the profile data on app users and their friends to 'improve the user experience.' Commercial apps were fine from Facebook's perspective, but Parakilas didn't describe a situation where app developers openly made it clear they might license or resell the data. So Kogan's account, in which it was clear his app had commercial applications and might involve reselling the data, describes something even more egregious than what Parakilas witnessed. But don't forget that Parakilas left Facebook in late 2012 and Kogan's app would have been approved in 2014, so it's entirely possible Facebook's policies got even more egregious after Parakilas left.

And it's worth noting how Kogan's claims differ from Christopher Wylie's. Wylie asserts that Facebook grew alarmed by the volume of data GSR's app was pulling from Facebook users and that Kogan assuaged those concerns by saying it was for research purposes, whereas Kogan says Facebook never expressed any alarm at all:

...
Whistle­blow­er Christo­pher Wylie told the Observ­er that Face­book was aware of the vol­ume of data being pulled by Kogan’s app. “Their secu­ri­ty pro­to­cols were trig­gered because Kogan’s apps were pulling this enor­mous amount of data, but appar­ent­ly Kogan told them it was for aca­d­e­m­ic use,” Wylie said. “So they were like: ‘Fine.’”

In the Cam­bridge email, Kogan char­ac­terised this claim as a “fab­ri­ca­tion”, writ­ing: “There was no exchange with Face­book about it, and ... we nev­er claimed dur­ing the project that it was for aca­d­e­m­ic research. In fact, we did our absolute best not to have the project have any entan­gle­ments with the uni­ver­si­ty.”
...

So as we can see, when it comes to Face­book’s “friends per­mis­sions” data shar­ing pol­i­cy, its arrange­ment with Alek­san­dr Kogan was prob­a­bly one of the more respon­si­ble ones it engaged in because, hey, at least Kogan’s work was osten­si­bly for research pur­pos­es and involved at least some anonymized data.

Cam­bridge Ana­lyt­i­ca’s Infor­mal Friend: Palan­tir

And as we can also see, the more we learn about this situation, the harder it gets to dismiss Kogan's claims that Facebook is making him a scapegoat in order to cover up not just the relationship Facebook had with Kogan but the fact that what Kogan was doing was routine for app developers for years.

But as the fol­low­ing New York Times arti­cle makes clear, Face­book’s rela­tion­ship with Alek­san­dr Kogan isn’t the only work­ing rela­tion­ship Face­book needs to wor­ry about that might lead back to Cam­bridge Ana­lyt­i­ca. Because it turns out there’s anoth­er Face­book con­nec­tion to Cam­bridge Ana­lyt­i­ca and it’s poten­tial­ly far, far more scan­dalous than Face­book’s rela­tion­ship with Kogan: It turns out Palan­tir might be the orig­i­na­tor of the idea to cre­ate Kogan’s app for the pur­pose of col­lect­ing psy­cho­log­i­cal pro­files. That’s right, accord­ing to doc­u­ments the New York Times has seen, Palan­tir, the pri­vate intel­li­gence firm with a close rela­tion­ship with the US nation­al secu­ri­ty state, was in talks with Cam­bridge Ana­lyt­i­ca from 2013–2014 about psy­cho­log­i­cal­ly pro­fil­ing vot­ers and it was an employ­ee of Palan­tir who raised the idea of cre­at­ing that app in the first place.

And this is of course wildly scandalous if true, because Palantir was founded by Facebook board member Peter Thiel, who also happens to be a far right political activist and a close ally of President Trump.

But it gets worse. And weirder. Because it sounds like one of the people encouraging SCL (Cambridge Analytica's parent company) to work with Palantir was none other than Sophie Schmidt, daughter of Eric Schmidt, then Google's executive chairman.

Keep in mind that this isn't the first time we've heard about Palantir's ties to Cambridge Analytica and Sophie Schmidt's role in this. It was reported by the Observer last May. According to that May 2017 article in the Observer, Schmidt was passing through London in June of 2013 when she decided to call up her former boss at SCL and recommend that they contact Palantir. Also of interest: if you look at the current version of that Observer article, all mention of Sophie Schmidt has been removed and there's a note that the article is the subject of legal complaints on behalf of Cambridge Analytica LLC and SCL Elections Limited. But in the original article she's mentioned quite extensively. It would appear that someone is very upset about the Sophie Schmidt angle to this story.

So this Palantir/Sophie Schmidt side of the story isn't new. But we're now learning a lot more about that relationship. For instance:

1. In ear­ly 2013, Cam­bridge Ana­lyt­i­ca CEO Alexan­der Nix, an SCL direc­tor at the time, and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns.

2. An SCL employee wrote to a colleague in a June 2013 email that Schmidt was pushing them to work with Palantir: "Ever come across Palantir. Amusingly Eric Schmidt's daughter was an intern with us and is trying to push us towards them?"

3. Accord­ing to Christo­pher Wylie’s tes­ti­mo­ny to law­mak­ers, “There were Palan­tir staff who would come into the office and work on the data...And we would go and meet with Palan­tir staff at Palan­tir.” Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014.

4. The Palantir employee who floated the idea of creating the app ultimately built by Aleksandr Kogan is Alfredas Chmieliauskas, who works on business development for Palantir, according to his LinkedIn page.

5. Palantir and Cambridge Analytica never formally started working together. A Palantir spokeswoman acknowledged that the companies had briefly considered working together but said that Palantir declined a partnership, in part because executives there wanted to steer clear of election work. Emails indicate that Mr. Nix and Mr. Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. Wylie acknowledges that Palantir and Cambridge Analytica never signed a contract or entered into a formal business relationship. But he said some Palantir employees helped engineer Cambridge Analytica's psychographic models. In other words, while there was never a formal relationship, there was a pretty significant informal relationship.

6. Mr. Chmieliauskas was in communication with Wylie's team in 2014, during the period when Cambridge Analytica was initially trying to convince the University of Cambridge team to work with it. Recall that Cambridge Analytica initially discovered that the University of Cambridge team had exactly the kind of data it was interested in, collected via a Facebook app, but the negotiations ultimately failed, and it was then that Cambridge Analytica found Aleksandr Kogan, who agreed to create his own app. Well, according to this report, it was Chmieliauskas who initially suggested that Cambridge Analytica create its own version of the University of Cambridge team's app as leverage in those negotiations. In essence, Chmieliauskas wanted Cambridge Analytica to show the University of Cambridge team that it could collect the information itself, presumably to drive a harder bargain. And when those negotiations failed, Cambridge Analytica did indeed create its own app after teaming up with Kogan.

7. Palan­tir asserts that Chmieli­auskas was act­ing in his own capac­i­ty when he con­tin­ued com­mu­ni­cat­ing with Wylie and made the sug­ges­tion to cre­ate their own app. Palan­tir ini­tial­ly told the New York Times that it had “nev­er had a rela­tion­ship with Cam­bridge Ana­lyt­i­ca, nor have we ever worked on any Cam­bridge Ana­lyt­i­ca data.” Palan­tir lat­er revised this, say­ing that Mr. Chmieli­auskas was not act­ing on the company’s behalf when he advised Mr. Wylie on the Face­book data.

And, again, do not forget that Palantir is owned by Peter Thiel, the far right billionaire who was an early investor in Facebook and remains one of Facebook's board members to this day. He was also a Trump delegate in 2016 and was in discussions with the Trump administration to lead the powerful President's Intelligence Advisory Board, although he ultimately turned that offer down. Oh, and he's an advocate of the Dark Enlightenment.

Basically, Peter Thiel was a member of the 'Alt Right' before that term was ever coined. And he's a very powerful influence at Facebook. So learning that Palantir and Cambridge Analytica were in discussions to work together on election projects in 2013 and 2014, that a Palantir employee was advising Cambridge Analytica during the negotiations with the University of Cambridge team, and that Palantir employees helped engineer Cambridge Analytica's psychographic models based on Facebook data is the kind of revelation that just might qualify as the most scandalous in this entire mess:

“Spy Contractor’s Idea Helped Cam­bridge Ana­lyt­i­ca Har­vest Face­book Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 03/27/2018

As a start-up called Cam­bridge Ana­lyt­i­ca sought to har­vest the Face­book data of tens of mil­lions of Amer­i­cans in sum­mer 2014, the com­pa­ny received help from at least one employ­ee at Palan­tir Tech­nolo­gies, a top Sil­i­con Val­ley con­trac­tor to Amer­i­can spy agen­cies and the Pen­ta­gon.

It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

Cam­bridge ulti­mate­ly took a sim­i­lar approach. By ear­ly sum­mer, the com­pa­ny found a uni­ver­si­ty researcher to har­vest data using a per­son­al­i­ty ques­tion­naire and Face­book app. The researcher scraped pri­vate data from over 50 mil­lion Face­book users — and Cam­bridge Ana­lyt­i­ca went into busi­ness sell­ing so-called psy­cho­me­t­ric pro­files of Amer­i­can vot­ers, set­ting itself on a col­li­sion course with reg­u­la­tors and law­mak­ers in the Unit­ed States and Britain.

The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

“There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,” said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day.

...

The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book.

The Palan­tir employ­ee, Alfredas Chmieli­auskas, works on busi­ness devel­op­ment for the com­pa­ny, accord­ing to his LinkedIn page. In an ini­tial state­ment, Palan­tir said it had “nev­er had a rela­tion­ship with Cam­bridge Ana­lyt­i­ca, nor have we ever worked on any Cam­bridge Ana­lyt­i­ca data.” Lat­er on Tues­day, Palan­tir revised its account, say­ing that Mr. Chmieli­auskas was not act­ing on the company’s behalf when he advised Mr. Wylie on the Face­book data.

“We learned today that an employ­ee, in 2013–2014, engaged in an entire­ly per­son­al capac­i­ty with peo­ple asso­ci­at­ed with Cam­bridge Ana­lyt­i­ca,” the com­pa­ny said. “We are look­ing into this and will take the appro­pri­ate action.”

The com­pa­ny said it was con­tin­u­ing to inves­ti­gate but knew of no oth­er employ­ees who took part in the effort. Mr. Wylie told law­mak­ers that mul­ti­ple Palan­tir employ­ees played a role.

Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

Ms. Schmidt did not respond to requests for com­ment, nor did a spokesman for Cam­bridge Ana­lyt­i­ca.

In ear­ly 2013, Alexan­der Nix, an SCL direc­tor who became chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns.

A Palan­tir spokes­woman acknowl­edged that the com­pa­nies had briefly con­sid­ered work­ing togeth­er but said that Palan­tir declined a part­ner­ship, in part because exec­u­tives there want­ed to steer clear of elec­tion work. Emails reviewed by The Times indi­cate that Mr. Nix and Mr. Chmieli­auskas sought to revive talks about a for­mal part­ner­ship through ear­ly 2014, but Palan­tir exec­u­tives again declined.

In his tes­ti­mo­ny, Mr. Wylie acknowl­edged that Palan­tir and Cam­bridge Ana­lyt­i­ca nev­er signed a con­tract or entered into a for­mal busi­ness rela­tion­ship. But he said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.

Palan­tir employ­ees were impressed with Cambridge’s back­ing from Mr. Mer­cer, one of the world’s rich­est men, accord­ing to mes­sages viewed by The Times. And Cam­bridge Ana­lyt­i­ca viewed Palantir’s Sil­i­con Val­ley ties as a valu­able resource for launch­ing and expand­ing its own busi­ness.

In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients.

“I had left field idea,” Mr. Chmieli­auskas wrote in May 2014. “What about repli­cat­ing the work of the cam­bridge prof as a mobile app that con­nects to face­book?” Repro­duc­ing the app, Mr. Chmieli­auskas wrote, “could be a valu­able lever­age nego­ti­at­ing with the guy.”

Those nego­ti­a­tions failed. But Mr. Wylie struck gold with anoth­er Cam­bridge researcher, the Russ­ian-Amer­i­can psy­chol­o­gist Alek­san­dr Kogan, who built his own per­son­al­i­ty quiz app for Face­book. Over sub­se­quent months, Dr. Kogan’s work helped Cam­bridge devel­op psy­cho­log­i­cal pro­files of mil­lions of Amer­i­can vot­ers.

———-

“Spy Contractor’s Idea Helped Cam­bridge Ana­lyt­i­ca Har­vest Face­book Data” by NICHOLAS CONFESSORE and MATTHEW ROSENBERG; The New York Times; 03/27/2018

“The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

Yep, a Facebook board member's private intelligence firm was working closely with Cambridge Analytica as it developed its psychological profiling technology. It's quite a revelation. The kind of explosive revelation that had Palantir first denying that there was any relationship at all, followed by an acknowledgement/denial that, yes, a Palantir employee, Alfredas Chmieliauskas, was indeed working with Cambridge Analytica, but not on behalf of Palantir:

...
It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

...

The Palan­tir employ­ee, Alfredas Chmieli­auskas, works on busi­ness devel­op­ment for the com­pa­ny, accord­ing to his LinkedIn page. In an ini­tial state­ment, Palan­tir said it had “nev­er had a rela­tion­ship with Cam­bridge Ana­lyt­i­ca, nor have we ever worked on any Cam­bridge Ana­lyt­i­ca data.” Lat­er on Tues­day, Palan­tir revised its account, say­ing that Mr. Chmieli­auskas was not act­ing on the company’s behalf when he advised Mr. Wylie on the Face­book data.
...

Adding to the scandalous nature of it all is that the daughter of Eric Schmidt, then Google's executive chairman, suddenly appeared in June of 2013 to promote a relationship with Palantir to her old bosses at SCL:

...
Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

Ms. Schmidt did not respond to requests for com­ment, nor did a spokesman for Cam­bridge Ana­lyt­i­ca.
...

But this June 2013 proposal by Sophie Schmidt wasn't what started Cambridge Analytica's relationship with Palantir, because that reportedly started in early 2013, when Alexander Nix and a Palantir executive discussed working together on election campaigns:

...
In ear­ly 2013, Alexan­der Nix, an SCL direc­tor who became chief exec­u­tive of Cam­bridge Ana­lyt­i­ca, and a Palan­tir exec­u­tive dis­cussed work­ing togeth­er on elec­tion cam­paigns.
...

So Sophie Schmidt swooped in to pro­mote Palan­tir to Cam­bridge Ana­lyt­i­ca months after the nego­ti­a­tions began. It rais­es the ques­tion of who encour­aged her to do that.

Palantir now admits these negotiations happened, but claims that it chose not to work with Cambridge Analytica because its executives "wanted to steer clear of election work." And emails do indicate that Palantir formally turned down the idea: they show that Nix and Chmieliauskas sought to revive talks about a formal partnership through early 2014, but Palantir executives again declined. And yet, according to Christopher Wylie, some Palantir employees helped engineer Cambridge Analytica's psychographic models. That suggests Palantir turned down a formal relationship in favor of an informal one:

...
A Palan­tir spokes­woman acknowl­edged that the com­pa­nies had briefly con­sid­ered work­ing togeth­er but said that Palan­tir declined a part­ner­ship, in part because exec­u­tives there want­ed to steer clear of elec­tion work. Emails reviewed by The Times indi­cate that Mr. Nix and Mr. Chmieli­auskas sought to revive talks about a for­mal part­ner­ship through ear­ly 2014, but Palan­tir exec­u­tives again declined.

In his tes­ti­mo­ny, Mr. Wylie acknowl­edged that Palan­tir and Cam­bridge Ana­lyt­i­ca nev­er signed a con­tract or entered into a for­mal busi­ness rela­tion­ship. But he said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.
...

“There were Palan­tir staff who would come into the office and work on the data...And we would go and meet with Palan­tir staff at Palan­tir.”

That sure sounds like a rela­tion­ship! For­mal or not.

And that informal relationship continued during the period in 2014 when Cambridge Analytica was in negotiations with its initial target, the University of Cambridge Psychometrics Centre:

...
In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients.
...

And it was during those negotiations, in May of 2014, that Chmieliauskas first proposed the idea of simply replicating what the University of Cambridge Psychometrics Centre was doing, as leverage in the talks. When those negotiations ultimately failed, Cambridge Analytica found another Cambridge University psychologist, Aleksandr Kogan, to build the app for them:

...
“I had left field idea,” Mr. Chmieli­auskas wrote in May 2014. “What about repli­cat­ing the work of the cam­bridge prof as a mobile app that con­nects to face­book?” Repro­duc­ing the app, Mr. Chmieli­auskas wrote, “could be a valu­able lever­age nego­ti­at­ing with the guy.”

Those nego­ti­a­tions failed. But Mr. Wylie struck gold with anoth­er Cam­bridge researcher, the Russ­ian-Amer­i­can psy­chol­o­gist Alek­san­dr Kogan, who built his own per­son­al­i­ty quiz app for Face­book. Over sub­se­quent months, Dr. Kogan’s work helped Cam­bridge devel­op psy­cho­log­i­cal pro­files of mil­lions of Amer­i­can vot­ers.
...

And that’s what we know so far about the rela­tion­ship between Cam­bridge Ana­lyt­i­ca and Palan­tir. Which rais­es a num­ber of ques­tions. Like whether or not this infor­mal rela­tion­ship con­tin­ued well after Cam­bridge Ana­lyt­i­ca start­ed har­vest­ing all that Face­book infor­ma­tion. Let’s look at sev­en key the facts about we know Palan­tir’s involve­ment in this so far:

1. Palan­tir employ­ees helped build the psy­cho­graph­ic pro­files.

2. Mr. Chmieli­auskas was in con­tact with Wylie at least as late as May of 2014 as Cam­bridge Ana­lyt­i­ca was nego­ti­at­ing with the Uni­ver­si­ty of Cam­bridge’s Psy­cho­met­rics Cen­tre.

3. We don’t know when this infor­mal rela­tion­ship between Palan­tir and Cam­bridge Ana­lyt­i­ca end­ed.

4. We don’t know if the infor­mal rela­tion­ship between Palan­tir and Cam­bridge Ana­lyt­i­ca — which large­ly appears to cen­ter around Mr. Chmieli­auskas — real­ly was large­ly Chmieli­auskas’s ini­tia­tive alone after Palan­tir ini­tial­ly reject­ed a for­mal rela­tion­ship (it’s pos­si­ble) or if Chmieli­auskas was direct­ed to pur­sue this rela­tion­ship infor­mal­ly but on behalf of Palan­tir to main­tain deni­a­bil­i­ty in the case of awk­ward sit­u­a­tions like the present one (also very pos­si­ble, and savvy giv­en the cur­rent sit­u­a­tion).

5. We don’t know if the Palan­tir employ­ees who helped build those psy­cho­graph­ic pro­files were work­ing with the data Cam­bridge Ana­lyt­i­ca har­vest­ed from Face­book or were they work­ing with the ear­li­er, inad­e­quate sets of data that did­n’t include the Face­book data? Because if the Palan­tir employ­ees helped build the psy­cho­graph­ic pro­files based on the Face­book data that implies this infor­mal rela­tion­ship went on a lot longer than May of 2014 since that’s when it first start­ed get­ting col­lect­ed via Kogan’s app. How long? We don’t yet know.

6. Nei­ther do we know how much of this data ulti­mate­ly fell into the hands of Palan­tir. As Wylie described it, “There were Palan­tir staff who would come into the office and work on the data...And we would go and meet with Palan­tir staff at Palan­tir.” So did those Palan­tir employ­ees who were work­ing on “the data” take any of that data back to Palan­tir?

7. For that matter, given that Peter Thiel sits on the board of Facebook, and given how freely Facebook hands out this kind of data, we have to ask whether Palantir already has direct access to exactly the kind of data Cambridge Analytica was harvesting. Did Palantir even need Cambridge Analytica's data? Perhaps Palantir was already using apps of its own to harvest this kind of data. We don't know. At the same time, don't forget that even if Palantir had ready access to the same Facebook profile data gathered by Kogan's app, it's still possible Palantir would have had an interest in the company purely to see how the data was analyzed and to learn from that. In other words, for Peter Thiel's Palantir, the interest in Cambridge Analytica may have been more about the algorithms than the data. Don't forget that if anyone is the real power behind the throne at Facebook, it's probably Thiel.

8. What on earth is going on with Sophie Schmidt, daughter of then-Google executive chairman Eric Schmidt, pushing Cambridge Analytica to work with Palantir in June of 2013, months after Cambridge Analytica and Palantir began talking with each other? That seems potentially significant.

Those are just some of the questions raised by Palantir's ambiguously ominous relationship with Cambridge Analytica. But don't forget that it's not just Palantir about which we need to ask these kinds of questions. For instance, what about Steve Bannon's Breitbart? Does Breitbart, home of the neo-Nazi 'Alt Right', also have access to all that harvested Cambridge Analytica data? Not just the raw Facebook data but also the processed psychological profile data on 50 million Americans that Cambridge Analytica generated. Does Breitbart have the processed profiles too? And what about the Republican Party? And all the other entities out there that gained access to this Facebook profile data? Just how many different entities around the globe possess that Cambridge Analytica data set?
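As a technical aside on point 1 above: "engineering psychographic models" of the kind Wylie describes is, per the published academic work this approach drew on (studies predicting Big Five personality traits from Facebook likes), essentially a dimensionality-reduction-plus-regression exercise. Here is a minimal sketch on purely synthetic data; it illustrates the general technique, not Cambridge Analytica's actual pipeline:

```python
# A minimal, synthetic-data sketch of "psychographic" modeling as described
# in the public research this approach drew on (predicting Big Five traits
# from Facebook likes): compress a sparse user-by-like matrix, then regress
# the components onto survey-measured trait scores. An illustration of the
# general technique only, not Cambridge Analytica's actual pipeline.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 2000

# Synthetic stand-ins: a binary matrix of which pages each user "liked",
# and an "openness" score per user from a personality questionnaire.
likes = (rng.random((n_users, n_likes)) < 0.02).astype(float)
openness = likes[:, :50].sum(axis=1) * 0.5 + rng.normal(0, 1, n_users)

X_train, X_test, y_train, y_test = train_test_split(
    likes, openness, random_state=0)

# Compress thousands of like-columns into 100 dense components...
svd = TruncatedSVD(n_components=100, random_state=0)
Z_train = svd.fit_transform(X_train)
Z_test = svd.transform(X_test)

# ...then fit a regularized regression from components to the trait score.
model = Ridge(alpha=1.0).fit(Z_train, y_train)
print("R^2 on held-out users:", round(model.score(Z_test, y_test), 2))
```

The asymmetry is the point: trait scores only have to be collected from a relatively small survey group, after which the fitted model can score every harvested profile that never saw a questionnaire.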

It’s Not Just Cam­bridge Ana­lyt­i­ca. Or Face­book. Or Google. It’s Soci­ety.

Of course, as we saw with Sandy Parakilas's whistle-blower claims, when it comes to the question of who might possess Facebook profile data harvested during the 2007–2014 period when Facebook had its "friends permissions" policy, the list of suspects includes potentially hundreds of thousands of developers, plus anyone who has purchased this information on the black market.
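Since the mechanics of that loophole can sound abstract, here is a minimal sketch of what profile harvesting looked like from an app developer's side under the old Graph API v1.0. The endpoints, field names, and token are illustrative placeholders based on public descriptions of that pre-2015 API, not a tested client:

```python
# A minimal sketch of how a pre-2015 Facebook app could pull friend data
# under the old Graph API v1.0 "friends_*" permissions. The endpoints,
# field names, and token are illustrative placeholders based on public
# descriptions of that API, not a tested client.
import requests

GRAPH = "https://graph.facebook.com/v1.0"
USER_TOKEN = "PLACEHOLDER"  # OAuth token granted by ONE consenting app user

def get(path, **params):
    params["access_token"] = USER_TOKEN
    resp = requests.get(f"{GRAPH}/{path}", params=params)
    resp.raise_for_status()
    return resp.json()

# One consenting user yields their entire friend list...
friends = get("me/friends")["data"]

# ...and, with friends_likes / friends_location style permissions, profile
# details for each friend, none of whom ever saw a consent dialog.
profiles = [get(f["id"], fields="name,location,likes") for f in friends]

print(f"1 consent, {len(profiles)} harvested profiles")
```

One consenting app user, hundreds of harvested friends' profiles: that ratio is the entire mechanism behind the numbers in this scandal.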

Don't forget one of the other amazing aspects of this whole situation: if hundreds of thousands of developers were using this feature to scrape user profiles, that means this really was an open secret. Lots and lots of people were doing this. For years. So, like many scandals, perhaps the most scandalous part of it is that we're learning about something we should have known all along, and that many of us did know all along. It's not like it's a secret that people are being surveilled in detail in the internet age, and that this data is being stored and aggregated in public and private databases and put up for sale. We've collectively known this all along. At least on some level.

And yet this surveillance is so pervasive that it's almost never thought about on a moment-by-moment basis at an individual level. When people browse the web they presumably aren't thinking about the volume of tracking cookies and other personal information slurped up as a result of each mouse click. Nor are they thinking about how that click contributes to the numerous personal profiles of them floating around the commercial data brokerage marketplace. So in a more fundamental sense we don't actually know we're being surveilled, because we're not thinking about it.
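To make the mechanics concrete, here is a minimal sketch of how a third-party tracking "pixel" builds a cross-site browsing profile. The tracker domain is hypothetical and the code is a bare-bones illustration, not any particular company's system:

```python
# A minimal sketch of third-party cookie tracking, using a hypothetical
# tracker domain. Any site that embeds the "pixel" below causes visiting
# browsers to call the tracker, which sets (or reads back) a unique cookie
# and logs the referring page, accumulating a cross-site browsing profile
# for people who never visited tracker.example itself.
# Requires: pip install flask
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
profiles = {}  # visitor_id -> pages seen (in-memory, for the sketch only)

# Publisher sites embed: <img src="https://tracker.example/pixel.gif">
@app.route("/pixel.gif")
def pixel():
    visitor_id = request.cookies.get("vid") or uuid.uuid4().hex
    page = request.headers.get("Referer", "unknown")
    profiles.setdefault(visitor_id, []).append(page)
    resp = make_response(b"GIF89a")  # placeholder; real trackers return a 1x1 gif
    resp.set_cookie("vid", visitor_id, max_age=10 * 365 * 24 * 3600)
    return resp

if __name__ == "__main__":
    app.run(port=8080)
```

The visitor never types the tracker's address; the embedding pages do the work, which is why the surveillance stays invisible at the moment of the click.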

That blindness is one example of how humans aren't wired to naturally think about the macro forces impacting their lives in day-to-day decisions, which was fine when we were cavemen but becomes a problematic instinct when we're literally mastering the laws of physics and shaping our world and environment. From physics and nature to history and contemporary trends, the vast majority of humanity spends very little time studying these topics. Which is completely understandable given the lack of time or resources to do so, but that understandable instinct creates a world perfectly set up for abuse by surveillance states, both public and private, which makes it less understandable and much more problematic.

So, in the interest of gaining perspective on how we got to this point, where Facebook emerged as an ever-growing Panopticon just a few short years after its conception, let's take a look at one last article. It's by investigative journalist Yasha Levine, who recently published the must-read book Surveillance Valley: The Secret Military History of the Internet. It's a book filled with vital historical fun facts about the internet. Fun facts like...

1. How the internet began as a system built for national security purposes, with a focus on military hardware and command-and-control communications in general. But there was also a focus on building a system that could collect, store, process, and distribute the massive volumes of information used to wage the Vietnam War. Beyond that, these early computer networks also acted as a collection and sharing system for dealing with domestic national security concerns (concerns that centered on tracking anti-war protesters, civil rights activists, etc.). That's what the internet started out as: a system for storing data about people and conflict for US national security purposes.

2. Building databases of profiles on people (foreign and domestic) was one of the very first goals of these internet predecessors. In fact, one of the key visionaries behind the development of the internet, Ithiel de Sola Pool, both helped shape the development of the early internet as a surveillance and counterinsurgency technology and pioneered data-driven election campaigns. He even started a private firm to do this: Simulmatics. Pool's vision was a world where the surveillance state acted as a benign master that kept the peace by using superior knowledge to nudge people in the 'right' direction.

3. This vision of vast databases of personal profiles was largely a secret at first, but it didn't remain that way. And there was actually quite a bit of public paranoia in the US about these internet predecessors, especially within the anti-Vietnam War activist communities. Flash forward a couple of decades and that paranoia has faded almost entirely...until scandals like the current one erupt and we temporarily grow concerned.

4. What Cambridge Analytica is accused of doing is what data giants like Facebook and Google do every day and have been doing for years. And it's not just the giants. Smaller firms are scooping up vast amounts of information too...it's just not as vast as what the giants are collecting. Even cute apps, like the wildly popular Angry Birds, have been found to collect all sorts of data about users.

5. While it's great that public attention is being directed at the kind of sleazy, manipulative activities Cambridge Analytica was engaging in, deceptively wielding real power over real, unwitting people, it is a wild mischaracterization to act like Cambridge Analytica was exerting mass mind-control over the masses using internet marketing voodoo. What Cambridge Analytica, or any of the other sleazy manipulators, did was indeed influential, but it needs to be viewed in the context of a political state of affairs where massive numbers of Americans, including Trump voters, really have been collectively failed by the American power establishment for decades. The collapse of the American middle class and the rise of the plutocracy are what created the kind of macro environment where a carnival barker like Donald Trump could use firms like Cambridge Analytica to 'nudge' people in the direction of voting for him. In other words, the focus on Cambridge Analytica's manipulation of people's psychological profiles in the absence of any recognition of the massive political failures of the last several decades in America — the mass socioeconomic failures of the American embrace of 'Reaganomics' and right-wing economic gospel coupled with the American Left's failure to effectively repudiate these doctrines — is profoundly ahistorical. The story of the rise of the power of firms like Facebook, Google, and Cambridge Analytica is a story that implicitly includes that entire history of political/socioeconomic failures, tied to the failure to effectively respond to the rise of the American right wing over the last several decades. And we are making a massive mistake if we forget that. Cambridge Analytica wouldn't have been nearly as effective in nudging people towards voting for someone like Trump if so many people weren't already so ready to burn the current system down.

These are the kinds of his­tor­i­cal chap­ters that can’t be left out of any analy­sis of Cam­bridge Ana­lyt­i­ca. Because Cam­bridge Ana­lyt­i­ca isn’t the excep­tion. It’s an excep­tion­al­ly sleazy exam­ple of the rules we’ve been play­ing by for a while, whether we real­ized it or not:

The Baf­fler

The Cam­bridge Ana­lyt­i­ca Con

Yasha Levine
March 21, 2018

“The man with the prop­er imag­i­na­tion is able to con­ceive of any com­mod­i­ty in such a way that it becomes an object of emo­tion to him and to those to whom he imparts his pic­ture, and hence cre­ates desire rather than a mere feel­ing of ought.”

Wal­ter Dill Scott, Influ­enc­ing Men in Busi­ness: Psy­chol­o­gy of Argu­ment and Sug­ges­tion (1911)

This week, Cam­bridge Ana­lyt­i­ca, the British elec­tion data out­fit fund­ed by bil­lion­aire Robert Mer­cer and linked to Steven Ban­non and Pres­i­dent Don­ald Trump, blew up the news cycle. The charge, as report­ed by twin exposés in the New York Times and the Guardian, is that the firm inap­pro­pri­ate­ly accessed Face­book pro­file infor­ma­tion belong­ing to 50 mil­lion peo­ple and then used that data to con­struct a pow­er­ful inter­net-based psy­cho­log­i­cal influ­ence weapon. This new­fan­gled con­struct was then used to brain­wash-car­pet-bomb the Amer­i­can elec­torate, shred­ding our democ­ra­cy and turn­ing peo­ple into pli­able zom­bie sup­port­ers of Don­ald Trump.

In the words of a pink-haired Cam­bridge Ana­lyt­i­ca data-war­rior-turned-whistle­blow­er, the com­pa­ny served as a dig­i­tal armory that turned “Likes” into weapons and pro­duced “Steve Bannon’s psy­cho­log­i­cal war­fare mind­fuck tool.”

Scary, right? Makes me won­der if I’m still not under Cam­bridge Analytica’s influ­ence right now.

Nat­u­ral­ly, there are also rumors of a nefar­i­ous Russ­ian con­nec­tion. And appar­ent­ly there’s more dirt com­ing. Chan­nel 4 News in Britain just pub­lished an inves­ti­ga­tion show­ing top Cam­bridge Ana­lyt­i­ca execs brag­ging to an under­cov­er reporter that their team uses high-tech psy­cho­me­t­ric voodoo to win elec­tions for clients all over the world, but also dab­bles in tra­di­tion­al meat­space tech­niques as well: bribes, kom­pro­mat, black­mail, Ukrain­ian escort honeypots—you know, the works.

It's good that the mainstream news media are finally starting to pay attention to this dark corner of the internet—and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It's about time.

But this sto­ry is being cov­ered and framed in a mis­lead­ing way. So far, much of the main­stream cov­er­age, dri­ven by the Times and Guardian reports, looks at Cam­bridge Ana­lyt­i­ca in isolation—almost entire­ly out­side of any his­tor­i­cal or polit­i­cal con­text. This makes it seem to read­ers unfa­mil­iar with the long his­to­ry of the strug­gle for con­trol of the dig­i­tal sphere as if the main prob­lem is that the bad actors at Cam­bridge Ana­lyt­i­ca crossed the trans­mis­sion wires of Face­book in the Promethean man­ner of Vic­tor Frankenstein—taking what were nor­mal­ly respectable, sci­en­tif­ic data pro­to­cols and per­vert­ing them to serve the dia­bol­i­cal aim of rean­i­mat­ing the decom­pos­ing lump of polit­i­cal flesh known as Don­ald Trump.

So if we’re going to view the actions of Cam­bridge Ana­lyt­i­ca in their prop­er light, we need first to start with an admis­sion. We must con­cede that covert influ­ence is not some­thing unusu­al or for­eign to our soci­ety, but is as Amer­i­can as apple pie and free­dom fries. The use of manip­u­la­tive, psy­cho­log­i­cal­ly dri­ven adver­tis­ing and mar­ket­ing tech­niques to sell us prod­ucts, lifestyles, and ideas has been the foun­da­tion of mod­ern Amer­i­can soci­ety, going back to the days of the self-styled inven­tor of pub­lic rela­tions, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ail­ing con­sumer soci­ety togeth­er. And when it comes to mar­ket­ing can­di­dates and polit­i­cal mes­sages, using data to influ­ence peo­ple and shape their deci­sions has been the holy grail of the com­put­er age, going back half a cen­tu­ry.

Let’s start with the basics: What Cam­bridge Ana­lyt­i­ca is accused of doing—siphoning people’s data, com­pil­ing pro­files, and then deploy­ing that infor­ma­tion to influ­ence them to vote a cer­tain way—Facebook and Sil­i­con Val­ley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more inva­sive scale.

Today’s inter­net busi­ness ecosys­tem is built on for-prof­it sur­veil­lance, behav­ioral pro­fil­ing, manip­u­la­tion and influ­ence. That’s the name of the game. It isn’t just Face­book or Cam­bridge Ana­lyt­i­ca or even Google. It’s Ama­zon. It’s eBay. It’s Palan­tir. It’s Angry Birds. It’s MoviePass. It’s Lock­heed Mar­tin. It’s every app you’ve ever down­loaded. Every phone you bought. Every pro­gram you watched on your on-demand cable TV pack­age.

All of these games, apps, and plat­forms prof­it from the con­cert­ed siphon­ing up of all data trails to pro­duce pro­files for all sorts of micro-tar­get­ed influ­ence ops in the pri­vate sec­tor. This com­merce in user data per­mit­ted Face­book to earn $40 bil­lion last year, while Google raked in $110 bil­lion.

What do these com­pa­nies know about us, their users? Well, just about every­thing.

Sil­i­con Val­ley of course keeps a tight lid on this infor­ma­tion, but you can get a glimpse of the kinds of data our pri­vate dig­i­tal dossiers con­tain by trawl­ing through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-tar­get­ed adver­tis­ing tech­nol­o­gy. The lan­guage, stripped of opaque tech jar­gon, revealed that just about every­thing we enter into Google’s many prod­ucts and platforms—from email cor­re­spon­dence to Web search­es and inter­net browsing—is ana­lyzed and used to pro­file users in an extreme­ly inva­sive and per­son­al way. Email cor­re­spon­dence is parsed for mean­ing and sub­ject mat­ter. Names are matched to real iden­ti­ties and address­es. Email attachments—say, bank state­ments or test­ing results from a med­ical lab—are scraped for infor­ma­tion. Demo­graph­ic and psy­cho­graph­ic data, includ­ing social class, per­son­al­i­ty type, age, sex, polit­i­cal affil­i­a­tion, cul­tur­al inter­ests, social ties, per­son­al income, and mar­i­tal sta­tus is extract­ed. In one patent, I dis­cov­ered that Google appar­ent­ly had the abil­i­ty to deter­mine if a per­son was a legal U.S. res­i­dent or not. It also turned out you didn’t have to be a reg­is­tered Google user to be snared in this pro­fil­ing appa­ra­tus. All you had to do was com­mu­ni­cate with some­one who had a Gmail address.

On the whole, Google’s pro­fil­ing phi­los­o­phy was no dif­fer­ent than Facebook’s, which also con­structs “shad­ow pro­files” to col­lect and mon­e­tize data, even if you nev­er had a reg­is­tered Face­book or Gmail account.

It’s not just the big plat­form monop­o­lies that do this, but all the small­er com­pa­nies that run their busi­ness­es on ser­vices oper­at­ed by Google and Face­book. It even includes cute games like Angry Birds, devel­oped by Finland’s Rovio Enter­tain­ment, that’s been down­loaded more than a bil­lion times. The Android ver­sion of Angry Birds was found to pull per­son­al data on its play­ers, includ­ing eth­nic­i­ty, mar­i­tal sta­tus, and sex­u­al orientation—including options for the “sin­gle,” “mar­ried,” “divorced,” “engaged,” and “swinger” cat­e­gories. Pulling per­son­al data like this didn’t con­tra­dict Google’s terms of ser­vices for its Android plat­form. Indeed, for-prof­it sur­veil­lance was the whole point of why Google start­ed plan­ning to launch an iPhone rival as far back as 2004.

In launch­ing Android, Google made a gam­ble that by releas­ing its pro­pri­etary oper­at­ing sys­tem to man­u­fac­tur­ers free of charge, it wouldn’t be rel­e­gat­ed to run­ning apps on Apple iPhone or Microsoft Mobile Win­dows like some kind of dig­i­tal sec­ond-class cit­i­zen. If it played its cards right and Android suc­ceed­ed, Google would be able to con­trol the envi­ron­ment that under­pins the entire mobile expe­ri­ence, mak­ing it the ulti­mate gate­keep­er of the many mon­e­tized inter­ac­tions among users, apps, and adver­tis­ers. And that’s exact­ly what hap­pened. Today, Google monop­o­lizes the smart phone mar­ket and dom­i­nates the mobile for-prof­it sur­veil­lance busi­ness.

These detailed psy­cho­log­i­cal pro­files, togeth­er with the direct access to users that plat­forms like Google and Face­book deliv­er, make both com­pa­nies cat­nip to adver­tis­ers, PR flacks—and dark-mon­ey polit­i­cal out­fits like Cam­bridge Ana­lyt­i­ca.

Indeed, polit­i­cal cam­paigns showed an ear­ly and pro­nounced affin­i­ty for the idea of tar­get­ed access and influ­ence on plat­forms like Face­book. Instead of blan­ket­ing air­waves with a sin­gle polit­i­cal ad, they could show peo­ple ads that appealed specif­i­cal­ly to the issues they held dear. They could also ensure that any such mes­sage spread through a tar­get­ed person’s larg­er social net­work through repost­ing and shar­ing.

The enor­mous com­mer­cial inter­est that polit­i­cal cam­paigns have shown in social media has earned them priv­i­leged atten­tion from Sil­i­con Val­ley plat­forms in return. Face­book runs a sep­a­rate polit­i­cal divi­sion specif­i­cal­ly geared to help its cus­tomers tar­get and influ­ence vot­ers.

The com­pa­ny even allows polit­i­cal cam­paigns to upload their own lists of poten­tial vot­ers and sup­port­ers direct­ly into Facebook’s data sys­tem. So armed, dig­i­tal polit­i­cal oper­a­tives can then use those people’s social net­works to iden­ti­fy oth­er prospec­tive vot­ers who might be sup­port­ive of their candidate—and then tar­get them with a whole new tidal wave of ads. “There’s a lev­el of pre­ci­sion that doesn’t exist in any oth­er medi­um,” Crys­tal Pat­ter­son, a Face­book employ­ee who works with gov­ern­ment and pol­i­tics cus­tomers, told the New York Times back in 2015. “It’s get­ting the right mes­sage to the right peo­ple at the right time.”

Nat­u­ral­ly, a whole slew of com­pa­nies and oper­a­tives in our increas­ing­ly data-dri­ven elec­tion scene have cropped up over the last decade to plug in to these amaz­ing influ­ence machines. There is a whole con­stel­la­tion of them work­ing all sorts of strate­gies: tra­di­tion­al vot­er tar­get­ing, polit­i­cal pro­pa­gan­da mills, troll armies, and bots.

Some of these firms are polit­i­cal­ly agnos­tic; they’ll work for any­one with cash. Oth­ers are par­ti­san. The Demo­c­ra­t­ic Par­ty Data Death Star is NGP VAN. The Repub­li­cans have a few of their own—including i360, a data mon­ster gen­er­ous­ly fund­ed by Charles Koch. Nat­u­ral­ly, i360 part­ners with Face­book to deliv­er tar­get vot­ers. It also claims to have 700 per­son­al data points cross-tab­u­lat­ed on 199 mil­lion vot­ers and near­ly 300 mil­lion con­sumers, with the abil­i­ty to pro­file and tar­get them with pin-point accu­ra­cy based on their beliefs and views.

Here’s how The Nation­al Jour­nal’s Andrew Rice described i360 in 2015:

Like Google, the Nation­al Secu­ri­ty Agency, or the Demo­c­ra­t­ic data machine, i360 has a vora­cious appetite for per­son­al infor­ma­tion. It is con­stant­ly ingest­ing new data into its tar­get­ing sys­tems, which pre­dict not only par­ti­san iden­ti­fi­ca­tion but also sen­ti­ments about issues such as abor­tion, tax­es, and health care. When I vis­it­ed the i360 office, an employ­ee gave me a demon­stra­tion, zoom­ing in on a map to focus on a par­tic­u­lar 66-year-old high school teacher who lives in an apart­ment com­plex in Alexan­dria, Vir­ginia. . . . Though the adver­tis­ing indus­try typ­i­cal­ly eschews address­ing any sin­gle individual—it’s not just inva­sive, it’s also inefficient—it is becom­ing com­mon­place to tar­get extreme­ly nar­row audi­ences. So the school­teacher, along with a few look-alikes, might see a tai­lored ad the next time she clicks on YouTube.

Sil­i­con Val­ley doesn’t just offer cam­paigns a neu­tral plat­form; it also works close­ly along­side polit­i­cal can­di­dates to the point that the biggest inter­net com­pa­nies have become an exten­sion of the Amer­i­can polit­i­cal sys­tem. As one recent study showed, tech com­pa­nies rou­tine­ly embed their employ­ees inside major polit­i­cal cam­paigns: “Face­book, Twit­ter, and Google go beyond pro­mot­ing their ser­vices and facil­i­tat­ing dig­i­tal adver­tis­ing buys, active­ly shap­ing cam­paign com­mu­ni­ca­tion through their close col­lab­o­ra­tion with polit­i­cal staffers . . . these firms serve as qua­si-dig­i­tal con­sul­tants to cam­paigns, shap­ing dig­i­tal strat­e­gy, con­tent, and exe­cu­tion.”

In 2008, the hip young Black­ber­ry-tot­ing Barack Oba­ma was the first major-par­ty can­di­date on the nation­al scene to tru­ly lever­age the pow­er of inter­net-tar­get­ed agit­prop. With help from Face­book cofounder Chris Hugh­es, who built and ran Obama’s inter­net cam­paign divi­sion, the first Oba­ma cam­paign built an inno­v­a­tive micro-tar­get­ing ini­tia­tive to raise huge amounts of mon­ey in small chunks direct­ly from Obama’s sup­port­ers and sell his mes­sage with a hith­er­to unprece­dent­ed laser-guid­ed pre­ci­sion in the gen­er­al elec­tion cam­paign.

...

Now, of course, every elec­tion is a Face­book Elec­tion. And why not? As Bloomberg News has not­ed, Sil­i­con Val­ley ranks elec­tions “along­side the Super Bowl and the Olympics in terms of events that draw block­buster ad dol­lars and boost engage­ment.” In 2016, $1 bil­lion was spent on dig­i­tal advertising—with the bulk going to Face­book, Twit­ter, and Google.

What’s inter­est­ing here is that because so much mon­ey is at stake, there are absolute­ly no rules that would restrict any­thing an unsa­vory polit­i­cal appa­ratchik or a Sil­i­con Val­ley oli­garch might want to foist on the unsus­pect­ing dig­i­tal pub­lic. Creep­i­ly, Facebook’s own inter­nal research divi­sion car­ried out exper­i­ments show­ing that the plat­form could influ­ence people’s emo­tion­al state in con­nec­tion to a cer­tain top­ic or event. Com­pa­ny engi­neers call this fea­ture “emo­tion­al con­ta­gion”—i.e., the abil­i­ty to viral­ly influ­ence people’s emo­tions and ideas just through the con­tent of sta­tus updates. In the twist­ed econ­o­my of emo­tion­al con­ta­gion, a neg­a­tive post by a user sup­press­es pos­i­tive posts by their friends, while a pos­i­tive post sup­press­es neg­a­tive posts. “When a Face­book user posts, the words they choose influ­ence the words cho­sen lat­er by their friends,” explained the company’s lead sci­en­tist on this study.

On a very basic lev­el, Facebook’s opaque con­trol of its feed algo­rithm means the plat­form has real pow­er over people’s ideas and actions dur­ing an elec­tion. This can be done by a data shift as sim­ple and sub­tle as imper­cep­ti­bly tweak­ing a person’s feed to show more posts from friends who are, say, sup­port­ers of a par­tic­u­lar polit­i­cal can­di­date or a spe­cif­ic polit­i­cal idea or event. As far as I know, there is no law pre­vent­ing Face­book from doing just that: it’s plain­ly able and will­ing to influ­ence a user’s feed based on polit­i­cal aims—whether done for inter­nal cor­po­rate objec­tives, or due to pay­ments from polit­i­cal groups, or by the per­son­al pref­er­ences of Mark Zucker­berg.

So our present-day freak­out over Cam­bridge Ana­lyt­i­ca needs to be put in the broad­er his­tor­i­cal con­text of our decades-long com­pla­cen­cy over Sil­i­con Valley’s busi­ness mod­el. The fact is that com­pa­nies like Face­book and Google are the real mali­cious actors here—they are vital pub­lic com­mu­ni­ca­tions sys­tems that run on pro­fil­ing and manip­u­la­tion for pri­vate prof­it with­out any reg­u­la­tion or demo­c­ra­t­ic over­sight from the soci­ety in which it oper­ates. But, hey, let’s blame Cam­bridge Ana­lyt­i­ca. Or bet­ter yet, take a cue from the Times and blame the Rus­sians along with Cam­bridge Ana­lyt­i­ca.

***

There’s anoth­er, big­ger cul­tur­al issue with the way we’ve begun to exam­ine and dis­cuss Cam­bridge Analytica’s bat­tery of inter­net-based influ­ence ops. Peo­ple are still daz­zled by the idea that the inter­net, in its pure, untaint­ed form, is some kind of mag­ic machine dis­trib­ut­ing democ­ra­cy and egal­i­tar­i­an­ism across the globe with the touch of a few key­strokes. This is the gospel preached by a stal­wart cho­rus of Net prophets, from Jeff Jarvis and the late John Per­ry Bar­low to Clay Shirky and Kevin Kel­ly. These char­la­tans all feed on an hon­or­able demo­c­ra­t­ic impulse: peo­ple still want to des­per­ate­ly believe in the utopi­an promise of this technology—its abil­i­ty to equal­ize pow­er, end cor­rup­tion, top­ple cor­po­rate media monop­o­lies, and empow­er the indi­vid­ual.

This mythology—which is of course aggres­sive­ly con­fect­ed for mass con­sump­tion by Sil­i­con Val­ley mar­ket­ing and PR outfits—is deeply root­ed in our cul­ture; it helps explain why oth­er­wise seri­ous jour­nal­ists work­ing for main­stream news out­lets can uniron­i­cal­ly employ phras­es such as “infor­ma­tion wants to be free” and “Facebook’s engine of democ­ra­cy” and get away with it.

The truth is that the inter­net has nev­er been about egal­i­tar­i­an­ism or democ­ra­cy.

The ear­ly inter­net came out of a series of Viet­nam War coun­terin­sur­gency projects aimed at devel­op­ing com­put­er tech­nol­o­gy that would give the gov­ern­ment a way to man­age a com­plex series of glob­al com­mit­ments and to mon­i­tor and pre­vent polit­i­cal strife—both at home and abroad. The inter­net, going back to its first incar­na­tion as the ARPANET mil­i­tary net­work, was always about sur­veil­lance, pro­fil­ing, and tar­get­ing.

The influ­ence of U.S. coun­terin­sur­gency doc­trine on the devel­op­ment of mod­ern com­put­ers and the inter­net is not some­thing that many peo­ple know about. But it is a sub­ject that I explore at length in my book, Sur­veil­lance Val­ley. So what jumps out at me is how seam­less­ly the report­ed activ­i­ties of Cam­bridge Ana­lyt­i­ca fit into this his­tor­i­cal nar­ra­tive.

Cam­bridge Ana­lyt­i­ca is a sub­sidiary of the SCL Group, a mil­i­tary con­trac­tor set up by a spooky huck­ster named Nigel Oakes that sells itself as a high-pow­ered con­clave of experts spe­cial­iz­ing in data-dri­ven coun­terin­sur­gency. It’s done work for the Pen­ta­gon, NATO, and the UK Min­istry of Defense in places like Afghanistan and Nepal, where it says it ran a “cam­paign to reduce and ulti­mate­ly stop the large num­bers of Maoist insur­gents in Nepal from break­ing into hous­es in remote areas to steal food, harass the home­own­ers and cause dis­rup­tion.”

In the grander scheme of high-tech coun­terin­sur­gency boon­dog­gles, which fea­tures such sto­ried psy-ops out­fits as Peter Thiel’s Palan­tir and Cold War dinosaurs like Lock­heed Mar­tin, the SCL Group appears to be a com­par­a­tive­ly minor play­er. Nev­er­the­less, its ambi­tious claims to recon­fig­ure the world order with some well-placed algo­rithms recalls one of the first major play­ers in the field: Simul­mat­ics, a 1960s coun­terin­sur­gency mil­i­tary con­trac­tor that pio­neered data-dri­ven elec­tion cam­paigns and whose founder, Ithiel de Sola Pool, helped shape the devel­op­ment of the ear­ly inter­net as a sur­veil­lance and coun­terin­sur­gency tech­nol­o­gy.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for him­self help­ing cal­i­brate the mes­sag­ing of John F. Kennedy’s 1960 pres­i­den­tial cam­paign. His idea was to mod­el the Amer­i­can elec­torate by decon­struct­ing each vot­er into 480 data points that defined every­thing from their reli­gious views to racial atti­tudes to socio-eco­nom­ic sta­tus. He would then use that data to run sim­u­la­tions on how they would respond to a par­tic­u­lar message—and those tri­al runs would per­mit major cam­paigns to fine-tune their mes­sages accord­ing­ly.

These new tar­get­ed mes­sag­ing tac­tics, enabled by rudi­men­ta­ry com­put­ers, had many fans in the per­ma­nent polit­i­cal class of Wash­ing­ton; their liveli­hoods, after all, were large­ly root­ed in their claims to ana­lyze and pre­dict polit­i­cal behav­ior. And so Pool lever­aged his research to launch Simul­mat­ics, a data ana­lyt­ics start­up that offered com­put­er sim­u­la­tion ser­vices to major Amer­i­can cor­po­ra­tions, help­ing them pre-test prod­ucts and con­struct adver­tis­ing cam­paigns.

Simul­mat­ics also did a brisk busi­ness as a mil­i­tary and intel­li­gence con­trac­tor. It ran sim­u­la­tions for Radio Lib­er­ty, the CIA’s covert anti-com­mu­nist radio sta­tion, help­ing the agency mod­el the Sovi­et Union’s inter­nal com­mu­ni­ca­tion sys­tem in order to pre­dict the effect that for­eign news broad­casts would have on the country’s polit­i­cal sys­tem. At the same time, Simul­mat­ics ana­lysts were doing coun­terin­sur­gency work under an ARPA con­tract in Viet­nam, con­duct­ing inter­views and gath­er­ing data to help mil­i­tary plan­ners under­stand why Viet­namese peas­ants rebelled and resist­ed Amer­i­can paci­fi­ca­tion efforts. Simulmatic’s work in Viet­nam was just one piece of a bru­tal Amer­i­can coun­terin­sur­gency pol­i­cy that involved covert pro­grams of assas­si­na­tions, ter­ror, and tor­ture that col­lec­tive­ly came to be known as the Phoenix Pro­gram.

At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel's Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool's pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people's lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. "Secrecy in the modern world is generally a destabilizing factor," he wrote in a 1969 essay. "Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation."

With the advent of cheap­er com­put­er tech­nol­o­gy in the 1960s, cor­po­rate and gov­ern­ment data­bas­es were already mak­ing a good deal of Pool’s prophe­cy come to pass, via sophis­ti­cat­ed new modes of con­sumer track­ing and pre­dic­tive mod­el­ing. But rather than greet­ing such advances as the augurs of a new demo­c­ra­t­ic mir­a­cle, peo­ple at the time saw it as a threat. Crit­ics across the polit­i­cal spec­trum warned that the pro­lif­er­a­tion of these tech­nolo­gies would lead to cor­po­ra­tions and gov­ern­ments con­spir­ing to sur­veil, manip­u­late, and con­trol soci­ety.

This fear res­onat­ed with every part of the culture—from the new left to prag­mat­ic cen­trists and reac­tionary South­ern Democ­rats. It prompt­ed some high-pro­file exposés in papers like the New York Times and Wash­ing­ton Post. It was report­ed on in trade mag­a­zines of the nascent com­put­er indus­try like Com­put­er­World. And it com­mand­ed prime real estate in estab­lish­ment rags like The Atlantic.

Pool per­son­i­fied the prob­lem. His belief in the pow­er of com­put­ers to bend people’s will and man­age soci­ety was seen as a dan­ger. He was attacked and demo­nized by the anti­war left. He was also reviled by main­stream anti-com­mu­nist lib­er­als.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool's Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship, or political platforms but simply sold candidates just like they would a car or a bar of soap.

***

Simul­mat­ics and its first-gen­er­a­tion imi­ta­tions are now ancient history—dating back from the long-ago time when com­put­ers took up entire rooms. But now we live in Ithiel de Sola Pool’s world. The inter­net sur­rounds us, engulf­ing and mon­i­tor­ing every­thing we do. We are tracked and watched and pro­filed every minute of every day by count­less companies—from giant plat­form monop­o­lies like Face­book and Google to bou­tique data-dri­ven elec­tion firms like i360 and Cam­bridge Ana­lyt­i­ca.

Yet the fear that Ithiel de Sola Pool and his tech­no­crat­ic world view inspired half a cen­tu­ry ago has been wiped from our cul­ture. For decades, we’ve been told that a cap­i­tal­ist soci­ety where no secrets could be kept from our benev­o­lent elite is not some­thing to fear—but some­thing to cheer and pro­mote.

Now, only after Don­ald Trump shocked the lib­er­al polit­i­cal class is this fear start­ing to resur­face. But it’s doing so in a twist­ed, nar­row way.

***

And that’s the big­ger issue with the Cam­bridge Ana­lyt­i­ca freak­out: it’s not just anti-his­tor­i­cal, it’s also pro­found­ly anti-polit­i­cal. Peo­ple are still try­ing to blame Don­ald Trump’s sur­prise 2016 elec­toral vic­to­ry on some­thing, anything—other than America’s degen­er­ate pol­i­tics and a polit­i­cal class that has presided over a stun­ning nation­al decline. The keep­ers of con­ven­tion­al wis­dom all insist in one way or anoth­er that Trump won because some­thing nov­el and unique hap­pened; that some­thing had to have gone hor­ri­bly wrong. And if you’re able to iden­ti­fy and iso­late this some­thing and get rid of it, every­thing will go back to normal—back to sta­tus quo, when every­thing was good.

Cambridge Analytica has been one of the lesser bogeymen used to explain Trump's victory for quite a while, going back more than a year. Back in March 2017, the New York Times, which now trumpets the saga of Cambridge Analytica's Facebook heist, was skeptically questioning the company's technology and its role in helping bring about a Trump victory. With considerable justification, Times reporters then chalked up the company's overheated rhetoric to the competition for clients in a crowded field of data-driven election influence ops.

Yet now, with Robert Mueller's Russia investigation dragging on and producing no smoking gun pointing to definitive collusion, it seems that Cambridge Analytica has been upgraded to Class A supervillain. Now the idea that Steve Bannon and Robert Mercer concocted a secret psychological weapon to bewitch the American electorate isn't just a far-fetched marketing ploy—it's a real and present danger to a virtuous info-media status quo. And it's most certainly not the extension of a lavishly funded initiative that American firms have been pursuing for half a century. No, like the Trump uprising it has allegedly midwifed into being, it is an opportunistic perversion of the American way. Employing powerful technology that rewires the inner workings of our body politic, Cambridge Analytica and its backers duped the American people into voting for Trump and destroying American democracy.

It's a comforting idea for our political elite, but it's not true. Alexander Nix, Cambridge Analytica's well-groomed CEO, is not a cunning mastermind but a garden-variety digital hack. Nix's business plan is but an updated version of Ithiel de Sola Pool's vision of permanent peace and prosperity won through a placid regime of behaviorally managed social control. And while Nix has been suspended following the bluster-filled video footage of his cyber-bragging aired on Channel 4, we're kidding ourselves if we think his punishment will serve as any sort of deterrent for the thousands upon thousands of Big Data operators nailing down billions in campaign, military, and corporate contracts to continue monetizing user data into the void. Cambridge Analytica is undeniably a rogue's gallery of bad political actors, but to finger the real culprits behind Donald Trump's takeover of America, the self-appointed watchdogs of our country's imperiled political virtue had best take a long and sobering look in the mirror.

———-

“The Cam­bridge Ana­lyt­i­ca Con” by Yasha Levine; The Baf­fler; 03/21/2018

“It's good that the mainstream news media are finally starting to pay attention to this dark corner of the internet—and producing exposés of shady sub rosa political campaigns and their eager exploitation of our online digital trails in order to contaminate our information streams and influence our decisions. It's about time.”

Yes indeed, it is great to see that this topic is finally getting the attention it has long deserved. But it's not great to see the topic limited to Cambridge Analytica and Facebook. As Levine puts it, "We must concede that covert influence is not something unusual or foreign to our society, but is as American as apple pie and freedom fries." Societies in general are held together via overt and covert influence, but America has gotten really, really good at it over the last half century, and the story of Cambridge Analytica, and the larger story of Sandy Parakilas's whistle-blowing about mass data collection, can't really be understood outside that historical context:

...
But this sto­ry is being cov­ered and framed in a mis­lead­ing way. So far, much of the main­stream cov­er­age, dri­ven by the Times and Guardian reports, looks at Cam­bridge Ana­lyt­i­ca in isolation—almost entire­ly out­side of any his­tor­i­cal or polit­i­cal con­text. This makes it seem to read­ers unfa­mil­iar with the long his­to­ry of the strug­gle for con­trol of the dig­i­tal sphere as if the main prob­lem is that the bad actors at Cam­bridge Ana­lyt­i­ca crossed the trans­mis­sion wires of Face­book in the Promethean man­ner of Vic­tor Frankenstein—taking what were nor­mal­ly respectable, sci­en­tif­ic data pro­to­cols and per­vert­ing them to serve the dia­bol­i­cal aim of rean­i­mat­ing the decom­pos­ing lump of polit­i­cal flesh known as Don­ald Trump.

So if we’re going to view the actions of Cam­bridge Ana­lyt­i­ca in their prop­er light, we need first to start with an admis­sion. We must con­cede that covert influ­ence is not some­thing unusu­al or for­eign to our soci­ety, but is as Amer­i­can as apple pie and free­dom fries. The use of manip­u­la­tive, psy­cho­log­i­cal­ly dri­ven adver­tis­ing and mar­ket­ing tech­niques to sell us prod­ucts, lifestyles, and ideas has been the foun­da­tion of mod­ern Amer­i­can soci­ety, going back to the days of the self-styled inven­tor of pub­lic rela­tions, Edward Bernays. It oozes out of every pore on our body politic. It’s what holds our ail­ing con­sumer soci­ety togeth­er. And when it comes to mar­ket­ing can­di­dates and polit­i­cal mes­sages, using data to influ­ence peo­ple and shape their deci­sions has been the holy grail of the com­put­er age, going back half a cen­tu­ry.
...

And the first step in putting the Cam­bridge Ana­lyt­i­ca sto­ry in prop­er per­spec­tive is rec­og­niz­ing that what it is accused of doing — grab­bing per­son­al data and build­ing pro­files for the pur­pose of influ­enc­ing vot­ers — is done every day by enti­ties like Face­book and Google. It’s a reg­u­lar part of our lives. And you don’t even need to use Face­book or Google to become part of this vast com­mer­cial sur­veil­lance sys­tem. You just need to com­mu­ni­cate with some­one who does use those plat­forms:

...
Let’s start with the basics: What Cam­bridge Ana­lyt­i­ca is accused of doing—siphoning people’s data, com­pil­ing pro­files, and then deploy­ing that infor­ma­tion to influ­ence them to vote a cer­tain way—Facebook and Sil­i­con Val­ley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more inva­sive scale.

Today’s inter­net busi­ness ecosys­tem is built on for-prof­it sur­veil­lance, behav­ioral pro­fil­ing, manip­u­la­tion and influ­ence. That’s the name of the game. It isn’t just Face­book or Cam­bridge Ana­lyt­i­ca or even Google. It’s Ama­zon. It’s eBay. It’s Palan­tir. It’s Angry Birds. It’s MoviePass. It’s Lock­heed Mar­tin. It’s every app you’ve ever down­loaded. Every phone you bought. Every pro­gram you watched on your on-demand cable TV pack­age.

All of these games, apps, and plat­forms prof­it from the con­cert­ed siphon­ing up of all data trails to pro­duce pro­files for all sorts of micro-tar­get­ed influ­ence ops in the pri­vate sec­tor. This com­merce in user data per­mit­ted Face­book to earn $40 bil­lion last year, while Google raked in $110 bil­lion.

What do these com­pa­nies know about us, their users? Well, just about every­thing.

Sil­i­con Val­ley of course keeps a tight lid on this infor­ma­tion, but you can get a glimpse of the kinds of data our pri­vate dig­i­tal dossiers con­tain by trawl­ing through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-tar­get­ed adver­tis­ing tech­nol­o­gy. The lan­guage, stripped of opaque tech jar­gon, revealed that just about every­thing we enter into Google’s many prod­ucts and platforms—from email cor­re­spon­dence to Web search­es and inter­net browsing—is ana­lyzed and used to pro­file users in an extreme­ly inva­sive and per­son­al way. Email cor­re­spon­dence is parsed for mean­ing and sub­ject mat­ter. Names are matched to real iden­ti­ties and address­es. Email attachments—say, bank state­ments or test­ing results from a med­ical lab—are scraped for infor­ma­tion. Demo­graph­ic and psy­cho­graph­ic data, includ­ing social class, per­son­al­i­ty type, age, sex, polit­i­cal affil­i­a­tion, cul­tur­al inter­ests, social ties, per­son­al income, and mar­i­tal sta­tus is extract­ed. In one patent, I dis­cov­ered that Google appar­ent­ly had the abil­i­ty to deter­mine if a per­son was a legal U.S. res­i­dent or not. It also turned out you didn’t have to be a reg­is­tered Google user to be snared in this pro­fil­ing appa­ra­tus. All you had to do was com­mu­ni­cate with some­one who had a Gmail address.

On the whole, Google’s pro­fil­ing phi­los­o­phy was no dif­fer­ent than Facebook’s, which also con­structs “shad­ow pro­files” to col­lect and mon­e­tize data, even if you nev­er had a reg­is­tered Face­book or Gmail account.
...
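
Just to make the mechanics above concrete, here is a deliberately crude sketch of that style of keyword-driven profiling: scan message text for signals and accumulate them into a per-person profile. The categories and keyword lists below are hypothetical illustrations, not Google’s actual classifiers, which operate on vastly more data with far more sophisticated models:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of patent-style email profiling: scan each message for
// keyword signals and accumulate them into a demographic/psychographic
// profile. All categories and keywords here are invented for illustration.
public class ToyProfiler {
    private static final Map<String, List<String>> SIGNALS = Map.of(
            "parent", List.of("daycare", "pta", "diapers"),
            "investor", List.of("brokerage", "dividend", "401k"),
            "medical", List.of("lab results", "prescription"),
            "politics:conservative", List.of("tax cuts", "second amendment"),
            "politics:liberal", List.of("single payer", "climate action"));

    public static Map<String, Integer> profile(List<String> emails) {
        Map<String, Integer> scores = new HashMap<>();
        for (String email : emails) {
            String text = email.toLowerCase();
            SIGNALS.forEach((category, keywords) -> {
                for (String keyword : keywords) {
                    if (text.contains(keyword)) {
                        scores.merge(category, 1, Integer::sum); // count each hit
                    }
                }
            });
        }
        return scores; // accumulated signal counts per category
    }

    public static void main(String[] args) {
        System.out.println(profile(List.of(
                "Reminder: PTA meeting Thursday, bring diapers for the drive",
                "Your brokerage statement and lab results are attached")));
    }
}
```

And note that even a pipeline this crude profiles everyone in the mail stream, senders included, which is exactly the “shadow profile” point: you don’t need an account to be swept into the analysis.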

The next step in contextualizing this is recognizing that Facebook and Google are merely the biggest fish in an ocean of data brokerage markets with many smaller inhabitants trying to do the same thing. This is part of what makes Facebook’s handing over of profile data to app developers so scandalous... Facebook clearly knew there was a voracious market for this information and made a lot of money selling into that market:

...
It’s not just the big plat­form monop­o­lies that do this, but all the small­er com­pa­nies that run their busi­ness­es on ser­vices oper­at­ed by Google and Face­book. It even includes cute games like Angry Birds, devel­oped by Finland’s Rovio Enter­tain­ment, that’s been down­loaded more than a bil­lion times. The Android ver­sion of Angry Birds was found to pull per­son­al data on its play­ers, includ­ing eth­nic­i­ty, mar­i­tal sta­tus, and sex­u­al orientation—including options for the “sin­gle,” “mar­ried,” “divorced,” “engaged,” and “swinger” cat­e­gories. Pulling per­son­al data like this didn’t con­tra­dict Google’s terms of ser­vices for its Android plat­form. Indeed, for-prof­it sur­veil­lance was the whole point of why Google start­ed plan­ning to launch an iPhone rival as far back as 2004.

In launch­ing Android, Google made a gam­ble that by releas­ing its pro­pri­etary oper­at­ing sys­tem to man­u­fac­tur­ers free of charge, it wouldn’t be rel­e­gat­ed to run­ning apps on Apple iPhone or Microsoft Mobile Win­dows like some kind of dig­i­tal sec­ond-class cit­i­zen. If it played its cards right and Android suc­ceed­ed, Google would be able to con­trol the envi­ron­ment that under­pins the entire mobile expe­ri­ence, mak­ing it the ulti­mate gate­keep­er of the many mon­e­tized inter­ac­tions among users, apps, and adver­tis­ers. And that’s exact­ly what hap­pened. Today, Google monop­o­lizes the smart phone mar­ket and dom­i­nates the mobile for-prof­it sur­veil­lance busi­ness.

These detailed psy­cho­log­i­cal pro­files, togeth­er with the direct access to users that plat­forms like Google and Face­book deliv­er, make both com­pa­nies cat­nip to adver­tis­ers, PR flacks—and dark-mon­ey polit­i­cal out­fits like Cam­bridge Ana­lyt­i­ca.
...

And when it comes to political campaigns, the digital giants like Facebook and Google already have special election units set up to give privileged access to political campaigns so they can influence voters even more effectively. The stories about the Trump campaign’s use of Facebook “embeds” to run a massive advertising campaign of “A/B testing on steroids,” systematically experimenting on voter ad responses, are part of that larger story of how these giants have already made the manipulation of voters big business:

...
Indeed, polit­i­cal cam­paigns showed an ear­ly and pro­nounced affin­i­ty for the idea of tar­get­ed access and influ­ence on plat­forms like Face­book. Instead of blan­ket­ing air­waves with a sin­gle polit­i­cal ad, they could show peo­ple ads that appealed specif­i­cal­ly to the issues they held dear. They could also ensure that any such mes­sage spread through a tar­get­ed person’s larg­er social net­work through repost­ing and shar­ing.

The enor­mous com­mer­cial inter­est that polit­i­cal cam­paigns have shown in social media has earned them priv­i­leged atten­tion from Sil­i­con Val­ley plat­forms in return. Face­book runs a sep­a­rate polit­i­cal divi­sion specif­i­cal­ly geared to help its cus­tomers tar­get and influ­ence vot­ers.

The com­pa­ny even allows polit­i­cal cam­paigns to upload their own lists of poten­tial vot­ers and sup­port­ers direct­ly into Facebook’s data sys­tem. So armed, dig­i­tal polit­i­cal oper­a­tives can then use those people’s social net­works to iden­ti­fy oth­er prospec­tive vot­ers who might be sup­port­ive of their candidate—and then tar­get them with a whole new tidal wave of ads. “There’s a lev­el of pre­ci­sion that doesn’t exist in any oth­er medi­um,” Crys­tal Pat­ter­son, a Face­book employ­ee who works with gov­ern­ment and pol­i­tics cus­tomers, told the New York Times back in 2015. “It’s get­ting the right mes­sage to the right peo­ple at the right time.”

Nat­u­ral­ly, a whole slew of com­pa­nies and oper­a­tives in our increas­ing­ly data-dri­ven elec­tion scene have cropped up over the last decade to plug in to these amaz­ing influ­ence machines. There is a whole con­stel­la­tion of them work­ing all sorts of strate­gies: tra­di­tion­al vot­er tar­get­ing, polit­i­cal pro­pa­gan­da mills, troll armies, and bots.

Some of these firms are polit­i­cal­ly agnos­tic; they’ll work for any­one with cash. Oth­ers are par­ti­san. The Demo­c­ra­t­ic Par­ty Data Death Star is NGP VAN. The Repub­li­cans have a few of their own—including i360, a data mon­ster gen­er­ous­ly fund­ed by Charles Koch. Nat­u­ral­ly, i360 part­ners with Face­book to deliv­er tar­get vot­ers. It also claims to have 700 per­son­al data points cross-tab­u­lat­ed on 199 mil­lion vot­ers and near­ly 300 mil­lion con­sumers, with the abil­i­ty to pro­file and tar­get them with pin-point accu­ra­cy based on their beliefs and views.

Here’s how The Nation­al Jour­nal’s Andrew Rice described i360 in 2015:

Like Google, the Nation­al Secu­ri­ty Agency, or the Demo­c­ra­t­ic data machine, i360 has a vora­cious appetite for per­son­al infor­ma­tion. It is con­stant­ly ingest­ing new data into its tar­get­ing sys­tems, which pre­dict not only par­ti­san iden­ti­fi­ca­tion but also sen­ti­ments about issues such as abor­tion, tax­es, and health care. When I vis­it­ed the i360 office, an employ­ee gave me a demon­stra­tion, zoom­ing in on a map to focus on a par­tic­u­lar 66-year-old high school teacher who lives in an apart­ment com­plex in Alexan­dria, Vir­ginia. . . . Though the adver­tis­ing indus­try typ­i­cal­ly eschews address­ing any sin­gle individual—it’s not just inva­sive, it’s also inefficient—it is becom­ing com­mon­place to tar­get extreme­ly nar­row audi­ences. So the school­teacher, along with a few look-alikes, might see a tai­lored ad the next time she clicks on YouTube.

Sil­i­con Val­ley doesn’t just offer cam­paigns a neu­tral plat­form; it also works close­ly along­side polit­i­cal can­di­dates to the point that the biggest inter­net com­pa­nies have become an exten­sion of the Amer­i­can polit­i­cal sys­tem. As one recent study showed, tech com­pa­nies rou­tine­ly embed their employ­ees inside major polit­i­cal cam­paigns: “Face­book, Twit­ter, and Google go beyond pro­mot­ing their ser­vices and facil­i­tat­ing dig­i­tal adver­tis­ing buys, active­ly shap­ing cam­paign com­mu­ni­ca­tion through their close col­lab­o­ra­tion with polit­i­cal staffers . . . these firms serve as qua­si-dig­i­tal con­sul­tants to cam­paigns, shap­ing dig­i­tal strat­e­gy, con­tent, and exe­cu­tion.”
...
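
The “look-alikes” detail in that National Journal passage is worth pausing on, because the underlying mechanics are so simple. A data broker holding hundreds of attributes per voter can treat each person as a feature vector and rank the whole file by similarity to a seed individual. Here is a minimal sketch using cosine similarity; the voter names, features, and numbers are all invented, and a real system like i360’s obviously layers far more data and modeling on top of this:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal "look-alike" targeting sketch: each voter is a vector of attribute
// scores (issue positions, demographics, consumer data, ...). Given a seed
// voter, rank everyone else by cosine similarity and take the top k.
public class LookAlikes {
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    static List<String> topK(Map<String, double[]> voters, String seedId, int k) {
        double[] seed = voters.get(seedId);
        return voters.entrySet().stream()
                .filter(e -> !e.getKey().equals(seedId))
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> -cosine(seed, e.getValue())))
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical features: {education focus, gun rights, health care}
        Map<String, double[]> voters = Map.of(
                "teacher_alexandria", new double[]{0.9, 0.2, 0.7},
                "voter_a", new double[]{0.8, 0.3, 0.6},
                "voter_b", new double[]{0.1, 0.9, 0.2});
        // voter_a resembles the teacher; both can now be shown the same ad.
        System.out.println(topK(voters, "teacher_alexandria", 1)); // [voter_a]
    }
}
```

Swap in 700 data points per person and a couple hundred million rows and you have the scale of operation being described.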

And offering campaigns special services to manipulate voters isn’t just big business. It’s a largely unregulated business. If Facebook decides to covertly manipulate you by altering its newsfeed algorithms so it shows you more news articles from your conservative-leaning friends (or liberal-leaning friends), that’s totally legal. Because, again, subtly manipulating people is as American as apple pie:

...
Now, of course, every elec­tion is a Face­book Elec­tion. And why not? As Bloomberg News has not­ed, Sil­i­con Val­ley ranks elec­tions “along­side the Super Bowl and the Olympics in terms of events that draw block­buster ad dol­lars and boost engage­ment.” In 2016, $1 bil­lion was spent on dig­i­tal advertising—with the bulk going to Face­book, Twit­ter, and Google.

What’s inter­est­ing here is that because so much mon­ey is at stake, there are absolute­ly no rules that would restrict any­thing an unsa­vory polit­i­cal appa­ratchik or a Sil­i­con Val­ley oli­garch might want to foist on the unsus­pect­ing dig­i­tal pub­lic. Creep­i­ly, Facebook’s own inter­nal research divi­sion car­ried out exper­i­ments show­ing that the plat­form could influ­ence people’s emo­tion­al state in con­nec­tion to a cer­tain top­ic or event. Com­pa­ny engi­neers call this fea­ture “emo­tion­al con­ta­gion”—i.e., the abil­i­ty to viral­ly influ­ence people’s emo­tions and ideas just through the con­tent of sta­tus updates. In the twist­ed econ­o­my of emo­tion­al con­ta­gion, a neg­a­tive post by a user sup­press­es pos­i­tive posts by their friends, while a pos­i­tive post sup­press­es neg­a­tive posts. “When a Face­book user posts, the words they choose influ­ence the words cho­sen lat­er by their friends,” explained the company’s lead sci­en­tist on this study.

On a very basic lev­el, Facebook’s opaque con­trol of its feed algo­rithm means the plat­form has real pow­er over people’s ideas and actions dur­ing an elec­tion. This can be done by a data shift as sim­ple and sub­tle as imper­cep­ti­bly tweak­ing a person’s feed to show more posts from friends who are, say, sup­port­ers of a par­tic­u­lar polit­i­cal can­di­date or a spe­cif­ic polit­i­cal idea or event. As far as I know, there is no law pre­vent­ing Face­book from doing just that: it’s plain­ly able and will­ing to influ­ence a user’s feed based on polit­i­cal aims—whether done for inter­nal cor­po­rate objec­tives, or due to pay­ments from polit­i­cal groups, or by the per­son­al pref­er­ences of Mark Zucker­berg.
...
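
To see how small such a feed tweak could be in practice, here is a hypothetical re-ranker, emphatically not Facebook’s actual code: posts are ordered by an ordinary engagement score, except that authors matching a favored political affinity get a modest multiplicative boost. A ten percent thumb on the scale is invisible to the user but changes what tops the feed:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical illustration of an imperceptible feed tweak. This is NOT
// Facebook's algorithm; it just shows how subtle such a bias can be.
public class BiasedFeed {
    static class Post {
        final String author, affinity;
        final double engagement; // whatever "normal" ranking would use
        Post(String author, String affinity, double engagement) {
            this.author = author;
            this.affinity = affinity;
            this.engagement = engagement;
        }
    }

    // Rank posts by engagement, quietly boosting one political affinity.
    static List<Post> rank(List<Post> posts, String favored, double boost) {
        return posts.stream()
                .sorted(Comparator.comparingDouble((Post p) ->
                        -(p.engagement * (p.affinity.equals(favored) ? boost : 1.0))))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Post> feed = List.of(
                new Post("alice", "candidate_x", 0.70),
                new Post("bob", "candidate_y", 0.75),
                new Post("carol", "candidate_x", 0.72));
        // With a mere 10% boost for candidate_x supporters, bob's genuinely
        // more engaging post no longer tops the feed: carol, alice, bob.
        rank(feed, "candidate_x", 1.10).forEach(p -> System.out.println(p.author));
    }
}
```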

And this con­tem­po­rary state of affairs did­n’t emerge spon­ta­neous­ly. As Levine cov­ers in Sur­veil­lance Val­ley, this is what the inter­net — back when it was the ARPANET mil­i­tary net­work — was all about from its very con­cep­tion:

...
There’s anoth­er, big­ger cul­tur­al issue with the way we’ve begun to exam­ine and dis­cuss Cam­bridge Analytica’s bat­tery of inter­net-based influ­ence ops. Peo­ple are still daz­zled by the idea that the inter­net, in its pure, untaint­ed form, is some kind of mag­ic machine dis­trib­ut­ing democ­ra­cy and egal­i­tar­i­an­ism across the globe with the touch of a few key­strokes. This is the gospel preached by a stal­wart cho­rus of Net prophets, from Jeff Jarvis and the late John Per­ry Bar­low to Clay Shirky and Kevin Kel­ly. These char­la­tans all feed on an hon­or­able demo­c­ra­t­ic impulse: peo­ple still want to des­per­ate­ly believe in the utopi­an promise of this technology—its abil­i­ty to equal­ize pow­er, end cor­rup­tion, top­ple cor­po­rate media monop­o­lies, and empow­er the indi­vid­ual.

This mythology—which is of course aggres­sive­ly con­fect­ed for mass con­sump­tion by Sil­i­con Val­ley mar­ket­ing and PR outfits—is deeply root­ed in our cul­ture; it helps explain why oth­er­wise seri­ous jour­nal­ists work­ing for main­stream news out­lets can uniron­i­cal­ly employ phras­es such as “infor­ma­tion wants to be free” and “Facebook’s engine of democ­ra­cy” and get away with it.

The truth is that the inter­net has nev­er been about egal­i­tar­i­an­ism or democ­ra­cy.

The ear­ly inter­net came out of a series of Viet­nam War coun­terin­sur­gency projects aimed at devel­op­ing com­put­er tech­nol­o­gy that would give the gov­ern­ment a way to man­age a com­plex series of glob­al com­mit­ments and to mon­i­tor and pre­vent polit­i­cal strife—both at home and abroad. The inter­net, going back to its first incar­na­tion as the ARPANET mil­i­tary net­work, was always about sur­veil­lance, pro­fil­ing, and tar­get­ing.

The influ­ence of U.S. coun­terin­sur­gency doc­trine on the devel­op­ment of mod­ern com­put­ers and the inter­net is not some­thing that many peo­ple know about. But it is a sub­ject that I explore at length in my book, Sur­veil­lance Val­ley. So what jumps out at me is how seam­less­ly the report­ed activ­i­ties of Cam­bridge Ana­lyt­i­ca fit into this his­tor­i­cal nar­ra­tive.
...

“The early internet came out of a series of Vietnam War counterinsurgency projects aimed at developing computer technology that would give the government a way to manage a complex series of global commitments and to monitor and prevent political strife—both at home and abroad. The internet, going back to its first incarnation as the ARPANET military network, was always about surveillance, profiling, and targeting.”

And one of the key figures behind this early ARPANET version of the internet, Ithiel de Sola Pool, got his start in this area in the 1950s, working at the Hoover Institution at Stanford University to understand the nature and causes of left-wing revolutions and distill them down to a mathematical formula. Pool, a virulent anti-Communist, also worked for JFK’s 1960 campaign and went on to start a private company, Simulmatics, offering services in modeling and manipulating human behavior based on large data sets on people:

...
Cam­bridge Ana­lyt­i­ca is a sub­sidiary of the SCL Group, a mil­i­tary con­trac­tor set up by a spooky huck­ster named Nigel Oakes that sells itself as a high-pow­ered con­clave of experts spe­cial­iz­ing in data-dri­ven coun­terin­sur­gency. It’s done work for the Pen­ta­gon, NATO, and the UK Min­istry of Defense in places like Afghanistan and Nepal, where it says it ran a “cam­paign to reduce and ulti­mate­ly stop the large num­bers of Maoist insur­gents in Nepal from break­ing into hous­es in remote areas to steal food, harass the home­own­ers and cause dis­rup­tion.”

In the grander scheme of high-tech coun­terin­sur­gency boon­dog­gles, which fea­tures such sto­ried psy-ops out­fits as Peter Thiel’s Palan­tir and Cold War dinosaurs like Lock­heed Mar­tin, the SCL Group appears to be a com­par­a­tive­ly minor play­er. Nev­er­the­less, its ambi­tious claims to recon­fig­ure the world order with some well-placed algo­rithms recalls one of the first major play­ers in the field: Simul­mat­ics, a 1960s coun­terin­sur­gency mil­i­tary con­trac­tor that pio­neered data-dri­ven elec­tion cam­paigns and whose founder, Ithiel de Sola Pool, helped shape the devel­op­ment of the ear­ly inter­net as a sur­veil­lance and coun­terin­sur­gency tech­nol­o­gy.

Ithiel de Sola Pool descended from a prominent rabbinical family that traced its roots to medieval Spain. Virulently anticommunist and tech-obsessed, he got his start in political work in the 1950s, working on a project at the Hoover Institution at Stanford University that sought to understand the nature and causes of left-wing revolutions and reduce their likely course down to a mathematical formula.

He then moved to MIT and made a name for him­self help­ing cal­i­brate the mes­sag­ing of John F. Kennedy’s 1960 pres­i­den­tial cam­paign. His idea was to mod­el the Amer­i­can elec­torate by decon­struct­ing each vot­er into 480 data points that defined every­thing from their reli­gious views to racial atti­tudes to socio-eco­nom­ic sta­tus. He would then use that data to run sim­u­la­tions on how they would respond to a par­tic­u­lar message—and those tri­al runs would per­mit major cam­paigns to fine-tune their mes­sages accord­ing­ly.

These new tar­get­ed mes­sag­ing tac­tics, enabled by rudi­men­ta­ry com­put­ers, had many fans in the per­ma­nent polit­i­cal class of Wash­ing­ton; their liveli­hoods, after all, were large­ly root­ed in their claims to ana­lyze and pre­dict polit­i­cal behav­ior. And so Pool lever­aged his research to launch Simul­mat­ics, a data ana­lyt­ics start­up that offered com­put­er sim­u­la­tion ser­vices to major Amer­i­can cor­po­ra­tions, help­ing them pre-test prod­ucts and con­struct adver­tis­ing cam­paigns.

Simul­mat­ics also did a brisk busi­ness as a mil­i­tary and intel­li­gence con­trac­tor. It ran sim­u­la­tions for Radio Lib­er­ty, the CIA’s covert anti-com­mu­nist radio sta­tion, help­ing the agency mod­el the Sovi­et Union’s inter­nal com­mu­ni­ca­tion sys­tem in order to pre­dict the effect that for­eign news broad­casts would have on the country’s polit­i­cal sys­tem. At the same time, Simul­mat­ics ana­lysts were doing coun­terin­sur­gency work under an ARPA con­tract in Viet­nam, con­duct­ing inter­views and gath­er­ing data to help mil­i­tary plan­ners under­stand why Viet­namese peas­ants rebelled and resist­ed Amer­i­can paci­fi­ca­tion efforts. Simulmatic’s work in Viet­nam was just one piece of a bru­tal Amer­i­can coun­terin­sur­gency pol­i­cy that involved covert pro­grams of assas­si­na­tions, ter­ror, and tor­ture that col­lec­tive­ly came to be known as the Phoenix Pro­gram.
...
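
It’s worth noting how little machinery Pool’s core idea actually requires. Here is a toy sketch of a Simulmatics-style message simulation under invented assumptions: each voter type is a vector of attitude scores, a message is a vector of emphasis weights over the same issues, and predicted appeal is a population-weighted dot product. The voter types, issues, and numbers below are made up for illustration; Pool’s real model carved the electorate into types described by 480 data points:

```java
import java.util.Map;

// Toy sketch of a Simulmatics-style simulation. Voter types are vectors of
// attitude scores, a message is a vector of emphasis weights over the same
// issues, and predicted appeal is their dot product averaged over the
// electorate. All names and numbers are invented for illustration.
public class MessageSimulator {
    // Issue order: {civil rights, anti-communism, social spending}
    static final Map<String, double[]> ATTITUDES = Map.of(
            "northern_urban_catholic", new double[]{0.6, 0.7, 0.8},
            "southern_rural_protestant", new double[]{-0.4, 0.9, 0.2});
    static final Map<String, Integer> COUNTS = Map.of( // thousands of voters
            "northern_urban_catholic", 120,
            "southern_rural_protestant", 80);

    static double predictedSupport(double[] messageEmphasis) {
        double total = 0, voters = 0;
        for (Map.Entry<String, double[]> type : ATTITUDES.entrySet()) {
            double appeal = 0;
            double[] attitudes = type.getValue();
            for (int i = 0; i < attitudes.length; i++)
                appeal += attitudes[i] * messageEmphasis[i];
            int count = COUNTS.get(type.getKey());
            total += appeal * count;
            voters += count;
        }
        return total / voters; // average predicted appeal
    }

    public static void main(String[] args) {
        // Campaign fine-tuning: compare two drafts of the same stump speech.
        System.out.printf("civil-rights-heavy message: %.3f%n",
                predictedSupport(new double[]{0.8, 0.1, 0.1})); // 0.294
        System.out.printf("cold-war-heavy message:     %.3f%n",
                predictedSupport(new double[]{0.1, 0.8, 0.1})); // 0.700
    }
}
```

Run the simulation across enough message variants and voter types and you have Pool’s fine-tuning loop, which is recognizably the same loop the modern ad-targeting platforms run at vastly larger scale.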

And part of what drove Pool was a utopian belief that computers and massive amounts of data could be used to run society harmoniously. Left-wing revolutions were problems to be managed with Big Data. That’s pretty important historical context when thinking about the role Cambridge Analytica played in electing Donald Trump:

...
At the same time, Pool was also personally involved in an early ARPANET-connected version of Thiel’s Palantir effort—a pioneering system that would allow military planners and intelligence to ingest and work with large and complex data sets. Pool’s pioneering work won him a devoted following among a group of technocrats who shared a utopian belief in the power of computer systems to run society from the top down in a harmonious manner. They saw the left-wing upheavals of the 1960s not as a political or ideological problem but as a challenge of management and engineering. Pool fed these reveries by setting out to build computerized systems that could monitor the world in real time and render people’s lives transparent. He saw these surveillance and management regimes in utopian terms—as a vital tool to manage away social strife and conflict. “Secrecy in the modern world is generally a destabilizing factor,” he wrote in a 1969 essay. “Nothing contributes more to peace and stability than those activities of electronic and photographic eavesdropping, of content analysis and textual interpretation.”
...

And guess what: the American public wasn’t enamored with Pool’s vision of a world managed by computing technology and Big Data models of society. When the public learned in the ’60s and ’70s about these early versions of the internet, inspired by visions of a computer-managed world, it got scared:

...
With the advent of cheap­er com­put­er tech­nol­o­gy in the 1960s, cor­po­rate and gov­ern­ment data­bas­es were already mak­ing a good deal of Pool’s prophe­cy come to pass, via sophis­ti­cat­ed new modes of con­sumer track­ing and pre­dic­tive mod­el­ing. But rather than greet­ing such advances as the augurs of a new demo­c­ra­t­ic mir­a­cle, peo­ple at the time saw it as a threat. Crit­ics across the polit­i­cal spec­trum warned that the pro­lif­er­a­tion of these tech­nolo­gies would lead to cor­po­ra­tions and gov­ern­ments con­spir­ing to sur­veil, manip­u­late, and con­trol soci­ety.

This fear res­onat­ed with every part of the culture—from the new left to prag­mat­ic cen­trists and reac­tionary South­ern Democ­rats. It prompt­ed some high-pro­file exposés in papers like the New York Times and Wash­ing­ton Post. It was report­ed on in trade mag­a­zines of the nascent com­put­er indus­try like Com­put­er­World. And it com­mand­ed prime real estate in estab­lish­ment rags like The Atlantic.

Pool per­son­i­fied the prob­lem. His belief in the pow­er of com­put­ers to bend people’s will and man­age soci­ety was seen as a dan­ger. He was attacked and demo­nized by the anti­war left. He was also reviled by main­stream anti-com­mu­nist lib­er­als.

A prime example: The 480, a 1964 best-selling political thriller whose plot revolved around the danger that computer polling and simulation posed for democratic politics—a plot directly inspired by the activities of Ithiel de Sola Pool’s Simulmatics. This newfangled information technology was seen as a weapon of manipulation and coercion, wielded by cynical technocrats who did not care about winning people over with real ideas, genuine statesmanship or political platforms but simply sold candidates just like they would a car or a bar of soap.
...

But that fear somehow disappeared in subsequent decades, only to be replaced with a faith in our benevolent techno-elite. And a faith that this mass public/private surveillance system is actually an empowering tool that will lead to a limitless future. And that is perhaps the biggest scandal here: The public didn’t just forget to keep an eye on the powerful. The public forgot to keep an eye on the people whose power is derived from keeping an eye on the public. We built a surveillance state at the same time we fell into a fog of civic and historical amnesia. And that has coincided with the rise of a plutocracy, the dominance of right-wing anti-government economic doctrines, and the larger failure of the American political and economic elites to deliver a society that actually works for average people. To put it another way, the rise of the modern surveillance state is one element of a massive, decades-long process of collectively ‘dropping the ball’. We screwed up massively, and Facebook and Google are just two of the consequences. And yet we still don’t view the Trump phenomenon within the context of that massive collective screw up, which means we’re still screwing up massively:

...
Yet the fear that Ithiel de Sola Pool and his tech­no­crat­ic world view inspired half a cen­tu­ry ago has been wiped from our cul­ture. For decades, we’ve been told that a cap­i­tal­ist soci­ety where no secrets could be kept from our benev­o­lent elite is not some­thing to fear—but some­thing to cheer and pro­mote.

Now, only after Don­ald Trump shocked the lib­er­al polit­i­cal class is this fear start­ing to resur­face. But it’s doing so in a twist­ed, nar­row way.

***

And that’s the big­ger issue with the Cam­bridge Ana­lyt­i­ca freak­out: it’s not just anti-his­tor­i­cal, it’s also pro­found­ly anti-polit­i­cal. Peo­ple are still try­ing to blame Don­ald Trump’s sur­prise 2016 elec­toral vic­to­ry on some­thing, anything—other than America’s degen­er­ate pol­i­tics and a polit­i­cal class that has presided over a stun­ning nation­al decline. The keep­ers of con­ven­tion­al wis­dom all insist in one way or anoth­er that Trump won because some­thing nov­el and unique hap­pened; that some­thing had to have gone hor­ri­bly wrong. And if you’re able to iden­ti­fy and iso­late this some­thing and get rid of it, every­thing will go back to normal—back to sta­tus quo, when every­thing was good.
...

So the biggest story here isn’t that Cambridge Analytica was engaged in a mass manipulation campaign. And the biggest story isn’t even that Cambridge Analytica was engaged in a cutting-edge commercial mass manipulation campaign. Because both of those stories are eclipsed by the story that even if Cambridge Analytica really was engaged in a cutting-edge commercial campaign, it probably wasn’t nearly as cutting-edge as what Facebook and Google and the other data giants routinely engage in. And this situation has been building for decades, within the context of the much larger scandal of the rise of an oligarchy that more or less runs America by and for powerful interests. Powerful interests that are overwhelmingly dedicated to right-wing elitist doctrines that view the public as a resource to be controlled and exploited for private profit.

It’s all a reminder that, as with so many incredibly complex issues, creating very high quality government is the only feasible answer. A high quality government managed by a self-aware public. Some sort of ‘surveillance state’ is almost an inevitability as long as we have ubiquitous surveillance technology. Even the array of ‘crypto’ tools touted in recent years has consistently proven to be vulnerable, which isn’t necessarily a bad thing, since ubiquitous crypto-technology comes with its own suite of mega-collective headaches. National security and personal data insecurity really are intertwined in both mutually inclusive and exclusive ways. It’s not as if the national security hawks’ argument that “you can’t be free if you’re dead from [insert war, terror, or other random chaos a national security state is supposed to deal with]” isn’t valid. But fears of Big Brother are also valid, as our present situation amply demonstrates. The path isn’t clear, which is why a national security state with a significant private sector component and access to ample intimate details is likely for the foreseeable future whether you like it or not. People err on the side of immediate safety. So we had better have very high quality government. Especially high quality regulations for the private sector components of that national security state.

And while digital giants like Google and Facebook will inevitably have access to the troves of personal data they need to offer the kinds of services people want, there’s no reason they can’t be regulated heavily so they don’t become personal data repositories for sale. Which is what they are now.

What do we do about services that people use to run their lives which, by definition, necessitate the collection of private data by a third party? How do we deal with these challenges? Well, again, it starts with being aware of them and actually trying to collectively grapple with them so some sort of general consensus can be arrived at. And that’s all why we need to recognize that it is imperative that the public surveil the surveillance state, along with surveilling the rest of the world going on around us too. A self-aware surveillance state comprised of a self-aware populace of people who know what’s going on with their surveillance state and the world. In other words, part of the solution to ‘Big Data Big Brother’ really is a society of ‘Little Brothers and Sisters’ who are collectively very informed about what is going on in the world and politically capable of effecting changes to that surveillance state — and the rest of government or the private sector — when necessary change is identified. In other other words, the one ‘utopian’ solution we can’t afford to give up on is the utopia of a well-functioning democracy populated by a well-informed citizenry. A well-armed citizenry armed with relevant facts and wisdom (and an extensive understanding of the history and technique of fascism and other authoritarian movements). Because a clueless society will be an abusively surveilled society.

But the fact that this Cambridge Analytica scandal is a surprise and is being covered largely in isolation from this broader historic and contemporary context is a reminder that we are nowhere near that democratic ideal of a well-informed citizenry. Well, guess what would be a really valuable tool for surveilling the surveillance state and the rest of the world around us and becoming that well-informed citizenry: the internet! Specifically, we really do need to read and digest growing amounts of information to make sense of an increasingly complex world. But the internet is just the start. The goal needs to be the kind of functional, self-aware democracy where situations like the current one don’t develop in a fog of collective amnesia and can be pro-actively managed. To put it another way, we need an inverse of Ithiel de Sola Pool’s vision of a world where benevolent elites use computers and Big Data to manage the rabble and ward off political revolutions. Instead, we need a political revolution of the rabble fueled by the knowledge of our history and our world that the internet makes widely accessible. And one of the key goals of that political revolution needs to be to create a world where the knowledge the internet makes widely available is used to rein in our elites and build a world that works for everyone.

And yes, that implicitly implies a left-wing revolution, since left-wing democratic movements are the only kind that have everyone in mind. And yes, this implies an economic revolution that systematically frees up time for virtually everyone so people actually have the time to inform themselves. Economic security and time security. We need to build a world that provides both to everyone.

So when we ask ourselves how we should respond to the growing Cambridge Analytica/Facebook scandal, don’t forget that one of the key lessons the story of Cambridge Analytica teaches us is that there is an immense amount of knowledge about ourselves — our history and contemporary context — that we needed to learn and didn’t. And that includes envisioning what a functional democratic society and economy that works for everyone would look like and building it. Yes, the internet could be very helpful in that process, just don’t forget about everything else that will be required to build that functional democracy.

Discussion


  1. Here’s a good example of how many of the problems with Facebook are facilitated by the many privacy problems with the rest of the tech sector: A number of Facebook users discovered a rather creepy privacy violation by Facebook. It turns out that Facebook was collecting metadata about the calls and texts people were sending from their smartphones with the Facebook app and Google’s Android operating system.

    And it also turns out that Facebook used a number of sleazy excuses to “get permission” to collect this data. First, Facebook had users agree to give such data away by hiding the disclosure in obtuse language in the user agreement. Second, the default setting for the Facebook app was to give this data away. Users could turn off this data sharing, but it was never obvious it was on.

    Third, it was based on exploiting how Android’s user permissions system encourages people to share vast amounts of data without realizing it. This is where this becomes a Google scandal too. If you had the Android operating system, the Facebook app would try to get permission to access your phone contact information, ostensibly for Facebook’s friend recommendation algorithms. But on older versions of Android — before version 4.1 (Jelly Bean) — granting an app permission to read contacts during installation also granted it access to call and message logs by default. So this was simply egregious privacy design by Google, and Facebook egregiously exploited it (surprise!).

    And when this loose permissions system was fixed in later versions of Android, Facebook continued to use a loophole to keep grabbing the call and text metadata. The permission structure was changed in the Android API in version 16, but Android applications could bypass this change if they were written to earlier versions of the API, so Facebook’s app could continue to gain access to call and SMS data by specifying an earlier Android SDK version. In other words, upgrading the Android operating system didn’t guarantee that upgrades to user data privacy rules would actually take effect for the apps you already had installed. Which, again, is egregious. But that’s what Google’s Android operating system allowed, and Facebook totally exploited it until Google finally closed the loophole in October of 2017.
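
    To make that loophole concrete, here’s a minimal sketch of what the grab looked like at the API level, assuming the pre-4.1 permission behavior described above. The manifest declaration and call-log query use real Android APIs, but the app is hypothetical and this is an illustration of the technique, not Facebook’s actual code:

    ```java
    // Hypothetical sketch: an app declaring an old targetSdkVersion so the
    // pre-API-16 permission behavior still applies. In AndroidManifest.xml:
    //
    //   <uses-sdk android:minSdkVersion="9" android:targetSdkVersion="15" />
    //   <uses-permission android:name="android.permission.READ_CONTACTS" />
    //
    // Under that configuration, READ_CONTACTS also implied call-log access,
    // so code like the following could quietly harvest call metadata.

    import android.content.Context;
    import android.database.Cursor;
    import android.provider.CallLog;

    public class CallMetadataGrabber {
        // Returns one "number,type,timestamp,duration" line per logged call.
        public static String dumpCallLog(Context context) {
            StringBuilder out = new StringBuilder();
            Cursor cursor = context.getContentResolver().query(
                    CallLog.Calls.CONTENT_URI,
                    new String[]{CallLog.Calls.NUMBER, CallLog.Calls.TYPE,
                                 CallLog.Calls.DATE, CallLog.Calls.DURATION},
                    null, null, CallLog.Calls.DATE + " DESC");
            if (cursor != null) {
                try {
                    while (cursor.moveToNext()) {
                        out.append(cursor.getString(0)).append(',')
                           .append(cursor.getInt(1)).append(',')
                           .append(cursor.getLong(2)).append(',')
                           .append(cursor.getLong(3)).append('\n');
                    }
                } finally {
                    cursor.close();
                }
            }
            return out.toString(); // ready to upload alongside the contact list
        }
    }
    ```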

    Note that Apple’s iOS phones didn’t have this issue with the Facebook app because the iOS operating system simply does not give apps access to that kind of information. So the permissions Google was giving out were bad even compared to its major competitor in the smartphone operating system space.

    It’s also quite analogous to what Facebook was doing with the “friends permissions” giveaway of Facebook profile information to app developers. In both cases, a major platform had a giant privacy-violating loophole built into it that developers knew about but that the public wasn’t really aware it was signing up for. That’s become much of the business model of the modern internet giants, and as we can see it’s a model that feeds on itself. Google and Facebook feed information to each other, indicating that the Big Data giants have determined that it’s more profitable to share their data on all of us than to keep it locked up and proprietary.

    Recall how Face­book whis­tle-blow­er Sandy Parak­i­las said he remem­bered Face­book exec­u­tives get­ting con­cerned that they were giv­ing so much of their infor­ma­tion on peo­ple away to app devel­op­ers that com­peti­tors would be able to cre­ate their own social net­works. That’s how much data Face­book was giv­ing away. And now we learn that Google’s oper­at­ing sys­tem made an egre­gious amount of data avail­able to app devel­op­ers — like meta­da­ta on calls and texts — if peo­ple gave an app “con­tact” per­mis­sions.

    And so we can see that Facebook and Google aren’t just in the ad space. They’re in the data brokerage space too. They’ve clearly determined that maximizing profits just might require handing over the kind of data people assumed these data giants carefully guarded. Instead, they’ve been carefully and steadily handing that data out. Presumably because it’s more profitable:

    Giz­mo­do

    Facebook’s Defense for Suck­ing Up Your Call and Text Data Entire­ly Miss­es the Point

    Rhett Jones
    3/26/18 2:00pm

    A num­ber of Face­book users dis­cov­ered over the past few days that the social media com­pa­ny had col­lect­ed a creepy lev­el of infor­ma­tion about their calls and texts. Many users claimed they nev­er gave Face­book per­mis­sion to gath­er this infor­ma­tion. How­ev­er, in response to the uproar, Face­book says the “fea­ture” is opt-in only. Basi­cal­ly, the company’s say­ing it’s your own fault if you don’t like it.

    To under­stand what Face­book is defend­ing requires a lot of explanation—and that’s the heart of the prob­lem.

    ...

    But as the com­pa­ny faces grow­ing scruti­ny over its data prac­tices, a num­ber of users began dig­ging around in their archives. Spurred by a tweet from devel­op­er Dylan McK­ay, social media users com­plained this week­end that Face­book had records of their con­tacts, as well as call and text meta­da­ta. Face­book has let users export their data since 2010.

    Downloaded my facebook data as a ZIP file. Somehow it has my entire call history with my partner’s mum pic.twitter.com/CIRUguf4vD — Dylan McKay (@dylanmckaynz) March 21, 2018

    Ars Technica spoke with numerous users who felt blindsided, and the publication’s staff did their own tests, finding SMS data and contacts data from an Android device they used in 2015 and 2016. From the report:

    Face­book uses phone-con­tact data as part of its friend rec­om­men­da­tion algo­rithm. And in recent ver­sions of the Mes­sen­ger appli­ca­tion for Android and Face­book Lite devices, a more explic­it request is made to users for access to call logs and SMS logs on Android and Face­book Lite devices. But even if users didn’t give that per­mis­sion to Mes­sen­ger, they may have giv­en it inad­ver­tent­ly for years through Facebook’s mobile apps—because of the way Android has han­dled per­mis­sions for access­ing call logs in the past.

    If you grant­ed per­mis­sion to read con­tacts dur­ing Facebook’s instal­la­tion on Android a few ver­sions ago—specifically before Android 4.1 (Jel­ly Bean)—that per­mis­sion also grant­ed Face­book access to call and mes­sage logs by default. The per­mis­sion struc­ture was changed in the Android API in ver­sion 16. But Android appli­ca­tions could bypass this change if they were writ­ten to ear­li­er ver­sions of the API, so Face­book API could con­tin­ue to gain access to call and SMS data by spec­i­fy­ing an ear­li­er Android SDK ver­sion. Google dep­re­cat­ed ver­sion 4.0 of the Android API in Octo­ber 2017—the point at which the lat­est call meta­da­ta in Face­book users’ data was found. Apple iOS has nev­er allowed silent access to call data.

    To put all of that into plain Eng­lish, Google’s Android OS has its own pri­va­cy issues, and cou­pled with Facebook’s apps, it could’ve made it pos­si­ble for Face­book users to opt-into the company’s sur­veil­lance pro­gram with­out real­iz­ing it.

    Face­book respond­ed on Sun­day with a “Fact Check” blog post claim­ing that any asser­tion that “Face­book has been log­ging people’s call and SMS (text) his­to­ry with­out their per­mis­sion” is false. As the unsigned blog reads, in part:

    Call and text his­to­ry log­ging is part of an opt-in fea­ture for peo­ple using Mes­sen­ger or Face­book Lite on Android. This helps you find and stay con­nect­ed with the peo­ple you care about, and pro­vides you with a bet­ter expe­ri­ence across Face­book. Peo­ple have to express­ly agree to use this fea­ture. If, at any time, they no longer wish to use this fea­ture they can turn it off in set­tings, or here for Face­book Lite users, and all pre­vi­ous­ly shared call and text his­to­ry shared via that app is delet­ed. While we receive cer­tain per­mis­sions from Android, upload­ing this infor­ma­tion has always been opt-in only.

    It’s true that Face­book, as far as we know, has always made SMS meta­da­ta col­lec­tion an opt-in part of the set­up process. But take a look at the dif­fer­ence between today’s opt-in screen and one users saw back in 2016.

    Today, Mes­sen­ger gives you the options to “turn on” meta­da­ta col­lec­tion, opt-out, or learn more. But before it faced crit­i­cism in 2016, the only options were “OK” or “set­tings.” So, it’s like­ly many peo­ple gave Face­book per­mis­sion at one time with­out real­iz­ing it.

    This is an excel­lent illus­tra­tion of the web that Face­book weaves. In the Cam­bridge Ana­lyt­i­ca scan­dal, Face­book allowed the per­son­al data of 50 mil­lion users to get into the hands of a third-par­ty app, in part because its poli­cies gave up the data of users’ friends based on per­mis­sion from a sin­gle user. When that third par­ty trans­ferred the infor­ma­tion to a polit­i­cal data analy­sis firm, which was a vio­la­tion of Facebook’s poli­cies, Face­book did noth­ing when it found out in 2015 but issue a stern warn­ing and make Cam­bridge Ana­lyt­i­ca sign a doc­u­ment promis­ing that the data was delet­ed. Now, Face­book says that it no longer shot­guns that data out to devel­op­ers based on a sin­gle per­mis­sion, so appar­ent­ly every­one should feel okay going for­ward.

    Explain­ing what’s going on shouldn’t be so dif­fi­cult or time-con­sum­ing. Face­book claims this is all designed to make things more con­ve­nient for you. But it doesn’t have to con­stant­ly track your text mes­sages and the dura­tion of your calls just to cap­ture your con­tacts list. That could be a one-time thing that you do when you set the ser­vice up, and Face­book could peri­od­i­cal­ly ask if you want to do anoth­er import a month lat­er.

    How­ev­er, Face­book has turned a con­ve­nience into an excuse for grab­bing more infor­ma­tion that it can com­bine with every­thing else to make a per­fect psy­cho­log­i­cal and social pro­file of you, the user. And it has demon­strat­ed that it can’t be trust­ed to keep that data to itself.

    Mark Zuckerberg told CNN last week that he was open to more regulations being applied to his platform. “You know, I think in general technology is an increasingly important trend in the world and I actually think the question is more what’s the right regulation rather than ‘yes’ or ‘no’ should it be regulated,” he said. This is foolish because government regulations will undoubtedly get screwed up and lead to unintended consequences.

    But if Mark insists, the gov­ern­ment could cre­ate strict terms of ser­vice require­ments for what a com­pa­ny explains to a user before they sign up. Those reg­u­la­tions could require clear exam­ples of how data might be used and even require users to com­plete a sim­ple quiz to show they under­stand before final­iz­ing the app’s set­up. Of course, that kind of bur­den­some activ­i­ty wouldn’t be nec­es­sary if Face­book would just make every­thing clear on its own. Unfor­tu­nate­ly, with con­gres­sion­al hear­ings sched­uled, and gov­ern­ment agency inves­ti­ga­tions under­way, it may be too late.

    ———-

    “Facebook’s Defense for Suck­ing Up Your Call and Text Data Entire­ly Miss­es the Point” by Rhett Jones; Giz­mo­do; 03/26/2018

    “To under­stand what Face­book is defend­ing requires a lot of explanation—and that’s the heart of the prob­lem.”

    It’s a key insight: it really is a reflection of the heart of the problem that simply understanding what Facebook is defending requires a lot of explanation. When Facebook started collecting people’s call and text metadata through its app, it was exploiting the fact that Google’s Android system allowed it to do that in the first place when users gave “contact” permissions to an app (most people probably didn’t assume that giving an app contact permissions was also giving away call and text metadata). And then, after Google changed the Android app permissions system and separated the permissions for contact information from the permissions for call and text metadata, Facebook relied on a loophole Google provided where apps that were already installed could continue collecting that data. And none of this was ever made clear to the millions of people using the Facebook app on their Android phones because it was hidden in the dense text of user agreements that no one reads. The convolutedness of the act obscures the act.

    And keep in mind that Facebook is claiming that it merely wanted this call and text metadata for its friend recommendation algorithm. Which is, of course, absurd. That data was obviously going to go into the pool of data Facebook is compiling on everyone.

    ...
    But as the com­pa­ny faces grow­ing scruti­ny over its data prac­tices, a num­ber of users began dig­ging around in their archives. Spurred by a tweet from devel­op­er Dylan McK­ay, social media users com­plained this week­end that Face­book had records of their con­tacts, as well as call and text meta­da­ta. Face­book has let users export their data since 2010.

    Downloaded my facebook data as a ZIP file. Somehow it has my entire call history with my partner’s mum pic.twitter.com/CIRUguf4vD — Dylan McKay (@dylanmckaynz) March 21, 2018

    Ars Technica spoke with numerous users who felt blindsided, and the publication’s staff did their own tests, finding SMS data and contacts data from an Android device they used in 2015 and 2016. From the report:

    Face­book uses phone-con­tact data as part of its friend rec­om­men­da­tion algo­rithm. And in recent ver­sions of the Mes­sen­ger appli­ca­tion for Android and Face­book Lite devices, a more explic­it request is made to users for access to call logs and SMS logs on Android and Face­book Lite devices. But even if users didn’t give that per­mis­sion to Mes­sen­ger, they may have giv­en it inad­ver­tent­ly for years through Facebook’s mobile apps—because of the way Android has han­dled per­mis­sions for access­ing call logs in the past.

    If you grant­ed per­mis­sion to read con­tacts dur­ing Facebook’s instal­la­tion on Android a few ver­sions ago—specifically before Android 4.1 (Jel­ly Bean)—that per­mis­sion also grant­ed Face­book access to call and mes­sage logs by default. The per­mis­sion struc­ture was changed in the Android API in ver­sion 16. But Android appli­ca­tions could bypass this change if they were writ­ten to ear­li­er ver­sions of the API, so Face­book API could con­tin­ue to gain access to call and SMS data by spec­i­fy­ing an ear­li­er Android SDK ver­sion. Google dep­re­cat­ed ver­sion 4.0 of the Android API in Octo­ber 2017—the point at which the lat­est call meta­da­ta in Face­book users’ data was found. Apple iOS has nev­er allowed silent access to call data.

    To put all of that into plain Eng­lish, Google’s Android OS has its own pri­va­cy issues, and cou­pled with Facebook’s apps, it could’ve made it pos­si­ble for Face­book users to opt-into the company’s sur­veil­lance pro­gram with­out real­iz­ing it.
    ...

    “To put all of that into plain Eng­lish, Google’s Android OS has its own pri­va­cy issues, and cou­pled with Facebook’s apps, it could’ve made it pos­si­ble for Face­book users to opt-into the company’s sur­veil­lance pro­gram with­out real­iz­ing it.”

    Facebook and Google working together to share more of what they know about us with each other. That’s basically what happened. It was a team effort.

    And as the article notes, when Facebook claims that this was all fine because it was an opt-in option, it ignores the fact that the app used to make it very unclear that opting out was an option at all. The opt-out option was hidden in the settings, and opting in was the default setting people had selected when they installed the app. And it was like that as recently as 2016:

    ...
    Face­book respond­ed on Sun­day with a “Fact Check” blog post claim­ing that any asser­tion that “Face­book has been log­ging people’s call and SMS (text) his­to­ry with­out their per­mis­sion” is false. As the unsigned blog reads, in part:

    Call and text his­to­ry log­ging is part of an opt-in fea­ture for peo­ple using Mes­sen­ger or Face­book Lite on Android. This helps you find and stay con­nect­ed with the peo­ple you care about, and pro­vides you with a bet­ter expe­ri­ence across Face­book. Peo­ple have to express­ly agree to use this fea­ture. If, at any time, they no longer wish to use this fea­ture they can turn it off in set­tings, or here for Face­book Lite users, and all pre­vi­ous­ly shared call and text his­to­ry shared via that app is delet­ed. While we receive cer­tain per­mis­sions from Android, upload­ing this infor­ma­tion has always been opt-in only.

    It’s true that Face­book, as far as we know, has always made SMS meta­da­ta col­lec­tion an opt-in part of the set­up process. But take a look at the dif­fer­ence between today’s opt-in screen and one users saw back in 2016.

    Today, Mes­sen­ger gives you the options to “turn on” meta­da­ta col­lec­tion, opt-out, or learn more. But before it faced crit­i­cism in 2016, the only options were “OK” or “set­tings.” So, it’s like­ly many peo­ple gave Face­book per­mis­sion at one time with­out real­iz­ing it.
    ...

    And it’s also all an example of how the ostensibly helpful reasons for collecting this personalized data (like making the friend recommendation algorithms better in this case) are used as an excuse to engage in the personal information equivalent of a smash-and-grab ransacking:

    ...
    This is an excel­lent illus­tra­tion of the web that Face­book weaves. In the Cam­bridge Ana­lyt­i­ca scan­dal, Face­book allowed the per­son­al data of 50 mil­lion users to get into the hands of a third-par­ty app, in part because its poli­cies gave up the data of users’ friends based on per­mis­sion from a sin­gle user. When that third par­ty trans­ferred the infor­ma­tion to a polit­i­cal data analy­sis firm, which was a vio­la­tion of Facebook’s poli­cies, Face­book did noth­ing when it found out in 2015 but issue a stern warn­ing and make Cam­bridge Ana­lyt­i­ca sign a doc­u­ment promis­ing that the data was delet­ed. Now, Face­book says that it no longer shot­guns that data out to devel­op­ers based on a sin­gle per­mis­sion, so appar­ent­ly every­one should feel okay going for­ward.

    Explain­ing what’s going on shouldn’t be so dif­fi­cult or time-con­sum­ing. Face­book claims this is all designed to make things more con­ve­nient for you. But it doesn’t have to con­stant­ly track your text mes­sages and the dura­tion of your calls just to cap­ture your con­tacts list. That could be a one-time thing that you do when you set the ser­vice up, and Face­book could peri­od­i­cal­ly ask if you want to do anoth­er import a month lat­er.

    How­ev­er, Face­book has turned a con­ve­nience into an excuse for grab­bing more infor­ma­tion that it can com­bine with every­thing else to make a per­fect psy­cho­log­i­cal and social pro­file of you, the user. And it has demon­strat­ed that it can’t be trust­ed to keep that data to itself.
    ...

    “How­ev­er, Face­book has turned a con­ve­nience into an excuse for grab­bing more infor­ma­tion that it can com­bine with every­thing else to make a per­fect psy­cho­log­i­cal and social pro­file of you, the user. And it has demon­strat­ed that it can’t be trust­ed to keep that data to itself.”

    While Facebook may not have perfect psychological and social profiles of everyone, it probably has the best or nearly the best, with Google possibly knowing more about people. And it’s hard to imagine that this call and text metadata isn’t potentially pretty valuable information for putting together those personal profiles on everyone. So it’s worth noting that this is potentially the same kind of profile data that Facebook gave out to Cambridge Analytica and thousands of other app developers. In other words, this call and text metadata slurping scandal is potentially also part of the Cambridge Analytica scandal, in the sense that the insights Facebook gained from the call and text metadata could have shown up in those profiles Facebook was handing out to app developers like Cambridge Analytica.

    Which is a reminder that this new scan­dal of Google’s Android OS giv­ing Face­book this call and text meta­da­ta prob­a­bly involves a lot more than just Face­book col­lect­ing this kind of data. Who knows how many oth­er app devel­op­ers whose apps request­ed “con­tact” per­mis­sions also went ahead and grabbed all the call and text meta­da­ta?

    Also don’t for­get that this call and text meta­da­ta includes data about the peo­ple on the oth­er side of those calls and texts. So Face­book was grab­bing data on more peo­ple than just the app users. And any oth­er Android devel­op­ers were poten­tial­ly grab­bing that data too. It’s anoth­er par­al­lel with the Face­book “friends per­mis­sion” loop­hole exploit­ed by Cam­bridge Ana­lyt­i­ca and oth­er Face­book app devel­op­ers: you don’t have to down­load these pri­va­cy vio­lat­ing apps to be impact­ed. Sim­ply com­mu­ni­cat­ing with some­one who does have the pri­va­cy vio­lat­ing app will get your pri­va­cy vio­lat­ed too.

    So as we can see, Facebook doesn’t just have a scandal involving giving private data away. It also has a scandal involving collecting private data. A scandal that potentially any other Android app developer might also be involved in. Which means there’s probably a black market for this kind of data too. Because Google, like Facebook, apparently couldn’t resist making itself a data broker. And now all this data is potentially floating around out there. It was a wildly irresponsible act on Google’s part to make that kind of data available under the “contacts” permissions in the Android operating system, but that’s how far Google went to make data collection a priority in that system’s design. Presumably to encourage more app developers to make Android apps. Access to our data is literally part of the incentive structure. It’s really quite stunning. And quite analogous to what Facebook is in trouble for with Cambridge Analytica.

    But at least those Face­book friend rec­om­men­da­tion algo­rithms are prob­a­bly very well pow­ered, so there’s that.

    Posted by Pterrafractyl | April 1, 2018, 11:15 pm
  2. We should prob­a­bly get ready for a lot more sto­ries like this: Face­book just issued a flur­ry of new updates to its data-shar­ing poli­cies. Some of these changes include new restric­tions on the data made avail­able to app devel­op­ers while oth­er changes are focused on clar­i­fy­ing the user agree­ments that dis­close what data is tak­en.

    And there’s a new esti­mate from Face­book on the num­ber of Face­book pro­files grabbed by Cam­bridge Ana­lyt­i­ca’s app. It’s gone from 50 mil­lion to 87 mil­lion pro­files:

    Asso­ci­at­ed Press

    Face­book: 87M Users May Have Had Data Breached By Cam­bridge Ana­lyt­i­ca

    By BARBARA ORTUTAY
    April 4, 2018 2:47 pm

    NEW YORK (AP) — Face­book revealed Wednes­day that tens of mil­lions more peo­ple might have been exposed in the Cam­bridge Ana­lyt­i­ca pri­va­cy scan­dal than pre­vi­ous­ly thought and said it will restrict the data it allows out­siders to access on its users.

    Those devel­op­ments came as con­gres­sion­al offi­cials said CEO Mark Zucker­berg will tes­ti­fy next week, while Face­book unveiled a new pri­va­cy pol­i­cy that aims to explain the data it gath­ers on users more clear­ly — but doesn’t actu­al­ly change what it col­lects and shares.

    Facebook is facing its worst privacy scandal in years following allegations that a Trump-affiliated data mining firm, Cambridge Analytica, used ill-gotten data from millions of users to try to influence elections. The company said Wednesday that as many as 87 million people might have had their data accessed — an increase from the 50 million disclosed in published reports.

    This Mon­day, all Face­book users will receive a notice on their Face­book feeds with a link to see what apps they use and what infor­ma­tion they have shared with those apps. They’ll have a chance to delete apps they no longer want. Users who might have had their data shared with Cam­bridge Ana­lyt­i­ca will be told of that. Face­book says most of the affect­ed users are in the U.S.

    ...

    Face­book is restrict­ing access that apps can get about users’ events, as well as infor­ma­tion about groups such as mem­ber lists and con­tent. In addi­tion, the com­pa­ny is also remov­ing the option to search for users by enter­ing a phone num­ber or an email address. While this was use­ful to peo­ple to find friends who may have a com­mon name, Face­book says mali­cious actors abused it by col­lect­ing people’s pro­file infor­ma­tion through phone or email lists they had access to.

    This comes on top of changes announced a few weeks ago. For exam­ple, Face­book has said it will remove devel­op­ers’ access to people’s data if the per­son has not used the app in three months.

    Ear­li­er Wednes­day, Face­book unveiled a new pri­va­cy pol­i­cy that seeks to clar­i­fy its data col­lec­tion and use.

    For instance, Face­book added a sec­tion explain­ing that it col­lects people’s con­tact infor­ma­tion if they choose to “upload, sync or import” this to the ser­vice. This may include users’ address books on their phones, as well as their call logs and text his­to­ries. The new pol­i­cy says Face­book may use this data to help “you and oth­ers find peo­ple you may know.”

    The pre­vi­ous pol­i­cy did not men­tion call logs or text his­to­ries. Sev­er­al users were sur­prised to learn recent­ly that Face­book had been col­lect­ing infor­ma­tion about whom they texted or called and for how long, though not the actu­al con­tents of text mes­sages. It seemed to have been done with­out explic­it con­sent, though Face­book says it col­lect­ed such data only from Android users who specif­i­cal­ly allowed it to do so — for instance, by agree­ing to per­mis­sions when installing Face­book.

    Face­book also added clar­i­fi­ca­tion that local laws could affect what it does with “sen­si­tive” data on peo­ple, such as infor­ma­tion about a user’s race or eth­nic­i­ty, health, polit­i­cal views or even trade union mem­ber­ship. This and oth­er infor­ma­tion, the new pol­i­cy states, “could be sub­ject to spe­cial pro­tec­tions under the laws of your coun­try.” But it means the com­pa­ny is unlike­ly to apply stricter pro­tec­tions to coun­tries with loos­er pri­va­cy laws — such as the U.S., for exam­ple. Face­book has always had region­al dif­fer­ences in poli­cies, and the new doc­u­ment makes that clear­er.

    The new pol­i­cy also makes it clear that What­sApp and Insta­gram are part of Face­book and that the com­pa­nies share infor­ma­tion about users. The two were not men­tioned in the pre­vi­ous pol­i­cy. While What­sApp still doesn’t show adver­tise­ments, and has its own pri­va­cy pol­i­cy, Insta­gram long has and its pol­i­cy is the same as Facebook’s. But the notice could be a sign of things to come for What­sApp as well.

    Oth­er changes incor­po­rate some of the restric­tions Face­book pre­vi­ous­ly announced on what third-par­ty apps can col­lect from users and their friends.

    Although Face­book says the changes aren’t prompt­ed by recent events or tighter pri­va­cy rules com­ing from the EU, it’s an oppor­tune time. It comes as Zucker­berg is set to appear April 11 before a House com­mit­tee — his first tes­ti­mo­ny before Con­gress.

    As Face­book evolved from a closed, Har­vard-only net­work with no ads to a giant cor­po­ra­tion with $40 bil­lion in adver­tis­ing rev­enue and huge sub­sidiaries like Insta­gram and What­sApp, its pri­va­cy pol­i­cy has also shift­ed — over and over.

    Almost always, crit­ics say, the changes meant a move away from pro­tect­ing user pri­va­cy toward push­ing open­ness and more shar­ing. On the oth­er hand, reg­u­la­to­ry and user pres­sure has some­times led Face­book to pull back on its data col­lec­tion and use and to explain things in plain­er lan­guage — in con­trast to dense legalese from many oth­er inter­net com­pa­nies.

    The pol­i­cy changes come a week after Face­book gave its pri­va­cy set­tings a makeover. The com­pa­ny tried to make it eas­i­er to nav­i­gate its com­plex and often con­fus­ing pri­va­cy and secu­ri­ty set­tings, though the makeover didn’t change what Face­book col­lects and shares either.

    Those who fol­lowed Facebook’s pri­va­cy gaffes over the years may feel a sense of famil­iar­i­ty. Over and over, the com­pa­ny — often Zucker­berg — owned up to mis­steps and promised changes.

    In 2009, the com­pa­ny announced that it was con­sol­i­dat­ing six pri­va­cy pages and more than 30 set­tings on to a sin­gle pri­va­cy page. Yet, some­how, the com­pa­ny said last week that users still had to go to 20 dif­fer­ent places to access all of their pri­va­cy con­trols and it was chang­ing this so the con­trols will be acces­si­ble from a sin­gle place.

    ———-

    “Face­book: 87M Users May Have Had Data Breached By Cam­bridge Ana­lyt­i­ca” by BARBARA ORTUTAY; Asso­ci­at­ed Press; 04/04/2018

    “Facebook is facing its worst privacy scandal in years following allegations that a Trump-affiliated data mining firm, Cambridge Analytica, used ill-gotten data from millions of users to try to influence elections. The company said Wednesday that as many as 87 million people might have had their data accessed — an increase from the 50 million disclosed in published reports.”

    From 50 million to 87 million: quite a jump. How high might the number get by the time this is all over? We’ll see.

    And beyond that update, Face­book also updat­ed their data-col­lec­tion dis­clo­sure poli­cies. Now they’re actu­al­ly men­tion­ing things like the grab­bing of call and text data off of your smart­phone, which they appar­ent­ly did­n’t feel the need to tell peo­ple about before:

    ...
    Ear­li­er Wednes­day, Face­book unveiled a new pri­va­cy pol­i­cy that seeks to clar­i­fy its data col­lec­tion and use.

    For instance, Face­book added a sec­tion explain­ing that it col­lects people’s con­tact infor­ma­tion if they choose to “upload, sync or import” this to the ser­vice. This may include users’ address books on their phones, as well as their call logs and text his­to­ries. The new pol­i­cy says Face­book may use this data to help “you and oth­ers find peo­ple you may know.”

    The pre­vi­ous pol­i­cy did not men­tion call logs or text his­to­ries. Sev­er­al users were sur­prised to learn recent­ly that Face­book had been col­lect­ing infor­ma­tion about whom they texted or called and for how long, though not the actu­al con­tents of text mes­sages. It seemed to have been done with­out explic­it con­sent, though Face­book says it col­lect­ed such data only from Android users who specif­i­cal­ly allowed it to do so — for instance, by agree­ing to per­mis­sions when installing Face­book.
    ...

    And note how Face­book’s update on how local pri­va­cy laws could affect its han­dling of “sen­si­tive” data implies that the absence of those local laws means the same “sen­si­tive” data isn’t going to be han­dled in a sen­si­tive man­ner. So if you were hop­ing the big new EU data pri­va­cy rules were going to impact Face­book’s poli­cies out­side the EU, nope:

    ...
    Face­book also added clar­i­fi­ca­tion that local laws could affect what it does with “sen­si­tive” data on peo­ple, such as infor­ma­tion about a user’s race or eth­nic­i­ty, health, polit­i­cal views or even trade union mem­ber­ship. This and oth­er infor­ma­tion, the new pol­i­cy states, “could be sub­ject to spe­cial pro­tec­tions under the laws of your coun­try.” But it means the com­pa­ny is unlike­ly to apply stricter pro­tec­tions to coun­tries with loos­er pri­va­cy laws — such as the U.S., for exam­ple. Face­book has always had region­al dif­fer­ences in poli­cies, and the new doc­u­ment makes that clear­er.
    ...

    And that’s just some of the updates Facebook issued today. And while a number of these updates are pretty notable, perhaps the most notable part of this flurry is that they’re updates that actually increase privacy protections, which is not how these updates have normally gone for Facebook in the past:

    ...
    As Face­book evolved from a closed, Har­vard-only net­work with no ads to a giant cor­po­ra­tion with $40 bil­lion in adver­tis­ing rev­enue and huge sub­sidiaries like Insta­gram and What­sApp, its pri­va­cy pol­i­cy has also shift­ed — over and over.

    Almost always, crit­ics say, the changes meant a move away from pro­tect­ing user pri­va­cy toward push­ing open­ness and more shar­ing. On the oth­er hand, reg­u­la­to­ry and user pres­sure has some­times led Face­book to pull back on its data col­lec­tion and use and to explain things in plain­er lan­guage — in con­trast to dense legalese from many oth­er inter­net com­pa­nies.
    ...

    And now let’s take a look at one of the oth­er dis­clo­sures Face­book made today: Remem­ber how Face­book whis­tle-blow­er Sandy Parak­i­las spec­u­lat­ed that a major­i­ty of Face­book users prob­a­bly had their Face­book pro­file infor­ma­tion scraped by app devel­op­ers using exact­ly the same tech­nique Cam­bridge Ana­lyt­i­ca used? Well, it looks like Face­book has very belat­ed­ly arrived at the same con­clu­sion:

    Wash­ing­ton Post

    Face­book said the per­son­al data of most of its 2 bil­lion users has been col­lect­ed and shared with out­siders

    by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin
    April 4, 2018 at 5:09 PM

    Face­book said Wednes­day that most of its 2 bil­lion users like­ly have had their pub­lic pro­files scraped by out­siders with­out the users’ explic­it per­mis­sion, dra­mat­i­cal­ly rais­ing the stakes in a pri­va­cy con­tro­ver­sy that has dogged the com­pa­ny for weeks, spurred inves­ti­ga­tions in the Unit­ed States and Europe, and sent the com­pa­ny’s stock price tum­bling.

    ...

    “We’re an ide­al­is­tic and opti­mistic com­pa­ny, and for the first decade, we were real­ly focused on all the good that con­nect­ing peo­ple brings,” Chief Exec­u­tive Mark Zucker­berg said on a call with reporters Wednes­day after­noon. “But it’s clear now that we didn’t focus enough on pre­vent­ing abuse and think­ing about how peo­ple could use these tools for harm as well.”

    As part of the dis­clo­sure, Face­book for the first time detailed the scale of the improp­er data col­lec­tion for Cam­bridge Ana­lyt­i­ca, a polit­i­cal data con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­can can­di­dates in the last two fed­er­al elec­tion cycles. The polit­i­cal con­sul­tan­cy gained access to Face­book infor­ma­tion on up to 87 mil­lion users, 71 mil­lion of whom are Amer­i­cans, Face­book said. Cam­bridge Ana­lyt­i­ca obtained the data to build “psy­cho­graph­ic” pro­files that would help deliv­er tar­get­ed mes­sages intend­ed to shape vot­er behav­ior in a wide range of U.S. elec­tions.

    But in research sparked by rev­e­la­tions from a Cam­bridge Ana­lyt­i­ca whistle­blow­er last month, Face­book deter­mined that the prob­lem of third-par­ty col­lec­tion of user data was far larg­er still and, with the com­pa­ny’s mas­sive user base, like­ly affect­ed a large cross-sec­tion of peo­ple in the devel­oped world.

    “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped,” the com­pa­ny wrote in its blog post.

    The scrap­ing by mali­cious actors typ­i­cal­ly involved gath­er­ing pub­lic pro­file infor­ma­tion — includ­ing names, email address­es and phone num­bers, accord­ing to Face­book — by using a “search and account recov­ery” func­tion that Face­book said it has now dis­abled. Face­book did­n’t make clear in its post exact­ly what data was col­lect­ed.

    The data obtained by Cam­bridge Ana­lyt­i­ca was more detailed and exten­sive, includ­ing the names, home towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.

    Face­book ini­tial­ly had sought to down­play the prob­lem, say­ing in March only that 270,000 peo­ple had respond­ed to a sur­vey on an app cre­at­ed by the researcher in 2014. That net­ted Cam­bridge Ana­lyt­i­ca the data on the friends of those who respond­ed to the sur­vey, with­out their per­mis­sion. But Face­book declined to say at the time how many oth­er users may have had their data col­lect­ed in the process. The whistle­blow­er, Christo­pher Wylie, a for­mer researcher for the com­pa­ny, said the real num­ber of affect­ed peo­ple was at least 50 mil­lion.

    Wylie tweet­ed on Wednes­day after­noon that Cam­bridge Ana­lyt­i­ca could have obtained even more than 87 mil­lion pro­files. “Could be more tbh,” he wrote, using an abbre­vi­a­tion for “to be hon­est.”

    Cam­bridge Ana­lyt­i­ca on Wednes­day respond­ed to Face­book’s announce­ment by say­ing that it had licensed data on 30 mil­lion users. Face­book banned Cam­bridge Ana­lyt­i­ca from its plat­form last month for obtain­ing the data under false pre­tens­es.

    Face­book’s announce­ment, made near the bot­tom of a blog post Wednes­day after­noon on plans to restrict access to data in the future, under­scores the sever­i­ty of a data mishap that appears to have affect­ed about one out of every four Amer­i­cans and sparked wide­spread out­rage at the care­less­ness of the com­pa­ny’s han­dling of infor­ma­tion on its users. Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    With its moves over the past week, Face­book is embark­ing on a major shift in its rela­tion­ship with third-par­ty app devel­op­ers that have used Facebook’s vast net­work to expand their busi­ness­es. What was large­ly an auto­mat­ed process will now involve devel­op­ers agree­ing to “strict require­ments,” the com­pa­ny said in its blog post Wednes­day. The 2015 pol­i­cy change cur­tailed devel­op­ers’ abil­i­ties to access data about people’s friends net­works but left open many loop­holes that the com­pa­ny tight­ened on Wednes­day.

    The news quick­ly rever­ber­at­ed on Capi­tol Hill, where law­mak­ers are set to grill Zucker­berg at a series of hear­ings next week.

    “The more we learn, the clear­er it is that this was an avalanche of pri­va­cy vio­la­tions that strike at the core of one of our most pre­cious Amer­i­can val­ues – the right to pri­va­cy,” said Sen. Ed Markey (D‑Mass.), who serves on the Sen­ate Com­merce Com­mit­tee, which has called on Zucker­berg to tes­ti­fy at a hear­ing next week.

    “This lat­est rev­e­la­tion is extreme­ly trou­bling and shows that Face­book still has a lot of work to do to deter­mine how big this breach actu­al­ly is,” said Rep. Frank Pal­lone Jr. (D‑N.J.), the top Demo­c­rat on the House Ener­gy and Com­merce Com­mit­tee, which will hear from Zucker­berg on Wednes­day.

    “I’m deeply con­cerned that Face­book only address­es con­cerns on its plat­form when it becomes a pub­lic cri­sis, and that is sim­ply not the way you run a com­pa­ny that is used by over 2 bil­lion peo­ple,” he said. “We need to know how they are going to fix this prob­lem next week at our hear­ing.”

    Face­book announced plans on Wednes­day to add new restric­tions to how out­siders can gain access to this data, the lat­est steps in a years-long process by the com­pa­ny to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion.

    Cambridge Analytica, which collected this information with the help of Cambridge University psychologist Aleksandr Kogan, was founded with a multimillion-dollar investment by hedge-fund billionaire Robert Mercer and headed by his daughter, Rebekah Mercer, who was the company’s president, according to documents provided by Wylie. Serving as vice president was conservative strategist Stephen K. Bannon, who also was the head of Breitbart News. He has since left both jobs and also his post as top White House adviser to Trump.

    Until Wednes­day, apps that let peo­ple input a Face­book event into their cal­en­dar could also auto­mat­i­cal­ly import lists of all the peo­ple who attend­ed that event, Face­book said. Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group. App devel­op­ers who want this access will now have to prove their activ­i­ties ben­e­fit the group. Face­book will now need to approve tools that busi­ness­es use to oper­ate Face­book pages. A busi­ness that uses an app to help it respond quick­ly to cus­tomer mes­sages, for exam­ple, will not be able to do so auto­mat­i­cal­ly. Devel­op­ers’ access to Insta­gram will also be severe­ly restrict­ed.

    Facebook is now banning apps from accessing users’ information about their religious or political views, relationship status, education, work history, fitness activity, book reading habits, music listening and news reading activity, video watching and games. Data brokers and businesses collect this type of information to build profiles of their customers’ tastes.

    Face­book last week said it is also shut­ting down access to data bro­kers who use their own data to tar­get cus­tomers on Face­book.

    Facebook’s broad changes to how data is used apply most­ly to out­siders and third par­ties. Face­book is not lim­it­ing the data the com­pa­ny itself can col­lect, nor is it restrict­ing its abil­i­ty to pro­file users to enable adver­tis­ers to tar­get them with per­son­al­ized mes­sages. One piece of data Face­book said it would stop col­lect­ing was the time of phone calls, a response to out­rage from users of Facebook’s mes­sen­ger ser­vice who dis­cov­ered that allow­ing Face­book to access their phone con­tact list was giv­ing the com­pa­ny access to their call logs.

    ———-

    “Face­book said the per­son­al data of most of its 2 bil­lion users has been col­lect­ed and shared with out­siders” by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin; Wash­ing­ton Post; 04/04/2018

    “Face­book said Wednes­day that most of its 2 bil­lion users like­ly have had their pub­lic pro­files scraped by out­siders with­out the users’ explic­it per­mis­sion, dra­mat­i­cal­ly rais­ing the stakes in a pri­va­cy con­tro­ver­sy that has dogged the com­pa­ny for weeks, spurred inves­ti­ga­tions in the Unit­ed States and Europe, and sent the com­pa­ny’s stock price tum­bling.”

    So a billion or so people probably had their Facebook profile data sucked away by app developers. Facebook apparently just discovered this. And while it’s laughable to imagine that Facebook just suddenly discovered this now, recall how Sandy Parakilas also said executives had an “it’s best not to know” attitude about how this data was used by third parties, so it’s possible that Facebook technically didn’t officially know this until now because they officially never looked before:

    ...
    As part of the dis­clo­sure, Face­book for the first time detailed the scale of the improp­er data col­lec­tion for Cam­bridge Ana­lyt­i­ca, a polit­i­cal data con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­can can­di­dates in the last two fed­er­al elec­tion cycles. The polit­i­cal con­sul­tan­cy gained access to Face­book infor­ma­tion on up to 87 mil­lion users, 71 mil­lion of whom are Amer­i­cans, Face­book said. Cam­bridge Ana­lyt­i­ca obtained the data to build “psy­cho­graph­ic” pro­files that would help deliv­er tar­get­ed mes­sages intend­ed to shape vot­er behav­ior in a wide range of U.S. elec­tions.

    But in research sparked by rev­e­la­tions from a Cam­bridge Ana­lyt­i­ca whistle­blow­er last month, Face­book deter­mined that the prob­lem of third-par­ty col­lec­tion of user data was far larg­er still and, with the com­pa­ny’s mas­sive user base, like­ly affect­ed a large cross-sec­tion of peo­ple in the devel­oped world.

    “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped,” the com­pa­ny wrote in its blog post.

    The scrap­ing by mali­cious actors typ­i­cal­ly involved gath­er­ing pub­lic pro­file infor­ma­tion — includ­ing names, email address­es and phone num­bers, accord­ing to Face­book — by using a “search and account recov­ery” func­tion that Face­book said it has now dis­abled. Face­book did­n’t make clear in its post exact­ly what data was col­lect­ed.

    The data obtained by Cam­bridge Ana­lyt­i­ca was more detailed and exten­sive, includ­ing the names, home towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.
    ...

    ““Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped,” the com­pa­ny wrote in its blog post.”

    LOL! They just dis­cov­ered this and knew noth­ing about how their mas­sive shar­ing of pro­file infor­ma­tion with app devel­op­ers might lead to a mas­sive release of pro­file data. That’s their sto­ry and they’re stick­ing to it. For now.

    And notice how it’s just casually acknowledged that “Personal data on users and their Facebook friends was easily and widely available to developers of apps before 2015,” and Facebook is announcing all these new restrictions on the data that app developers, or even data brokers, can access. And yet Facebook is acting like this is all some sort of revelation:

    ...
    Face­book’s announce­ment, made near the bot­tom of a blog post Wednes­day after­noon on plans to restrict access to data in the future, under­scores the sever­i­ty of a data mishap that appears to have affect­ed about one out of every four Amer­i­cans and sparked wide­spread out­rage at the care­less­ness of the com­pa­ny’s han­dling of infor­ma­tion on its users. Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    ...

    Face­book announced plans on Wednes­day to add new restric­tions to how out­siders can gain access to this data, the lat­est steps in a years-long process by the com­pa­ny to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion.

    Cambridge Analytica, which collected this information with the help of Cambridge University psychologist Aleksandr Kogan, was founded with a multimillion-dollar investment by hedge-fund billionaire Robert Mercer and headed by his daughter, Rebekah Mercer, who was the company’s president, according to documents provided by Wylie. Serving as vice president was conservative strategist Stephen K. Bannon, who also was the head of Breitbart News. He has since left both jobs and also his post as top White House adviser to Trump.

    Until Wednes­day, apps that let peo­ple input a Face­book event into their cal­en­dar could also auto­mat­i­cal­ly import lists of all the peo­ple who attend­ed that event, Face­book said. Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group. App devel­op­ers who want this access will now have to prove their activ­i­ties ben­e­fit the group. Face­book will now need to approve tools that busi­ness­es use to oper­ate Face­book pages. A busi­ness that uses an app to help it respond quick­ly to cus­tomer mes­sages, for exam­ple, will not be able to do so auto­mat­i­cal­ly. Devel­op­ers’ access to Insta­gram will also be severe­ly restrict­ed.

    Facebook is now banning apps from accessing users’ information about their religious or political views, relationship status, education, work history, fitness activity, book reading habits, music listening and news reading activity, video watching and games. Data brokers and businesses collect this type of information to build profiles of their customers’ tastes.

    Face­book last week said it is also shut­ting down access to data bro­kers who use their own data to tar­get cus­tomers on Face­book.

    Facebook’s broad changes to how data is used apply most­ly to out­siders and third par­ties. Face­book is not lim­it­ing the data the com­pa­ny itself can col­lect, nor is it restrict­ing its abil­i­ty to pro­file users to enable adver­tis­ers to tar­get them with per­son­al­ized mes­sages. One piece of data Face­book said it would stop col­lect­ing was the time of phone calls, a response to out­rage from users of Facebook’s mes­sen­ger ser­vice who dis­cov­ered that allow­ing Face­book to access their phone con­tact list was giv­ing the com­pa­ny access to their call logs.
    ...

    And note how Cam­bridge Ana­lyt­i­ca whis­tle-blow­er Christo­pher Wylie has already tweet­ed out that the new 87 mil­lion esti­mate might not be high enough:

    ...
    Face­book ini­tial­ly had sought to down­play the prob­lem, say­ing in March only that 270,000 peo­ple had respond­ed to a sur­vey on an app cre­at­ed by the researcher in 2014. That net­ted Cam­bridge Ana­lyt­i­ca the data on the friends of those who respond­ed to the sur­vey, with­out their per­mis­sion. But Face­book declined to say at the time how many oth­er users may have had their data col­lect­ed in the process. The whistle­blow­er, Christo­pher Wylie, a for­mer researcher for the com­pa­ny, said the real num­ber of affect­ed peo­ple was at least 50 mil­lion.

    Wylie tweet­ed on Wednes­day after­noon that Cam­bridge Ana­lyt­i­ca could have obtained even more than 87 mil­lion pro­files. “Could be more tbh,” he wrote, using an abbre­vi­a­tion for “to be hon­est.”
    ...

    “Could be more tbh.” It’s a rather omi­nous tweet con­sid­er­ing the con­text.

    And don’t forget that the original count of people who actually used the Cambridge Analytica app, ~270,000, hasn’t been updated. That’s still just 270,000 people. So this scandal is giving us a sense of just how many people were likely getting their profile information grabbed by app developers using the “Friends Permission” feature. When the total was 50 million people, that came out to roughly 184 friends getting their profiles grabbed for each person who actually downloaded the app. But if it’s 87 million people, that makes it roughly 321 friends for each Cambridge Analytica app user on average.

    Along those lines, it’s worth noting that the average number of friends Facebook users have is 338, while the median number of friends is 200, according to a 2014 Pew research poll. So if that 87 million number keeps climbing, and therefore the implied number of friends per user of the Cambridge Analytica app keeps climbing too, at some point we’re going to get into suspicious territory and have to ask whether the users of that app were unusually popular or whether Cambridge Analytica was getting data from more than just that app.
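    For what it’s worth, here’s that back-of-the-envelope arithmetic as a quick Python script. The 270,000 and 50/87 million figures are the reported ones; the per-user ratios are just the implied averages, not anything Facebook has confirmed:

    ```python
    # Implied friends-per-app-user ratios from the reported figures.
    app_users = 270_000  # people who actually used the Cambridge Analytica app

    for total_affected in (50_000_000, 87_000_000):
        friends_only = total_affected - app_users  # people who never touched the app
        per_user = friends_only / app_users        # implied friends harvested per app user
        print(f"{total_affected:,} affected -> ~{per_user:.0f} friends per app user")

    # Prints:
    # 50,000,000 affected -> ~184 friends per app user
    # 87,000,000 affected -> ~321 friends per app user
    ```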

    After all, for all we know Cambridge Analytica may have simply purchased a bunch of data on the Facebook-profile black market, something else Sandy Parakilas warned about. So how high might that 87 million number get if Cambridge Analytica was also buying this information from other app developers? Who knows, although at this point “a billion profiles” can no longer be ruled out, thanks to Facebook’s very belated update today.

    Posted by Pterrafractyl | April 4, 2018, 3:33 pm
  3. And the hits keep coming: Here’s an article with some more information on the disclosure Facebook made on Wednesday that “malicious actors” may have been using a couple of ‘features’ Facebook provides to scrape public profile information from Facebook accounts and associate that information with email addresses and phone numbers. This is separate from the data collection technique used by the Cambridge Analytica app, and thousands of other app developers, to grab the private profile information of app users and their friends.

    One technique used by these “malicious actors” was to simply feed phone numbers and email addresses into a Facebook “search” box that would return the Facebook profile associated with that email or phone number. All the public information on that profile could then be collected and associated with that email/phone data. Users had the option of turning off the ability for others to find their profile this way, but it was turned on by default and apparently few people turned it off.

    The second technique involved an account recovery tool: Facebook would serve up names, profile pictures and links to the public profiles themselves to anyone pretending to be a Facebook user who had forgotten how to access their account.

    And according to Facebook, this was being done by actors obtaining email addresses and phone numbers on people on the Dark Web and then setting up scripts to automate this process for large numbers of emails and phone numbers, “with few Facebook users likely escaping the scam.” In other words, almost every Facebook user probably had their email and phone number associated with their Facebook account via this method. Also keep in mind that you don’t need to go to the Dark Web to buy lists of email addresses and phone numbers, so placing an emphasis on the “Dark Web” as the source for this information is likely part of Facebook’s ongoing attempt to ensure that this scandal doesn’t turn into an educational experience for the public on how widespread the data brokerage industry really is and how much information on people is legally and commercially available. In other words, these “malicious actors” were probably operators in the commercial data brokerage market in many cases.
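    As a rough illustration of how trivially this kind of enumeration could be automated, here’s a minimal sketch. `lookup_profile` is a hypothetical stand-in for the since-disabled search-by-phone-or-email feature, not a real Facebook API, and the directory data is invented:

    ```python
    from typing import Optional

    def lookup_profile(identifier: str) -> Optional[dict]:
        """Hypothetical stand-in for the old search-by-phone/email feature."""
        fake_directory = {  # invented data, for illustration only
            "+1-555-0147": {"name": "A. Example", "hometown": "Springfield"},
        }
        return fake_directory.get(identifier)

    def enrich(purchased_list):
        """Tie bought emails/phone numbers to names and public profile data."""
        results = []
        for identifier in purchased_list:
            profile = lookup_profile(identifier)
            if profile:
                results.append({"identifier": identifier, **profile})
        return results

    print(enrich(["+1-555-0147", "someone@example.com"]))
    # [{'identifier': '+1-555-0147', 'name': 'A. Example', 'hometown': 'Springfield'}]
    ```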

    And as the article notes, pairing email and phone number information with the kind of information people made publicly available on their profiles yields exactly the kind of data set that identity thieves want as a starting point for stealing your identity.

    The article also includes more information on just what kind of private profile information app developers like Cambridge Analytica were allowed to grab. Because it’s important to note that we still don’t have clarity on exactly what app developers were allowed to take from Facebook profiles. We’ve heard vague descriptions of what was available to them, like Facebook’s ‘profile’ of you (presumably, what Facebook has learned or inferred about you) and the list of what you “liked”, but it hasn’t been clear whether app developers also had access to literally all of your private Facebook posts. Well, based on the following article, it does indeed sound like they potentially did. And a lot of that data is probably available on the Dark Web and other black markets at this point, because why not? Facebook made it available and it’s valuable, so why wouldn’t we expect it to be for sale?

    And the arti­cle makes one more stun­ning rev­e­la­tion regard­ing the per­mis­sions app devel­op­ers had to scrape this pri­vate infor­ma­tion: Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group.

    So while Facebook hasn’t yet admitted that it made almost all the private information on people’s Facebook profiles available to identity thieves and any other bad actors for years with little to no oversight, and that this data is probably floating around on the Dark Web for sale, the company is getting much closer to admitting it with this latest round of disclosures:

    The Wash­ing­ton Post

    Face­book: ‘Mali­cious actors’ used its tools to dis­cov­er iden­ti­ties and col­lect data on a mas­sive glob­al scale

    by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin
    April 4, 2018 at 8:13 PM

    Face­book said Wednes­day that “mali­cious actors” took advan­tage of search tools on its plat­form, mak­ing it pos­si­ble for them to dis­cov­er the iden­ti­ties and col­lect infor­ma­tion on most of its 2 bil­lion users world­wide.

    The rev­e­la­tion came amid ris­ing acknowl­edge­ment by Face­book about its strug­gles to con­trol the data it gath­ers on users. Among the announce­ments Wednes­day was that Cam­bridge Ana­lyt­i­ca, a polit­i­cal con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­cans, had improp­er­ly gath­ered detailed Face­book infor­ma­tion on 87 mil­lion peo­ple, of whom 71 mil­lion were Amer­i­cans.

    But the abuse of Facebook’s search tools — now dis­abled — hap­pened far more broad­ly and over the course of sev­er­al years, with few Face­book users like­ly escap­ing the scam, com­pa­ny offi­cials acknowl­edged.

    The scam start­ed when mali­cious hack­ers har­vest­ed email address­es and phone num­bers on the so-called “Dark Web,” where crim­i­nals post infor­ma­tion stolen from data breach­es over the years. Then the hack­ers used auto­mat­ed com­put­er pro­grams to feed the num­bers and address­es into Facebook’s “search” box, allow­ing them to dis­cov­er the full names of peo­ple affil­i­at­ed with the phone num­bers or address­es, along with what­ev­er Face­book pro­file infor­ma­tion they chose to make pub­lic, often includ­ing their pro­file pho­tos and home­town.

    “We built this fea­ture, and it’s very use­ful. There were a lot of peo­ple using it up until we shut it down today,” Chief Exec­u­tive Mark Zucker­berg said in a call with reporters Wednes­day.

    Face­book said in a blog post Wednes­day, “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped.”

    Face­book users could have blocked this search func­tion, which was turned on by default, by tweak­ing their set­tings to restrict find­ing their iden­ti­ties by using phone num­bers or email address­es. But research has con­sis­tent­ly shown that users of online plat­forms rarely adjust default pri­va­cy set­tings and often fail to under­stand what infor­ma­tion they are shar­ing.

    Hack­ers also abused Facebook’s account recov­ery func­tion, by pre­tend­ing to be legit­i­mate users who had for­got­ten account details. Facebook’s recov­ery sys­tem served up names, pro­file pic­tures and links to the pub­lic pro­files them­selves. This tool could also be blocked in pri­va­cy set­tings.

    Names, phone num­bers, email address­es and oth­er per­son­al infor­ma­tion amount to crit­i­cal starter kits for iden­ti­ty theft and oth­er mali­cious online activ­i­ty, experts on Inter­net crime say. The Face­book hack allowed bad actors to tie raw data to people’s real iden­ti­ties and build fuller pro­files of them.

    Privacy experts had issued warnings that the phone number and email address lookup tool left Facebook users’ data exposed.

    Face­book didn’t dis­close who the mali­cious actors are, how the data might have been used, or exact­ly how many peo­ple were affect­ed.

    The rev­e­la­tions about the pri­va­cy mishaps come at a per­ilous time for Face­book, which since last month has wres­tled with the fall­out of how the data of tens of mil­lions of Amer­i­cans end­ed up in the hands of Cam­bridge Ana­lyt­i­ca. Those reports have spurred inves­ti­ga­tions in the Unit­ed States and Europe and sent the company’s stock price tum­bling.

    The news quick­ly rever­ber­at­ed on Capi­tol Hill, where law­mak­ers are set to grill Zucker­berg at a series of hear­ings next week.

    “The more we learn, the clear­er it is that this was an avalanche of pri­va­cy vio­la­tions that strike at the core of one of our most pre­cious Amer­i­can val­ues – the right to pri­va­cy,” said Sen. Ed Markey (D‑Mass.), who serves on the Sen­ate Com­merce Com­mit­tee, which has called on Zucker­berg to tes­ti­fy at a hear­ing next week.

    Per­haps the most urgent ques­tion for Face­book is whether its prac­tices ran afoul of a set­tle­ment it bro­kered with the Fed­er­al Trade Com­mis­sion in 2011 in response to pre­vi­ous con­tro­ver­sies over its han­dling of user data.

    At the time, the FTC fault­ed Face­book for mis­rep­re­sent­ing the pri­va­cy pro­tec­tions it afford­ed its users and required the com­pa­ny to main­tain a com­pre­hen­sive pri­va­cy pol­i­cy and ask per­mis­sion before shar­ing user data in new ways. Vio­lat­ing the terms could result in many mil­lions of dol­lars of fines.

    The FTC said last week that it would open a new investigation in light of the Cambridge Analytica news, and Wednesday’s revelations are likely to complicate the legal situation, said David Vladeck, a former FTC director of consumer protection who oversaw the 2011 consent decree.

    “This is a com­pa­ny that is, in my view, like­ly gross­ly out of com­pli­ance with the FTC con­sent decree,” said Vladeck, now a George­town Uni­ver­si­ty Law pro­fes­sor. “I don’t think that after these rev­e­la­tions they have any defense at all.” He called the num­bers “just stag­ger­ing.”

    The data Cam­bridge Ana­lyt­i­ca obtained relied on dif­fer­ent tech­niques and was more detailed and exten­sive than what the hack­ers col­lect­ed using Facebook’s search func­tions. The Cam­bridge Ana­lyt­i­ca data set includ­ed user names, home­towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users and their friends, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.

    Face­book said it banned Cam­bridge Ana­lyt­i­ca last month because the data firm improp­er­ly obtained pro­file infor­ma­tion.

    Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    Face­book in March declined to say how much user data went to Cam­bridge Ana­lyt­i­ca, say­ing only that 270,000 peo­ple had respond­ed to a sur­vey app cre­at­ed by the researcher in 2014. The researcher was able to gath­er infor­ma­tion on the friends of the respon­dents with­out their per­mis­sion, vast­ly expand­ing the scope of his data. That researcher then passed the infor­ma­tion on to Cam­bridge Ana­lyt­i­ca.

    Face­book declined to say at the time how many oth­er users may have had their data col­lect­ed in the process. A Cam­bridge Ana­lyt­i­ca whistle­blow­er, for­mer researcher Christo­pher Wylie, said last month the real num­ber of affect­ed peo­ple was at least 50 mil­lion.

    ...

    With its moves over the past week, Face­book is embark­ing on a major shift in its rela­tion­ship with third-par­ty app devel­op­ers that have used Facebook’s vast net­work to expand their busi­ness­es. What was large­ly an auto­mat­ed process will now involve devel­op­ers agree­ing to “strict require­ments,” the com­pa­ny said in its blog post Wednes­day. The 2015 pol­i­cy change cur­tailed devel­op­ers’ abil­i­ties to access data about people’s friends net­works but left open many loop­holes that the com­pa­ny tight­ened on Wednes­day.

    “This lat­est rev­e­la­tion is extreme­ly trou­bling and shows that Face­book still has a lot of work to do to deter­mine how big this breach actu­al­ly is,” said Rep. Frank Pal­lone Jr. (D‑N.J.), the top Demo­c­rat on the House Ener­gy and Com­merce Com­mit­tee, which will hear from Zucker­berg next Wednes­day.

    “I’m deeply con­cerned that Face­book only address­es con­cerns on its plat­form when it becomes a pub­lic cri­sis, and that is sim­ply not the way you run a com­pa­ny that is used by over 2 bil­lion peo­ple,” he said.

    Face­book announced plans on Wednes­day to add new restric­tions to how app devel­op­ers, data bro­kers and oth­er third par­ties can gain access to this data, the lat­est steps in a years-long process to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion, Face­book said.

    Until Wednes­day, apps that let peo­ple input a Face­book event into their cal­en­dar could also auto­mat­i­cal­ly import lists of all the peo­ple who attend­ed that event, Face­book said. Admin­is­tra­tors of pri­vate groups, some of which have tens of thou­sands of mem­bers, could also let apps scrape the Face­book posts and pro­files of mem­bers of that group. App devel­op­ers who want this access will now have to prove their activ­i­ties ben­e­fit the group. Face­book will now need to approve tools that busi­ness­es use to oper­ate Face­book pages. A busi­ness that uses an app to help it respond quick­ly to cus­tomer mes­sages, for exam­ple, will not be able to do so auto­mat­i­cal­ly. Devel­op­ers’ access to Insta­gram will also be severe­ly restrict­ed.

    Face­book is now ban­ning apps from access­ing users’ infor­ma­tion about their reli­gious or polit­i­cal views, rela­tion­ship sta­tus, edu­ca­tion, work his­to­ry, fit­ness activ­i­ty, book read­ing habits, music lis­ten­ing and news read­ing activ­i­ty, video watch­ing and games. Data bro­kers and busi­ness­es col­lect this type of infor­ma­tion to build pro­files of their cus­tomers’ tastes.

    ———-

    “Face­book: ‘Mali­cious actors’ used its tools to dis­cov­er iden­ti­ties and col­lect data on a mas­sive glob­al scale” by Craig Tim­berg, Tony Romm and Eliz­a­beth Dwoskin; The Wash­ing­ton Post; 04/04/2018

    “But the abuse of Facebook’s search tools — now dis­abled — hap­pened far more broad­ly and over the course of sev­er­al years, with few Face­book users like­ly escap­ing the scam, com­pa­ny offi­cials acknowl­edged.”

    Few Facebook users likely escaped the “scam” of a feature that Facebook turned on by default and that was an obvious, massive privacy violation. A “scam” that was also far less of a privacy violation than what Facebook made available to app developers, but still one that likely impacted almost all Facebook users. And the more information people made available on their public profiles, the more these “scammers” could collect about them:

    ...
    The scam start­ed when mali­cious hack­ers har­vest­ed email address­es and phone num­bers on the so-called “Dark Web,” where crim­i­nals post infor­ma­tion stolen from data breach­es over the years. Then the hack­ers used auto­mat­ed com­put­er pro­grams to feed the num­bers and address­es into Facebook’s “search” box, allow­ing them to dis­cov­er the full names of peo­ple affil­i­at­ed with the phone num­bers or address­es, along with what­ev­er Face­book pro­file infor­ma­tion they chose to make pub­lic, often includ­ing their pro­file pho­tos and home­town.

    “We built this fea­ture, and it’s very use­ful. There were a lot of peo­ple using it up until we shut it down today,” Chief Exec­u­tive Mark Zucker­berg said in a call with reporters Wednes­day.

    Face­book said in a blog post Wednes­day, “Giv­en the scale and sophis­ti­ca­tion of the activ­i­ty we’ve seen, we believe most peo­ple on Face­book could have had their pub­lic pro­file scraped.”

    Face­book users could have blocked this search func­tion, which was turned on by default, by tweak­ing their set­tings to restrict find­ing their iden­ti­ties by using phone num­bers or email address­es. But research has con­sis­tent­ly shown that users of online plat­forms rarely adjust default pri­va­cy set­tings and often fail to under­stand what infor­ma­tion they are shar­ing.
    ...

    And then there was the account recovery function, which Facebook also made easy to exploit:

    ...
    Hack­ers also abused Facebook’s account recov­ery func­tion, by pre­tend­ing to be legit­i­mate users who had for­got­ten account details. Facebook’s recov­ery sys­tem served up names, pro­file pic­tures and links to the pub­lic pro­files them­selves. This tool could also be blocked in pri­va­cy set­tings.
    ...

    And, again, while this kind of information wasn’t necessarily as extensive as the private information Facebook made available to app developers, it was still a very valuable starter kit for identity theft:

    ...
    Names, phone num­bers, email address­es and oth­er per­son­al infor­ma­tion amount to crit­i­cal starter kits for iden­ti­ty theft and oth­er mali­cious online activ­i­ty, experts on Inter­net crime say. The Face­book hack allowed bad actors to tie raw data to people’s real iden­ti­ties and build fuller pro­files of them.

    Privacy experts had issued warnings that the phone number and email address lookup tool left Facebook users’ data exposed.
    ...

    And, of course, that ‘identity theft starter kit’ data — phone numbers and emails associated with real names and other publicly available information — could potentially be combined with the private information made available to app developers. Information that apparently included “people’s relationship status, calendar events, private Facebook posts, and much more data”:

    ...
    The data Cam­bridge Ana­lyt­i­ca obtained relied on dif­fer­ent tech­niques and was more detailed and exten­sive than what the hack­ers col­lect­ed using Facebook’s search func­tions. The Cam­bridge Ana­lyt­i­ca data set includ­ed user names, home­towns, work and edu­ca­tion­al his­to­ries, reli­gious affil­i­a­tions and Face­book “likes” of users and their friends, among oth­er data. Oth­er users affect­ed were in coun­tries includ­ing the Philip­pines, Indone­sia, U.K., Cana­da and Mex­i­co.

    Face­book said it banned Cam­bridge Ana­lyt­i­ca last month because the data firm improp­er­ly obtained pro­file infor­ma­tion.

    Per­son­al data on users and their Face­book friends was eas­i­ly and wide­ly avail­able to devel­op­ers of apps before 2015.

    ...

    With its moves over the past week, Face­book is embark­ing on a major shift in its rela­tion­ship with third-par­ty app devel­op­ers that have used Facebook’s vast net­work to expand their busi­ness­es. What was large­ly an auto­mat­ed process will now involve devel­op­ers agree­ing to “strict require­ments,” the com­pa­ny said in its blog post Wednes­day. The 2015 pol­i­cy change cur­tailed devel­op­ers’ abil­i­ties to access data about people’s friends net­works but left open many loop­holes that the com­pa­ny tight­ened on Wednes­day.

    ...

    Face­book announced plans on Wednes­day to add new restric­tions to how app devel­op­ers, data bro­kers and oth­er third par­ties can gain access to this data, the lat­est steps in a years-long process to improve its dam­aged rep­u­ta­tion as a stew­ard of the per­son­al pri­va­cy of its users.

    Devel­op­ers who in the past could get access to people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data, will now be cut off from access or be required to endure a much stricter process for obtain­ing the infor­ma­tion, Face­book said.
    ...

    So if “people’s rela­tion­ship sta­tus, cal­en­dar events, pri­vate Face­book posts, and much more data” was made avail­able to app devel­op­ers, it rais­es the ques­tion: what was­n’t made avail­able?

    It’s all a reminder that there is indeed a “mali­cious actor” who took pos­ses­sion of all your pri­vate data and its name is Face­book.

    Posted by Pterrafractyl | April 5, 2018, 1:53 pm
  4. Here’s a series of articles that serve as a reminder that Facebook isn’t just an ever-growing vault of personal data profiles on almost everyone (albeit a very leaky vault). It’s also a medium through which other ever-growing vaults of personal data, in particular those of data brokerage giants like Acxiom, can be merged with Facebook’s vault, ostensibly for the purpose of making Facebook’s targeted ads even more targeted.

    This third-party sharing is done through Facebook’s “Partner Categories” program: Facebook advertisers have the option of filtering their Facebook ad targeting based on, for instance, the group of people who purchased cereal, using data from Acxiom’s consumer spending database. As such, the data broker giants that are potentially Facebook’s biggest competitors become Facebook’s biggest partners.
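    Conceptually, a Partner Categories buy amounted to intersecting one of Facebook’s own audience segments with one of the broker’s offline segments. Here’s a toy sketch; in reality the matching happens on hashed identifiers inside Facebook’s systems, and the user IDs and segment descriptions below are invented:

    ```python
    # Toy model of Partner Categories-style targeting: the ad is delivered only
    # to users who appear in both the platform's segment and the broker's
    # offline segment.
    facebook_segment = {"user123", "user456", "user789"}  # e.g. parents, ages 25-34
    acxiom_segment = {"user456", "user789", "user999"}    # e.g. bought cereal with a rewards card

    target_audience = facebook_segment & acxiom_segment
    print(sorted(target_audience))  # ['user456', 'user789']
    ```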

    Not sur­pris­ing­ly, merg­ing Face­book’s exten­sive per­son­al data pro­files with the already very exten­sive per­son­al data pro­files held by the data bro­ker­age indus­try rais­es a num­ber of pri­va­cy con­cerns. Pri­va­cy con­cerns that are hit­ting a peak in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. So, also not sur­pris­ing­ly, Face­book just announced the end of the Part­ner Cat­e­gories pro­gram over the next six months as part of its post-Cam­bridge Ana­lyt­i­ca pub­lic rela­tions cam­paign:

    Recode

    Face­book is cut­ting third-par­ty data providers out of ad tar­get­ing to clean up its act
    Facebook says it’s going to stop using data from third-party data providers like Experian and Acxiom.

    By Kurt Wag­n­er
    Mar 28, 2018, 6:11pm EDT

    Face­book is going to lim­it how much data it makes avail­able to adver­tis­ers buy­ing hyper-tar­get­ed ads on the social net­work.

    More specif­i­cal­ly, Face­book says it will stop using data from third-par­ty data aggre­ga­tors — com­pa­nies like Exper­ian and Acx­iom — to help sup­ple­ment its own data set for ad tar­get­ing.

    Face­book pre­vi­ous­ly let adver­tis­ers tar­get peo­ple using data from a num­ber of sources:

    * Data from Face­book, which the com­pa­ny col­lects from user activ­i­ty and pro­files.
    * Data from the adver­tis­er itself, like cus­tomer emails they’ve col­lect­ed on their own.
    * Data from third-par­ty ser­vices like Exper­ian, which can col­lect offline data such as pur­chas­ing activ­i­ty, that Face­book uses to help sup­ple­ment its own data set. When mar­keters use this data to tar­get ads on Face­book, the social giant gives some of the ad mon­ey from that sale to the data provider.

    This third data set is pri­mar­i­ly help­ful to adver­tis­ers that might not have their own cus­tomer data, like small busi­ness­es or con­sumer pack­aged goods com­pa­nies that sell their prod­ucts through brick-and-mor­tar retail­ers.

    But now Face­book is chang­ing its rela­tion­ship with these third par­ties as part of a broad­er effort to clean up its data prac­tices fol­low­ing the recent Cam­bridge Ana­lyt­i­ca pri­va­cy scan­dal. (Face­book still uses these com­pa­nies to help with ad mea­sure­ment, though a source says that the com­pa­ny is reeval­u­at­ing that prac­tice, too.)

    The think­ing is that Face­book has less con­trol over where and how these firms col­lect their data, which makes using it more of a risk. Appar­ent­ly it’s not impor­tant enough to Facebook’s rev­enue stream to deal with a poten­tial headache if some­thing goes wrong.

    Face­book con­firmed the move in a state­ment attrib­ut­able to Gra­ham Mudd, a prod­uct mar­ket­ing direc­tor at the com­pa­ny.

    “We want to let advertisers know that we will be shutting down Partner Categories,” Mudd said in the statement. “This product enables third-party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.”

    Had it been made ear­li­er, Facebook’s deci­sion to stop using third-par­ty data providers for tar­get­ing would not have impact­ed the out­come of the Cam­bridge Ana­lyt­i­ca scan­dal, in which the out­side firm col­lect­ed the per­son­al data of some 50 mil­lion Face­book users with­out their per­mis­sion.

    ...

    ———

    “Face­book is cut­ting third-par­ty data providers out of ad tar­get­ing to clean up its act” by Kurt Wag­n­er; Recode; 03/28/2018

    “More specif­i­cal­ly, Face­book says it will stop using data from third-par­ty data aggre­ga­tors — com­pa­nies like Exper­ian and Acx­iom — to help sup­ple­ment its own data set for ad tar­get­ing.”

    As we can see, Face­book isn’t just promis­ing to cut off the per­son­al data leak­ing out of its plat­forms to address pri­va­cy con­cerns. It’s also promis­ing to cut off some of the data flow­ing into its plat­forms. Data from the data bro­ker­age giants flow­ing into Face­book in exchange for some of the ad mon­ey when that data results in a sale:

    ...
    Face­book pre­vi­ous­ly let adver­tis­ers tar­get peo­ple using data from a num­ber of sources:

    * Data from Face­book, which the com­pa­ny col­lects from user activ­i­ty and pro­files.
    * Data from the adver­tis­er itself, like cus­tomer emails they’ve col­lect­ed on their own.
    * Data from third-par­ty ser­vices like Exper­ian, which can col­lect offline data such as pur­chas­ing activ­i­ty, that Face­book uses to help sup­ple­ment its own data set. When mar­keters use this data to tar­get ads on Face­book, the social giant gives some of the ad mon­ey from that sale to the data provider.

    This third data set is pri­mar­i­ly help­ful to adver­tis­ers that might not have their own cus­tomer data, like small busi­ness­es or con­sumer pack­aged goods com­pa­nies that sell their prod­ucts through brick-and-mor­tar retail­ers.
    ...

    And while the public explanation for this move is that it addresses privacy concerns, there’s also the suspicion that Facebook is willing to make it simply because Facebook doesn’t actually need this third-party data to make its ads effective. So while cutting out the data-brokerage data is a potential loss for Facebook, that loss might be outweighed by the growing privacy headache that comes from directly incorporating third-party data into its ad algorithms when Facebook can’t control whether these third-party brokerages obtained their own data sets in an ethical manner. In other words, the headache isn’t worth the extra profit this data-sharing arrangement yields:

    ...
    The think­ing is that Face­book has less con­trol over where and how these firms col­lect their data, which makes using it more of a risk. Appar­ent­ly it’s not impor­tant enough to Facebook’s rev­enue stream to deal with a poten­tial headache if some­thing goes wrong.
    ...

    So is it the case that Face­book is using this Cam­bridge Ana­lyt­i­ca scan­dal as an excuse to cut these data bro­kers that Face­book does­n’t actu­al­ly need out of the loop? Well, as the fol­low­ing arti­cle notes, it’s not like Face­book does­n’t have the option of buy­ing that data from the data bro­kers them­selves and just incor­po­rat­ing the data into their inter­nal ad tar­get­ing mod­els. But Face­book always had that option and still chose to go ahead with this Part­ner Cat­e­gories pro­gram, so it’s pre­sum­ably the case that pay­ing out­right for that bro­ker­age data is more expen­sive than set­ting up the Part­ner Cat­e­gories pro­gram and giv­ing the bro­ker­ages a cut of the ad sales.

    As the fol­low­ing arti­cle also notes, adver­tis­ers will still be able to get that data bro­ker­age infor­ma­tion for the pur­pose of fur­ther tar­get­ing Face­book users. How so? Because notice the sec­ond data set in the above arti­cle that Face­book uses for tar­get­ing ads: data sets from the adver­tis­ers them­selves. Like lists of email address­es of the peo­ple they want to tar­get. It’s the same Cus­tom Audi­ences tool that was used exten­sive­ly by the Trump cam­paign for its “A/B test­ing on steroids” psy­cho­log­i­cal pro­fil­ing tech­niques. So there’s noth­ing stop­ping adver­tis­ers from get­ting that list of email address­es from a data bro­ker and then feed­ing that into Face­book, effec­tive­ly leav­ing the same arrange­ment in place but in a less direct man­ner. But it’s less con­ve­nient and pre­sum­ably less prof­itable if adver­tis­ers have to do this them­selves. It’s a reminder that part­ner­ing means more prof­its in the busi­ness Face­book is in.
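
    To make that indirect route concrete, here’s a minimal sketch, in Python, of what the advertiser’s side of a custom-audience-style upload looks like, using entirely hypothetical email addresses: the broker supplies a contact list, the advertiser normalizes and hashes each address, and the hashed list is what gets uploaded for matching. The platform hashes its own users’ contact info the same way, so identical tokens identify the same person. Hashing doesn’t prevent the match; it enables it:

        import hashlib

        def normalize_and_hash(email: str) -> str:
            # Trim and lowercase the address, then SHA-256 hash it -- the
            # common normalization for hashed-identifier audience uploads.
            return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

        # A hypothetical contact list purchased from a data broker.
        broker_list = ["Jane.Doe@example.com", "  john.smith@example.org "]

        # The advertiser uploads hashes, not raw addresses; the platform
        # compares them against hashes of its own users' emails.
        custom_audience = [normalize_and_hash(addr) for addr in broker_list]
        print(custom_audience)

    The privacy point is that the hashing here is a matching convenience, not an anonymization step: whoever holds both hashed lists can still link every record.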

    Finally, as digital privacy expert Frank Pasquale points out in the following article, there’s no real reason to assume Facebook is actually going to stand by this pledge to shut down the Partner Categories program over the next six months. It might just quietly start the program up again in some other form, or simply reverse the decision after the public’s attention shifts away.

    So while there are valid ques­tions as to why Face­book is mak­ing this pol­i­cy change, there are unfor­tu­nate­ly also valid ques­tions over whether or not this pol­i­cy change will make any dif­fer­ence and whether or not Face­book will even make this pol­i­cy change at all:

    The Wash­ing­ton Post

    Face­book, long­time friend of data bro­kers, becomes their stiffest com­pe­ti­tion

    by Drew Har­well
    March 29, 2018

    Face­book was for years a best friend to the data bro­kers who make hun­dreds of mil­lions of dol­lars a year gath­er­ing and sell­ing Amer­i­cans’ per­son­al infor­ma­tion. Now, the world’s largest social net­work is sour­ing that rela­tion­ship — a sign that the com­pa­ny believes it has over­shad­owed their data-gath­er­ing machine.

    Face­book said late Wednes­day that it would stop data bro­kers from help­ing adver­tis­ers tar­get peo­ple with ads, sev­er­ing one of the key meth­ods mar­keters used to link users’ Face­book data about their friends and lifestyle with their offline data about their fam­i­lies, finances and health.

    The data bro­kers have for years served a silent but crit­i­cal role in direct­ing users’ atten­tion to Face­book’s ads. They also, crit­ics say, stealth­ily con­tributed to the seem­ing­ly all-know­ing creepi­ness of users see­ing ads for things they nev­er men­tioned on their Face­book pages. A mar­keter who want­ed to tar­get new moth­ers, for instance, could use the data bro­kers’ infor­ma­tion to send Face­book ads to all women who bought baby for­mu­la with a store rewards card.

    Acx­iom, Exper­ian and oth­er data bro­kers once had a prized seat at Face­book’s table, through a pro­gram called Part­ner Cat­e­gories, that allowed adver­tis­ers to tap into the shad­ow pro­files craft­ed with data from Face­book and the bro­kers to drill down on their tar­get audi­ence. The data bro­kers got a cut of the mon­ey when the ads they helped place turned into a sale, and Face­book also shared some data with the bro­kers to help gauge how well its ads per­formed.

    A Face­book direc­tor said in a state­ment that the com­pa­ny will wind down that pro­gram over the next six months, which “will help improve people’s pri­va­cy on Face­book.” But pri­va­cy experts saw the move as an asser­tion of dom­i­nance from the social net­work, which in recent years has con­sol­i­dat­ed its pow­er over an increas­ing­ly inti­mate lev­el of detail about its users’ lives — and wants adver­tis­ers to pay for its exper­tise.

    “Face­book is offi­cial­ly in the data-min­ing busi­ness,” said Joel Win­ston, a pri­va­cy lawyer in Pitts­burgh. “It’s a defin­i­tive sig­nal that Face­book’s data cap­ture and iden­ti­ty-tar­get­ing tech­nol­o­gy is light-years ahead of its com­peti­tors’.”

    The move comes as Face­book bat­tles a major pri­va­cy scan­dal in the wake of rev­e­la­tions that a polit­i­cal data firm, Cam­bridge Ana­lyt­i­ca, took advan­tage of the site’s loose pri­va­cy rules and improp­er­ly obtained data on more than 30 mil­lion Face­book users. The com­pa­ny has in recent days out­lined steps show­ing how users can see and lim­it what Face­book knows about them, fol­low­ing what chief exec­u­tive Mark Zucker­berg called a “major breach of trust.”

    In 2015, Face­book restrict­ed the kinds of data that out­side devel­op­ers, includ­ing the researcher who fed data to Cam­bridge Ana­lyt­i­ca, could gath­er from users and their friends. Christo­pher Wylie, Cam­bridge Ana­lyt­i­ca’s whistle­blow­er, told The Wash­ing­ton Post that Cam­bridge Ana­lyt­i­ca had paired Face­book data with infor­ma­tion from data bro­kers to build out its vot­er pro­files.

    But the social net­work con­tin­ued to strength­en its ties with the data bro­kers who gath­er and repack­age user infor­ma­tion. That year, Acx­iom said its involve­ment in Part­ner Cat­e­gories helped its adver­tis­ing clients use Face­book “to bet­ter con­nect with peo­ple more inclined to buy cer­tain prod­ucts or ser­vices,” adding that its clients includ­ed most of the coun­try’s top 10 insur­ers, retail­ers, automak­ers, hotels, telecom­mu­ni­ca­tions giants and banks. One year ear­li­er, in 2014, the Fed­er­al Trade Com­mis­sion issued a report find­ing that data bro­kers had col­lect­ed infor­ma­tion on near­ly every Amer­i­can and say­ing that the bro­kers “oper­ate with a fun­da­men­tal lack of trans­paren­cy.”

    While Face­book gath­ers much of its 2 bil­lion users’ online infor­ma­tion, the data bro­kers attempt to scoop up every­thing else, includ­ing bil­lions of bits of infor­ma­tion from vot­er rolls, prop­er­ty records, pur­chase his­to­ries, loy­al­ty card pro­grams, con­sumer sur­veys, car deal­er­ship records and oth­er data­bas­es.

    The bro­kers use that raw data to build mod­els pre­dict­ing (with vary­ing suc­cess) many hun­dreds of details about a cus­tomer’s behav­ior, finances and per­son­al­i­ty: age, fam­i­ly sta­tus, house­hold income, whether she likes cross­word puz­zles, inter­est in buy­ing a house­hold pet, like­li­hood of hav­ing a funer­al plan. The data bro­kers then sell those con­sumer pro­files to mar­keters and major con­glom­er­ates seek­ing a vast and tar­get­ed cus­tomer base — includ­ing on Face­book, which now accounts for a fifth of the world’s online ads.

    Acx­iom, the Arkansas-based bro­ker that has worked with Face­book since 2013 and report­ed more than $880 mil­lion in rev­enue last year, esti­mat­ed Face­book’s ditch­ing of its data-shar­ing pro­gram would carve as much as $25 mil­lion from the com­pa­ny’s rev­enue and prof­it. In a state­ment late Wednes­day, Acx­iom said Face­book had alert­ed it that day to the news. “Today, more than ever, it is impor­tant for busi­ness­es to be able to rely upon com­pa­nies that under­stand the crit­i­cal impor­tance of eth­i­cal­ly sourced data and strong data gov­er­nance. These are among Acxiom’s core strengths,” chief exec­u­tive Scott Howe said. Its stock plunged more than 30 per­cent Thurs­day morn­ing.

    Rep­re­sen­ta­tives for data bro­ker Exper­ian did not respond to ques­tions, and data bro­ker Ora­cle Data Cloud declined to com­ment. Exper­ian stock moved down­ward slight­ly, while Ora­cle shares trad­ed up about 1 per­cent. Face­book shares climbed about 3 per­cent, help­ing punc­ture weeks of loss­es.

    Data bro­kers’ mod­els are often intri­cate­ly and odd­ly detailed: Acx­iom has cat­e­go­rized peo­ple into one of 70 “house­hold life stage clus­ters,” includ­ing “Career-Cen­tered Sin­gles,” “Soc­cer and SUVs,” “Apple Pie Fam­i­lies” and “Rolling Stones.” But adver­tis­ers want­i­ng more infor­ma­tion — served straight from the source, in the per­son­’s own words — have increas­ing­ly turned to Face­book, where they can grab first-par­ty data from the actu­al cus­tomer, and not just third-par­ty data gath­ered and ana­lyzed from afar.

    Face­book and the data bro­kers have often dealt in the same kinds of per­son­al infor­ma­tion adver­tis­ers find impos­si­ble to resist. Exper­ian, for instance, runs a New­born Net­work that sells adver­tis­ers detailed infor­ma­tion, gleaned from per­son­al spend­ing and demo­graph­ic data, of women they pre­dict are new and expec­tant moth­ers; the com­pa­ny says it “cap­tures more than 80 per­cent of all U.S. births.” But Face­book users also freely share baby pho­tos and mark their life events — a more direct way of relay­ing the same infor­ma­tion to sell­ers of baby for­mu­la, cribs and mater­ni­ty clothes.

    Adver­tis­ers will still be able to work with the data bro­kers to gath­er infor­ma­tion and tar­get cus­tomers; they’ll just have to do it out­side Face­book. Crit­ics point­ed to a few ways, such as Face­book’s Cus­tom Audi­ences tool, that will allow adver­tis­ers to still tar­get cus­tomers en masse based on finan­cial and oth­er data they’ve pulled from across the Web.

    Some pri­va­cy experts cheered Face­book’s data-bro­ker move as a step toward pre­serv­ing user pri­va­cy. “It’s long over­due that Face­book owned up to the seri­ous ero­sion of con­sumer pri­va­cy made pos­si­ble by its alliance with pow­er­ful data bro­kers,” said Jef­frey Chester, exec­u­tive direc­tor of the Wash­ing­ton pri­va­cy-rights non­prof­it Cen­ter for Dig­i­tal Democ­ra­cy.

    Chris Speran­dio, a prod­uct head of pri­va­cy at the mar­ket­ing-data start-up Seg­ment, said the move also helps Face­book dodge grow­ing ques­tions over the source of its user infor­ma­tion. That is quick­ly becom­ing a high-stakes legal issue: A sweep­ing pri­va­cy rule com­ing to Europe in May, the Gen­er­al Data Pro­tec­tion Reg­u­la­tion, will make the com­pa­ny more liable and account­able for know­ing where its data comes from.

    ...

    But some crit­ics ques­tioned what effect the move would have in a site that counts sell­ing access to its users’ infor­ma­tion as its biggest mon­ey­mak­er. Face­book, pri­va­cy experts said, nets a vast range of real-time infor­ma­tion — friend­ships, pho­tos, work his­to­ries, inter­ests and con­sumer tastes, as well as mobile, loca­tion and facial-recog­ni­tion data — that adver­tis­ers view as more cur­rent and accu­rate than the bro­ker infor­ma­tion inferred from old receipts and gov­ern­ment logs. What, they ask, would adver­tis­ers need to pay data bro­kers for?

    “We don’t know enough about Face­book’s data trove to know whether their aban­don­ment of Part­ner Cat­e­gories helps users avoid pri­va­cy inva­sions,” said Frank Pasquale, a Uni­ver­si­ty of Mary­land pro­fes­sor who spe­cial­izes in algo­rithms and pri­va­cy. “Even if we did have that knowl­edge, we have lit­tle rea­son to trust Face­book to actu­al­ly fol­low through on it. It may well change course once media atten­tion has gone else­where.”

    ———-

    “Face­book, long­time friend of data bro­kers, becomes their stiffest com­pe­ti­tion” by Drew Har­well; The Wash­ing­ton Post; 03/29/2018

    “Face­book said late Wednes­day that it would stop data bro­kers from help­ing adver­tis­ers tar­get peo­ple with ads, sev­er­ing one of the key meth­ods mar­keters used to link users’ Face­book data about their friends and lifestyle with their offline data about their fam­i­lies, finances and health.”

    Yep, one of the key meth­ods mar­keters used to link Face­book data with all the offline data that these data bro­ker­ages were able to col­lect just might get sev­ered. It’s poten­tial­ly a big deal for Face­book and the adver­tis­ing indus­try. Or poten­tial­ly not. That’s part of what makes this such a fas­ci­nat­ing move by Face­book: It’s poten­tial­ly quite sig­nif­i­cant and poten­tial­ly incon­se­quen­tial:

    ...
    The data bro­kers have for years served a silent but crit­i­cal role in direct­ing users’ atten­tion to Face­book’s ads. They also, crit­ics say, stealth­ily con­tributed to the seem­ing­ly all-know­ing creepi­ness of users see­ing ads for things they nev­er men­tioned on their Face­book pages. A mar­keter who want­ed to tar­get new moth­ers, for instance, could use the data bro­kers’ infor­ma­tion to send Face­book ads to all women who bought baby for­mu­la with a store rewards card.

    Acx­iom, Exper­ian and oth­er data bro­kers once had a prized seat at Face­book’s table, through a pro­gram called Part­ner Cat­e­gories, that allowed adver­tis­ers to tap into the shad­ow pro­files craft­ed with data from Face­book and the bro­kers to drill down on their tar­get audi­ence. The data bro­kers got a cut of the mon­ey when the ads they helped place turned into a sale, and Face­book also shared some data with the bro­kers to help gauge how well its ads per­formed.
    ...

    And note how this cooperation with the brokerages was only growing during the same period in which Facebook cut off the “friends permissions” privacy loophole exploited by Cambridge Analytica’s app and thousands of other apps in 2015. It’s a reminder that even when Facebook is getting better in some ways, it’s probably getting worse in other ways:

    ...
    In 2015, Face­book restrict­ed the kinds of data that out­side devel­op­ers, includ­ing the researcher who fed data to Cam­bridge Ana­lyt­i­ca, could gath­er from users and their friends. Christo­pher Wylie, Cam­bridge Ana­lyt­i­ca’s whistle­blow­er, told The Wash­ing­ton Post that Cam­bridge Ana­lyt­i­ca had paired Face­book data with infor­ma­tion from data bro­kers to build out its vot­er pro­files.

    But the social net­work con­tin­ued to strength­en its ties with the data bro­kers who gath­er and repack­age user infor­ma­tion. That year, Acx­iom said its involve­ment in Part­ner Cat­e­gories helped its adver­tis­ing clients use Face­book “to bet­ter con­nect with peo­ple more inclined to buy cer­tain prod­ucts or ser­vices,” adding that its clients includ­ed most of the coun­try’s top 10 insur­ers, retail­ers, automak­ers, hotels, telecom­mu­ni­ca­tions giants and banks. One year ear­li­er, in 2014, the Fed­er­al Trade Com­mis­sion issued a report find­ing that data bro­kers had col­lect­ed infor­ma­tion on near­ly every Amer­i­can and say­ing that the bro­kers “oper­ate with a fun­da­men­tal lack of trans­paren­cy.”
    ...

    And while some of the data gathered by the data brokerages inevitably overlaps with what Facebook also gathers on people, there are quite a few categories of ‘offline’ data these brokers systematically gather that Facebook can’t collect without seeming super extra creepy. Data brokers gather data from places like voter rolls, property records, purchase histories, loyalty card programs, consumer surveys, and car dealership records. Imagine how incredibly creepy it would be if Facebook had its own ‘offline data collection’ division directly gathering that kind of information about everyone, instead of buying it from the brokerages or setting up arrangements like the Partner Categories program. It’s a reminder that Facebook and the data brokers really are engaged in a joint ‘online’/‘offline’ data gathering and aggregation effort. “Partner Categories” is an appropriate name: it’s a real partnership, and it matters to both parties, because it would be a far bigger PR nightmare if Facebook had to collect all this offline data itself:

    ...
    While Face­book gath­ers much of its 2 bil­lion users’ online infor­ma­tion, the data bro­kers attempt to scoop up every­thing else, includ­ing bil­lions of bits of infor­ma­tion from vot­er rolls, prop­er­ty records, pur­chase his­to­ries, loy­al­ty card pro­grams, con­sumer sur­veys, car deal­er­ship records and oth­er data­bas­es.

    The bro­kers use that raw data to build mod­els pre­dict­ing (with vary­ing suc­cess) many hun­dreds of details about a cus­tomer’s behav­ior, finances and per­son­al­i­ty: age, fam­i­ly sta­tus, house­hold income, whether she likes cross­word puz­zles, inter­est in buy­ing a house­hold pet, like­li­hood of hav­ing a funer­al plan. The data bro­kers then sell those con­sumer pro­files to mar­keters and major con­glom­er­ates seek­ing a vast and tar­get­ed cus­tomer base — includ­ing on Face­book, which now accounts for a fifth of the world’s online ads.
    ...

    And, of course, the Custom Audiences tool that lets advertisers feed in lists of things like email addresses to target specific audiences — used extensively by the 2016 Trump campaign — might make the decision to end the Partner Categories program moot:

    ...
    Adver­tis­ers will still be able to work with the data bro­kers to gath­er infor­ma­tion and tar­get cus­tomers; they’ll just have to do it out­side Face­book. Crit­ics point­ed to a few ways, such as Face­book’s Cus­tom Audi­ences tool, that will allow adver­tis­ers to still tar­get cus­tomers en masse based on finan­cial and oth­er data they’ve pulled from across the Web.
    ...

    And as Frank Pasquale points out, we also don’t know enough about what Facebook knows about us to know how much of an impact ending the Partner Categories program will have on the privacy violations built into Facebook’s whole business model. It’s entirely possible this change will make fusing data broker data with Facebook data less convenient and less profitable, yet still just as privacy-violating, both because the present-day setup can be replicated indirectly (by Facebook advertisers coordinating with the data brokers separately) and because Facebook might know almost everything the data brokers know just from its own data collection methods. In other words, this could be largely cosmetic. And, as Pasquale also points out, Facebook might just change its mind and not end the program once public attention wanes:

    ...
    Some pri­va­cy experts cheered Face­book’s data-bro­ker move as a step toward pre­serv­ing user pri­va­cy. “It’s long over­due that Face­book owned up to the seri­ous ero­sion of con­sumer pri­va­cy made pos­si­ble by its alliance with pow­er­ful data bro­kers,” said Jef­frey Chester, exec­u­tive direc­tor of the Wash­ing­ton pri­va­cy-rights non­prof­it Cen­ter for Dig­i­tal Democ­ra­cy.

    ...

    But some crit­ics ques­tioned what effect the move would have in a site that counts sell­ing access to its users’ infor­ma­tion as its biggest mon­ey­mak­er. Face­book, pri­va­cy experts said, nets a vast range of real-time infor­ma­tion — friend­ships, pho­tos, work his­to­ries, inter­ests and con­sumer tastes, as well as mobile, loca­tion and facial-recog­ni­tion data — that adver­tis­ers view as more cur­rent and accu­rate than the bro­ker infor­ma­tion inferred from old receipts and gov­ern­ment logs. What, they ask, would adver­tis­ers need to pay data bro­kers for?

    “We don’t know enough about Face­book’s data trove to know whether their aban­don­ment of Part­ner Cat­e­gories helps users avoid pri­va­cy inva­sions,” said Frank Pasquale, a Uni­ver­si­ty of Mary­land pro­fes­sor who spe­cial­izes in algo­rithms and pri­va­cy. “Even if we did have that knowl­edge, we have lit­tle rea­son to trust Face­book to actu­al­ly fol­low through on it. It may well change course once media atten­tion has gone else­where.”

    So is this announced policy change going to happen? Will it matter if it happens? It’s a pretty significant question, and not an easy one to answer given that Facebook’s algorithms are largely a black box.

    That said, Josh Marshall might have a significant data point for us with regard to how important the current third-party data-sharing arrangement with the data brokerage giants really is to the performance of Facebook’s ad targeting: starting in early March, advertisers began noticing a significant drop-off in the targeting quality of Facebook’s ads. Facebook’s ad targeting just got worse for some reason. And this was early March, before the Cambridge Analytica story hit in mid-March but possibly after Facebook knew the story was coming. So the timing of this observation is interesting, and Marshall has a hunch: Facebook was already experimenting with how its internal advertising algorithm would operate without direct access to the data brokerages, and potentially without access to a lot of other data sources, in anticipation of the new EU regulations and possible new regulations from the US Congress. In other words, Facebook already saw the writing on the wall before the recent wave of Cambridge Analytica revelations went public, has already started the shift to an in-house ad targeting algorithm, and it shows.

    Now, it’s possible that Josh Marshall is correct that Facebook has already started implementing an internal-only ad targeting algorithm that’s noticeably worse right now, but that it will get better in the long run as Facebook improves its third-party-limited algorithm and the advertisers and brokers adapt to a new, less direct data-sharing arrangement. Maybe everyone will adapt and get back up to par. Time will tell.

    But if not, and if the loss of these data-sharing arrangements makes Facebook’s ads less effective in the long run — maybe because it’s much more efficient to funnel the broker data and a whole bunch of other third-party data directly into Facebook, and the indirect methods can’t replicate that arrangement — then it’s worth noting that this downgrade in Facebook’s ad targeting quality would reflect a real form of privacy enhancement and generally should be cheered. It’s also a statement on the public utility of the overall data brokerage industry, which is dedicated to collecting, aggregating, and selling personal data profiles. There’s a lot of negative utility in this industry, and this wave of Facebook scandals is just one facet of it. So if Marshall’s guess is correct, and this observable drop-off in Facebook ad quality reflects a decision by Facebook to preemptively take third-party data out of its ad targeting algorithms in anticipation of the new EU data privacy laws and future congressional action in the US, let’s hope that drop-off is sustained for our privacy’s sake:

    Talk­ing Points Memo
    Edi­tor’s Blog

    Is Face­book In More Trou­ble Than Peo­ple Think?

    By Josh Mar­shall | April 5, 2018 12:45 pm

    For more than a year, Face­book has faced a rolling pub­lic rela­tions deba­cle. Part of this is the Amer­i­can public’s shift­ing atti­tudes toward Big Tech and plat­forms in gen­er­al. But the dri­ving prob­lem has been the way the plat­form was tied up with and per­haps impli­cat­ed in Russia’s attempt to influ­ence the 2016 pres­i­den­tial elec­tion. Users’ trust in the plat­form has been shak­en, politi­cians are threat­en­ing scruti­ny and pos­si­ble reg­u­la­tion, and there’s even a cam­paign to get peo­ple to delete their Face­book accounts. All of this is wide­ly known and we hear more about it every day. But most users, most peo­ple in tech and also Wall Street (which is the source of Facebook’s gar­gan­tu­an val­u­a­tion) don’t yet get the full pic­ture. We know about Facebook’s rep­u­ta­tion­al cri­sis. But peo­ple aren’t ful­ly inter­nal­iz­ing that the cur­rent cri­sis pos­es a poten­tial­ly dire threat to Facebook’s core busi­ness mod­el, its core adver­tis­ing busi­ness.

    Face­book is fun­da­men­tal­ly an adver­tis­ing busi­ness. Almost all of the company’s rev­enue comes from adver­tis­ing that it tar­gets with unpar­al­leled effi­cien­cy to its bil­lions of users. In a media world in which adver­tis­ing rates face almost uni­ver­sal down­ward pres­sure, Facebook’s rates have con­sis­tent­ly risen. Monop­oly pow­er may dri­ve some of that growth. But the key dri­ver is effi­cien­cy. If old-fash­ioned adver­tis­ing shows my adver­tise­ment to 100 peo­ple for every actu­al buy­er and oth­er dig­i­tal plat­forms show it to 30 peo­ple and Face­book shows it to 5 peo­ple, Facebook’s ads are just worth a lot more.

    As long as the rates bear some rela­tion­ship to that effi­cien­cy (those num­bers above are just for illus­tra­tion), I’ll be hap­py to pay it. Because it’s objec­tive­ly worth more. Indeed, as the prices have gone up, Face­book has actu­al­ly got­ten more effi­cient. As one dig­i­tal ad agency exec­u­tive recent­ly told me, even if Face­book jacked up the prices a lot more, his firm would like­ly keep using them just as much because on this cost to effi­cien­cy basis it’s still cheap. This is the basis of Facebook’s astro­nom­i­cal mar­ket cap­i­tal­iza­tion which today rates at over $450 bil­lion, even after some recent revers­es.

    So the mon­ey comes from the adver­tis­ing. And the adver­tis­ing comes from the data and the arti­fi­cial intel­li­gence that crunch­es it and mod­els it into pre­dic­tive effi­cien­cy. But what if there’s a break­down in the data?

    Starting in early March, a number of marketers running substantial sums on the Facebook ad engine, who’ve spoken to TPM, started noticing a new level of platform instability and reductions in targeting efficiency. To understand what this means, think about it like how an efficient debt or equity market operates. If there is relatively accurate information, no big external shocks and enough buyers and sellers, pricing should have relative stability and operate within certain bands. Accounting for some reasonable amount of bumpiness, that’s what Facebook’s ad engine has looked like for a few years. But starting in March, if you’re down in the trenches working with the granular numbers, something started to look weird: price oscillations, reduced targeting efficiency and even glitches.

    We’ve talked to a number of advertisers who’ve reported this. We’ve also talked to others who haven’t. But the ones who have tend to be the ones more tightly tied to the numbers and in marketing operations with tighter ROI (return on investment). Where we’ve seen the most of this is with so-called DTC (direct to consumer) marketers. Facebook is an amazingly large ecosystem. And it’s all a black box. So there’s no way for us to talk to a representative sample of advertisers. But something is going on in at least substantial sectors of Facebook’s ad engine. What is it? Marketers who’ve asked mainly get told it’s their creativity. In other words, the ad you’re running isn’t working. Come up with another ad. Here at TPM, we operate in a different part of the programmatic ad universe. You hear comparable things like that a lot. And it’s hard to ignore. But we’ve talked to different people with (by definition) different ads in totally different industries. So that’s not it. Something is happening.

    So what’s up?

    One thing is already being dis­cussed wide­ly in the trade press. In response to the rolling pub­lic rela­tions deba­cle Face­book has already dra­mat­i­cal­ly reduced or has announced that it will reduce adver­tis­ers’ abil­i­ty to use third par­ty ad data on the Face­book plat­form. That is a big deal. What’s that mean?

    ...

    As you know, through your activ­i­ty on Face­book, Face­book col­lects lots of data about you that it then uses to tar­get ads. That’s “Face­book data” (or it’s yours, but you know what I mean). Face­book also allows adver­tis­ers to upload “1st par­ty data”. What’s that? That’s if my book pub­lish­ing com­pa­ny has a list of 50,000 emails, I can upload those emails to Face­book and run ads to those peo­ple. Then there’s “3rd par­ty data”. That’s if the adver­tis­er or Face­book itself goes to anoth­er per­son­al data bro­ker, buys access to that data and pours it into the Face­book ecosys­tem for more effi­cient tar­get­ing.

    If you’re not versed in the world of data and dig­i­tal adver­tis­ing, there’s a ton here to keep up with. But here’s the key. How reliant is Facebook’s adver­tis­ing cash cow on third par­ty data? Not just the third par­ty data that Face­book allows adver­tis­ers to put into its ecosys­tem for bet­ter tar­get­ing (which is now being phased out) but 3rd par­ty data Face­book uses itself to improve its ad tar­get­ing? As one data indus­try exec­u­tive put it to me, sure Face­book can crunch its own data to find out all sorts of things about you. But in a lot of cas­es it may be eas­i­er, cheap­er and in some cas­es sim­ply more effec­tive to buy that data from oth­er sources. We don’t real­ly know – and no one out­side Face­book real­ly knows – how good Facebook’s AI real­ly is at mod­el­ing user data entire­ly on its own with­out oth­er sorts of data mixed in. It’s a black box. It mat­ters a lot in terms of Facebook’s core rev­enue stream.

    Here’s anoth­er ques­tion: when you con­sid­er Facebook’s own data, how reliant is Face­book on ways it col­lects and process­es Face­book data which it may not be able to do any longer either because of new reg­u­la­tions that come into effect lat­er this year for the EU or because of new reg­u­la­tions Con­gress may put into effect as it puts new scruti­ny on Facebook’s behav­ior?

    My hunch is that the answer to most or all of these questions is “a lot more than most people realize.” We already know that Facebook is making a lot of changes to how it uses data, especially third party data and how it allows advertisers to use data. Some of this is already public. Indeed, it’s getting discussed a lot in the trade press – in particular how Facebook will implement and cope with the new regs from the European Union. So why all the choppiness in Facebook’s advertising and targeting metrics? I suspect that Facebook is trying to rejigger its algorithm on the fly more than people realize in order to see if they can get it to work as effectively for ads without a lot of data sources or data uses they really aren’t supposed to be doing or which they suspect they’ll lose access to in coming regulation. That is the most logical explanation of the instability in their reporting.

    If you talk to ad indus­try peo­ple, they treat it as a giv­en that Face­book is already hav­ing to “rebuild their plat­form basi­cal­ly from the ground up” as one top agency exec­u­tive told me, in response to “fake news”, pro­pa­gan­da cam­paigns, pri­va­cy scruti­ny, etc. – all the stuff we’ve read about over recent months. But it’s Face­book. They’ll work it out, is what these peo­ple fig­ure. And they’re prob­a­bly right. Face­book is huge, has mas­sive resources and access to the world’s largest audi­ence for any­thing ever. They have oceans of data and a mas­sive leg up on every­one. Down at the more gran­u­lar lev­el though, even in the indus­try press, it is treat­ed as a giv­en that the already pub­licly announced new restric­tions on third par­ty data will like­ly lead to at least some migra­tion of adver­tis­ers to new plat­forms. Gin­ny Mar­vin, a top trade press reporter work­ing at the gran­u­lar ad tech and mar­ket­ing lev­el rather than up in tech big think land, tweet­ed this on March 30th: “FB remov­ing 3P [3rd par­ty] data is a big change for adver­tis­ers. But at FB’s scale, you’re not going to see advts sharply piv­ot else­where en masse. This will look more like a slow mov­ing ship of bud­gets divert­ing to oth­er media if they don’t get per­for­mance they want from FB.”

    For now, as Marvin notes, Facebook’s advertiser lock-in, market power and simple price value make it highly unlikely that there’s going to be any dramatic near-term move from Facebook even in the worst-case scenario. But Facebook isn’t just making money hand over fist. Its market valuation rests on the assumption that it will keep making that amount of money hand over fist and indeed keep increasing the amount of money it makes hand over fist. Any breakdown or significant slowdown in that growth and consistency is a big problem. Years ago, everyone counted Facebook out as a true profit platform, until it exceeded everyone’s expectations. Now, even with all the bad press, most figure that it’s profitable forever. Both conventional wisdoms were wrong. For now, keep in mind that Facebook isn’t just dealing with a reputational crisis. It’s having to clean up the reputational mess by rejiggering parts of its core revenue stream in ways it’s not clear it really knows how to do. That creates a lot of unpredictability. More than most people seem to realize.

    ———-

    “Is Face­book In More Trou­ble Than Peo­ple Think?” by Josh Mar­shall; Talk­ing Points Memo; 04/05/2018

    “For more than a year, Face­book has faced a rolling pub­lic rela­tions deba­cle. Part of this is the Amer­i­can public’s shift­ing atti­tudes toward Big Tech and plat­forms in gen­er­al. But the dri­ving prob­lem has been the way the plat­form was tied up with and per­haps impli­cat­ed in Russia’s attempt to influ­ence the 2016 pres­i­den­tial elec­tion. Users’ trust in the plat­form has been shak­en, politi­cians are threat­en­ing scruti­ny and pos­si­ble reg­u­la­tion, and there’s even a cam­paign to get peo­ple to delete their Face­book accounts. All of this is wide­ly known and we hear more about it every day. But most users, most peo­ple in tech and also Wall Street (which is the source of Facebook’s gar­gan­tu­an val­u­a­tion) don’t yet get the full pic­ture. We know about Facebook’s rep­u­ta­tion­al cri­sis. But peo­ple aren’t ful­ly inter­nal­iz­ing that the cur­rent cri­sis pos­es a poten­tial­ly dire threat to Facebook’s core busi­ness mod­el, its core adver­tis­ing busi­ness.”

    As Josh Marshall points out, if Facebook really does have to turn off the third-party data spigot, what this will actually do to the quality of its ad targeting is a massive question. The importance of the direct third-party data-sharing arrangement is one of the big questions swirling around Facebook for both its investors (from a price-per-share standpoint) and the public (from a privacy standpoint). And the fact that the EU’s new data privacy rules are hitting Facebook in Europe right as the Cambridge Analytica scandal plays out in the US, threatening to snowball into a larger scandal about Facebook’s business model in general, just makes it a bigger question for Facebook.

    And it’s a crisis for Facebook that will be numerically reflected in one key measure pointed out by Marshall: the number of advertisements that need to be shown to trigger a sale on Facebook compared to other platforms. It’s a 5-to-1 ratio for Facebook vs a 30-to-1 ratio for other digital platforms and 100-to-1 for traditional ads. Facebook really is much better at targeting its ads than even its digital peers. So when Facebook gets worse at targeting its ads, that does amount to real privacy gains, because it’s one of the biggest and best cutting-edge ad targeting platforms. This is why Facebook is worth over $450 billion:

    ...
    Face­book is fun­da­men­tal­ly an adver­tis­ing busi­ness. Almost all of the company’s rev­enue comes from adver­tis­ing that it tar­gets with unpar­al­leled effi­cien­cy to its bil­lions of users. In a media world in which adver­tis­ing rates face almost uni­ver­sal down­ward pres­sure, Facebook’s rates have con­sis­tent­ly risen. Monop­oly pow­er may dri­ve some of that growth. But the key dri­ver is effi­cien­cy. If old-fash­ioned adver­tis­ing shows my adver­tise­ment to 100 peo­ple for every actu­al buy­er and oth­er dig­i­tal plat­forms show it to 30 peo­ple and Face­book shows it to 5 peo­ple, Facebook’s ads are just worth a lot more.

    As long as the rates bear some rela­tion­ship to that effi­cien­cy (those num­bers above are just for illus­tra­tion), I’ll be hap­py to pay it. Because it’s objec­tive­ly worth more. Indeed, as the prices have gone up, Face­book has actu­al­ly got­ten more effi­cient. As one dig­i­tal ad agency exec­u­tive recent­ly told me, even if Face­book jacked up the prices a lot more, his firm would like­ly keep using them just as much because on this cost to effi­cien­cy basis it’s still cheap. This is the basis of Facebook’s astro­nom­i­cal mar­ket cap­i­tal­iza­tion which today rates at over $450 bil­lion, even after some recent revers­es.
    ...

    “If old-fash­ioned adver­tis­ing shows my adver­tise­ment to 100 peo­ple for every actu­al buy­er and oth­er dig­i­tal plat­forms show it to 30 peo­ple and Face­book shows it to 5 peo­ple, Facebook’s ads are just worth a lot more”

    And that’s why this is a pretty big story if there’s a real drop in the quality of Facebook’s ad targeting. Facebook is wildly ahead of almost all of its competition. Only Google and governments are going to compete with what Facebook knows about us all. So if Facebook effectively knows less about us, as reflected in the drop in ad targeting quality observed starting in early March, that reflects a real de facto increase in public privacy. And it’s also a big story from a business standpoint because it’s not just about Facebook, it’s also about the entire data brokerage industry. There’s a large part of the modern US economy potentially tied into this Facebook scandal. A scandal that now extends beyond the Cambridge Analytica app situation and has led to Facebook declaring the phaseout of its Partner Categories program. Is this ushering in a sea change in the data brokerage industry? If so, that’s big.
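
    To put Marshall’s illustrative ratios in concrete terms, here’s a quick back-of-the-envelope sketch in Python. The ratios are the made-up numbers from his article, and the price per thousand impressions is a purely hypothetical figure held equal across channels; the point is just that the cost of reaching one actual buyer scales directly with how many people you have to show the ad to:

        # Illustrative only: the article's made-up impressions-per-buyer ratios.
        impressions_per_sale = {
            "traditional advertising": 100,
            "other digital platforms": 30,
            "Facebook": 5,
        }

        CPM = 10.00  # hypothetical cost per 1,000 impressions, equal across channels

        for channel, ratio in impressions_per_sale.items():
            cost_per_sale = ratio * (CPM / 1000)
            print(f"{channel}: ${cost_per_sale:.2f} to reach one actual buyer")

    At identical impression prices that works out to $1.00 per buyer for traditional advertising, $0.30 for other digital platforms, and $0.05 for Facebook: a 20-fold edge over traditional media. That edge is why Facebook can keep raising its rates and still look cheap on a cost-to-efficiency basis, and why any degradation in targeting quality eats directly into the premium its valuation rests on.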

    Facebook was already going to see a sea change in how it does business in the EU thanks to the new data privacy laws, but it’s the Cambridge Analytica scandal that appears to be driving the likelihood of a sea change in the US market too. And that’s part of why it’s notable if Facebook really did start rejiggering its algorithms without that third-party data in early March, potentially in anticipation of this flurry of bad press, and the ad targeting suddenly got worse. Because if it turns out that the loss of the third-party data makes Facebook’s ad targeting worse, we should note that. And ask ourselves whether making Facebook even worse at targeting ads would be desirable from a public privacy perspective. The more Facebook sucks at ads, the better Facebook is for everyone from a privacy perspective. It’s one of the fundamental contradictions of Facebook’s business model that this Cambridge Analytica scandal risks exposing to the public:

    ...
    So the mon­ey comes from the adver­tis­ing. And the adver­tis­ing comes from the data and the arti­fi­cial intel­li­gence that crunch­es it and mod­els it into pre­dic­tive effi­cien­cy. But what if there’s a break­down in the data?

    Starting in early March, a number of marketers running substantial sums on the Facebook ad engine, who’ve spoken to TPM, started noticing a new level of platform instability and reductions in targeting efficiency. To understand what this means, think about it like how an efficient debt or equity market operates. If there is relatively accurate information, no big external shocks and enough buyers and sellers, pricing should have relative stability and operate within certain bands. Accounting for some reasonable amount of bumpiness, that’s what Facebook’s ad engine has looked like for a few years. But starting in March, if you’re down in the trenches working with the granular numbers, something started to look weird: price oscillations, reduced targeting efficiency and even glitches.

    We’ve talked to a number of advertisers who’ve reported this. We’ve also talked to others who haven’t. But the ones who have tend to be the ones more tightly tied to the numbers and in marketing operations with tighter ROI (return on investment). Where we’ve seen the most of this is with so-called DTC (direct to consumer) marketers. Facebook is an amazingly large ecosystem. And it’s all a black box. So there’s no way for us to talk to a representative sample of advertisers. But something is going on in at least substantial sectors of Facebook’s ad engine. What is it? Marketers who’ve asked mainly get told it’s their creativity. In other words, the ad you’re running isn’t working. Come up with another ad. Here at TPM, we operate in a different part of the programmatic ad universe. You hear comparable things like that a lot. And it’s hard to ignore. But we’ve talked to different people with (by definition) different ads in totally different industries. So that’s not it. Something is happening.

    So what’s up?

    One thing is already being dis­cussed wide­ly in the trade press. In response to the rolling pub­lic rela­tions deba­cle Face­book has already dra­mat­i­cal­ly reduced or has announced that it will reduce adver­tis­ers’ abil­i­ty to use third par­ty ad data on the Face­book plat­form. That is a big deal. What’s that mean?
    ...

    And as Josh Marshall points out, the impact of the loss of this third-party data on Facebook’s ad targeting algorithms is largely speculative, because we know so little about what Facebook knows about us without that third-party data. Facebook is a black box:

    ...
    As you know, through your activ­i­ty on Face­book, Face­book col­lects lots of data about you that it then uses to tar­get ads. That’s “Face­book data” (or it’s yours, but you know what I mean). Face­book also allows adver­tis­ers to upload “1st par­ty data”. What’s that? That’s if my book pub­lish­ing com­pa­ny has a list of 50,000 emails, I can upload those emails to Face­book and run ads to those peo­ple. Then there’s “3rd par­ty data”. That’s if the adver­tis­er or Face­book itself goes to anoth­er per­son­al data bro­ker, buys access to that data and pours it into the Face­book ecosys­tem for more effi­cient tar­get­ing.

    If you’re not versed in the world of data and dig­i­tal adver­tis­ing, there’s a ton here to keep up with. But here’s the key. How reliant is Facebook’s adver­tis­ing cash cow on third par­ty data? Not just the third par­ty data that Face­book allows adver­tis­ers to put into its ecosys­tem for bet­ter tar­get­ing (which is now being phased out) but 3rd par­ty data Face­book uses itself to improve its ad tar­get­ing? As one data indus­try exec­u­tive put it to me, sure Face­book can crunch its own data to find out all sorts of things about you. But in a lot of cas­es it may be eas­i­er, cheap­er and in some cas­es sim­ply more effec­tive to buy that data from oth­er sources. We don’t real­ly know – and no one out­side Face­book real­ly knows – how good Facebook’s AI real­ly is at mod­el­ing user data entire­ly on its own with­out oth­er sorts of data mixed in. It’s a black box. It mat­ters a lot in terms of Facebook’s core rev­enue stream.
    ...

    But we might get an answer to the question of whether Facebook needs that third-party data to achieve the ad targeting proficiency it has today, thanks to those new EU regulations and the real possibility of some sort of congressional action in response to the Cambridge Analytica scandal. And that, of course, is why Josh Marshall suspects that the reported drop in Facebook’s ad targeting quality is a sign that Facebook is already preparing for coming regulation:

    ...
    Here’s anoth­er ques­tion: when you con­sid­er Facebook’s own data, how reliant is Face­book on ways it col­lects and process­es Face­book data which it may not be able to do any longer either because of new reg­u­la­tions that come into effect lat­er this year for the EU or because of new reg­u­la­tions Con­gress may put into effect as it puts new scruti­ny on Facebook’s behav­ior?

    My hunch is that the answer to most or all of these questions is “a lot more than most people realize.” We already know that Facebook is making a lot of changes to how it uses data, especially third party data and how it allows advertisers to use data. Some of this is already public. Indeed, it’s getting discussed a lot in the trade press – in particular how Facebook will implement and cope with the new regs from the European Union. So why all the choppiness in Facebook’s advertising and targeting metrics? I suspect that Facebook is trying to rejigger its algorithm on the fly more than people realize in order to see if they can get it to work as effectively for ads without a lot of data sources or data uses they really aren’t supposed to be doing or which they suspect they’ll lose access to in coming regulation. That is the most logical explanation of the instability in their reporting.
    ...

    And if Josh Marshall’s hunch is correct, and Facebook really did start rejiggering its ad targeting algorithms in anticipation of coming congressional regulation — which points towards Facebook anticipating a very negative public response to the yet-to-be-released Cambridge Analytica story — we have to wonder just how many other privacy-violating schemes Facebook has been up to with other third parties beyond the data brokerage giants like Acxiom and Experian. What other classes of third-party data providers might Facebook be incorporating into its algorithms?

    Well, here’s a chilling example of the kind of third-party data-sharing partnership Facebook might be interested in: hospital record metadata. Like what diseases people have, the medications they’re on, and when they visited the hospital. From several major hospitals, including Stanford Medical School’s.

    Facebook says it would be for research purposes only, conducted by the medical community, but Facebook would have been able to deanonymize the data. And it’s kind of obscene, because Facebook says the plan for protecting everyone’s privacy is to use “hashing” — where each patient is assigned a seemingly random, anonymous number generated by a mathematical algorithm from something like the patient’s name — and that only the medical research community will have access to the anonymized data, so no one’s privacy is at risk. But using hashing to match the Facebook data set and the hospital data set means Facebook can match up the hospital data with its own Facebook users. Facebook is effectively trying to get deanonymizable patient health data from hospitals. It’s a disturbing example of the kind of third-party data that Facebook is interested in.
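
    To see why “hashing” in this context is a matching tool rather than an anonymization tool, here’s a minimal sketch in Python using entirely hypothetical patient records and identifiers. A hash function is deterministic, so if both sides hash the same underlying identifier (a name plus a birth date, say), the hashes become a shared join key, and whoever holds both data sets can link the records right back together:

        import hashlib

        def pseudonym(identifier: str) -> str:
            # Deterministic hash: the same person always maps to the same token.
            return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

        # Hypothetical records, each side hashing the same identifier.
        hospital_records = {
            pseudonym("Jane Doe|1968-03-14"): {"condition": "heart disease", "meds": 2},
        }
        facebook_profiles = {
            pseudonym("Jane Doe|1968-03-14"): {"profile_id": 12345, "close_friends": 3},
        }

        # "Anonymized" on both sides, yet trivially joinable on the shared token.
        for token, medical in hospital_records.items():
            if token in facebook_profiles:
                print("re-linked:", facebook_profiles[token]["profile_id"], medical)

    The token hides the name from a casual observer, but for the parties doing the matching it functions as a unique patient ID, which is exactly what the proposal needed it to do.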

    And there’s no real reason to believe they wouldn’t wildly abuse the data, probably turning the patients of those hospitals into focus groups for algorithmic testing, with their medical records used to pitch ads. Which would probably freak those people out. Facebook + hospital data = yikes.

    And this plan was being pur­sued last month. The Cam­bridge Ana­lyt­i­ca scan­dal dis­rupt­ed active talks. The plan was “put on pause” by Face­book last week in response to the Cam­bridge Ana­lyt­i­ca out­rage. Still, that’s just “on pause”. So it sounds like the plan is still “on” and we should expect a con­tin­ued push into the med­ical record space by Face­book.

    Facebook’s pitch was to combine what a health system knows about its patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with Facebook’s data on the person (such as: user is age 50, married with 3 kids, English isn’t a primary language, actively engages with the community by sending a lot of messages). The research project would then try to use this combined information to improve patient care in some way, with an initial focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn’t have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.

    In other words, Facebook was setting up a research project dedicated to developing hospital decision-making support that utilizes Facebook’s pool of personalized data on people. Which is a path to plugging Facebook into the hospital system. Yikes:

    CNBC

    Face­book sent a doc­tor on a secret mis­sion to ask hos­pi­tals to share patient data

    * Face­book was in talks with top hos­pi­tals and oth­er med­ical groups as recent­ly as last month about a pro­pos­al to share data about the social net­works of their most vul­ner­a­ble patients.
    * The idea was to build pro­files of peo­ple that includ­ed their med­ical con­di­tions, infor­ma­tion that health sys­tems have, as well as social and eco­nom­ic fac­tors gleaned from Face­book.
    * Face­book said the project is on hia­tus so it can focus on “oth­er impor­tant work, includ­ing doing a bet­ter job of pro­tect­ing peo­ple’s data.”

    Christi­na Farr | @chrissyfarr
    Pub­lished 2:01 PM ET Thu, 5 April 2018 Updat­ed 11:46 AM ET Fri, 6 April 2018

    Face­book has asked sev­er­al major U.S. hos­pi­tals to share anonymized data about their patients, such as ill­ness­es and pre­scrip­tion info, for a pro­posed research project. Face­book was intend­ing to match it up with user data it had col­lect­ed, and help the hos­pi­tals fig­ure out which patients might need spe­cial care or treat­ment.

    The pro­pos­al nev­er went past the plan­ning phas­es and has been put on pause after the Cam­bridge Ana­lyt­i­ca data leak scan­dal raised pub­lic con­cerns over how Face­book and oth­ers col­lect and use detailed infor­ma­tion about Face­book users.

    “This work has not pro­gressed past the plan­ning phase, and we have not received, shared, or ana­lyzed any­one’s data,” a Face­book spokesper­son told CNBC.

    But as recent­ly as last month, the com­pa­ny was talk­ing to sev­er­al health orga­ni­za­tions, includ­ing Stan­ford Med­ical School and Amer­i­can Col­lege of Car­di­ol­o­gy, about sign­ing the data-shar­ing agree­ment.

    While the data shared would obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as the patien­t’s name, Face­book pro­posed using a com­mon com­put­er sci­ence tech­nique called “hash­ing” to match indi­vid­u­als who exist­ed in both sets. Face­book says the data would have been used only for research con­duct­ed by the med­ical com­mu­ni­ty.

    The project could have raised new con­cerns about the mas­sive amount of data Face­book col­lects about its users, and how this data can be used in ways users nev­er expect­ed.

    ...

    Led out of Build­ing 8

    The explorato­ry effort to share med­ical-relat­ed data was led by an inter­ven­tion­al car­di­ol­o­gist called Fred­dy Abnousi, who describes his role on LinkedIn as “lead­ing top-secret projects.” It was under the purview of Regi­na Dugan, the head of Face­book’s “Build­ing 8” exper­i­ment projects group, before she left in Octo­ber 2017.

    Face­book’s pitch, accord­ing to two peo­ple who heard it and one who is famil­iar with the project, was to com­bine what a health sys­tem knows about its patients (such as: per­son has heart dis­ease, is age 50, takes 2 med­ica­tions and made 3 trips to the hos­pi­tal this year) with what Face­book knows (such as: user is age 50, mar­ried with 3 kids, Eng­lish isn’t a pri­ma­ry lan­guage, active­ly engages with the com­mu­ni­ty by send­ing a lot of mes­sages).

    The project would then fig­ure out if this com­bined infor­ma­tion could improve patient care, ini­tial­ly with a focus on car­dio­vas­cu­lar health. For instance, if Face­book could deter­mine that an elder­ly patient does­n’t have many near­by close friends or much com­mu­ni­ty sup­port, the health sys­tem might decide to send over a nurse to check in after a major surgery.

    The peo­ple declined to be named as they were asked to sign con­fi­den­tial­i­ty agree­ments.

    Face­book pro­vid­ed a quote from Cath­leen Gates, the inter­im CEO of the Amer­i­can Col­lege of Car­di­ol­o­gy, explain­ing the pos­si­ble ben­e­fits of the plan:

    “For the first time in his­to­ry, peo­ple are shar­ing infor­ma­tion about them­selves online in ways that may help deter­mine how to improve their health. As part of its mis­sion to trans­form car­dio­vas­cu­lar care and improve heart health, the Amer­i­can Col­lege of Car­di­ol­o­gy has been engaged in dis­cus­sions with Face­book around the use of anonymized Face­book data, cou­pled with anonymized ACC data, to fur­ther sci­en­tif­ic research on the ways social media can aid in the pre­ven­tion and treat­ment of heart disease—the #1 cause of death in the world. This part­ner­ship is in the very ear­ly phas­es as we work on both sides to ensure pri­va­cy, trans­paren­cy and sci­en­tif­ic rig­or. No data has been shared between any par­ties.”

    Health sys­tems are noto­ri­ous­ly care­ful about shar­ing patient health infor­ma­tion, in part because of state and fed­er­al patient pri­va­cy laws that are designed to ensure that peo­ple’s sen­si­tive med­ical infor­ma­tion does­n’t end up in the wrong hands.

    To address these pri­va­cy laws and con­cerns, Face­book pro­posed to obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as names, in the data being shared by both sides.

    How­ev­er, the com­pa­ny pro­posed using a com­mon cryp­to­graph­ic tech­nique called hash­ing to match indi­vid­u­als who were in both data sets. That way, both par­ties would be able to tell when a spe­cif­ic set of Face­book data matched up with a spe­cif­ic set of patient data.

    The issue of patient con­sent did not come up in the ear­ly dis­cus­sions, one of the peo­ple said. Crit­ics have attacked Face­book in the past for doing research on users with­out their per­mis­sion. Notably, in 2014, Face­book manip­u­lat­ed hun­dreds of thou­sands of peo­ple’s news feeds to study whether cer­tain types of con­tent made peo­ple hap­pi­er or sad­der. Face­book lat­er apol­o­gized for the study.

    Health pol­i­cy experts say that this health ini­tia­tive would be prob­lem­at­ic if Face­book did not think through the pri­va­cy impli­ca­tions.

    “Con­sumers would­n’t have assumed their data would be used in this way,” said Aneesh Chopra, pres­i­dent of a health soft­ware com­pa­ny spe­cial­iz­ing in patient data called Care­Jour­ney and the for­mer White House chief tech­nol­o­gy offi­cer.

    “If Face­book moves ahead (with its plans), I would be wary of efforts that repur­pose user data with­out explic­it con­sent.”

    When asked about the plans, Face­book pro­vid­ed the fol­low­ing state­ment:

    “The med­ical indus­try has long under­stood that there are gen­er­al health ben­e­fits to hav­ing a close-knit cir­cle of fam­i­ly and friends. But deep­er research into this link is need­ed to help med­ical pro­fes­sion­als devel­op spe­cif­ic treat­ment and inter­ven­tion plans that take social con­nec­tion into account.”

    “With this in mind, last year Face­book began dis­cus­sions with lead­ing med­ical insti­tu­tions, includ­ing the Amer­i­can Col­lege of Car­di­ol­o­gy and the Stan­ford Uni­ver­si­ty School of Med­i­cine, to explore whether sci­en­tif­ic research using anonymized Face­book data could help the med­ical com­mu­ni­ty advance our under­stand­ing in this area. This work has not pro­gressed past the plan­ning phase, and we have not received, shared, or ana­lyzed any­one’s data.”

    “Last month we decided that we should pause these discussions so we can focus on other important work, including doing a better job of protecting people’s data and being clearer with them about how that data is used in our products and services.”

    ...

    ———-

    “Face­book sent a doc­tor on a secret mis­sion to ask hos­pi­tals to share patient data” by Christi­na Farr; CNBC; 04/05/2018

    “Face­book has asked sev­er­al major U.S. hos­pi­tals to share anonymized data about their patients, such as ill­ness­es and pre­scrip­tion info, for a pro­posed research project. Face­book was intend­ing to match it up with user data it had col­lect­ed, and help the hos­pi­tals fig­ure out which patients might need spe­cial care or treat­ment.”

    Patient data from hospitals. It’s Facebook’s brave new third-party data frontier. Currently under the auspices of medical research, but it’s research for the purpose of showing Facebook’s utility in medical decision-support, which is to say research to demonstrate the utility of sharing patient information with Facebook. That was the general pitch Facebook was making to several major US hospitals, including Stanford. And it’s a plan that, according to Facebook, was still being pursued last month and has merely been “put on pause” in the wake of the Cambridge Analytica scandal:

    ...
    The pro­pos­al nev­er went past the plan­ning phas­es and has been put on pause after the Cam­bridge Ana­lyt­i­ca data leak scan­dal raised pub­lic con­cerns over how Face­book and oth­ers col­lect and use detailed infor­ma­tion about Face­book users.

    “This work has not pro­gressed past the plan­ning phase, and we have not received, shared, or ana­lyzed any­one’s data,” a Face­book spokesper­son told CNBC.

    But as recent­ly as last month, the com­pa­ny was talk­ing to sev­er­al health orga­ni­za­tions, includ­ing Stan­ford Med­ical School and Amer­i­can Col­lege of Car­di­ol­o­gy, about sign­ing the data-shar­ing agree­ment.
    ...

    The way Face­book pitched it, the anonymized data from Face­book and the anonymized data from the hos­pi­tals would be com­bined and used for med­ical com­mu­ni­ty research (research into Face­book as a patient care deci­sion-sup­port part­ner):

    ...
    Face­book’s pitch, accord­ing to two peo­ple who heard it and one who is famil­iar with the project, was to com­bine what a health sys­tem knows about its patients (such as: per­son has heart dis­ease, is age 50, takes 2 med­ica­tions and made 3 trips to the hos­pi­tal this year) with what Face­book knows (such as: user is age 50, mar­ried with 3 kids, Eng­lish isn’t a pri­ma­ry lan­guage, active­ly engages with the com­mu­ni­ty by send­ing a lot of mes­sages).

    The project would then fig­ure out if this com­bined infor­ma­tion could improve patient care, ini­tial­ly with a focus on car­dio­vas­cu­lar health. For instance, if Face­book could deter­mine that an elder­ly patient does­n’t have many near­by close friends or much com­mu­ni­ty sup­port, the health sys­tem might decide to send over a nurse to check in after a major surgery.
    ...

    But what Facebook doesn’t acknowledge in that pitch is that the technique it proposes for anonymizing the data only anonymizes it to everyone except the hospital and Facebook. Facebook could easily deanonymize the hospital data if it got its hands on it. The medical researchers aren’t the privacy threat. The data really is anonymized for them, because they don’t know the patients or the Facebook profiles; they’re just hashed IDs. But Facebook sure as hell is a privacy threat, because it’s Facebook with its hands on the deanonymizable data (see the sketch after the excerpt below):

    ...
    While the data shared would obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as the patien­t’s name, Face­book pro­posed using a com­mon com­put­er sci­ence tech­nique called “hash­ing” to match indi­vid­u­als who exist­ed in both sets. Face­book says the data would have been used only for research con­duct­ed by the med­ical com­mu­ni­ty.

    The project could have raised new con­cerns about the mas­sive amount of data Face­book col­lects about its users, and how this data can be used in ways users nev­er expect­ed.

    ...

    Health sys­tems are noto­ri­ous­ly care­ful about shar­ing patient health infor­ma­tion, in part because of state and fed­er­al patient pri­va­cy laws that are designed to ensure that peo­ple’s sen­si­tive med­ical infor­ma­tion does­n’t end up in the wrong hands.

    To address these pri­va­cy laws and con­cerns, Face­book pro­posed to obscure per­son­al­ly iden­ti­fi­able infor­ma­tion, such as names, in the data being shared by both sides.

    How­ev­er, the com­pa­ny pro­posed using a com­mon cryp­to­graph­ic tech­nique called hash­ing to match indi­vid­u­als who were in both data sets. That way, both par­ties would be able to tell when a spe­cif­ic set of Face­book data matched up with a spe­cif­ic set of patient data.
    ...
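
    To make that deanonymization point concrete, here’s a minimal illustrative sketch, in Python, of how hash-based record matching works. The reporting doesn’t describe Facebook’s actual implementation, and the identifier below is hypothetical; the point is that hashing a shared identifier produces a join key that looks meaningless to outsiders, while any party still holding the raw identifiers can simply recompute the hash and look the “anonymized” record right back up:

    import hashlib

    def hash_id(identifier: str) -> str:
        # Derive a join key by hashing a shared identifier (e.g., an email).
        return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

    # Hospital side: names stripped, records shared keyed only by the hash.
    hospital_records = {
        hash_id("jane.doe@example.com"): {"condition": "heart disease", "age": 50},
    }

    # Facebook side: hash the same identifiers it already holds for its users.
    facebook_profiles = {
        hash_id("jane.doe@example.com"): {"close_friends_nearby": 2, "messages_per_day": 40},
    }

    # Either party holding both hashed data sets can join them on the hash key.
    matched = {
        h: {**hospital_records[h], **facebook_profiles[h]}
        for h in hospital_records.keys() & facebook_profiles.keys()
    }

    # Re-identification: whoever still holds the raw identifiers (here, the
    # platform that computed the hashes) just recomputes hash_id() for each
    # known user and looks the "anonymized" medical record back up.
    assert hash_id("jane.doe@example.com") in matched

    The researchers in the middle only ever see the hash, which is the sense in which the data is “anonymized” for them. But the join only works because both sides can regenerate the same key for the same person, and that’s exactly why the side holding the raw identifiers can reverse it.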

    And note how the issue of patient con­sent did­n’t come up in these ear­ly dis­cus­sions, sug­gest­ing that Face­book is try­ing to work out a sit­u­a­tion where peo­ple don’t know their patient record data was hand­ed over to Face­book:

    ...
    The issue of patient con­sent did not come up in the ear­ly dis­cus­sions, one of the peo­ple said. Crit­ics have attacked Face­book in the past for doing research on users with­out their per­mis­sion. Notably, in 2014, Face­book manip­u­lat­ed hun­dreds of thou­sands of peo­ple’s news feeds to study whether cer­tain types of con­tent made peo­ple hap­pi­er or sad­der. Face­book lat­er apol­o­gized for the study.

    Health pol­i­cy experts say that this health ini­tia­tive would be prob­lem­at­ic if Face­book did not think through the pri­va­cy impli­ca­tions.

    “Con­sumers would­n’t have assumed their data would be used in this way,” said Aneesh Chopra, pres­i­dent of a health soft­ware com­pa­ny spe­cial­iz­ing in patient data called Care­Jour­ney and the for­mer White House chief tech­nol­o­gy offi­cer.

    “If Face­book moves ahead (with its plans), I would be wary of efforts that repur­pose user data with­out explic­it con­sent.”
    ...

    And, of course, it was Facebook’s mad science “Building 8” R&D group that was behind this proposal. The same group behind projects like the mind-reading human-to-computer interface technology (so Facebook can literally data mine your brain activity). And the same R&D group that was led by former DARPA chief Regina Dugan until she left last year with a cryptic message about stepping away to be “purposeful about what’s next, thoughtful about new ways to contribute in times of disruption.” This is next-generation Facebook stuff:

    ...
    The explorato­ry effort to share med­ical-relat­ed data was led by an inter­ven­tion­al car­di­ol­o­gist called Fred­dy Abnousi, who describes his role on LinkedIn as “lead­ing top-secret projects.” It was under the purview of Regi­na Dugan, the head of Face­book’s “Build­ing 8” exper­i­ment projects group, before she left in Octo­ber 2017.
    ...

    It’s a reminder that Facebook’s R&D teams are probably working on all sorts of new ways to tap into data-rich third-party sources. Hospitals are merely one particularly data-rich example of the problem.

    And if Facebook really does cut third-party data brokers out of its algorithms, let’s not forget that Facebook will probably use that as an excuse and imperative to reach out to all sorts of niche third-party data providers for direct access. Like hospitals. Don’t forget that the above plan was merely “put on pause”. They want to do more stuff like this going forward. And why not, if they can get hospitals to give out this kind of data, along with any other kind of institution they can convince to hand over our data. This is how Facebook can go “offline”: with direct data-sharing services, like patient care decision-making support services, one field of institutions at a time. Hospitals are just one example.

    So given that Facebook faces potential congressional action and new regulations, it’s going to be important to keep in mind that those regulations will have to cover more than just data brokerage giants like Experian. Because Facebook is interested in what you tell your doctor too. And presumably lots of other ‘services’ where it fuses its data about you with another data source for combined decision-making support. The more Facebook promises to cut out third-party data, the more it’s going to try to directly collect “offline” data by fusing itself with other facets of our lives. It’s really quite disturbing.

    And who knows who else in the data brokerage industry might try to follow Facebook’s lead. Will Google also want to get into the patient care decision-support market? Third-party data-brokerage decision-making support could potentially be applied to a lot more than just the medical sector. It’s a creepy new profit frontier.

    Beyond that, how else might Face­book attempt to replace the “offline” third-par­ty data it’s pledg­ing to phase out over the next six months? We’ll see, but we can be sure that Face­book is work­ing on some­thing.

    Posted by Pterrafractyl | April 8, 2018, 1:01 am
  5. Here’s a reminder that the proposal to combine Facebook data with patient hospital data — ostensibly for patient care decision-support purposes but also likely so Facebook can get its hands on patient medical record information — isn’t the only project Facebook has put ‘on pause’ (but not canceled) in the wake of the Cambridge Analytica scandal. For example, there’s a new hardware product for your home that Facebook is planning on rolling out later this year.

    It’s a “smart speaker” like the kind Amazon and Google already have on sale. A smart speaker that will sit in your home, listen to everything, answer questions, and schedule things. Potentially with cameras. Your personal home assistant. That’s the market Facebook is getting into later this year. But thanks to the public relations nightmare Facebook is experiencing at the moment, the announcement of this new smart speaker at its developers conference in May has been cancelled. It sounds like the rollout is still planned for this fall, though. So that smart speaker is a useful reminder to the US public and regulators of the future direction Facebook is planning on heading: in-home “offline” data collection using internet-connected smart devices:

    Bloomberg Tech­nol­o­gy

    Face­book Delays Home-Speak­er Unveil Amid Data Cri­sis

    By Sarah Frier
    March 27, 2018, 7:34 PM CDT

    * Social net­work had hoped to show off devices at F8 in May
    * Com­pa­ny still plans to launch prod­ucts lat­er this year

    Face­book Inc. has decid­ed not to unveil new home prod­ucts at its major devel­op­er con­fer­ence in May, in part because the pub­lic is cur­rent­ly so out­raged about the social network’s data-pri­va­cy prac­tices, accord­ing to peo­ple famil­iar with the mat­ter.

    The company’s new hard­ware prod­ucts, con­nect­ed speak­ers with dig­i­tal-assis­tant and video-chat capa­bil­i­ties, are under­go­ing a deep­er review to ensure that they make the right trade-offs regard­ing user data, the peo­ple said. While the hard­ware wasn’t expect­ed to be avail­able until the fall, the com­pa­ny had hoped to pre­view the devices at the largest annu­al gath­er­ing of Face­book devel­op­ers, said the peo­ple, who asked not to be named dis­cussing inter­nal plans.

    The devices are part of Facebook’s plan to become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers. As con­cerns esca­late about Facebook’s col­lec­tion and use of per­son­al data, now may be the wrong time to ask con­sumers to trust it with even more infor­ma­tion by plac­ing a con­nect­ed device in their homes. A Face­book spokes­woman declined to com­ment.

    ...

    The social-media com­pa­ny had already found in focus-group test­ing that users were con­cerned about a Face­book-brand­ed device in their liv­ing rooms, giv­en how much inti­mate data the social net­work col­lects. Face­book still plans to launch the devices lat­er this year.

    At the devel­op­er con­fer­ence, set for May 1, the com­pa­ny will also need to explain new, more restric­tive rules around what kinds of infor­ma­tion app mak­ers can col­lect on their users via Facebook’s ser­vice. The Men­lo Park, Cal­i­for­nia-based com­pa­ny said in a blog post this week that for devel­op­ers, the changes “are not easy,” but are impor­tant to “mit­i­gate any breach of trust with the broad­er devel­op­er ecosys­tem.”

    ———-

    “Face­book Delays Home-Speak­er Unveil Amid Data Cri­sis” by Sarah Frier; Bloomberg Tech­nol­o­gy; 03/27/2018

    “Face­book Inc. has decid­ed not to unveil new home prod­ucts at its major devel­op­er con­fer­ence in May, in part because the pub­lic is cur­rent­ly so out­raged about the social network’s data-pri­va­cy prac­tices, accord­ing to peo­ple famil­iar with the mat­ter.”

    Yeah, it’s understandable that public outrage over years of deceptive and systemic mass privacy violations might complicate the rollout of your new in-home “smart speakers”, which will be listening to everything happening in your home and sending that information back to Facebook. A pause on that grand unveiling does seem prudent.

    And yet Face­book still plans to actu­al­ly launch its new smart speak­ers lat­er this year:

    ...
    The social-media com­pa­ny had already found in focus-group test­ing that users were con­cerned about a Face­book-brand­ed device in their liv­ing rooms, giv­en how much inti­mate data the social net­work col­lects. Face­book still plans to launch the devices lat­er this year.
    ...

    And that planned rollout of these smart speakers later this year is just one element of Facebook’s plan to “become more intimately involved with users’ everyday social lives, using artificial intelligence — following a path forged by Amazon.com Inc. and its Echo in-home smart speakers”:

    ...
    The company’s new hard­ware prod­ucts, con­nect­ed speak­ers with dig­i­tal-assis­tant and video-chat capa­bil­i­ties, are under­go­ing a deep­er review to ensure that they make the right trade-offs regard­ing user data, the peo­ple said. While the hard­ware wasn’t expect­ed to be avail­able until the fall, the com­pa­ny had hoped to pre­view the devices at the largest annu­al gath­er­ing of Face­book devel­op­ers, said the peo­ple, who asked not to be named dis­cussing inter­nal plans.

    The devices are part of Facebook’s plan to become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers. As con­cerns esca­late about Facebook’s col­lec­tion and use of per­son­al data, now may be the wrong time to ask con­sumers to trust it with even more infor­ma­tion by plac­ing a con­nect­ed device in their homes. A Face­book spokes­woman declined to com­ment.
    ...

    “The devices are part of Facebook’s plan to become more inti­mate­ly involved with users’ every­day social lives, using arti­fi­cial intel­li­gence — fol­low­ing a path forged by Amazon.com Inc. and its Echo in-home smart speak­ers.”

    Yep, Face­book has all sorts of plans to become more inti­mate­ly involved with your every­day life. Using arti­fi­cial intel­li­gence. And smart speak­ers. And no pri­va­cy con­cerns, of course.

    And in fairness, this move to sell consumer devices that monitor you for the purpose of offering useful services with the data they collect (and for selling you ads and profiling you) merely follows in the footsteps of companies like Google and Amazon with their wildly popular smart speakers. As the following article notes, a recent Gallup poll found that 22 percent of Americans use “home personal assistants” like Google Home or Amazon Echo. That is a huge percentage of the American public that’s already handing out exactly the kind of data Facebook is trying to collect with its new smart speaker.

    And as the following article also notes, if the creepy patents Google and Amazon have already filed are any indication of what we can expect from Facebook, we should expect Facebook to work on things like incorporating smart speakers into smart home AI systems for monitoring children, with whisper-detection capabilities and the ability to issue verbal commands at the kids. The smart home would replace the television as the technological parent of today’s kids, and one of the mega-corporations selling this technology would get audio and visual access to your home. Yes, the existing Google and Amazon patents would incorporate visual data too, since these smart speakers tend to have cameras.

    And one patent involves a scenario where the camera on a smart speaker recognizes a t-shirt on the floor, recognizes a picture of Will Smith on the shirt, ties that to a database of that person’s browsing history to see if they looked up Will Smith content online, and then serves up targeted ads if it finds a Will Smith hit. That’s a real patent from Google, and that’s the kind of Orwellian patent race that Facebook is quietly getting ready to join later this year:

    The New York Times

    Hey, Alexa, What Can You Hear? And What Will You Do With It?

    By SAPNA MAHESHWARI
    MARCH 31, 2018

    Ama­zon ran a com­mer­cial on this year’s Super Bowl that pre­tend­ed its dig­i­tal assis­tant Alexa had tem­porar­i­ly lost her voice. It fea­tured celebri­ties like Rebel Wil­son, Car­di B and even the company’s chief exec­u­tive, Jeff Bezos.

    While the ad riffed on what Alexa can say to users, the more intrigu­ing ques­tion may be what she and oth­er dig­i­tal assis­tants can hear — espe­cial­ly as more peo­ple bring smart speak­ers into their homes.

    Ama­zon and Google, the lead­ing sell­ers of such devices, say the assis­tants record and process audio only after users trig­ger them by push­ing a but­ton or utter­ing a phrase like “Hey, Alexa” or “O.K., Google.” But each com­pa­ny has filed patent appli­ca­tions, many of them still under con­sid­er­a­tion, that out­line an array of pos­si­bil­i­ties for how devices like these could mon­i­tor more of what users say and do. That infor­ma­tion could then be used to iden­ti­fy a person’s desires or inter­ests, which could be mined for ads and prod­uct rec­om­men­da­tions.

    In one set of patent applications, Amazon describes how a “voice sniffer algorithm” could be used on an array of devices, like tablets and e-book readers, to analyze audio almost in real time when it hears words like “love,” “bought” or “dislike.” A diagram included with the application illustrated how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a Wine of the Month Club membership.

    Some patent appli­ca­tions from Google, which also owns the smart home prod­uct mak­er Nest Labs, describe how audio and visu­al sig­nals could be used in the con­text of elab­o­rate smart home setups.

    One appli­ca­tion details how audio mon­i­tor­ing could help detect that a child is engag­ing in “mis­chief” at home by first using speech pat­terns and pitch to iden­ti­fy a child’s pres­ence, one fil­ing said. A device could then try to sense move­ment while lis­ten­ing for whis­pers or silence, and even pro­gram a smart speak­er to “pro­vide a ver­bal warn­ing.”

    A sep­a­rate appli­ca­tion regard­ing per­son­al­iz­ing con­tent for peo­ple while respect­ing their pri­va­cy not­ed that voic­es could be used to deter­mine a speaker’s mood using the “vol­ume of the user’s voice, detect­ed breath­ing rate, cry­ing and so forth,” and med­ical con­di­tion “based on detect­ed cough­ing, sneez­ing and so forth.”

    The same appli­ca­tion out­lines how a device could “rec­og­nize a T‑shirt on a floor of the user’s clos­et” bear­ing Will Smith’s face and com­bine that with a brows­er his­to­ry that shows search­es for Mr. Smith “to pro­vide a movie rec­om­men­da­tion that dis­plays, ‘You seem to like Will Smith. His new movie is play­ing in a the­ater near you.’”

    In a state­ment, Ama­zon said the com­pa­ny took “pri­va­cy seri­ous­ly” and did “not use cus­tomers’ voice record­ings for tar­get­ed adver­tis­ing.” Ama­zon said that it filed “a num­ber of for­ward-look­ing patent appli­ca­tions that explore the full pos­si­bil­i­ties of new tech­nol­o­gy,” and that they “take mul­ti­ple years to receive and do not nec­es­sar­i­ly reflect cur­rent devel­op­ments to prod­ucts and ser­vices.”

    Google said it did not “use raw audio to extrap­o­late moods, med­ical con­di­tions or demo­graph­ic infor­ma­tion.” The com­pa­ny added, “All devices that come with the Google Assis­tant, includ­ing Google Home, are designed with user pri­va­cy in mind.”

    Tech com­pa­nies apply for a dizzy­ing num­ber of patents every year, many of which are nev­er used and are years from even being pos­si­ble.

    Still, Jamie Court, the pres­i­dent of Con­sumer Watch­dog, a non­prof­it advo­ca­cy group in San­ta Mon­i­ca, Calif., which pub­lished a study of some of the patent appli­ca­tions in Decem­ber, said, “When you read parts of the appli­ca­tions, it’s real­ly clear that this is spy­ware and a sur­veil­lance sys­tem meant to serve you up to adver­tis­ers.”

    The com­pa­nies, Mr. Court added, are “basi­cal­ly going to be find­ing out what our home life is like in qual­i­ta­tive ways.”

    Google called Con­sumer Watchdog’s claims “unfound­ed,” and said, “Prospec­tive prod­uct announce­ments should not nec­es­sar­i­ly be inferred from our patent appli­ca­tions.”

    A recent Gallup poll found that 22 percent of Americans used devices like Google Home or Amazon Echo. The growing adoption of smart speakers means that gadgets, some of which contain up to eight microphones and a camera, are being placed in kitchens and bedrooms and used to answer questions, control appliances and make phone calls. Apple recently introduced its own version, called the HomePod.

    ...

    Both Amazon and Google have emphasized that devices with Alexa and Google Assistant store voice recordings from users only after they are intentionally triggered. Amazon’s Echo and its newer smart speakers with screens use lights to show when they are streaming audio to the cloud, and consumers can view and delete their recordings on the Alexa smartphone app or on Amazon’s website (though they are warned online that doing so “may degrade” their experience). Google Home also has a light that indicates when it is recording, and users can similarly see and delete that audio online.

    Ama­zon says voice record­ings may help ful­fill requests and improve its ser­vices, while Google says the data helps it learn over time to pro­vide bet­ter, more per­son­al­ized respons­es.

    But the ecosys­tem around voice data is still evolv­ing.

    Take the thou­sands of third-par­ty apps devel­oped for Alexa called “skills,” which can be used to play games, dim lights or pro­vide clean­ing advice. While Ama­zon said it didn’t share users’ actu­al record­ings with third par­ties, its terms of use for Alexa say it may share the con­tent of their requests or infor­ma­tion like their ZIP codes. Google says it will “gen­er­al­ly” not pro­vide audio record­ings to third-par­ty ser­vice providers, but may send tran­scrip­tions of what peo­ple say.

    And some devices have already shown that they are capa­ble of record­ing more than what users expect. Google faced some embar­rass­ment last fall when a batch of Google Home Min­is that it dis­trib­uted at com­pa­ny events and to jour­nal­ists were almost con­stant­ly record­ing.

    In a stark­er exam­ple, detec­tives inves­ti­gat­ing a death at an Arkansas home sought access to audio on an Echo device in 2016. Ama­zon resist­ed, but the record­ings were ulti­mate­ly shared with the per­mis­sion of the defen­dant, James Bates. (A judge lat­er dis­missed Mr. Bates’s first-degree mur­der charge based on sep­a­rate evi­dence.)

    Kath­leen Zell­ner, his lawyer, said in an inter­view that the Echo had been record­ing more than it was sup­posed to. Mr. Bates told her that it had been reg­u­lar­ly light­ing up with­out being prompt­ed, and had logged con­ver­sa­tions that were unre­lat­ed to Alexa com­mands, includ­ing a con­ver­sa­tion about foot­ball in a sep­a­rate room, she said.

    “It was just extreme­ly slop­py the way the acti­va­tion occurred,” Ms. Zell­ner said.

    The Elec­tron­ic Pri­va­cy Infor­ma­tion Cen­ter has rec­om­mend­ed more robust dis­clo­sure rules for inter­net-con­nect­ed devices, includ­ing an “algo­rith­mic trans­paren­cy require­ment” that would help peo­ple under­stand how their data was being used and what auto­mat­ed deci­sions were then being made about them.

    Sam Lester, the center’s con­sumer pri­va­cy fel­low, said he believed that the abil­i­ties of new smart home devices high­light­ed the need for Unit­ed States reg­u­la­tors to get more involved with how con­sumer data was col­lect­ed and used.

    “A lot of these tech­no­log­i­cal inno­va­tions can be very good for con­sumers,” he said. “But it’s not the respon­si­bil­i­ty of con­sumers to pro­tect them­selves from these prod­ucts any more than it’s their respon­si­bil­i­ty to pro­tect them­selves from the safe­ty risks in food and drugs. It’s why we estab­lished a Food and Drug Admin­is­tra­tion years ago.”

    ———–

    “Hey, Alexa, What Can You Hear? And What Will You Do With It?” by SAPNA MAHESHWARI; The New York Times; 03/31/2018

    “While the ad riffed on what Alexa can say to users, the more intrigu­ing ques­tion may be what she and oth­er dig­i­tal assis­tants can hear — espe­cial­ly as more peo­ple bring smart speak­ers into their homes.”

    It’s one of the conundrums of the smart speaker business model: it’s obvious these smart speaker manufacturers would love to just collect all the information they can about what people are saying and doing, but they need to maintain the pretense of not doing that in order to get people to buy their devices. So it’s no surprise that Google and Amazon routinely make it clear that their devices only record information after they’ve been activated by the users. But as these patents make clear, there are all sorts of home life surveillance applications that these companies have in mind. Like the smart home child-monitoring system, with whisper detection and mischief-detecting AI capabilities:

    ...
    Ama­zon and Google, the lead­ing sell­ers of such devices, say the assis­tants record and process audio only after users trig­ger them by push­ing a but­ton or utter­ing a phrase like “Hey, Alexa” or “O.K., Google.” But each com­pa­ny has filed patent appli­ca­tions, many of them still under con­sid­er­a­tion, that out­line an array of pos­si­bil­i­ties for how devices like these could mon­i­tor more of what users say and do. That infor­ma­tion could then be used to iden­ti­fy a person’s desires or inter­ests, which could be mined for ads and prod­uct rec­om­men­da­tions.

    In one set of patent applications, Amazon describes how a “voice sniffer algorithm” could be used on an array of devices, like tablets and e-book readers, to analyze audio almost in real time when it hears words like “love,” “bought” or “dislike.” A diagram included with the application illustrated how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a Wine of the Month Club membership.

    Some patent appli­ca­tions from Google, which also owns the smart home prod­uct mak­er Nest Labs, describe how audio and visu­al sig­nals could be used in the con­text of elab­o­rate smart home setups.

    One appli­ca­tion details how audio mon­i­tor­ing could help detect that a child is engag­ing in “mis­chief” at home by first using speech pat­terns and pitch to iden­ti­fy a child’s pres­ence, one fil­ing said. A device could then try to sense move­ment while lis­ten­ing for whis­pers or silence, and even pro­gram a smart speak­er to “pro­vide a ver­bal warn­ing.”
    ...

    “One appli­ca­tion details how audio mon­i­tor­ing could help detect that a child is engag­ing in “mis­chief” at home by first using speech pat­terns and pitch to iden­ti­fy a child’s pres­ence, one fil­ing said. A device could then try to sense move­ment while lis­ten­ing for whis­pers or silence, and even pro­gram a smart speak­er to “pro­vide a ver­bal warn­ing.”

    Listening for the mischievous whispers of children and issuing a verbal warning. Those are the kinds of capabilities companies like Google, Amazon, and now Facebook are going to be investing in. And it will probably be very popular, because a smart home system that literally watches the kids would be a very handy tool for parents. But it’s going to come at the cost of opening up our homes to monitoring by one of these data giants. And that’s insane, right?
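
    And to give a sense of how simple this kind of always-listening logic is once speech has been transcribed, here’s a minimal, purely illustrative Python sketch of the “voice sniffer” idea from the Amazon application excerpted above: scan a transcript for trigger words and map hits to ad categories. The trigger words come from the patent reporting; everything else (the function name, the ad categories) is hypothetical:

    import re

    # Hypothetical mapping of patent-style trigger words to ad categories.
    TRIGGERS = {
        "love": "affinity_ads",
        "bought": "purchase_intent_ads",
        "dislike": "preference_suppression",
    }

    def sniff(transcript: str) -> list:
        # Return (trigger word, ad category) pairs found in a transcript.
        words = re.findall(r"[a-z']+", transcript.lower())
        return [(w, TRIGGERS[w]) for w in words if w in TRIGGERS]

    # Example: a snippet of an overheard phone call, per the patent's diagram.
    print(sniff("I love the zoo, and I just bought a wine club membership"))
    # -> [('love', 'affinity_ads'), ('bought', 'purchase_intent_ads')]

    The hard part is the speech-to-text, and these devices already do that. Once a transcript exists, turning overheard conversation into ad targeting is little more than a lookup table.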

    Another patent noted how the smart speakers could detect medical conditions from your voice, like detecting coughing, sneezing, and your breathing rate. And that’s just an example of the kind of personal data these devices are clearly capable of gathering, and they’re only going to get better at it:

    ...
    A sep­a­rate appli­ca­tion regard­ing per­son­al­iz­ing con­tent for peo­ple while respect­ing their pri­va­cy not­ed that voic­es could be used to deter­mine a speaker’s mood using the “vol­ume of the user’s voice, detect­ed breath­ing rate, cry­ing and so forth,” and med­ical con­di­tion “based on detect­ed cough­ing, sneez­ing and so forth.”

    The same appli­ca­tion out­lines how a device could “rec­og­nize a T‑shirt on a floor of the user’s clos­et” bear­ing Will Smith’s face and com­bine that with a brows­er his­to­ry that shows search­es for Mr. Smith “to pro­vide a movie rec­om­men­da­tion that dis­plays, ‘You seem to like Will Smith. His new movie is play­ing in a the­ater near you.’”
    ...

    “The same appli­ca­tion out­lines how a device could “rec­og­nize a T‑shirt on a floor of the user’s clos­et” bear­ing Will Smith’s face and com­bine that with a brows­er his­to­ry that shows search­es for Mr. Smith “to pro­vide a movie rec­om­men­da­tion that dis­plays, ‘You seem to like Will Smith. His new movie is play­ing in a the­ater near you.’””

    The smart speaker camera is going to cross-reference the things it sees in your home with your browser history. For ad targeting. That’s a patent.

    It’s why the warnings from Consumer Watchdog’s Jamie Court that these consumer home devices are really just home life spyware should be heeded. Because it’s pretty obvious that the plan is to turn these things into home activity monitoring devices. And with 22 percent of Americans saying they use a “home personal assistant” in a recent Gallup poll, that really does make the coming era of smart device home monitoring a public privacy nightmare:

    ...
    Still, Jamie Court, the pres­i­dent of Con­sumer Watch­dog, a non­prof­it advo­ca­cy group in San­ta Mon­i­ca, Calif., which pub­lished a study of some of the patent appli­ca­tions in Decem­ber, said, “When you read parts of the appli­ca­tions, it’s real­ly clear that this is spy­ware and a sur­veil­lance sys­tem meant to serve you up to adver­tis­ers.”

    The com­pa­nies, Mr. Court added, are “basi­cal­ly going to be find­ing out what our home life is like in qual­i­ta­tive ways.”

    Google called Con­sumer Watchdog’s claims “unfound­ed,” and said, “Prospec­tive prod­uct announce­ments should not nec­es­sar­i­ly be inferred from our patent appli­ca­tions.”

    A recent Gallup poll found that 22 percent of Americans used devices like Google Home or Amazon Echo. The growing adoption of smart speakers means that gadgets, some of which contain up to eight microphones and a camera, are being placed in kitchens and bedrooms and used to answer questions, control appliances and make phone calls. Apple recently introduced its own version, called the HomePod.
    ...

    Of course, both Google and Amazon assure us that their devices are only recording audio after they’re triggered, and that the recordings are only used to improve the user experience and make it more personalized (a toy sketch of that gating model follows the excerpt):

    ...
    Both Amazon and Google have emphasized that devices with Alexa and Google Assistant store voice recordings from users only after they are intentionally triggered. Amazon’s Echo and its newer smart speakers with screens use lights to show when they are streaming audio to the cloud, and consumers can view and delete their recordings on the Alexa smartphone app or on Amazon’s website (though they are warned online that doing so “may degrade” their experience). Google Home also has a light that indicates when it is recording, and users can similarly see and delete that audio online.

    Ama­zon says voice record­ings may help ful­fill requests and improve its ser­vices, while Google says the data helps it learn over time to pro­vide bet­ter, more per­son­al­ized respons­es.
    ...
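
    Here is that toy sketch: a purely illustrative picture in Python, not any vendor’s actual code, of the gating model the companies describe, in which audio is handled locally and only leaves the device after a wake phrase is detected. The wake phrases are stand-ins, and the transcription step is assumed to have already happened on-device:

    WAKE_PHRASES = ("hey alexa", "ok google")  # hypothetical stand-ins

    def process_audio(chunks):
        streaming = False
        for chunk in chunks:  # each chunk: a short, locally transcribed snippet
            if not streaming:
                if any(p in chunk.lower() for p in WAKE_PHRASES):
                    streaming = True  # indicator light on; start cloud upload
                # otherwise the chunk is discarded locally, never uploaded
            else:
                yield chunk  # "recorded": streamed to the cloud for processing

    # Only the post-trigger chunk leaves the device in this model.
    print(list(process_audio(["dinner chatter", "ok google, play jazz", "jazz request"])))

    The privacy question lives entirely inside that gate: the hardware hears everything, and it’s policy, not capability, that decides what leaves the device. Which is why devices that misfire, like the always-recording Google Home Minis described below, matter so much.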

    And while Google assures us those voice record­ings will only be used to per­son­al­ize the expe­ri­ence, Google’s user agree­ment includes the pos­si­bil­i­ty of send­ing tran­scripts of what peo­ple say to third-par­ty ser­vice providers. And it “gen­er­al­ly” won’t send audio sam­ples to those third-par­ty providers. It’s an exam­ple of how lit­tle audio and visu­al snip­pets of peo­ple’s home life are becom­ing the new “mouse click” of con­sumer data col­lect­ed and sold in exchange for a dig­i­tal ser­vice:

    ...
    Take the thou­sands of third-par­ty apps devel­oped for Alexa called “skills,” which can be used to play games, dim lights or pro­vide clean­ing advice. While Ama­zon said it didn’t share users’ actu­al record­ings with third par­ties, its terms of use for Alexa say it may share the con­tent of their requests or infor­ma­tion like their ZIP codes. Google says it will “gen­er­al­ly” not pro­vide audio record­ings to third-par­ty ser­vice providers, but may send tran­scrip­tions of what peo­ple say.
    ...

    And it’s not like these patents are necessarily only future privacy nightmares. They’re potentially present privacy nightmares, if these devices are actually just collecting data all the time in secret. And in a number of documented cases that’s exactly what happened, including a murder case partially solved by an Amazon Echo with a propensity to start recording randomly:

    ...
    And some devices have already shown that they are capa­ble of record­ing more than what users expect. Google faced some embar­rass­ment last fall when a batch of Google Home Min­is that it dis­trib­uted at com­pa­ny events and to jour­nal­ists were almost con­stant­ly record­ing.

    In a stark­er exam­ple, detec­tives inves­ti­gat­ing a death at an Arkansas home sought access to audio on an Echo device in 2016. Ama­zon resist­ed, but the record­ings were ulti­mate­ly shared with the per­mis­sion of the defen­dant, James Bates. (A judge lat­er dis­missed Mr. Bates’s first-degree mur­der charge based on sep­a­rate evi­dence.)

    Kath­leen Zell­ner, his lawyer, said in an inter­view that the Echo had been record­ing more than it was sup­posed to. Mr. Bates told her that it had been reg­u­lar­ly light­ing up with­out being prompt­ed, and had logged con­ver­sa­tions that were unre­lat­ed to Alexa com­mands, includ­ing a con­ver­sa­tion about foot­ball in a sep­a­rate room, she said.

    “It was just extreme­ly slop­py the way the acti­va­tion occurred,” Ms. Zell­ner said.
    ...

    And that’s all why better consumer regulation in this area really is called for, because there’s no way consumers can realistically navigate this technological landscape:

    ...
    Sam Lester, the center’s con­sumer pri­va­cy fel­low, said he believed that the abil­i­ties of new smart home devices high­light­ed the need for Unit­ed States reg­u­la­tors to get more involved with how con­sumer data was col­lect­ed and used.

    “A lot of these tech­no­log­i­cal inno­va­tions can be very good for con­sumers,” he said. “But it’s not the respon­si­bil­i­ty of con­sumers to pro­tect them­selves from these prod­ucts any more than it’s their respon­si­bil­i­ty to pro­tect them­selves from the safe­ty risks in food and drugs. It’s why we estab­lished a Food and Drug Admin­is­tra­tion years ago.”
    ...

    And that’s one of the big questions that really should be asked in the wake of the Cambridge Analytica scandal: does the US need something like the Food and Drug Administration for data privacy and devices? Something far more substantial than the regulatory infrastructure that exists today, dedicated to ensuring transparency of data collection practices? It seems like the answer is obviously yes. And if the Cambridge Analytica scandal isn’t evidence enough, those Orwellian patents should suffice.

    And as the Cam­bridge Ana­lyt­i­ca scan­dal also reminds us, we can either wait for the data abus­es to hap­pen and only belat­ed­ly deal with the prob­lem or we can deal with it proac­tive­ly. And deal­ing with it proac­tive­ly real­is­ti­cal­ly involves some­thing like an FDA for data pri­va­cy.

    But as we also just saw with those creepy patents, espe­cial­ly the child monitoring/scolding patent, con­sumers have much more than data pri­va­cy con­cerns with the world of smart devices Google and Face­book and Ama­zon have in mind. That future is going to involve devices that are lit­er­al­ly rais­ing the kids. Move over tele­vi­sion, it’s par­ent­ing brought to you by smart home AIs and Sil­i­con Val­ley.

    And let’s also not for­get one of the oth­er lessons that we can take from the Cam­bridge Ana­lyt­i­ca scan­dal: the data col­lect­ed by these smart devices isn’t just going to be col­lect­ed by Google and Face­book and Ama­zon. Some of that data is going to be col­lect­ed by all the third-par­ty app devel­op­ers too. Home life, brought to you by Google/Facebook/Amazon. That’s going to be a thing.

    At the same time it’s unde­ni­able that there will be very pos­i­tive appli­ca­tions for this kind of tech­nol­o­gy. And that’s why it’s such a shame com­pa­nies with the track record of Face­book and Google and Ama­zon are the ones lead­ing this kind of tech­no­log­i­cal rev­o­lu­tion: like much tech­nol­o­gy, the con­sumer home smart device tech­nol­o­gy is heav­i­ly reliant on trust in the man­u­fac­tur­er and trust that the man­u­fac­tur­er won’t screw things up and turn their device into a pri­va­cy night­mare. That’s not the kind of sit­u­a­tion where you want Google, Face­book, and Ama­zon lead­ing the way.

    So that’s all some­thing to keep in mind when Face­book does­n’t talk about its upcom­ing smart speak­ers at its annu­al devel­op­ers con­fer­ence next month.

    Posted by Pterrafractyl | April 8, 2018, 9:32 pm
  6. Here’s a fas­ci­nat­ing angle to the Cam­bridge Ana­lyt­i­ca scan­dal that involves an East­ern Ukrain­ian politi­cian with pro-EU lean­ings and ties to Yulia Tymoshenko and the Azov Bat­tal­ion:

    It turns out Cambridge Analytica outsourced the production of its “Ripon” psychological profiling software to a separate company, AggregateIQ (AIQ). AIQ was founded with the help of Cambridge Analytica co-founder/whistle-blower Christopher Wylie, so it’s basically a subsidiary of Cambridge Analytica. But they were technically separate companies, and it turns out that AIQ could end up playing a big role in an investigation into whether or not UK election laws were violated by the “Vote Leave” camp during the lead-up to the Brexit vote.

    It looks like the “Vote Leave” camp basically secretly spent more than it legally could, using AIQ as the vehicle for doing this. Here’s how it worked: There was an official “leave” political campaign, but there were also third-party pro-leave campaigns. One of those was Leave.EU. In 2016, Robert Mercer offered Leave.EU the services of Cambridge Analytica for free. Leave.EU relied on Cambridge Analytica’s services for its voter influence campaign.

    The official Vote Leave campaign, on the other hand, relied on AIQ for its data analytics services. Vote Leave eventually paid AIQ roughly 40 percent of its £7 million campaign budget. Here’s where the illegality came in: Vote Leave also ended up gathering more cash than British law legally allowed it to spend. Vote Leave could legally donate that cash to other campaigns, but it couldn’t then coordinate with those campaigns. But that’s exactly what it looks like Vote Leave did. About a week before the EU referendum, Vote Leave inexplicably donated £625,000 to Darren Grimes, the founder of a small, unofficial Brexit campaign called BeLeave. Grimes then immediately gave a substantial amount of the cash he received to AIQ. Vote Leave also donated £100,000 to another Leave campaign called Veterans for Britain, which then paid AIQ precisely that amount. So Vote Leave was basically using these small ‘leave’ groups as campaign money laundering vehicles, with AIQ as the final destination of that money.

    That’s all why AIQ is now the focus of British investigators. AIQ’s role in this came to light in part from thousands of pages of code that were discovered by a cybersecurity researcher at UpGuard on the web page of a developer named Ali Yassine, who worked for SCL Group. Within the code are notes that show SCL had requested that code be turned over by AIQ’s lead developer, Koji Hamid Pourseyed.

    AIQ’s contract with SCL stipulates that SCL is the sole owner of “Ripon”, Cambridge Analytica’s campaign platform. The documents also include an internal wiki where AIQ developers discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sources, including starting with the RNC Data Trust.” It’s a reminder that the story of Cambridge Analytica isn’t just a story about the Trump campaign or the Brexit vote. It’s also about the Republican Party’s political analytics in general.

    Also included in the discovered AIQ files were notes related to active projects for Cruz, Abbott, and a Ukrainian oligarch, Sergei Taruta.

    So who is Sergei Taru­ta? Well, he’s a Ukrain­ian bil­lion­aire and co-founder of the Indus­tri­al Union of Don­bass, one of the largest com­pa­nies in Ukraine. He was appoint­ed gov­er­nor of the Donet­sk Oblast in East­ern Ukraine by Petro Poroshenko in March of 2014 before being fired in Octo­ber of 2014.

    Taruta went on to get elected to parliament, where he remains today. He recently co-founded the “Osnova” political party, which describes itself as populist and a promoter of “liberal conservatism” (presumably “liberal” in the libertarian sense). It’s suspected by some that Rinat Akhmetov, Ukraine’s wealthiest oligarch and another Eastern Ukrainian who straddles the line between backing the Kiev government and maintaining friendly ties with the pro-Russian segments of Eastern Ukraine, is also one of the party’s backers. Akhmetov was a significant backer of Yanukovych’s Party of Regions and is a dominant figure in the Opposition Bloc today. It was Akhmetov who initially hired Paul Manafort back in 2005 to act as a political consultant.

    It’s reportedly pretty clear that Taruta’s Osnova party is designed to splinter away ex-supporters of Viktor Yanukovych’s Party of Regions, based on the politicians who have already declared they are going to join it. And yet as a politician Taruta is characterized as having never really tried to cozy up to the pro-Russian side, and he has a history of supporting pro-EU politicians. In 2006 he supported Viktor Yuschenko over Viktor Yanukovych. In 2010 he backed Yulia Tymoshenko over Viktor Yanukovych.

    So Taruta is a pro-EU Eastern Ukrainian politician, which is notable because he’s not the only pro-EU Eastern Ukrainian politician to be involved with entities and figures in the #TrumpRussia orbit. Don’t forget about Andreii Artemenko, the Ukrainian politician who was involved with that ‘peace plan’ proposal with Michael Cohen and Felix Sater — a proposal that may have been part of a broader offer made to Russia covering both Ukraine and Syria and Iran — and how Artemenko was a pro-EU member of the far right “Radical Party” who also has ties to Right Sector. Artemenko headed up the Kiev department of Yulia Tymoshenko’s Batkivshchyna Party back in 2006 and was serving in a coalition headed by Tymoshenko.

    Also recall that the figure who appears to have arranged for the initial contact between Andreii Artemenko and Michael Cohen and Felix Sater was Alexander Oronov, the father-in-law of Michael Cohen’s brother. And Oronov himself co-owned an ethanol plant with Viktor Topolov, another Ukrainian oligarch who was Viktor Yuschenko’s coal minister and who became an assassination target of Semion Mogilevych’s mafia organization. One of Topolov’s partners who was also targeted by Mogilevych, Slava Konstantinovsky, ended up forming and joining one of the “volunteer battalions” fighting the separatists in the East.

    So now we learn that AIQ (so, basically Cambridge Analytica) is doing some sort of work for Sergei Taruta, putting another Eastern Ukrainian oligarch politician with pro-EU leanings in the orbit of this #TrumpRussia scandal.

    So what kind of work did AIQ do for Taruta? That’s unclear. But it seems reasonable to assume that it’s work involving Taruta’s new party in Ukraine and its attempts to splinter off former Party of Regions voters.

    But as we’re also going to see, Sergei Taruta has been doing some lobbying work in Washington DC. Rather curious lobbying work: It turns out Taruta was at the center of a bizarre ‘congressional hearing’ that took place in the US Capitol last September. This hearing focused on corruption allegations Taruta has been promoting for over a year against the National Bank of Ukraine, the country’s central bank.

    There were two Ukrainian television stations covering the event and pretending it was a real congressional hearing. Former CIA director James Woolsey, who was briefly part of the Trump campaign, was also at the event, along with former Republican House member Connie Mack, who is now a lobbyist. Mack was basically pretending to speak on behalf of the US Congress, expressing outrage over Taruta’s corruption allegations for the Ukrainian television audiences and declaring his resolve to investigate them. Rep. Ron Estes, a freshman Republican, booked the room in the US Capitol for Mack and the lobbying effort in the first place. Estes’s office later said it won’t happen again.

    And there’s another twist to this strange attack on the National Bank of Ukraine: According to Vox Ukraine, a number of the criticisms Taruta brings against the bank are based on distortions and half-truths. In other words, it doesn’t appear to be a genuine anti-corruption campaign. So what is Taruta’s motivation? Well, it’s notable that his criticism of the National Bank of Ukraine extends back to the actions of its previous chair, Valeriya Gontareva (Hontareva). Gontareva was appointed chairman of the bank in June of 2014. And one of her first big moves was the government takeover of Ukraine’s biggest commercial bank, Privatbank. Privatbank was co-founded by Ihor Kolomoisky, another Eastern Ukrainian oligarch.

    Ihor Kolo­moisky was appoint­ed gov­er­nor of the East­ern oblast of Dnipropetro­vsk at the same time Taru­ta was appoint­ed gov­er­nor of Donet­sk. Kolo­moisky has been sup­port­ing the Kiev gov­ern­ment in the civ­il war by finan­cial­ly sup­port­ing a num­ber of the vol­un­teer bat­tal­ions, includ­ing direct­ly cre­at­ing the large pri­vate Dnipro Bat­tal­ion. As we’ll see, both Kolo­moisky and Taru­ta report­ed­ly sup­port­ed the neo-Nazi Azov Bat­tal­ion accord­ing to a 2015 Reuters report. In oth­er words, Kolo­moisky is an East­ern Ukrain­ian oli­garch with ties to the far right, kind of like Andreii Arte­menko.

    Kolo­moisky was­n’t hap­py about the takeover of Pri­vat­bank. When Gontare­va presided over the bank’s nation­al­iza­tion, its accounts were miss­ing more than $5 bil­lion in large part because the bank lent so much mon­ey to peo­ple with con­nec­tions to Kolo­moisky. After the bank takeover, Gontare­va received numer­ous threats. On April 10, 2017, she announced at a press con­fer­ence that she was resign­ing from her post.

    So it looks like Sergei Taruta might be waging an international PR battle against the National Bank of Ukraine as part of a counter-move on behalf of Ihor Kolomoisky and the Privatbank investors.

    And then there’s the person who actually organized this fake congressional hearing. A little-known figure came forward to take full responsibility: Anatoly Motkin, a one-time aide to a Georgian oligarch accused of leading a coup attempt. Motkin is the founder and president of StrategEast, a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe.”

    That’s all some new context to factor into the analysis of Cambridge Analytica and the forces it was working for: one of its clients is a pro-EU Eastern Ukrainian oligarch who just set up a political party designed to appeal to former Yanukovych supporters.

    Ok, so first, let’s look at the story of Cambridge Analytica and AggregateIQ (AIQ), the Cambridge Analytica offshoot that was used both to develop the GOP’s “Ripon” analytics software and to act as the analytics firm for the Vote Leave campaign. The work AIQ was doing for Vote Leave was apparently so valuable that Vote Leave secretly laundered almost a million pounds through two smaller ‘leave’ groups in order to get that money to AIQ and secretly exceed the legal spending caps. And that’s why the discovery of thousands of AIQ documents by a cybersecurity firm is so politically significant in the UK right now. But as those documents also reveal, AIQ was doing work for other clients: Texas Governor Greg Abbott, Texas Senator Ted Cruz, and Ukrainian oligarch Sergei Taruta:

    Giz­mo­do

    Aggre­gateIQ Cre­at­ed Cam­bridge Ana­lyt­i­ca’s Elec­tion Soft­ware, and Here’s the Proof

    Dell Cameron
    3/26/18 12:50pm

    A lit­tle-known Cana­di­an data firm ensnared by an inter­na­tion­al inves­ti­ga­tion into alleged wrong­do­ing dur­ing the Brex­it cam­paign cre­at­ed an elec­tion soft­ware plat­form mar­ket­ed by Cam­bridge Ana­lyt­i­ca, accord­ing to a batch of inter­nal files obtained exclu­sive­ly by Giz­mo­do.

    Dis­cov­ered by a secu­ri­ty researcher last week, the files con­firm that Aggre­gateIQ, a British Colum­bia-based data firm, devel­oped the tech­nol­o­gy Cam­bridge Ana­lyt­i­ca sold to clients for mil­lions of dol­lars dur­ing the 2016 US pres­i­den­tial elec­tion. Hun­dreds if not thou­sands of pages of code, as well as detailed notes signed by Aggre­gateIQ staff, whol­ly sub­stan­ti­ate recent reports that Cam­bridge Analytica’s soft­ware plat­form was not its own cre­ation.

    What’s more, the files reveal that AggregateIQ—also known as “AIQ”—is the developer behind campaign apps created for Texas Senator Ted Cruz and Texas Governor Greg Abbott, as well as a Ukrainian steel magnate named Serhiy Taruta, head of the country’s newly formed Osnova party.

    Other records show the firm once pitched an app to Breitbart News, the far-right website funded by hedge-fund billionaire Robert Mercer—Cambridge Analytica’s principal investor—and is currently contracted by WPA Intelligence, a US-based consultancy founded by Republican pollster Chris Wilson, who was director of digital strategy for Cruz’s 2016 presidential campaign.

    The files were unearthed last week by Chris Vick­ery, research direc­tor at UpGuard, a Cal­i­for­nia-based cyber risk firm. On Sun­day night, after Giz­mo­do reached out to Jeff Sil­vester, co-founder of AIQ, the files were quick­ly tak­en offline.

    The files are like­ly to draw the inter­est of inves­ti­ga­tors on both sides of the Atlantic. Cana­di­an and British reg­u­la­tors are cur­rent­ly pur­su­ing leads to estab­lish whether mul­ti­ple “Leave” cam­paigns ille­gal­ly coor­di­nat­ed dur­ing the 2016 EU ref­er­en­dum.

    Ties between AIQ and Cam­bridge Analytica—the focus of recent wide­spread furor over the mis­use of data pulled from 50 mil­lion Face­book accounts—has like­wise drawn the inter­est of US and British author­i­ties. Cam­bridge CEO Alexan­der Nix was sus­pend­ed by his com­pa­ny last week after British reporters pub­lished covert­ly record­ed footage show­ing Nix boast­ing about brib­ing and black­mail­ing polit­i­cal rivals.

    Cam­bridge Ana­lyt­i­ca did not respond to a request for com­ment.

    AIQ is bound by a non-disclosure agreement the company signed in 2014 to take on former client SCL Group, Cambridge Analytica’s parent company, according to a source with direct knowledge of the contract.

    In an inter­view over the week­end with London’s The Observ­er, Christo­pher Wylie, the for­mer Cam­bridge Ana­lyt­i­ca employ­ee turned whistle­blow­er, claimed that he helped estab­lish AIQ years ago in an effort to help SCL Group expand its data oper­a­tions. Sil­vester denied that Wylie was ever involved on that lev­el, but admits that Wylie helped AIQ land its first big con­tract.

    “We did some work with SCL and had a con­tract with them in 2014 for some cus­tom soft­ware devel­op­ment,” Sil­vester told Giz­mo­do. “We last worked with SCL in 2016 and have not worked with them since.”

    Data Exposed

    UpGuard first dis­cov­ered code belong­ing to AIQ last Thurs­day on the web page of a devel­op­er named Ali Yas­sine who worked for SCL Group. With­in the code—uploaded to the web­site GitHub in August 2016—are notes that show SCL had request­ed that code be turned over by AIQ’s lead devel­op­er, Koji Hamid Pourseyed.

    AIQ’s con­tract with SCL, a por­tion of which was pub­lished by The Guardian last year, stip­u­lates that SCL is the sole own­er of the intel­lec­tu­al prop­er­ty per­tain­ing to the contract—namely, the devel­op­ment of Ripon, Cam­bridge Analytica’s cam­paign plat­form.

    The find led UpGuard to unearth a code repos­i­to­ry on AIQ’s web­site. With­in it were count­less files link­ing AIQ to the Ripon pro­gram, as well as notes relat­ed to active projects for Cruz, Abbott, and the Ukrain­ian oli­garch.

    In an internal wiki, AIQ developers also discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sorces [sic], including starting with the RNC Data Trust.” (RNC Data Trust is the Republican party’s primary voter file provider.) “The primary data source will be combined with state voter files, consumer data, third party data providers, historical WPA survey and projects and customer data.”

    The Data­base of Truth, accord­ing to the wiki, is a project under devel­op­ment for WPA Intel­li­gence.

    Wil­son told Giz­mo­do on Mon­day that he has almost no knowl­edge of the con­tro­ver­sy unfold­ing over AIQ’s role in the UK. “I would nev­er work with a firm that I felt had done some­thing ille­gal or even uneth­i­cal,” he said. AIQ’s work for WPA fol­lowed a com­pet­i­tive bid process, he said. “They offered us the best capa­bil­i­ties for the best price.”

    Vapor­ware

    Until recent­ly, Cam­bridge Ana­lyt­i­ca oper­at­ed large­ly in the shad­ows. For years, it planned to tar­get right-lean­ing vot­ers for a host of high-pro­file polit­i­cal cam­paigns, work­ing for both Cruz and Pres­i­dent Don­ald Trump. With its bil­lion­aire back­ing, the firm promised to lever­age oceans of data col­lect­ed about voters—which we now know was acquired from sources both legal and unau­tho­rized.

    Known as Project Ripon, Cam­bridge Analytica’s goal was to fur­nish Repub­li­can can­di­dates with a tech­nol­o­gy plat­form capa­ble of reach­ing vot­ers through the use of psy­cho­log­i­cal pro­fil­ing. (SCL Group has long used behav­ioral research to con­duct “influ­ence oper­a­tions” on behalf of mil­i­tary and polit­i­cal clients world­wide.)

    Cam­bridge Ana­lyt­i­ca, which even­tu­al­ly chose AIQ to help build its plat­form, once boast­ed that it pos­sessed files on as many as 230 mil­lion Amer­i­cans com­piled from thou­sands of data points, includ­ing psy­cho­log­i­cal data har­vest­ed from social media, as well as com­mer­cial data avail­able to vir­tu­al­ly any­one who can afford it. The com­pa­ny intend­ed to clas­si­fy vot­ers by select per­son­al­i­ty types, apply­ing its sys­tem to craft mes­sages, online ads, and mail­ers that, it believed, would res­onate dis­tinc­tive­ly with vot­ers of each group.

    Sources with­in the Cruz cam­paign, which large­ly fund­ed Ripon’s devel­op­ment, claim the soft­ware nev­er actu­al­ly func­tioned. One for­mer staffer told Giz­mo­do the prod­uct was noth­ing but “vapor­ware.”

    AIQ’s inter­nal files show the com­pa­ny had unlim­it­ed access to the Ripon code, and a source with­in the Cruz cam­paign con­firmed to Giz­mo­do that AIQ was sole­ly respon­si­ble for the software’s devel­op­ment.

    The cam­paign even­tu­al­ly dumped more than $5.8 mil­lion into Ripon’s development—which is only about half the amount Robert Mer­cer, Cam­bridge Analytica’s prin­ci­pal investor, poured into Cruz’s White House bid. (After Trump took the nom­i­na­tion, Mer­cer con­tributed more than $15.5 mil­lion to his cam­paign, $5 mil­lion of which end­ed up back in Cam­bridge Analytica’s pock­ets.)

    A for­mer Cruz aide, who request­ed anonymi­ty to dis­cuss their work for the cam­paign, told Giz­mo­do that even at the high­est lev­els, no one knew that Cam­bridge Ana­lyt­i­ca had out­sourced Ripon’s devel­op­ment. “Ulti­mate­ly, I found out through some emails that they’re not even doing this work,” the source said. “It was being out­sourced to AIQ.”

    Accord­ing to the aide, when Cruz’s staff began to ques­tion AIQ over whether it was behind Ripon’s devel­op­ment, AIQ con­firmed that it was, but said it was nev­er sup­posed to dis­cuss its work.

    The Brex­it

    In 2016, Mercer reportedly offered up Cambridge Analytica’s services for free to Leave.EU, one of several groups urging the UK to depart the European Union, according to The Guardian. Leave.EU was not, however, the official “Leave” group representing the Brexit campaign. Instead, a separate group, known as Vote Leave, was formally chosen by election officials to lead the referendum.

    Where­as Leave.EU relied on Cam­bridge to influ­ence vot­ers through its use of data ana­lyt­ics, Vote Leave turned to AIQ, even­tu­al­ly pay­ing the firm rough­ly 40 per­cent of its £7 mil­lion cam­paign bud­get, accord­ing to The Guardian. Over time, how­ev­er, Vote Leave amassed more cash than it was legal­ly allowed to spend. While UK elec­tion laws per­mit­ted Vote Leave to gift its remain­ing funds to oth­er cam­paigns, fur­ther coor­di­na­tion between them was express­ly for­bid­den.

    Rough­ly a week before the EU ref­er­en­dum, Vote Leave inex­plic­a­bly donat­ed £625,000 to a young fash­ion design stu­dent named Dar­ren Grimes, the founder of a small, unof­fi­cial Brex­it cam­paign called BeLeave. Accord­ing to a Buz­zFeed inves­ti­ga­tion, Grimes imme­di­ate­ly gave a “sub­stan­tial amount” of the cash he received from Vote Leave to AIQ. Vote Leave also donat­ed £100,000 to anoth­er Leave cam­paign called Vet­er­ans for Britain, which, accord­ing to The Guardian, then paid AIQ pre­cise­ly that amount.

    A review of the AIQ files by UpGuard’s Chris Vick­ery revealed sev­er­al men­tions of Vote Leave and at least one men­tion of Vet­er­ans for Britain, appar­ent­ly relat­ed to web­site devel­op­ment.

    In an interview on Monday, Shahmir Sanni, a former volunteer for the Vote Leave campaign, told The Globe and Mail that he had “first-hand knowledge about the alleged wrongdoing in the Brexit campaign.” Sanni, who was 22 when he worked for Vote Leave, said he was “encouraged to spin out” another campaign, but that he had “no control” over the £625,000 that was immediately spent on AIQ’s services.

    British authorities are pursuing leads to establish whether BeLeave and Veterans for Britain were merely conduits through which Vote Leave sought to direct additional funds to AIQ. While the UK Electoral Commission took no action in early 2017, in November it claimed that “new information” had “come to light,” giving the commission “reasonable grounds to suspect an offence may have been committed.”

    In an email, the UK Elec­tion Com­mis­sion told Giz­mo­do its inves­ti­ga­tion into Vote Leave pay­ments was ongo­ing.

    ...

    ———-

    “Aggre­gateIQ Cre­at­ed Cam­bridge Ana­lyt­i­ca’s Elec­tion Soft­ware, and Here’s the Proof” by Dell Cameron; Giz­mo­do; 3/26/2018

    “A lit­tle-known Cana­di­an data firm ensnared by an inter­na­tion­al inves­ti­ga­tion into alleged wrong­do­ing dur­ing the Brex­it cam­paign cre­at­ed an elec­tion soft­ware plat­form mar­ket­ed by Cam­bridge Ana­lyt­i­ca, accord­ing to a batch of inter­nal files obtained exclu­sive­ly by Giz­mo­do.”

    As we can see, AIQ was the under-the-radar SCL subsidiary that actually created “Ripon”, the political modeling software Cambridge Analytica was offering to clients. Cambridge Analytica co-founder/whistle-blower Christopher Wylie helped SCL found the firm. AIQ co-founder Jeff Silvester admits that Wylie helped AIQ land its first big contract, but asserts that Wylie was never closely involved with the company. Silvester also admits that AIQ had a contract with SCL in 2014, but says it hasn’t worked with SCL since 2016. So AIQ is officially acting like it’s not really an SCL offshoot at this point:

    ...
    The files were unearthed last week by Chris Vick­ery, research direc­tor at UpGuard, a Cal­i­for­nia-based cyber risk firm. On Sun­day night, after Giz­mo­do reached out to Jeff Sil­vester, co-founder of AIQ, the files were quick­ly tak­en offline.

    ...

    In an inter­view over the week­end with London’s The Observ­er, Christo­pher Wylie, the for­mer Cam­bridge Ana­lyt­i­ca employ­ee turned whistle­blow­er, claimed that he helped estab­lish AIQ years ago in an effort to help SCL Group expand its data oper­a­tions. Sil­vester denied that Wylie was ever involved on that lev­el, but admits that Wylie helped AIQ land its first big con­tract.

    “We did some work with SCL and had a con­tract with them in 2014 for some cus­tom soft­ware devel­op­ment,” Sil­vester told Giz­mo­do. “We last worked with SCL in 2016 and have not worked with them since.”
    ...

    And based on AIQ’s contract with SCL, we have a better idea of when exactly AIQ’s work with SCL ended in 2016: the code found by UpGuard was uploaded to the code-repository website GitHub in August of 2016. That suggests that was the point when the code was effectively handed off from AIQ to SCL. And August of 2016, it’s important to recall, is the same month that Steve Bannon, a Cambridge Analytica company officer (and “the boss” according to Wylie), went to work as campaign manager of the Trump campaign. So you have to wonder if that’s a coincidence or a reflection of concerns over this SCL/Cambridge Analytica/AIQ nexus getting some unwanted attention:

    ...
    Data Exposed

    UpGuard first dis­cov­ered code belong­ing to AIQ last Thurs­day on the web page of a devel­op­er named Ali Yas­sine who worked for SCL Group. With­in the code—uploaded to the web­site GitHub in August 2016—are notes that show SCL had request­ed that code be turned over by AIQ’s lead devel­op­er, Koji Hamid Pourseyed.

    AIQ’s con­tract with SCL, a por­tion of which was pub­lished by The Guardian last year, stip­u­lates that SCL is the sole own­er of the intel­lec­tu­al prop­er­ty per­tain­ing to the contract—namely, the devel­op­ment of Ripon, Cam­bridge Analytica’s cam­paign plat­form.
    ...

    And in those dis­cov­ered AIQ doc­u­ments are notes on projects AIQ was doing for Cruz, Abbott and Taru­ta. Along with notes on a project for the GOP called The Data­base of Truth:

    ...
    The find led UpGuard to unearth a code repos­i­to­ry on AIQ’s web­site. With­in it were count­less files link­ing AIQ to the Ripon pro­gram, as well as notes relat­ed to active projects for Cruz, Abbott, and the Ukrain­ian oli­garch.

    In an internal wiki, AIQ developers also discussed a project known as The Database of Truth, a system that “integrates, obtains, and normalizes data from disparate sorces [sic], including starting with the RNC Data Trust.” (RNC Data Trust is the Republican party’s primary voter file provider.) “The primary data source will be combined with state voter files, consumer data, third party data providers, historical WPA survey and projects and customer data.”

    The Data­base of Truth, accord­ing to the wiki, is a project under devel­op­ment for WPA Intel­li­gence.
    ...

    AIQ is mak­ing the GOP a “Data­base of Truth”. Great.
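
    To make the wiki’s description of a system that “integrates, obtains, and normalizes” disparate data a bit more concrete, here is a minimal sketch of what that kind of voter-file merge typically looks like. To be clear, this is purely illustrative: the field names, the match key, and the sample records are hypothetical stand-ins for the RNC Data Trust primary file, state voter files, and consumer data the wiki describes, not AIQ’s actual schema or code.

        # Purely illustrative sketch of "integrate, obtain, and normalize":
        # folding extra data sources into a primary voter file on a crude match key.
        # All field names and records here are hypothetical, not AIQ's actual schema.

        def normalize(record):
            """Build a match key: lowercased name, date of birth, 5-digit ZIP."""
            return (
                record["first_name"].strip().lower(),
                record["last_name"].strip().lower(),
                record["dob"],            # e.g. "1970-01-31"
                record["zip"][:5],
            )

        def merge(primary_file, *other_sources):
            """Enrich the primary file with other sources; the primary wins on conflicts."""
            merged = {normalize(r): dict(r) for r in primary_file}
            for source in other_sources:
                for record in source:
                    key = normalize(record)
                    if key in merged:
                        for field, value in record.items():
                            merged[key].setdefault(field, value)
                    else:
                        merged[key] = dict(record)
            return list(merged.values())

        voter_file = [{"first_name": "Jane", "last_name": "Doe", "dob": "1970-01-31",
                       "zip": "78701", "party_score": 0.82}]
        consumer_data = [{"first_name": "JANE", "last_name": "Doe ", "dob": "1970-01-31",
                          "zip": "78701-4321", "magazine_subscriber": True}]

        profiles = merge(voter_file, consumer_data)
        # -> one combined profile carrying both party_score and magazine_subscriber

    The point of the sketch is just the architecture the wiki describes: one “primary data source” that everything else (state voter files, consumer data, third-party data) gets normalized against and folded into.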

    And that sounds like a separate system from Ripon. The Database of Truth appears to focus on the kind of data found in data brokerages (state voter files, consumer data, third-party data providers, etc.), whereas the Ripon software appeared to be specifically focused on the kind of psychological profiling Cambridge Analytica was specializing in:

    ...
    Vapor­ware

    Until recent­ly, Cam­bridge Ana­lyt­i­ca oper­at­ed large­ly in the shad­ows. For years, it planned to tar­get right-lean­ing vot­ers for a host of high-pro­file polit­i­cal cam­paigns, work­ing for both Cruz and Pres­i­dent Don­ald Trump. With its bil­lion­aire back­ing, the firm promised to lever­age oceans of data col­lect­ed about voters—which we now know was acquired from sources both legal and unau­tho­rized.

    Known as Project Ripon, Cam­bridge Analytica’s goal was to fur­nish Repub­li­can can­di­dates with a tech­nol­o­gy plat­form capa­ble of reach­ing vot­ers through the use of psy­cho­log­i­cal pro­fil­ing. (SCL Group has long used behav­ioral research to con­duct “influ­ence oper­a­tions” on behalf of mil­i­tary and polit­i­cal clients world­wide.)

    Cam­bridge Ana­lyt­i­ca, which even­tu­al­ly chose AIQ to help build its plat­form, once boast­ed that it pos­sessed files on as many as 230 mil­lion Amer­i­cans com­piled from thou­sands of data points, includ­ing psy­cho­log­i­cal data har­vest­ed from social media, as well as com­mer­cial data avail­able to vir­tu­al­ly any­one who can afford it. The com­pa­ny intend­ed to clas­si­fy vot­ers by select per­son­al­i­ty types, apply­ing its sys­tem to craft mes­sages, online ads, and mail­ers that, it believed, would res­onate dis­tinc­tive­ly with vot­ers of each group.
    ...
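
    As a rough illustration of the “classify voters by select personality types” approach described above, here is a toy sketch of OCEAN-style (Big Five) scoring and segmentation. The survey items, scoring, and thresholds are all invented for the example; nothing here is Ripon’s actual model, which, as we’re about to see, may never have really functioned at all.

        # Toy sketch of OCEAN-style psychographic segmentation. The survey items,
        # scoring, and thresholds are invented for illustration; this is not
        # Ripon's actual logic.

        # Each Big Five trait is scored from 1-5 Likert answers to two survey items.
        TRAIT_ITEMS = {
            "openness":          ["enjoys_new_ideas", "dislikes_routine"],
            "conscientiousness": ["plans_ahead", "pays_bills_on_time"],
            "extraversion":      ["talkative", "energized_by_crowds"],
            "agreeableness":     ["trusts_others", "avoids_conflict"],
            "neuroticism":       ["worries_often", "easily_stressed"],
        }

        def trait_scores(answers):
            """Average each trait's items into a 1-5 score."""
            return {
                trait: sum(answers[item] for item in items) / len(items)
                for trait, items in TRAIT_ITEMS.items()
            }

        def segment(scores, threshold=3.5):
            """Map the dominant high-scoring trait to a crude messaging segment."""
            high = [trait for trait, score in scores.items() if score >= threshold]
            if "neuroticism" in high:
                return "fear-framed messaging"
            if "conscientiousness" in high:
                return "tradition-and-order messaging"
            return "generic messaging"

        answers = {"enjoys_new_ideas": 2, "dislikes_routine": 2,
                   "plans_ahead": 4, "pays_bills_on_time": 5,
                   "talkative": 3, "energized_by_crowds": 2,
                   "trusts_others": 3, "avoids_conflict": 4,
                   "worries_often": 4, "easily_stressed": 4}

        print(segment(trait_scores(answers)))   # -> fear-framed messaging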

    And just as we’ve heard from the Trump campaign, with its assertions that the Cambridge Analytica software wasn’t actually very useful, the Cruz campaign is also calling this Ripon software just “vaporware”. Denials of the effectiveness of Cambridge Analytica’s psychological profiling methods have been one of the across-the-board assertions we’ve seen from the people involved with this story:

    ...
    Sources with­in the Cruz cam­paign, which large­ly fund­ed Ripon’s devel­op­ment, claim the soft­ware nev­er actu­al­ly func­tioned. One for­mer staffer told Giz­mo­do the prod­uct was noth­ing but “vapor­ware.”

    AIQ’s inter­nal files show the com­pa­ny had unlim­it­ed access to the Ripon code, and a source with­in the Cruz cam­paign con­firmed to Giz­mo­do that AIQ was sole­ly respon­si­ble for the software’s devel­op­ment.

    The cam­paign even­tu­al­ly dumped more than $5.8 mil­lion into Ripon’s development—which is only about half the amount Robert Mer­cer, Cam­bridge Analytica’s prin­ci­pal investor, poured into Cruz’s White House bid. (After Trump took the nom­i­na­tion, Mer­cer con­tributed more than $15.5 mil­lion to his cam­paign, $5 mil­lion of which end­ed up back in Cam­bridge Analytica’s pock­ets.)
    ...

    And while everyone involved with Cambridge Analytica has been claiming it’s largely useless, it’s hard to ignore the Brexit scandal, in which Vote Leave used two outside groups to funnel roughly £725,000 in additional spending to AIQ for its analytics services, in excess of the legal spending caps. That’s quite a vote of confidence by Vote Leave:

    ...
    The Brex­it

    In 2016, Mercer reportedly offered up Cambridge Analytica’s services for free to Leave.EU, one of several groups urging the UK to depart the European Union, according to The Guardian. Leave.EU was not, however, the official “Leave” group representing the Brexit campaign. Instead, a separate group, known as Vote Leave, was formally chosen by election officials to lead the referendum.

    Where­as Leave.EU relied on Cam­bridge to influ­ence vot­ers through its use of data ana­lyt­ics, Vote Leave turned to AIQ, even­tu­al­ly pay­ing the firm rough­ly 40 per­cent of its £7 mil­lion cam­paign bud­get, accord­ing to The Guardian. Over time, how­ev­er, Vote Leave amassed more cash than it was legal­ly allowed to spend. While UK elec­tion laws per­mit­ted Vote Leave to gift its remain­ing funds to oth­er cam­paigns, fur­ther coor­di­na­tion between them was express­ly for­bid­den.

    Rough­ly a week before the EU ref­er­en­dum, Vote Leave inex­plic­a­bly donat­ed £625,000 to a young fash­ion design stu­dent named Dar­ren Grimes, the founder of a small, unof­fi­cial Brex­it cam­paign called BeLeave. Accord­ing to a Buz­zFeed inves­ti­ga­tion, Grimes imme­di­ate­ly gave a “sub­stan­tial amount” of the cash he received from Vote Leave to AIQ. Vote Leave also donat­ed £100,000 to anoth­er Leave cam­paign called Vet­er­ans for Britain, which, accord­ing to The Guardian, then paid AIQ pre­cise­ly that amount.

    A review of the AIQ files by UpGuard’s Chris Vick­ery revealed sev­er­al men­tions of Vote Leave and at least one men­tion of Vet­er­ans for Britain, appar­ent­ly relat­ed to web­site devel­op­ment.

    In an interview on Monday, Shahmir Sanni, a former volunteer for the Vote Leave campaign, told The Globe and Mail that he had “first-hand knowledge about the alleged wrongdoing in the Brexit campaign.” Sanni, who was 22 when he worked for Vote Leave, said he was “encouraged to spin out” another campaign, but that he had “no control” over the £625,000 that was immediately spent on AIQ’s services.
    ...
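
    For a back-of-the-envelope sense of the sums involved, here are the figures reported above tallied up. The ~£2.8 million figure is simply 40 percent of Vote Leave’s £7 million budget, and the £725,000 is the ceiling on what the two donations could have routed to AIQ (BuzzFeed reported only that a “substantial amount” of the BeLeave money went to AIQ):

        # Back-of-the-envelope tally of the AIQ payments reported above.
        direct_to_aiq = 0.40 * 7_000_000   # ~40% of Vote Leave's £7M budget
        via_beleave   = 625_000            # donation to BeLeave, a "substantial
                                           # amount" of which reportedly went to AIQ
        via_veterans  = 100_000            # donation to Veterans for Britain,
                                           # which paid AIQ precisely that amount

        print(f"Direct to AIQ:              £{direct_to_aiq:,.0f}")            # £2,800,000
        print(f"Routed via others, at most: £{via_beleave + via_veterans:,}")  # £725,000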

    As we can see, AIQ is an important entity in terms of understanding the broader scope of the kind of work and clients this SCL/Cambridge Analytica/Bannon/Mercer political influence project was undertaking. AIQ is critical for understanding the extent of the role this influence network played in the Brexit vote, but it’s also important for showing the other kinds of clients this network was taking on. Like Serhiy Taruta.

    Now let’s take a closer look at Taruta with this Ukrainian Week profile from October about the creation of Taruta’s new Osnova political party. Many suspect that Rinat Akhmetov, a major backer of the Opposition Bloc, is behind Taruta’s new party. But there is no evidence of that yet, and the party so far appears designed to appeal to former Party of Regions voters, many of whom are now Opposition Bloc voters. So questions about Akhmetov’s involvement remain open, but it’s clear that Osnova is trying to appeal to Akhmetov’s political constituency.

    As the article also notes, Taruta has a history of supporting pro-EU politicians, including Viktor Yushchenko and Yulia Tymoshenko. And he’s never cozied up to the pro-Russian groups.

    But Taruta does have one very notable Kremlin connection: in 2010, 50%+2 shares of Taruta’s industrial conglomerate, the Industrial Union of Donbas (IUD), were bought up by Russia’s Vneshekonombank, the foreign trade bank. It is 100% state-owned, and Russian Premier Dmitry Medvedev is the chair of its supervisory board. So Taruta does have a notable direct business tie with the Russian government. But as the article notes, there are no indications Taruta or his new party are taking Russian money. And based on his political history it would be surprising if he were taking Kremlin money, because he’s clearly part of the pro-European branch of Ukraine’s politics.

    So we have AIQ doing some sort of work for Serhiy Taruta. Is that work data analytics for Osnova? We don’t know. It probably involves Taruta’s campaign against the National Bank of Ukraine, because Taruta is clearly very interested in waging that political fight. So interested that he staged a fake congressional hearing at the US Capitol that was broadcast on two Ukrainian television channels and sent the message that the US Congress was going to investigate Taruta’s claims about corruption at Ukraine’s central bank. So it’s possible AIQ was involved in that kind of political work too. Especially given what we know about Cambridge Analytica and SCL and their reliance on psychological warfare methods to change public opinion. A fake congressional hearing, made possible with the help of a Republican congressman, Rep. Estes, who scheduled the room at the US Capitol, seems like exactly the kind of advice we should expect from the Cambridge Analytica people.

    The question of what exactly AIQ has been doing for Taruta would be a pretty big question given the scandal and mystery swirling around Cambridge Analytica and SCL. The fake congressional hearing makes it a much weirder question about the ultimate goals and agenda of the people behind Cambridge Analytica:

    Ukrain­ian Week

    Osno­va: Taruta’s polit­i­cal foun­da­tion
    Found­ed this fall, Donet­sk oli­garch Ser­hiy Taruta’s Osno­va or Foun­da­tion par­ty has already start­ed cam­paign­ing although the next Verk­hov­na Rada elec­tion is two years away

    Denys Kazan­skyi
    18 Octo­ber, 2017

    Dozens of bill­boards with his por­trait and the party’s name and slo­gan have popped up in Kyiv and in the south­east­ern oblasts of Ukraine. Infor­ma­tion about the new par­ty is not read­i­ly avail­able, how­ev­er, as it is still most­ly just on paper. But any oli­garchic project stands a good chance of meet­ing the thresh­old require­ment for gain­ing seats in the Rada based on a sol­id adver­tis­ing bud­get, as past expe­ri­ence has shown.

    Short on ide­ol­o­gy

    The Osno­va site states that the party’s ide­ol­o­gy is based on the prin­ci­ples of lib­er­al con­ser­vatism. In Ukrain­ian pol­i­tics, how­ev­er, these words typ­i­cal­ly mean very lit­tle. What kind of con­ser­vatism are we talk­ing about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s oth­er pop­ulists, all of whom count on a fair­ly unde­mand­ing elec­toral base. In some ways, he resem­bles Ser­hiy Tihip­ko, who tried over and over again to enter pol­i­tics as a “new face,” although he had been in pol­i­tics since his days in the Dnipropetro­vsk Oblast Kom­so­mol Exec­u­tive.

    Who will join the Taru­ta team? Whose inter­ests will the par­ty pro­mote and who will be its allies? Where will its mon­ey come from? Taru­ta him­self is a very ambigu­ous fig­ure. For a long time he was seen as an untyp­i­cal Donet­sk home­boy: a high-pro­file busi­ness­man with an intel­li­gent demeanor with­out any known crim­i­nal back­ground. He also dif­fered from the oth­er Donet­sk politi­cians in his polit­i­cal posi­tions. He nev­er played up to pro-Russ­ian par­ties and move­ments, sup­port­ing, instead, pro-Ukrain­ian forces that were nev­er very pop­u­lar in Don­bas.

    For instance, in a 2006 inter­view in Ukrain­s­ka Prav­da, the tycoon admit­ted that in 2004 he had cast his bal­lot for Vik­tor Yushchenko. “My posi­tion was the Euro­pean choice,” he empha­sized. In that same inter­view he also men­tioned that he liked Yulia Tymoshenko.

    In 2010, Taru­ta, in fact, sup­port­ed Tymoshenko in her bid for the pres­i­den­cy. “Of the two can­di­dates run­ning today, only Yulia Tymoshenko will be able to effec­tive­ly defend busi­ness inter­ests and over­come cor­rup­tion,” he said in Feb­ru­ary 2010. “She rep­re­sents polit­i­cal and eco­nom­ic sta­bil­i­ty in Ukraine and will work in the country’s inter­ests, not the inter­ests of some par­tic­u­lar busi­ness clan. Besides, Ms. Tymoshenko has well-deserved author­i­ty in the eyes of lead­ers in Rus­sia and Europe, which means she will always be able to work out a deal in favor of Ukrain­ian busi­ness. Only with Pres­i­dent Tymoshenko will it be pos­si­ble for Ukraine to see all those promis­ing growth plans that we have out­lined with our new Russ­ian part­ners.”

    Pos­i­tive image, poor per­for­mance

    And so, when Taru­ta was appoint­ed Gov­er­nor of Donet­sk Oblast in 2014, just as the anti-Ukrain­ian putsch began there, Ukraini­ans by and large saw this as some­thing pos­i­tive. Taru­ta seemed to be just the right can­di­date with the strength to resolve the sit­u­a­tion: a local oli­garch who under­stood the local men­tal­i­ty well and was ori­ent­ed towards Ukraine. But it was not to be. Taru­ta proved to be a weak politi­cian and was unable to get con­trol over the sit­u­a­tion. The local police and SBU kept sab­o­tag­ing orders from above and had lit­tle inter­est in defend­ing the Oblast State Admin­is­tra­tion. Unlike Ihor Kolo­moyskiy in neigh­bor­ing Dnipropetro­vsk Oblast, Taru­ta either did not dare or did not want to put togeth­er pro-Ukrain­ian Self-Defense squads. And so the Don­bas Bat­tal­ion was actu­al­ly formed in Dnipro, and not in the Don­bas. Mean­while in Donet­sk Oblast, the advan­tage went to the mil­i­tants almost from the start.

    After he resigned as gov­er­nor, Taru­ta was elect­ed to the Verk­hov­na Rada. Even­tu­al­ly, he announced the for­ma­tion of his own polit­i­cal par­ty. Based on infor­ma­tion leaked in the press, it was clear from the begin­ning that this new par­ty was intend­ed to pick up the elec­torate of the now-defunct Par­ty of the Regions, most­ly in Ukraine’s south­ern and east­ern oblasts. This cer­tain­ly makes sense, but the prob­lem is that there are sev­er­al sim­i­lar par­ties already busy work­ing to win over this same elec­torate. The monop­oly enjoyed by PR has long since col­lapsed. Now, vot­ers in those regions have the Oppo­si­tion Bloc or Opobloc, Vadym Rabinovych’s Za Zhyt­tia [For Life] Par­ty, the for­ev­er-lurk­ing Vidrodzhen­nia [Revival] found­ed in 2004, and Nash Krai [Our Coun­try]. Osno­va will make five in this clus­ter and can only hope that yet anoth­er project along the lines of the also-defunct Social­ist Par­ty doesn’t make an appear­ance in the run-up to the 2019 elec­tion. In this kind of sit­u­a­tion, the chances of Osno­va suc­ceed­ing with­out form­ing an alliance with any of the more pop­u­lar polit­i­cal par­ties are very low.

    There were rumors at one point that Taruta’s party was being supported by Rinat Akhmetov, but this is hard to confirm, one way or another, especially since relations between the two Donetsk tycoons were always strained. The chances of this being true are at most 50–50. One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed. If this is true, however, then Osnova is pretty much guaranteed a spot in the next Rada, because Akhmetov has both the money and the administrative leverage in Donetsk, Zaporizhzhia and Dnipropetrovsk Oblasts, where his businesses are located, to make sure of this.

    Fill­ing Osno­va’s ranks

    So far, it’s not obvi­ous that Akhme­tov is behind this new par­ty of Taruta’s. Of those who have already con­firmed that they will join Osno­va, Akhmetov’s peo­ple are not espe­cial­ly evi­dent. Right now, the par­ty appears to be draw­ing peo­ple who are not espe­cial­ly known in Ukrain­ian pol­i­tics. Indeed, judg­ing from the party’s Face­book page, there are only three or four spokesper­sons oth­er than Taru­ta.

    ...

    The PR-Rus­sia con­nec­tion

    Why Ser­hiy Taru­ta decid­ed to put his faith in peo­ple relat­ed to the Yanukovych regime is not entire­ly under­stand­able. Is this the per­son­al ini­tia­tive of the oli­garch him­self or is it at the request of some silent investor? It’s not clear who actu­al­ly is fund­ing the par­ty, but it seems unlike­ly that Taru­ta is putting up his own mon­ey. Although this oligarch’s worth was esti­mat­ed at over US $2 bil­lion back in 2008, he claims today that his wealth has shrunk a thou­sand-fold. In an inter­view with Hard Talk in 2015, he announced that he had pre­served only 0.1% of his for­mer wealth.

    Which brings the sto­ry around to Taruta’s busi­ness inter­ests. In 2010, 50%+2 shares of the Indus­tri­al Union of Don­bas (IUD), found­ed by the oli­garch, was bought up by Russia’s Vneshekonom­bank, the for­eign trade bank. That means that Taru­ta and the bank are part­ners. Taru­ta him­self holds only 24.999% of IUD, while the bank is 100% state-owned and Russ­ian Pre­mier Dmit­ry Medvedev is the chair of its super­vi­so­ry board. And so, whether he intend­ed it to be so or not, Ser­hiy Taru­ta is busi­ness part­ners with the Krem­lin.

    What kind of influ­ence the Krem­lin has over the Donet­sk oli­garch and his par­ty is not entire­ly clear and, so far, there is no evi­dence. Nor is there evi­dence that Osno­va is being financed by Russ­ian mon­ey. Giv­en the polit­i­cal his­to­ries of the party’s spokesper­sons, how­ev­er, and the nature of Taruta’s busi­ness inter­ests, it’s worth get­ting a good glimpse into its inner work­ings. It’s entire­ly pos­si­ble that, under the aegis of a pro-Euro­pean politi­cian, some more agents of influ­ence from an ene­my state could find their way to seats in the Rada.

    In the base­ment of the Capi­tol

    Anna Kor­but

    On Sep­tem­ber 25, New­sOne report­ed on Ser­hiy Taruta’s event in Wash­ing­ton, “The high­est lev­el in the US, the Spe­cial Con­gres­sion­al Com­mit­tee for Finan­cial Issues [sic], will find out about the cor­rup­tion at the NBU, Only thanks to the sys­tem­at­ic work of the team that col­lect­ed evi­dence about the cor­rup­tion of the top offi­cials at the Nation­al Bank of Ukraine, will the strongest in the world find out about this.” At the event, Taru­ta and Olek­san­dr Zavadet­skiy, a one-time direc­tor of the NBU Depart­ment for Mon­i­tor­ing indi­vid­u­als con­nect­ed to banks, were plan­ning to report on the deals by-then-depart­ed NBU Gov­er­nor Vale­ria Hontare­va had cut. The event did take place… in a tiny base­ment room at the Capi­tol where the Con­gress meets, with a very small audience—and New­sOne cam­eras.

    The speak­ers at the event were intro­duced, not with­out some prob­lems in pro­nun­ci­a­tion, by Con­nie Mack IV, a Repub­li­can mem­ber of the US House of Rep­re­sen­ta­tives from 2005 to 2013. Since leav­ing his Con­gres­sion­al career behind, Mack has been work­ing as a lob­by­ist and con­sul­tant. Over 2015–2016, his name often came up as a lob­by­ist for Hungary’s Vik­tor Orban Admin­is­tra­tion in the US.

    For­mer CIA direc­tor James Woolsey Jr. offered a few gen­er­al­ized com­ments about cor­rup­tion. In addi­tion to being the CIA boss in 1993–1995 under the first Clin­ton Admin­is­tra­tion, Woolsey held high posts under oth­er US pres­i­dents as well and was involved in nego­ti­a­tions with the USSR over arms treaties in the 1980s.

    Interestingly, there were no current elected American officials in attendance at the event. Moreover, there is no such creature as a “Special Congressional Committee for Financial Issues” in the US Congress. The Congress has a Financial Services Committee and the Senate has a Finance Committee. Among the joint Congressional committees there is none that specializes specifically on financial issues. The Senate Finance Committee met on September 25, but the agenda included only propositions from a number of senators on how to reform the Affordable Care Act. Pretty much the only reaction to Taruta’s US event was an article by JP Carroll in the Weekly Standard under the headline, “The mother of all fake news items: How a windowless room in the basement of the Capitol was set up to look like a fake [sic] Congressional hearing.” And some angry tweets in response.

    Lat­er on, in fact, some ques­tions did arise, such as the valid­i­ty of infor­ma­tion pub­lished in a pam­phlet enti­tled: “Hontare­va: a threat to Ukraine’s eco­nom­ic secu­ri­ty,” which was hand­ed out to par­tic­i­pants. Yet, this very brochure had been chal­lenged near­ly a year ear­li­er, in Octo­ber 2016, by reporters at Vox Ukraine, who ana­lyzed the infor­ma­tion pre­sent­ed. In an arti­cle enti­tled “Vox­Check of the Year. How Much Truth There Is in Ser­hiy Taruta’s Pam­phlet about the Head of Ukraine’s Cen­tral Bank,” jour­nal­ists came to the con­clu­sion that, while the data in the text was large­ly accu­rate, it had been com­plete­ly manip­u­lat­ed. Some­what lat­er, they did a fol­low-up analy­sis of what Ms. Hontare­va actu­al­ly did wrong as NBU Chair.

    Trans­lat­ed by Lidia Wolan­skyj

    ———-
    “Osno­va: Taruta’s polit­i­cal foun­da­tion” by Denys Kazan­skyi; Ukrain­ian Week; 10/18/2017

    “The Osno­va site states that the party’s ide­ol­o­gy is based on the prin­ci­ples of lib­er­al con­ser­vatism. In Ukrain­ian pol­i­tics, how­ev­er, these words typ­i­cal­ly mean very lit­tle. What kind of con­ser­vatism are we talk­ing about? That’s not very clear. And Taruta’s rhetoric so far sounds very much like the rhetoric of Ukraine’s oth­er pop­ulists, all of whom count on a fair­ly unde­mand­ing elec­toral base. In some ways, he resem­bles Ser­hiy Tihip­ko, who tried over and over again to enter pol­i­tics as a “new face,” although he had been in pol­i­tics since his days in the Dnipropetro­vsk Oblast Kom­so­mol Exec­u­tive.”

    A party based on the principles of liberal conservatism. So a vague party for a vague cause. That seems like an appropriate fit for Serhiy Taruta, an intriguingly vague figure. But he’s a notable figure from Donetsk, the heartland of the separatists, because he never played up to the pro-Russian parties and movements and was consistently a supporter of the pro-Kiev forces. That included casting his ballot for Viktor Yushchenko in 2004 and supporting Yulia Tymoshenko in 2010:

    ...
    Who will join the Taru­ta team? Whose inter­ests will the par­ty pro­mote and who will be its allies? Where will its mon­ey come from? Taru­ta him­self is a very ambigu­ous fig­ure. For a long time he was seen as an untyp­i­cal Donet­sk home­boy: a high-pro­file busi­ness­man with an intel­li­gent demeanor with­out any known crim­i­nal back­ground. He also dif­fered from the oth­er Donet­sk politi­cians in his polit­i­cal posi­tions. He nev­er played up to pro-Russ­ian par­ties and move­ments, sup­port­ing, instead, pro-Ukrain­ian forces that were nev­er very pop­u­lar in Don­bas.

    For instance, in a 2006 inter­view in Ukrain­s­ka Prav­da, the tycoon admit­ted that in 2004 he had cast his bal­lot for Vik­tor Yushchenko. “My posi­tion was the Euro­pean choice,” he empha­sized. In that same inter­view he also men­tioned that he liked Yulia Tymoshenko.

    In 2010, Taru­ta, in fact, sup­port­ed Tymoshenko in her bid for the pres­i­den­cy. “Of the two can­di­dates run­ning today, only Yulia Tymoshenko will be able to effec­tive­ly defend busi­ness inter­ests and over­come cor­rup­tion,” he said in Feb­ru­ary 2010. “She rep­re­sents polit­i­cal and eco­nom­ic sta­bil­i­ty in Ukraine and will work in the country’s inter­ests, not the inter­ests of some par­tic­u­lar busi­ness clan. Besides, Ms. Tymoshenko has well-deserved author­i­ty in the eyes of lead­ers in Rus­sia and Europe, which means she will always be able to work out a deal in favor of Ukrain­ian busi­ness. Only with Pres­i­dent Tymoshenko will it be pos­si­ble for Ukraine to see all those promis­ing growth plans that we have out­lined with our new Russ­ian part­ners.”
    ...

    And Taruta’s pro-Kiev orientation is no doubt a big reason he was appointed governor of Donetsk in March of 2014 following the post-Maidan collapse of the Yanukovych government. But he didn’t last long, resigning in October of 2014. And that was partly attributed to his limited support for the volunteer militias when compared to the appointed governor of the neighboring Dnipropetrovsk oblast, Ihor Kolomoisky (note that, as we’ll see in a following article, both Taruta and Kolomoisky reportedly supported the Azov Battalion):

    ...
    Pos­i­tive image, poor per­for­mance

    And so, when Taru­ta was appoint­ed Gov­er­nor of Donet­sk Oblast in 2014, just as the anti-Ukrain­ian putsch began there, Ukraini­ans by and large saw this as some­thing pos­i­tive. Taru­ta seemed to be just the right can­di­date with the strength to resolve the sit­u­a­tion: a local oli­garch who under­stood the local men­tal­i­ty well and was ori­ent­ed towards Ukraine. But it was not to be. Taru­ta proved to be a weak politi­cian and was unable to get con­trol over the sit­u­a­tion. The local police and SBU kept sab­o­tag­ing orders from above and had lit­tle inter­est in defend­ing the Oblast State Admin­is­tra­tion. Unlike Ihor Kolo­moyskiy in neigh­bor­ing Dnipropetro­vsk Oblast, Taru­ta either did not dare or did not want to put togeth­er pro-Ukrain­ian Self-Defense squads. And so the Don­bas Bat­tal­ion was actu­al­ly formed in Dnipro, and not in the Don­bas. Mean­while in Donet­sk Oblast, the advan­tage went to the mil­i­tants almost from the start.
    ...

    After resigning as governor, he was elected to the parliament. And now he has a new party, Osnova, which is characterized as clearly designed to pick up the electorate of the now-defunct Party of Regions:

    ...
    After he resigned as gov­er­nor, Taru­ta was elect­ed to the Verk­hov­na Rada. Even­tu­al­ly, he announced the for­ma­tion of his own polit­i­cal par­ty. Based on infor­ma­tion leaked in the press, it was clear from the begin­ning that this new par­ty was intend­ed to pick up the elec­torate of the now-defunct Par­ty of the Regions, most­ly in Ukraine’s south­ern and east­ern oblasts. This cer­tain­ly makes sense, but the prob­lem is that there are sev­er­al sim­i­lar par­ties already busy work­ing to win over this same elec­torate. The monop­oly enjoyed by PR has long since col­lapsed. Now, vot­ers in those regions have the Oppo­si­tion Bloc or Opobloc, Vadym Rabinovych’s Za Zhyt­tia [For Life] Par­ty, the for­ev­er-lurk­ing Vidrodzhen­nia [Revival] found­ed in 2004, and Nash Krai [Our Coun­try]. Osno­va will make five in this clus­ter and can only hope that yet anoth­er project along the lines of the also-defunct Social­ist Par­ty doesn’t make an appear­ance in the run-up to the 2019 elec­tion. In this kind of sit­u­a­tion, the chances of Osno­va suc­ceed­ing with­out form­ing an alliance with any of the more pop­u­lar polit­i­cal par­ties are very low.
    ...

    And there is speculation that Rinat Akhmetov, a top oligarch and one of the primary backers of the “Opposition Bloc”, may be behind Taruta’s Osnova initiative. But there’s no evidence of this, and if true it would put Osnova in competition for Akhmetov’s Opposition Bloc voters. Also, people close to Akhmetov aren’t found in Osnova’s leadership:

    ...
    There were rumors at one point that Taruta’s party was being supported by Rinat Akhmetov, but this is hard to confirm, one way or another, especially since relations between the two Donetsk tycoons were always strained. The chances of this being true are at most 50–50. One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed. If this is true, however, then Osnova is pretty much guaranteed a spot in the next Rada, because Akhmetov has both the money and the administrative leverage in Donetsk, Zaporizhzhia and Dnipropetrovsk Oblasts, where his businesses are located, to make sure of this.

    Fill­ing Osno­va’s ranks

    So far, it’s not obvi­ous that Akhme­tov is behind this new par­ty of Taruta’s. Of those who have already con­firmed that they will join Osno­va, Akhmetov’s peo­ple are not espe­cial­ly evi­dent. Right now, the par­ty appears to be draw­ing peo­ple who are not espe­cial­ly known in Ukrain­ian pol­i­tics. Indeed, judg­ing from the party’s Face­book page, there are only three or four spokesper­sons oth­er than Taru­ta.
    ...

    But while Taruta is clearly a pro-Kiev/pro-EU kind of Ukrainian politician, he does have one notable tie to the Kremlin: a majority stake in his industrial conglomerate was sold to a Russian state-owned bank in 2010:

    ...
    The PR-Rus­sia con­nec­tion

    Why Ser­hiy Taru­ta decid­ed to put his faith in peo­ple relat­ed to the Yanukovych regime is not entire­ly under­stand­able. Is this the per­son­al ini­tia­tive of the oli­garch him­self or is it at the request of some silent investor? It’s not clear who actu­al­ly is fund­ing the par­ty, but it seems unlike­ly that Taru­ta is putting up his own mon­ey. Although this oligarch’s worth was esti­mat­ed at over US $2 bil­lion back in 2008, he claims today that his wealth has shrunk a thou­sand-fold. In an inter­view with Hard Talk in 2015, he announced that he had pre­served only 0.1% of his for­mer wealth.

    Which brings the sto­ry around to Taruta’s busi­ness inter­ests. In 2010, 50%+2 shares of the Indus­tri­al Union of Don­bas (IUD), found­ed by the oli­garch, was bought up by Russia’s Vneshekonom­bank, the for­eign trade bank. That means that Taru­ta and the bank are part­ners. Taru­ta him­self holds only 24.999% of IUD, while the bank is 100% state-owned and Russ­ian Pre­mier Dmit­ry Medvedev is the chair of its super­vi­so­ry board. And so, whether he intend­ed it to be so or not, Ser­hiy Taru­ta is busi­ness part­ners with the Krem­lin.

    What kind of influ­ence the Krem­lin has over the Donet­sk oli­garch and his par­ty is not entire­ly clear and, so far, there is no evi­dence. Nor is there evi­dence that Osno­va is being financed by Russ­ian mon­ey. Giv­en the polit­i­cal his­to­ries of the party’s spokesper­sons, how­ev­er, and the nature of Taruta’s busi­ness inter­ests, it’s worth get­ting a good glimpse into its inner work­ings. It’s entire­ly pos­si­ble that, under the aegis of a pro-Euro­pean politi­cian, some more agents of influ­ence from an ene­my state could find their way to seats in the Rada.
    ...

    And beyond building his mysterious new Osnova party, Taruta is also busy lobbying the US about his pet project of outing alleged corruption at Ukraine’s central bank. Or at least he’s busy making it look like he’s lobbying the US about this. And he’s willing to go to enormous lengths to create those appearances, like a September 25, 2017 fake congressional hearing at the US Capitol where an ex-congressman, Connie Mack, pretended to express congressional outrage over Taruta’s allegations and an ex-CIA chief, James Woolsey, gave words of support for the ‘anti-corruption drive’. And this was all televised in Ukraine and treated like a real US political event:

    ...
    On Sep­tem­ber 25, New­sOne report­ed on Ser­hiy Taruta’s event in Wash­ing­ton, “The high­est lev­el in the US, the Spe­cial Con­gres­sion­al Com­mit­tee for Finan­cial Issues [sic], will find out about the cor­rup­tion at the NBU, Only thanks to the sys­tem­at­ic work of the team that col­lect­ed evi­dence about the cor­rup­tion of the top offi­cials at the Nation­al Bank of Ukraine, will the strongest in the world find out about this.” At the event, Taru­ta and Olek­san­dr Zavadet­skiy, a one-time direc­tor of the NBU Depart­ment for Mon­i­tor­ing indi­vid­u­als con­nect­ed to banks, were plan­ning to report on the deals by-then-depart­ed NBU Gov­er­nor Vale­ria Hontare­va had cut. The event did take place… in a tiny base­ment room at the Capi­tol where the Con­gress meets, with a very small audience—and New­sOne cam­eras.

    The speak­ers at the event were intro­duced, not with­out some prob­lems in pro­nun­ci­a­tion, by Con­nie Mack IV, a Repub­li­can mem­ber of the US House of Rep­re­sen­ta­tives from 2005 to 2013. Since leav­ing his Con­gres­sion­al career behind, Mack has been work­ing as a lob­by­ist and con­sul­tant. Over 2015–2016, his name often came up as a lob­by­ist for Hungary’s Vik­tor Orban Admin­is­tra­tion in the US.

    For­mer CIA direc­tor James Woolsey Jr. offered a few gen­er­al­ized com­ments about cor­rup­tion. In addi­tion to being the CIA boss in 1993–1995 under the first Clin­ton Admin­is­tra­tion, Woolsey held high posts under oth­er US pres­i­dents as well and was involved in nego­ti­a­tions with the USSR over arms treaties in the 1980s.

    Interestingly, there were no current elected American officials in attendance at the event. Moreover, there is no such creature as a “Special Congressional Committee for Financial Issues” in the US Congress. The Congress has a Financial Services Committee and the Senate has a Finance Committee. Among the joint Congressional committees there is none that specializes specifically on financial issues. The Senate Finance Committee met on September 25, but the agenda included only propositions from a number of senators on how to reform the Affordable Care Act. Pretty much the only reaction to Taruta’s US event was an article by JP Carroll in the Weekly Standard under the headline, “The mother of all fake news items: How a windowless room in the basement of the Capitol was set up to look like a fake [sic] Congressional hearing.” And some angry tweets in response.

    Lat­er on, in fact, some ques­tions did arise, such as the valid­i­ty of infor­ma­tion pub­lished in a pam­phlet enti­tled: “Hontare­va: a threat to Ukraine’s eco­nom­ic secu­ri­ty,” which was hand­ed out to par­tic­i­pants. Yet, this very brochure had been chal­lenged near­ly a year ear­li­er, in Octo­ber 2016, by reporters at Vox Ukraine, who ana­lyzed the infor­ma­tion pre­sent­ed. In an arti­cle enti­tled “Vox­Check of the Year. How Much Truth There Is in Ser­hiy Taruta’s Pam­phlet about the Head of Ukraine’s Cen­tral Bank,” jour­nal­ists came to the con­clu­sion that, while the data in the text was large­ly accu­rate, it had been com­plete­ly manip­u­lat­ed. Some­what lat­er, they did a fol­low-up analy­sis of what Ms. Hontare­va actu­al­ly did wrong as NBU Chair.
    ...

    So now let’s take a look at a report on this bizarre fake event written by the one American reporter who was invited to attend. As the article notes, the event was billed by the Ukrainian television channel as a meeting of the “US Congressional Committee on Financial Issues.” No current members of Congress were there. Instead, it was a private panel discussion hosted by former Rep. Connie Mack IV (R-FL) and Matt Keelen, a veteran political fundraiser and operative. It was open only to invited guests (including congressional staffers), two Ukrainian reporters (from NewsOne), and one American reporter. Mack was wearing his old congressional pin on his lapel.

    Much of the event was spent crit­i­ciz­ing Ukraine’s for­mer cen­tral banker Valeriya Hontare­va (Gontare­va). The “HONTAREVA report” is the prod­uct of Taru­ta, and he has been out pro­mot­ing it since late 2016. Accord­ing to Vox­Check, a Ukrain­ian fact check­ing web­site, “the data [in the report], though most­ly cor­rect, are manip­u­lat­ed in almost all occa­sions.” Vox­Check also notes that the report has split Ukrain­ian politi­cians.

    James Woolsey, the for­mer CIA direc­tor and for­mer Trump cam­paign advis­er, was also at the event and briefly spoke. Woolsey talked about how “sweet” Rus­sia was in the ear­ly years after the fall of the Berlin Wall and the need to find a way to make Rus­sia “sweet” like that again.

    One Senate aide described Woolsey’s appearance there as “a strange, strange event” and an “inter-oligarch dispute”: “It was a strange, strange event. Even by Ukrainian standards, that was an odd one. . . . I mean, why would a former CIA director be in the basement of the Capitol for a inter-oligarch dispute? [Former] CIA directors don’t just go to events and say, how much we could get along with the Russians. They don’t do that without a reason.” And that seems like a good way to summarize this: a strange, strange event that’s one element of a broad inter-oligarch dispute. A dispute that’s giving us some insights into the kind of figures in Ukraine that Cambridge Analytica and AIQ want to work for:

    The Week­ly Stan­dard

    The Moth­er of All Fake News
    How a win­dow­less room in the base­ment of the Capi­tol was set up to look like a fake con­gres­sion­al hear­ing.

    1:12 PM, Sep 29, 2017 | By J.P. CARROLL

    Watch­ers of Ukraine’s New­sOne tele­vi­sion chan­nel on Sep­tem­ber 25 were treat­ed to what was sug­gest­ed to be a con­gres­sion­al hear­ing in Wash­ing­ton about cor­rup­tion in the Nation­al Bank of Ukraine (the NBU), which is the Ukrain­ian equiv­a­lent of the Fed­er­al Reserve Board.

    The event, which took place in the base­ment of the U.S. Capi­tol, Room HC 8, was billed by the Ukrain­ian tele­vi­sion chan­nel as a meet­ing of the “US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.” New­sOne teased it this way:

    The high­est lev­els of cor­rup­tion in the NBU are known by the US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.

    Only thanks to the sys­tem­at­ic work of the team that col­lect­ed evi­dence of cor­rup­tions of the most impor­tant offi­cials of the Nation­al Bank, the strongest of the world will find out about it.

    Shock­ing details and res­o­nant details—live stream­ing on New­sOne! Turn on at 21:00—live from Wash­ing­ton DC

    Except, what was broad­cast was not a hear­ing of any com­mit­tee of Con­gress. No cur­rent mem­bers of Con­gress were even there. What was this odd event? A pri­vate pan­el dis­cus­sion host­ed by for­mer Rep. Con­nie Mack IV (R‑FL), along with vet­er­an polit­i­cal fundrais­er and oper­a­tive, Matt Kee­len. But unlike an actu­al con­gres­sion­al hear­ing, this pri­vate event was open only to invit­ed guests (includ­ing con­gres­sion­al staffers), two Ukrain­ian reporters (from New­sOne), and one Amer­i­can reporter (me).

    Handed out to attendees was a report titled “HONTAREVA: Combatting Corruption in the National Bank of Ukraine.” The report’s subject is Valeriya Hontareva, who resigned as governor from the NBU in April in the wake of death threats after she reformed Ukraine’s banking system, including nationalizing the largest bank, PrivatBank. Hontareva is an ally of Ukrainian President Petro Poroshenko.

    Join­ing Mack and Kee­len at the front of the room were two pan­elists: Sergiy Taru­ta, a bil­lion­aire mem­ber of the Ukrain­ian par­lia­ment who pre­vi­ous­ly served as gov­er­nor of Donet­sk in east­ern Ukraine, and Olek­san­dr Zavadet­skyi, who for­mer­ly worked at the NBU and claimed to have been fired after ask­ing inap­pro­pri­ate ques­tions regard­ing bank nation­al­iza­tion pro­ce­dures while Hontare­va was in charge.

    The HONTAREVA report is the prod­uct of Sergiy Taru­ta, and he has been out flog­ging it for near­ly a year. Vox­Check, a Ukrain­ian fact check­ing web­site, ana­lyzed Taruta’s report in late 2016 and says of the report: “Vox­Check has checked most of the facts from the Taruta’s brochure and has dis­cov­ered that the data, though most­ly cor­rect, are manip­u­lat­ed in almost all occa­sions.”

    Vox­Check reports that the effect of Taruta’s “pam­phlet” has been a “split [between] politi­cians and experts into two oppos­ing camps, those who sup­port Taru­ta and those who sup­port Valeriya Hontare­va.” (Vox­Check was sim­i­lar­ly crit­i­cal of Hontareva’s rebut­tal.)

    Much of the event was spent crit­i­ciz­ing Hontare­va. Mack wore his old con­gres­sion­al pin on his lapel through­out. He opened by mus­ing about his time on the House For­eign Affairs Com­mit­tee. “It was always impor­tant for us as a com­mit­tee and as a Con­gress to under­stand what’s hap­pen­ing around the world, and the top­ic of cor­rup­tion would always come up,” he said.

    Curi­ous­ly, James Woolsey, the for­mer Clin­ton Admin­is­tra­tion CIA direc­tor and for­mer Trump cam­paign advis­er, also attend­ed and briefly spoke dur­ing the event.

    Mack identified Woolsey as “a special guest with us today.” Woolsey got up from his seat in the sparse audience and recalled the time years ago when he helped negotiate a conventional arms treaty in Europe. He mentioned Ukraine in that context, but did not talk about corruption. Woolsey said in part that after the fall of the Berlin Wall, “For the next three to four years, the Russians were very easy to get along with. They were sweethearts.” The former CIA director went on to say, “I would love to see the international events work out in such a way that we end up being able to do two things. One, is to deal with the existence of corruption in the way that you referred to and that many people here are experts on. And the other is to keep Ukraine and other states in the region, such as Poland, from feeling that they are constantly under pressure from Russia to do the wrong thing. Resuscitate the days of friendly Russia in the early ‘90s.”

    When asked as to why he host­ed this event, for­mer Con­gress­man Mack told this reporter, “I rep­re­sent a group that is inter­est­ed in high­light­ing cor­rup­tion, not just in Ukraine, but all over: from Cen­tral to South Amer­i­ca, to East­ern Europe.” Mack acknowl­edged before­hand that the event was on the record, but when I asked Woolsey about his atten­dance after the event, he sug­gest­ed that his remarks were off the record, despite the event being record­ed and broad­cast on Ukraine’s New­sOne.

    Whether inten­tion­al or not, the nature and loca­tion of the event gave Ukrain­ian jour­nal­ists the pre­text to mis­lead­ing­ly sug­gest the event was action by the Unit­ed States Con­gress.

    In an inter­view after the event con­clud­ed, Taru­ta told New­sOne: “the fact that we’re here is exact­ly proof that the Amer­i­can gov­ern­ment, the Amer­i­can Con­gress, are not indif­fer­ent to the cor­rup­tion that is today at the high­est ech­e­lons of power/government.”

    Ukrain­ian offi­cials derid­ed Mack’s pan­el as fake news. Via a press release, the NBU’s web­site respond­ed this way:

    Ser­hii Taru­ta spreads false infor­ma­tion about an alleged hear­ing in the Con­gress of the Unit­ed States of Amer­i­ca ded­i­cat­ed to Ukrain­ian author­i­ties and the NBU.

    As far as the NBU is informed, the US Con­gress held no offi­cial hear­ing or meet­ing on the sub­jects indi­cat­ed in Mr Taruta’s mes­sage either today or any oth­er day. In real­i­ty, an infor­mal meet­ing host­ing less than 20 per­sons was held in a room tak­en on lease; the orga­niz­er and mod­er­a­tor was a rep­re­sen­ta­tive of the lob­by­ing com­pa­ny Lib­er­ty­In­ter­na­tion­al­Group, and the speak­ers were Mr Ser­hii Taru­ta and Mr Olek­san­dr Zavadet­skyi, an NBU’s for­mer employ­ee. No offi­cials from the US Admin­is­tra­tion or Con­gress attend­ed the events.

    In an email, Dmytro Shymkiv, the deputy head of Pres­i­den­tial Admin­is­tra­tion of Ukraine, said: “The event on Capi­tol Hill about the Nation­al Bank of Ukraine was not a con­gres­sion­al hear­ing . . . The dis­cus­sion was held with­out pub­lic scruti­ny and was spon­sored by a secret source. It just hap­pened to be con­vened in a room on Capi­tol Hill by an Amer­i­can who was once, years ago, a con­gress­man.” Mack, who is now a reg­is­tered lob­by­ist, was last in Con­gress in 2013 after being defeat­ed in a race for a U.S. Sen­ate seat.

    It is unclear whether the event was “sponsored” in the sense that money was exchanged for use of the room. Meeting rooms—like HC-8—are typically used in conjunction with official congressional activity, but current members of Congress are able to sponsor use of such rooms for constituent groups, provided they attend. If they cannot attend, one of their aides is required to attend. The room reservation form from the speaker’s office, which controls reservations, warns congressional offices that these rooms cannot be used for: “Commercial, profit-making, fundraising, advertising, political or lobbying purposes, nor for entertaining tour groups.”

    An inquiry to Speak­er Ryan’s office about the use of the space was not returned.

    Mack is reg­is­tered to lob­by on behalf of Inter­con­nec­tion Com­merce S.A. to try to raise aware­ness of “cor­rup­tion with­in the Nation­al Bank of Ukraine.” POLITICO Influ­ence reports that “It’s unclear who Inter­con­nec­tion S.A. rep­re­sents. The firm lists an address in the British Vir­gin Islands and shows up in the Pana­ma Papers leaks but oth­er­wise has no online pres­ence.”

    A Sen­ate aide with knowl­edge of the event said, “It was a strange, strange event. Even by Ukrain­ian stan­dards, that was an odd one. . . . I mean, why would a for­mer CIA direc­tor be in the base­ment of the Capi­tol for a inter-oli­garch dis­pute? [For­mer] CIA direc­tors don’t just go to events and say, how much we could get along with the Rus­sians. They don’t do that with­out a rea­son.”

    ...

    ———-

    “The Moth­er of All Fake News” by J.P. CARROLL; The Week­ly Stan­dard; 09/29/2017

    The HONTAREVA report is the product of Sergiy Taruta, and he has been out flogging it for nearly a year. VoxCheck, a Ukrainian fact-checking website, analyzed Taruta’s report in late 2016 and said of it: “VoxCheck has checked most of the facts from the Taruta’s brochure and has discovered that the data, though mostly correct, are manipulated in almost all occasions.”

    The fake congressional hearing is a sign of how much Taruta wants to publicize his report on the corruption at Ukraine’s central bank. But it’s also a sign that Taruta’s primary audience with this fake hearing was Ukrainians. And Taruta and his NewsOne Ukrainian media partners were more than happy to maintain the pretense that this was a real congressional event for that Ukrainian audience. It was a private event hoax designed to look like a public event:

    ...
    The event, which took place in the base­ment of the U.S. Capi­tol, Room HC 8, was billed by the Ukrain­ian tele­vi­sion chan­nel as a meet­ing of the “US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.” New­sOne teased it this way:

    The high­est lev­els of cor­rup­tion in the NBU are known by the US Con­gres­sion­al Com­mit­tee on Finan­cial Issues.

    Only thanks to the sys­tem­at­ic work of the team that col­lect­ed evi­dence of cor­rup­tions of the most impor­tant offi­cials of the Nation­al Bank, the strongest of the world will find out about it.

    Shock­ing details and res­o­nant details—live stream­ing on New­sOne! Turn on at 21:00—live from Wash­ing­ton DC

    Except, what was broad­cast was not a hear­ing of any com­mit­tee of Con­gress. No cur­rent mem­bers of Con­gress were even there. What was this odd event? A pri­vate pan­el dis­cus­sion host­ed by for­mer Rep. Con­nie Mack IV (R‑FL), along with vet­er­an polit­i­cal fundrais­er and oper­a­tive, Matt Kee­len. But unlike an actu­al con­gres­sion­al hear­ing, this pri­vate event was open only to invit­ed guests (includ­ing con­gres­sion­al staffers), two Ukrain­ian reporters (from New­sOne), and one Amer­i­can reporter (me).
    ...

    Adding to the bizarreness was the speech by former CIA director James Woolsey about what sweethearts the Russians were after the fall of the Berlin Wall and the need to return to that point:

    ...
    Curi­ous­ly, James Woolsey, the for­mer Clin­ton Admin­is­tra­tion CIA direc­tor and for­mer Trump cam­paign advis­er, also attend­ed and briefly spoke dur­ing the event.

    Mack identified Woolsey as “a special guest with us today.” Woolsey got up from his seat in the sparse audience and recalled the time years ago when he helped negotiate a conventional arms treaty in Europe. He mentioned Ukraine in that context, but did not talk about corruption. Woolsey said in part that after the fall of the Berlin Wall, “For the next three to four years, the Russians were very easy to get along with. They were sweethearts.” The former CIA director went on to say, “I would love to see the international events work out in such a way that we end up being able to do two things. One, is to deal with the existence of corruption in the way that you referred to and that many people here are experts on. And the other is to keep Ukraine and other states in the region, such as Poland, from feeling that they are constantly under pressure from Russia to do the wrong thing. Resuscitate the days of friendly Russia in the early ’90s.”
    ...

    And that’s why one Senate aide referred to it as a strange, strange event: a former CIA director showing up at a hoax event that’s part of a larger inter-oligarch dispute:

    ...
    A Sen­ate aide with knowl­edge of the event said, “It was a strange, strange event. Even by Ukrain­ian stan­dards, that was an odd one. . . . I mean, why would a for­mer CIA direc­tor be in the base­ment of the Capi­tol for a inter-oli­garch dis­pute? [For­mer] CIA direc­tors don’t just go to events and say, how much we could get along with the Rus­sians. They don’t do that with­out a rea­son.”
    ...

    So let’s now take a clos­er look at that inter-oli­garch dis­pute to get a bet­ter sense of who Taru­ta is aligned with in Ukraine. And in this case he’s clear­ly aligned with Ihor Kolo­moisky, co-founder of the nation­al­ized Pri­vat­bank.

    As the article also notes, when Taruta was selling the majority stake in the industrial conglomerate he co-founded, Industrial Union of Donbass, in 2010, he was a close ally of Yulia Tymoshenko. And according to leaked cables, Tymoshenko wanted him to keep the sale a secret over fears that she would be attacked for selling out Ukraine. It’s another indication of Taruta’s political pedigree.

    The arti­cle also has an expla­na­tion from James Woolsey on why he attend­ed that event: he was duped. He agreed to show up in the audi­ence and then was asked on the spot to make some remarks. That’s the line he’s going with.

    And the arti­cle iden­ti­fies the per­son who has come for­ward to claim respon­si­bil­i­ty for arrang­ing the event: Ana­toly Motkin, a one-time aide to a Geor­gian oli­garch. Motkin found­ed the StrategEast con­sult­ing firm that describes itself as “a strate­gic cen­ter for polit­i­cal and diplo­mat­ic solu­tions whose mis­sion is to guide and assist elites of the post-Sovi­et region into clos­er work­ing rela­tion­ships with the USA and West­ern Europe.” Motkin claims that he decid­ed to fund the event because Taru­ta brought the alle­ga­tions about Gontare­va to his atten­tion.

    So that gives us a few more data points about Taruta: he was close to Tymoshenko, he’s doing Ihor Kolomoisky’s bidding in waging this fight against the nationalization of Privatbank, and the person who actually set up the event runs a lobbying firm that describes itself as “a strategic center for political and diplomatic solutions whose mission is to guide and assist elites of the post-Soviet region into closer working relationships with the USA and Western Europe”:

    The Dai­ly Beast

    The Alleged­ly Mur­der­ous Oli­garch, the Duped CIA Chief, and the Trump­kin
    Who was behind a mys­te­ri­ous fake hear­ing in the base­ment of the U.S. Capi­tol?

    Bet­sy Woodruff
    03.27.18 5:04 AM ET

    On Sept. 25, 2017, a win­dow­less room in the base­ment of the Capi­tol Build­ing became the site of one of Washington’s more mys­te­ri­ous recent events.

    On hand: an investor who was once unsuc­cess­ful­ly sued for alleged­ly help­ing mur­der his own boss, a for­mer con­gress­man from the Flori­da pan­han­dle, and a for­mer Trump cam­paign staffer. One of two Ukrain­ian media out­lets to cov­er the event is owned by an old asso­ciate of Paul Manafort’s—a man who fed­er­al pros­e­cu­tors allege to be an “upper-ech­e­lon asso­ciate of Russ­ian orga­nized crime.”

    Oh, and the for­mer direc­tor of the CIA was involved.

    The for­mer CIA direc­tor told The Dai­ly Beast he wouldn’t have got­ten involved if he had known what was going on. One of the Amer­i­can lob­by­ists said the event was used for pro­pa­gan­da. The guy who got sued over his boss’s death? He now takes cred­it for the whole she­bang.

    THE BANK TAKEOVER

    This sto­ry starts in Kyiv, Ukraine, on June 19, 2014. That’s when a woman named Valeriya Gontare­va became the chair of the country’s pow­er­ful cen­tral bank. Ukrain­ian pol­i­tics is rife with cor­rup­tion, espe­cial­ly by Amer­i­can stan­dards, and is dom­i­nat­ed by the country’s pow­er­ful oli­garchs. As chair of the nation­al bank, Gontare­va made a host of changes to the country’s finan­cial system—and some pow­er­ful ene­mies.

    One of the biggest changes she over­saw was a gov­ern­ment takeover of the country’s biggest com­mer­cial bank, Pri­vat­bank. The oli­garch Ihor Kolo­moisky (who The Wall Street Jour­nal once described as “feisty”) co-found­ed it. When Gontare­va presided over the bank’s nation­al­iza­tion, its accounts were miss­ing more than $5 bil­lion, accord­ing to the Finan­cial Times, in large part because the bank lent so much mon­ey to peo­ple with con­nec­tions to Kolo­moisky.

    “Inter­na­tion­al finan­cial insti­tu­tions applaud­ed the state takeover,” wrote FT. “It has been wide­ly seen as the cul­mi­na­tion of Ukraine’s efforts since 2014 to clean up a dys­func­tion­al bank­ing sec­tor dom­i­nat­ed by oli­garch-owned banks.”

    The bank’s founders weren’t pleased.

    After the bank takeover, Gontareva received numerous threats. One protester put a coffin outside her door, according to Reuters. On April 10, 2017, she announced at a press conference that she was resigning from her post. She touted her accomplishments at the event, but cautioned that in her absence the country’s financial sector could face greater troubles.

    “I believe that resis­tance to changes and reforms will grow stronger now,” she said.

    THE FAKE HEARING

    Five months lat­er, in Wash­ing­ton D.C., some­thing odd hap­pened: Amer­i­can lob­by­ists host­ed an event, osten­si­bly on anti-cor­rup­tion issues, in the base­ment of the Capi­tol Build­ing. The event vil­i­fied Gontare­va. Orga­niz­ers dis­trib­uted lit­er­a­ture fea­tur­ing a grim close-up of her face, call­ing her a threat to Ukraine’s eco­nom­ic secu­ri­ty, and ask­ing if she was “CINDERELLA OR WICKED STEPMOTHER?”

    Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.”

    The event’s orga­niz­ers are adamant that they did not plan for it to look like a fake con­gres­sion­al hear­ing. But Ukrain­ian reporters who attend­ed the event cov­ered it that way. For­mer Rep. Con­nie Mack, one of the Amer­i­can lob­by­ists who orga­nized the event, sport­ed the pin that mem­bers of Con­gress wear. James Woolsey, for­mer CIA direc­tor, attend­ed and spoke briefly to the group.

    Woolsey’s spokesper­son, Jonathan Franks, lat­er said he was duped.

    “Ambas­sador Woolsey was delib­er­ate­ly mis­led about the nature of this event when he agreed to attend,” Franks told The Dai­ly Beast. “He expect­ed to be a mem­ber of the audi­ence for a seri­ous dis­cus­sion of issues fac­ing the Ukraine, an area he’s been inter­est­ed in for decades. He didn’t agree to be iden­ti­fied a ‘spe­cial guest’ nor did he agree to speak. Per­haps he was guilty of being old fash­ioned, but it nev­er occurred to him the orga­niz­ers would lure him to an event in the Capi­tol in order to make him an invol­un­tary par­tic­i­pant in a sham.”

    Rep. Ron Estes, a fresh­man from Kansas, booked the room for Mack and Co. His office lat­er told The Dai­ly Beast this won’t hap­pen again.

    Mack and Matt Kee­len, a lob­by­ist whose firm’s web­site boasts of his “well fos­tered rela­tion­ships” in the Trump admin­is­tra­tion, both dis­closed in fed­er­al reg­is­tra­tion forms that they put on the event for a shell com­pa­ny based in the British Vir­gin Islands called Inter­con­nec­tion Com­merce SA.

    “I nev­er por­trayed this as a hear­ing,” Mack told The Dai­ly Beast. “We didn’t do any­thing to make it look like a hear­ing. It was in a very stale room in the base­ment, no mark­ings of a con­gres­sion­al hear­ing at all.”

    At the event, Mack used the term “we” when refer­ring to Con­gress, and was emphat­ic that mem­bers should inves­ti­gate Gontare­va.

    “One thing is clear: that we, the Con­gress of the Unit­ed States—and there are tax­pay­er dol­lars at risk, and there are alle­ga­tions, sug­ges­tions, and evidence—should inves­ti­gate,” he said, accord­ing to an audio record­ing of the event.

    Mack blamed BGR Group, a lob­by­ing firm that works for Ukraine’s cur­rent pres­i­dent, Petro Poroshenko, for push­ing the nar­ra­tive that he and Kee­len put on a fake hear­ing.

    Two Ukrain­ian news out­lets cov­ered the event. One of those out­lets, Chan­nelOne, described it as a hear­ing of the nonex­is­tent “U.S. Con­gres­sion­al Com­mit­tee on Finan­cial Issues.”

    “That was pure pro­pa­gan­da on their part,” Mack said. “Who­ev­er those news out­lets are, it real­ly is fake news. They had to go a long way to try to make it look like a hear­ing.”

    The oth­er Ukrain­ian news out­let that cov­ered the event was UkraNews, which—accord­ing to the Objec­tive Project, which mon­i­tors media own­er­ship in Ukraine—belongs to Dmit­ry Fir­tash.

    That name should ring a bell, if you’ve been fol­low­ing the far-flung dra­ma into for­eign influ­ence on the 2016 elec­tion. Fed­er­al pros­e­cu­tors in Chica­go are seek­ing Firtash’s extra­di­tion to the Unit­ed States to put him on tri­al for rack­e­teer­ing. Man­afort, for­mer Man­afort deputy Rick Gates, and Fir­tash worked on a deal in 2008 to buy New York’s Drake Hotel—for a cool $850 million—but the deal fell through.

    Lan­ny Davis—a for­mer spe­cial coun­sel in Bill Clinton’s White House who today rep­re­sents Firtash—said his client had noth­ing to do with the hear­ing.

    “Mr. Fir­tash had and has no knowl­edge of, no posi­tion on, and no involve­ment what­so­ev­er in the con­gres­sion­al brief­ing that occurred and takes no posi­tion and has no inter­est in the issues dis­cussed,” Davis said.

    THE MYSTERY MAN

    So who dreamed up this fake hear­ing? And who paid for it? For months, the backer of this so-called sham was a mys­tery. But when The Dai­ly Beast start­ed ask­ing who paid for the event, a lit­tle-known fig­ure came for­ward to take full respon­si­bil­i­ty: Ana­toly Motkin, a one-time aide to a Geor­gian oli­garch accused of lead­ing a coup attempt.

    A spokesper­son for Motkin, for­mer­ly an asso­ciate to the now-deceased Badri Patarkat­sishvili, told The Dai­ly Beast that he paid for the entire event. Ali­son Patch, a spokesper­son for Motkin, said Motkin paid for the event him­self in his per­son­al capac­i­ty.

    Motkin was an aide to Patarkat­sishvili when he report­ed­ly tried to foment a coup in Geor­gia. After Patarkat­sishvili died, Motkin found him­self embroiled in a legal bat­tle with Patarkatsishvili’s cousin. The cousin alleged in doc­u­ments filed as part of a civ­il suit in New York state court that Motkin was part of a plot to kill Patarkat­sishvili (PDF).

    A spokesper­son for Motkin said he decid­ed to fund the event because Taru­ta, the Ukrain­ian bil­lion­aire, brought the alle­ga­tions about Gontare­va to his atten­tion.

    “Although this report was entire­ly brought by Mr. Taruta’s ini­tia­tive, for many years Mr. Motkin has worked on pro­mot­ing demo­c­ra­t­ic val­ues amongst com­mu­ni­ties close to the for­mer Sovi­et Union,” said Patch. “Know­ing of his inter­est in sup­port­ing anti-cor­rup­tion efforts, Mr. Taru­ta shared the infor­ma­tion about his report. Mr. Motkin found the evi­dence pre­sent­ed com­pelling and decid­ed that if he could help get the issues in front of peo­ple who may make a dif­fer­ence, he would.”

    Anders Aslund of the Atlantic Council, an expert on oligarchs’ politicking, didn’t quite believe it. Aslund said he believes the driving force behind the event was Ihor Kolomoisky—the Ukrainian oligarch whose cronies lost all that money when Privatbank was nationalized. Kolomoisky would have millions of reasons to detest Gontareva, the object of the fake hearing’s ire, according to Aslund.

    “This was entire­ly Kolo­moisky,” he said. “Kolo­moisky is crooked and clever. He is a per­son who makes busi­ness by doing bank­rupt­cy rather than mak­ing prof­its.”

    Kolo­moisky has faced alle­ga­tions of involve­ment in con­tract killings, which he denies. An attor­ney for Kolo­moisky did not respond to mul­ti­ple requests for com­ment.

    ...

    ———-

    “The Alleged­ly Mur­der­ous Oli­garch, the Duped CIA Chief, and the Trump­kin” by Bet­sy Woodruff; The Dai­ly Beast; 03/27/2018

    “Serhiy Taruta, a member of the Ukrainian parliament, is named as the author of the report. In 2008, Forbes estimated his net worth at $2.7 billion. According to a diplomatic cable published by WikiLeaks, American government officials believed Taruta played a role in the sale of a majority stake in one of Ukraine’s largest steel groups—valued at $2 billion—to a powerful Russian businessman. Taruta was a close ally of politician Yulia Tymoshenko at the time, and the cable said she and Taruta wanted to keep the deal “hidden from public view” to avoid criticism. Had the nature of the deal been made public, the cable said, Tymoshenko could have faced “increased attacks from political rivals for ‘selling out’ Ukrainian assets to Russian interests, perhaps to finance her presidential campaign.””

    That’s a key obser­va­tion: Taru­ta was seen as a close Tymoshenko ally.

    But he’s also a Kolomoisky ally, since this inter-oligarch dispute is Kolomoisky’s dispute and Taruta is fighting Kolomoisky’s fight:

    ...
    This sto­ry starts in Kyiv, Ukraine, on June 19, 2014. That’s when a woman named Valeriya Gontare­va became the chair of the country’s pow­er­ful cen­tral bank. Ukrain­ian pol­i­tics is rife with cor­rup­tion, espe­cial­ly by Amer­i­can stan­dards, and is dom­i­nat­ed by the country’s pow­er­ful oli­garchs. As chair of the nation­al bank, Gontare­va made a host of changes to the country’s finan­cial system—and some pow­er­ful ene­mies.

    One of the biggest changes she over­saw was a gov­ern­ment takeover of the country’s biggest com­mer­cial bank, Pri­vat­bank. The oli­garch Ihor Kolo­moisky (who The Wall Street Jour­nal once described as “feisty”) co-found­ed it. When Gontare­va presided over the bank’s nation­al­iza­tion, its accounts were miss­ing more than $5 bil­lion, accord­ing to the Finan­cial Times, in large part because the bank lent so much mon­ey to peo­ple with con­nec­tions to Kolo­moisky.

    “Inter­na­tion­al finan­cial insti­tu­tions applaud­ed the state takeover,” wrote FT. “It has been wide­ly seen as the cul­mi­na­tion of Ukraine’s efforts since 2014 to clean up a dys­func­tion­al bank­ing sec­tor dom­i­nat­ed by oli­garch-owned banks.”

    The bank’s founders weren’t pleased.

    After the bank takeover, Gontareva received numerous threats. One protester put a coffin outside her door, according to Reuters. On April 10, 2017, she announced at a press conference that she was resigning from her post. She touted her accomplishments at the event, but cautioned that in her absence the country’s financial sector could face greater troubles.
    ...

    But what about James Woolsey? What’s his excuse for fighting Kolomoisky’s fight? He was tricked. That was his excuse:

    ...
    Woolsey’s spokesper­son, Jonathan Franks, lat­er said he was duped.

    “Ambas­sador Woolsey was delib­er­ate­ly mis­led about the nature of this event when he agreed to attend,” Franks told The Dai­ly Beast. “He expect­ed to be a mem­ber of the audi­ence for a seri­ous dis­cus­sion of issues fac­ing the Ukraine, an area he’s been inter­est­ed in for decades. He didn’t agree to be iden­ti­fied a ‘spe­cial guest’ nor did he agree to speak. Per­haps he was guilty of being old fash­ioned, but it nev­er occurred to him the orga­niz­ers would lure him to an event in the Capi­tol in order to make him an invol­un­tary par­tic­i­pant in a sham.”
    ...

    And what about Rep. Estes, the con­gress­man who made this offi­cial room avail­able for the stunt? Well, he assures us that it won’t hap­pen again. It’s sort of an expla­na­tion:

    ...
    Rep. Ron Estes, a fresh­man from Kansas, booked the room for Mack and Co. His office lat­er told The Dai­ly Beast this won’t hap­pen again.
    ...

    And note the two Ukrain­ian media com­pa­nies that cov­ered this. There was Chan­nelOne, which is owned by 1+1 Media, Ihor Kolo­moisky’s media group. And also UkraNews, which belongs to Dmit­ry Fir­tash:

    ...
    Two Ukrain­ian news out­lets cov­ered the event. One of those out­lets, Chan­nelOne, described it as a hear­ing of the nonex­is­tent “U.S. Con­gres­sion­al Com­mit­tee on Finan­cial Issues.”

    “That was pure pro­pa­gan­da on their part,” Mack said. “Who­ev­er those news out­lets are, it real­ly is fake news. They had to go a long way to try to make it look like a hear­ing.”

    The oth­er Ukrain­ian news out­let that cov­ered the event was UkraNews, which—accord­ing to the Objec­tive Project, which mon­i­tors media own­er­ship in Ukraine—belongs to Dmit­ry Fir­tash.

    That name should ring a bell, if you’ve been fol­low­ing the far-flung dra­ma into for­eign influ­ence on the 2016 elec­tion. Fed­er­al pros­e­cu­tors in Chica­go are seek­ing Firtash’s extra­di­tion to the Unit­ed States to put him on tri­al for rack­e­teer­ing. Man­afort, for­mer Man­afort deputy Rick Gates, and Fir­tash worked on a deal in 2008 to buy New York’s Drake Hotel—for a cool $850 million—but the deal fell through.
    ...

    And recall what we saw in the above Ukraine Week piece about the makeup of the Opposition Bloc and the unproven speculation that Rinat Akhmetov could be behind Osnova: “One story is that the purpose of Osnova is to gradually siphon off Akhmetov’s folks from the Opposition Bloc, given that former Regionals split into the Akhmetov wing, which is more loyal to Poroshenko, and the Liovochkin-Firtash wing, which is completely opposed”. That sure sounds like Firtash represents a faction of the Opposition Bloc that would like to see Poroshenko go (recall that Andreii Artemenko’s peace plan proposal involved the collapse of the Poroshenko government under a wave of scandal revelations, with Artemenko providing the scandal evidence). So it’s notable that we have Firtash’s news channel promoting Taruta’s fake congressional hearing along with Kolomoisky’s ChannelOne.

    And look who has come forward as the event organizer: Anatoly Motkin, a one-time aide to a Georgian oligarch:

    ...
    THE MYSTERY MAN

    So who dreamed up this fake hear­ing? And who paid for it? For months, the backer of this so-called sham was a mys­tery. But when The Dai­ly Beast start­ed ask­ing who paid for the event, a lit­tle-known fig­ure came for­ward to take full respon­si­bil­i­ty: Ana­toly Motkin, a one-time aide to a Geor­gian oli­garch accused of lead­ing a coup attempt.

    A spokesper­son for Motkin, for­mer­ly an asso­ciate to the now-deceased Badri Patarkat­sishvili, told The Dai­ly Beast that he paid for the entire event. Ali­son Patch, a spokesper­son for Motkin, said Motkin paid for the event him­self in his per­son­al capac­i­ty.

    Motkin was an aide to Patarkat­sishvili when he report­ed­ly tried to foment a coup in Geor­gia. After Patarkat­sishvili died, Motkin found him­self embroiled in a legal bat­tle with Patarkatsishvili’s cousin. The cousin alleged in doc­u­ments filed as part of a civ­il suit in New York state court that Motkin was part of a plot to kill Patarkat­sishvili (PDF).

    A spokesper­son for Motkin said he decid­ed to fund the event because Taru­ta, the Ukrain­ian bil­lion­aire, brought the alle­ga­tions about Gontare­va to his atten­tion.

    “Although this report was entire­ly brought by Mr. Taruta’s ini­tia­tive, for many years Mr. Motkin has worked on pro­mot­ing demo­c­ra­t­ic val­ues amongst com­mu­ni­ties close to the for­mer Sovi­et Union,” said Patch. “Know­ing of his inter­est in sup­port­ing anti-cor­rup­tion efforts, Mr. Taru­ta shared the infor­ma­tion about his report. Mr. Motkin found the evi­dence pre­sent­ed com­pelling and decid­ed that if he could help get the issues in front of peo­ple who may make a dif­fer­ence, he would.”
    ...

    And when we look at how Motk­in’s lob­by­ing firm describes itself, it’s “a strate­gic cen­ter for polit­i­cal and diplo­mat­ic solu­tions whose mis­sion is to guide and assist elites of the post-Sovi­et region into clos­er work­ing rela­tion­ships with the USA and West­ern Europe”:

    Strategeast

    About US

    Ana­toly Motkin
    Founder and Pres­i­dent

    Ana­toly Motkin is founder and pres­i­dent of StrategEast, a strate­gic cen­ter for polit­i­cal and diplo­mat­ic solu­tions whose mis­sion is to guide and assist elites of the post-Sovi­et region into clos­er work­ing rela­tion­ships with the USA and West­ern Europe. In this role, Mr. Motkin uses his two decades of involve­ment in the devel­op­ment of media and polit­i­cal projects in the post-Sovi­et region to sup­port var­i­ous pro­grams and com­bat cor­rup­tion in the region.

    Mr. Motkin has devot­ed much of his career to assist­ing the process­es of West­ern­iza­tion in post-Sovi­et states through the launch­ing of a vari­ety of media, polit­i­cal and busi­ness ini­tia­tives aimed to dri­ve social aware­ness and con­nect com­mu­ni­ties. He has suc­cess­ful­ly invest­ed in mul­ti­ple tech­nol­o­gy star­tups, such as one of the most pop­u­lar mes­sag­ing apps and the rideshar­ing ser­vice app Juno, which was recent­ly acquired by on-demand ride ser­vice Gett.

    Mr. Motkin has also cre­at­ed and pro­duced sev­er­al suc­cess­ful Russ­ian-lan­guage media projects in his native Israel, as well as in Latvia, Belarus and Geor­gia.

    Projects estab­lished by Mr. Motkin include a part­ner­ship with Yedio­th Ahronoth pub­lish­ing group, the strongest media house in Israel, to pro­duce an enter­tain­ment mag­a­zine “Tele-Boom”, Time Out – Israel and “7:40” – a prime­time show on Chan­nel 9 – the only Israeli TV broad­cast chan­nel in Russ­ian. He is also the founder of Cur­sor­in­fo, one of the old­est Russ­ian-lan­guage news web­sites and one of the most cit­ed sources for infor­ma­tion on cur­rent events in Israel.

    Mr. Motkin began his career as a polit­i­cal con­sul­tant advis­ing the Israeli Gov­ern­ment on the country’s Russ­ian-speak­ing sec­tor. Dur­ing this time, Mr. Motkin served as the head of the Russ­ian-speak­ing vot­ers cam­paign for the Shinui par­ty, assist­ing to triple the num­ber of votes for the par­ty and assist­ing the Shinui in win­ning 15 seats in the 2003 Knes­set elec­tion.

    ...

    ———-

    “Strategeast: About US: Ana­toly Motkin”; Strategeast.org; 04/07/2018

    “Mr. Motkin has devoted much of his career to assisting the processes of Westernization in post-Soviet states through the launching of a variety of media, political and business initiatives aimed to drive social awareness and connect communities. He has successfully invested in multiple technology startups, such as one of the most popular messaging apps and the ridesharing service app Juno, which was recently acquired by on-demand ride service Gett.”

    And the involvement of someone like Motkin in arranging the theatrics of what amounts to an inter-oligarch dispute over Ihor Kolomoisky’s nationalized bank points to one of the key observations in this situation: it appears to be an inter-oligarch fight between different factions of pro-Western Ukrainian oligarchs. And Sergei Taruta appears to be squarely in the camp of the faction that doesn’t support the separatists but also doesn’t support Poroshenko. As we’ve seen, Taruta has historic ties to Yulia Tymoshenko’s power base, but he also appears to be working with fellow East Ukrainian oligarch Ihor Kolomoisky.

    So, finally, let’s note something important about Taruta and Kolomoisky from this 2015 report by Joshua Cohen, who has done a lot of good reporting about the neo-Nazi threat in Ukraine. It’s a report that would explain some of the animosity between Kolomoisky and the Poroshenko government: it describes the use of privately financed militias that are, in effect, private armies controlled by their Ukrainian oligarch financiers, with Ihor Kolomoisky being one of the biggest militia financiers. And this actually led to Kolomoisky’s firing as governor of Dnipropetrovsk in 2015, after he sent one of these private armies to seize control of the headquarters of the state-owned oil company, UkrTransNafta, when Kiev fired the company’s chief executive officer, an ally of Kolomoisky. So that, in addition to the Privatbank nationalization, is no doubt part of why Kolomoisky might not be super enthusiastic about the Poroshenko government.

    Given the ongoing tensions between the neo-Nazi groups in Ukraine and the Kiev government, and the ongoing threats from groups like the Azov Battalion to ‘march on Kiev’ and take over, it’s noteworthy that one of their biggest financial backers, Ihor Kolomoisky, has so much animosity towards the Poroshenko government. And in our look at Sergei Taruta it’s also noteworthy that, as the article notes, both Kolomoisky and Taruta were partially financing the neo-Nazi Azov Battalion:

    Reuters

    The Great Debate

    In the bat­tle between Ukraine and Russ­ian sep­a­ratists, shady pri­vate armies take the field

    By Josh Cohen
    May 5, 2015

    While the cease­fire agree­ment between the Ukrain­ian gov­ern­ment and sep­a­ratist rebels in the east­ern part of the coun­try seems large­ly to be hold­ing, a recent show­down in Kiev between a Ukrain­ian oli­garch and the gov­ern­ment revealed one of the country’s ongo­ing chal­lenges: pri­vate mil­i­tary bat­tal­ions that do not always oper­ate under the cen­tral government’s con­trol.

    In March, mem­bers of the pri­vate army backed by tycoon Ihor Kolo­moisky showed up at the head­quar­ters of the state-owned oil com­pa­ny, Ukr­TransNaf­ta. The stand­off occurred after Kiev fired the company’s chief exec­u­tive offi­cer — an ally of Kolomoisky’s. Kolo­moisky said that he was try­ing to pro­tect the com­pa­ny from an ille­gal takeover.

    More than 30 of these pri­vate bat­tal­ions, com­prised most­ly of vol­un­teer sol­diers, exist through­out Ukraine. Although all have been brought under the author­i­ty of the mil­i­tary or the Nation­al Guard, the post-Maid­an gov­ern­ment is still strug­gling to con­trol them.

    Ukraine’s mil­i­tary is so weak that after the Russ­ian Fed­er­a­tion seized Crimea, Russ­ian-spon­sored sep­a­ratists were able to take over large swathes of east­ern Ukraine. Pri­vate bat­tal­ions, fund­ed par­tial­ly by Ukrain­ian oli­garchs, stepped into this vac­u­um and played a key role in stop­ping the sep­a­ratists’ advance.

    By sup­ply­ing weapons to the bat­tal­ions and in some cas­es pay­ing recruits, Ukraine’s rich­est men are defend­ing their coun­try — and also pro­tect­ing their own eco­nom­ic inter­ests. Many of the oli­garchs amassed great wealth by using their polit­i­cal con­nec­tions to pur­chase gov­ern­ment assets at knock­down prices, siphon off prof­its from state-owned com­pa­nies and bribe Ukrain­ian offi­cials to win state con­tracts.

    When the Maid­an pro­test­ers over­threw for­mer Pres­i­dent Vik­tor Yanukovich, they demand­ed that the new gov­ern­ment clamp down on the oli­garchs’ abuse of pow­er. Instead, many became even more pow­er­ful: Kiev hand­ed Kolo­moisky and min­ing tycoon Ser­hiy Taru­ta gov­er­nor posts in impor­tant east­ern regions of Ukraine, for exam­ple.

    Many of these para­mil­i­tary groups are accused of abus­ing the cit­i­zens they are charged with pro­tect­ing. Amnesty Inter­na­tion­al has report­ed that the Aidar bat­tal­ion — also par­tial­ly fund­ed by Kolo­moisky — com­mit­ted war crimes, includ­ing ille­gal abduc­tions, unlaw­ful deten­tion, rob­bery, extor­tion and even pos­si­ble exe­cu­tions.

    Oth­er pro-Kiev pri­vate bat­tal­ions have starved civil­ians as a form of war­fare, pre­vent­ing aid con­voys from reach­ing sep­a­ratist-con­trolled areas of east­ern Ukraine, accord­ing to the Amnesty report.

    Some of Ukraine’s pri­vate bat­tal­ions have black­ened the country’s inter­na­tion­al rep­u­ta­tion with their extrem­ist views. The Azov bat­tal­ion, par­tial­ly fund­ed by Taru­ta and Kolo­moisky, uses the Nazi Wolf­san­gel sym­bol as its logo, and many of its mem­bers open­ly espouse neo-Nazi, anti-Semit­ic views. The bat­tal­ion mem­bers have spo­ken about “bring­ing the war to Kiev,” and said that Ukraine needs “a strong dic­ta­tor to come to pow­er who could shed plen­ty of blood but unite the nation in the process.”

    Ukraine’s Pres­i­dent Petro Poroshenko has made clear his inten­tion to rein in Ukraine’s vol­un­teer war­riors. Days after Kolomoisky’s sol­diers appeared at Ukr­TransNaf­ta, he said that he would not tol­er­ate oli­garchs with “pock­et armies” and then fired Kolo­moisky from his perch as the gov­er­nor of Dnipropetro­vsk.

    By bring­ing the pri­vate vol­un­teers under Kiev’s full con­trol, Ukraine will ben­e­fit in a num­ber of ways. The vol­un­teer bat­tal­ions will receive the same train­ing as the mil­i­tary, which should help them to bet­ter inte­grate their tac­tics. They’ll qual­i­fy for reg­u­lar mil­i­tary ben­e­fits and pen­sions. Final­ly, they will be sub­ject to mil­i­tary law, which allows the gov­ern­ment to bet­ter deal with any crim­i­nal or human rights vio­la­tions that they com­mit.

    ...

    ———-

    “In the bat­tle between Ukraine and Russ­ian sep­a­ratists, shady pri­vate armies take the field” by Josh Cohen; Reuters; 05/05/2015

    “Ukraine’s Pres­i­dent Petro Poroshenko has made clear his inten­tion to rein in Ukraine’s vol­un­teer war­riors. Days after Kolomoisky’s sol­diers appeared at Ukr­TransNaf­ta, he said that he would not tol­er­ate oli­garchs with “pock­et armies” and then fired Kolo­moisky from his perch as the gov­er­nor of Dnipropetro­vsk.”

    Yep, it was the use of a private army to seize state assets in a business dispute that got Ihor Kolomoisky fired as governor of the Dnipropetrovsk Oblast in 2015. And that was just one example of how these neo-Nazi militias pose a threat to Ukrainian society. There’s also the obvious risk that they act on their own and try to seize control.

    But the great­est threat these neo-Nazi mili­tias pose clear­ly involves work­ing in coor­di­na­tion with a team of Ukrain­ian oli­garchs. And that’s part of what makes an under­stand­ing of the opaque Ukrain­ian oli­garchic fault lines so impor­tant, because there’s always the chance that these inter-oli­garch dis­putes will result in these pri­vate armies get­ting used for a coup or some­thing along those lines.

    And that’s a big part of why it’s notable that Taruta and Kolomoisky have a history of financing groups like the Azov Battalion:

    ...
    “Some of Ukraine’s pri­vate bat­tal­ions have black­ened the country’s inter­na­tion­al rep­u­ta­tion with their extrem­ist views. The Azov bat­tal­ion, par­tial­ly fund­ed by Taru­ta and Kolo­moisky, uses the Nazi Wolf­san­gel sym­bol as its logo, and many of its mem­bers open­ly espouse neo-Nazi, anti-Semit­ic views. The bat­tal­ion mem­bers have spo­ken about “bring­ing the war to Kiev,” and said that Ukraine needs “a strong dic­ta­tor to come to pow­er who could shed plen­ty of blood but unite the nation in the process.””
    ...

    And that’s also why it’s so notable if a com­pa­ny like AIQ is offer­ing polit­i­cal ser­vices to some­one like Taru­ta: Because Taru­ta appears to be allied with the pro-West­ern fac­tion of Ukrain­ian oli­garchs who want to replace their cur­rent Ukrain­ian gov­ern­ment with their own fac­tion. Much like Andreii Arte­menko and his ‘peace plan’ pro­pos­al, which also appeared to be a plan from a pro-West­ern-anti-Poroshenko fac­tion of Ukrain­ian oli­garchs.

    In other words, the story about Sergei Taruta and the bizarre fake congressional hearing appears to be one element of a much larger, very real inter-oligarch dispute involving some very powerful oligarchs. And Cambridge Analytica/AIQ/SCL appears to be working for one of those sides: the side currently out of power and trying to reverse that situation.

    Posted by Pterrafractyl | April 9, 2018, 4:27 pm
  7. So you know that creepy feeling you get when you Google something and ads creepily related to what you just browsed start following you around on the internet? Rejoice! At least, rejoice if you enjoy that creepy feeling. Because you’ll get to experience that creepy feeling watching broadcast tv too, with the next generation of televisions and the ATSC 3.0 broadcast format technology that was just offered to the American public for the first time on KFPH UniMás 35 in Phoenix, Arizona, with more market rollouts planned soon.

    So how is the ATSC 3.0 broad­cast for­mat for tele­vi­sion going to allow creep­i­ly per­son­al­ized ads to fol­low you on tele­vi­sion too? The new for­mat basi­cal­ly com­bines over-the-air TV with inter­net stream­ing. So part of what you’ll see on the screen will be con­tent sent over the inter­net which will obvi­ous­ly be per­son­al­ized. And that’s going to include ads.

    But it won’t just be delivering personalized content. The technology will also allow for tracking of user behavior. And there are no privacy standards at all: that will be up to the individual broadcasters, each of whom will design their own app to deliver the personalized content. Which obviously means there are going to be lots of broadcasters tracking your television viewing habits, creating the kind of nightmare privacy situation we’ve already seen with platforms like Facebook and its app developers. This ATSC 3.0 broadcast format is like a new giant platform that everyone in the US will share, but with no privacy standards for the app developers, which might even be worse than Facebook.
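
    To make that concrete, here’s a minimal sketch, in TypeScript, of what one of these broadcaster-built tracking apps could look like. To be clear, this is a hypothetical illustration: the ATSC 3.0 standard doesn’t define these interfaces, and the class, fields, and endpoint URL below are invented for the example. It just shows how an HTML5-based app, left to set its own privacy policy, could log what you watch and for how long, and then ship that profile back to the broadcaster:

        // Hypothetical sketch only: ATSC 3.0 does not define these APIs.
        // It illustrates how a broadcaster-authored HTML5 app could log
        // viewing behavior under its own, self-chosen "privacy policy".
        interface ViewingEvent {
          deviceId: string;   // persistent identifier the app assigns to this TV
          channel: string;
          program: string;
          startedAt: number;  // epoch milliseconds
          endedAt?: number;
        }

        class ViewerTracker {
          private current?: ViewingEvent;
          private readonly log: ViewingEvent[] = [];

          constructor(private deviceId: string) {}

          // Called by the app whenever the viewer lands on a program.
          startWatching(channel: string, program: string): void {
            this.stopWatching();
            this.current = { deviceId: this.deviceId, channel, program, startedAt: Date.now() };
          }

          stopWatching(): void {
            if (this.current) {
              this.current.endedAt = Date.now();
              this.log.push(this.current);
              this.current = undefined;
            }
          }

          // Ships the accumulated viewing history to the broadcaster. Nothing
          // in the format constrains what is collected or how long it is kept.
          flush(endpoint: string): Promise<Response> {
            const payload = JSON.stringify(this.log);
            this.log.length = 0;
            return fetch(endpoint, {
              method: "POST",
              headers: { "Content-Type": "application/json" },
              body: payload,
            });
          }
        }

        // Usage: the app decides unilaterally what gets recorded.
        const tracker = new ViewerTracker("tv-1234");
        tracker.startWatching("KFPH 35", "Evening News");
        // ...later...
        tracker.stopWatching();
        tracker.flush("https://broadcaster.example/telemetry"); // hypothetical endpoint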

    So that’s coming with the next generation of televisions. As one might imagine, given that it threatens to turn the tv into the next consumer privacy nightmare, this technology was a major focus of several tech demonstrations at the recent National Association of Broadcasters (NAB) conference in Las Vegas. And as one might also imagine, the industry hasn’t had much to say about the privacy aspect of the nightmare it’s about to unleash:

    Tech­Hive

    Next-gen TV to ush­er in view­er track­ing and per­son­al­ized ads along with 4K broad­casts
    More of your life will be lost to adver­tis­ers when TV sta­tions switch to a new dig­i­tal for­mat

    Mar­tyn Williams By Mar­tyn Williams

    Senior Cor­re­spon­dent, Tech­Hive
    Apr 13, 2018 3:00 AM PT

    On Mon­day a lit­tle bit of U.S. tele­vi­sion his­to­ry was made when KFPH UniMás 35 became the first sta­tion to go on air using the new ATSC 3.0 broad­cast for­mat in Phoenix, Ari­zona. Over the com­ing weeks, sev­er­al more broad­cast­ers will fol­low and the first wide-scale test of the new for­mat will be under­way.

    The for­mat attempts to blend over-the-air TV with inter­net stream­ing, can sup­port 4K broad­cast­ing and local­ized emer­gency alerts, and should be more robust for city recep­tion; but it also gives TV sta­tions the chance to start serv­ing per­son­al­ized adver­tis­ing.

    Broad­cast­ers haven’t talked much about the adver­tis­ing aspect, and they’ve said even less about the poten­tial pri­va­cy impli­ca­tions, but it was a major focus of sev­er­al tech demon­stra­tions at the Nation­al Asso­ci­a­tion of Broad­cast­ers (NAB) con­fer­ence in Las Vegas this week.

    At the event, about 300 miles to the north of Phoenix, it was clear that TV sta­tions are keen to use the new for­mat to track more close­ly what view­ers are watch­ing and serve up the same kind of tar­get­ed ads that are com­mon on the Inter­net.

    When view­ers tune into an ATSC 3.0, the TV sta­tion has the abil­i­ty to serve them an appli­ca­tion that will run inside a brows­er on their TV. View­ers won’t see a tra­di­tion­al brows­er win­dow, it will look some­thing like the images above, and because it’s writ­ten in HTML5 it will work across all TVs.

    But the style of the app and the fea­tures it offers will be down to each indi­vid­ual broad­cast­er. Some might offer quick links to news clips and the weath­er and access to a catch-up ser­vice (i.e., video on demand that would let you watch pre­vi­ous­ly aired pro­gram­ming you’d missed the first time), while small­er sta­tions might just pro­vide a TV guide.

    One thing many are like­ly to do is track exact­ly what you’re watch­ing and for how long.

    The ATSC 3.0 for­mat does­n’t define a pri­va­cy pol­i­cy. It’s down to each TV sta­tion so there is no guar­an­tee they will all be uni­form.

    In a demon­stra­tion app on dis­play at NAB, the TV tracked what a view­er watched and for how long. The pay-off for the view­er would be free or exclu­sive access to con­tent. So, for exam­ple, imag­ine a future where a TV sta­tion gives you free access to pre­mi­um con­tent in return for being loy­al to its news­casts.

    But the TV sta­tion would be get­ting more than loy­al­ty. The data would be used to build a pro­file of the view­er and serve them per­son­al­ized ads, deliv­ered over the inter­net to their TV.

    That will be a lucra­tive new ad mod­el for TV broadcasters–and that’s why the TV indus­try is so excit­ed about ATSC 3.0.

    ...

    Can you imag­ine being a mid­dle-of-the-road vot­er in a swing state when the elec­tion rolls around? If you thought polit­i­cal adver­tis­ing was bad now, just wait until the cam­paigns get their teeth into tar­get­ing on this per­son­al­ized lev­el. It might be bet­ter to leave the TV off for six months.

    In the demon­stra­tions I saw this week, apps were capa­ble of track­ing only what a user did inside the app in ques­tion. One sta­tion won’t be able to see what you watch on a rival, but that gets blur­ri­er in mar­kets where a sin­gle own­er oper­ates sev­er­al chan­nels.

    It’s worth remem­ber­ing that ATSC 3.0 does­n’t inevitably mean a loss in pri­va­cy. None of this mat­ters if you don’t hook up a TV to the inter­net, but then you forego addi­tion­al ser­vices like catch-up.

    ———-

    “Next-gen TV to ush­er in view­er track­ing and per­son­al­ized ads along with 4K broad­casts” By Mar­tyn Williams; Tech­Hive; 04/13/2018

    “Broad­cast­ers haven’t talked much about the adver­tis­ing aspect, and they’ve said even less about the poten­tial pri­va­cy impli­ca­tions, but it was a major focus of sev­er­al tech demon­stra­tions at the Nation­al Asso­ci­a­tion of Broad­cast­ers (NAB) con­fer­ence in Las Vegas this week.”

    Mum’s the word on the poten­tial pri­va­cy impli­ca­tions for Amer­i­can tele­vi­sion view­ers. Poten­tial pri­va­cy impli­ca­tions that could be com­ing to a media mar­ket near you soon:

    ...
    On Monday a little bit of U.S. television history was made when KFPH UniMás 35 became the first station to go on air using the new ATSC 3.0 broadcast format in Phoenix, Arizona. Over the coming weeks, several more broadcasters will follow and the first wide-scale test of the new format will be underway.
    ...

    And while the broad­cast­ing indus­try may not want to talk about poten­tial pri­va­cy vio­la­tions, they sure are excit­ed to talk about col­lect­ing view­er data for the pur­pose of serv­ing up per­son­al­ized ads:

    ...
    The for­mat attempts to blend over-the-air TV with inter­net stream­ing, can sup­port 4K broad­cast­ing and local­ized emer­gency alerts, and should be more robust for city recep­tion; but it also gives TV sta­tions the chance to start serv­ing per­son­al­ized adver­tis­ing.

    ...

    At the event, about 300 miles to the north of Phoenix, it was clear that TV sta­tions are keen to use the new for­mat to track more close­ly what view­ers are watch­ing and serve up the same kind of tar­get­ed ads that are com­mon on the Inter­net.
    ...

    And in this new app-based model for personalized broadcast television, each broadcaster develops their own app, meaning there are going to be a lot of different apps/broadcasters potentially tracking what you do with those next-generation TVs:

    ...
    When view­ers tune into an ATSC 3.0, the TV sta­tion has the abil­i­ty to serve them an appli­ca­tion that will run inside a brows­er on their TV. View­ers won’t see a tra­di­tion­al brows­er win­dow, it will look some­thing like the images above, and because it’s writ­ten in HTML5 it will work across all TVs.

    But the style of the app and the fea­tures it offers will be down to each indi­vid­ual broad­cast­er. Some might offer quick links to news clips and the weath­er and access to a catch-up ser­vice (i.e., video on demand that would let you watch pre­vi­ous­ly aired pro­gram­ming you’d missed the first time), while small­er sta­tions might just pro­vide a TV guide.

    One thing many are like­ly to do is track exact­ly what you’re watch­ing and for how long.

    The ATSC 3.0 for­mat does­n’t define a pri­va­cy pol­i­cy. It’s down to each TV sta­tion so there is no guar­an­tee they will all be uni­form.

    In a demon­stra­tion app on dis­play at NAB, the TV tracked what a view­er watched and for how long. The pay-off for the view­er would be free or exclu­sive access to con­tent. So, for exam­ple, imag­ine a future where a TV sta­tion gives you free access to pre­mi­um con­tent in return for being loy­al to its news­casts.

    But the TV sta­tion would be get­ting more than loy­al­ty. The data would be used to build a pro­file of the view­er and serve them per­son­al­ized ads, deliv­ered over the inter­net to their TV.

    That will be a lucra­tive new ad mod­el for TV broadcasters–and that’s why the TV indus­try is so excit­ed about ATSC 3.0.
    ...

    Although it’s worth noting that the demonstration apps shown to the author of that TechHive article weren’t capable of tracking what you do in other apps. So each broadcaster would, in theory, only get to see what you do with their own app and not other broadcasters’ apps. But, of course, a lot of broadcasters are going to own multiple channels in a market. Or they just might decide to share the data with each other (see the sketch after this excerpt):

    ...
    In the demon­stra­tions I saw this week, apps were capa­ble of track­ing only what a user did inside the app in ques­tion. One sta­tion won’t be able to see what you watch on a rival, but that gets blur­ri­er in mar­kets where a sin­gle own­er oper­ates sev­er­al chan­nels.
    ...
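
    To see why those per-app silos offer thin protection, consider this sketch. The data structures are hypothetical, but the point is general: if two “separate” station apps run by the same owner tag their logs with the same device identifier, combining them into one cross-channel profile is trivial:

        // Hypothetical: two station apps, same owner, same device identifier.
        type LogEntry = { deviceId: string; station: string; program: string; minutes: number };

        // Group every entry from every app by device: one cross-channel
        // viewing history per household.
        function mergeProfiles(...appLogs: LogEntry[][]): Map<string, LogEntry[]> {
          const byDevice = new Map<string, LogEntry[]>();
          for (const log of appLogs) {
            for (const entry of log) {
              const list = byDevice.get(entry.deviceId) ?? [];
              list.push(entry);
              byDevice.set(entry.deviceId, list);
            }
          }
          return byDevice;
        }

        const stationA: LogEntry[] = [{ deviceId: "tv-1234", station: "A", program: "News", minutes: 42 }];
        const stationB: LogEntry[] = [{ deviceId: "tv-1234", station: "B", program: "Movie", minutes: 95 }];
        console.log(mergeProfiles(stationA, stationB).get("tv-1234")); // both entries, one profile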

    Also keep in mind that there are still sig­nif­i­cant poten­tial pri­va­cy vio­la­tions even if apps can’t read the activ­i­ty of oth­er apps. For instance, if an app is capa­ble of sim­ply detect­ing when you turn the tv off or on, that gives infor­ma­tion about your day to day liv­ing sched­ule. It’s one of the gener­ic pri­va­cy vio­la­tions that come with the “inter­net-of-things”.
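
    As a rough illustration of how much even that bare on/off signal reveals, here’s a hedged sketch (the types and function are invented for the example) that turns a list of TV power events into an hourly usage histogram, i.e. exactly the kind of day-to-day schedule profile described above:

        // Hypothetical sketch: infer a household's daily rhythm from nothing
        // more than TV power on/off timestamps.
        type PowerEvent = { on: boolean; at: number }; // 'at' is epoch milliseconds

        // Count how often the TV is on during each hour of the day, across days.
        function hourlyUsageProfile(events: PowerEvent[]): number[] {
          const hours = new Array(24).fill(0);
          for (let i = 0; i + 1 < events.length; i++) {
            // Only look at on -> off intervals.
            if (!events[i].on || events[i + 1].on) continue;
            for (let t = events[i].at; t < events[i + 1].at; t += 3_600_000) {
              hours[new Date(t).getHours()]++;
            }
          }
          return hours; // peaks reveal when someone is reliably home and awake
        }

        const sample: PowerEvent[] = [
          { on: true, at: Date.parse("2018-04-16T19:00:00") },
          { on: false, at: Date.parse("2018-04-16T23:00:00") },
        ];
        console.log(hourlyUsageProfile(sample)); // buckets for hours 19 through 22 get incremented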

    And then there’s the pos­si­ble pri­va­cy vio­la­tions that come with next-gen­er­a­tion tele­vi­sions with built in micro­phones. Imag­ine how many apps will ask for per­mis­sion to lis­ten to every­thing you say in order to bet­ter per­son­al­ize the ser­vice. Remem­ber those sto­ries about the CIA hack­ing into Sam­sung Smart TVs with built in micro­phones? That’s prob­a­bly going to be the stan­dard app behav­ior if peo­ple allow it.

    And, final­ly, the arti­cle notes that this means the night­mare of micro-tar­get­ed per­son­al­ized polit­i­cal ads is com­ing to broad­cast tele­vi­sion:

    ...
    Can you imag­ine being a mid­dle-of-the-road vot­er in a swing state when the elec­tion rolls around? If you thought polit­i­cal adver­tis­ing was bad now, just wait until the cam­paigns get their teeth into tar­get­ing on this per­son­al­ized lev­el. It might be bet­ter to leave the TV off for six months.
    ...

    Yep, just wait for Cambridge Analytica-style personalized psychological profiling of you, a profile that incorporates all the information already gathered about you from all the existing sources (Facebook, Google, data-broker giants like Acxiom) and combines it with the knowledge obtained through your smart television. Then get ready for the next-generation onslaught of the full spectrum of personalized political ads designed to inflame you and polarize the country. The “A/B testing on steroids” advertising experiments employed by the Trump team on social media are coming to television.
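
    For a sense of what that kind of targeting means mechanically, here’s a deliberately crude sketch. Nothing here reflects Cambridge Analytica’s or any broadcaster’s actual code, and the profile fields and ad variants are invented. It just shows how, once a per-viewer psychological profile exists, serving a different cut of the same commercial to each household is a few lines of logic:

        // Purely illustrative variant-picker of the "A/B testing on steroids"
        // sort. The profile dimensions and variants are invented.
        interface Profile { neuroticism: number; openness: number } // scores in [0, 1]

        const adVariants = [
          { id: "fear-themed", score: (p: Profile) => p.neuroticism },
          { id: "optimism-themed", score: (p: Profile) => p.openness },
        ];

        // Serve whichever variant scores highest for this viewer's profile,
        // so two neighbors watching the same program see different ads.
        function chooseAd(p: Profile): string {
          return adVariants.reduce((a, b) => (a.score(p) >= b.score(p) ? a : b)).id;
        }

        console.log(chooseAd({ neuroticism: 0.8, openness: 0.3 })); // "fear-themed"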

    It’ll be a gold­en age for tele­vi­sion com­mer­cial actors because they’re going to have to shoot all the dif­fer­ent cus­tomized ver­sions of the same com­mer­cials used to micro-tar­get the audi­ence’s psy­cho­log­i­cal pro­files.

    Of course, there is going to be one option for next-generation television owners to avoid the data privacy nightmare of personalized tv: unplug it from the internet and just watch tv the soon-to-be-old-fashioned way:

    ...
    It’s worth remembering that ATSC 3.0 doesn’t inevitably mean a loss in privacy. None of this matters if you don’t hook up a TV to the internet, but then you forego additional services like catch-up.
    ...

    And that points towards both the glaring problem with this situation and the only real solution to it: the only choice American television consumers are going to have is to either navigate a data privacy nightmare landscape, where each app can have its own privacy standards and there are almost no rules, or unplug the smart tv from the internet and forgo the internet-based services. And that’s because spying on consumers in exchange for services and enhanced profits is the fundamental model of the internet, and this new data privacy nightmare landscape for smart tvs is merely the logical extension of that model. It’s a fundamental problem with the future of television ads and with the internet-of-things in general: mass commercial spying is just assumed in America. It’s the model for the internet in America. There is no alternative. And that model is coming to broadcast television, since that commercial mass spying model is clearly enshrined in the new ATSC 3.0 broadcast format. It’s a format that lets each app developer make up their own privacy standards. A ‘prepare-for-the-worst-hope-for-the-best’ model that literally prepares the way for the worst-case scenario for consumer privacy and then just hopes it won’t be abused. Like the internet.

    And in the case of this next-generation internet-connected television, there isn’t even the possibility of competition that we find with Facebook, where a Facebook competitor could at least theoretically emerge. There’s only one national broadcast format for smart tvs, and for nations that use the ATSC 3.0 standard it’s going to let each app maker make up their own privacy rules. Note that the ATSC 3.0 standard doesn’t just apply to the US. It was created by the Advanced Television Systems Committee, which is shared by the US, Canada, Mexico, South Korea, and Honduras. So this is a multinational television standard, and it’s a standard that governments approve, so it’s not like competition is going to emerge. This is as good as the privacy standards are going to get for North American and South Korean internet-connected tv consumers: it’s up to the app developers, i.e. no privacy standards.

    And no standards on the exploitation of all the data collected on us to deliver highly persuasive micro-targeted ad campaigns. Cambridge Analytica-style micro-targeted psychological operations for tv. That’s coming to all elections.

    So just FYI, your next smart tele­vi­sion is going to be very per­sua­sive.

    Posted by Pterrafractyl | April 15, 2018, 7:41 pm
  8. This was more or less inevitable: it sounds like the ’87 mil­lion’ fig­ure — the num­ber of Face­book pro­files that had their data scraped by Cam­bridge Ana­lyt­i­ca — is set to be raised again. Recall that it was ini­tial­ly a 50 mil­lion fig­ure before Cam­bridge Ana­lyt­i­ca whis­tle-blow­er Christo­pher Wylie raised the esti­mate to 87 mil­lion, while hint­ing that the fig­ure could be more.

    Also recall that the 87 million figure, ostensibly derived from the 270,000 people who downloaded the Cambridge Analytica Facebook app and their many friends, corresponds to ~322 friends for each app user on average, which is very close to the average of 338 friends Facebook users had in 2014. In other words, the 87 million figure is roughly what we should expect if you start off with 270,000 app users and scrape the profile information of each of their friends. So if that 87 million figure were to rise significantly, it would raise the question of where else Cambridge Analytica got its data.
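
    As a quick back-of-the-envelope check of that arithmetic, here’s a sketch in TypeScript using only the figures cited above:

        // Back-of-the-envelope check of the figures cited above.
        const appUsers = 270_000;             // people who used the CA app
        const harvestedProfiles = 87_000_000; // the revised harvesting estimate

        // Implied average number of friends scraped per app user:
        const impliedFriends = harvestedProfiles / appUsers;
        console.log(impliedFriends.toFixed(0)); // ~322, close to the 2014 average of 338

        // So a figure "much greater" than 87 million can't come from
        // friend-scraping off those 270,000 users alone; it implies
        // additional apps or data sources.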

    Well, we have a new Cam­bridge Ana­lyt­i­ca whis­tle-blow­er, Brit­tany Kaiser, who worked full-time for SCL, Cam­bridge Ana­lyt­i­ca’s par­ent com­pa­ny, as direc­tor of busi­ness devel­op­ment between Feb­ru­ary 2015 and Jan­u­ary of 2018. And accord­ing to Kaiser, it is indeed “much greater” than 87 mil­lion users. And Kaiser has a pos­si­ble expla­na­tion for how Cam­bridge Ana­lyt­i­ca got data on all these addi­tion­al users: they had more than one app that was scrap­ing Face­book pro­file data.

    And the way Kaiser puts it, it sounds like there were quite a few dif­fer­ent apps used by Cam­bridge Ana­lyt­i­ca. Includ­ing one she calls the “sex com­pass quiz”. So, yes, the Trump team was appar­ent­ly explor­ing the sex­u­al predilec­tions of the Amer­i­can elec­torate.

    Addi­tion­al­ly, Kaiser makes ref­er­ences to Cam­bridge Ana­lyt­i­ca’s “part­ners”. As she puts it, “I am aware in a gen­er­al sense of a wide range of sur­veys which were done by CA or its part­ners, usu­al­ly with a Face­book login–for exam­ple, the ‘sex com­pass’ quiz.” So is that ref­er­ence to Cam­bridge Ana­lyt­i­ca’s “part­ners” a ref­er­ence to SCL or Alek­san­dr Kogan’s Glob­al Sci­ence Research (GSR) com­pa­ny? Or were there oth­er third-par­ty firms that are also feed­ing infor­ma­tion into Cam­bridge Ana­lyt­i­ca? The Repub­li­can Nation­al Com­mit­tee, per­haps?

    Along those lines, Kaiser makes another remarkable claim: that the office culture was like the “Wild West” and that personal data was “being scraped, resold and modeled willy-nilly.” So Kaiser is asserting that Cambridge Analytica resold the data too? It sure sounds like it.

    These are the kinds of ques­tions raised by Brit­tany Kaiser’s new claims. Along with the open ques­tion of exact­ly how many peo­ple Cam­bridge Ana­lyt­i­ca was col­lect­ing this kind of Face­book data on. We know it’s “much greater” than 87 mil­lion, accord­ing to Kaiser, but we have no idea how much greater it is:

    Newsweek

    Who Is Brit­tany Kaiser? Face­book Leak ‘Much Greater’ Than 87M Accounts Warns Ex-Cam­bridge Ana­lyt­i­ca Direc­tor

    By Jason Mur­dock
    On 4/17/18 at 12:30 PM

    Cam­bridge Ana­lyt­i­ca, the Lon­don-based polit­i­cal analy­sis firm that worked on the pres­i­den­tial elec­tion cam­paign of Don­ald Trump, used mul­ti­ple apps to har­vest Face­book data—and the true scope of the abuse is like­ly “much greater” than 87 mil­lion accounts, a for­mer staffer-turned-whistle­blow­er has claimed.

    Brit­tany Kaiser, who worked full-time for the SCL Group, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, as direc­tor of busi­ness devel­op­ment between Feb­ru­ary 2015 and Jan­u­ary this year, told a U.K. gov­ern­ment com­mit­tee on Tues­day the firm had used Face­book data it pre­vi­ous­ly claimed to have delet­ed.

    Face­book has faced an unprece­dent­ed back­lash after user data was alleged­ly abused by a researcher called Alek­san­dr Kogan. Kogan has been accused of using a per­son­al­i­ty test app to obtain data linked to mil­lions of accounts.

    Kaiser, who released a num­ber of new doc­u­ments into the pub­lic domain alleg­ing to show how the com­pa­ny worked on pro­pos­als for the U.K. “Brex­it” cam­paign, wrote in a tes­ti­mo­ny sub­mit­ted to the government’s enquiry into fake news: “I am aware in a gen­er­al sense of a wide range of sur­veys which were done by CA or its part­ners, usu­al­ly with a Face­book login–for exam­ple, the ‘sex com­pass’ quiz.

    “I do not know the specifics of these sur­veys or how the data was acquired or processed. But I believe it is almost cer­tain that the num­ber of Face­book users whose data was com­pro­mised through routes sim­i­lar to that used by Kogan is much greater than 87 mil­lion; and that both Cam­bridge Ana­lyt­i­ca and oth­er uncon­nect­ed com­pa­nies and cam­paigns were involved in these activ­i­ties.”

    Face­book’s founder and CEO, Mark Zucker­berg, has said Kogan broke the website’s poli­cies and stressed a full audit is cur­rent­ly tak­ing place to find out if oth­er apps were using sim­i­lar tac­tics. It is believed that Kogan—who is alleged to have sold the infor­ma­tion to Cam­bridge Analytica—designed the sys­tem so users’ social media activ­i­ty could be used for inten­sive polit­i­cal pro­fil­ing.

    Zucker­berg him­self has warned all users were at risk of data scrap­ing.

    Accord­ing to Kaiser, a U.S. cit­i­zen who, along­side for­mer Cam­bridge Ana­lyt­i­ca staffer Christo­pher Wylie, is now con­sid­ered a whistle­blow­er, her for­mer employ­er used the Face­book data dur­ing sales pitch­es to poten­tial clients.

    She alleged it had links to the Lon­don bureau of far-right news web­site Bre­it­bart and sig­nif­i­cant time dur­ing the hear­ing was ded­i­cat­ed to its sus­pect­ed work with Leave.EU, a cam­paign push­ing for Britain to exit the Euro­pean Union (EU). In a series of updates via Twit­ter, Cam­bridge Ana­lyt­i­ca denied links to Leave.EU.

    In a state­ment to Newsweek, Cam­bridge Ana­lyt­i­ca said:

    “In the past Cam­bridge Ana­lyt­i­ca has designed and run quizzes for inter­nal research projects. This has includ­ed a fair­ly con­ven­tion­al per­son­al­i­ty quiz as well as broad­er quizzes such as one that probed peo­ple’s music pref­er­ences.

    “Data col­lect­ed from these quizzes were always col­lect­ed under a clear state­ment of con­sent. When mem­bers of the pub­lic logged into a quiz with their Face­book details, only their pub­lic pro­file infor­ma­tion was col­lect­ed. The vol­umes of users who took the quizzes num­bered in the tens of thou­sands: any sug­ges­tion that we col­lect­ed data on the scale of [Glob­al Sci­ence Research Lim­it­ed] is incor­rect.

    “We no longer run such quizzes or hold data that was col­lect­ed in this way.”

    Who is Brit­tany Kaiser?

    Accord­ing to her writ­ten tes­ti­mo­ny, Kaiser was born in Hous­ton, Texas, and grew up in Chica­go. She was a part of Barack Obama’s media team dur­ing the pres­i­den­tial cam­paign in 2007 and has also worked for Amnesty Inter­na­tion­al as a lob­by­ist appeal­ing for an end to crimes against human­i­ty. This month, Kaiser start­ed a Face­book cam­paign appeal­ing for trans­paren­cy called #OwnY­our­Da­ta.

    Dur­ing her time at Cam­bridge Ana­lyt­i­ca she worked on sales pro­pos­als and liaised with clients. She worked under senior man­age­ment includ­ing CEO Alexan­der Nix, who this week declined to appear before the same fake news enquiry.

    Kaiser claimed that the office cul­ture was like the “Wild West” and alleged that cit­i­zens’ data was “being scraped, resold and mod­eled willy-nil­ly.”

    “Pri­va­cy has become a myth, and track­ing people’s behav­ior has become an essen­tial part of using social media and the inter­net itself; tools that were meant to free our minds and make us more con­nect­ed, with faster access to infor­ma­tion than ever before,” she wrote in her tes­ti­mo­ny.

    “Instead of con­nect­ing us, these tools have divid­ed us. It’s time to expose their abus­es, so we can have an hon­est con­ver­sa­tion about how we build a bet­ter way for­ward,” Kaiser added.

    ———-

    “Who Is Brit­tany Kaiser? Face­book Leak ‘Much Greater’ Than 87M Accounts Warns Ex-Cam­bridge Ana­lyt­i­ca Direc­tor” by Jason Mur­dock; Newsweek; 04/17/2018

    “Kaiser claimed that the office cul­ture was like the “Wild West” and alleged that cit­i­zens’ data was “being scraped, resold and mod­eled willy-nil­ly.””

    That's right, Cambridge Analytica wasn't just scraping Facebook users' data. They were apparently reselling it too. These are the claims of Brittany Kaiser, who worked full-time for the SCL Group, the parent company of Cambridge Analytica, as director of business development between February 2015 and January of 2018, during her testimony to a UK government committee:

    ...
    Brit­tany Kaiser, who worked full-time for the SCL Group, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, as direc­tor of busi­ness devel­op­ment between Feb­ru­ary 2015 and Jan­u­ary this year, told a U.K. gov­ern­ment com­mit­tee on Tues­day the firm had used Face­book data it pre­vi­ous­ly claimed to have delet­ed.
    ...

    And accord­ing to Kaiser, the addi­tion­al apps used by Cam­bridge Ana­lyt­i­ca include a “sex com­pass” quiz.

    ...
    Kaiser, who released a num­ber of new doc­u­ments into the pub­lic domain alleg­ing to show how the com­pa­ny worked on pro­pos­als for the U.K. “Brex­it” cam­paign, wrote in a tes­ti­mo­ny sub­mit­ted to the government’s enquiry into fake news: “I am aware in a gen­er­al sense of a wide range of sur­veys which were done by CA or its part­ners, usu­al­ly with a Face­book login–for exam­ple, the ‘sex com­pass’ quiz.
    ...

    And keep in mind that the use of this ‘sex compass’ quiz app probably worked much like Aleksandr Kogan's psychological profiling app: you use the data collected on the people taking the quiz as the “training set” in order to develop algorithms for inferring Facebook users' sexual preferences from their Facebook profile data. And then Cambridge Analytica uses those algorithms to make educated guesses about the ‘sexual compass’ of all the other Facebook users they have profile data on. We don't know for certain that this is what Cambridge Analytica did with the ‘sex compass’ app, but it is probably what they did, because that is the business they were in.
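
    We don't know anything about Cambridge Analytica's actual code, but the general “training set” technique described above is standard supervised learning. Here's a minimal, purely hypothetical sketch of the pattern (the features, labels, and model choice are all assumptions for illustration, not anything from the reporting):

    ```python
    # Hypothetical sketch of the train-on-quiz-takers, infer-on-everyone-else
    # pattern described above. Features and labels are invented for
    # illustration; nothing here comes from Cambridge Analytica itself.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # "Training set": profile features (e.g., page likes encoded as 0/1)
    # for the people who actually took the quiz, plus their quiz answers.
    X_quiz_takers = rng.integers(0, 2, size=(1000, 50))  # 1,000 users, 50 likes
    y_quiz_answers = rng.integers(0, 2, size=1000)       # a binary trait from the quiz

    model = LogisticRegression(max_iter=1000)
    model.fit(X_quiz_takers, y_quiz_answers)

    # "Inference": the same profile features for scraped users who never
    # took the quiz; the model guesses the trait for each of them.
    X_scraped_friends = rng.integers(0, 2, size=(5, 50))
    print(model.predict_proba(X_scraped_friends)[:, 1])  # estimated trait probability
    ```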

    And it’s the use of all these addi­tion­al apps that Kaiser saw Cam­bridge Ana­lyt­i­ca employ that appears to be the basis for her con­clu­sion that the num­ber of Face­book pro­files scraped by Cam­bridge Ana­lyt­i­ca is “much greater than 87 mil­lion”. And she also asserts, quite rea­son­ably, that Cam­bridge Ana­lyt­i­ca was­n’t the only enti­ty engaged in this kind of activ­i­ty:

    ...
    “I do not know the specifics of these sur­veys or how the data was acquired or processed. But I believe it is almost cer­tain that the num­ber of Face­book users whose data was com­pro­mised through routes sim­i­lar to that used by Kogan is much greater than 87 mil­lion; and that both Cam­bridge Ana­lyt­i­ca and oth­er uncon­nect­ed com­pa­nies and cam­paigns were involved in these activ­i­ties.”
    ...

    So how much high­er is that 87 mil­lion fig­ure going to go? Well, there’s one oth­er high­ly sig­nif­i­cant num­ber we should keep in mind when try­ing to under­stand what kind of data Cam­bridge Ana­lyt­i­ca acquired: The com­pa­ny claimed to have up to 5,000 data points on 220 mil­lion Amer­i­cans. Also keep in mind that 220 mil­lion is greater than the total num­ber of Face­book users in the US (~214 mil­lion in 2018).

    So if we're wondering how high that 87 million figure might go, the answer is probably something along the lines of “almost all the Facebook users in the US in 2014–2015,” whatever that number happens to be.

    Posted by Pterrafractyl | April 17, 2018, 3:43 pm
  9. Here’s a set of arti­cles on one of the fig­ures who co-found­ed both Cam­bridge Ana­lyt­i­ca and its par­ent com­pa­ny SCL Group: Nigel Oakes.

    While Cambridge Analytica's former CEO Alexander Nix has received much of the attention directed at Cambridge Analytica, especially following the shocking hidden-camera footage of Nix talking to an undercover reporter he thought was a client, the story of Cambridge Analytica ultimately leads to Oakes, according to multiple sources.

    So who is Nigel Oakes? Well, as the fol­low­ing arti­cle notes, Oakes got his start in the busi­ness of influ­enc­ing peo­ple in the field of “mar­ket­ing aro­mat­ics,” or the use of smells to make con­sumers spend more mon­ey. He also dat­ed Lady Helen Wind­sor when he was younger, which made him a some­what pub­licly known per­son in the UK.

    In 1993, Oakes co-founded Strategic Communication Laboratories, the predecessor to SCL Group. In 2005, he co-founded SCL Group, which made headlines at the time by billing itself at a global arms fair in London as the first private company to provide psychological warfare services. Oakes said he was confident that psyops could shorten military conflicts. As he put it, “We used to be in the business of mind bending for political purposes, but now we are in the business of saving lives.”

    SCL sold the same psy­cho­log­i­cal war­fare prod­ucts in the US. Ser­vices includ­ed manip­u­la­tion of elec­tions and “per­cep­tion man­age­ment,” or the inten­tion­al spread of fake news. And the US State Depart­ment remains a client and con­firmed that it retains SCL Group on a con­tract to “pro­vide research and ana­lyt­i­cal sup­port in con­nec­tion with our mis­sion to counter ter­ror­ist pro­pa­gan­da and dis­in­for­ma­tion over­seas.”

    So Nigel Oakes has quite an interesting history. A history that he unwittingly encapsulated in a now-notorious quote he gave in 1992:
    “We use the same tech­niques as Aris­to­tle and Hitler...We appeal to peo­ple on an emo­tion­al lev­el to get them to agree on a func­tion­al lev­el.”:

    Politi­co

    Cam­bridge Ana­lyt­i­ca boss went from ‘aro­mat­ics’ to psy­ops to Trump’s cam­paign

    While Alexan­der Nix draws head­lines for his role in the Trump 2016 dig­i­tal oper­a­tion, his col­or­ful busi­ness part­ner Nigel Oakes may be an equal­ly impor­tant fig­ure.

    By Josh Mey­er

    3/22/18, 10:15 AM CET

    Updat­ed 3/23/18, 4:17 AM CET

    WASHINGTON — Long before the polit­i­cal data firm he over­sees, Cam­bridge Ana­lyt­i­ca, helped Don­ald Trump become pres­i­dent, Nigel Oakes tried a very dif­fer­ent form of influ­enc­ing human behav­ior. It was called “mar­ket­ing aro­mat­ics,” or the use of smells to make con­sumers spend more mon­ey.

    In the decades since, the Eton-edu­cat­ed British busi­ness­man has styled him­self as an expert on a wide vari­ety of “mind-bend­ing” tech­niques — from scents to psy­cho­log­i­cal war­fare to cam­paign pol­i­tics.

    But some 25 years after his for­ay into aro­mat­ics, a bad odor has arisen around his use of data to influ­ence vot­er behav­ior. Oakes and his part­ners, who include Cam­bridge Ana­lyt­i­ca CEO Alexan­der Nix, are under intense scruti­ny over their meth­ods in the 2016 cam­paign, includ­ing the alleged improp­er use of Face­book data. Some news reports have also found links to Rus­sia that the com­pa­ny has down­played.

    Oakes and the com­pa­ny he co-found­ed in 2005 along with Nix, SCL Group, have now drawn the inter­est of con­gres­sion­al offi­cials. Three Repub­li­can sen­a­tors wrote Oakes a let­ter this week request­ing infor­ma­tion and a brief­ing relat­ed to Facebook’s sud­den sus­pen­sion last Fri­day of Cam­bridge Ana­lyt­i­ca, which is a close­ly affil­i­at­ed sub­sidiary of SCL.

    The request — from Sen­ate com­merce com­mit­tee mem­bers John Thune (R‑S.D.), Roger Wick­er (R‑Miss.) and Jer­ry Moran, (R‑Kan.) — came after recent alle­ga­tions that Cam­bridge Ana­lyt­i­ca used inap­pro­pri­ate­ly har­vest­ed pri­vate Face­book data on near­ly 50 mil­lion users and exploit­ed the infor­ma­tion to assist Pres­i­dent Don­ald Trump’s 2016 cam­paign.

    But that has trig­gered wider ques­tions about whether Cam­bridge Ana­lyt­i­ca, whose board once includ­ed for­mer Trump polit­i­cal strate­gist Steve Ban­non, could have played some role in the Kremlin’s scheme to manip­u­late U.S. social media in 2016. The com­pa­ny denies that.

    Cap­tured on an under­cov­er video by Britain’s Chan­nel 4 News, Nix boast­ed that the firm “did all the research, all the data, all the ana­lyt­ics, all the tar­get­ing,” for the Trump cam­paign, adding that “our data informed all the strat­e­gy.” (Trump offi­cials call that an exag­ger­a­tion.)

    Adding to the concern is the role of Aleksandr Kogan, a Russian-born researcher at Cambridge University who collected the Facebook data without disclosing that it would be used commercially, and who was also working for a university in St. Petersburg, Russia at the time. Cambridge Analytica also reportedly discussed a business relationship in 2014 and 2015 with the Kremlin-connected Russian oil giant Lukoil, which expressed interest in how data is used to target American voters, according to the New York Times.

    The recent flur­ry of cov­er­age has bare­ly men­tioned the 55-year-old Oakes, a vir­tu­al unknown in the U.S. but more famil­iar in Great Britain, in part because of his rela­tion­ship in the 1990s with a mem­ber of the roy­al Wind­sor fam­i­ly.

    But data ana­lyt­ics experts described Oakes as a hid­den hand run­ning both SCL and Cam­bridge Ana­lyt­i­ca.

    “Any­one right now that is focus­ing on the prob­lems with Cam­bridge Ana­lyt­i­ca should be back­track­ing to the source, which is Nigel Oakes,” said Sam Wool­ley, research direc­tor of the Dig­i­tal Intel­li­gence Lab at the Sil­i­con Val­ley-based Insti­tute for the Future.

    “My research has shown that Cam­bridge Ana­lyt­i­ca is the tip of the ice­berg of Nigel Oakes’ empire of psy­ops and infor­ma­tion ops around the world,” said Wool­ley, whose research aims to help pro­tect democ­ra­cy from the nefar­i­ous use of rapid­ly evolv­ing com­mu­ni­ca­tions tech­nol­o­gy. “As you start to dig in to that, you find out a lot of very con­cern­ing things.”

    Wool­ley said he attend­ed a Cam­bridge Ana­lyt­i­ca “meet-up” in April 2016 dur­ing the New York pres­i­den­tial pri­ma­ry in New York. At the time, the com­pa­ny was work­ing for anoth­er can­di­date, Sen­a­tor Ted Cruz (R‑Texas), and gave a wide-rang­ing overview of their activ­i­ties, Wool­ley said.

    It was clear from the ses­sion that the two com­pa­nies are com­plete­ly inter­twined, Wool­ley said. He recalled that Cam­bridge Ana­lyt­i­ca lead­ers “con­flat­ed all of their work with SCL’s work,” includ­ing in sev­er­al over­seas elec­tions. Based on his ongo­ing research, he described the two firms as sell­ing “polit­i­cal mar­ket­ing to the high­est bid­der, whether you’re in gov­ern­ment, the mil­i­tary or pol­i­tics, even author­i­tar­i­an” regimes.

    Oakes and SCL Group did not return calls seek­ing com­ment through a spokesper­son.

    SCL Group — like its pre­de­ces­sor, Strate­gic Com­mu­ni­ca­tion Lab­o­ra­to­ries, which Oakes co-found­ed in 1993 — is no stranger to con­tro­ver­sies relat­ed to for­eign elec­tions, includ­ing in con­nec­tion with alleged dirty tricks it has alleged­ly employed on behalf of polit­i­cal clients from Europe to Africa and Asia.

    The com­pa­ny also made head­lines in 2005 when it billed itself at a glob­al arms fair in Lon­don as the first pri­vate com­pa­ny to pro­vide psy­cho­log­i­cal war­fare ser­vices, or “psy­ops,” to the British mil­i­tary.

    At the time, Oakes, as chief exec­u­tive, said he was con­fi­dent that psy­ops could short­en mil­i­tary con­flicts, and that gov­ern­ments would buy such a ser­vice, which SCL had pro­vid­ed com­mer­cial­ly.

    “We used to be in the busi­ness of mind bend­ing for polit­i­cal pur­pos­es,” he told a reporter, “but now we are in the busi­ness of sav­ing lives.”

    Those who know Oakes, or know of him, are some­what skep­ti­cal.

    One pri­vate inves­ti­ga­tor said the com­pa­ny is known to have done exten­sive work for the U.S. mil­i­tary and oth­er gov­ern­ment agen­cies against tar­gets includ­ing Iran. SCL got its start in the U.S. by sell­ing the same psy­cho­log­i­cal war­fare prod­uct as it did to the British, includ­ing manip­u­la­tion of elec­tions and “per­cep­tion man­age­ment,” or the inten­tion­al spread of fake news.

    The State Depart­ment con­firmed to Defense One this week that it retains SCL Group on a con­tract to “pro­vide research and ana­lyt­i­cal sup­port in con­nec­tion with our mis­sion to counter ter­ror­ist pro­pa­gan­da and dis­in­for­ma­tion over­seas.”

    Com­pa­ny lit­er­a­ture describes some of SCL’s ser­vices, besides “psy­cho­log­i­cal war­fare,” as “influ­ence oper­a­tions” and “pub­lic diplo­ma­cy.”

    Absent from such descrip­tions is some of the more bom­bas­tic rhetoric of Oakes’ youth.

    “We use the same tech­niques as Aris­to­tle and Hitler,” he told an inter­view­er in 1992. “We appeal to peo­ple on an emo­tion­al lev­el to get them to agree on a func­tion­al lev­el.”

    On its web­site, SCL Group does not high­light its con­nec­tions to Cam­bridge Ana­lyt­i­ca.

    “Our vision is to be the pre­mier provider of data ana­lyt­ics and strat­e­gy for behav­ior change,” the web­site says.

    “Our mis­sion is to cre­ate behav­ior change through research, data, ana­lyt­ics, and strat­e­gy for both domes­tic and inter­na­tion­al gov­ern­ment clients.”

    But Oakes and his com­pa­ny have a his­to­ry of secre­cy, mak­ing the hid­den-cam­era footage of Nix all the more shock­ing. In the footage aired by Chan­nel 4, Nix appears to tell a jour­nal­ist pos­ing as a poten­tial client that the com­pa­ny could, for instance, send Ukrain­ian sex work­ers to an opponent’s house to sab­o­tage him.

    SCL Group said it has sus­pend­ed Nix while it inves­ti­gates, and sev­er­al U.S. law­mak­ers cit­ed the reports in say­ing that they want to call him back before com­mit­tees inves­ti­gat­ing Russ­ian med­dling to answer more ques­tions.

    One British jour­nal­ist who has inves­ti­gat­ed the two com­pa­nies and their lead­ers also sug­gest­ed that the real trail of ques­tions leads to Oakes.

    “Alexan­der Nix has been sus­pend­ed from a shell com­pa­ny that has no employ­ees and no assets,” said Car­ole Cad­wal­ladr of the Observ­er, who authored last weekend’s expose, and oth­ers. “If you think this ends here, think again.”

    The let­ter from the three sen­a­tors — which they also sent to Face­book CEO Mark Zucker­berg — asks Oakes whether he acknowl­edges the con­duct described in Facebook’s state­ment announc­ing the sus­pen­sion of Nix’s account, and those of both SCL Group and Cam­bridge Ana­lyt­i­ca.

    It also asks him to pro­vide infor­ma­tion about whether he was aware of oth­er activ­i­ty by Cam­bridge that Face­book said led to the sus­pen­sion, includ­ing how it accessed the data in ques­tion and whether it false­ly cer­ti­fied that it had destroyed it at Facebook’s request.

    “Con­sumers rely on app devel­op­ers to be trans­par­ent and truth­ful in their terms of ser­vice so con­sumers can make informed deci­sions about whether to con­sent to the shar­ing and use of their data,” the sen­a­tors wrote. “There­fore, the alle­ga­tion that SCL was not forth­com­ing with Face­book or trans­par­ent with con­sumers is trou­bling.”

    The sen­a­tors remind­ed Oakes that their com­mit­tee has juris­dic­tion over the inter­net and com­mu­ni­ca­tions tech­nolo­gies gen­er­al­ly, as well as over con­sumer pro­tec­tion and data pri­va­cy issues.

    Meanwhile, Democrats who have been investigating Russian election interference and suspected collusion between the Kremlin and the Trump campaign are expressing heightened interest in Oakes's company, though for now their focus is primarily on Nix.

    Rep­re­sen­ta­tive Adam Schiff, the top Demo­c­rat on the House intel­li­gence com­mit­tee, said on MSNBC Wednes­day that he was par­tic­u­lar­ly con­cerned about Nix’ com­ments, cap­tured by Chan­nel 4, about how he got off easy dur­ing his inter­view with Con­gress.

    “The Repub­li­cans asked three ques­tions. Five min­utes, done,” Nix said. And while the Democ­rats asked two hours of ques­tions, Nix said he didn’t have to answer them because “it’s vol­un­tary.”

    ...

    ———-

    “Cam­bridge Ana­lyt­i­ca boss went from ‘aro­mat­ics’ to psy­ops to Trump’s cam­paign” by Josh Mey­er; Politi­co; 03/22/2018

    ““Any­one right now that is focus­ing on the prob­lems with Cam­bridge Ana­lyt­i­ca should be back­track­ing to the source, which is Nigel Oakes,” said Sam Wool­ley, research direc­tor of the Dig­i­tal Intel­li­gence Lab at the Sil­i­con Val­ley-based Insti­tute for the Future.”

    Nigel Oakes is seen as “the source” of Cam­bridge Ana­lyt­i­ca. And Cam­bridge Ana­lyt­i­ca is seen as mere­ly “the tip of the ice­berg of Nigel Oakes’ empire of psy­ops and infor­ma­tion ops around the world”:

    ...
    “My research has shown that Cambridge Analytica is the tip of the iceberg of Nigel Oakes' empire of psyops and information ops around the world,” said Woolley, whose research aims to help protect democracy from the nefarious use of rapidly evolving communications technology. “As you start to dig in to that, you find out a lot of very concerning things.”
    ...

    And that's how British journalist Carole Cadwalladr, who has done extensive reporting on Cambridge Analytica over the last year, also sees it: the questions about Cambridge Analytica lead to Oakes:

    ...
    One British jour­nal­ist who has inves­ti­gat­ed the two com­pa­nies and their lead­ers also sug­gest­ed that the real trail of ques­tions leads to Oakes.

    “Alexan­der Nix has been sus­pend­ed from a shell com­pa­ny that has no employ­ees and no assets,” said Car­ole Cad­wal­ladr of the Observ­er, who authored last weekend’s expose, and oth­ers. “If you think this ends here, think again.”
    ...

    And it's no surprise that Cambridge Analytica questions lead to Oakes. He helped co-found it, along with co-founding SCL Group in 2005 and Strategic Communication Laboratories in 1993:

    ...
    Oakes and the com­pa­ny he co-found­ed in 2005 along with Nix, SCL Group, have now drawn the inter­est of con­gres­sion­al offi­cials. Three Repub­li­can sen­a­tors wrote Oakes a let­ter this week request­ing infor­ma­tion and a brief­ing relat­ed to Facebook’s sud­den sus­pen­sion last Fri­day of Cam­bridge Ana­lyt­i­ca, which is a close­ly affil­i­at­ed sub­sidiary of SCL.

    ...

    SCL Group — like its pre­de­ces­sor, Strate­gic Com­mu­ni­ca­tion Lab­o­ra­to­ries, which Oakes co-found­ed in 1993 — is no stranger to con­tro­ver­sies relat­ed to for­eign elec­tions, includ­ing in con­nec­tion with alleged dirty tricks it has alleged­ly employed on behalf of polit­i­cal clients from Europe to Africa and Asia.
    ...

    And Oakes has been pitching SCL Group as a private psychological warfare service provider for years. So if we're exploring how Cambridge Analytica got into the business of manipulating the masses, the fact that SCL had been providing those services to the US and UK governments for years is a pretty big factor in that story. By the time Cambridge Analytica was formed in 2013, its team was already quite experienced in these kinds of matters:

    ...
    The com­pa­ny also made head­lines in 2005 when it billed itself at a glob­al arms fair in Lon­don as the first pri­vate com­pa­ny to pro­vide psy­cho­log­i­cal war­fare ser­vices, or “psy­ops,” to the British mil­i­tary.

    At the time, Oakes, as chief exec­u­tive, said he was con­fi­dent that psy­ops could short­en mil­i­tary con­flicts, and that gov­ern­ments would buy such a ser­vice, which SCL had pro­vid­ed com­mer­cial­ly.

    “We used to be in the busi­ness of mind bend­ing for polit­i­cal pur­pos­es,” he told a reporter, “but now we are in the busi­ness of sav­ing lives.”

    Those who know Oakes, or know of him, are some­what skep­ti­cal.

    One pri­vate inves­ti­ga­tor said the com­pa­ny is known to have done exten­sive work for the U.S. mil­i­tary and oth­er gov­ern­ment agen­cies against tar­gets includ­ing Iran. SCL got its start in the U.S. by sell­ing the same psy­cho­log­i­cal war­fare prod­uct as it did to the British, includ­ing manip­u­la­tion of elec­tions and “per­cep­tion man­age­ment,” or the inten­tion­al spread of fake news.

    The State Depart­ment con­firmed to Defense One this week that it retains SCL Group on a con­tract to “pro­vide research and ana­lyt­i­cal sup­port in con­nec­tion with our mis­sion to counter ter­ror­ist pro­pa­gan­da and dis­in­for­ma­tion over­seas.”

    Com­pa­ny lit­er­a­ture describes some of SCL’s ser­vices, besides “psy­cho­log­i­cal war­fare,” as “influ­ence oper­a­tions” and “pub­lic diplo­ma­cy.”
    ...

    And as the hid­den-cam­era footage of Alexan­der Nix showed the world, those mass manip­u­la­tion ser­vices include dirty tricks. Like send­ing Ukrain­ian sex work­ers to an opponent’s house to sab­o­tage him. It’s an indi­ca­tor of the amoral char­ac­ter of the peo­ple behind Cam­bridge Ana­lyt­i­ca and its SCL Group par­ent:

    ...
    But Oakes and his com­pa­ny have a his­to­ry of secre­cy, mak­ing the hid­den-cam­era footage of Nix all the more shock­ing. In the footage aired by Chan­nel 4, Nix appears to tell a jour­nal­ist pos­ing as a poten­tial client that the com­pa­ny could, for instance, send Ukrain­ian sex work­ers to an opponent’s house to sab­o­tage him.
    ...

    And that amoral­i­ty is per­fect­ly encap­su­lat­ed in a now-noto­ri­ous 1992 quote from Oakes, where he favor­ably com­pares his work in psy­cho­log­i­cal manip­u­la­tion with the tech­niques employed by Hitler:

    ...
    Absent from such descrip­tions is some of the more bom­bas­tic rhetoric of Oakes’ youth.

    “We use the same tech­niques as Aris­to­tle and Hitler,” he told an inter­view­er in 1992. “We appeal to peo­ple on an emo­tion­al lev­el to get them to agree on a func­tion­al lev­el.”
    ...

    And that 1992 quote wasn't the only ‘we use the same techniques as Hitler!’ quote Oakes has made over the years. As the following article notes, Oakes made the same admission last year in reference to the techniques employed by Cambridge Analytica for the Trump campaign:

    The Huff­in­g­ton Post

    Cam­bridge Ana­lyt­i­ca Founder Once Com­pared Trump To Hitler
    Trump vil­i­fied Mus­lims the same way Hitler vil­i­fied Jews, Nigel Oakes said.

    By Willa Frej
    04/17/2018 12:32 pm ET Updat­ed

    Nigel Oakes, who runs the group that found­ed data min­ing firm Cam­bridge Ana­lyt­i­ca, admit­ted in an inter­view last year that Pres­i­dent Don­ald Trump’s con­tro­ver­sial pro­pa­gan­da tac­tics mir­ror those of Adolf Hitler.

    Both Hitler and Trump have suc­cess­ful­ly attacked anoth­er group, turn­ing it into an “arti­fi­cial ene­my,” in order to fos­ter greater sup­port among loy­al­ists, Oakes, the CEO of SCL Group, Cam­bridge Analytica’s par­ent com­pa­ny, said last Novem­ber.

    He made the com­ments as part of a series of inter­views that Emma Bri­ant, a Uni­ver­si­ty of Essex lec­tur­er, con­duct­ed with peo­ple involved in Britain’s cam­paign to leave the Euro­pean Union about pro­pa­gan­da used dur­ing the Brex­it ref­er­en­dum. Britain’s Par­lia­ment released the inter­view tran­scripts Mon­day.

    “Hitler, got to be very care­ful about say­ing so, must nev­er prob­a­bly say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a prob­lem with the Jews at all, but the peo­ple didn’t like the Jews,” Oakes said. “So if the peo­ple… He could just use them to say… So he just lever­age an arti­fi­cial ene­my. Well that’s exact­ly what Trump did. He lever­aged a Mus­lim- I mean, you know, it’s- It was a real ene­my. ISIS is a real, but how big a threat is ISIS real­ly to Amer­i­ca? Real­ly, I mean, we are still talk­ing about 9/11, well 9/11 is a long time ago.”

    Anoth­er one of the inter­vie­wees, for­mer com­mu­ni­ca­tions direc­tor for Leave.EU Andy Wig­more, com­pared the campaign’s own strat­e­gy to Hitler’s “very clever” pro­pa­gan­da machine.

    “In its pure mar­ket­ing sense, you can see the log­ic of what they were say­ing, why they were say­ing it, and how they pre­sent­ed things, and the imagery,” he said of the Nazis. “And look­ing at that now, in hind­sight, hav­ing been on the sharp end of this cam­paign, you think: crikey, this is not new, and it’s just … using the tools that you have at the time.”

    Cam­bridge Ana­lyt­i­ca said Oakes nev­er worked for the com­pa­ny or the Trump cam­paign and said he was instead “speak­ing in a per­son­al capac­i­ty about the his­tor­i­cal use of pro­pa­gan­da to an aca­d­e­m­ic he knew well from her work in the defence sphere,” accord­ing to a spokesper­son.

    ...

    ———-

    “Cam­bridge Ana­lyt­i­ca Founder Once Com­pared Trump To Hitler” by Willa Frej; The Huff­in­g­ton Post; 04/17/2018

    ““Hitler, got to be very care­ful about say­ing so, must nev­er prob­a­bly say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a prob­lem with the Jews at all, but the peo­ple didn’t like the Jews,” Oakes said. “So if the peo­ple… He could just use them to say… So he just lever­age an arti­fi­cial ene­my. Well that’s exact­ly what Trump did. He lever­aged a Mus­lim- I mean, you know, it’s- It was a real ene­my. ISIS is a real, but how big a threat is ISIS real­ly to Amer­i­ca? Real­ly, I mean, we are still talk­ing about 9/11, well 9/11 is a long time ago.””

    And that’s Nigel Oakes in his own words: he saw Trump’s sys­tem­at­ic fear mon­ger­ing about vir­tu­al­ly all Mus­lims as more or less the same cyn­i­cal tech­nique employed by Hitler.

    And when you look at the full quote pro­vid­ed to the UK par­lia­ment it sounds even worse because he’s fram­ing the use of these demo­niza­tion tech­niques as sim­ply a way to fire up “your group” (your tar­get base of sup­port­ers) by demo­niz­ing a dif­fer­ent group that you don’t expect to vote for your can­di­date:

    Clip 8 — Nigel Oakes: Nazi meth­ods of pro­pa­gan­da

    Emma Bri­ant: It didn’t mat­ter with the rest of what he’s [Don­ald Trump] say­ing, it didn’t mat­ter if he is alien­at­ing all of the lib­er­al women, actu­al­ly, and I think he was nev­er going to get them any­way.

    Nigel Oakes: That’s right

    Emma Bri­ant: You’ve got to think about what would res­onate with as many as pos­si­ble.

    Nigel Oakes: And often, as you right­ly say, it’s the things that res­onate, some­times to attack the oth­er group and know that you are going to lose them is going to rein­force and res­onate your group. Which is why, you know, Hitler, got to be very care­ful about say­ing so, must nev­er prob­a­bly say this, off the record, but of course Hitler attacked the Jews, because... He didn’t have a prob­lem with the Jews at all, but the peo­ple didn’t like the Jews. So if the peo­ple... He could just use them to say... So he just lever­age an arti­fi­cial ene­my. Well that’s exact­ly what Trump did. He lever­aged a Mus­lim — I mean, you know, it’s — It was a real ene­my. ISIS is a real, but how big a threat is ISIS real­ly to Amer­i­ca? Real­ly, I mean, we are still talk­ing about 9/11, well 9/11 is a long time ago.

    This inter­view was con­duct­ed by Dr Emma L Bri­ant, Uni­ver­si­ty of Essex, both for the upcom­ing book “What’s wrong with the Democ­rats? Media Bias, Inequal­i­ty and the rise of Don­ald Trump”, and for oth­er upcom­ing pub­li­ca­tions.

    “And often, as you right­ly say, it’s the things that res­onate, some­times to attack the oth­er group and know that you are going to lose them is going to rein­force and res­onate your group.”

    Attacking “the other group and know that you are going to lose them” in order to “reinforce and resonate your group.” That's how Nigel Oakes matter-of-factly framed the use of mass manipulation techniques designed to generate an emotional appeal to a target political demographic. An emotional appeal that happens to be based on demonizing a group of people that your target demographic already generally dislikes. In other words, find the existing areas of hatred and inflame them.

    And offer­ing ser­vices that will strate­gi­cal­ly inflame those pas­sions is some­thing Nigel Oakes has been open­ly offer­ing clients for decades. And that’s all part of why Nigel Oakes is described as the real force behind Cam­bridge Ana­lyt­i­ca.

    At the same time, let's not forget the previous reports about Cambridge Analytica whistle-blower Christopher Wylie and Wylie's characterization of Steve Bannon as Alexander Nix's real boss at Cambridge Analytica, despite Bannon technically serving as the company's vice president and secretary. So while Nigel Oakes is clearly a critically important figure behind Cambridge Analytica, who was really in charge of the Cambridge Analytica operation for the Trump team remains an open question. Although it was likely more of a Hitler-inspired group effort.

    Posted by Pterrafractyl | April 18, 2018, 3:28 pm
  10. Here's an ominous article about Palantir (as if there aren't ominous articles about Palantir) that highlights both the challenges the company faces in selling its surveillance services and its plans for overcoming those challenges: It turns out the services Palantir offers its clients are pretty labor-intensive, potentially involving a large number of on-site Palantir employees. One notable example is JPMorgan, which hired Palantir to monitor the bank's employees for the purpose of detecting miscreant behaviors. And this service involved as many as 120 “forward-deployed engineers” from Palantir working at JPMorgan, each one costing the bank as much as $3,000 a day. So from a price standpoint that's obviously going to be an issue, even for a financial giant like JPMorgan. Although at JPMorgan it sounds like the bigger issue was that the executives learned that their emails and activity were potentially caught up in Palantir's data dragnet too. But the overall cost of these “forward-deployed engineer” Palantir contractors is reportedly an issue for a number of other corporate clients that recently dropped Palantir, including Hershey Co., Coca-Cola, Nasdaq, American Express, and Home Depot.
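
    As the article below describes, one of the things those forward-deployed engineers built for JPMorgan was an alert that fired when an employee started badging into work later than usual. Here's a minimal, purely illustrative sketch of that kind of baseline-deviation rule (all numbers and thresholds are invented; this is just the general pattern, not Palantir's actual logic):

    ```python
    # Illustrative sketch of a baseline-deviation rule like the badge-in
    # alert described in the article below. All numbers are invented.
    from statistics import mean

    def badge_in_alert(history_minutes, recent_minutes, threshold=30):
        """Flag if the recent average badge-in time (minutes after midnight)
        drifts more than `threshold` minutes later than the baseline."""
        baseline = mean(history_minutes)
        recent = mean(recent_minutes)
        return recent - baseline > threshold

    history = [8 * 60 + 45] * 60    # historically badges in around 8:45
    recent = [9 * 60 + 30] * 5      # this week: around 9:30
    print(badge_in_alert(history, recent))  # True -> "further scrutiny"
    ```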

    So how is Palantir planning to address the labor-intensive nature of its services to attract more clients? Automation, of course. And that's already part of the new product Palantir is offering clients, called Foundry, which is already in use by Airbus SE and Merck KGaA. In other words, the automation of Palantir's corporate surveillance services is almost here, and that means a lot more corporate clients are probably going to be hiring Palantir. So, yeah, that's rather ominous.

    The article also includes a few more Palantir fun facts. For instance, while there are 2,000 engineers at the company, the Privacy and Civil Liberties Team consists of just 10 people.

    A second fun fact is about Peter Thiel. Apparently he's planning on moving to Los Angeles and starting up a right-wing media empire. Oh goodie.

    The article also contains a couple of fun facts in relation to the questions about Palantir and Cambridge Analytica after the revelation that a Palantir employee was working with Cambridge Analytica to develop its psychological profiling algorithms: First, Palantir claims that the company turned down the offers to work with Cambridge Analytica and that its employee, Alfredas Chmieliauskas, was working purely on his own. As the following article notes, that's the same explanation Palantir gave when it was caught planning an orchestrated disinformation campaign against Wikileaks and Anonymous. So the “lone employee” explanation appears to be a Palantir favorite.

    Additionally, the article notes that Palantir doesn't advertise its services and instead relies purely on word-of-mouth. And that's interesting in relation to the mystery of how it was that Sophie Schmidt, Google CEO Eric Schmidt's daughter and a former Cambridge Analytica intern, just happened to stop by Cambridge Analytica's London headquarters in mid-2013 to push the idea that the company should start working with Palantir. Now, it's important to recall that part of what made Sophie Schmidt's seemingly random visit in mid-2013 so curious is that Cambridge Analytica and Palantir had already started talking in early 2013. Still, it's noteworthy if Palantir relies only on word-of-mouth referrals and Sophie Schmidt appeared to provide exactly that kind of referral, seemingly randomly and spontaneously.

    So that's some of the new information we learn about Palantir in the following article. New information that's all ominous, of course:

    Bloomberg Busi­ness­week

    Peter Thiel’s data-min­ing com­pa­ny is using War on Ter­ror tools to track Amer­i­can cit­i­zens. The scary thing? Palan­tir is des­per­ate for new cus­tomers.

    By Peter Wald­man, Lizette Chap­man, and Jor­dan Robert­son
    April 19, 2018

    High above the Hud­son Riv­er in down­town Jer­sey City, a for­mer U.S. Secret Ser­vice agent named Peter Cav­ic­chia III ran spe­cial ops for JPMor­gan Chase & Co. His insid­er threat group—most large finan­cial insti­tu­tions have one—used com­put­er algo­rithms to mon­i­tor the bank’s employ­ees, osten­si­bly to pro­tect against per­fid­i­ous traders and oth­er mis­cre­ants.

    Aid­ed by as many as 120 “for­ward-deployed engi­neers” from the data min­ing com­pa­ny Palan­tir Tech­nolo­gies Inc., which JPMor­gan engaged in 2009, Cavicchia’s group vac­u­umed up emails and brows­er his­to­ries, GPS loca­tions from com­pa­ny-issued smart­phones, print­er and down­load activ­i­ty, and tran­scripts of dig­i­tal­ly record­ed phone con­ver­sa­tions. Palantir’s soft­ware aggre­gat­ed, searched, sort­ed, and ana­lyzed these records, sur­fac­ing key­words and pat­terns of behav­ior that Cavicchia’s team had flagged for poten­tial abuse of cor­po­rate assets. Palantir’s algo­rithm, for exam­ple, alert­ed the insid­er threat team when an employ­ee start­ed badg­ing into work lat­er than usu­al, a sign of poten­tial dis­gruntle­ment. That would trig­ger fur­ther scruti­ny and pos­si­bly phys­i­cal sur­veil­lance after hours by bank secu­ri­ty per­son­nel.

    Over time, how­ev­er, Cav­ic­chia him­self went rogue. For­mer JPMor­gan col­leagues describe the envi­ron­ment as Wall Street meets Apoc­a­lypse Now, with Cav­ic­chia as Colonel Kurtz, ensconced upriv­er in his office suite eight floors above the rest of the bank’s secu­ri­ty team. Peo­ple in the depart­ment were shocked that no one from the bank or Palan­tir set any real lim­its. They dark­ly joked that Cav­ic­chia was lis­ten­ing to their calls, read­ing their emails, watch­ing them come and go. Some plant­ed fake infor­ma­tion in their com­mu­ni­ca­tions to see if Cav­ic­chia would men­tion it at meet­ings, which he did.

    It all end­ed when the bank’s senior exec­u­tives learned that they, too, were being watched, and what began as a promis­ing mar­riage of mas­ters of big data and glob­al finance descend­ed into a spy­ing scan­dal. The mis­ad­ven­ture, which has nev­er been report­ed, also marked an omi­nous turn for Palan­tir, one of the most rich­ly val­ued star­tups in Sil­i­con Val­ley. An intel­li­gence plat­form designed for the glob­al War on Ter­ror was weaponized against ordi­nary Amer­i­cans at home.

    Found­ed in 2004 by Peter Thiel and some fel­low Pay­Pal alum­ni, Palan­tir cut its teeth work­ing for the Pen­ta­gon and the CIA in Afghanistan and Iraq. The company’s engi­neers and prod­ucts don’t do any spy­ing them­selves; they’re more like a spy’s brain, col­lect­ing and ana­lyz­ing infor­ma­tion that’s fed in from the hands, eyes, nose, and ears. The soft­ware combs through dis­parate data sources—financial doc­u­ments, air­line reser­va­tions, cell­phone records, social media postings—and search­es for con­nec­tions that human ana­lysts might miss. It then presents the link­ages in col­or­ful, easy-to-inter­pret graph­ics that look like spi­der webs. U.S. spies and spe­cial forces loved it imme­di­ate­ly; they deployed Palan­tir to syn­the­size and sort the bliz­zard of bat­tle­field intel­li­gence. It helped plan­ners avoid road­side bombs, track insur­gents for assas­si­na­tion, even hunt down Osama bin Laden. The mil­i­tary suc­cess led to fed­er­al con­tracts on the civil­ian side. The U.S. Depart­ment of Health and Human Ser­vices uses Palan­tir to detect Medicare fraud. The FBI uses it in crim­i­nal probes. The Depart­ment of Home­land Secu­ri­ty deploys it to screen air trav­el­ers and keep tabs on immi­grants.

    Police and sheriff’s depart­ments in New York, New Orleans, Chica­go, and Los Ange­les have also used it, fre­quent­ly ensnar­ing in the dig­i­tal drag­net peo­ple who aren’t sus­pect­ed of com­mit­ting any crime. Peo­ple and objects pop up on the Palan­tir screen inside box­es con­nect­ed to oth­er box­es by radi­at­ing lines labeled with the rela­tion­ship: “Col­league of,” “Lives with,” “Oper­a­tor of [cell num­ber],” “Own­er of [vehi­cle],” “Sib­ling of,” even “Lover of.” If the author­i­ties have a pic­ture, the rest is easy. Tap­ping data­bas­es of driver’s license and ID pho­tos, law enforce­ment agen­cies can now iden­ti­fy more than half the pop­u­la­tion of U.S. adults.

    JPMor­gan was effec­tive­ly Palantir’s R&D lab and test bed for a for­ay into the finan­cial sec­tor, via a prod­uct called Metrop­o­lis. The two com­pa­nies made an odd cou­ple. Palantir’s soft­ware engi­neers showed up at the bank on skate­boards. Neck­ties and hair­cuts were too much to ask, but JPMor­gan drew the line at T‑shirts. The pro­gram­mers had to agree to wear shirts with col­lars, tucked in when pos­si­ble.

    As Metrop­o­lis was installed and refined, JPMor­gan made an equi­ty invest­ment in Palan­tir and induct­ed the com­pa­ny into its Hall of Inno­va­tion, while its exec­u­tives raved about Palan­tir in the press. The soft­ware turned “data land­fills into gold mines,” Guy Chiarel­lo, who was then JPMorgan’s chief infor­ma­tion offi­cer, told Bloomberg Busi­ness­week in 2011.

    Cav­ic­chia was in charge of foren­sic inves­ti­ga­tions at the bank. Through Palan­tir, he gained admin­is­tra­tive access to a full range of cor­po­rate secu­ri­ty data­bas­es that had pre­vi­ous­ly required sep­a­rate autho­riza­tions and a spe­cif­ic busi­ness jus­ti­fi­ca­tion to use. He had unprece­dent­ed access to every­thing, all at once, all the time, on one ana­lyt­ic plat­form. He was a one-man Nation­al Secu­ri­ty Agency, sur­round­ed by the Palan­tir engi­neers, each one cost­ing the bank as much as $3,000 a day.

    Senior inves­ti­ga­tors stum­bled onto the full extent of the spy­ing by acci­dent. In May 2013 the bank’s lead­er­ship ordered an inter­nal probe into who had leaked a doc­u­ment to the New York Times about a fed­er­al inves­ti­ga­tion of JPMor­gan for pos­si­bly manip­u­lat­ing U.S. elec­tric­i­ty mar­kets. Evi­dence indi­cat­ed the leak­er could have been Frank Bisig­nano, who’d recent­ly resigned as JPMorgan’s co-chief oper­at­ing offi­cer to become CEO of First Data Corp., the big pay­ments proces­sor. Cav­ic­chia had used Metrop­o­lis to gain access to emails about the leak investigation—some writ­ten by top executives—and the bank believed he shared the con­tents of those emails and oth­er com­mu­ni­ca­tions with Bisig­nano after Bisig­nano had left the bank. (Inside JPMor­gan, Bisig­nano was con­sid­ered Cavicchia’s patron—a senior exec­u­tive who pro­tect­ed and pro­mot­ed him.)

    JPMor­gan offi­cials debat­ed whether to file a sus­pi­cious activ­i­ty report with fed­er­al reg­u­la­tors about the inter­nal secu­ri­ty breach, as required by law when­ev­er banks sus­pect reg­u­la­to­ry vio­la­tions. They decid­ed not to—a con­tro­ver­sial deci­sion inter­nal­ly, accord­ing to mul­ti­ple sources with the bank. Cav­ic­chia nego­ti­at­ed a sev­er­ance agree­ment and was forced to resign. He joined Bisig­nano at First Data, where he’s now a senior vice pres­i­dent. Chiarel­lo also went to First Data, as pres­i­dent. After their depar­tures, JPMor­gan dras­ti­cal­ly cur­tailed its Palan­tir use, in part because “it nev­er lived up to its promised poten­tial,” says one JPMor­gan exec­u­tive who insist­ed on anonymi­ty to dis­cuss the deci­sion.

    The bank, First Data, and Bisig­nano, Chiarel­lo, and Cav­ic­chia didn’t respond to sep­a­rate­ly emailed ques­tions for this arti­cle. Palan­tir, in a state­ment respond­ing to ques­tions about how JPMor­gan and oth­ers have used its soft­ware, declined to answer spe­cif­ic ques­tions. “We are aware that pow­er­ful tech­nol­o­gy can be abused and we spend a lot of time and ener­gy mak­ing sure our prod­ucts are used for the forces of good,” the state­ment said.

    Much depends on how the com­pa­ny choos­es to define good. In March a for­mer com­put­er engi­neer for Cam­bridge Ana­lyt­i­ca, the polit­i­cal con­sult­ing firm that worked for Don­ald Trump’s 2016 pres­i­den­tial cam­paign, tes­ti­fied in the British Par­lia­ment that a Palan­tir employ­ee had helped Cam­bridge Ana­lyt­i­ca use the per­son­al data of up to 87 mil­lion Face­book users to devel­op psy­cho­graph­ic pro­files of indi­vid­ual vot­ers. Palan­tir said it has a strict pol­i­cy against work­ing on polit­i­cal issues, includ­ing cam­paigns, and showed Bloomberg emails in which it turned down Cambridge’s request to work with Palan­tir on mul­ti­ple occa­sions. The employ­ee, Palan­tir said, worked with Cam­bridge Ana­lyt­i­ca on his own time. Still, there was no mis­tak­ing the impli­ca­tions of the inci­dent: All human rela­tions are a mat­ter of record, ready to be revealed by a clever algo­rithm. Every­one is a spi­der­gram now.

    Thiel, who turned 50 in Octo­ber, long rev­eled as the lib­er­tar­i­an black sheep in left-lean­ing Sil­i­con Val­ley. He con­tributed $1.25 mil­lion to Trump’s pres­i­den­tial vic­to­ry, spoke at the Repub­li­can con­ven­tion, and has dined with Trump at the White House. But Thiel has told friends he’s had enough of the Bay Area’s “mono­cul­tur­al” lib­er­al­ism. He’s ditch­ing his long­time base in San Fran­cis­co and mov­ing his per­son­al invest­ment firms this year to Los Ange­les, where he plans to estab­lish his next project, a con­ser­v­a­tive media empire.

    As Thiel’s wealth has grown, he’s got­ten more stri­dent. In a 2009 essay for the Cato Insti­tute, he railed against tax­es, ­gov­ern­ment, women, poor peo­ple, and society’s acqui­es­cence to the inevitabil­i­ty of death. (Thiel doesn’t accept death as inex­orable.) He wrote that he’d reached some rad­i­cal con­clu­sions: “Most impor­tant­ly, I no longer believe that free­dom and democ­ra­cy are com­pat­i­ble.” The 1920s was the last time one could feel “gen­uine­ly opti­mistic” about Amer­i­can democ­ra­cy, he said; since then, “the vast increase in wel­fare ben­e­fi­cia­ries and the exten­sion of the fran­chise to women—two con­stituen­cies that are noto­ri­ous­ly tough for libertarians—have ren­dered the notion of ‘cap­i­tal­ist democ­ra­cy’ into an oxy­moron.”

    Thiel went into tech after miss­ing a prized Supreme Court clerk­ship fol­low­ing his grad­u­a­tion from Stan­ford Law School. He co-found­ed Pay­Pal and then par­layed his win­nings from its 2002 sale to EBay Inc. into a career in ven­ture invest­ing. He made an ear­ly bet on Face­book Inc. (where he’s still on the board), which accounts for most of his $3.3 bil­lion for­tune, as esti­mat­ed by Bloomberg, and launched his career as a backer of big ideas—things like pri­vate space trav­el (through an invest­ment in SpaceX), hotel alter­na­tives (Airbnb), and float­ing island nations (the Seast­eading Insti­tute).

    He start­ed Palantir—named after the omni­scient crys­tal balls in J.R.R. Tolkien’s Lord of the Rings trilogy—three years after the attacks of Sept. 11, 2001. The CIA’s invest­ment arm, In-Q-Tel, was a seed investor. For the role of chief exec­u­tive offi­cer, he chose an old law school friend and self-described neo-Marx­ist, Alex Karp. Thiel told Bloomberg in 2011 that civ­il lib­er­tar­i­ans ought to embrace Palan­tir, because data min­ing is less repres­sive than the “crazy abus­es and dra­con­ian poli­cies” pro­posed after Sept. 11. The best way to pre­vent anoth­er cat­a­stroph­ic attack with­out becom­ing a police state, he argued, was to give the gov­ern­ment the best sur­veil­lance tools pos­si­ble, while build­ing in safe­guards against their abuse.

    Leg­end has it that Stephen Cohen, one of Thiel’s co-founders, pro­grammed the ini­tial pro­to­type for Palantir’s soft­ware in two weeks. It took years, how­ev­er, to coax cus­tomers away from the long­time leader in the intel­li­gence ana­lyt­ics mar­ket, a soft­ware com­pa­ny called I2 Inc.

    In one adven­ture miss­ing from the glow­ing accounts of Palantir’s ear­ly rise, I2 accused Palan­tir of mis­ap­pro­pri­at­ing its intel­lec­tu­al prop­er­ty through a Flori­da shell com­pa­ny reg­is­tered to the fam­i­ly of a Palan­tir exec­u­tive. A com­pa­ny claim­ing to be a pri­vate eye firm had been licens­ing I2 soft­ware and devel­op­ment tools and spir­it­ing them to Palan­tir for more than four years. I2 said the cutout was reg­is­tered to the fam­i­ly of Shyam Sankar, Palantir’s direc­tor of busi­ness devel­op­ment.

    I2 sued Palan­tir in fed­er­al court, alleg­ing fraud, con­spir­a­cy, and copy­right infringe­ment. In its legal response, Palan­tir argued it had the right to appro­pri­ate I2’s code for the greater good. “What’s at stake here is the abil­i­ty of crit­i­cal nation­al secu­ri­ty, defense and intel­li­gence agen­cies to access their own data and use it inter­op­er­a­bly in whichev­er plat­form they choose in order to most effec­tive­ly pro­tect the cit­i­zen­ry,” Palan­tir said in its motion to dis­miss I2’s suit.

    The motion was denied. Palan­tir agreed to pay I2 about $10 mil­lion to set­tle the suit. I2 was sold to IBM in 2011.

    Sankar, Palan­tir employ­ee No.13 and now one of the company’s top exec­u­tives, also showed up in anoth­er Palan­tir scan­dal: the company’s 2010 pro­pos­al for the U.S. Cham­ber of Com­merce to run a secret sab­o­tage cam­paign against the group’s lib­er­al oppo­nents. Hacked emails released by the group Anony­mous indi­cat­ed that Palan­tir and two oth­er defense con­trac­tors pitched out­side lawyers for the orga­ni­za­tion on a plan to snoop on the fam­i­lies of pro­gres­sive activists, cre­ate fake iden­ti­ties to infil­trate left-lean­ing groups, scrape social media with bots, and plant false infor­ma­tion with lib­er­al groups to sub­se­quent­ly dis­cred­it them.

    After the emails emerged in the press, Palan­tir offered an expla­na­tion sim­i­lar to the one it pro­vid­ed in March for its U.K.-based employee’s assis­tance to Cam­bridge Ana­lyt­i­ca: It was the work of a sin­gle rogue employ­ee. The com­pa­ny nev­er explained Sankar’s involve­ment. Karp issued a pub­lic apol­o­gy and said he and Palan­tir were deeply com­mit­ted to pro­gres­sive caus­es. Palan­tir set up an advi­so­ry pan­el on pri­va­cy and civ­il lib­er­ties, head­ed by a for­mer CIA attor­ney, and beefed up an engi­neer­ing group it calls the Pri­va­cy and Civ­il Lib­er­ties Team. The com­pa­ny now has about 10 PCL engi­neers on call to help vet clients’ requests for access to data troves and pitch in with per­ti­nent thoughts about law, moral­i­ty, and machines.

    Dur­ing its 14 years in start­up mode, Palan­tir has cul­ti­vat­ed a mys­tique as a haven for bril­liant engi­neers who want to solve big prob­lems such as ter­ror­ism and human traf­fick­ing, unfet­tered by pedes­tri­an con­cerns such as mak­ing mon­ey. Palan­tir exec­u­tives boast of not employ­ing a sin­gle sales­person, rely­ing instead on word-of-mouth refer­rals.

    The company’s ear­ly data min­ing daz­zled ven­ture investors, who val­ued it at $20 bil­lion in 2015. But Palan­tir has nev­er report­ed a prof­it. It oper­ates less like a con­ven­tion­al soft­ware com­pa­ny than like a con­sul­tan­cy, deploy­ing rough­ly half its 2,000 engi­neers to client sites. That works at well-fund­ed gov­ern­ment spy agen­cies seek­ing spe­cial­ized appli­ca­tions but has pro­duced mixed results with cor­po­rate clients. Palantir’s high instal­la­tion and main­te­nance costs repelled cus­tomers such as Her­shey Co., which trum­pet­ed a Palan­tir part­ner­ship in 2015 only to walk away two years lat­er. Coca-Cola, Nas­daq, Amer­i­can Express, and Home Depot have also dumped Palan­tir.

    Karp rec­og­nized the high-touch mod­el was prob­lem­at­ic ear­ly in the company’s push into the cor­po­rate mar­ket, but solu­tions have been elu­sive. “We didn’t want to be a ser­vices com­pa­ny. We want­ed to do some­thing that was cost-effi­cient,” he con­fessed at a Euro­pean con­fer­ence in 2010, in one of sev­er­al unguard­ed com­ments cap­tured in videos post­ed online. “Of course, what we didn’t rec­og­nize was that this would be much, much hard­er than we real­ized.”

    Palantir’s newest prod­uct, Foundry, aims to final­ly break through the prof­itabil­i­ty bar­ri­er with more automa­tion and less need for on-site engi­neers. Air­bus SE, the big Euro­pean plane mak­er, uses Foundry to crunch air­line data about spe­cif­ic onboard com­po­nents to track usage and main­te­nance and antic­i­pate repair prob­lems. Mer­ck KGaA, the phar­ma­ceu­ti­cal giant, has a long-term Palan­tir con­tract to use Foundry in drug devel­op­ment and sup­ply chain man­age­ment.

    Deep­er adop­tion of Foundry in the com­mer­cial mar­ket is cru­cial to Palantir’s hopes of a big pay­day. Some investors are weary and have already writ­ten down their Palan­tir stakes. Mor­gan Stan­ley now val­ues the com­pa­ny at $6 bil­lion. Fred Alger Man­age­ment Inc., which has owned stock since at least 2006, reval­ued Palan­tir in Decem­ber at about $10 bil­lion, accord­ing to Bloomberg Hold­ings. One frus­trat­ed investor, Marc Abramowitz, recent­ly won a court order for Palan­tir to show him its books, as part of a law­suit he filed alleg­ing the com­pa­ny sab­o­taged his attempt to find a buy­er for the Palan­tir shares he has owned for more than a decade.

    As shown in the pri­va­cy breach­es at Face­book and Cam­bridge Analytica—with Thiel and Palan­tir linked to both sides of the equation—the pres­sure to mon­e­tize data at tech com­pa­nies is cease­less. Face­book didn’t grow from a web­site con­nect­ing col­lege kids into a pur­vey­or of user pro­files and predilec­tions worth $478 bil­lion by walling off per­son­al data. Palan­tir says its Pri­va­cy and Civ­il Lib­er­ties Team watch­es out for inap­pro­pri­ate data demands, but it con­sists of just 10 peo­ple in a com­pa­ny of 2,000 engi­neers. No one said no to JPMor­gan, or to whomev­er at Palan­tir vol­un­teered to help Cam­bridge Analytica—or to anoth­er orga­ni­za­tion keen­ly inter­est­ed in state-of-the-art data sci­ence, the Los Ange­les Police Depart­ment.

    Palan­tir began work with the LAPD in 2009. The impe­tus was fed­er­al fund­ing. After sev­er­al Sept. 11 post­mortems called for more intel­li­gence shar­ing at all lev­els of law enforce­ment, mon­ey start­ed flow­ing to Palan­tir to help build data inte­gra­tion sys­tems for so-called fusion cen­ters, start­ing in L.A. There are now more than 1,300 trained Palan­tir users at more than a half-dozen law enforce­ment agen­cies in South­ern Cal­i­for­nia, includ­ing local police and sheriff’s depart­ments and the Bureau of Alco­hol, Tobac­co, Firearms and Explo­sives.

    The LAPD uses Palantir’s Gotham prod­uct for Oper­a­tion Laser, a pro­gram to iden­ti­fy and deter peo­ple like­ly to com­mit crimes. Infor­ma­tion from rap sheets, parole reports, police inter­views, and oth­er sources is fed into the sys­tem to gen­er­ate a list of peo­ple the depart­ment defines as chron­ic offend­ers, says Craig Uchi­da, whose con­sult­ing firm, Jus­tice & Secu­ri­ty Strate­gies Inc., designed the Laser sys­tem. The list is dis­trib­uted to patrol­men, with orders to mon­i­tor and stop the pre-crime sus­pects as often as pos­si­ble, using excus­es such as jay­walk­ing or fix-it tick­ets. At each con­tact, offi­cers fill out a field inter­view card with names, address­es, vehi­cles, phys­i­cal descrip­tions, any neigh­bor­hood intel­li­gence the per­son offers, and the officer’s own obser­va­tions on the sub­ject.

    The cards are dig­i­tized in the Palan­tir sys­tem, adding to a con­stant­ly expand­ing sur­veil­lance data­base that’s ful­ly acces­si­ble with­out a war­rant. Tomorrow’s data points are auto­mat­i­cal­ly linked to today’s, with the goal of gen­er­at­ing inves­tiga­tive leads. Say a chron­ic offend­er is tagged as a pas­sen­ger in a car that’s pulled over for a bro­ken tail­light. Two years lat­er, that same car is spot­ted by an auto­mat­ic license plate read­er near a crime scene 200 miles across the state. As soon as the plate hits the sys­tem, Palan­tir alerts the offi­cer who made the orig­i­nal stop that a car once linked to the chron­ic offend­er was spot­ted near a crime scene.

    The plat­form is sup­ple­ment­ed with what soci­ol­o­gist Sarah Brayne calls the sec­ondary sur­veil­lance net­work: the web of who is relat­ed to, friends with, or sleep­ing with whom. One woman in the sys­tem, for exam­ple, who wasn’t sus­pect­ed of com­mit­ting any crime, was iden­ti­fied as hav­ing mul­ti­ple boyfriends with­in the same net­work of asso­ciates, says Brayne, who spent two and a half years embed­ded with the LAPD while research­ing her dis­ser­ta­tion on big-data polic­ing at Prince­ton Uni­ver­si­ty and who’s now an asso­ciate pro­fes­sor at the Uni­ver­si­ty of Texas at Austin. “Any­body who logs into the sys­tem can see all these inti­mate ties,” she says. To widen the scope of pos­si­ble con­nec­tions, she adds, the LAPD has also explored pur­chas­ing pri­vate data, includ­ing social media, fore­clo­sure, and toll road infor­ma­tion, cam­era feeds from hos­pi­tals, park­ing lots, and uni­ver­si­ties, and deliv­ery infor­ma­tion from Papa John’s Inter­na­tion­al Inc. and Piz­za Hut LLC.

    The LAPD declined to com­ment for this sto­ry. Palan­tir sent Bloomberg a state­ment about its work with law enforce­ment: “Our [for­ward-deployed engi­neers] and [pri­va­cy and civ­il lib­er­ties] engi­neers work with the law enforce­ment cus­tomers (includ­ing LAPD) to ensure that the imple­men­ta­tion of our soft­ware and inte­gra­tion of their source sys­tems with the soft­ware is con­sis­tent with the Department’s legal and pol­i­cy oblig­a­tions, as well as pri­va­cy and civ­il lib­er­ties con­sid­er­a­tions that may not cur­rent­ly be leg­is­lat­ed but are on the hori­zon. We as a com­pa­ny deter­mine the types of engage­ments and gen­er­al appli­ca­tions of our soft­ware with respect to those over­ar­ch­ing con­sid­er­a­tions. Police Agen­cies have inter­nal respon­si­bil­i­ty for ensur­ing that their infor­ma­tion sys­tems are used in a man­ner con­sis­tent with their poli­cies and pro­ce­dures.”

    Oper­a­tion Laser has made L.A. cops more surgical—and, accord­ing to com­mu­ni­ty activists, unre­lent­ing. Once tar­gets are enmeshed in a spi­der­gram, they’re stuck.

    ...

    Palan­tir is twice the age most star­tups are when they cash out in a sale or ini­tial pub­lic offer­ing. The com­pa­ny needs to fig­ure out how to be reward­ed on Wall Street with­out creep­ing out Main Street. It might not be pos­si­ble. For all of Palantir’s pro­fessed con­cern for indi­vid­u­als’ pri­va­cy, the sin­gle most impor­tant safe­guard against abuse is the one it’s try­ing des­per­ate­ly to reduce through automa­tion: human judg­ment.

    As Palan­tir tries to court cor­po­rate cus­tomers as a more con­ven­tion­al soft­ware com­pa­ny, few­er for­ward-deployed engi­neers will mean few­er human deci­sions. Sen­si­tive ques­tions, such as how deeply to pry into people’s lives, will be answered increas­ing­ly by arti­fi­cial intel­li­gence and machine-learn­ing algo­rithms. The small team of Pri­va­cy and Civ­il Lib­er­ties engi­neers could find them­selves even less influ­en­tial, as the urge for omnipo­tence among clients over­whelms any self-imposed restraints.

    Com­put­ers don’t ask moral ques­tions; peo­ple do, says John Grant, one of Palantir’s top PCL engi­neers and a force­ful advo­cate for manda­to­ry ethics edu­ca­tion for engi­neers. “At a com­pa­ny like ours with mil­lions of lines of code, every tiny deci­sion could have huge impli­ca­tions,” Grant told a pri­va­cy con­fer­ence in Berke­ley last year.

    JPMorgan’s expe­ri­ence remains instruc­tive. “The world changed when it became clear every­one could be tar­get­ed using Palan­tir,” says a for­mer JPMor­gan cyber expert who worked with Cav­ic­chia at one point on the insid­er threat team. “Nefar­i­ous ideas became triv­ial to imple­ment; everyone’s a sus­pect, so we mon­i­tored every­thing. It was a pret­ty ter­ri­ble feel­ing.”
    ———–

    “Peter Thiel’s data-min­ing com­pa­ny is using War on Ter­ror tools to track Amer­i­can cit­i­zens. The scary thing? Palan­tir is des­per­ate for new cus­tomers.” by Peter Wald­man, Lizette Chap­man, and Jor­dan Robert­son; Bloomberg Busi­ness­week; 04/19/2018

    “High above the Hud­son Riv­er in down­town Jer­sey City, a for­mer U.S. Secret Ser­vice agent named Peter Cav­ic­chia III ran spe­cial ops for JPMor­gan Chase & Co. His insid­er threat group—most large finan­cial insti­tu­tions have one—used com­put­er algo­rithms to mon­i­tor the bank’s employ­ees, osten­si­bly to pro­tect against per­fid­i­ous traders and oth­er mis­cre­ants.”

    Insider threat services. That appears to be one of the primary services Palantir is trying to offer to corporate clients. It's the kind of service that gives Palantir access to almost everything employees are doing in a company and basically turns it into a Big Brother-for-hire entity. And when JPMorgan hired Palantir to provide these services, the bank ended up dropping the firm after executives learned that it was too Big Brother-ish and was watching over the executives too:

    ...
    Aid­ed by as many as 120 “for­ward-deployed engi­neers” from the data min­ing com­pa­ny Palan­tir Tech­nolo­gies Inc., which JPMor­gan engaged in 2009, Cavicchia’s group vac­u­umed up emails and brows­er his­to­ries, GPS loca­tions from com­pa­ny-issued smart­phones, print­er and down­load activ­i­ty, and tran­scripts of dig­i­tal­ly record­ed phone con­ver­sa­tions. Palantir’s soft­ware aggre­gat­ed, searched, sort­ed, and ana­lyzed these records, sur­fac­ing key­words and pat­terns of behav­ior that Cavicchia’s team had flagged for poten­tial abuse of cor­po­rate assets. Palantir’s algo­rithm, for exam­ple, alert­ed the insid­er threat team when an employ­ee start­ed badg­ing into work lat­er than usu­al, a sign of poten­tial dis­gruntle­ment. That would trig­ger fur­ther scruti­ny and pos­si­bly phys­i­cal sur­veil­lance after hours by bank secu­ri­ty per­son­nel.

    Over time, how­ev­er, Cav­ic­chia him­self went rogue. For­mer JPMor­gan col­leagues describe the envi­ron­ment as Wall Street meets Apoc­a­lypse Now, with Cav­ic­chia as Colonel Kurtz, ensconced upriv­er in his office suite eight floors above the rest of the bank’s secu­ri­ty team. Peo­ple in the depart­ment were shocked that no one from the bank or Palan­tir set any real lim­its. They dark­ly joked that Cav­ic­chia was lis­ten­ing to their calls, read­ing their emails, watch­ing them come and go. Some plant­ed fake infor­ma­tion in their com­mu­ni­ca­tions to see if Cav­ic­chia would men­tion it at meet­ings, which he did.

    It all end­ed when the bank’s senior exec­u­tives learned that they, too, were being watched, and what began as a promis­ing mar­riage of mas­ters of big data and glob­al finance descend­ed into a spy­ing scan­dal. The mis­ad­ven­ture, which has nev­er been report­ed, also marked an omi­nous turn for Palan­tir, one of the most rich­ly val­ued star­tups in Sil­i­con Val­ley. An intel­li­gence plat­form designed for the glob­al War on Ter­ror was weaponized against ordi­nary Amer­i­cans at home.
    ...
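
    That badge-in flagging is, at bottom, garden-variety outlier detection. Here's a minimal sketch of the kind of rule the excerpt describes (purely illustrative and assumed, a simple z-score test; Palantir's actual algorithm isn't public):

    # Illustrative sketch only, not Palantir's algorithm: flag an employee
    # whose badge-in time today is far later than their own historical
    # average, using a simple z-score rule.
    from statistics import mean, stdev

    def badge_in_anomaly(history, today, threshold=2.0):
        """history: past badge-in times, in minutes after midnight."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today > mu
        return (today - mu) / sigma > threshold

    # Usually badges in around 8:30 am (510 min); today at 10:15 am (615 min).
    print(badge_in_anomaly([505, 512, 508, 515, 510], 615))  # True -> flagged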

    And this project at JP Mor­gan was basi­cal­ly the test lab for a new ser­vice Palan­tir is try­ing to offer the finan­cial sec­tor: Metrop­o­lis:

    ...
    JPMor­gan was effec­tive­ly Palantir’s R&D lab and test bed for a for­ay into the finan­cial sec­tor, via a prod­uct called Metrop­o­lis. The two com­pa­nies made an odd cou­ple. Palantir’s soft­ware engi­neers showed up at the bank on skate­boards. Neck­ties and hair­cuts were too much to ask, but JPMor­gan drew the line at T‑shirts. The pro­gram­mers had to agree to wear shirts with col­lars, tucked in when pos­si­ble.

    As Metrop­o­lis was installed and refined, JPMor­gan made an equi­ty invest­ment in Palan­tir and induct­ed the com­pa­ny into its Hall of Inno­va­tion, while its exec­u­tives raved about Palan­tir in the press. The soft­ware turned “data land­fills into gold mines,” Guy Chiarel­lo, who was then JPMorgan’s chief infor­ma­tion offi­cer, told Bloomberg Busi­ness­week in 2011.

    Cav­ic­chia was in charge of foren­sic inves­ti­ga­tions at the bank. Through Palan­tir, he gained admin­is­tra­tive access to a full range of cor­po­rate secu­ri­ty data­bas­es that had pre­vi­ous­ly required sep­a­rate autho­riza­tions and a spe­cif­ic busi­ness jus­ti­fi­ca­tion to use. He had unprece­dent­ed access to every­thing, all at once, all the time, on one ana­lyt­ic plat­form. He was a one-man Nation­al Secu­ri­ty Agency, sur­round­ed by the Palan­tir engi­neers, each one cost­ing the bank as much as $3,000 a day.
    ...

    And through this JP Morgan test bed for Metropolis, Peter Cavicchia's insider threat group was given access to "a full range of corporate security databases that had previously required separate authorizations and a specific business justification to use", along with a team of Palantir engineers to help him use that data. This is the business model Palantir was trying to test so it could sell it to other banks: using Palantir to give bank employees unprecedented access to the bank's internal data (which, of course, means Palantir likely has access to that data too):

    ...
    Cav­ic­chia was in charge of foren­sic inves­ti­ga­tions at the bank. Through Palan­tir, he gained admin­is­tra­tive access to a full range of cor­po­rate secu­ri­ty data­bas­es that had pre­vi­ous­ly required sep­a­rate autho­riza­tions and a spe­cif­ic busi­ness jus­ti­fi­ca­tion to use. He had unprece­dent­ed access to every­thing, all at once, all the time, on one ana­lyt­ic plat­form. He was a one-man Nation­al Secu­ri­ty Agency, sur­round­ed by the Palan­tir engi­neers, each one cost­ing the bank as much as $3,000 a day.
    ...

    But Palantir's test bed at JP Morgan ultimately turned into a failed experiment when JP Morgan's leadership learned that Cavicchia had apparently used his unprecedented access to internal documents to spy on JP Morgan executives who were investigating a leak to the New York Times. The leaker appeared to be an executive who had just left the company, Frank Bisignano, who also happened to be Cavicchia's patron at the company before his departure. And the leak investigation appeared to show that Cavicchia accessed executive emails about the leak and passed them along to Bisignano. In other words, JP Morgan learned that the guy they made their corporate Big Brother abused that power (shocker):

    ...
    Senior inves­ti­ga­tors stum­bled onto the full extent of the spy­ing by acci­dent. In May 2013 the bank’s lead­er­ship ordered an inter­nal probe into who had leaked a doc­u­ment to the New York Times about a fed­er­al inves­ti­ga­tion of JPMor­gan for pos­si­bly manip­u­lat­ing U.S. elec­tric­i­ty mar­kets. Evi­dence indi­cat­ed the leak­er could have been Frank Bisig­nano, who’d recent­ly resigned as JPMorgan’s co-chief oper­at­ing offi­cer to become CEO of First Data Corp., the big pay­ments proces­sor. Cav­ic­chia had used Metrop­o­lis to gain access to emails about the leak investigation—some writ­ten by top executives—and the bank believed he shared the con­tents of those emails and oth­er com­mu­ni­ca­tions with Bisig­nano after Bisig­nano had left the bank. (Inside JPMor­gan, Bisig­nano was con­sid­ered Cavicchia’s patron—a senior exec­u­tive who pro­tect­ed and pro­mot­ed him.)

    JPMor­gan offi­cials debat­ed whether to file a sus­pi­cious activ­i­ty report with fed­er­al reg­u­la­tors about the inter­nal secu­ri­ty breach, as required by law when­ev­er banks sus­pect reg­u­la­to­ry vio­la­tions. They decid­ed not to—a con­tro­ver­sial deci­sion inter­nal­ly, accord­ing to mul­ti­ple sources with the bank. Cav­ic­chia nego­ti­at­ed a sev­er­ance agree­ment and was forced to resign. He joined Bisig­nano at First Data, where he’s now a senior vice pres­i­dent. Chiarel­lo also went to First Data, as pres­i­dent. After their depar­tures, JPMor­gan dras­ti­cal­ly cur­tailed its Palan­tir use, in part because “it nev­er lived up to its promised poten­tial,” says one JPMor­gan exec­u­tive who insist­ed on anonymi­ty to dis­cuss the deci­sion.
    ...

    Thus ended Palantir's test run of Metropolis, highlighting the fact that the extensive manpower associated with Palantir's services isn't the only factor that might keep corporate clients away. The way Palantir's services give individuals unprecedented access to a company's internal documents might also drive clients away. After all, threat assessment groups are intended to mitigate risk. Not exacerbate it.

    But the cost of all those on-site Palantir engineers is still an obstacle to wider adoption of Palantir's services. As the article notes, roughly half of Palantir's 2,000 engineers are working on client sites:

    ...
    The company’s ear­ly data min­ing daz­zled ven­ture investors, who val­ued it at $20 bil­lion in 2015. But Palan­tir has nev­er report­ed a prof­it. It oper­ates less like a con­ven­tion­al soft­ware com­pa­ny than like a con­sul­tan­cy, deploy­ing rough­ly half its 2,000 engi­neers to client sites. That works at well-fund­ed gov­ern­ment spy agen­cies seek­ing spe­cial­ized appli­ca­tions but has pro­duced mixed results with cor­po­rate clients. Palantir’s high instal­la­tion and main­te­nance costs repelled cus­tomers such as Her­shey Co., which trum­pet­ed a Palan­tir part­ner­ship in 2015 only to walk away two years lat­er. Coca-Cola, Nas­daq, Amer­i­can Express, and Home Depot have also dumped Palan­tir.
    ...

    And that’s what Palan­tir’s newest prod­uct, Foundry, is designed to address. By increas­ing­ly automat­ing the cor­po­rate sur­veil­lance process:

    ...
    Palantir’s newest prod­uct, Foundry, aims to final­ly break through the prof­itabil­i­ty bar­ri­er with more automa­tion and less need for on-site engi­neers. Air­bus SE, the big Euro­pean plane mak­er, uses Foundry to crunch air­line data about spe­cif­ic onboard com­po­nents to track usage and main­te­nance and antic­i­pate repair prob­lems. Mer­ck KGaA, the phar­ma­ceu­ti­cal giant, has a long-term Palan­tir con­tract to use Foundry in drug devel­op­ment and sup­ply chain man­age­ment.

    Deep­er adop­tion of Foundry in the com­mer­cial mar­ket is cru­cial to Palantir’s hopes of a big pay­day. Some investors are weary and have already writ­ten down their Palan­tir stakes. Mor­gan Stan­ley now val­ues the com­pa­ny at $6 bil­lion. Fred Alger Man­age­ment Inc., which has owned stock since at least 2006, reval­ued Palan­tir in Decem­ber at about $10 bil­lion, accord­ing to Bloomberg Hold­ings. One frus­trat­ed investor, Marc Abramowitz, recent­ly won a court order for Palan­tir to show him its books, as part of a law­suit he filed alleg­ing the com­pa­ny sab­o­taged his attempt to find a buy­er for the Palan­tir shares he has owned for more than a decade.
    ...

    “Deep­er adop­tion of Foundry in the com­mer­cial mar­ket is cru­cial to Palantir’s hopes of a big pay­day.”

    And that appears to be the direction Palantir is heading: automated corporate surveillance that will allow the company to offer its services cheaper and to more clients. So if Palantir succeeds we just might see A LOT more companies hiring Palantir's services, which means A LOT more employees are going to have Palantir's software watching and analyzing their every keystroke and email. It really is pretty ominous. Especially given the fact that the company's Privacy and Civil Liberties Team consists of a whole 10 people:

    ...
    As shown in the pri­va­cy breach­es at Face­book and Cam­bridge Analytica—with Thiel and Palan­tir linked to both sides of the equation—the pres­sure to mon­e­tize data at tech com­pa­nies is cease­less. Face­book didn’t grow from a web­site con­nect­ing col­lege kids into a pur­vey­or of user pro­files and predilec­tions worth $478 bil­lion by walling off per­son­al data. Palan­tir says its Pri­va­cy and Civ­il Lib­er­ties Team watch­es out for inap­pro­pri­ate data demands, but it con­sists of just 10 peo­ple in a com­pa­ny of 2,000 engi­neers. No one said no to JPMor­gan, or to whomev­er at Palan­tir vol­un­teered to help Cam­bridge Analytica—or to anoth­er orga­ni­za­tion keen­ly inter­est­ed in state-of-the-art data sci­ence, the Los Ange­les Police Depart­ment.
    ...

    So that's an overview of the current status of Palantir's Big Brother-for-hire services: they've hit some obstacles, but if they can succeed in overcoming those obstacles Palantir could become the go-to corporate surveillance firm. It's more than a little ominous.
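
    It's also worth pausing on the mechanics of the LAPD plate-reader alert described in the Bloomberg piece, where a field stop today triggers an officer notification years later. At its core that's a join on shared identifiers. Here's a toy sketch of the idea (hypothetical names and data layout, not Palantir's actual Gotham model):

    # Toy sketch, assumed for illustration: a digitized field-interview card
    # links a person to a plate; a later plate-reader hit joins back to that
    # record and alerts the officer who logged the original stop.
    from collections import defaultdict

    stops_by_plate = defaultdict(list)  # plate -> field-interview cards

    def record_stop(plate, passenger, officer):
        stops_by_plate[plate].append({"passenger": passenger, "officer": officer})

    def plate_reader_hit(plate, location):
        for card in stops_by_plate[plate]:
            print(f"Alert {card['officer']}: car once linked to "
                  f"{card['passenger']} spotted near {location}")

    record_stop("7ABC123", "chronic offender", "the original officer")
    plate_reader_hit("7ABC123", "a crime scene 200 miles away")

    Once the cards are digitized, every new data point joins against every old one automatically, which is exactly why the database keeps expanding and targets stay "stuck" in the spidergram.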

    And then there’s the to fun fact from this arti­cle that relate to the ques­tions of Palan­tir’s ties to Cam­bridge Ana­lyt­i­ca: First, just as Palan­tir claimed that it’s employ­ee found to be work­ing with Cam­bridge Ana­lyt­i­ca, Alfredas Chmieli­auskas, was doing this on his own, that’s the same excuse Palan­tir gave when it was caught pitch­ing a project to the US Cham­ber of Com­merce to run a secret cam­paign to spy on and sab­o­tage the Cham­ber’s crit­ics: it was just a lone employ­ee:

    ...
    Sankar, Palan­tir employ­ee No.13 and now one of the company’s top exec­u­tives, also showed up in anoth­er Palan­tir scan­dal: the company’s 2010 pro­pos­al for the U.S. Cham­ber of Com­merce to run a secret sab­o­tage cam­paign against the group’s lib­er­al oppo­nents. Hacked emails released by the group Anony­mous indi­cat­ed that Palan­tir and two oth­er defense con­trac­tors pitched out­side lawyers for the orga­ni­za­tion on a plan to snoop on the fam­i­lies of pro­gres­sive activists, cre­ate fake iden­ti­ties to infil­trate left-lean­ing groups, scrape social media with bots, and plant false infor­ma­tion with lib­er­al groups to sub­se­quent­ly dis­cred­it them.

    After the emails emerged in the press, Palan­tir offered an expla­na­tion sim­i­lar to the one it pro­vid­ed in March for its U.K.-based employee’s assis­tance to Cam­bridge Ana­lyt­i­ca: It was the work of a sin­gle rogue employ­ee. The com­pa­ny nev­er explained Sankar’s involve­ment. Karp issued a pub­lic apol­o­gy and said he and Palan­tir were deeply com­mit­ted to pro­gres­sive caus­es. Palan­tir set up an advi­so­ry pan­el on pri­va­cy and civ­il lib­er­ties, head­ed by a for­mer CIA attor­ney, and beefed up an engi­neer­ing group it calls the Pri­va­cy and Civ­il Lib­er­ties Team. The com­pa­ny now has about 10 PCL engi­neers on call to help vet clients’ requests for access to data troves and pitch in with per­ti­nent thoughts about law, moral­i­ty, and machines.
    ...

    Finally, there's the interesting fact that Palantir executives boast of not employing a single salesperson, relying instead on word of mouth:

    ...
    Dur­ing its 14 years in start­up mode, Palan­tir has cul­ti­vat­ed a mys­tique as a haven for bril­liant engi­neers who want to solve big prob­lems such as ter­ror­ism and human traf­fick­ing, unfet­tered by pedes­tri­an con­cerns such as mak­ing mon­ey. Palan­tir exec­u­tives boast of not employ­ing a sin­gle sales­person, rely­ing instead on word-of-mouth refer­rals.
    ...

    And Sophie Schmidt, Google CEO Eric Schmidt's daughter and a former Cambridge Analytica intern, provided exactly that in June of 2013: a word-of-mouth endorsement of Palantir. So did Sophie Schmidt make this word-of-mouth pitch independently and coincidentally? It remains an unanswered question, but it's hard to ignore that Schmidt's pitch fits exactly the mode by which Palantir markets itself.

    So we’ll see what hap­pens with Palan­tir and its dri­ve to use auto­mat­ed cor­po­rate sur­veil­lance to cut costs and sell its Big Broth­er-for-hire ser­vices to even more large employ­ers. But it does seem like just a mat­ter of time before Palan­tir suc­ceeds in cut­ting those costs, which means “word of mouth” isn’t just going to be Palan­tir’s approach to mar­ket­ing. Word of mouth is also going to be the only way employ­ees in the future will be able to say some­thing to each oth­er with­out Palan­tir know­ing about it.

    Posted by Pterrafractyl | April 19, 2018, 7:43 pm
  11. Here's an update on how Facebook is planning on addressing the new scrutiny it's receiving from the US Congress as the Cambridge Analytica scandal continues to play out: Facebook's head of policy in the United States, Erin Egan, was just replaced. It's a notable position, politically speaking, because it's based in Washington DC, so Facebook basically just replaced one of its top DC lobbyists.

    So who replaced Egan? Kevin Mar­tin, Face­book’s vice pres­i­dent of mobile and glob­al access pol­i­cy. Oh, and Mar­tin was also a for­mer Repub­li­can chair­man of the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion. Sur­prise!

    Mar­tin will report to vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan. Oh, and Mar­tin and Kaplan worked togeth­er in the George W. Bush White House and on Bush’s 2000 pres­i­den­tial cam­paign. Sur­prise again! There’s a dis­tinct ‘K Street’ feel to it all.

    Face­book is spin­ning this by empha­siz­ing that Egan will remain chief pri­va­cy offi­cer. The com­pa­ny is act­ing like they made this move in order to have some­one with Egan’s cre­den­tials focused on rebuild­ing trust and not so they can replace her with a Repub­li­can.

    And that appears to be Face­book’s strat­e­gy for deal­ing with Con­gress: task­ing Repub­li­cans to lob­by their fel­low Repub­li­cans:

    The New York Times

    Face­book Replaces Lob­by­ing Exec­u­tive Amid Reg­u­la­to­ry Scruti­ny

    By Cecil­ia Kang
    April 24, 2018

    WASHINGTON — Face­book on Tues­day replaced its head of pol­i­cy in the Unit­ed States, Erin Egan, as the social net­work scram­bles to respond to intense scruti­ny from fed­er­al reg­u­la­tors and law­mak­ers.

    Ms. Egan, who is also Facebook’s chief pri­va­cy offi­cer, was respon­si­ble for lob­by­ing and gov­ern­ment rela­tions as head of pol­i­cy for the last two years. She will be replaced by Kevin Mar­tin on an inter­im basis, the com­pa­ny said. Mr. Mar­tin has been Facebook’s vice pres­i­dent of mobile and glob­al access pol­i­cy and is a for­mer Repub­li­can chair­man of the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion.

    Ms. Egan will remain chief pri­va­cy offi­cer and focus on pri­va­cy poli­cies across the globe, Andy Stone, a Face­book spokesman, said.

    Elliot Schrage, Facebook’s vice pres­i­dent of com­mu­ni­ca­tions and pub­lic pol­i­cy, said in a state­ment on Wednes­day: “We need to focus our best peo­ple on our most impor­tant pri­or­i­ties. We are com­mit­ted to rebuild­ing people’s trust in how we han­dle their infor­ma­tion, and Erin is the best per­son to part­ner with our prod­uct teams on that task.”

    The exec­u­tive reshuf­fling in Facebook’s Wash­ing­ton offices fol­lowed a peri­od of tumult for the com­pa­ny, which has put it increas­ing­ly in the spot­light on Capi­tol Hill. Last month, The New York Times and oth­ers report­ed that the data of mil­lions of Face­book users had been har­vest­ed by the British polit­i­cal research firm Cam­bridge Ana­lyt­i­ca. The ensu­ing out­cry led Facebook’s chief exec­u­tive, Mark Zucker­berg, to tes­ti­fy at two con­gres­sion­al hear­ings this month.

    Since the rev­e­la­tions about Cam­bridge Ana­lyt­i­ca, the Fed­er­al Trade Com­mis­sion has start­ed an inves­ti­ga­tion of whether Face­book vio­lat­ed promis­es it made in 2011 to pro­tect the pri­va­cy of users, mak­ing it hard­er for the com­pa­ny to share data with third par­ties.

    At the same time, Face­book is grap­pling with increased pri­va­cy reg­u­la­tions out­side the Unit­ed States. Sweep­ing new pri­va­cy laws called the Gen­er­al Data Pro­tec­tion Reg­u­la­tion are set to take effect in Europe next month. And Face­book has been called to talk to reg­u­la­tors in sev­er­al coun­tries, includ­ing Ire­land, Ger­many and Indone­sia, about its han­dling of user data.

    Mr. Zuckerberg told Congress this month that Facebook had grown too fast and that he hadn't foreseen the problems the platform would confront.

    “Face­book is an ide­al­is­tic and opti­mistic com­pa­ny,” he said. “For most of our exis­tence, we focused on all the good that con­nect­ing peo­ple can bring.”

    The exec­u­tive shifts put two Repub­li­can men in charge of Facebook’s Wash­ing­ton offices. Mr. Mar­tin will report to Joel Kaplan, vice pres­i­dent of glob­al pub­lic pol­i­cy. Mr. Mar­tin and Mr. Kaplan worked togeth­er in the George W. Bush White House and on Mr. Bush’s 2000 pres­i­den­tial cam­paign.

    Face­book hired Ms. Egan in 2011; she is a fre­quent head­lin­er at tech pol­i­cy events in Wash­ing­ton. Before join­ing Face­book, she spent 15 years as a part­ner at the law firm Cov­ing­ton & Burl­ing as co-chair­woman of the glob­al pri­va­cy and secu­ri­ty group.

    ...

    ———-

    “Face­book Replaces Lob­by­ing Exec­u­tive Amid Reg­u­la­to­ry Scruti­ny” by Cecil­ia Kang; The New York Times; 04/24/2018

    “Ms. Egan, who is also Facebook’s chief pri­va­cy offi­cer, was respon­si­ble for lob­by­ing and gov­ern­ment rela­tions as head of pol­i­cy for the last two years. She will be replaced by Kevin Mar­tin on an inter­im basis, the com­pa­ny said. Mr. Mar­tin has been Facebook’s vice pres­i­dent of mobile and glob­al access pol­i­cy and is a for­mer Repub­li­can chair­man of the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion.

    When you’re a com­pa­ny as big as Face­book, that’s who you bring in to lead you’re lob­by­ing effort: The for­mer Repub­li­can chair­man of the FCC.

    And this means two Repub­li­cans will be in charge of Face­book’s Wash­ing­ton offices (which are pret­ty much there to lob­by):

    ...
    The exec­u­tive shifts put two Repub­li­can men in charge of Facebook’s Wash­ing­ton offices. Mr. Mar­tin will report to Joel Kaplan, vice pres­i­dent of glob­al pub­lic pol­i­cy. Mr. Mar­tin and Mr. Kaplan worked togeth­er in the George W. Bush White House and on Mr. Bush’s 2000 pres­i­den­tial cam­paign.
    ...

    But the way Face­book would pre­fer us to look at it, this was real­ly all about free­ing up Erin Egan to work on rebuild­ing trust over pri­va­cy con­cerns:

    ...
    Ms. Egan will remain chief pri­va­cy offi­cer and focus on pri­va­cy poli­cies across the globe, Andy Stone, a Face­book spokesman, said.

    Elliot Schrage, Facebook’s vice pres­i­dent of com­mu­ni­ca­tions and pub­lic pol­i­cy, said in a state­ment on Wednes­day: “We need to focus our best peo­ple on our most impor­tant pri­or­i­ties. We are com­mit­ted to rebuild­ing people’s trust in how we han­dle their infor­ma­tion, and Erin is the best per­son to part­ner with our prod­uct teams on that task.”
    ...

    And this move is hap­pen­ing at the same time Face­book is star­ing at a new EU data pri­va­cy regime, the GDPR:

    ...
    At the same time, Face­book is grap­pling with increased pri­va­cy reg­u­la­tions out­side the Unit­ed States. Sweep­ing new pri­va­cy laws called the Gen­er­al Data Pro­tec­tion Reg­u­la­tion are set to take effect in Europe next month. And Face­book has been called to talk to reg­u­la­tors in sev­er­al coun­tries, includ­ing Ire­land, Ger­many and Indone­sia, about its han­dling of user data.
    ...

    And those new EU GDPR rules don't just potentially impact how Facebook handles its European users going forward. They potentially impact the policies governing all of Facebook's users outside of the US.

    Why? Because Facebook's customers outside the US and Canada are handled by Facebook's operations in Ireland and therefore fall under EU rules. That's just how Facebook decided to structure itself internationally (largely due to Ireland's status as a corporate tax haven).

    So does this mean Face­book’s US users will be oper­at­ing in a data pri­va­cy reg­u­la­to­ry envi­ron­ment man­aged by the GOP while almost every­one else in the world oper­ates under the EU’s new rules? Nope, because Face­book just moved its inter­na­tion­al oper­a­tions out of Ire­land and back to its US head­quar­ters in Cal­i­for­nia. And that means the rules Face­book is lob­by­ing for in DC will apply to all Face­book users glob­al­ly out­side the EU:

    Reuters

    Exclu­sive: Face­book to put 1.5 bil­lion users out of reach of new EU pri­va­cy law

    David Ingram
    April 18, 2018 / 7:13 PM

    SAN FRANCISCO (Reuters) — If a new Euro­pean law restrict­ing what com­pa­nies can do with people’s online data went into effect tomor­row, almost 1.9 bil­lion Face­book Inc users around the world would be pro­tect­ed by it. The online social net­work is mak­ing changes that ensure the num­ber will be much small­er.

    Face­book mem­bers out­side the Unit­ed States and Cana­da, whether they know it or not, are cur­rent­ly gov­erned by terms of ser­vice agreed with the company’s inter­na­tion­al head­quar­ters in Ire­land.

    Next month, Face­book is plan­ning to make that the case for only Euro­pean users, mean­ing 1.5 bil­lion mem­bers in Africa, Asia, Aus­tralia and Latin Amer­i­ca will not fall under the Euro­pean Union’s Gen­er­al Data Pro­tec­tion Reg­u­la­tion (GDPR), which takes effect on May 25.

    The pre­vi­ous­ly unre­port­ed move, which Face­book con­firmed to Reuters on Tues­day, shows the world’s largest online social net­work is keen to reduce its expo­sure to GDPR, which allows Euro­pean reg­u­la­tors to fine com­pa­nies for col­lect­ing or using per­son­al data with­out users’ con­sent.

    That removes a huge poten­tial lia­bil­i­ty for Face­book, as the new EU law allows for fines of up to 4 per­cent of glob­al annu­al rev­enue for infrac­tions, which in Facebook’s case could mean bil­lions of dol­lars.

    The change comes as Face­book is under scruti­ny from reg­u­la­tors and law­mak­ers around the world since dis­clos­ing last month that the per­son­al infor­ma­tion of mil­lions of users wrong­ly end­ed up in the hands of polit­i­cal con­sul­tan­cy Cam­bridge Ana­lyt­i­ca, set­ting off wider con­cerns about how it han­dles user data.

    The change affects more than 70 per­cent of Facebook’s 2 bil­lion-plus mem­bers. As of Decem­ber, Face­book had 239 mil­lion users in the Unit­ed States and Cana­da, 370 mil­lion in Europe and 1.52 bil­lion users else­where.

    Face­book, like many oth­er U.S. tech­nol­o­gy com­pa­nies, estab­lished an Irish sub­sidiary in 2008 and took advan­tage of the country’s low cor­po­rate tax rates, rout­ing through it rev­enue from some adver­tis­ers out­side North Amer­i­ca. The unit is sub­ject to reg­u­la­tions applied by the 28-nation Euro­pean Union.

    Face­book said the lat­est change does not have tax impli­ca­tions.

    ‘IN SPIRIT’

    In a state­ment giv­en to Reuters, Face­book played down the impor­tance of the terms of ser­vice change, say­ing it plans to make the pri­va­cy con­trols and set­tings that Europe will get under GDPR avail­able to the rest of the world.

    “We apply the same pri­va­cy pro­tec­tions every­where, regard­less of whether your agree­ment is with Face­book Inc or Face­book Ire­land,” the com­pa­ny said.

    Ear­li­er this month, Face­book Chief Exec­u­tive Mark Zucker­berg told Reuters in an inter­view that his com­pa­ny would apply the EU law glob­al­ly “in spir­it,” but stopped short of com­mit­ting to it as the stan­dard for the social net­work across the world.

    In prac­tice, the change means the 1.5 bil­lion affect­ed users will not be able to file com­plaints with Ireland’s Data Pro­tec­tion Com­mis­sion­er or in Irish courts. Instead they will be gov­erned by more lenient U.S. pri­va­cy laws, said Michael Veale, a tech­nol­o­gy pol­i­cy researcher at Uni­ver­si­ty Col­lege Lon­don.

    Face­book will have more lee­way in how it han­dles data about those users, Veale said. Cer­tain types of data such as brows­ing his­to­ry, for instance, are con­sid­ered per­son­al data under EU law but are not as pro­tect­ed in the Unit­ed States, he said.

    The com­pa­ny said its ratio­nale for the change was relat­ed to the Euro­pean Union’s man­dat­ed pri­va­cy notices, “because EU law requires spe­cif­ic lan­guage.” For exam­ple, the com­pa­ny said, the new EU law requires spe­cif­ic legal ter­mi­nol­o­gy about the legal basis for pro­cess­ing data which does not exist in U.S. law.

    NO WARNING

    Ire­land was unaware of the change. One Irish offi­cial, speak­ing on con­di­tion of anonymi­ty, said he did not know of any plans by Face­book to trans­fer respon­si­bil­i­ties whole­sale to the Unit­ed States or to decrease Facebook’s pres­ence in Ire­land, where the social net­work is seek­ing to recruit more than 100 new staff.

    Face­book released a revised terms of ser­vice in draft form two weeks ago, and they are sched­uled to take effect next month.

    Oth­er multi­na­tion­al com­pa­nies are also plan­ning changes. LinkedIn, a unit of Microsoft Corp, tells users in its exist­ing terms of ser­vice that if they are out­side the Unit­ed States, they have a con­tract with LinkedIn Ire­land. New terms that take effect May 8 move non-Euro­peans to con­tracts with U.S.-based LinkedIn Corp.

    ...

    ———-
    “Exclu­sive: Face­book to put 1.5 bil­lion users out of reach of new EU pri­va­cy law” by David Ingram; Reuters; 04/18/2018

    “Face­book mem­bers out­side the Unit­ed States and Cana­da, whether they know it or not, are cur­rent­ly gov­erned by terms of ser­vice agreed with the company’s inter­na­tion­al head­quar­ters in Ire­land.”

    Yep, for Facebook and quite a few other major internet companies with international headquarters in Ireland, it's the EU's rules that govern most of their global customer base. But not anymore for Facebook:

    ...
    Next month, Face­book is plan­ning to make that the case for only Euro­pean users, mean­ing 1.5 bil­lion mem­bers in Africa, Asia, Aus­tralia and Latin Amer­i­ca will not fall under the Euro­pean Union’s Gen­er­al Data Pro­tec­tion Reg­u­la­tion (GDPR), which takes effect on May 25.

    The pre­vi­ous­ly unre­port­ed move, which Face­book con­firmed to Reuters on Tues­day, shows the world’s largest online social net­work is keen to reduce its expo­sure to GDPR, which allows Euro­pean reg­u­la­tors to fine com­pa­nies for col­lect­ing or using per­son­al data with­out users’ con­sent.

    That removes a huge poten­tial lia­bil­i­ty for Face­book, as the new EU law allows for fines of up to 4 per­cent of glob­al annu­al rev­enue for infrac­tions, which in Facebook’s case could mean bil­lions of dol­lars.
    ...
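
    To put that 4 percent ceiling in perspective, here's the back-of-the-envelope arithmetic (using Facebook's 2017 global revenue of roughly $40.6 billion, a figure assumed here for illustration):

    # GDPR fine ceiling: 4 percent of global annual revenue.
    revenue = 40.6e9            # ~2017 Facebook revenue in USD (assumed figure)
    print(f"${0.04 * revenue / 1e9:.1f} billion")  # -> $1.6 billion

    And that's per infraction, which is why moving 1.5 billion users out of GDPR's reach removes such a huge potential liability.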

    And that move from Ire­land to Cal­i­for­nia will impact the ~1.5 bil­lion users Face­book has out­side of the US, Cana­da, and EU:

    ...
    The change affects more than 70 per­cent of Facebook’s 2 bil­lion-plus mem­bers. As of Decem­ber, Face­book had 239 mil­lion users in the Unit­ed States and Cana­da, 370 mil­lion in Europe and 1.52 bil­lion users else­where.

    Facebook, like many other U.S. technology companies, established an Irish subsidiary in 2008 and took advantage of the country's low corporate tax rates, routing through it revenue from some advertisers outside North America. The unit is subject to regulations applied by the 28-nation European Union.

    Face­book said the lat­est change does not have tax impli­ca­tions.
    ...
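
    And the "more than 70 percent" figure checks out against the numbers in that excerpt:

    # Share of Facebook's members shifted out of the Irish entity's terms.
    us_canada, europe, elsewhere = 239e6, 370e6, 1.52e9
    print(f"{elsewhere / (us_canada + europe + elsewhere):.0%}")  # -> 71%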

    But Facebook wants to assure everyone that this move will have no meaningful impact on anyone's privacy because it's committed to having ALL of its users globally follow the same rules as laid out by the EU's new GDPR. At least 'in spirit'. That's right, Facebook is telling the world that it's going to implement the GDPR globally at the same time it moves its operations out of the EU. That's not suspicious or anything:

    ...
    ‘IN SPIRIT’

    In a state­ment giv­en to Reuters, Face­book played down the impor­tance of the terms of ser­vice change, say­ing it plans to make the pri­va­cy con­trols and set­tings that Europe will get under GDPR avail­able to the rest of the world.

    “We apply the same pri­va­cy pro­tec­tions every­where, regard­less of whether your agree­ment is with Face­book Inc or Face­book Ire­land,” the com­pa­ny said.

    Ear­li­er this month, Face­book Chief Exec­u­tive Mark Zucker­berg told Reuters in an inter­view that his com­pa­ny would apply the EU law glob­al­ly “in spir­it,” but stopped short of com­mit­ting to it as the stan­dard for the social net­work across the world.

    In prac­tice, the change means the 1.5 bil­lion affect­ed users will not be able to file com­plaints with Ireland’s Data Pro­tec­tion Com­mis­sion­er or in Irish courts. Instead they will be gov­erned by more lenient U.S. pri­va­cy laws, said Michael Veale, a tech­nol­o­gy pol­i­cy researcher at Uni­ver­si­ty Col­lege Lon­don.

    Face­book will have more lee­way in how it han­dles data about those users, Veale said. Cer­tain types of data such as brows­ing his­to­ry, for instance, are con­sid­ered per­son­al data under EU law but are not as pro­tect­ed in the Unit­ed States, he said.
    ...

    So why did Face­book make the move if it’s pledg­ing to imple­ment the GDPR ‘in spir­it’ for every­one? Well, accord­ing to Face­book, it’s “because EU law requires spe­cif­ic lan­guage.” That’s not dubi­ous or any­thing:

    ...
    The com­pa­ny said its ratio­nale for the change was relat­ed to the Euro­pean Union’s man­dat­ed pri­va­cy notices, “because EU law requires spe­cif­ic lan­guage.” For exam­ple, the com­pa­ny said, the new EU law requires spe­cif­ic legal ter­mi­nol­o­gy about the legal basis for pro­cess­ing data which does not exist in U.S. law.
    ...

    And, of course, Face­book isn’t the only multi­na­tion­al inter­net firm look­ing to move out of Ire­land. Microsoft­’s LinkedIn is mak­ing the same move, under a sim­i­lar­ly laugh­able pre­tense:

    ...
    Oth­er multi­na­tion­al com­pa­nies are also plan­ning changes. LinkedIn, a unit of Microsoft Corp, tells users in its exist­ing terms of ser­vice that if they are out­side the Unit­ed States, they have a con­tract with LinkedIn Ire­land. New terms that take effect May 8 move non-Euro­peans to con­tracts with U.S.-based LinkedIn Corp.

    LinkedIn said in a statement on Wednesday that all users are entitled to the same privacy protections. "We've simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data," the company said.
    ...

    “We’ve sim­ply stream­lined the con­tract loca­tion to ensure all mem­bers under­stand the LinkedIn enti­ty respon­si­ble for their per­son­al data”

    Yeah, LinkedIn is making the move so users won't be confused about whether the US or the EU LinkedIn entity is responsible for their personal data. LOL! We'll no doubt get similarly laughable explanations from all the other multinational firms making similar moves.

    Also don’t for­get that these moves mean the US’s data pri­va­cy rules are going to be even more impor­tant for the inter­net giants because now those rules are for going to apply to users every­where but the EU. And that means the lob­by­ing of US law­mak­ers and reg­u­la­tors is going to be even more impor­tant going for­ward. The more com­pa­nies that relo­cate to the US to escape the EU’s GDPR for the inter­na­tion­al cus­tomer base, the greater the incen­tives for under­min­ing US data pri­va­cy laws. In oth­er words, it’s a real­ly great time to be a Repub­li­can data pri­va­cy lob­by­ist.

    Posted by Pterrafractyl | April 30, 2018, 5:30 pm
  12. Here's a pair of stories that relate to both Cambridge Analytica and the bizarre collection of stories related to the 'Seychelles backchannel' #TrumpRussia story (like George Nader's participation in the 'backchannel' or Nader's hiring of GOP money man Elliot Broidy to lobby on behalf of the UAE and Saudis). And the connecting element is none other than Erik Prince:

    So long Cam­bridge Ana­lyt­i­ca! Yep, Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

    Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca? Of course not. They’re just rebrand­ing under a new com­pa­ny, Emer­da­ta. It’s kind of like when Black­wa­ter renamed itself Xe, and then Acad­e­mi. And intrigu­ing­ly, Cam­bridge Ana­lyt­i­ca’s trans­for­ma­tion into Emer­da­ta intro­duces anoth­er asso­ci­a­tion with Black­wa­ter: Emerdata’s direc­tors include John­son Ko Chun Shun, a Hong Kong financier and busi­ness part­ner of Erik Prince:

    The New York Times

    Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data

    By Nicholas Con­fes­sore and Matthew Rosen­berg
    May 2, 2018

    The embat­tled polit­i­cal con­sult­ing firm Cam­bridge Ana­lyt­i­ca announced on Wednes­day that it would cease most oper­a­tions and file for bank­rupt­cy amid grow­ing legal and polit­i­cal scruti­ny of its busi­ness prac­tices and work for Don­ald J. Trump’s pres­i­den­tial cam­paign.

    The deci­sion was made less than two months after Cam­bridge Ana­lyt­i­ca and Face­book became embroiled in a data-har­vest­ing scan­dal that com­pro­mised the per­son­al infor­ma­tion of up to 87 mil­lion peo­ple. Rev­e­la­tions about the mis­use of data, pub­lished in March by The New York Times and The Observ­er of Lon­don, plunged Face­book into cri­sis and prompt­ed reg­u­la­tors and law­mak­ers to open inves­ti­ga­tions into Cam­bridge Ana­lyt­i­ca.

    In a state­ment post­ed to its web­site, Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.

    But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices.

    “Over the past sev­er­al months, Cam­bridge Ana­lyt­i­ca has been the sub­ject of numer­ous unfound­ed accu­sa­tions and, despite the company’s efforts to cor­rect the record, has been vil­i­fied for activ­i­ties that are not only legal, but also wide­ly accept­ed as a stan­dard com­po­nent of online adver­tis­ing in both the polit­i­cal and com­mer­cial are­nas,” the company’s state­ment said.

    Cam­bridge Ana­lyt­i­ca also said the results of an inde­pen­dent inves­ti­ga­tion it had com­mis­sioned, which it released on Wednes­day, con­tra­dict­ed asser­tions made by for­mer employ­ees and con­trac­tors about its acqui­si­tion of Face­book data. The report played down the role of a con­trac­tor turned whis­tle-blow­er, Christo­pher Wylie, who helped the com­pa­ny acquire Face­book data, call­ing it “very mod­est.”

    Cam­bridge Ana­lyt­i­ca did not reply to requests for com­ment. The news of Cam­bridge ceas­ing oper­a­tions was ear­li­er report­ed by The Wall Street Jour­nal and Giz­mo­do.

    The com­pa­ny, bankrolled by Robert Mer­cer, a wealthy Repub­li­can donor who invest­ed at least $15 mil­lion, offered tools that it claimed could iden­ti­fy the per­son­al­i­ties of Amer­i­can vot­ers and influ­ence their behav­ior. Those mod­el­ing tech­niques under­pinned Cam­bridge Analytica’s work for the Trump cam­paign and for oth­er can­di­dates in 2014 and 2016.

    But Cam­bridge Ana­lyt­i­ca came under scruti­ny over the past year, first for its pur­port­ed meth­ods of pro­fil­ing vot­ers and then over alle­ga­tions that it improp­er­ly har­vest­ed pri­vate data from Face­book users. Last year, the com­pa­ny was drawn into the spe­cial coun­sel inves­ti­ga­tion of Russ­ian inter­fer­ence in the 2016 elec­tion.

    The com­pa­ny was also forced to sus­pend its chief exec­u­tive, Alexan­der Nix, after a British tele­vi­sion chan­nel released an under­cov­er video. In it, Mr. Nix sug­gest­ed that the com­pa­ny had used seduc­tion and bribery to entrap politi­cians and influ­ence for­eign elec­tions.

    Face­book has since announced changes to its poli­cies for col­lect­ing and han­dling user data. Its chief exec­u­tive, Mark Zucker­berg, tes­ti­fied last month before Con­gress, where he faced crit­i­cism for fail­ing to pro­tect users’ data.

    The con­tro­ver­sy dealt a major blow to Cam­bridge Analytica’s ambi­tions of expand­ing its com­mer­cial busi­ness in the Unit­ed States, while also bring­ing unwant­ed atten­tion to the Amer­i­can gov­ern­ment con­tracts sought by SCL Group, an intel­li­gence con­trac­tor.

    Besides work­ing for the Trump cam­paign, Cam­bridge Ana­lyt­i­ca was pre­vi­ous­ly hired by the polit­i­cal action com­mit­tee found­ed by John R. Bolton, the nation­al secu­ri­ty advis­er. It had also worked for the 2016 pres­i­den­tial cam­paigns of Ben Car­son and Sen­a­tor Ted Cruz.

    But no can­di­dates for fed­er­al office in the Unit­ed States have dis­closed pay­ing Cam­bridge Ana­lyt­i­ca dur­ing the 2018 cycle. A Repub­li­can con­gres­sion­al can­di­date in Cal­i­for­nia did report void­ing a $10,000 trans­ac­tion with the com­pa­ny in ear­ly March, accord­ing to fed­er­al elec­tion records.

    The com­pa­ny also unsuc­cess­ful­ly tried to court some major com­mer­cial clients in the last year, includ­ing Mer­cedes-Benz and Anheuser-Busch InBev, the glob­al brew­er, accord­ing to one for­mer employ­ee. Cam­bridge pitched AB InBev by claim­ing that it could posi­tion Bud Light as the beer for the young par­ty crowd and Bud­weis­er for old-school con­ser­v­a­tives, accord­ing to the for­mer employ­ee, who asked not to be named because the per­son was restrict­ed from speak­ing about the company’s busi­ness.

    In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

    Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company's data and intellectual property.

    An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. Efforts to reach him by phone on Wednes­day were unsuc­cess­ful.

    ...

    ———

    “Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data” by Nicholas Con­fes­sore and Matthew Rosen­berg; The New York Times; 05/02/2018

    “In a state­ment post­ed to its web­site, Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.”

    So Cam­bridge Ana­lyt­i­ca is going away and the SCL Group is get­ting out of the elec­tions busi­ness. At least on the sur­face. But there’s still an open ques­tion of who is going to retain the rights to all the infor­ma­tion held by Cam­bridge Ana­lyt­i­ca, includ­ing all those psy­cho­graph­ic vot­er pro­files that are pre­sum­ably worth quite a bit of mon­ey:

    ...
    But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices.
    ...

    And that ques­tion over who is going to own the rights to all that data is par­tic­u­lar­ly rel­e­vant giv­en that exec­u­tives at Cam­bridge Ana­lyt­i­ca and SCL Group and the Mer­cers recent­ly formed a new com­pa­ny: Emer­da­ta. And look who hap­pens to be one of Emer­data’s direc­tors: John­son Ko Chun Shun, a Hong Kong financier and busi­ness part­ner of Erik Prince:

    ...
    In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

    Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company's data and intellectual property.

    An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. Efforts to reach him by phone on Wednes­day were unsuc­cess­ful.
    ...

    “Cam­bridge and SCL offi­cials pri­vate­ly raised the pos­si­bil­i­ty that Emer­da­ta could be used for a Black­wa­ter-style rebrand­ing of Cam­bridge Ana­lyt­i­ca and the SCL Group.”

    LOL! Yeah, the pos­si­bil­i­ty for a “Black­wa­ter-style rebrand­ing” is look­ing more like a real­i­ty at this point. Although we’ll see how many clients this new com­pa­ny gets.

    And that brings us to the following piece, a fascinating piece that summarizes the various things we've learned about Erik Prince, the #TrumpRussia investigation, and the UAE. And as the article notes, at the same time Emerdata was being formed in 2017 (August 11, 2017, was the incorporation date), the UAE was already paying SCL to run a social media campaign for the UAE against Qatar as part of the UAE's #BoycottQatar campaign. And as the article also notes, if you look at the name "Emerdata", it sure sounds like a shortened version of "Emirati-Data".

    So given the presence of Erik Prince's business partner on the board of directors of Emerdata, and given Prince's extensive ties to the UAE, we have to ask the question of whether or not Cambridge Analytica is about to become the new plaything of the UAE:

    Medi­um

    From the Sey­chelles to the White House to Cam­bridge Ana­lyt­i­ca, Erik Prince and the UAE are key parts of the Trump sto­ry

    Wendy Siegel­man
    Apr 8, 2018

    In Jan­u­ary 2017 Erik Prince attend­ed a meet­ing in the Sey­chelles with the Unit­ed Arab Emirate’s Crown Prince Mohammed bin Zayed Al-Nahyan, the CEO of the Russ­ian Direct Invest­ment Fund Kir­ill Dmitriev, and George Nad­er, a for­mer con­sul­tant for Erik Prince’s com­pa­ny Black­wa­ter.

    While Erik Prince tes­ti­fied in Novem­ber 2017 to the House Intel­li­gence Com­mit­tee that the meet­ing with Dmitriev was unplanned, news broke last week that Mueller has evi­dence that Prince’s meet­ing with Putin’s ally Dmitriev may not have been a chance encounter, con­tra­dict­ing Prince’s sworn tes­ti­mo­ny. And, accord­ing to George Nad­er, a main pur­pose of the meet­ing was to set up a com­mu­ni­ca­tion chan­nel between the Russ­ian gov­ern­ment and the incom­ing Trump admin­is­tra­tion.

    Ear­li­er this year, a seem­ing­ly unre­lat­ed scan­dal erupt­ed after a sto­ry broke about Trump’s data com­pa­ny Cam­bridge Ana­lyt­i­ca har­vest­ing Face­book data on tens of mil­lions of peo­ple. Short­ly after that a Chan­nel 4 News inves­ti­ga­tion revealed under­cov­er film of Cam­bridge Ana­lyt­i­ca exec­u­tives brag­ging about the dirty tricks they use to influ­ence elec­tions.

    As the Cam­bridge Ana­lyt­i­ca scan­dal was unfold­ing, I broke the sto­ry about a new com­pa­ny Emer­da­ta Lim­it­ed, cre­at­ed by Cam­bridge Ana­lyt­i­ca exec­u­tives, that in ear­ly 2018 added new board mem­bers Rebekah and Jen­nifer Mer­cer, Cheng Peng, Chun Shun Ko John­son, who is a busi­ness part­ner of Erik Prince, and Ahmed Al Khat­ib, a ‘Cit­i­zen of Sey­chelles’.

    In 2017 as Cam­bridge Ana­lyt­i­ca exec­u­tives cre­at­ed Emer­da­ta, they were also work­ing on behalf of the UAE through SCL Social, which had a $330,000 con­tract to run a social media cam­paign for the UAE against Qatar, fea­tur­ing the theme #Boy­cottQatar. One of the Emer­da­ta direc­tors may have ties to the UAE and the com­pa­ny name, coin­ci­den­tal­ly, sounds like a play on Emirati-Data…Emerdata.

    The Unit­ed Arab Emi­rates and peo­ple advo­cat­ing for the inter­ests of the UAE—including Prince, Nad­er, and Trump fundrais­er Elliot Broidy who has done large busi­ness deals with the UAE—have start­ed to appear fre­quent­ly in news relat­ed to Mueller’s inves­ti­ga­tion. Erik Prince, the broth­er of the U.S. Sec­re­tary of Edu­ca­tion Bet­sy DeVos, lived in the UAE, attend­ed the Sey­chelles meet­ing with the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, is busi­ness part­ners with Chun Shun Ko who just joined the board of the new Cam­bridge Analytica/SCL com­pa­ny Emer­da­ta, and SCL had a large con­tract to work on behalf of the UAE.

    To bet­ter under­stand the role Erik Prince and the UAE have played in the Trump-Rus­sia story—and in the much broad­er sto­ry of glob­al polit­i­cal influ­ence, and often corruption—below is a time­line track­ing some key events. Not all events are relat­ed, but review­ing the infor­ma­tion chrono­log­i­cal­ly may help answer a few ques­tions, includ­ing: Why was the UAE involved in a meet­ing with Erik Prince to set up a com­mu­ni­ca­tion chan­nel with Rus­sia? Is the UAE involved with Cam­bridge Analytica’s new com­pa­ny Emer­da­ta? Does Erik Prince have any con­nec­tion to Cam­bridge Ana­lyt­i­ca, even if only indi­rect­ly through Chun Shun Ko, Steve Ban­non, or the Mer­cers? And what role has the UAE had in influ­enc­ing the Trump admin­is­tra­tion?

    Note: this time­line may be updat­ed peri­od­i­cal­ly to include new per­ti­nent infor­ma­tion. Each event below includes the source data link. Name vari­a­tions (e.g. Chun Shun Ko John­son vs John­son Ko Chun-shun) reflect how names are pre­sent­ed by each source.

    2010

    * In a depo­si­tion Erik Prince said he had pre­vi­ous­ly hired George Nad­er to help Black­wa­ter as a “busi­ness devel­op­ment con­sul­tant that we retained in Iraq” because the com­pa­ny was look­ing for con­tracts with the Iraqi gov­ern­ment. New York Times
    * After a series of civ­il law­suits, crim­i­nal charges and Con­gres­sion­al inves­ti­ga­tions against Erik Prince’s com­pa­ny Black­wa­ter and its for­mer exec­u­tives, Prince moved to the Unit­ed Arab Emi­rates. New York Times.

    2011

    * Sheik Mohamed bin Zayed al-Nahyan of Abu Dhabi hired Erik Prince to build a fight­ing force, pay­ing $529 mil­lion to build an army. Addi­tion­al­ly, Prince “worked with the Emi­rati gov­ern­ment on var­i­ous ventures…including an oper­a­tion using South African mer­ce­nar­ies to train Soma­lis to fight pirates.” New York Times
    * A movie called “The Project,” about Erik Prince’s UAE-fund­ed pri­vate army in Soma­lia, was paid for by the Mov­ing Pic­ture Insti­tute where Rebekah Mer­cer is on the board of Trustees. Gawk­er Web­site

    2012

    * Erik Prince, who works and lives in Abu Dhabi in the Unit­ed Arab Emi­rates, cre­at­ed Fron­tier Resource Group, an Africa-ded­i­cat­ed invest­ment firm part­nered with major Chi­nese enter­pris­es. South Chi­na Morn­ing Post

    2013

    * The Russ­ian Direct Invest­ment Fund led by CEO Kir­ill Dmitriev, and the UAE’s Mubadala Devel­op­ment Com­pa­ny based in Abu Dhabi, launched a $2 bil­lion co-invest­ment fund to pur­sue oppor­tu­ni­ties in Rus­sia. PR Newswire

    2014

    * Jan­u­ary: Erik Prince was named Chair­man of DVN Hold­ings, con­trolled by Hong Kong busi­ness­man John­son Ko Chun-shun and Chi­nese state-owned Citic Group. DVN’s board pro­posed that the firm be renamed Fron­tier Ser­vices Group. South Chi­na Morn­ing Post.
    * Jan­u­ary: Erik Prince’s busi­ness part­ner, Dori­an Barak, became a Non-Exec­u­tive Direc­tor of Reori­ent Group Lim­it­ed, an invest­ment com­pa­ny where Ko Chun Shun John­son was Chair­man and Exec­u­tive Direc­tor, and had done a $350 mil­lion deal with Jack Ma. 2014 Annu­al Report. Forbes
    * Erik Prince’s busi­ness part­ner Dori­an Barak joined the board of Alu­fur Min­ing, “an inde­pen­dent min­er­al explo­ration and devel­op­ment com­pa­ny with sig­nif­i­cant baux­ite inter­ests in the Repub­lic of Guinea.” (Prince would lat­er tes­ti­fy that the pur­pose of his Sey­chelles trip was to dis­cuss min­er­als and ‘baux­ite’ with the UAE’s Mohammed bin Zayed). Alu­fur web­site
    * August: The John Bolton Super PAC found­ed by John Bolton, Pres­i­dent Trump’s incom­ing nation­al secu­ri­ty advis­er, hired Cam­bridge Ana­lyt­i­ca months after the firm was found­ed and while it was still har­vest­ing Face­book data. In the two years that fol­lowed, Bolton’s super PAC spent near­ly $1.2 mil­lion pri­mar­i­ly for “sur­vey research” and “behav­ioral micro­tar­get­ing with psy­cho­graph­ic mes­sag­ing” using Face­book data. New York Times

    2016

    * August/September: Erik Prince donat­ed $150,000 to Make Amer­i­ca Num­ber 1, a pro-Trump PAC for which Robert Mer­cer has been the largest fun­der. Open Secrets
    * Octo­ber: Erik Prince donat­ed $100,000 to the Trump Vic­to­ry fund and $33,400 to the Repub­li­can Nation­al Com­mit­tee. FEC
    * October 11: Erik Prince did an interview with Breitbart News Daily describing Hillary Clinton's "demonstrable links to Russia," particularly her complicity in "selling 20 percent of the United States's uranium supply to a Russian state company." Breitbart
    * November 4: Erik Prince told Breitbart News Daily that "The NYPD wanted to do a press conference announcing the warrants and the additional arrests they were making" in the Anthony Weiner investigation, but received "huge pushback" from the Justice Department. Prince described criminal culpability in emails from Weiner's laptop related to "money laundering, underage sex, pay-for-play." Breitbart
    * December: The United Arab Emirates' Sheikh Mohamed bin Zayed al-Nahyan, crown prince of Abu Dhabi, visited Trump Tower and met with Jared Kushner, Michael Flynn, and Steve Bannon. In an unusual breach of protocol, the Obama administration was not notified about the visit. Washington Post
    * Erik Prince told the House Intel­li­gence Com­mit­tee that Steve Ban­non informed him about the Decem­ber Trump Tow­er meet­ing with Mohamed bin Zayed al-Nahyan. Prince also said he had sent Ban­non unso­licit­ed pol­i­cy papers dur­ing the cam­paign. CNN

    2017

    Jan­u­ary 2017

    * One week pri­or to the meet­ing in the Sey­chelles, sources report­ed that George Nad­er met with Erik Prince and lat­er sent him infor­ma­tion on Kir­ill Dmitriev, the CEO of the Russ­ian Direct Invest­ment Fund, con­tra­dict­ing Prince’s sworn tes­ti­mo­ny to the House Intel­li­gence Com­mit­tee that the meet­ing with Kir­ill Dmitriev in the Sey­chelles was unex­pect­ed. ABC News
    * Jan­u­ary 11: A meet­ing was held in the Sey­chelles with Erik Prince, the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, Kir­ill Dmitriev, and George Nad­er, who had pre­vi­ous­ly con­sult­ed for Prince’s Black­wa­ter. Accord­ing to Nad­er the meet­ing was to dis­cuss for­eign pol­i­cy and to estab­lish a line of com­mu­ni­ca­tion between the Russ­ian gov­ern­ment and the incom­ing Trump admin­is­tra­tion. ABC News

    Feb­ru­ary 2017

    * “After decades of close polit­i­cal and defense prox­im­i­ty with the Unit­ed States, the Unit­ed Arab Emi­rates have con­clud­ed three major agree­ments with Rus­sia which could lead to its air force being ulti­mate­ly re-equipped with Russ­ian com­bat air­craft.” Defense Aero­space

    March 2017

    * Elliott Broidy, a top GOP and Trump fundrais­er with hun­dreds of mil­lions of dol­lars in busi­ness deals with the UAE, sent George Nad­er a spread­sheet out­lin­ing a pro­posed $12.7 mil­lion cam­paign against Qatar and the Mus­lim Broth­er­hood. Broidy also sent an email to George Nad­er refer­ring to Secure Amer­i­ca Now as a group he worked with. New York Times
    * The largest fun­der of Secure Amer­i­ca Now, a secre­tive group that cre­ates anti-Mus­lim adver­tis­ing, is Robert Mer­cer, who is also the largest fun­der of Cam­bridge Ana­lyt­i­ca. Open Secrets

    April 2017

    * Jared Kushner’s father Charles Kush­n­er met with Qatar’s min­is­ter of finance Ali Sharif Al Ema­di to dis­cuss financ­ing of Kush­n­er Com­pa­nies’ 666 Fifth Avenue build­ing. Inter­cept
    * “Tom Bar­rack, a Trump friend who had sug­gest­ed that Thani con­sid­er invest­ing in the Kush­n­er prop­er­ty, has said Charles Kush­n­er was “crushed” when his son got the White House job because that prompt­ed the Qataris to pull out.” Wash­ing­ton Post

    May 2017

    * May 20–21: Don­ald Trump made his first over­seas trip to Riyadh, Sau­di Ara­bia, accom­pa­nied by Jared Kush­n­er, Steve Ban­non and oth­ers. On his first day there Trump signed a joint “strate­gic vision” that includ­ed $110 bil­lion in Amer­i­can arms sales and oth­er invest­ments. Wash­ing­ton Post
    * May 23: Per U.S. offi­cials, the UAE gov­ern­ment dis­cussed plans to hack Qatar. Wash­ing­ton Post
    * May 24: Per U.S. offi­cials, the UAE orches­trat­ed the hack of Qatari gov­ern­ment news and social media sites in order to post incen­di­ary false quotes attrib­uted to Qatar’s emir, Sheikh Tamim Bin Hamad al-Thani. Wash­ing­ton Post
    * Late May: Fol­low­ing the hack, Sau­di Ara­bia, UAE, Bahrain and Egypt banned Qatari media, broke rela­tions and declared a trade and diplo­mat­ic boy­cott. Wash­ing­ton Post

    June 2017

    * June 5: The Gulf Coop­er­a­tion Coun­cil mem­bers Sau­di Ara­bia, the Unit­ed Arab Emi­rates, Bahrain, and Egypt released coor­di­nat­ed state­ments accus­ing Qatar of sup­port­ing ter­ror­ist groups and say­ing that as a result they were cut­ting links to the coun­try by land, sea and air. Wash­ing­ton Post
    * June 6: Trump tweet­ed his sup­port for the block­ade against Qatar, while Rex Tiller­son and James Mat­tis called for calm. The Guardian
    * June 7: U.S. inves­ti­ga­tors from the FBI, who sent a team to Doha to help the Qatari gov­ern­ment inves­ti­gate the alleged hack­ing inci­dent, sus­pect­ed Russ­ian hack­ers plant­ed the fake news behind the Qatar cri­sis. CNN
    * June 27: Rex Tiller­son “reaf­firmed his strong sup­port for Kuwait’s efforts to medi­ate the dis­pute between Qatar and Sau­di Ara­bia, the UAE, Bahrain, and Egypt” and “lead­ers reaf­firmed the need for all par­ties to exer­cise restraint to allow for pro­duc­tive diplo­mat­ic dis­cus­sions.” U.S. Depart­ment of State Read­out
    * An aide to Tiller­son was con­vinced Trump’s sup­port for the UAE came from the UAE Ambas­sador Yousef Al Ota­bie, a close friend of Jared Kush­n­er, known to speak to Kush­n­er on a week­ly basis. The Amer­i­can Con­ser­v­a­tive

    July 2017

    * U.S. intel­li­gence offi­cials con­firmed that the UAE orches­trat­ed the hack of Qatari gov­ern­ment news and social media sites with incen­di­ary false quotes attrib­uted to Qatar’s emir, Sheikh Tamim Bin Hamad al-Thani. Wash­ing­ton Post

    August 2017

    * Emer­da­ta Lim­it­ed was incor­po­rat­ed in the UK with Cam­bridge Analytica’s Chair­man Julian Wheat­land and Chief Data Offi­cer Alexan­der Tayler as sig­nif­i­cant own­ers. Com­pa­ny fil­ing

    Sep­tem­ber 2017

    * Cambridge Analytica CEO Alexander Nix and Steve Bannon were both present at the CLSA Investors' Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Erik Prince and Ko Chun Shun Johnson's Frontier Services Group. Bloomberg Tweet

    Octo­ber 2017

    * October 6: Elliott Broidy, whose company Circinus has had hundreds of millions of dollars in contracts with the UAE, met Trump and suggested Trump meet with the UAE's Mohammed bin Zayed al-Nahyan. Broidy said Trump thought it was a good idea. Broidy also "personally urged Mr. Trump to fire Mr. Tillerson, whom the Saudis and Emiratis saw as insufficiently tough on Iran and Qatar." New York Times
    * October 6: SCL Social Limited, part of SCL Group/Cambridge Analytica, was hired by UK company Project Associates for approximately $330,000 to implement a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. FARA filing
    * October 23: Steve Bannon spoke at a Hudson Institute event on "Countering Violent Extremism: Qatar, Iran, and the Muslim Brotherhood," and called the Qatar blockade "the single most important thing that's happening in the world." Bannon "bragged that the president's trip to Saudi Arabia in May gave the Saudis the gumption to lead a blockade against Doha." The National Interest
    * Octo­ber 29: Jared Kush­n­er returned from an unan­nounced trip to Sau­di Ara­bia to dis­cuss Mid­dle East peace. Tom Bar­rack, a long­time friend and close Trump con­fi­dant said “The key to solv­ing (the Israel-Pales­tin­ian dis­pute) is Egypt. And the key to Egypt is Abu Dhabi and Sau­di Ara­bia.” Politi­co

    Novem­ber 2017

    * November 4: A week after Kushner's visit, Saudi prince Mohammed bin Salman launched an anti-corruption crackdown and arrested dozens of members of the Saudi royal family. Per three sources, "Crown Prince Mohammed told confidants that Kushner had discussed the names of Saudis disloyal to the crown prince." However, "Kushner, through his attorney's spokesperson, denies having done so." Another source said Mohammed bin Salman told UAE Crown Prince Mohammed bin Zayed that Kushner was "in his pocket." The Intercept
    * November 15: Dmitry Rybolovlev sold his Leonardo Da Vinci painting 'Salvator Mundi' for $450.3 million, setting a new record for the highest priced painting sold at auction. Rybolovlev, who had purchased a Florida home from Trump in 2008 for $95 million (more than $50 million above Trump's purchase price), sold the Da Vinci painting for $322 million above his purchase price. The painting was bought by Saudi Prince Bader bin Abdullah on behalf of crown prince Mohammad Bin Salman. The price was driven up by a bidding war with the UAE's Mohammed bin Zayed al-Nahyan, as both bidders feared losing the painting to the Qatari ruling family. After the purchase came under criticism, the Da Vinci painting was swapped with the UAE Ministry of Culture in exchange for an equally valued yacht. Daily Mail
    * Novem­ber 20: Erik Prince tes­ti­fied before the U.S. House of Rep­re­sen­ta­tives Intel­li­gence Com­mit­tee and said that he arranged his trip to Sey­chelles with peo­ple who worked for Mohammed bin Zayed to dis­cuss “secu­ri­ty issues and min­er­al issues and even baux­ite.” Prince then described how some­one, maybe one of Mohammed bin Zayed’s broth­ers, told Prince he should meet with Kir­ill Dim­itriev, describ­ing him as “a Russ­ian guy that we’ve done some busi­ness with in the past.” Erik Prince Tran­script

    Decem­ber 2017

    * Buz­zfeed broke a sto­ry on how Erik Prince had pitched the Trump admin­is­tra­tion on a plan to hire him to pri­va­tize the Afghan war and mine Afghanistan’s valu­able min­er­als. A slide pre­sen­ta­tion of Prince’s pitch described Afghanistan’s rich deposits of min­er­als with an esti­mat­ed val­ue of $1 tril­lion, and described his plan as “a strate­gic min­er­al resource extrac­tion fund­ed effort.” Buz­zfeed

    2018
    Jan­u­ary 2018

    * George Nader emailed a request to Elliott Broidy saying the leader of the UAE asked Trump to call the crown prince of Saudi Arabia to smooth over potential bad feelings created by the book "Fire and Fury." Nader also reiterated to Broidy the desire of the ruler of the UAE to meet alone with Trump. Days later, as Nader went to meet Broidy at Mar-a-Lago to celebrate the anniversary of the inauguration, he was met at Dulles Airport by FBI agents working for Mueller. New York Times

    Jan­u­ary to March 2018

    * Emer­da­ta Lim­it­ed added new direc­tors Alexan­der Nix, John­son Chun Shun Ko, Cheng Peng, Ahmad Al Khat­ib, Rebekah Mer­cer, and Jen­nifer Mer­cer. John­son Chun Shun Ko is the busi­ness part­ner of Erik Prince. Ahmad Al Khat­ib is iden­ti­fied as ‘Cit­i­zen of Sey­chelles’. Shares are issued val­ued at 1,912,512 GBP. Emer­da­ta arti­cle
    * SCL/Cambridge Ana­lyt­i­ca founder Nigel Oakes told Chan­nel 4 News it was his under­stand­ing that Emer­da­ta was set up a year ago to acquire all of Cam­bridge Ana­lyt­i­ca and all of SCL. Chan­nel 4 News
    * A web­site for a com­pa­ny called Coinagelabs.org dis­played Cam­bridge Ana­lyt­i­ca as a part­ner. The team was led by CEO Sandy Peng, who pre­vi­ous­ly worked at Reori­ent Cap­i­tal, part of Reori­ent Group where Chun Shun Ko had been exec­u­tive chair­man. The site is no longer avail­able. Twit­ter Thread

    March 2018

    * March 13: A key goal of UAE political advisor George Nader and of Elliott Broidy, who had hundreds of millions of dollars of business with the UAE, was accomplished, and Rex Tillerson was fired. New York Times
    * March 19: Russia-friendly California representative Dana Rohrabacher, who has criticized the Magnitsky Act and for whom Prince had interned in the 1990s, attended a fundraiser hosted for him by Erik Prince and Oliver North at Prince's home in Virginia. The Intercept

    April 2018

    * Sources reported that "Special Counsel Robert Mueller has obtained evidence that calls into question Congressional testimony given by Trump supporter and Blackwater founder Erik Prince last year, when he described a meeting in Seychelles with a Russian financier close to Vladimir Putin as a casual chance encounter 'over a beer.'" ABC News
    * John Bolton is set to begin as Don­ald Trump’s new Nation­al Secu­ri­ty Advi­sor, replac­ing Lt. Gen. H.R. McMas­ter, who had opposed Erik Prince’s plans to pri­va­tize the war in Afghanistan. Wash­ing­ton Post
    * Robert Mer­cer, the largest fun­der of Cam­bridge Ana­lyt­i­ca, has giv­en $5 mil­lion to Bolton’s super PAC since 2013. He was the Bolton super PAC’s largest donor dur­ing the 2016 elec­tion cycle, and so far, is also the largest donor for the 2018 elec­tion cycle, accord­ing to fed­er­al cam­paign finance fil­ings. The Cen­ter for Pub­lic Integri­ty
    * A source close to Erik Prince said, "now that McMaster will be replaced by neocon favorite John Bolton, and Tillerson with CIA director Mike Pompeo, who once ran an aerospace supplier, the dynamics have changed." Bolton's selection, particularly, is "going to take us in a really positive direction." Forbes
    * Accord­ing to an SEC fil­ing, Kush­n­er Com­pa­nies appears to have reached a deal to buy out its part­ner, Vor­na­do Real­ty Trust, in the trou­bled 666 Fifth Avenue prop­er­ty. The Kush­n­ers had pre­vi­ous­ly nego­ti­at­ed unsuc­cess­ful­ly with Chi­nese com­pa­ny Anbang Insur­ance Group, whose chair­man has since been pros­e­cut­ed and reg­u­la­tors have seized con­trol of the com­pa­ny. The Kush­n­ers also nego­ti­at­ed unsuc­cess­ful­ly with the for­mer prime min­is­ter of Qatar, and short­ly after­wards Qatar was hacked and block­ad­ed by the UAE, Sau­di Ara­bia, Bahrain and Egypt. It is not clear who will pro­vide the financ­ing for the deal. New York Times

    ...

    ———-

    “From the Sey­chelles to the White House to Cam­bridge Ana­lyt­i­ca, Erik Prince and the UAE are key parts of the Trump sto­ry” by Wendy Siegel­man; Medi­um; 04/08/2018

    In 2017 as Cam­bridge Ana­lyt­i­ca exec­u­tives cre­at­ed Emer­da­ta, they were also work­ing on behalf of the UAE through SCL Social, which had a $330,000 con­tract to run a social media cam­paign for the UAE against Qatar, fea­tur­ing the theme #Boy­cottQatar. One of the Emer­da­ta direc­tors may have ties to the UAE and the com­pa­ny name, coin­ci­den­tal­ly, sounds like a play on Emirati-Data…Emerdata.

    Emirati-Data = Emerdata. Is that the play on words we're seeing in this name? It does sound like a reasonable inference. Especially given Erik Prince's close association with both Emerdata's board of directors and the UAE:

    ...
    As the Cam­bridge Ana­lyt­i­ca scan­dal was unfold­ing, I broke the sto­ry about a new com­pa­ny Emer­da­ta Lim­it­ed, cre­at­ed by Cam­bridge Ana­lyt­i­ca exec­u­tives, that in ear­ly 2018 added new board mem­bers Rebekah and Jen­nifer Mer­cer, Cheng Peng, Chun Shun Ko John­son, who is a busi­ness part­ner of Erik Prince, and Ahmed Al Khat­ib, a ‘Cit­i­zen of Sey­chelles’.

    ...

    The Unit­ed Arab Emi­rates and peo­ple advo­cat­ing for the inter­ests of the UAE—including Prince, Nad­er, and Trump fundrais­er Elliot Broidy who has done large busi­ness deals with the UAE—have start­ed to appear fre­quent­ly in news relat­ed to Mueller’s inves­ti­ga­tion. Erik Prince, the broth­er of the U.S. Sec­re­tary of Edu­ca­tion Bet­sy DeVos, lived in the UAE, attend­ed the Sey­chelles meet­ing with the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, is busi­ness part­ners with Chun Shun Ko who just joined the board of the new Cam­bridge Analytica/SCL com­pa­ny Emer­da­ta, and SCL had a large con­tract to work on behalf of the UAE.
    ...

    So let's take a closer look at Prince's ties to the UAE and his partners in Hong Kong: He moves to the UAE in 2010, and gets hired by Sheik Mohamed bin Zayed al-Nahyan to build a fighting force in 2011. In 2012, while still living in the UAE, Prince creates the Frontier Resource Group, an Africa-dedicated investment firm partnered with major Chinese enterprises:

    ...
    2010

    * In a depo­si­tion Erik Prince said he had pre­vi­ous­ly hired George Nad­er to help Black­wa­ter as a “busi­ness devel­op­ment con­sul­tant that we retained in Iraq” because the com­pa­ny was look­ing for con­tracts with the Iraqi gov­ern­ment. New York Times
    * After a series of civ­il law­suits, crim­i­nal charges and Con­gres­sion­al inves­ti­ga­tions against Erik Prince’s com­pa­ny Black­wa­ter and its for­mer exec­u­tives, Prince moved to the Unit­ed Arab Emi­rates. New York Times.

    2011

    * Sheik Mohamed bin Zayed al-Nahyan of Abu Dhabi hired Erik Prince to build a fight­ing force, pay­ing $529 mil­lion to build an army. Addi­tion­al­ly, Prince “worked with the Emi­rati gov­ern­ment on var­i­ous ventures…including an oper­a­tion using South African mer­ce­nar­ies to train Soma­lis to fight pirates.” New York Times
    * A movie called “The Project,” about Erik Prince’s UAE-fund­ed pri­vate army in Soma­lia, was paid for by the Mov­ing Pic­ture Insti­tute where Rebekah Mer­cer is on the board of Trustees. Gawk­er Web­site

    2012

    * Erik Prince, who works and lives in Abu Dhabi in the Unit­ed Arab Emi­rates, cre­at­ed Fron­tier Resource Group, an Africa-ded­i­cat­ed invest­ment firm part­nered with major Chi­nese enter­pris­es. South Chi­na Morn­ing Post

    2013

    * The Russ­ian Direct Invest­ment Fund led by CEO Kir­ill Dmitriev, and the UAE’s Mubadala Devel­op­ment Com­pa­ny based in Abu Dhabi, launched a $2 bil­lion co-invest­ment fund to pur­sue oppor­tu­ni­ties in Rus­sia. PR Newswire
    ...

    Then, in 2014, Prince gets named as Chair­man of DVN Hold­ings, con­trolled by Hong Kong busi­ness­man John­son Ko Chun-shun (who sits on the board of Emer­da­ta) and Chi­nese state-owned Citic Group:

    ...
    2014

    * Jan­u­ary: Erik Prince was named Chair­man of DVN Hold­ings, con­trolled by Hong Kong busi­ness­man John­son Ko Chun-shun and Chi­nese state-owned Citic Group. DVN’s board pro­posed that the firm be renamed Fron­tier Ser­vices Group. South Chi­na Morn­ing Post.
    * Jan­u­ary: Erik Prince’s busi­ness part­ner, Dori­an Barak, became a Non-Exec­u­tive Direc­tor of Reori­ent Group Lim­it­ed, an invest­ment com­pa­ny where Ko Chun Shun John­son was Chair­man and Exec­u­tive Direc­tor, and had done a $350 mil­lion deal with Jack Ma. 2014 Annu­al Report. Forbes
    * Erik Prince’s busi­ness part­ner Dori­an Barak joined the board of Alu­fur Min­ing, “an inde­pen­dent min­er­al explo­ration and devel­op­ment com­pa­ny with sig­nif­i­cant baux­ite inter­ests in the Repub­lic of Guinea.” (Prince would lat­er tes­ti­fy that the pur­pose of his Sey­chelles trip was to dis­cuss min­er­als and ‘baux­ite’ with the UAE’s Mohammed bin Zayed). Alu­fur web­site
    ...

    Then there’s all the shenani­gans involv­ing the Sey­chelles ‘backchan­nel’ (that inex­plic­a­bly involves the UAE) and GOP mon­ey-man Elliott Broidy:

    ...
    * December: The United Arab Emirates' Sheikh Mohamed bin Zayed al-Nahyan, crown prince of Abu Dhabi, visited Trump Tower and met with Jared Kushner, Michael Flynn, and Steve Bannon. In an unusual breach of protocol, the Obama administration was not notified about the visit. Washington Post
    * Erik Prince told the House Intel­li­gence Com­mit­tee that Steve Ban­non informed him about the Decem­ber Trump Tow­er meet­ing with Mohamed bin Zayed al-Nahyan. Prince also said he had sent Ban­non unso­licit­ed pol­i­cy papers dur­ing the cam­paign. CNN

    2017

    Jan­u­ary 2017

    * One week pri­or to the meet­ing in the Sey­chelles, sources report­ed that George Nad­er met with Erik Prince and lat­er sent him infor­ma­tion on Kir­ill Dmitriev, the CEO of the Russ­ian Direct Invest­ment Fund, con­tra­dict­ing Prince’s sworn tes­ti­mo­ny to the House Intel­li­gence Com­mit­tee that the meet­ing with Kir­ill Dmitriev in the Sey­chelles was unex­pect­ed. ABC News
    * Jan­u­ary 11: A meet­ing was held in the Sey­chelles with Erik Prince, the UAE’s Crown Prince Mohammed bin Zayed Al-Nahyan, Kir­ill Dmitriev, and George Nad­er, who had pre­vi­ous­ly con­sult­ed for Prince’s Black­wa­ter. Accord­ing to Nad­er the meet­ing was to dis­cuss for­eign pol­i­cy and to estab­lish a line of com­mu­ni­ca­tion between the Russ­ian gov­ern­ment and the incom­ing Trump admin­is­tra­tion. ABC News

    Feb­ru­ary 2017

    * “After decades of close polit­i­cal and defense prox­im­i­ty with the Unit­ed States, the Unit­ed Arab Emi­rates have con­clud­ed three major agree­ments with Rus­sia which could lead to its air force being ulti­mate­ly re-equipped with Russ­ian com­bat air­craft.” Defense Aero­space

    March 2017

    * Elliott Broidy, a top GOP and Trump fundrais­er with hun­dreds of mil­lions of dol­lars in busi­ness deals with the UAE, sent George Nad­er a spread­sheet out­lin­ing a pro­posed $12.7 mil­lion cam­paign against Qatar and the Mus­lim Broth­er­hood. Broidy also sent an email to George Nad­er refer­ring to Secure Amer­i­ca Now as a group he worked with. New York Times
    * The largest fun­der of Secure Amer­i­ca Now, a secre­tive group that cre­ates anti-Mus­lim adver­tis­ing, is Robert Mer­cer, who is also the largest fun­der of Cam­bridge Ana­lyt­i­ca. Open Secrets
    ...

    Then Emerdata gets formed in August of 2017. The next month, Steve Bannon and Alexander Nix attend the CLSA Investors' Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Prince's Frontier Services Group:

    ...
    August 2017

    * Emer­da­ta Lim­it­ed was incor­po­rat­ed in the UK with Cam­bridge Analytica’s Chair­man Julian Wheat­land and Chief Data Offi­cer Alexan­der Tayler as sig­nif­i­cant own­ers. Com­pa­ny fil­ing

    Sep­tem­ber 2017

    * Cambridge Analytica CEO Alexander Nix and Steve Bannon were both present at the CLSA Investors' Forum in Hong Kong. CLSA is part of Citic Securities, which is part of Citic Group, the majority owner of Erik Prince and Ko Chun Shun Johnson's Frontier Services Group. Bloomberg Tweet
    ...

    Then in October of 2017, we have a continuation of Elliott Broidy's lobbying of the Trump administration on behalf of the UAE at the same time the SCL Group gets hired to implement a social media campaign for the UAE against Qatar:

    ...
    Octo­ber 2017

    * October 6: Elliott Broidy, whose company Circinus has had hundreds of millions of dollars in contracts with the UAE, met Trump and suggested Trump meet with the UAE's Mohammed bin Zayed al-Nahyan. Broidy said Trump thought it was a good idea. Broidy also "personally urged Mr. Trump to fire Mr. Tillerson, whom the Saudis and Emiratis saw as insufficiently tough on Iran and Qatar." New York Times
    * October 6: SCL Social Limited, part of SCL Group/Cambridge Analytica, was hired by UK company Project Associates for approximately $330,000 to implement a social media campaign for the UAE against Qatar, featuring the theme #BoycottQatar. FARA filing
    * October 23: Steve Bannon spoke at a Hudson Institute event on "Countering Violent Extremism: Qatar, Iran, and the Muslim Brotherhood," and called the Qatar blockade "the single most important thing that's happening in the world." Bannon "bragged that the president's trip to Saudi Arabia in May gave the Saudis the gumption to lead a blockade against Doha." The National Interest
    * Octo­ber 29: Jared Kush­n­er returned from an unan­nounced trip to Sau­di Ara­bia to dis­cuss Mid­dle East peace. Tom Bar­rack, a long­time friend and close Trump con­fi­dant said “The key to solv­ing (the Israel-Pales­tin­ian dis­pute) is Egypt. And the key to Egypt is Abu Dhabi and Sau­di Ara­bia.” Politi­co
    ...

    Final­ly, in ear­ly 2018 we find Emer­da­ta adding Alexan­der Nix, John­son Chun Shun Ko (Prince’s part­ner at Fron­tier Ser­vices Group), Cheng Peng, Ahmad Al Khat­ib, Rebekah Mer­cer, and Jen­nifer Mer­cer to the board of direc­tors:

    ...
    Jan­u­ary to March 2018

    * Emer­da­ta Lim­it­ed added new direc­tors Alexan­der Nix, John­son Chun Shun Ko, Cheng Peng, Ahmad Al Khat­ib, Rebekah Mer­cer, and Jen­nifer Mer­cer. John­son Chun Shun Ko is the busi­ness part­ner of Erik Prince. Ahmad Al Khat­ib is iden­ti­fied as ‘Cit­i­zen of Sey­chelles’. Shares are issued val­ued at 1,912,512 GBP. Emer­da­ta arti­cle
    ...

    So it sure looks a lot like the new incar­na­tion of Cam­bridge Ana­lyt­i­ca is basi­cal­ly going to be apply­ing Cam­bridge Ana­lyt­i­ca’s psy­cho­log­i­cal war­fare meth­ods on behalf of the UAE, among oth­ers. The Chi­nese investors will also pre­sum­ably be inter­est­ed in these kinds of ser­vices. And any­one else who might want to hire a psy­cho­log­i­cal war­fare ser­vice provider run by a bunch of far right lumi­nar­ies.

    Posted by Pterrafractyl | May 3, 2018, 3:56 pm
  13. Oh look at that: Remem­ber how Alek­san­dr Kogan, the Uni­ver­si­ty of Cam­bridge pro­fes­sor who built the app used by Cam­bridge Ana­lyt­i­ca, claimed that what he was doing was rather typ­i­cal? Well, Face­book’s audit of the thou­sands of apps used on its plat­form appears to be prov­ing Kogan right. Face­book just announced that it has already found and sus­pend­ed 200 apps that appear to be mis­us­ing user data.

    Face­book won’t say which apps were sus­pend­ed, how many users were involved, or what the red flags were that trig­gered the sus­pen­sion, so we’re large­ly left in the dark in terms of the scope of the prob­lem.

    But there is one particular problem app that's been revealed, although it wasn't revealed by Facebook. It's the myPersonality app, which was also developed by Cambridge University professors at the Cambridge Psychometrics Center. Recall how Cambridge Analytica ended up working with Aleksandr Kogan only after first being rebuffed by the Cambridge Psychometrics Center. And as we're going to see in the second article below, Kogan was actually working on the myPersonality app until 2014 (when he went to work for Cambridge Analytica). So the one app of the 200 recently suspended apps that we get to know about at this point is an app Kogan helped develop. And the other 199 apps remain a mystery for now:

    The Wash­ing­ton Post

    Face­book sus­pends 200 apps fol­low­ing Cam­bridge Ana­lyt­i­ca scan­dal

    by Drew Har­well and Tony Romm
    May 14, 2018

    Face­book said Mon­day morn­ing that it had sus­pend­ed rough­ly 200 apps amid an ongo­ing inves­ti­ga­tion prompt­ed by the Cam­bridge Ana­lyt­i­ca scan­dal into whether ser­vices on the site had improp­er­ly used or col­lect­ed users’ per­son­al data.

    The com­pa­ny said in an update, its first since the social net­work announced the inter­nal audit in March, that the apps would under­go a “thor­ough inves­ti­ga­tion” into whether they had mis­used user data.

    Face­book declined to pro­vide more detail on which apps were sus­pend­ed, how many peo­ple had used them or what red flags had led them to sus­pect those apps of mis­use.

    CEO Mark Zucker­berg has said the com­pa­ny will exam­ine tens of thou­sands of apps that could have accessed or col­lect­ed large amounts of users’ per­son­al infor­ma­tion before the site’s more restric­tive data rules for third-par­ty devel­op­ers took effect in 2015.

    The company said teams of internal and external experts will conduct interviews and lead on-site inspections of certain apps during its ongoing audit. Thousands of apps have been investigated so far, the company said, adding that any app that refused to cooperate or failed the audit would be banned from the site.

    The sus­pen­sions sup­port a long-run­ning defense of Alek­san­dr Kogan, the researcher who pro­vid­ed Face­book data to Cam­bridge Ana­lyt­i­ca, that many apps besides his had gath­ered vast amounts of user infor­ma­tion under Face­book’s pre­vi­ous­ly lax data-pri­va­cy rules.

    One of the 200 apps, the per­son­al­i­ty quiz myPer­son­al­i­ty, was sus­pend­ed in ear­ly April and is under inves­ti­ga­tion, Face­book offi­cials said. Researchers at the Uni­ver­si­ty of Cam­bridge had set up the app to col­lect per­son­al infor­ma­tion about Face­book users and inform aca­d­e­m­ic research. But its data may not have been prop­er­ly secured, as first report­ed by New Sci­en­tist, which found login cre­den­tials for the app’s data­base avail­able online.

    “This is clear­ly a breach of the terms that aca­d­e­mics agree to when request­ing a col­lab­o­ra­tion with myPer­son­al­i­ty,” the Uni­ver­si­ty of Cam­bridge said in a state­ment Mon­day. “Once we learned of this, we took imme­di­ate steps to stop access to the account and to stop fur­ther data shar­ing.”

    The researchers added that aca­d­e­mics who used the tool had to ver­i­fy their iden­ti­ties and the nature of their research and agree to terms of ser­vice that pro­hib­it­ed them from shar­ing Face­book data “out­side of their research group.”

    A dif­fer­ent quiz app, devel­oped by Kogan and tapped by Cam­bridge Ana­lyt­i­ca, a polit­i­cal con­sul­tan­cy hired by Pres­i­dent Trump and oth­er Repub­li­cans, was able to pull detailed data on 87 mil­lion peo­ple, includ­ing from the app’s direct users and their friends, who had not overt­ly con­sent­ed to the app’s use.

    The announce­ment comes ahead of a Wednes­day hear­ing on Capi­tol Hill focused on Cam­bridge Ana­lyt­i­ca and data pri­va­cy. Law­mak­ers on the Sen­ate Judi­cia­ry Com­mit­tee said they would ques­tion Christo­pher Wylie, a for­mer employ­ee at the firm who brought its busi­ness prac­tices to light ear­li­er this year, along with oth­er aca­d­e­mics.

    In the Unit­ed States, the Fed­er­al Trade Com­mis­sion is inves­ti­gat­ing whether Facebook’s entan­gle­ment with Cam­bridge Ana­lyt­i­ca vio­lates its 2011 set­tle­ment with the U.S. gov­ern­ment over anoth­er series of pri­va­cy mishaps. Such vio­la­tions could car­ry sky-high fines.

    ...

    Face­book said users will be able to go to this page to see if they had used one of the sus­pect­ed apps once the com­pa­ny reveals which apps are under inves­ti­ga­tion. Com­pa­ny offi­cials would not pro­vide an esti­mat­ed time­line for that dis­clo­sure.

    ———-

    “Face­book sus­pends 200 apps fol­low­ing Cam­bridge Ana­lyt­i­ca scan­dal” by Drew Har­well and Tony Romm; The Wash­ing­ton Post; 05/14/2018

    “Face­book declined to pro­vide more detail on which apps were sus­pend­ed, how many peo­ple had used them or what red flags had led them to sus­pect those apps of mis­use.”

    Did you hap­pen to use one of the 200 sus­pend­ed apps? Who knows, although Face­book says it will noti­fy peo­ple of the names of sus­pend­ed apps even­tu­al­ly. No time­line for that dis­clo­sure is giv­en:

    ...
    Face­book said users will be able to go to this page to see if they had used one of the sus­pect­ed apps once the com­pa­ny reveals which apps are under inves­ti­ga­tion. Com­pa­ny offi­cials would not pro­vide an esti­mat­ed time­line for that dis­clo­sure.
    ...

    And, again, this is exact­ly what Kogan warned us about:

    ...
    The sus­pen­sions sup­port a long-run­ning defense of Alek­san­dr Kogan, the researcher who pro­vid­ed Face­book data to Cam­bridge Ana­lyt­i­ca, that many apps besides his had gath­ered vast amounts of user infor­ma­tion under Face­book’s pre­vi­ous­ly lax data-pri­va­cy rules.
    ...

    And note how Facebook is specifically saying it's reviewing "tens of thousands of apps that could have accessed or collected large amounts of users' personal information before the site's more restrictive data rules for third-party developers took effect in 2015". In other words, Facebook isn't reviewing all of its apps. Only those that existed before the policy change that stopped apps from exploiting the "friends permission" feature that let app developers scrape the information of Facebook users and their friends. So it sounds like this review process isn't looking for data privacy abuses under the current set of rules. Just abuses under the old set of rules:

    ...
    CEO Mark Zucker­berg has said the com­pa­ny will exam­ine tens of thou­sands of apps that could have accessed or col­lect­ed large amounts of users’ per­son­al infor­ma­tion before the site’s more restric­tive data rules for third-par­ty devel­op­ers took effect in 2015.

    The company said teams of internal and external experts will conduct interviews and lead on-site inspections of certain apps during its ongoing audit. Thousands of apps have been investigated so far, the company said, adding that any app that refused to cooperate or failed the audit would be banned from the site.
    ...

    And that appar­ent focus on abus­es from the old “friends per­mis­sion” rules sug­gests that cur­rent data use prob­lems might go unde­tect­ed. And the one app we’ve learned about, the myPer­son­al­i­ty app, is a per­fect exam­ple of the kind of app that would have been vio­lat­ing Face­book’s cur­rent data pri­va­cy rules. Because as peo­ple recent­ly learned, the Face­book data gath­ered by the app was avail­able online for the pur­pose of shar­ing with oth­er researchers, but it was so poor­ly secured that any­one could have poten­tial­ly accessed it:

    ...
    One of the 200 apps, the per­son­al­i­ty quiz myPer­son­al­i­ty, was sus­pend­ed in ear­ly April and is under inves­ti­ga­tion, Face­book offi­cials said. Researchers at the Uni­ver­si­ty of Cam­bridge had set up the app to col­lect per­son­al infor­ma­tion about Face­book users and inform aca­d­e­m­ic research. But its data may not have been prop­er­ly secured, as first report­ed by New Sci­en­tist, which found login cre­den­tials for the app’s data­base avail­able online.

    “This is clear­ly a breach of the terms that aca­d­e­mics agree to when request­ing a col­lab­o­ra­tion with myPer­son­al­i­ty,” the Uni­ver­si­ty of Cam­bridge said in a state­ment Mon­day. “Once we learned of this, we took imme­di­ate steps to stop access to the account and to stop fur­ther data shar­ing.”

    The researchers added that aca­d­e­mics who used the tool had to ver­i­fy their iden­ti­ties and the nature of their research and agree to terms of ser­vice that pro­hib­it­ed them from shar­ing Face­book data “out­side of their research group.”
    ...

    But it gets worse. Because as the fol­low­ing New Sci­en­tist arti­cle that revealed the myPer­son­al­i­ty apps pri­va­cy issues points out, the data on some 6 mil­lion Face­book users was anonymized, but it was such a shod­dy anonymiza­tion scheme that some­one could have eas­i­ly deanonymized the data in an auto­mat­ed fash­ion. And access to this data­base was poten­tial­ly avail­able to any­one for the past four years. So almost any­one could have grabbed this anonymized data on 6 mil­lion Face­book users and deanonymized it with rel­a­tive ease.

    And putting aside the possible unofficial access of this data, the people and institutions that got official access are also concerning: More than 280 people from nearly 150 institutions accessed this database, including researchers at universities and at companies like Facebook, Google, Microsoft and Yahoo. Yep, researchers at Facebook were apparently accessing this database of poorly anonymized data.

    So it should come as no surprise that, just as Aleksandr Kogan defended himself by asserting that lots of other apps did the same thing as his Cambridge Analytica app and that Facebook was well aware of how his app was being used, we're getting the exact same defense from the team behind myPersonality:

    New Sci­en­tist

    Huge new Face­book data leak exposed inti­mate details of 3m users

    By Phee Water­field and Tim­o­thy Rev­ell
    14 May 2018, updat­ed 15 May 2018

    Data from mil­lions of Face­book users who used a pop­u­lar per­son­al­i­ty app, includ­ing their answers to inti­mate ques­tion­naires, was left exposed online for any­one to access, a New Sci­en­tist inves­ti­ga­tion has found.

    Aca­d­e­mics at the Uni­ver­si­ty of Cam­bridge dis­trib­uted the data from the per­son­al­i­ty quiz app myPer­son­al­i­ty to hun­dreds of researchers via a web­site with insuf­fi­cient secu­ri­ty pro­vi­sions, which led to it being left vul­ner­a­ble to access for four years. Gain­ing access illic­it­ly was rel­a­tive­ly easy.

    The data was high­ly sen­si­tive, reveal­ing per­son­al details of Face­book users, such as the results of psy­cho­log­i­cal tests. It was meant to be stored and shared anony­mous­ly, how­ev­er such poor pre­cau­tions were tak­en that deanonymis­ing would not be hard.

    “This type of data is very pow­er­ful and there is real poten­tial for mis­use,” says Chris Sum­n­er at the Online Pri­va­cy Foun­da­tion. The UK’s data watch­dog, the Infor­ma­tion Commissioner’s Office, has told New Sci­en­tist that it is inves­ti­gat­ing.

    The data sets were con­trolled by David Still­well and Michal Kosin­s­ki at the Uni­ver­si­ty of Cambridge’s The Psy­cho­met­rics Cen­tre. Alexan­dr Kogan, at the cen­tre of the Cam­bridge Ana­lyt­i­ca alle­ga­tions, was list­ed as a col­lab­o­ra­tor on the myPer­son­al­i­ty project until the sum­mer of 2014.

    Face­book sus­pend­ed myPer­son­al­i­ty from its plat­form on 7 April say­ing the app may have vio­lat­ed its poli­cies due to the lan­guage used in the app and on its web­site to describe how data is shared.

    More than 6 mil­lion peo­ple com­plet­ed the tests on the myPer­son­al­i­ty app and near­ly half agreed to share data from their Face­book pro­files with the project. All of this data was then scooped up and the names removed before it was put on a web­site to share with oth­er researchers. The terms allow the myPer­son­al­i­ty team to use and dis­trib­ute the data “in an anony­mous man­ner such that the infor­ma­tion can­not be traced back to the indi­vid­ual user”.

    To get access to the full data set peo­ple had to reg­is­ter as a col­lab­o­ra­tor to the project. More than 280 peo­ple from near­ly 150 insti­tu­tions did this, includ­ing researchers at uni­ver­si­ties and at com­pa­nies like Face­book, Google, Microsoft and Yahoo.

    Easy back­door

    How­ev­er, for those who were not enti­tled to access the data set because they didn’t have a per­ma­nent aca­d­e­m­ic con­tract, for exam­ple, there was an easy workaround. For the last four years, a work­ing user­name and pass­word has been avail­able online that could be found from a sin­gle web search. Any­one who want­ed access to the data set could have found the key to down­load it in less than a minute.

    The pub­licly avail­able user­name and pass­word were sit­ting on the code-shar­ing web­site GitHub. They had been passed from a uni­ver­si­ty lec­tur­er to some stu­dents for a course project on cre­at­ing a tool for pro­cess­ing Face­book data. Upload­ing code to GitHub is very com­mon in com­put­er sci­ence as it allows oth­ers to reuse parts of your work, but the stu­dents includ­ed the work­ing login cre­den­tials too.

    myPer­son­al­i­ty wasn’t mere­ly an aca­d­e­m­ic project; researchers from com­mer­cial com­pa­nies were also enti­tled to access the data so long as they agreed to abide by strict data pro­tec­tion pro­ce­dures and didn’t direct­ly earn mon­ey from it.

    Still­well and Kosin­s­ki were both part of a spin-out com­pa­ny called Cam­bridge Per­son­al­i­ty Research, which sold access to a tool for tar­get­ing adverts based on per­son­al­i­ty types, built on the back of the myPer­son­al­i­ty data sets. The firm’s web­site described it as the tool that “mind-reads audi­ences”.

    Face­book start­ed inves­ti­gat­ing myPer­son­al­i­ty as part of a wider inves­ti­ga­tion into apps using the plat­form. This was start­ed by the alle­ga­tions sur­round­ing how Cam­bridge Ana­lyt­i­ca accessed data from an app called This Is Your Dig­i­tal Life devel­oped by Kogan.

    Today it announced it has suspended around 200 apps as part of its investigation into apps that had access to large amounts of information on users.

    Cam­bridge Ana­lyt­i­ca had approached the myPer­son­al­i­ty app team in 2013 to get access to the data, but was turned down because of its polit­i­cal ambi­tions, accord­ing to Still­well.

    “We are cur­rent­ly inves­ti­gat­ing the app, and if myPer­son­al­i­ty refus­es to coop­er­ate or fails our audit, we will ban it,” says Ime Archi­bong, Facebook’s vice pres­i­dent of Prod­uct Part­ner­ships.

    The myPer­son­al­i­ty app web­site has now been tak­en down, the pub­licly avail­able cre­den­tials no longer work, and Stillwell’s web­site and Twit­ter account have gone offline.

    “We are aware of an inci­dent relat­ed to the My Per­son­al­i­ty app and are mak­ing enquiries,” a spokesper­son for the Infor­ma­tion Commissioner’s Office told New Sci­en­tist.

    Per­son­al infor­ma­tion exposed

    The cre­den­tials gave access to the “Big Five” per­son­al­i­ty scores of 3.1 mil­lion users. These scores are used in psy­chol­o­gy to assess people’s char­ac­ter­is­tics, such as con­sci­en­tious­ness, agree­able­ness and neu­roti­cism. The cre­den­tials also allowed access to 22 mil­lion sta­tus updates from over 150,000 users, along­side details such as age, gen­der and rela­tion­ship sta­tus from 4.3 mil­lion peo­ple.

    “If at any time a user­name and pass­word for any files that were sup­posed to be restrict­ed were made pub­lic, it would be a con­se­quen­tial and seri­ous issue,” says Pam Dixon at the World Pri­va­cy Forum. “Not only is it a bad secu­ri­ty prac­tice, it is a pro­found eth­i­cal vio­la­tion to allow strangers to access files.”

    Beyond the pass­word leak and dis­trib­ut­ing the data to hun­dreds of researchers, there are seri­ous con­cerns with the way the anonymi­sa­tion process was per­formed.

    Each user in the data set was giv­en a unique ID, which tied togeth­er data such as their age, gen­der, loca­tion, sta­tus updates, results on the per­son­al­i­ty quiz and more. With that much infor­ma­tion, de-anonymis­ing the data can be done very eas­i­ly. “You could re-iden­ti­fy some­one online from a sta­tus update, gen­der and date,” says Dixon.

    This process could be auto­mat­ed, quick­ly reveal­ing the iden­ti­ties of the mil­lions of peo­ple in the data sets, and tying them to the results of inti­mate per­son­al­i­ty tests.

    “Any data set that has enough attrib­ut­es is extreme­ly hard to anonymise,” says Yves-Alexan­dre de Mon­tjoye at Impe­r­i­al Col­lege Lon­don. So instead of dis­trib­ut­ing actu­al data sets, the best approach is to pro­vide a way for researchers to run tests on the data. That way they get aggre­gat­ed results and nev­er access to indi­vid­u­als. “The use of the data can’t be at the expense of people’s pri­va­cy,” he says.

    The Uni­ver­si­ty of Cam­bridge says it was alert­ed to the issues sur­round­ing myPer­son­al­i­ty by the Infor­ma­tion Commissioner’s Office. It says that, as the app was cre­at­ed by Still­well before he joined the uni­ver­si­ty, “it did not go through our eth­i­cal approval process­es”. It also says “the Uni­ver­si­ty of Cam­bridge does not own or con­trol the app or data”.

    ...

    When approached, Still­well says that through­out the nine years of the project there has only been one data breach, and that researchers giv­en access to the data set must agree not to de-anonymise the data. “We believe that aca­d­e­m­ic research ben­e­fits from prop­er­ly con­trolled shar­ing of anonymised data among the research com­mu­ni­ty,” he told New Sci­en­tist.

    He also says that Face­book has long been aware of the myPer­son­al­i­ty project, hold­ing meet­ings with him­self and Kosin­s­ki going back as far as 2011. “It is there­fore a lit­tle odd that Face­book should sud­den­ly now pro­fess itself to have been unaware of the myPer­son­al­i­ty research and to believe that the use of the data was a breach of its terms,” he says.

    The inves­ti­ga­tions by Face­book and the Infor­ma­tion Commissioner’s Office should try to deter­mine who accessed the myPer­son­al­i­ty data and what it was used for. How­ev­er, as it was shared with so many dif­fer­ent peo­ple, track­ing every­one who has a copy and what they did with it will prove very dif­fi­cult. We will nev­er know exact­ly who did what with this data set. “This is the tip of the ice­berg,” says Dixon. “Who else has this data?”

    ———–

    “Huge new Face­book data leak exposed inti­mate details of 3m users” by Phee Water­field and Tim­o­thy Rev­ell; New Sci­en­tist; 05/14/2018

    “Aca­d­e­mics at the Uni­ver­si­ty of Cam­bridge dis­trib­uted the data from the per­son­al­i­ty quiz app myPer­son­al­i­ty to hun­dreds of researchers via a web­site with insuf­fi­cient secu­ri­ty pro­vi­sions, which led to it being left vul­ner­a­ble to access for four years. Gain­ing access illic­it­ly was rel­a­tive­ly easy.”

    Yep, an online data­base of high­ly sen­si­tive Face­book + psy­cho­log­i­cal pro­file data was made acces­si­ble to hun­dreds of researchers. But it was also poten­tial­ly acces­si­ble to any­one due to poor secu­ri­ty. For four years.

    And those that were giv­en offi­cial access to the data includ­ed com­pa­nies like Microsoft, Google, Yahoo, and Face­book:

    ...
    To get access to the full data set peo­ple had to reg­is­ter as a col­lab­o­ra­tor to the project. More than 280 peo­ple from near­ly 150 insti­tu­tions did this, includ­ing researchers at uni­ver­si­ties and at com­pa­nies like Face­book, Google, Microsoft and Yahoo.
    ...

    While the Face­book researchers could plau­si­bly claim that they had no idea the serv­er host­ing this data had insuf­fi­cient secu­ri­ty, it would be a lot hard­er for them to claim they had no idea the anonymiza­tion scheme was high­ly inad­e­quate:

    ...
    The data was high­ly sen­si­tive, reveal­ing per­son­al details of Face­book users, such as the results of psy­cho­log­i­cal tests. It was meant to be stored and shared anony­mous­ly, how­ev­er such poor pre­cau­tions were tak­en that deanonymis­ing would not be hard.

    “This type of data is very pow­er­ful and there is real poten­tial for mis­use,” says Chris Sum­n­er at the Online Pri­va­cy Foun­da­tion. The UK’s data watch­dog, the Infor­ma­tion Commissioner’s Office, has told New Sci­en­tist that it is inves­ti­gat­ing.
    ...

    And the only thing the myPer­son­al­i­ty team appeared to do to anonymize the data was replace names with a num­ber. THAT’S IT! And when that’s the only anonymiza­tion step employed in a data set with large amounts of data on each indi­vid­ual, includ­ing sta­tus updates, it’s going to be triv­ial to auto­mate the deanonymiza­tion of these peo­ple, espe­cial­ly for com­pa­nies like Google, Yahoo, Microsoft and Face­book:

    ...
    Per­son­al infor­ma­tion exposed

    The cre­den­tials gave access to the “Big Five” per­son­al­i­ty scores of 3.1 mil­lion users. These scores are used in psy­chol­o­gy to assess people’s char­ac­ter­is­tics, such as con­sci­en­tious­ness, agree­able­ness and neu­roti­cism. The cre­den­tials also allowed access to 22 mil­lion sta­tus updates from over 150,000 users, along­side details such as age, gen­der and rela­tion­ship sta­tus from 4.3 mil­lion peo­ple.

    ...

    Each user in the data set was giv­en a unique ID, which tied togeth­er data such as their age, gen­der, loca­tion, sta­tus updates, results on the per­son­al­i­ty quiz and more. With that much infor­ma­tion, de-anonymis­ing the data can be done very eas­i­ly. “You could re-iden­ti­fy some­one online from a sta­tus update, gen­der and date,” says Dixon.

    This process could be auto­mat­ed, quick­ly reveal­ing the iden­ti­ties of the mil­lions of peo­ple in the data sets, and tying them to the results of inti­mate per­son­al­i­ty tests.

    “Any data set that has enough attrib­ut­es is extreme­ly hard to anonymise,” says Yves-Alexan­dre de Mon­tjoye at Impe­r­i­al Col­lege Lon­don. So instead of dis­trib­ut­ing actu­al data sets, the best approach is to pro­vide a way for researchers to run tests on the data. That way they get aggre­gat­ed results and nev­er access to indi­vid­u­als. “The use of the data can’t be at the expense of people’s pri­va­cy,” he says.
    ...
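
    To make Dixon's point concrete, here is a minimal sketch of the kind of automated linkage attack being described, assuming a hypothetical "anonymized" export and a small auxiliary set of public posts. All of the names, fields, and rows below are invented for illustration; the point is simply that a combination of quasi-identifiers like a status update, gender, and date is usually unique to one person, so joining on those fields attaches a real name to the numeric ID:

    # A minimal sketch (not the actual myPersonality schema) of automated
    # re-identification by joining on quasi-identifiers. All data is invented.

    anonymized = [
        # (user_id, gender, status_update, post_date) from the "anonymized" set
        (1041, "F", "Finally finished the marathon!", "2014-05-02"),
        (2093, "M", "New job starts Monday", "2014-05-02"),
    ]

    public_posts = [
        # (real_name, gender, status_update, post_date) from public profiles
        ("Jane Roe", "F", "Finally finished the marathon!", "2014-05-02"),
    ]

    def reidentify(anon_rows, public_rows):
        """Attach real names to 'anonymous' IDs by matching shared attributes."""
        matches = {}
        for uid, gender, status, date in anon_rows:
            for name, p_gender, p_status, p_date in public_rows:
                if (gender, status, date) == (p_gender, p_status, p_date):
                    matches[uid] = name  # the numeric ID now has a name attached
        return matches

    print(reidentify(anonymized, public_posts))  # -> {1041: 'Jane Roe'}

    And once one row is matched, every other attribute tied to that user ID (the personality scores, the location data, the rest of the status updates) is matched too, which is why swapping names for numbers accomplishes so little on its own.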

    Not sur­pris­ing­ly, two of the aca­d­e­mics in charge of this project were part of a spin-off com­pa­ny that sold tools for tar­get­ing ads based on per­son­al­i­ty types. So it was­n’t just com­mer­cial com­pa­nies like Google and Yahoo who got access to this data. The whole enter­prise appeared to be com­mer­cial in nature:

    ...
    myPer­son­al­i­ty wasn’t mere­ly an aca­d­e­m­ic project; researchers from com­mer­cial com­pa­nies were also enti­tled to access the data so long as they agreed to abide by strict data pro­tec­tion pro­ce­dures and didn’t direct­ly earn mon­ey from it.

    Still­well and Kosin­s­ki were both part of a spin-out com­pa­ny called Cam­bridge Per­son­al­i­ty Research, which sold access to a tool for tar­get­ing adverts based on per­son­al­i­ty types, built on the back of the myPer­son­al­i­ty data sets. The firm’s web­site described it as the tool that “mind-reads audi­ences”.
    ...

    And, of course, Alek­san­dr Kogan was part of this project before he went to work for Cam­bridge Ana­lyt­i­ca:

    ...
    The data sets were con­trolled by David Still­well and Michal Kosin­s­ki at the Uni­ver­si­ty of Cambridge’s The Psy­cho­met­rics Cen­tre. Alexan­dr Kogan, at the cen­tre of the Cam­bridge Ana­lyt­i­ca alle­ga­tions, was list­ed as a col­lab­o­ra­tor on the myPer­son­al­i­ty project until the sum­mer of 2014.
    ...

    And note how Facebook only suspended this app on April 7th of this year, four years after Facebook ended its notorious "friends permission" feature that's received most of the attention from the Cambridge Analytica scandal. It's a big reminder that data privacy abuses via Facebook apps aren't limited to that "friends permissions" feature. It's an ongoing problem, which is why it's troubling to hear that Facebook was only looking into the tens of thousands of apps that may have abused the pre-2015 data use policies:

    ...
    Face­book sus­pend­ed myPer­son­al­i­ty from its plat­form on 7 April say­ing the app may have vio­lat­ed its poli­cies due to the lan­guage used in the app and on its web­site to describe how data is shared.

    More than 6 mil­lion peo­ple com­plet­ed the tests on the myPer­son­al­i­ty app and near­ly half agreed to share data from their Face­book pro­files with the project. All of this data was then scooped up and the names removed before it was put on a web­site to share with oth­er researchers. The terms allow the myPer­son­al­i­ty team to use and dis­trib­ute the data “in an anony­mous man­ner such that the infor­ma­tion can­not be traced back to the indi­vid­ual user”.
    ...

    But beyond the troubling half-assed anonymization scheme, there's the issue of all this data being inadvertently made available to the world after the user credentials for the database were uploaded along with some code to GitHub, an online code repository:

    ...
    Easy back­door

    How­ev­er, for those who were not enti­tled to access the data set because they didn’t have a per­ma­nent aca­d­e­m­ic con­tract, for exam­ple, there was an easy workaround. For the last four years, a work­ing user­name and pass­word has been avail­able online that could be found from a sin­gle web search. Any­one who want­ed access to the data set could have found the key to down­load it in less than a minute.

    The pub­licly avail­able user­name and pass­word were sit­ting on the code-shar­ing web­site GitHub. They had been passed from a uni­ver­si­ty lec­tur­er to some stu­dents for a course project on cre­at­ing a tool for pro­cess­ing Face­book data. Upload­ing code to GitHub is very com­mon in com­put­er sci­ence as it allows oth­ers to reuse parts of your work, but the stu­dents includ­ed the work­ing login cre­den­tials too.
    ...
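
    As a side note on the mechanics of that leak: this is about as mundane as security failures get. A credential hardcoded in a script gets committed and pushed along with everything else, and from then on it is indexed and searchable. Here is a minimal sketch of the pattern (the names and values are hypothetical), next to the standard alternative of keeping the secret out of the code entirely:

    import os

    # The leak pattern: a working credential hardcoded in code that gets
    # pushed to a public repository (names and values here are hypothetical).
    DB_USER = "mypersonality_collab"
    DB_PASS = "correct-horse-battery"  # now findable with a single web search

    # The standard alternative: keep secrets out of source control and inject
    # them at runtime via the environment (or a secrets manager).
    db_user = os.environ.get("DB_USER")  # set outside the repo, never committed
    db_pass = os.environ.get("DB_PASS")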

    It's important to keep in mind that the accidental release of those credentials by some students is probably the most understandable aspect of this data privacy nightmare. It's the equivalent of writing a bug in code: a common careless accident. Everything else associated with this data privacy nightmare is far less understandable, because it wasn't a mistake. It was by design.

    And as we should expect at this point, the designers of the myPersonality app are expressing dismay at Facebook's dismay. After all, Facebook has long been aware of the project and even held meetings with the team as far back as 2011:

    ...
    When approached, Still­well says that through­out the nine years of the project there has only been one data breach, and that researchers giv­en access to the data set must agree not to de-anonymise the data. “We believe that aca­d­e­m­ic research ben­e­fits from prop­er­ly con­trolled shar­ing of anonymised data among the research com­mu­ni­ty,” he told New Sci­en­tist.

    He also says that Face­book has long been aware of the myPer­son­al­i­ty project, hold­ing meet­ings with him­self and Kosin­s­ki going back as far as 2011. “It is there­fore a lit­tle odd that Face­book should sud­den­ly now pro­fess itself to have been unaware of the myPer­son­al­i­ty research and to believe that the use of the data was a breach of its terms,” he says.
    ...

    And don’t for­get, Face­book researchers were among the users of this data. So Face­book was obvi­ous­ly pret­ty famil­iar with the app.

    And in the end, we’ll like­ly nev­er know who accessed the data and what they did with it. It’s just the tip of the ice­berg:

    ...
    The inves­ti­ga­tions by Face­book and the Infor­ma­tion Commissioner’s Office should try to deter­mine who accessed the myPer­son­al­i­ty data and what it was used for. How­ev­er, as it was shared with so many dif­fer­ent peo­ple, track­ing every­one who has a copy and what they did with it will prove very dif­fi­cult. We will nev­er know exact­ly who did what with this data set. “This is the tip of the ice­berg,” says Dixon. “Who else has this data?”

    And note one of the other chilling implications of this story: Recall how the ~270,000 users of the Cambridge Analytica app resulted in Cambridge Analytica harvesting data on ~87 million people using the “friends permissions” option. Well, if the myPersonality app has been operating for 9 years, that means it also had access to the “friends permissions” option, and for much longer than the Cambridge Analytica app. And 6 million people apparently downloaded this app! So how many of those 6 million people were using it in the pre-2015 period when the “friends permission” option was still available, and how many friends of those 6 million people had their profiles harvested too?
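
    To put rough numbers on that question, here’s a quick back-of-the-envelope sketch using only figures already in these reports; the pre-2015 fraction is a pure assumption for illustration, and friend lists overlap heavily, so this measures exposure, not unique profiles:

        # Figures from the reporting: ~270,000 installs of the Cambridge
        # Analytica app yielded ~87 million profiles via "friends permissions".
        ca_installers = 270_000
        ca_profiles   = 87_000_000
        profiles_per_installer = ca_profiles / ca_installers  # ~322 each

        # myPersonality had ~6 million users over nine years. How many used it
        # while "friends permissions" was still live (pre-2015) is unknown;
        # this fraction is an assumption, not a reported number.
        mp_users = 6_000_000
        assumed_pre_2015_fraction = 0.5

        # Heavy overlap among friends means far fewer unique profiles, but the
        # scale of the exposure is what matters here.
        implied_reach = mp_users * assumed_pre_2015_fraction * profiles_per_installer
        print(f"{profiles_per_installer:.0f} profiles per installer")
        print(f"implied exposure: ~{implied_reach / 1e9:.1f} billion profile-grabs")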

    So it’s entirely possible the people at myPersonality grabbed information on far more than the 6 million people who used their app, and we have no idea what they did with the data. What we know now is just the tip of the iceberg of this story.

    And this sto­ry of myPer­son­al­i­ty is just cov­er­ing one of the 200 apps that Face­book just sus­pend­ed. In oth­er words, this ice­berg of a sto­ry is just the tip of a much, much larg­er ice­berg.

    Posted by Pterrafractyl | May 17, 2018, 10:54 pm
  14. Here’s a story about an explosive new lawsuit against Facebook that could end up being a major headache for the company, and Mark Zuckerberg in particular: The lawsuit is being brought by Six4Three, a former app developer startup. Six4Three claims that, in 2012, Facebook was facing a large crisis in its advertising business model due to the rapid adoption of smartphones and the fact that Facebook’s ads were primarily focused on desktops. Facing a large drop in revenue, Facebook allegedly forced developers to buy expensive ads on the new, underused Facebook mobile service or risk having their access to the data at the core of their business cut off.

    The way Six4Three describes it, Face­book first got devel­op­ers to build their busi­ness mod­els around access to that data, and then engaged in what amounts to a shake­down of those devel­op­ers, threat­en­ing to take that access away unless expen­sive mobile ads were pur­chased.

    But beyond that, Six4Three alleges that Facebook incentivized developers to create apps for its system by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. Don’t forget that the Facebook friends data (accessed via the “friends permission” feature) is the information at the heart of the Cambridge Analytica scandal.

    So Facebook was apparently offering long-term access to “friends permission” data back in 2012 as a means of incentivizing developers to create apps at the same time it was threatening to cut off developer access to this data unless they purchased expensive mobile ads. And then, of course, that “friends permission” feature was wound down in 2015, which was undoubtedly a good thing for the privacy of Facebook users, but as we can see the developers weren’t so happy about it, in part because they were apparently told by Facebook to expect long-term access to that data. Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook.

    It’s worth noting that Six4Three developed an app called Pikinis that searched through the photos of your friends for pictures of them in swimwear. So losing access to friends data more or less broke Six4Three’s app.

    Beyond that, Six4Three also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. This is also noteworthy with respect to the Cambridge Analytica scandal, since Aleksandr Kogan’s psychological profiling app appeared to retain access to the “friends permission” feature longer than other apps. In other words, the Cambridge Analytica app did actually appear to get preferential treatment from Facebook.

    But Six4Three’s allegations go further, and suggest that Facebook’s executives would observe which apps were the most successful and then plot to either extract money from them, co-opt them or destroy them, using the threat of cutting off access to user data as leverage.

    So, basically, Facebook is getting sued by this app developer for acting like the mafia and turning access to all that user data into its key enforcement tool:

    The Guardian

    Zucker­berg set up fraud­u­lent scheme to ‘weaponise’ data, court case alleges

    Face­book CEO exploit­ed abil­i­ty to access data from any user’s friend net­work, US case claims

    Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son

    Thu 24 May 2018 08.01 EDT

    Mark Zucker­berg faces alle­ga­tions that he devel­oped a “mali­cious and fraud­u­lent scheme” to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness.

    A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive “weaponised” the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.

    A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.

    Face­book rejects all claims, and has made a motion to have the case dis­missed using a free speech defence.

    It claims the first amend­ment pro­tects its right to make “edi­to­r­i­al deci­sions” as it sees fit. Zucker­berg and oth­er senior exec­u­tives have assert­ed that Face­book is a plat­form not a pub­lish­er, most recent­ly in tes­ti­mo­ny to Con­gress.

    Heather Whit­ney, a legal schol­ar who has writ­ten about social media com­pa­nies for the Knight First Amend­ment Insti­tute at Colum­bia Uni­ver­si­ty, said, in her opin­ion, this exposed a poten­tial ten­sion for Face­book.

    “Facebook’s claims in court that it is an edi­tor for first amend­ment pur­pos­es and thus free to cen­sor and alter the con­tent avail­able on its site is in ten­sion with their, espe­cial­ly recent, claims before the pub­lic and US Con­gress to be neu­tral plat­forms.”

    The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

    The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

    It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours.

    Face­book said the claims had no mer­it and the com­pa­ny would “con­tin­ue to defend our­selves vig­or­ous­ly”.

    Six4Three lodged its orig­i­nal case in 2015 short­ly after Face­book removed devel­op­ers’ access to friends’ data. The com­pa­ny said it had invest­ed $250,000 in devel­op­ing an app called Piki­nis that fil­tered users’ friends pho­tos to find any of them in swimwear. Its launch was met with con­tro­ver­sy.

    The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

    The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.

    “The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

    The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.

    Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

    “Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.

    In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends.

    Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

    Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

    The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

    It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments.

    A Face­book spokesman said: “When we changed our pol­i­cy in 2015, we gave all third-par­ty devel­op­ers ample notice of mate­r­i­al plat­form changes that could have impact­ed their appli­ca­tions.”

    Facebook’s sub­mis­sion to the court, an “anti-Slapp motion” under Cal­i­forn­ian leg­is­la­tion designed to pro­tect free­dom of speech, said: “Six4Three is tak­ing its fifth shot at an ever expand­ing set of claims and all of its claims turn on one deci­sion, which is absolute­ly pro­tect­ed: Facebook’s edi­to­r­i­al deci­sion to stop pub­lish­ing cer­tain user-gen­er­at­ed con­tent via its Plat­form to third-par­ty app devel­op­ers.”

    David God­kin, Six4Three’s lead coun­sel said: “We believe the pub­lic has a right to see the evi­dence and are con­fi­dent the evi­dence clear­ly demon­strates the truth of our alle­ga­tions, and much more.”

    Sandy Parak­i­las, a for­mer Face­book employ­ee turned whistle­blow­er who has tes­ti­fied to the UK par­lia­ment about its busi­ness prac­tices, said the alle­ga­tions were a “bomb­shell”. He claimed to MPs Facebook’s senior exec­u­tives were aware of abus­es of friends’ data back in 2011-12 and he was warned not to look into the issue.

    “They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.”

    ...

    ———-

    “Zucker­berg set up fraud­u­lent scheme to ‘weaponise’ data, court case alleges” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 05/24/2018

    “A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.”

    It was Mark Zucker­berg who per­son­al­ly led this shake­down oper­a­tion, accord­ing to the law­suit. So what’s the evi­dence? Well, that appears to be in the form of thou­sands of cur­rent­ly redact­ed inter­nal emails. It’s unclear how those emails were obtained:

    ...
    The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

    The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

    It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours.
    ...

    Note this isn’t a new lawsuit by Six4Three. They first filed a case in 2015, shortly after Facebook removed developers’ access to the “friends permission” data feature, through which app developers could grab extensive information from ALL the Facebook friends of the users who downloaded their apps. And when you look at how the Six4Three app works, it’s pretty clear why they would have been very upset about losing access to the friends data: their “Pikinis” app is based on scanning your friends’ pictures for shots of them in swimwear:

    ...
    Six4Three lodged its orig­i­nal case in 2015 short­ly after Face­book removed devel­op­ers’ access to friends’ data. The com­pa­ny said it had invest­ed $250,000 in devel­op­ing an app called Piki­nis that fil­tered users’ friends pho­tos to find any of them in swimwear. Its launch was met with con­tro­ver­sy.
    ...

    And it’s a rather fascinating lawsuit by Six4Three because it’s basically complaining that Facebook suddenly threatened to remove access to this personal data after previously implying that developers would have long-term access to it, and that Facebook used that power to extort developers. And in order to make that case, Six4Three also asserts that Facebook was well aware of the privacy implications of its data sharing policies, because access to that data was both the carrot and the stick for developers. So this case, if proven, would utterly destroy Facebook’s portrayal of itself as a victim of Cambridge Analytica’s misuse of its data:

    ...
    The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

    The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.
    ...

    And the initial motive for all this was Facebook’s realization in 2012 that it had failed to anticipate the speed of consumer adoption of smartphones, which effectively damaged its lucrative advertising business, then focused on desktop ads:

    ...
    “The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

    The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.
    ...

    So Facebook responded to this sudden threat to its core business in multiple scandalous ways, according to the lawsuit. First, Facebook began forcing app developers to buy expensive mobile ads on its new, underused mobile service, or risk having their access to the data at the core of their business cut off. It’s an example of how important selling access to that user data to third parties was to Facebook’s business model:

    ...
    Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

    “Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.
    ...

    But beyond that, Six4Three alleges that Facebook was simultaneously trying to entice developers to make apps for its systems by implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends. So the “friends permission” feature for developers that Facebook was phasing out in 2014–2015 was apparently being peddled to developers as a long-term feature back in 2012:

    ...
    In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends.
    ...

    And, accord­ing to Six4Three, once a busi­ness became hooked on Face­book’s user data, Face­book would then look for par­tic­u­lar­ly lucra­tive apps and try to find ways to extract more mon­ey out of them. And that would appar­ent­ly include threat­en­ing to cut off access to that user data to either force com­pa­nies out of busi­ness or coerce app own­ers into sell­ing at below mar­ket prices. Up to 40,000 com­pa­nies were poten­tial­ly defraud­ed in this way and it was Face­book’s senior exec­u­tives who per­son­al­ly devised and man­aged the scheme, includ­ing Zucker­berg:

    ...
    Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

    Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

    The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

    It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments.

    A Face­book spokesman said: “When we changed our pol­i­cy in 2015, we gave all third-par­ty devel­op­ers ample notice of mate­r­i­al plat­form changes that could have impact­ed their appli­ca­tions.”
    ...

    Not surprisingly, Sandy Parakilas, the former Facebook employee turned whistleblower who previously revealed that Facebook executives were consciously negligent in how user data was used (or abused), views this lawsuit and the revelations contained in those emails as a “bombshell” that more or less backs up what he’s been saying all along:

    ...
    Sandy Parak­i­las, a for­mer Face­book employ­ee turned whistle­blow­er who has tes­ti­fied to the UK par­lia­ment about its busi­ness prac­tices, said the alle­ga­tions were a “bomb­shell”. He claimed to MPs Facebook’s senior exec­u­tives were aware of abus­es of friends’ data back in 2011-12 and he was warned not to look into the issue.

    “They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.”

    So was Mark Zuckerberg effectively acting like the top mobster in a shakedown scheme involving app developers? A scheme where Facebook selectively threatened to rescind access to its core data in order to extort ad buys from developers, buy apps at below market prices, or straight up drive app developers out of business? We’ll see, but this is going to be a lawsuit to keep an eye on.

    “That’s a nice app you got there...it would be a shame if some­thing hap­pened to your access to user data...”

    Posted by Pterrafractyl | May 24, 2018, 12:09 pm
  15. Here’s a fascinating twist to the already fascinating story of Psy Group, the Israeli-owned private intelligence firm that was apparently pushed on the Trump team during the August 3, 2016, Trump Tower meeting. That’s the newly discovered meeting where Erik Prince and George Nader met with Donald Trump, Jr. and Stephen Miller to inform the Trump team that the crown princes of Saudi Arabia and the UAE were “eager” to help Trump win the election. And Psy Group, an Israeli private intelligence firm that offers many of the same psychological warfare services as Cambridge Analytica, presented a pitch at that meeting for a social media manipulation campaign involving thousands of fake accounts. And this meeting happened a couple weeks before Steve Bannon replaced Paul Manafort and brought Cambridge Analytica into prominence in the Trump team’s electoral machinations.

    So here’s the new twist to this Psy Group/Cambridge Analytica story: now we learn that Psy Group formed a business alliance with Cambridge Analytica after Trump’s victory to try to win U.S. government work. This alliance reportedly took shape after Cambridge Analytica and Psy Group signed a mutual non-disclosure agreement.

    Intriguingly, the agreement was signed on December 14, 2016, according to documents seen by Bloomberg. And December 14, 2016, just happens to be one day before the Crown Prince of the UAE secretly traveled to the US — against diplomatic protocol — and met with the Trump transition team at Trump Tower (including Michael Flynn, Jared Kushner, and Steve Bannon) to help arrange the eventual meeting in the Seychelles between Erik Prince, George Nader, and Kirill Dmitriev.

    So you have to won­der if the sign­ing of that non-dis­clo­sure agree­ment was part of all the schem­ing asso­ci­at­ed with the Sey­chelles. Don’t for­get that the Sey­chelles meet­ing appears to cen­ter around what amounts to a lucra­tive offer to Rus­sia to realign itself away from the gov­ern­ments of Iran and Syr­ia, which implic­it­ly sug­gests plans for ongo­ing regime change oper­a­tions in Syr­ia and a major new regime change oper­a­tion in Iran. And based on what we know about the ser­vices offered by both Psy Group and Cam­bridge Ana­lyt­i­ca — psy­cho­log­i­cal war­fare ser­vices designed to change the atti­tudes of entire nations — the two firms sound like exact­ly the kinds of com­pa­nies that might have been major con­trac­tors for those planned regime change oper­a­tions.

    Grant­ed, there would have been no short­age of poten­tial US gov­ern­ment con­tracts Cam­bridge Ana­lyt­i­ca and Psy Group would have been mutu­al­ly inter­est­ed in pur­su­ing that have noth­ing to do with the Sey­chelles scheme. But the tim­ing sure is inter­est­ing giv­en the heavy over­lap of char­ac­ters involved.

    And while the non-disclosure documents don’t indicate precisely which government contracts the two companies were initially planning on jointly bidding on (which makes sense if they were initially planning on working on something involving a Seychelles/regime-change scheme), there is some information on one of the contracts they did end up jointly bidding on, which happened to focus on psychological warfare services in the Middle East. Specifically, they made a joint proposal to the State Department’s Global Engagement Center for a project focused on disrupting the recruitment and radicalization of ISIS members. It sounds like the proposal focused heavily on creating fake online personas, so it’s basically a different application of the same fake-persona services Psy Group and Cambridge Analytica offer in the political arena.

    And it turns out the State Department’s Global Engagement Center did indeed sign a contract with Cambridge Analytica’s parent company, SCL Group, last year. Additionally, the proposal Psy Group and Cambridge Analytica jointly submitted to the State Department also included SCL. Although it’s unclear if SCL’s contract involved Cambridge Analytica or Psy Group, because it didn’t include provisions for subcontractors, and it was focused on in-person interviews rather than social media. So while we don’t know how successful Cambridge Analytica and Psy Group were in their mutual hunt for government contracts, SCL was successful. And if SCL was getting other contracts, who knows how many of them also involved Cambridge Analytica and/or Psy Group.

    We’re also learning that Psy Group appears to have shut itself down in February of 2018, shortly after George Nader was interviewed by Robert Mueller’s grand jury. But it doesn’t appear to be a real shutdown, and it sounds like Psy Group has quietly reopened under the new name “WhiteKnight”. Let’s not forget that Cambridge Analytica appears to have already done the same thing, shutting down only to quietly reopen as “Emerdata”. So for all we know there’s already a new WhiteKnight/Emerdata non-disclosure agreement in place for the purpose of further joint bidding on government contracts. But as the following story makes clear, one thing we do know for sure at this point is that if Cambridge Analytica and/or Psy Group end up getting government contracts, they’re going to go to great lengths to hide it:

    Bloomberg

    Mueller Asked About Mon­ey Flows to Israeli Social-Media Firm, Source Says

    * PSY Group’s work includ­ed fake per­sonas, firm’s doc­u­ments show
    * Founder is report­ed to have met with Don­ald Trump Jr. in 2016

    By Michael Riley and Lau­ren Etter
    May 22, 2018, 12:35 PM CDT

    Spe­cial Coun­sel Robert Mueller’s team has asked about flows of mon­ey into the Cyprus bank account of a com­pa­ny that spe­cial­ized in social-media manip­u­la­tion and whose founder report­ed­ly met with Don­ald Trump Jr. in August 2016, accord­ing to a per­son famil­iar with the inves­ti­ga­tion.

    The inquiry is draw­ing atten­tion to PSY Group, an Israeli firm that pitched its ser­vices to super-PACs and oth­er enti­ties dur­ing the 2016 elec­tion. Those ser­vices includ­ed infil­trat­ing tar­get audi­ences with elab­o­rate­ly craft­ed social-media per­sonas and spread­ing mis­lead­ing infor­ma­tion through web­sites meant to mim­ic news por­tals, accord­ing to inter­views and PSY Group doc­u­ments seen by Bloomberg News.

    The per­son doesn’t believe any of those pitch­es was suc­cess­ful, and it’s ille­gal for for­eign enti­ties to con­tribute any­thing of val­ue or to play deci­sion-mak­ing roles in U.S. polit­i­cal cam­paigns.

    One of PSY Group’s founders, Joel Zamel, met in August 2016 at Trump Tow­er with Don­ald Trump Jr. and an emis­sary to Sau­di Ara­bia and the Unit­ed Arab Emi­rates to dis­cuss how PSY Group could help Trump win, the New York Times report­ed on Sat­ur­day.

    Marc Mukasey, a lawyer for Zamel, said his client “offered noth­ing to the Trump cam­paign, received noth­ing from the Trump cam­paign, deliv­ered noth­ing to the Trump cam­paign and was not solicit­ed by, or asked to do any­thing for, the Trump cam­paign.” He also said reports that Zamel’s com­pa­nies engage in social-media manip­u­la­tion are mis­guid­ed and that the firms “har­vest pub­licly avail­able infor­ma­tion for law­ful use.”

    Don­ald Trump Jr. recalls a meet­ing at which he was pitched “on a social media plat­form or mar­ket­ing strat­e­gy,” said his attor­ney, Alan Futer­fas, in an emailed state­ment. “He was not inter­est­ed and that was the end of it.”

    Fol­low­ing Trump’s vic­to­ry, PSY Group formed an alliance with Cam­bridge Ana­lyt­i­ca, the Trump campaign’s pri­ma­ry social-media con­sul­tants, to try to win U.S. gov­ern­ment work, accord­ing to doc­u­ments obtained by Bloomberg News.

    FBI agents work­ing with Mueller’s team inter­viewed peo­ple asso­ci­at­ed with PSY Group’s U.S. oper­a­tions in Feb­ru­ary, and Mueller sub­poe­naed bank records for pay­ments made to the firm’s Cyprus bank accounts, accord­ing to a per­son who has seen one of the sub­poe­nas. Though PSY Group is based in Israel, it’s tech­ni­cal­ly head­quar­tered in Cyprus, the small Mediter­ranean island famous for its bank­ing secre­cy.

    Short­ly after those inter­views, on Feb. 25, PSY Group Chief Exec­u­tive Offi­cer Royi Burstien informed employ­ees in Tel Aviv that the com­pa­ny was clos­ing down. Burstien is a for­mer com­man­der of an Israeli psy­cho­log­i­cal war­fare unit, accord­ing to two peo­ple famil­iar with the com­pa­ny. He didn’t respond to requests for com­ment.

    ...

    ‘Poi­son­ing the Well’

    Tac­tics deployed by PSY Group in for­eign elec­tions includ­ed inflam­ing divi­sions in oppo­si­tion groups and play­ing on deep-seat­ed cul­tur­al and eth­nic con­flicts, some­thing the firm called “poi­son­ing the well,” accord­ing to the peo­ple.

    In a con­tract­ing pro­pos­al for the U.S. State Depart­ment that PSY Group pre­pared with Cam­bridge Ana­lyt­i­ca and SCL Group, Cambridge’s U.K. affil­i­ate, the firm said that it “has con­duct­ed messaging/influence oper­a­tions in well over a dozen lan­guages and dialects” and that it employs “an elite group of high-rank­ing for­mer offi­cers from some of the world’s most renowned intel­li­gence units.”

    Although the pro­pos­al says that the com­pa­ny is legal­ly bound not to reveal its clients, it also boasts that “PSY has suc­ceed­ed in plac­ing the results of its intel­li­gence activ­i­ties in top-tier pub­li­ca­tions across the globe in order to advance the inter­ests of its clients.”

    That pro­pos­al was the result of a col­lab­o­ra­tion that gelled after Trump’s vic­to­ry — a mutu­al non-dis­clo­sure agree­ment between Cam­bridge and PSY Group is dat­ed Dec. 14, 2016 — but the doc­u­ments don’t indi­cate how the com­pa­nies ini­tial­ly con­nect­ed or why they decid­ed to work togeth­er.

    Com­pa­nies Shut Down

    Cam­bridge Ana­lyt­i­ca and the elec­tions divi­sion of SCL shut down this month fol­low­ing scruti­ny of the com­pa­nies’ busi­ness prac­tices, includ­ing the release of a secret­ly record­ed inter­view of Cam­bridge CEO Alexan­der Nix say­ing he could entrap politi­cians in com­pro­mis­ing sit­u­a­tions.

    The joint pro­pos­al for the State Department’s Glob­al Engage­ment Cen­ter was for a project to inter­rupt the recruit­ment and rad­i­cal­iza­tion of ISIS mem­bers, and it pro­vides insight into PSY Group’s use of fake social-media per­sonas.

    The com­pa­ny spent months prepar­ing for the pro­pos­al by devel­op­ing a per­sona for “an aver­age Chica­go teenag­er” named Madi­son who con­vert­ed from Chris­tian­i­ty to Islam and became alien­at­ed from her par­ents. Over a peri­od of many weeks, Madi­son inter­act­ed with an ISIS recruiter, received instruc­tions for send­ing mon­ey to fight­ers in Syr­ia, and began an extend­ed flir­ta­tion with a fight­er in Raqqa, Syr­ia.

    Among the long-term objec­tives of Madison’s per­sona were obtain­ing names and con­tacts of “rad­i­cal Turk­ish Islam­ic ele­ments” and obtain­ing bank accounts and rout­ing num­bers for donat­ing to ISIS, accord­ing to the pro­pos­al seen by Bloomberg News.

    The State Department’s Glob­al Engage­ment Cen­ter entered into a con­tract with SCL Group last year, but it didn’t include pro­vi­sions for work to be per­formed by any sub­con­trac­tors, accord­ing to a depart­ment spokesman. That con­tract didn’t involve social media and was focused on in-per­son inter­views, accord­ing to an ear­li­er depart­ment brief­ing.

    Tow­er Meet­ing

    The Trump Tow­er meet­ing in August 2016 includ­ed Zamel, the PSY Group founder, and George Nad­er, an advis­er to the rul­ing fam­i­lies of Sau­di Ara­bia and the Unit­ed Arab Emi­rates, accord­ing to the New York Times report. PSY Group’s deci­sion to shut down appears to have come the same week that Nad­er tes­ti­fied before the grand jury work­ing with Mueller, accord­ing to the tim­ing of that tes­ti­mo­ny pre­vi­ous­ly report­ed in the Times.

    Fol­low­ing the elec­tion, Nad­er hired a dif­fer­ent com­pa­ny of Zamel’s called WhiteKnight, which spe­cial­izes in open-source social media research and is based in the Caribbean, accord­ing to a per­son famil­iar with the trans­ac­tion.

    The per­son described WhiteKnight as a high-end busi­ness con­sult­ing firm owned in part by Zamel that com­plet­ed a post-elec­tion analy­sis for Nad­er that exam­ined the role that social media played in the 2016 elec­tion.

    There is lit­tle pub­lic infor­ma­tion about WhiteKnight or its prod­ucts, and the com­pa­ny does not appear to have a web­site.

    Anoth­er per­son famil­iar with PSY Group’s oper­a­tions said that months ago, there was dis­cus­sion about rebrand­ing the firm under a dif­fer­ent name.

    The name being dis­cussed inter­nal­ly, accord­ing to the per­son, was WhiteKnight.

    ———-

    “Mueller Asked About Mon­ey Flows to Israeli Social-Media Firm, Source Says” by Michael Riley and Lau­ren Etter; Bloomberg; 05/22/2018

    “Spe­cial Coun­sel Robert Mueller’s team has asked about flows of mon­ey into the Cyprus bank account of a com­pa­ny that spe­cial­ized in social-media manip­u­la­tion and whose founder report­ed­ly met with Don­ald Trump Jr. in August 2016, accord­ing to a per­son famil­iar with the inves­ti­ga­tion.”

    So the Mueller probe is looking into money flows into Psy Group’s Cyprus bank account, along with the activities of George Nader (who pitched Psy Group to the Trump team in August 2016), and this interest from Mueller appears to have led to the sudden shutdown of the company a few months ago:

    ...
    FBI agents work­ing with Mueller’s team inter­viewed peo­ple asso­ci­at­ed with PSY Group’s U.S. oper­a­tions in Feb­ru­ary, and Mueller sub­poe­naed bank records for pay­ments made to the firm’s Cyprus bank accounts, accord­ing to a per­son who has seen one of the sub­poe­nas. Though PSY Group is based in Israel, it’s tech­ni­cal­ly head­quar­tered in Cyprus, the small Mediter­ranean island famous for its bank­ing secre­cy.

    Short­ly after those inter­views, on Feb. 25, PSY Group Chief Exec­u­tive Offi­cer Royi Burstien informed employ­ees in Tel Aviv that the com­pa­ny was clos­ing down. Burstien is a for­mer com­man­der of an Israeli psy­cho­log­i­cal war­fare unit, accord­ing to two peo­ple famil­iar with the com­pa­ny. He didn’t respond to requests for com­ment.

    ...

    Tow­er Meet­ing

    The Trump Tow­er meet­ing in August 2016 includ­ed Zamel, the PSY Group founder, and George Nad­er, an advis­er to the rul­ing fam­i­lies of Sau­di Ara­bia and the Unit­ed Arab Emi­rates, accord­ing to the New York Times report. PSY Group’s deci­sion to shut down appears to have come the same week that Nad­er tes­ti­fied before the grand jury work­ing with Mueller, accord­ing to the tim­ing of that tes­ti­mo­ny pre­vi­ous­ly report­ed in the Times.
    ...

    Although the sudden shutdown of Psy Group appears to really be a secret rebranding. Psy Group is apparently now WhiteKnight, a rebranding the company has been working on for a while, it seems, since WhiteKnight was hired by Nader to do a post-election analysis of the role social media played in the 2016 election:

    ...
    Fol­low­ing the elec­tion, Nad­er hired a dif­fer­ent com­pa­ny of Zamel’s called WhiteKnight, which spe­cial­izes in open-source social media research and is based in the Caribbean, accord­ing to a per­son famil­iar with the trans­ac­tion.

    The per­son described WhiteKnight as a high-end busi­ness con­sult­ing firm owned in part by Zamel that com­plet­ed a post-elec­tion analy­sis for Nad­er that exam­ined the role that social media played in the 2016 elec­tion.

    There is lit­tle pub­lic infor­ma­tion about WhiteKnight or its prod­ucts, and the com­pa­ny does not appear to have a web­site.

    Anoth­er per­son famil­iar with PSY Group’s oper­a­tions said that months ago, there was dis­cus­sion about rebrand­ing the firm under a dif­fer­ent name.

    The name being dis­cussed inter­nal­ly, accord­ing to the per­son, was WhiteKnight.

    Just imagine how fascinating WhiteKnight’s post-election analysis of the role social media played must be, since it was basically conducted by Psy Group, a social media manipulation firm that either executed much of the most egregious (and effective) social media manipulation itself or worked directly with the worst perpetrators like Cambridge Analytica. There are probably quite a few insights in that report that wouldn’t be available to other firms.

    So what kinds of secrets is Psy Group hoping to keep hidden with its shutdown/rebranding move? Well, some of those secrets presumably involve the alliance Psy Group created with Cambridge Analytica shortly after Trump’s victory, culminating in the December 14, 2016, mutual non-disclosure agreement (one day before the Trump Tower meeting with the crown prince of the UAE to set up the Seychelles meeting). And note how the proposal in which Psy Group boasted it “has conducted messaging/influence operations in well over a dozen languages and dialects” was also submitted with Cambridge Analytica’s parent company SCL. So Psy Group’s alliance with Cambridge Analytica was probably really an alliance with Cambridge Analytica’s parent company too:

    ...
    Fol­low­ing Trump’s vic­to­ry, PSY Group formed an alliance with Cam­bridge Ana­lyt­i­ca, the Trump campaign’s pri­ma­ry social-media con­sul­tants, to try to win U.S. gov­ern­ment work, accord­ing to doc­u­ments obtained by Bloomberg News.

    ...

    PSY Group devel­oped elab­o­rate infor­ma­tion oper­a­tions for com­mer­cial clients and polit­i­cal can­di­dates around the world, the peo­ple said.

    ‘Poi­son­ing the Well’

    Tac­tics deployed by PSY Group in for­eign elec­tions includ­ed inflam­ing divi­sions in oppo­si­tion groups and play­ing on deep-seat­ed cul­tur­al and eth­nic con­flicts, some­thing the firm called “poi­son­ing the well,” accord­ing to the peo­ple.

    In a con­tract­ing pro­pos­al for the U.S. State Depart­ment that PSY Group pre­pared with Cam­bridge Ana­lyt­i­ca and SCL Group, Cambridge’s U.K. affil­i­ate, the firm said that it “has con­duct­ed messaging/influence oper­a­tions in well over a dozen lan­guages and dialects” and that it employs “an elite group of high-rank­ing for­mer offi­cers from some of the world’s most renowned intel­li­gence units.”

    Although the pro­pos­al says that the com­pa­ny is legal­ly bound not to reveal its clients, it also boasts that “PSY has suc­ceed­ed in plac­ing the results of its intel­li­gence activ­i­ties in top-tier pub­li­ca­tions across the globe in order to advance the inter­ests of its clients.”

    That pro­pos­al was the result of a col­lab­o­ra­tion that gelled after Trump’s vic­to­ry — a mutu­al non-dis­clo­sure agree­ment between Cam­bridge and PSY Group is dat­ed Dec. 14, 2016 — but the doc­u­ments don’t indi­cate how the com­pa­nies ini­tial­ly con­nect­ed or why they decid­ed to work togeth­er.
    ...

    Anoth­er point to keep in mind regard­ing the tim­ing of that Decem­ber 14, 2016, mutu­al non-dis­clo­sure agree­ment: the Sey­chelles meet­ing appears to be a giant pitch designed to realign Rus­sia, indi­cat­ing the UAE was clear­ly very inter­est­ed in exploit­ing Trump’s vic­to­ry in a big way. They were ‘cash­ing in’, metaphor­i­cal­ly. So it seems rea­son­able to sus­pect that Psy Group, which is close­ly affil­i­at­ed with the UAE’s crown prince, would also be quite inter­est­ed in lit­er­al­ly ‘cash­ing in’ in a very big way too dur­ing that Decem­ber 2016 tran­si­tion peri­od. In oth­er words, while we don’t know what Psy Group and Cam­bridge Ana­lyt­i­ca decid­ed to not dis­close with their non-dis­clo­sure agree­ment, we can be pret­ty sure it was extreme­ly ambi­tious at the time.

    But at this point, the only pro­pos­als for US gov­ern­ment con­tracts that we do know about were for an anti-ISIS social media oper­a­tion for the US State Department’s Glob­al Engage­ment Cen­ter:

    ...
    The joint pro­pos­al for the State Department’s Glob­al Engage­ment Cen­ter was for a project to inter­rupt the recruit­ment and rad­i­cal­iza­tion of ISIS mem­bers, and it pro­vides insight into PSY Group’s use of fake social-media per­sonas.

    The com­pa­ny spent months prepar­ing for the pro­pos­al by devel­op­ing a per­sona for “an aver­age Chica­go teenag­er” named Madi­son who con­vert­ed from Chris­tian­i­ty to Islam and became alien­at­ed from her par­ents. Over a peri­od of many weeks, Madi­son inter­act­ed with an ISIS recruiter, received instruc­tions for send­ing mon­ey to fight­ers in Syr­ia, and began an extend­ed flir­ta­tion with a fight­er in Raqqa, Syr­ia.

    Among the long-term objec­tives of Madison’s per­sona were obtain­ing names and con­tacts of “rad­i­cal Turk­ish Islam­ic ele­ments” and obtain­ing bank accounts and rout­ing num­bers for donat­ing to ISIS, accord­ing to the pro­pos­al seen by Bloomberg News.
    ...

    And one contract we do know about at this point that was awarded to this network of companies actually went to Cambridge Analytica’s parent company, SCL:

    ...
    The State Department’s Glob­al Engage­ment Cen­ter entered into a con­tract with SCL Group last year, but it didn’t include pro­vi­sions for work to be per­formed by any sub­con­trac­tors, accord­ing to a depart­ment spokesman. That con­tract didn’t involve social media and was focused on in-per­son inter­views, accord­ing to an ear­li­er depart­ment brief­ing.
    ...

    So there’s one government contract that SCL won following Trump’s election, which Psy Group and Cambridge Analytica may or may not have been involved with.

    And that’s all we know, at this point, about the work Psy Group may or may not have done for the US government following Trump’s victory. Except we also know that Psy Group and Cambridge Analytica weren’t competing, so whatever contract Psy Group got, Cambridge Analytica may have received too. And that indicates, at a minimum, a willingness for these two companies to work VERY closely together. So closely they risked revealing internal secrets to each other. Don’t forget, Psy Group and Cambridge Analytica are ostensibly competitors offering similar services to the same types of clients. And shortly after the election they were willing to sign an agreement to jointly compete for contracts that they would work on together. Don’t forget that one of the massive questions looming over this whole story is whether or not Psy Group and Cambridge Analytica — two direct competitors — were not just on the same team but actually working closely together during the 2016 election to help elect Trump. And thanks to these recent revelations we now know Psy Group and Cambridge Analytica were at least willing to work extremely closely with each other immediately after the election on a variety of different government contracts. That seems like a relevant clue in this whole mess.

    Posted by Pterrafractyl | May 29, 2018, 8:14 pm
  16. Oh look, a new scary Cambridge Analytica operation was just discovered. Or rather, it’s a scary new story about AggregateIQ (AIQ), the Cambridge Analytica offshoot that Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software to, and which also played a key role in the pro-Brexit campaign and later assisted the West-leaning East Ukraine politician Sergei Taruta. It’s like these companies can’t go a week without a new scary story. Which is extra scary.

    For scary starters, the article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, a security researcher just found an app still available on Facebook that is registered to AIQ and is even named “AIQ Johnny Scraper”, leading Facebook to suspend it along with 13 other apps that appear to be linked to AIQ. So if Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard.

    Another part of what makes the following article scary is that it’s a reminder that you don’t necessarily need to have downloaded a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.

    Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to specifically look and see what you personally have said about various topics on social media. A service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically-relevant “Likes” you’ve made. That’s what AIQ was offering, and the newly discovered database contained the info for that.

    In this case, the Financial Times has somehow gotten its hands on a bunch of Facebook-related data held internally by AIQ. It turns out that AIQ stored a list of 759,934 Facebook users in a table that included home addresses, phone numbers and email addresses for some profiles. Additionally, the files contain those people’s political Facebook posts and likes. It all appears to be part of a software package AIQ was developing for a client that would allow them to search the political posts and “Likes” people made on Facebook. A personal political browser that could give a far more detailed peek into someone’s politics than traditionally available information like political donation records and party affiliation.
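
    To make concrete what a “personal political browser” amounts to, here’s a minimal sketch of that kind of lookup tool, using a throwaway SQLite table and entirely invented rows; the actual AIQ schema and software are not public:

        import sqlite3

        # Hypothetical miniature of the kind of table the FT describes:
        # contact details stored alongside political posts, keyed by person.
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE profiles (
            name TEXT, email TEXT, phone TEXT, post TEXT)""")
        db.executemany("INSERT INTO profiles VALUES (?, ?, ?, ?)", [
            ("Pat Example", "pat@example.com", "555-0100",
             "Like if you agree that government is the problem"),
            ("Sam Sample", "sam@example.com", "555-0199",
             "Volunteered for the local campaign this weekend"),
        ])

        # The "service": type a person in, get their political trail back out.
        def political_search(name):
            rows = db.execute(
                "SELECT post FROM profiles WHERE name = ?", (name,)).fetchall()
            return [post for (post,) in rows]

        print(political_search("Pat Example"))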

    Also keep in mind that we already know Cambridge Analytica collected large amounts of information on 87 million Facebook accounts. So the 759,934 number should not be seen as the total number of people AIQ has such files on. It could just be a particular batch selected by that client. A batch of 759,934 people a client just happens to want to make personalized political searches on.

    It’s also worth noting that this service would be perfect for accomplishing the right-wing’s long-standing goal of purging the federal government of liberal employees. A goal that ‘Alt-Right’ neo-Nazi troll Charles C. Johnson and ‘Alt-Right’ neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period. And an ideological purge of the State Department is reportedly already underway. So it will be interesting to learn if this AIQ service is being used for such purposes.

    It’s unclear if the data in these files was collected through a Facebook app developed by AIQ — in which case the people in the file at least had to click the “I accept” part of installing the app — or if the data was collected simply by scraping publicly available Facebook posts. Again, it’s a reminder that pretty much ANYTHING you do on a publicly accessible Facebook post, even a ‘Like’, is probably getting collected by someone, aggregated, and resold. Including, perhaps, by AggregateIQ:

    Finan­cial Times

    Aggre­gateIQ had data of thou­sands of Face­book users
    Linked app found by secu­ri­ty researcher rais­es ques­tions on social network’s polic­ing

    Aliya Ram in Lon­don and Han­nah Kuch­ler in San Fran­cis­co
    June 1, 2018, 2:21 PM

    Aggre­gateIQ, a Cana­di­an con­sul­tan­cy alleged to have links to Cam­bridge Ana­lyt­i­ca, col­lect­ed and stored the data of hun­dreds of thou­sands of Face­book users, accord­ing to redact­ed com­put­er files seen by the Finan­cial Times.

    The social net­work banned Aggre­gateIQ, a data com­pa­ny, from its plat­form as part of a clean-up oper­a­tion fol­low­ing the Cam­bridge Ana­lyt­i­ca scan­dal, on sus­pi­cion that the com­pa­ny could have been improp­er­ly access­ing user infor­ma­tion. How­ev­er, Chris Vick­ery, a secu­ri­ty researcher, this week found an app on the plat­form called “AIQ John­ny Scraper” reg­is­tered to the com­pa­ny, rais­ing fresh ques­tions about the effec­tive­ness of Facebook’s polic­ing efforts.

    The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

    Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

    Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

    Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

    “I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

    He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.

    Aggre­gateIQ has sought to dis­tance itself from an inter­na­tion­al pri­va­cy scan­dal engulf­ing Face­book and Cam­bridge Ana­lyt­i­ca, despite alle­ga­tions from Christo­pher Wylie, a whistle­blow­er at the now-defunct UK firm, that it had act­ed as the Cana­di­an branch of the organ­i­sa­tion.

    The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.

    The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said.

    AggregateIQ’s tech­nol­o­gy was used in the US for Ted Cruz’s cam­paign for the Repub­li­can nom­i­na­tion in 2016, and the com­pa­ny has also received mil­lions of pounds of fund­ing from British groups. These include Vote Leave, the main pro-Brex­it cam­paign front­ed by for­eign sec­re­tary Boris John­son.

    “The over­all theme of these com­pa­nies and the way their tools work is that every­thing is reliant on every­thing else, but has enough inde­pen­dent oper­abil­i­ty to pre­serve deni­a­bil­i­ty,” said Mr Vick­ery. “But when you com­bine all these dif­fer­ent data sources togeth­er it becomes some­thing else.”

    ...

    ———-

    “Aggre­gateIQ had data of thou­sands of Face­book users” by Aliya Ram and Han­nah Kuch­ler; Finan­cial Times; 06/01/2018

    ““The over­all theme of these com­pa­nies and the way their tools work is that every­thing is reliant on every­thing else, but has enough inde­pen­dent oper­abil­i­ty to pre­serve deni­a­bil­i­ty,” said Mr Vick­ery. “But when you com­bine all these dif­fer­ent data sources togeth­er it becomes some­thing else.”

    As security researcher Chris Vickery put it, the whole is greater than the sum of its parts when you look at the synergistic way the various tools developed by companies like Cambridge Analytica and AIQ work together. Synergy in the service of creating a mass manipulation service with personalized micro-targeting capabilities.

    And that synergistic mass manipulation is part of why it’s disturbing to hear that Vickery just discovered an AIQ app still available on Facebook, months after Cambridge Analytica was banned and had already caused Facebook so much bad publicity, prompting Facebook to suspend it along with 13 other apps that appear linked to AggregateIQ. The fact that Cambridge Analytica-affiliated apps were still running suggests Facebook either really, really, really likes Cambridge Analytica or it’s just really, really bad at app oversight:

    ...
    The social net­work banned Aggre­gateIQ, a data com­pa­ny, from its plat­form as part of a clean-up oper­a­tion fol­low­ing the Cam­bridge Ana­lyt­i­ca scan­dal, on sus­pi­cion that the com­pa­ny could have been improp­er­ly access­ing user infor­ma­tion. How­ev­er, Chris Vick­ery, a secu­ri­ty researcher, this week found an app on the plat­form called “AIQ John­ny Scraper” reg­is­tered to the com­pa­ny, rais­ing fresh ques­tions about the effec­tive­ness of Facebook’s polic­ing efforts.

    The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

    Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

    ...

    He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.
    ...

    “How­ev­er, Chris Vick­ery, a secu­ri­ty researcher, this week found an app on the plat­form called “AIQ John­ny Scraper” reg­is­tered to the com­pa­ny, rais­ing fresh ques­tions about the effec­tive­ness of Facebook’s polic­ing efforts.”

    “AIQ John­ny Scraper”. They weren’t even hid­ing it. But at least the John­ny Scraper app sounds rel­a­tive­ly innocu­ous.

    The personal political post search engine service, on the other hand, sounds far from innocuous: a database of 759,934 Facebook users created by AIQ software that tracked which users liked a particular page or were posting positive or negative comments. In other words, software that interprets what people write about politics on Facebook and aggregates that data into a search engine for clients. You have to wonder how sophisticated that automated interpretation software is at this point. Whatever the answer, AIQ’s text interpretation software is only going to get more sophisticated. That’s a given.
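    To make that concrete, here’s a minimal, purely hypothetical sketch of the kind of pipeline being described: crudely tag each comment as political or not and positive or negative, then index the results so a client can query them. Everything here (the keyword lists, the field names, the toy data) is invented for illustration; AIQ’s actual software is unknown and surely far more sophisticated.

        import sqlite3

        # Invented keyword lists; a real system would use trained classifiers.
        POLITICAL_TERMS = {"reagan", "government", "election", "vote", "brexit"}
        POSITIVE_TERMS = {"agree", "love", "support", "great"}
        NEGATIVE_TERMS = {"disagree", "hate", "oppose", "terrible"}

        def tag(comment):
            words = set(comment.lower().split())
            if words & POSITIVE_TERMS:
                sentiment = "positive"
            elif words & NEGATIVE_TERMS:
                sentiment = "negative"
            else:
                sentiment = "neutral"
            return bool(words & POLITICAL_TERMS), sentiment

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE posts (user_id TEXT, comment TEXT, political INT, sentiment TEXT)")

        # Toy data standing in for scraped comments.
        scraped = [
            ("user1", "I agree with Reagan that government is the problem"),
            ("user2", "I hate this election coverage"),
            ("user3", "lovely weather today"),
        ]
        for user_id, comment in scraped:
            political, sentiment = tag(comment)
            db.execute("INSERT INTO posts VALUES (?, ?, ?, ?)",
                       (user_id, comment, int(political), sentiment))

        # A client-style query: everyone who posted a positive political comment.
        for row in db.execute("SELECT user_id, comment FROM posts "
                              "WHERE political = 1 AND sentiment = 'positive'"):
            print(row)

    Trivial as it is, that’s already a searchable table of who said what about politics and in what tone, which is the shape of the product being described.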

    Someday that software will probably be able to write a synopsis of a person better than a human could. Who knows when that kind of software will arrive, but someday it will, and companies like AIQ will be there to exploit it if that’s legal. That’s also a given.

    And this 759,934 per­son data­base of polit­i­cal Likes and writ­ten polit­i­cal com­ments was what AIQ pro­vid­ed for just one client:

    ...
    Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

    Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

    “I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

    ...

    The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said.

    AggregateIQ’s tech­nol­o­gy was used in the US for Ted Cruz’s cam­paign for the Repub­li­can nom­i­na­tion in 2016, and the com­pa­ny has also received mil­lions of pounds of fund­ing from British groups. These include Vote Leave, the main pro-Brex­it cam­paign front­ed by for­eign sec­re­tary Boris John­son.
    ...
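    Note how mundane the matching task Silvester describes is. A minimal sketch of that kind of record linkage, keyed on email for simplicity (all field names and records here are hypothetical; real voter file tools also fall back on fuzzy name-and-address matching):

        # Hypothetical page-liker records pulled from Facebook.
        page_likers = [
            {"fb_user_id": "101", "name": "Pat Doe", "email": "pat@example.com"},
            {"fb_user_id": "102", "name": "Sam Roe", "email": "sam@example.com"},
        ]

        # Hypothetical voter file records (online electoral records).
        voter_file = [
            {"voter_id": "V-9", "name": "Pat Doe", "email": "pat@example.com",
             "address": "1 Main St", "phone": "555-0100"},
        ]

        voters_by_email = {v["email"].lower(): v for v in voter_file}

        matches = []
        for liker in page_likers:
            voter = voters_by_email.get(liker["email"].lower())
            if voter:
                # The merged record ties a Facebook identity to a home address
                # and phone number, the kind of table the FT describes.
                matches.append({**liker, **voter})

        print(matches)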

    And for all we know, AIQ’s database could have been curated from publicly available posts rather than from AIQ app users, highlighting how anything done publicly on Facebook, even a Like, is going to be collected by someone and probably sold:

    ...
    The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.
    ...

    You are what you Like in this com­mer­cial space. And we’re all in this com­mer­cial space to some extent. There real­ly is a com­mer­cial­ly avail­able pro­file of you. It’s just dis­trib­uted between the many dif­fer­ent data bro­kers offer­ing slices of it.

    Another key dynamic in all this is that Facebook’s business model appears to combine exploiting the vast information monopoly it possesses with an opposing model of effectively selling off little chunks of that data by making it available to app developers. There’s an obvious tension between exploiting your data monopoly and selling it off, but that appears to be the most profitable path forward, which is probably why AIQ adopted a similar model with the data it collected from Facebook: analyzing the Facebook data it gathered through apps and public scraping, categorizing it (political or non-political comments, say, and whether they’re positive or negative), and then selling slices of that vast internal curated trove to clients.

    Aggregate as much data as possible. Analyze it. And offer pieces of that curated data pile to clients. That appears to be a business model of choice in this commercial big data arena, which is why we should assume AIQ and Cambridge Analytica were offering similar services and shouldn’t assume this particular database of 759,934 Facebook accounts is the only one of its kind. Especially given the 87 million profiles they already scraped.

    And this is a business model that’s going to apply to far more than just Facebook content. The whole spectrum of information collected on everyone is going to be part of this commercial space. And that’s part of what’s so scary: the data that gets fed into these independent Big Data repositories like the AIQ/Cambridge Analytica database is going to increasingly be the curated data provided by other Big Data providers in the same business. Everyone is collecting and analyzing the curated data everyone else regurgitates out. Just as Cambridge Analytica and AIQ offer a slew of separate interoperable services to clients that have a ‘whole is greater than the sum’ synergistic quality, the entire Big Data industry is going to have a similar quality. It’s a competitive yet cooperative division of labor. Cambridge Analytica and AIQ are just the extra scary team members in a synergistic industry-wide team effort in the service of maximizing the profits to be made from exploiting everyone’s data for sale.

    Posted by Pterrafractyl | June 3, 2018, 9:47 pm
  17. It’s that time again. Time to learn how the Cam­bridge Analytica/Facebook scan­dal just got worse. So what’s the new low? Well, it turns out Face­book has­n’t just been shar­ing egre­gious amounts of Face­book user data with app devel­op­ers. Device mak­ers, like Apple and Sam­sung, have also been giv­en sim­i­lar access to user data. At least 60 device mak­ers known thus far.

    Except, of course, it’s worse: these device makers have actually been given EVEN MORE data than Facebook app developers received. For example, Facebook allowed the device makers access to the data of users’ friends without their explicit consent, even after declaring that it would no longer share such information with outsiders. And some device makers could access personal information from users’ friends who thought they had turned off any sharing. So the “friends permissions” option that allowed Cambridge Analytica’s app to collect data on 87 million Facebook users even though just 300,000 people used their app remained an option for device manufacturers even after Facebook phased out the friends permission option for app developers in 2014–2015.
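    For reference, here’s roughly what that retired “friends permissions” access looked like from a developer’s side under Graph API v1.0, reconstructed from memory of the old endpoints (treat the exact permission and field names as approximate rather than authoritative). The app requested friends_* scopes during login and could then page through the user’s entire friend list, pulling fields for people who never installed the app:

        import requests  # third-party HTTP client: pip install requests

        # Placeholder token; in reality this came from the app's OAuth login flow.
        ACCESS_TOKEN = "USER_TOKEN_GRANTED_VIA_OAUTH"

        # Under Graph API v1.0 (phased out 2014-2015), friends_* permissions
        # exposed profile fields of friends who never consented themselves.
        resp = requests.get(
            "https://graph.facebook.com/v1.0/me/friends",
            params={
                "access_token": ACCESS_TOKEN,
                "fields": "id,name,birthday,likes",  # approximate field names
            },
        )
        for friend in resp.json().get("data", []):
            print(friend.get("id"), friend.get("name"))

    One consenting user, one HTTP call, and an app walks away with data on hundreds of non-consenting people. That asymmetry is the entire scandal in miniature.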

    Beyond that, the New York Times examined the kind of information gathered from a Blackberry device owned by one of its reporters and found that it wasn’t just collecting identifying information on all the reporter’s friends. It was also grabbing identifying information on those friends’ friends. That single Blackberry was able to retrieve identifying information on nearly 295,000 people!
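    A quick back-of-the-envelope check shows that figure is about what one hop of “friends of friends” yields. The Times reporter had roughly 550 friends; assuming (purely for illustration) those friends averaged around 535 friends of their own, 550 × 535 ≈ 294,000. Overlap between friend lists would pull the unique count down somewhat, but a single account is plainly one hop away from hundreds of thousands of people.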

    Facebook justifies all this by arguing that the device makers are basically an extension of Facebook. The company also asserts that there were strict agreements on how the data could be used. But the main loophole they cite is that Facebook viewed its hardware partners as “service providers,” like a cloud computing service paid to store Facebook data or a company contracted to process credit card transactions. And by categorizing these device makers as service providers, Facebook is able to get around the 2011 consent decree it signed with the US Federal Trade Commission over previous privacy violations. According to that consent decree, Facebook does not need to seek additional permission to share friend data with service providers.

    So it’s not just Cam­bridge Ana­lyt­i­ca and the thou­sands of app devel­op­ers who have been scoop­ing up moun­tains of Face­book user data with­out peo­ple real­iz­ing it. The device mak­ers have been doing it too. More so. Much, much more so:

    The New York Times

    Face­book Gave Device Mak­ers Deep Access to Data on Users and Friends

    The com­pa­ny formed data-shar­ing part­ner­ships with Apple, Sam­sung and
    dozens of oth­er device mak­ers, rais­ing new con­cerns about its pri­va­cy pro­tec­tions.

    By GABRIEL J.X. DANCE, NICHOLAS CONFESSORE and MICHAEL LaFOR­GIA
    JUNE 3, 2018

    As Face­book sought to become the world’s dom­i­nant social media ser­vice, it struck agree­ments allow­ing phone and oth­er device mak­ers access to vast amounts of its users’ per­son­al infor­ma­tion.

    Face­book has reached data-shar­ing part­ner­ships with at least 60 device mak­ers — includ­ing Apple, Ama­zon, Black­Ber­ry, Microsoft and Sam­sung — over the last decade, start­ing before Face­book apps were wide­ly avail­able on smart­phones, com­pa­ny offi­cials said. The deals allowed Face­book to expand its reach and let device mak­ers offer cus­tomers pop­u­lar fea­tures of the social net­work, such as mes­sag­ing, “like” but­tons and address books.

    But the part­ner­ships, whose scope has not pre­vi­ous­ly been report­ed, raise con­cerns about the company’s pri­va­cy pro­tec­tions and com­pli­ance with a 2011 con­sent decree with the Fed­er­al Trade Com­mis­sion. Face­book allowed the device com­pa­nies access to the data of users’ friends with­out their explic­it con­sent, even after declar­ing that it would no longer share such infor­ma­tion with out­siders. Some device mak­ers could retrieve per­son­al infor­ma­tion even from users’ friends who believed they had barred any shar­ing, The New York Times found.

    Most of the part­ner­ships remain in effect, though Face­book began wind­ing them down in April. The com­pa­ny came under inten­si­fy­ing scruti­ny by law­mak­ers and reg­u­la­tors after news reports in March that a polit­i­cal con­sult­ing firm, Cam­bridge Ana­lyt­i­ca, mis­used the pri­vate infor­ma­tion of tens of mil­lions of Face­book users.

    In the furor that fol­lowed, Facebook’s lead­ers said that the kind of access exploit­ed by Cam­bridge in 2014 was cut off by the next year, when Face­book pro­hib­it­ed devel­op­ers from col­lect­ing infor­ma­tion from users’ friends. But the com­pa­ny offi­cials did not dis­close that Face­book had exempt­ed the mak­ers of cell­phones, tablets and oth­er hard­ware from such restric­tions.

    “You might think that Face­book or the device man­u­fac­tur­er is trust­wor­thy,” said Serge Egel­man, a pri­va­cy researcher at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, who stud­ies the secu­ri­ty of mobile apps. “But the prob­lem is that as more and more data is col­lect­ed on the device — and if it can be accessed by apps on the device — it cre­ates seri­ous pri­va­cy and secu­ri­ty risks.”

    In inter­views, Face­book offi­cials defend­ed the data shar­ing as con­sis­tent with its pri­va­cy poli­cies, the F.T.C. agree­ment and pledges to users. They said its part­ner­ships were gov­erned by con­tracts that strict­ly lim­it­ed use of the data, includ­ing any stored on part­ners’ servers. The offi­cials added that they knew of no cas­es where the infor­ma­tion had been mis­used.

    The com­pa­ny views its device part­ners as exten­sions of Face­book, serv­ing its more than two bil­lion users, the offi­cials said.

    “These part­ner­ships work very dif­fer­ent­ly from the way in which app devel­op­ers use our plat­form,” said Ime Archi­bong, a Face­book vice pres­i­dent. Unlike devel­op­ers that pro­vide games and ser­vices to Face­book users, the device part­ners can use Face­book data only to pro­vide ver­sions of “the Face­book expe­ri­ence,” the offi­cials said.

    Some device part­ners can retrieve Face­book users’ rela­tion­ship sta­tus, reli­gion, polit­i­cal lean­ing and upcom­ing events, among oth­er data. Tests by The Times showed that the part­ners request­ed and received data in the same way oth­er third par­ties did.

    Facebook’s view that the device mak­ers are not out­siders lets the part­ners go even fur­ther, The Times found: They can obtain data about a user’s Face­book friends, even those who have denied Face­book per­mis­sion to share infor­ma­tion with any third par­ties.

    In inter­views, sev­er­al for­mer Face­book soft­ware engi­neers and secu­ri­ty experts said they were sur­prised at the abil­i­ty to over­ride shar­ing restric­tions.

    “It’s like hav­ing door locks installed, only to find out that the lock­smith also gave keys to all of his friends so they can come in and rifle through your stuff with­out hav­ing to ask you for per­mis­sion,” said Ashkan Soltani, a research and pri­va­cy con­sul­tant who for­mer­ly served as the F.T.C.’s chief tech­nol­o­gist.

    Details of Facebook’s part­ner­ships have emerged amid a reck­on­ing in Sil­i­con Val­ley over the vol­ume of per­son­al infor­ma­tion col­lect­ed on the inter­net and mon­e­tized by the tech indus­try. The per­va­sive col­lec­tion of data, while large­ly unreg­u­lat­ed in the Unit­ed States, has come under grow­ing crit­i­cism from elect­ed offi­cials at home and over­seas and pro­voked con­cern among con­sumers about how freely their infor­ma­tion is shared.

    In a tense appearance before Congress in March, Facebook’s chief executive, Mark Zuckerberg, emphasized what he said was a company priority for Facebook users. “Every piece of content that you share on Facebook you own,” he testified. “You have complete control over who sees it and how you share it.”

    But the device part­ner­ships pro­voked dis­cus­sion even with­in Face­book as ear­ly as 2012, accord­ing to Sandy Parak­i­las, who at the time led third-par­ty adver­tis­ing and pri­va­cy com­pli­ance for Facebook’s plat­form.

    “This was flagged inter­nal­ly as a pri­va­cy issue,” said Mr. Parak­i­las, who left Face­book that year and has recent­ly emerged as a harsh crit­ic of the com­pa­ny. “It is shock­ing that this prac­tice may still con­tin­ue six years lat­er, and it appears to con­tra­dict Facebook’s tes­ti­mo­ny to Con­gress that all friend per­mis­sions were dis­abled.”

    The part­ner­ships were briefly men­tioned in doc­u­ments sub­mit­ted to Ger­man law­mak­ers inves­ti­gat­ing the social media giant’s pri­va­cy prac­tices and released by Face­book in mid-May. But Face­book pro­vid­ed the law­mak­ers with the name of only one part­ner — Black­Ber­ry, mak­er of the once-ubiq­ui­tous mobile device — and lit­tle infor­ma­tion about how the agree­ments worked.

    The sub­mis­sion fol­lowed tes­ti­mo­ny by Joel Kaplan, Facebook’s vice pres­i­dent for glob­al pub­lic pol­i­cy, dur­ing a closed-door Ger­man par­lia­men­tary hear­ing in April. Elis­a­beth Winkelmeier-Beck­er, one of the law­mak­ers who ques­tioned Mr. Kaplan, said in an inter­view that she believed the data part­ner­ships dis­closed by Face­book vio­lat­ed users’ pri­va­cy rights.

    “What we have been try­ing to deter­mine is whether Face­book has know­ing­ly hand­ed over user data else­where with­out explic­it con­sent,” Ms. Winkelmeier-Beck­er said. “I would nev­er have imag­ined that this might even be hap­pen­ing secret­ly via deals with device mak­ers. Black­Ber­ry users seem to have been turned into data deal­ers, unknow­ing­ly and unwill­ing­ly.”

    In inter­views with The Times, Face­book iden­ti­fied oth­er part­ners: Apple and Sam­sung, the world’s two biggest smart­phone mak­ers, and Ama­zon, which sells tablets.

    An Apple spokesman said the com­pa­ny relied on pri­vate access to Face­book data for fea­tures that enabled users to post pho­tos to the social net­work with­out open­ing the Face­book app, among oth­er things. Apple said its phones no longer had such access to Face­book as of last Sep­tem­ber.

    ...

    Ush­er Lieber­man, a Black­Ber­ry spokesman, said in a state­ment that the com­pa­ny used Face­book data only to give its own cus­tomers access to their Face­book net­works and mes­sages. Mr. Lieber­man said that the com­pa­ny “did not col­lect or mine the Face­book data of our cus­tomers,” adding that “Black­Ber­ry has always been in the busi­ness of pro­tect­ing, not mon­e­tiz­ing, cus­tomer data.”

    Microsoft entered a part­ner­ship with Face­book in 2008 that allowed Microsoft-pow­ered devices to do things like add con­tacts and friends and receive noti­fi­ca­tions, accord­ing to a spokesman. He added that the data was stored local­ly on the phone and was not synced to Microsoft’s servers.

    Face­book acknowl­edged that some part­ners did store users’ data — includ­ing friends’ data — on their own servers. A Face­book offi­cial said that regard­less of where the data was kept, it was gov­erned by strict agree­ments between the com­pa­nies.

    “I am dumb­found­ed by the atti­tude that any­body in Facebook’s cor­po­rate office would think allow­ing third par­ties access to data would be a good idea,” said Hen­ning Schulzrinne, a com­put­er sci­ence pro­fes­sor at Colum­bia Uni­ver­si­ty who spe­cial­izes in net­work secu­ri­ty and mobile sys­tems.

    The Cam­bridge Ana­lyt­i­ca scan­dal revealed how loose­ly Face­book had policed the bustling ecosys­tem of devel­op­ers build­ing apps on its plat­form. They ranged from well-known play­ers like Zyn­ga, the mak­er of the Far­mVille game, to small­er ones, like a Cam­bridge con­trac­tor who used a quiz tak­en by about 300,000 Face­book users to gain access to the pro­files of as many as 87 mil­lion of their friends.

    Those devel­op­ers relied on Facebook’s pub­lic data chan­nels, known as appli­ca­tion pro­gram­ming inter­faces, or APIs. But start­ing in 2007, the com­pa­ny also estab­lished pri­vate data chan­nels for device man­u­fac­tur­ers.

    At the time, mobile phones were less pow­er­ful, and rel­a­tive­ly few of them could run stand-alone Face­book apps like those now com­mon on smart­phones. The com­pa­ny con­tin­ued to build new pri­vate APIs for device mak­ers through 2014, spread­ing user data through tens of mil­lions of mobile devices, game con­soles, tele­vi­sions and oth­er sys­tems out­side Facebook’s direct con­trol.

    Face­book began mov­ing to wind down the part­ner­ships in April, after assess­ing its pri­va­cy and data prac­tices in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. Mr. Archi­bong said the com­pa­ny had con­clud­ed that the part­ner­ships were no longer need­ed to serve Face­book users. About 22 of them have been shut down.

    The broad access Face­book pro­vid­ed to device mak­ers rais­es ques­tions about its com­pli­ance with a 2011 con­sent decree with the F.T.C.

    The decree barred Face­book from over­rid­ing users’ pri­va­cy set­tings with­out first get­ting explic­it con­sent. That agree­ment stemmed from an inves­ti­ga­tion that found Face­book had allowed app devel­op­ers and oth­er third par­ties to col­lect per­son­al details about users’ friends, even when those friends had asked that their infor­ma­tion remain pri­vate.

    After the Cam­bridge Ana­lyt­i­ca rev­e­la­tions, the F.T.C. began an inves­ti­ga­tion into whether Facebook’s con­tin­ued shar­ing of data after 2011 vio­lat­ed the decree, poten­tial­ly expos­ing the com­pa­ny to fines.

    Face­book offi­cials said the pri­vate data chan­nels did not vio­late the decree because the com­pa­ny viewed its hard­ware part­ners as “ser­vice providers,” akin to a cloud com­put­ing ser­vice paid to store Face­book data or a com­pa­ny con­tract­ed to process cred­it card trans­ac­tions. Accord­ing to the con­sent decree, Face­book does not need to seek addi­tion­al per­mis­sion to share friend data with ser­vice providers.

    “These con­tracts and part­ner­ships are entire­ly con­sis­tent with Facebook’s F.T.C. con­sent decree,” Mr. Archi­bong, the Face­book offi­cial, said.

    But Jes­si­ca Rich, a for­mer F.T.C. offi­cial who helped lead the commission’s ear­li­er Face­book inves­ti­ga­tion, dis­agreed with that assess­ment.

    “Under Facebook’s inter­pre­ta­tion, the excep­tion swal­lows the rule,” said Ms. Rich, now with the Con­sumers Union. “They could argue that any shar­ing of data with third par­ties is part of the Face­book expe­ri­ence. And this is not at all how the pub­lic inter­pret­ed their 2014 announce­ment that they would lim­it third-par­ty app access to friend data.”

    To test one partner’s access to Facebook’s pri­vate data chan­nels, The Times used a reporter’s Face­book account — with about 550 friends — and a 2013 Black­Ber­ry device, mon­i­tor­ing what data the device request­ed and received. (More recent Black­Ber­ry devices, which run Google’s Android oper­at­ing sys­tem, do not use the same pri­vate chan­nels, Black­Ber­ry offi­cials said.)

    Imme­di­ate­ly after the reporter con­nect­ed the device to his Face­book account, it request­ed some of his pro­file data, includ­ing user ID, name, pic­ture, “about” infor­ma­tion, loca­tion, email and cell­phone num­ber. The device then retrieved the reporter’s pri­vate mes­sages and the respons­es to them, along with the name and user ID of each per­son with whom he was com­mu­ni­cat­ing.

    The data flowed to a Black­Ber­ry app known as the Hub, which was designed to let Black­Ber­ry users view all of their mes­sages and social media accounts in one place.

    The Hub also request­ed — and received — data that Facebook’s pol­i­cy appears to pro­hib­it. Since 2015, Face­book has said that apps can request only the names of friends using the same app. But the Black­Ber­ry app had access to all of the reporter’s Face­book friends and, for most of them, returned infor­ma­tion such as user ID, birth­day, work and edu­ca­tion his­to­ry and whether they were cur­rent­ly online.

    The Black­Ber­ry device was also able to retrieve iden­ti­fy­ing infor­ma­tion for near­ly 295,000 Face­book users. Most of them were sec­ond-degree Face­book friends of the reporter, or friends of friends.

    In all, Face­book empow­ers Black­Ber­ry devices to access more than 50 types of infor­ma­tion about users and their friends, The Times found.

    ———-

    “Face­book Gave Device Mak­ers Deep Access to Data on Users and Friends” by GABRIEL J.X. DANCE, NICHOLAS CONFESSORE and MICHAEL LaFOR­GIA; The New York Times; 06/03/2018

    “Face­book has reached data-shar­ing part­ner­ships with at least 60 device mak­ers — includ­ing Apple, Ama­zon, Black­Ber­ry, Microsoft and Sam­sung — over the last decade, start­ing before Face­book apps were wide­ly avail­able on smart­phones, com­pa­ny offi­cials said. The deals allowed Face­book to expand its reach and let device mak­ers offer cus­tomers pop­u­lar fea­tures of the social net­work, such as mes­sag­ing, “like” but­tons and address books.”

    At least 60 device makers are sitting on A LOT of Facebook data. Note how NONE of them acknowledged this before this report came out, even as the Cambridge Analytica scandal was unfolding. It’s one of those quiet lessons in how the world unfortunately works.

    And these 60+ device makers were able to access the data of users’ friends without their consent, even when those friends had changed their privacy settings to bar any sharing:

    ...
    But the part­ner­ships, whose scope has not pre­vi­ous­ly been report­ed, raise con­cerns about the company’s pri­va­cy pro­tec­tions and com­pli­ance with a 2011 con­sent decree with the Fed­er­al Trade Com­mis­sion. Face­book allowed the device com­pa­nies access to the data of users’ friends with­out their explic­it con­sent, even after declar­ing that it would no longer share such infor­ma­tion with out­siders. Some device mak­ers could retrieve per­son­al infor­ma­tion even from users’ friends who believed they had barred any shar­ing, The New York Times found.

    Most of the part­ner­ships remain in effect, though Face­book began wind­ing them down in April. The com­pa­ny came under inten­si­fy­ing scruti­ny by law­mak­ers and reg­u­la­tors after news reports in March that a polit­i­cal con­sult­ing firm, Cam­bridge Ana­lyt­i­ca, mis­used the pri­vate infor­ma­tion of tens of mil­lions of Face­book users.

    In the furor that fol­lowed, Facebook’s lead­ers said that the kind of access exploit­ed by Cam­bridge in 2014 was cut off by the next year, when Face­book pro­hib­it­ed devel­op­ers from col­lect­ing infor­ma­tion from users’ friends. But the com­pa­ny offi­cials did not dis­close that Face­book had exempt­ed the mak­ers of cell­phones, tablets and oth­er hard­ware from such restric­tions.
    ...

    “Most of the partnerships remain in effect, though Facebook began winding them down in April.”

    Yep, these data sharing partnerships largely remain in effect and didn’t end in 2014–2015 when the app developers lost access to this kind of data. It’s only now, as the Cambridge Analytica scandal unfolds, that these partnerships are being ended.

    This was all done despite a 2011 consent decree that barred Facebook from overriding users’ privacy settings without first getting explicit consent. Facebook simply categorized the device makers as “service providers”, exploiting a “service provider” loophole in the decree:

    ...
    The broad access Face­book pro­vid­ed to device mak­ers rais­es ques­tions about its com­pli­ance with a 2011 con­sent decree with the F.T.C.

    The decree barred Face­book from over­rid­ing users’ pri­va­cy set­tings with­out first get­ting explic­it con­sent. That agree­ment stemmed from an inves­ti­ga­tion that found Face­book had allowed app devel­op­ers and oth­er third par­ties to col­lect per­son­al details about users’ friends, even when those friends had asked that their infor­ma­tion remain pri­vate.

    After the Cam­bridge Ana­lyt­i­ca rev­e­la­tions, the F.T.C. began an inves­ti­ga­tion into whether Facebook’s con­tin­ued shar­ing of data after 2011 vio­lat­ed the decree, poten­tial­ly expos­ing the com­pa­ny to fines.

    Face­book offi­cials said the pri­vate data chan­nels did not vio­late the decree because the com­pa­ny viewed its hard­ware part­ners as “ser­vice providers,” akin to a cloud com­put­ing ser­vice paid to store Face­book data or a com­pa­ny con­tract­ed to process cred­it card trans­ac­tions. Accord­ing to the con­sent decree, Face­book does not need to seek addi­tion­al per­mis­sion to share friend data with ser­vice providers.

    “These con­tracts and part­ner­ships are entire­ly con­sis­tent with Facebook’s F.T.C. con­sent decree,” Mr. Archi­bong, the Face­book offi­cial, said.

    But Jes­si­ca Rich, a for­mer F.T.C. offi­cial who helped lead the commission’s ear­li­er Face­book inves­ti­ga­tion, dis­agreed with that assess­ment.

    “Under Facebook’s inter­pre­ta­tion, the excep­tion swal­lows the rule,” said Ms. Rich, now with the Con­sumers Union. “They could argue that any shar­ing of data with third par­ties is part of the Face­book expe­ri­ence. And this is not at all how the pub­lic inter­pret­ed their 2014 announce­ment that they would lim­it third-par­ty app access to friend data.”
    ...

    It’s also worth recalling that Facebook made similar excuses for allowing app developers to grab user friends data, claiming that the data was solely going to be used for “improving user experiences.” Which makes the Facebook explanation for how the device maker data sharing program was very different from the app developer data sharing program rather amusing because, according to Facebook, the device partners can use Facebook data only to provide versions of “the Facebook experience” (which implicitly admits that app developers were using that data for a lot more than just improving user experiences):

    ...
    “You might think that Face­book or the device man­u­fac­tur­er is trust­wor­thy,” said Serge Egel­man, a pri­va­cy researcher at the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, who stud­ies the secu­ri­ty of mobile apps. “But the prob­lem is that as more and more data is col­lect­ed on the device — and if it can be accessed by apps on the device — it cre­ates seri­ous pri­va­cy and secu­ri­ty risks.”

    In inter­views, Face­book offi­cials defend­ed the data shar­ing as con­sis­tent with its pri­va­cy poli­cies, the F.T.C. agree­ment and pledges to users. They said its part­ner­ships were gov­erned by con­tracts that strict­ly lim­it­ed use of the data, includ­ing any stored on part­ners’ servers. The offi­cials added that they knew of no cas­es where the infor­ma­tion had been mis­used.

    ...

    “These part­ner­ships work very dif­fer­ent­ly from the way in which app devel­op­ers use our plat­form,” said Ime Archi­bong, a Face­book vice pres­i­dent. Unlike devel­op­ers that pro­vide games and ser­vices to Face­book users, the device part­ners can use Face­book data only to pro­vide ver­sions of “the Face­book expe­ri­ence,” the offi­cials said.
    ...

    ““These part­ner­ships work very dif­fer­ent­ly from the way in which app devel­op­ers use our plat­form,” said Ime Archi­bong, a Face­book vice pres­i­dent. Unlike devel­op­ers that pro­vide games and ser­vices to Face­book users, the device part­ners can use Face­book data only to pro­vide ver­sions of “the Face­book expe­ri­ence,” the offi­cials said.” LOL!

    Of course, it’s basi­cal­ly impos­si­ble for Face­book to know what device mak­ers were doing with this data because, just like with app devel­op­ers, these device man­u­fac­tur­ers had the option of keep­ing this Face­book data on their own servers:

    ...
    In inter­views with The Times, Face­book iden­ti­fied oth­er part­ners: Apple and Sam­sung, the world’s two biggest smart­phone mak­ers, and Ama­zon, which sells tablets.

    An Apple spokesman said the com­pa­ny relied on pri­vate access to Face­book data for fea­tures that enabled users to post pho­tos to the social net­work with­out open­ing the Face­book app, among oth­er things. Apple said its phones no longer had such access to Face­book as of last Sep­tem­ber.

    ...

    Ush­er Lieber­man, a Black­Ber­ry spokesman, said in a state­ment that the com­pa­ny used Face­book data only to give its own cus­tomers access to their Face­book net­works and mes­sages. Mr. Lieber­man said that the com­pa­ny “did not col­lect or mine the Face­book data of our cus­tomers,” adding that “Black­Ber­ry has always been in the busi­ness of pro­tect­ing, not mon­e­tiz­ing, cus­tomer data.”

    Microsoft entered a part­ner­ship with Face­book in 2008 that allowed Microsoft-pow­ered devices to do things like add con­tacts and friends and receive noti­fi­ca­tions, accord­ing to a spokesman. He added that the data was stored local­ly on the phone and was not synced to Microsoft’s servers.

    Face­book acknowl­edged that some part­ners did store users’ data — includ­ing friends’ data — on their own servers. A Face­book offi­cial said that regard­less of where the data was kept, it was gov­erned by strict agree­ments between the com­pa­nies.

    “I am dumb­found­ed by the atti­tude that any­body in Facebook’s cor­po­rate office would think allow­ing third par­ties access to data would be a good idea,” said Hen­ning Schulzrinne, a com­put­er sci­ence pro­fes­sor at Colum­bia Uni­ver­si­ty who spe­cial­izes in net­work secu­ri­ty and mobile sys­tems.
    ...

    And this data pri­va­cy night­mare sit­u­a­tion appar­ent­ly all start­ed in 2007, when Face­book began build­ing pri­vate APIs for device mak­ers:

    ...
    The Cam­bridge Ana­lyt­i­ca scan­dal revealed how loose­ly Face­book had policed the bustling ecosys­tem of devel­op­ers build­ing apps on its plat­form. They ranged from well-known play­ers like Zyn­ga, the mak­er of the Far­mVille game, to small­er ones, like a Cam­bridge con­trac­tor who used a quiz tak­en by about 300,000 Face­book users to gain access to the pro­files of as many as 87 mil­lion of their friends.

    Those devel­op­ers relied on Facebook’s pub­lic data chan­nels, known as appli­ca­tion pro­gram­ming inter­faces, or APIs. But start­ing in 2007, the com­pa­ny also estab­lished pri­vate data chan­nels for device man­u­fac­tur­ers.

    At the time, mobile phones were less pow­er­ful, and rel­a­tive­ly few of them could run stand-alone Face­book apps like those now com­mon on smart­phones. The com­pa­ny con­tin­ued to build new pri­vate APIs for device mak­ers through 2014, spread­ing user data through tens of mil­lions of mobile devices, game con­soles, tele­vi­sions and oth­er sys­tems out­side Facebook’s direct con­trol.

    Face­book began mov­ing to wind down the part­ner­ships in April, after assess­ing its pri­va­cy and data prac­tices in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. Mr. Archi­bong said the com­pa­ny had con­clud­ed that the part­ner­ships were no longer need­ed to serve Face­book users. About 22 of them have been shut down.
    ...

    So what kind of data are device manufacturers actually collecting? Well, it’s unclear if all device makers get the same level of access. But BlackBerry, for example, can access more than 50 types of information on users and their friends. Information like Facebook users’ relationship status, religion, political leaning and upcoming events:

    ...
    Some device part­ners can retrieve Face­book users’ rela­tion­ship sta­tus, reli­gion, polit­i­cal lean­ing and upcom­ing events, among oth­er data. Tests by The Times showed that the part­ners request­ed and received data in the same way oth­er third par­ties did.

    Facebook’s view that the device mak­ers are not out­siders lets the part­ners go even fur­ther, The Times found: They can obtain data about a user’s Face­book friends, even those who have denied Face­book per­mis­sion to share infor­ma­tion with any third par­ties.

    In inter­views, sev­er­al for­mer Face­book soft­ware engi­neers and secu­ri­ty experts said they were sur­prised at the abil­i­ty to over­ride shar­ing restric­tions.

    “It’s like hav­ing door locks installed, only to find out that the lock­smith also gave keys to all of his friends so they can come in and rifle through your stuff with­out hav­ing to ask you for per­mis­sion,” said Ashkan Soltani, a research and pri­va­cy con­sul­tant who for­mer­ly served as the F.T.C.’s chief tech­nol­o­gist.

    ...

    In all, Facebook empowers BlackBerry devices to access more than 50 types of information about users and their friends, The Times found.
    ...

    And as the New York Times discovered after testing a reporter’s BlackBerry device, BlackBerry was able to grab information on friends of friends, allowing the one device they tested to collect identifying information on nearly 295,000 Facebook users:

    ...
    The Black­Ber­ry device was also able to retrieve iden­ti­fy­ing infor­ma­tion for near­ly 295,000 Face­book users. Most of them were sec­ond-degree Face­book friends of the reporter, or friends of friends.
    ...

    And this information was collected and sent to the BlackBerry “Hub” app immediately after the reporter connected the device to his Facebook account:

    ...
    To test one partner’s access to Facebook’s pri­vate data chan­nels, The Times used a reporter’s Face­book account — with about 550 friends — and a 2013 Black­Ber­ry device, mon­i­tor­ing what data the device request­ed and received. (More recent Black­Ber­ry devices, which run Google’s Android oper­at­ing sys­tem, do not use the same pri­vate chan­nels, Black­Ber­ry offi­cials said.)

    Imme­di­ate­ly after the reporter con­nect­ed the device to his Face­book account, it request­ed some of his pro­file data, includ­ing user ID, name, pic­ture, “about” infor­ma­tion, loca­tion, email and cell­phone num­ber. The device then retrieved the reporter’s pri­vate mes­sages and the respons­es to them, along with the name and user ID of each per­son with whom he was com­mu­ni­cat­ing.

    The data flowed to a Black­Ber­ry app known as the Hub, which was designed to let Black­Ber­ry users view all of their mes­sages and social media accounts in one place.

    The Hub also request­ed — and received — data that Facebook’s pol­i­cy appears to pro­hib­it. Since 2015, Face­book has said that apps can request only the names of friends using the same app. But the Black­Ber­ry app had access to all of the reporter’s Face­book friends and, for most of them, returned infor­ma­tion such as user ID, birth­day, work and edu­ca­tion his­to­ry and whether they were cur­rent­ly online.

    ...

    Not surprisingly, Facebook whistle-blower Sandy Parakilas, who left the company in 2012, recalls this data sharing arrangement triggering discussions within Facebook as early as 2012. So Facebook has had internal concerns about this kind of data sharing for the past six years. Concerns that were apparently ignored:

    ...
    But the device part­ner­ships pro­voked dis­cus­sion even with­in Face­book as ear­ly as 2012, accord­ing to Sandy Parak­i­las, who at the time led third-par­ty adver­tis­ing and pri­va­cy com­pli­ance for Facebook’s plat­form.

    “This was flagged inter­nal­ly as a pri­va­cy issue,” said Mr. Parak­i­las, who left Face­book that year and has recent­ly emerged as a harsh crit­ic of the com­pa­ny. “It is shock­ing that this prac­tice may still con­tin­ue six years lat­er, and it appears to con­tra­dict Facebook’s tes­ti­mo­ny to Con­gress that all friend per­mis­sions were dis­abled.”
    ...

    Also keep in mind that the main concern Sandy Parakilas recalls hearing Facebook executives express over the app developer data sharing back in 2012 was that these developers were collecting so much information that they were going to be able to create their own social networks. As Parakilas put it, “They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people...They were worried that they were going to build their own social networks.”

    Well, the major device makers have undoubtedly been gathering far more information than the major app developers, especially when you factor in the “friends of friends” option and the fact that they’ve apparently had access to this kind of data up until now. And that means these device makers must already possess remarkably detailed social graphs of their own at this point.
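    To see why harvested friend lists amount to a “social graph,” consider how little code it takes to assemble one from per-user friend data. A toy sketch (the data and names are invented; real graphs would have billions of edges):

        from collections import defaultdict

        # Toy stand-in for harvested records: (user, their friend list).
        harvested = [
            ("alice", ["bob", "carol"]),
            ("bob", ["alice", "dave"]),
            ("carol", ["alice", "dave"]),
        ]

        # An undirected adjacency map is already a social graph.
        graph = defaultdict(set)
        for user, friends in harvested:
            for friend in friends:
                graph[user].add(friend)
                graph[friend].add(user)

        # "Friends of friends": the same one-hop expansion the BlackBerry
        # Hub was observed performing.
        def friends_of_friends(g, user):
            out = set()
            for friend in g[user]:
                out |= g[friend]
            return out - g[user] - {user}

        print(friends_of_friends(graph, "alice"))  # {'dave'}

    Anyone sitting on years of that data can run exactly the kind of network analysis Facebook itself sells.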

    So when you hear Face­book exec­u­tives char­ac­ter­iz­ing these device man­u­fac­tur­ers as “exten­sions of Face­book”...

    ...
    The com­pa­ny views its device part­ners as exten­sions of Face­book, serv­ing its more than two bil­lion users, the offi­cials said.
    ...

    ...it’s prob­a­bly the most hon­est thing Face­book has said about this entire scan­dal.

    Posted by Pterrafractyl | June 7, 2018, 10:41 pm
  18. Here’s an angle to the Facebook data privacy scandal that has received surprisingly little attention because, when it comes to privacy violations, this just might be the worst one we’ve seen: it turns out one of the types of data that Facebook gave app developers permission to access is the contents of users’ private Inboxes.

    Yep, it’s not just your Face­book ‘pro­file’ of data points Face­book has col­lect­ed on you. Or all the things you ‘liked’. App devel­op­ers appar­ent­ly also could gain access to the pri­vate mes­sages you received. And much like the ‘friends per­mis­sion’ option exploit­ed by Cam­bridge Ana­lyt­i­ca to get pro­file infor­ma­tion on all of the friends of app users with­out the per­mis­sions of those friends, this abil­i­ty to access the con­tents of your inbox is obvi­ous­ly a pri­va­cy vio­la­tion of the peo­ple send­ing you those mes­sages.

    The one pos­i­tive aspect of this whole sto­ry is that at least app devel­op­ers had to let users know that they were giv­ing access to the inbox. So users pre­sum­ably had to agree some­how. And Face­book states that users had to explic­it­ly give per­mis­sion for this. So at least this was­n’t a default app per­mis­sion.

    But when asked about the language used in this notification, Facebook had no response. So we can’t assume that all people who used Facebook apps were giving developers access to their private inbox messages, but we also have no idea how many people were tricked into it with deceptive language during the permissions notifications.

    Of course, one of the big questions is whether or not this inbox permissions feature got exploited by Cambridge Analytica. Yes, it did, and that’s actually how we learned about its existence: when Facebook started sending out notifications to users that they may have been impacted by the Cambridge Analytica data collection (which impacted 87 million users) via the “This Is Your Digital Life” app created by Aleksandr Kogan, it sent the following notification informing people that they may have had their personal messages collected:

    A small num­ber of peo­ple who logged into “This Is Your Dig­i­tal Life” also shared their own News Feed, time­line, posts and mes­sages which may have includ­ed post and mes­sages from you. They may also have shared your home­town.

    So Facebook casually informed users that only a “small number of people” who used the Cambridge Analytica “This Is Your Digital Life” app may have shared “messages from you”. Did they actually give developers access to messages from you? That’s left a mystery.

    And notice that the language in that Facebook notification says user posts were also made available to developers. That’s been one of the things that’s never been entirely clear in the reporting on this topic: were developers given access to the actual private posts people make? The language of that notification is ambiguous as to whether apps could access private posts or only public posts, but given the way everything else has played out in this story, it seems highly likely that private posts were included.

    The inbox permission was phased out in 2014 along with the “friends permission” option and many of the other permissions Facebook used to grant to app developers. There was a one year grace period for app developers to adjust to the new rules that took effect in April of 2015. But as the article notes, developers actually retained access to the inbox permission until October 6 of 2015. And that’s well into the US 2016 election cycle, which raises the fascinating possibility that this ‘feature’ could actually have been used to spy on the US political campaigns. Or the UK Brexit campaign. Or any other political campaign around the world around that time. Or anything else of importance across the world from 2010 to 2015, when these mailbox reading options were available to app developers.

    And that’s what makes it so amazing that this particular story wasn’t bigger: back in April Facebook acknowledged that it gave almost anyone the potential capacity to spy on private Facebook messages, and users had almost no idea this was going on. That seems like a pretty massive scandal:

    The Reg­is­ter

    Face­book admits: Apps were giv­en users’ per­mis­sion to go into their inbox­es
    Only the inbox own­er had to con­sent to it, though... not the peo­ple they con­versed with

    By Rebec­ca Hill
    11 Apr 2018 at 12:24

    Face­book has admit­ted that some apps had access to users’ pri­vate mes­sages, thanks to a pol­i­cy that allowed devs to request mail­box per­mis­sions.

    The rev­e­la­tion came as cur­rent Face­book users found out whether they or their friends had used the “This Is Your Dig­i­tal Life” app that allowed aca­d­e­m­ic Alek­san­dr Kogan to col­lect data on users and their friends.

    Users whose friends had been suck­ered in by the quiz were told that as a result, their pub­lic pro­file, Page likes, birth­day and cur­rent city were “like­ly shared” with the app.

    So far, so expect­ed. But, the noti­fi­ca­tion went on:

    A small num­ber of peo­ple who logged into “This Is Your Dig­i­tal Life” also shared their own News Feed, time­line, posts and mes­sages which may have includ­ed post and mes­sages from you. They may also have shared your home­town.

    That’s because, back in 2014 when the app was in use, devel­op­ers using Facebook’s Graph API to get data off the plat­form could ask for read_mailbox per­mis­sion, allow­ing them access to a person’s inbox.

    That was just one of a series of extend­ed per­mis­sions grant­ed to devs under v1.0 of the Graph API, which was first intro­duced in 2010.

    Fol­low­ing pres­sure from pri­va­cy activists – but much to the dis­ap­point­ment of devel­op­ers – Face­book shut that tap off for most per­mis­sions in April 2015, although the changel­og shows that read_mailbox wasn’t dep­re­cat­ed until 6 Octo­ber 2015.

    Face­book con­firmed to The Reg­is­ter that this access had been request­ed by the app and that a small num­ber of peo­ple had grant­ed it per­mis­sion.

    “In 2014, Facebook’s plat­form pol­i­cy allowed devel­op­ers to request mail­box per­mis­sions but only if the per­son explic­it­ly gave con­sent for this to hap­pen,” a spokes­borg told us.

    “Accord­ing to our records only a very small num­ber of peo­ple explic­it­ly opt­ed into shar­ing this infor­ma­tion. The fea­ture was turned off in 2015.”

    Face­book tried to down­play the sig­nif­i­cance of the eye­brow-rais­ing rev­e­la­tion, say­ing it was at a time when mail­box­es were “more of an inbox”, and claimed it was main­ly used for apps offer­ing a com­bined mes­sag­ing ser­vice.

    “At the time when peo­ple pro­vid­ed access to their mail­box­es – when Face­book mes­sages were more of an inbox and less of a real-time mes­sag­ing ser­vice – this enabled things like desk­top apps that com­bined Face­book mes­sages with mes­sages from oth­er ser­vices like SMS so that a per­son could access their mes­sages all in one place,” the spokesper­son said.

    Pre­sum­ably the aim is to imply users were well aware of the per­mis­sions they were grant­i­ng, but it’s not clear how those requests would have been phrased for each app.

    We asked Face­book what form this would have tak­en – for instance if users could have been faced with a list of pre-ticked box­es, one of which gave per­mis­sion for inbox-surf­ing – but got no response.

    Although Face­book has indi­cat­ed Kogan’s app did request mail­box per­mis­sions, Cam­bridge Ana­lyt­i­ca – which licensed the user data from Kogan – denied it received any con­tent of any pri­vate mes­sages from his firm, GSR.

    GSR did not share the con­tent of any pri­vate mes­sages with Cam­bridge Ana­lyt­i­ca or SCL Elec­tions. Nei­ther com­pa­ny has ever han­dled such data.— Cam­bridge Ana­lyt­i­ca (@CamAnalytica) April 10, 2018

    But this is about more than GSR, Cam­bridge and SCL Elec­tions: for years, Facebook’s pol­i­cy allowed all devel­op­ers to request access to users’ inbox­es.

    That it was done with only one user’s per­mis­sion – the indi­vid­u­als “Friends” weren’t alert­ed to the fact mes­sages they had every right to believe were pri­vate, were not – is yet more evi­dence of just how blasé Face­book has been about users’ pri­va­cy.

    Mean­while, the firm has yet to offer details of a full audit of all the apps that asked for sim­i­lar amounts of infor­ma­tion as Kogan’s app did – although it has shut down some.

    And it is only offer­ing cur­rent users a sim­ple way to find out if they were affect­ed by the CA scan­dal; those who have since deac­ti­vat­ed or delet­ed their accounts have yet to be noti­fied. We’ve asked the firm how it plans to offer this infor­ma­tion, but it has yet to respond.

    Amid increased scruti­ny, Face­book is try­ing to sell the idea that it’s sor­ry, that it has learned from its mis­takes and that it is putting users first.

    But it’s going to be a tough sell: just last night, Mark Zucker­berg revealed that, when the firm first found out about GSR hand­ing data over to Cam­bridge Ana­lyt­i­ca in 2015, it chose not to tell users because it felt that ask­ing the firm to delete the data meant it was a “closed case”.

    ...

    ———–

    “Face­book admits: Apps were giv­en users’ per­mis­sion to go into their inbox­es” by Rebec­ca Hill; The Reg­is­ter; 04/11/2018

    “Face­book has admit­ted that some apps had access to users’ pri­vate mes­sages, thanks to a pol­i­cy that allowed devs to request mail­box per­mis­sions.”

    Imagine how much Facebook did not want to admit that: it actually allowed users to give random app developers access to their inboxes until 2014. If you granted a Facebook app the read_mailbox permission, all your inbox messages were scooped up:

    ...
    The rev­e­la­tion came as cur­rent Face­book users found out whether they or their friends had used the “This Is Your Dig­i­tal Life” app that allowed aca­d­e­m­ic Alek­san­dr Kogan to col­lect data on users and their friends.

    Users whose friends had been suck­ered in by the quiz were told that as a result, their pub­lic pro­file, Page likes, birth­day and cur­rent city were “like­ly shared” with the app.

    So far, so expect­ed. But, the noti­fi­ca­tion went on:

    A small num­ber of peo­ple who logged into “This Is Your Dig­i­tal Life” also shared their own News Feed, time­line, posts and mes­sages which may have includ­ed post and mes­sages from you. They may also have shared your home­town.

    That’s because, back in 2014 when the app was in use, devel­op­ers using Facebook’s Graph API to get data off the plat­form could ask for read_mailbox per­mis­sion, allow­ing them access to a person’s inbox.
    ...

    But the practice actually went on well into 2015 due to the grace period Facebook gave app developers. And the grace period for the inbox permission ran until October 6, 2015, six months later than almost all the other permissions that were getting phased out. This app spying feature was given extra time:

    ...
    That was just one of a series of extend­ed per­mis­sions grant­ed to devs under v1.0 of the Graph API, which was first intro­duced in 2010.

    Fol­low­ing pres­sure from pri­va­cy activists – but much to the dis­ap­point­ment of devel­op­ers – Face­book shut that tap off for most per­mis­sions in April 2015, although the changel­og shows that read_mailbox wasn’t dep­re­cat­ed until 6 Octo­ber 2015.
    ...

    But at least peo­ple had to give explic­it con­sent, accord­ing to Face­book. And they said only a “small num­ber” of peo­ple gave that con­sent. So it sug­gests not all users of the Cam­bridge Ana­lyt­i­ca app gave this per­mis­sion and it was­n’t turned on by default. Hope­ful­ly:

    ...
    Face­book con­firmed to The Reg­is­ter that this access had been request­ed by the app and that a small num­ber of peo­ple had grant­ed it per­mis­sion.

    “In 2014, Facebook’s plat­form pol­i­cy allowed devel­op­ers to request mail­box per­mis­sions but only if the per­son explic­it­ly gave con­sent for this to hap­pen,” a spokes­borg told us.

    “Accord­ing to our records only a very small num­ber of peo­ple explic­it­ly opt­ed into shar­ing this infor­ma­tion. The fea­ture was turned off in 2015.”
    ...

    And when asked for an example of the permissions forms users would have had to sign, Facebook didn’t reply:

    ...
    Pre­sum­ably the aim is to imply users were well aware of the per­mis­sions they were grant­i­ng, but it’s not clear how those requests would have been phrased for each app.

    We asked Face­book what form this would have tak­en – for instance if users could have been faced with a list of pre-ticked box­es, one of which gave per­mis­sion for inbox-surf­ing – but got no response.
    ...

    Another alarming aspect of this is how Facebook tries to downplay the seriousness by pointing out that the messaging service was less used as a ‘real-time messaging service’ back in 2010–2015 than it is today. That merely highlights how it was inevitably used as a real-time messaging service by some people back then, just not by as many as today:

    ...
    Face­book tried to down­play the sig­nif­i­cance of the eye­brow-rais­ing rev­e­la­tion, say­ing it was at a time when mail­box­es were “more of an inbox”, and claimed it was main­ly used for apps offer­ing a com­bined mes­sag­ing ser­vice.

    “At the time when peo­ple pro­vid­ed access to their mail­box­es – when Face­book mes­sages were more of an inbox and less of a real-time mes­sag­ing ser­vice – this enabled things like desk­top apps that com­bined Face­book mes­sages with mes­sages from oth­er ser­vices like SMS so that a per­son could access their mes­sages all in one place,” the spokesper­son said.
    ...

    Imagine how content-rich those inboxes used for real-time messaging are for apps collecting that real-time messaging information. Along with all the non-real-time messages people were sending. Like long messages. And very private and secret stuff that people wouldn’t want to give to any random app developer. Apps just had to successfully ask for that kind of data. That was available to app developers from 2010 to October 6, 2015 (keep in mind that developers were getting personal information as far back as 2007; there were just new rules in 2010).
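    For the record, the mechanics were as simple as the friends scraping. Under Graph API v1.0, an app that had been granted read_mailbox could read the user’s message threads with a single call. A rough sketch, with the endpoint and response shape reconstructed from memory of the v1.0 era (so treat the names as approximate):

        import requests  # third-party HTTP client: pip install requests

        # Placeholder token from a user who explicitly granted read_mailbox.
        ACCESS_TOKEN = "TOKEN_WITH_read_mailbox_SCOPE"

        # The v1.0 inbox endpoint returned message threads, including what
        # *other* people had written to the consenting user.
        resp = requests.get(
            "https://graph.facebook.com/v1.0/me/inbox",
            params={"access_token": ACCESS_TOKEN},
        )
        for thread in resp.json().get("data", []):
            for message in thread.get("comments", {}).get("data", []):
                sender = message.get("from", {}).get("name")
                # The sender never consented to this collection.
                print(sender, ":", message.get("message"))

    One checkbox from the inbox’s owner, and every correspondent’s side of the conversation went out the door with it.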

    Cambridge Analytica denies ever receiving this messaging data from Aleksandr Kogan’s GSR, the company that actually built the app and collected the data. Which is probably about as believable as the rest of Cambridge Analytica’s initial denials:

    ...
    Although Face­book has indi­cat­ed Kogan’s app did request mail­box per­mis­sions, Cam­bridge Ana­lyt­i­ca – which licensed the user data from Kogan – denied it received any con­tent of any pri­vate mes­sages from his firm, GSR.

    GSR did not share the content of any private messages with Cambridge Analytica or SCL Elections. Neither company has ever handled such data. — Cambridge Analytica (@CamAnalytica) April 10, 2018

    ...

    And as the article grimly reminds us, this issue is about much more than Cambridge Analytica and its parent company SCL. It’s about any random app developer being able to do this, and about the people who sent messages to users who turned on the inbox permission never giving their own consent. It was just a horrible option for Facebook to offer the developer community. How was there not mass spying going on?

    ...
    But this is about more than GSR, Cam­bridge and SCL Elec­tions: for years, Facebook’s pol­i­cy allowed all devel­op­ers to request access to users’ inbox­es.

    That it was done with only one user’s per­mis­sion – the indi­vid­u­als “Friends” weren’t alert­ed to the fact mes­sages they had every right to believe were pri­vate, were not – is yet more evi­dence of just how blasé Face­book has been about users’ pri­va­cy.
    ...

    And this story is just the latest horrible story against the backdrop of all the other data privacy horrors to emerge from the Cambridge Analytica scandal. Horrors that were rarely exclusive to Cambridge Analytica. Facebook is going with the ‘we’re sorry, we’ll be better’ angle, an angle that gets more and more jaded with each new horror story:

    ...
    Mean­while, the firm has yet to offer details of a full audit of all the apps that asked for sim­i­lar amounts of infor­ma­tion as Kogan’s app did – although it has shut down some.

    And it is only offer­ing cur­rent users a sim­ple way to find out if they were affect­ed by the CA scan­dal; those who have since deac­ti­vat­ed or delet­ed their accounts have yet to be noti­fied. We’ve asked the firm how it plans to offer this infor­ma­tion, but it has yet to respond.

    Amid increased scruti­ny, Face­book is try­ing to sell the idea that it’s sor­ry, that it has learned from its mis­takes and that it is putting users first.

    But it’s going to be a tough sell: just last night, Mark Zucker­berg revealed that, when the firm first found out about GSR hand­ing data over to Cam­bridge Ana­lyt­i­ca in 2015, it chose not to tell users because it felt that ask­ing the firm to delete the data meant it was a “closed case”.

    Yep, Face­book has been pret­ty lack­ing in the gen­er­al report­ing on all the data pri­va­cy night­mares of past and present. But there was a recent update on Face­book’s big app audit: It can’t actu­al­ly do it. That’s the update:

    The Wall Street Jour­nal

    Facebook’s Latest Problem: It Can’t Track Where Much of the Data Went
    Company’s inter­nal probe finds that some devel­op­ers who scooped up data are now out of busi­ness, and oth­ers won’t coop­er­ate

    By Deepa Seethara­man
    June 27, 2018 9:30 a.m. ET

    Face­book Inc.’s inter­nal probe into poten­tial mis­use of user data is hit­ting fun­da­men­tal road­blocks: The com­pa­ny can’t track where much of the data went after it left the plat­form or fig­ure out where it is now.

    Three months after CEO Mark Zucker­berg pledged to inves­ti­gate all apps that had access to large amounts of Face­book data, the com­pa­ny is still comb­ing its sys­tem to locate the devel­op­ers behind those prod­ucts and find out how they used the infor­ma­tion between 2007 and 2015, when the com­pa­ny offi­cial­ly cut data access for all apps. Mr. Zucker­berg has said the process will cost mil­lions of dol­lars.

    One prob­lem is that many of the app devel­op­ers that scooped up unusu­al­ly large chunks of data are out of busi­ness, accord­ing to devel­op­ers and for­mer Face­book employ­ees. In some cas­es, the com­pa­ny says, devel­op­ers con­tact­ed by Face­book aren’t respond­ing to requests for fur­ther infor­ma­tion.

    Face­book is now try­ing to foren­si­cal­ly piece togeth­er what hap­pened to large chunks of data, and then deter­mine whether it was used in a way that needs to be dis­closed to users and reg­u­la­tors. In cas­es where the com­pa­ny spots red flags, Face­book said it would dis­patch audi­tors to ana­lyze the servers of those devel­op­ers and inter­ro­gate them about their busi­ness prac­tices.

    Ime Archi­bong, Facebook’s vice pres­i­dent of prod­uct part­ner­ships, said most devel­op­ers have been “respon­sive” but not­ed that the process requires a fair bit of detec­tive work on their end. “They have to go back and think about how these appli­ca­tions were built back in the day,” Mr. Archi­bong said.

    Face­book said in May it has sus­pend­ed 200 apps for poten­tial­ly vio­lat­ing its rules. Mr. Archi­bong declined to pro­vide a detailed update on the sta­tus of the inves­ti­ga­tion or iden­ti­fy the 200 apps that were sus­pend­ed thus far.

    Facebook’s app inves­ti­ga­tion is a response to broad­er crit­i­cism over rev­e­la­tions ear­li­er this year that data-ana­lyt­ics firm Cam­bridge Ana­lyt­i­ca improp­er­ly accessed and retained user data obtained from Alek­san­dr Kogan, a psy­chol­o­gy pro­fes­sor at the Uni­ver­si­ty of Cam­bridge. The data, which was gath­ered by Mr. Kogan and his asso­ciates through a per­son­al­i­ty-quiz app, was used by the Trump cam­paign in 2016. Face­book even­tu­al­ly noti­fied around 87 mil­lion users that their data may have been improp­er­ly shared with Cam­bridge Ana­lyt­i­ca, though many ques­tions remain about that inci­dent as well.

    Face­book was blocked from access­ing Cam­bridge Ana­lyt­i­ca servers by the U.K. gov­ern­ment and doesn’t yet know what data the now-defunct com­pa­ny may have stored.

    The results of Facebook’s inter­nal probe could have far-reach­ing ram­i­fi­ca­tions as law­mak­ers world-wide con­tin­ue to hold hear­ings and con­tem­plate tougher reg­u­la­tion of social-media plat­forms like Face­book.

    ...

    Some devel­op­ers say they have lit­tle incen­tive to respond to Facebook’s requests to coop­er­ate with the probe, either because they are out of busi­ness, have moved on to oth­er projects or are uneasy about allow­ing anoth­er com­pa­ny to look at their servers and the way their apps are con­struct­ed. Such intel­lec­tu­al prop­er­ty is “the lifeblood” of a developer’s busi­ness, said Mor­gan Reed, pres­i­dent of ACT | The App Asso­ci­a­tion, a trade group that rep­re­sents more than 5,000 app mak­ers and con­nect­ed-device com­pa­nies.

    In addi­tion, Face­book doesn’t have legal author­i­ty to force devel­op­ers to coop­er­ate.

    “They can’t real­ly com­pel these devel­op­ers to hand over infor­ma­tion,” said Ian Bogost, a pro­fes­sor at Geor­gia Insti­tute of Tech­nol­o­gy. “This is not a fed­er­al inquiry about a crime or some­thing. It’s a pri­vate com­pa­ny. What are the con­se­quences?”

    Mr. Bogost is also a game devel­op­er, and built a game for the Face­book plat­form called Cow Click­er. He said Face­book hasn’t con­tact­ed him about con­duct­ing a full-scale audit of Cow Click­er, which drew about 180,000 users.

    Face­book recent­ly sent him an auto­mat­ed mes­sage say­ing he would have to agree to an app-review process by Aug. 1 to retain access to Facebook’s plat­form and cer­tain slices of user data, includ­ing a user’s friend list, a link to their pro­file, their gen­der and age range. Mr. Bogost said he would “prob­a­bly” go through the review process.

    It is dif­fi­cult for Face­book to track down all the user data gob­bled up by devel­op­ers, owing large­ly to the way the plat­form was designed, accord­ing to devel­op­ers, for­mer Face­book employ­ees and aca­d­e­mics.

    Face­book cre­at­ed its devel­op­er plat­form in 2007, giv­ing out­siders the abil­i­ty to build busi­ness­es by lever­ag­ing the Face­book data of users and their friends. Face­book tight­ened access in 2014 and gave pre-exist­ing apps a one-year grace peri­od to com­ply with the new rules.

    Face­book engi­neers work­ing on the plat­form didn’t always doc­u­ment their changes, accord­ing to one for­mer employ­ee. At times, apps would stop work­ing because of some unan­nounced tweak by a Face­book employ­ee and devel­op­ers would have to com­plain to get it fixed, devel­op­ers said.

    Over the years, Face­book at times tried to build sys­tems that would allow the com­pa­ny to track down user info gleaned from the devel­op­er platform—but those efforts failed in part for tech­ni­cal rea­sons, for­mer employ­ees said.

    The inter­nal inves­ti­ga­tion is a sign of what Mr. Archi­bong, echo­ing oth­er Face­book exec­u­tives, described as a mas­sive cul­tur­al shift with­in Face­book to focus more on “enforce­ment as a key com­po­nent” of its sys­tem. Pre­vi­ous­ly, exec­u­tives have said, the empha­sis was on growth and con­nect­ing more users to one anoth­er around the world.

    Face­book has said its probe will start with apps that had user bases of around 100,000 peo­ple or more, or apps that pulled exten­sive data about a small­er group of peo­ple.

    Mr. Archi­bong said poten­tial exam­ples of wrong­do­ing would be stor­ing per­son­al­ly iden­ti­fi­able infor­ma­tion about users and shar­ing or sell­ing that infor­ma­tion, as the com­pa­ny says Mr. Kogan did. Mr. Kogan said at a Sen­ate hear­ing this month that he was “very regret­ful” that peo­ple were angry to learn about how their data was used but that he didn’t do any­thing dif­fer­ent than oth­er devel­op­ers.

    Mr. Archi­bong said the vast majority—“99.99999999%”— of Face­book devel­op­ers are good actors and that the firm doesn’t want to unnec­es­sar­i­ly alien­ate them. Many of the devel­op­ers involved in the probe “are going to be the same devel­op­ers that we’re going to be work­ing with five years from now on the newest and lat­est and great­est stuff and I want them to be excit­ed about our plat­form,” he added.

    Face­book said it has “large teams of inter­nal and exter­nal experts” work­ing on the inves­ti­ga­tions. Mr. Archi­bong said Face­book still expects the inves­ti­ga­tion to take “months and months” but added that the tim­ing was “some­what amor­phous.”

    ———-

    “Facebook’s Lat­est Prob­lem: It Can’t Track Where Much of the Data Went” by Deepa Seethara­man; The Wall Street Jour­nal; 06/27/2018

    “Face­book Inc.’s inter­nal probe into poten­tial mis­use of user data is hit­ting fun­da­men­tal road­blocks: The com­pa­ny can’t track where much of the data went after it left the plat­form or fig­ure out where it is now.”

    What hap­pened to that moun­tain of per­son­al data Face­book was dol­ing out to any ran­dom app devel­op­er? Face­book is still try­ing to fig­ure out who all the app devel­op­ers were:

    ...
    Three months after CEO Mark Zucker­berg pledged to inves­ti­gate all apps that had access to large amounts of Face­book data, the com­pa­ny is still comb­ing its sys­tem to locate the devel­op­ers behind those prod­ucts and find out how they used the infor­ma­tion between 2007 and 2015, when the com­pa­ny offi­cial­ly cut data access for all apps. Mr. Zucker­berg has said the process will cost mil­lions of dol­lars.

    One prob­lem is that many of the app devel­op­ers that scooped up unusu­al­ly large chunks of data are out of busi­ness, accord­ing to devel­op­ers and for­mer Face­book employ­ees. In some cas­es, the com­pa­ny says, devel­op­ers con­tact­ed by Face­book aren’t respond­ing to requests for fur­ther infor­ma­tion.

    Face­book is now try­ing to foren­si­cal­ly piece togeth­er what hap­pened to large chunks of data, and then deter­mine whether it was used in a way that needs to be dis­closed to users and reg­u­la­tors. In cas­es where the com­pa­ny spots red flags, Face­book said it would dis­patch audi­tors to ana­lyze the servers of those devel­op­ers and inter­ro­gate them about their busi­ness prac­tices.

    Ime Archi­bong, Facebook’s vice pres­i­dent of prod­uct part­ner­ships, said most devel­op­ers have been “respon­sive” but not­ed that the process requires a fair bit of detec­tive work on their end. “They have to go back and think about how these appli­ca­tions were built back in the day,” Mr. Archi­bong said.

    Face­book said in May it has sus­pend­ed 200 apps for poten­tial­ly vio­lat­ing its rules. Mr. Archi­bong declined to pro­vide a detailed update on the sta­tus of the inves­ti­ga­tion or iden­ti­fy the 200 apps that were sus­pend­ed thus far.
    ...

    And Face­book has­n’t even got­ten access to Cam­bridge Ana­lyt­i­ca’s servers to see what Cam­bridge Ana­lyt­i­ca may have done with the data:

    ...
    Facebook’s app inves­ti­ga­tion is a response to broad­er crit­i­cism over rev­e­la­tions ear­li­er this year that data-ana­lyt­ics firm Cam­bridge Ana­lyt­i­ca improp­er­ly accessed and retained user data obtained from Alek­san­dr Kogan, a psy­chol­o­gy pro­fes­sor at the Uni­ver­si­ty of Cam­bridge. The data, which was gath­ered by Mr. Kogan and his asso­ciates through a per­son­al­i­ty-quiz app, was used by the Trump cam­paign in 2016. Face­book even­tu­al­ly noti­fied around 87 mil­lion users that their data may have been improp­er­ly shared with Cam­bridge Ana­lyt­i­ca, though many ques­tions remain about that inci­dent as well.

    Face­book was blocked from access­ing Cam­bridge Ana­lyt­i­ca servers by the U.K. gov­ern­ment and doesn’t yet know what data the now-defunct com­pa­ny may have stored.
    ...

    And there’s no legal neces­si­ty for com­pa­nies to com­ply with Face­book, so this app data usage audit is prob­a­bly going to be pret­ty spot­ty at best:

    ...
    The results of Facebook’s inter­nal probe could have far-reach­ing ram­i­fi­ca­tions as law­mak­ers world-wide con­tin­ue to hold hear­ings and con­tem­plate tougher reg­u­la­tion of social-media plat­forms like Face­book.

    ...

    Some devel­op­ers say they have lit­tle incen­tive to respond to Facebook’s requests to coop­er­ate with the probe, either because they are out of busi­ness, have moved on to oth­er projects or are uneasy about allow­ing anoth­er com­pa­ny to look at their servers and the way their apps are con­struct­ed. Such intel­lec­tu­al prop­er­ty is “the lifeblood” of a developer’s busi­ness, said Mor­gan Reed, pres­i­dent of ACT | The App Asso­ci­a­tion, a trade group that rep­re­sents more than 5,000 app mak­ers and con­nect­ed-device com­pa­nies.

    In addi­tion, Face­book doesn’t have legal author­i­ty to force devel­op­ers to coop­er­ate.

    “They can’t real­ly com­pel these devel­op­ers to hand over infor­ma­tion,” said Ian Bogost, a pro­fes­sor at Geor­gia Insti­tute of Tech­nol­o­gy. “This is not a fed­er­al inquiry about a crime or some­thing. It’s a pri­vate com­pa­ny. What are the con­se­quences?”
    ...

    And note how it’s apparently difficult for Facebook to simply know what data it gave away, which is blamed on technical difficulties with how the developer platform was designed. That’s absurd, if true. If there’s one thing the company should know, it’s what it gave away:

    ...
    It is dif­fi­cult for Face­book to track down all the user data gob­bled up by devel­op­ers, owing large­ly to the way the plat­form was designed, accord­ing to devel­op­ers, for­mer Face­book employ­ees and aca­d­e­mics.

    Face­book cre­at­ed its devel­op­er plat­form in 2007, giv­ing out­siders the abil­i­ty to build busi­ness­es by lever­ag­ing the Face­book data of users and their friends. Face­book tight­ened access in 2014 and gave pre-exist­ing apps a one-year grace peri­od to com­ply with the new rules.

    Face­book engi­neers work­ing on the plat­form didn’t always doc­u­ment their changes, accord­ing to one for­mer employ­ee. At times, apps would stop work­ing because of some unan­nounced tweak by a Face­book employ­ee and devel­op­ers would have to com­plain to get it fixed, devel­op­ers said.

    Over the years, Face­book at times tried to build sys­tems that would allow the com­pa­ny to track down user info gleaned from the devel­op­er platform—but those efforts failed in part for tech­ni­cal rea­sons, for­mer employ­ees said.
    ...

    And note the tragicomic juxtaposition of Facebook defining data-use ‘wrongdoing’ as including things like storing personally identifiable information, as Kogan’s GSR did for the Cambridge Analytica app. And yet, as Kogan has repeatedly pointed out, this was routine for app developers, and all the evidence suggests he is correct.

    Despite that, Face­book’s vice pres­i­dent of prod­uct part­ner­ships pro­claimed that the vast majority—“99.99999999%”— of Face­book devel­op­ers are good actors and that the firm doesn’t want to unnec­es­sar­i­ly alien­ate them.

    So the wrongdoing was routine, and yet “99.99999999%” of app developers did nothing wrong with the data from Facebook’s perspective. It’s the kind of narrative that makes it clear why Facebook doesn’t appear to want to try very hard in conducting this audit:

    ...
    Mr. Archi­bong said poten­tial exam­ples of wrong­do­ing would be stor­ing per­son­al­ly iden­ti­fi­able infor­ma­tion about users and shar­ing or sell­ing that infor­ma­tion, as the com­pa­ny says Mr. Kogan did. Mr. Kogan said at a Sen­ate hear­ing this month that he was “very regret­ful” that peo­ple were angry to learn about how their data was used but that he didn’t do any­thing dif­fer­ent than oth­er devel­op­ers.

    Mr. Archi­bong said the vast majority—“99.99999999%”— of Face­book devel­op­ers are good actors and that the firm doesn’t want to unnec­es­sar­i­ly alien­ate them. Many of the devel­op­ers involved in the probe “are going to be the same devel­op­ers that we’re going to be work­ing with five years from now on the newest and lat­est and great­est stuff and I want them to be excit­ed about our plat­form,” he added.
    ...

    Final­ly, there’s the warn­ing from Face­book at the end that the time­line for the audit is “some­what amor­phous”. Which trans­lates as “as slow­ly as pos­si­ble” (the oth­er kind of ASAP):

    ...
    Facebook said it has “large teams of internal and external experts” working on the investigations. Mr. Archibong said Facebook still expects the investigation to take “months and months” but added that the timing was “somewhat amorphous.”
    ...

    So it sounds like Face­book has no idea what it gave away and not even nec­es­sar­i­ly who it gave it away to. We just know that data poten­tial­ly includ­ed your inbox or your out­box until Octo­ber 6, 2015.

    All in all, one of the key lessons we can take from all this is that Facebook is determined to learn nothing. A combination of feigned ignorance and systemic ignorance really is Facebook’s best defense at this point.

    Another lesson from all this is that Facebook users should probably do a personal audit of their inboxes and outboxes. There are probably quite a few other entities doing that same audit.

    Posted by Pterrafractyl | June 30, 2018, 11:43 pm
  19. Here’s a story about privacy violations that’s surprisingly good news for Facebook. Good news for Facebook in the sense that it’s potentially really bad news for other tech giants like Google, Microsoft and Yahoo (now owned by Verizon), and more or less ‘evens the score’ on the Big Tech bad-news front:

    Remem­ber the sto­ry about how Face­book was grant­i­ng app devel­op­ers poten­tial access to the mes­sage inbox­es of Face­book users? Well, as we should prob­a­bly expect, it turns out there are large num­bers of apps for Google’s, Microsoft­’s, and Yahoo’s free email ser­vices that can also pro­vide the devel­op­ers of those apps full access to read your emails.

    Yep, if you signed up for any sort of third-party email app, the people at that app company can potentially read all your emails. And this is the case for Gmail and Yahoo, two of the biggest free email providers on the planet.

    On the one hand, it makes per­fect sense that human app devel­op­ers could poten­tial­ly be able to read your emails since their apps are lit­er­al­ly doing that algo­rith­mi­cal­ly and humans have to build, tweak, and main­tain those algo­rithms. But on the oth­er hand, it’s kind of amaz­ing that this is bare­ly known or rec­og­nized. As one app devel­op­er in the fol­low­ing arti­cle describes it, “Some peo­ple might con­sid­er that to be a dirty secret...It’s kind of real­i­ty.”

    Reality is a dirty secret. That’s an apt way to put it, because as the article also points out, data-mining companies commonly use free apps and services to hook users into giving up access to their inboxes without making that clear.

    And it can be anyone from large corporations down to one-man operations developing these apps and potentially gaining this kind of information. One of the companies described below, Return Path, partners with 163 different email app developers. In all of those cases, Return Path gets potential access to that email data.

    Return Path’s systems are supposed to give marketers a sense of how many people actually read their emails. So it provides useful email apps to users and uses the inbox access it gains to see whether people actually open the emails sent by Return Path’s marketer clients. Return Path says its software is supposed to strip personal emails out of its systems and focus only on commercial emails, and that it does this automatically by examining senders’ domain names and searching for specific words, such as “grandma.” But in 2016, Return Path discovered its algorithm was mislabeling many personal emails as commercial, allowing millions of personal messages that should have been deleted to pass through to its servers. And in response to this bug, Return Path gave some of its employees access to users’ inboxes so they could hammer the bugs out of the algorithm. So when Return Path’s algorithms turned out to be violating user privacy, humans were given access to user emails. And that’s a single anecdote from a single company.
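
    To see why that sort of filter is so fragile, here is a toy sketch of a domain-and-keyword classifier along the lines the article describes. This is emphatically not Return Path’s actual software, and the domain list and keywords are invented placeholders; the point is that anything the heuristics fail to flag as personal flows through to the analytics servers:

        # Toy illustration of a keyword/domain "personal vs. commercial" email
        # filter, per the description above. NOT Return Path's code; the lists
        # here are invented placeholders.

        FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}
        PERSONAL_KEYWORDS = {"grandma", "mom", "dad", "love you"}

        def looks_personal(sender: str, body: str) -> bool:
            domain = sender.rsplit("@", 1)[-1].lower()
            text = body.lower()
            return domain in FREE_MAIL_DOMAINS or any(
                word in text for word in PERSONAL_KEYWORDS
            )

        def route(sender: str, body: str) -> str:
            # Personal mail is supposed to be deleted; everything else passes
            # through to the marketing-analytics pipeline.
            return "delete" if looks_personal(sender, body) else "keep_for_analytics"

        # A personal note that avoids the magic words slips straight through --
        # the failure mode that reportedly let millions of personal messages
        # reach Return Path's servers.
        print(route("friend@smallisp.net", "Dinner at our place on Friday?"))
        # -> keep_for_analytics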

    Google asserts that it vets all of the developers and has clear rules restricting developers’ ability to store user data. But as the article notes, Google doesn’t actually do much to police those policies, much like what we saw with Facebook. The co-founder of an email app for real-estate agents bluntly tells the Wall Street Journal, “I have not seen any evidence of human review” by Google employees. So if you use any third-party email apps, keep in mind that those third parties might have a front-row view of all of your emails. It’s quite a dirty open secret.

    So while we haven’t yet reached the nightmare scenario of learning that everyone’s emails around the globe have been hacked and are in the hands of unknown entities, we’re steadily getting there:

    The Wall Street Jour­nal

    Tech’s ‘Dirty Secret’: The App Devel­op­ers Sift­ing Through Your Gmail
    Soft­ware devel­op­ers scan hun­dreds of mil­lions of emails of users who sign up for email-based ser­vices

    By Dou­glas MacMil­lan
    July 2, 2018 11:14 a.m. ET

    Google said a year ago it would stop its com­put­ers from scan­ning the inbox­es of Gmail users for infor­ma­tion to per­son­al­ize adver­tise­ments, say­ing it want­ed users to “remain con­fi­dent that Google will keep pri­va­cy and secu­ri­ty para­mount.”

    But the inter­net giant con­tin­ues to let hun­dreds of out­side soft­ware devel­op­ers scan the inbox­es of mil­lions of Gmail users who signed up for email-based ser­vices offer­ing shop­ping price com­par­isons, auto­mat­ed trav­el-itin­er­ary plan­ners or oth­er tools. Google does lit­tle to police those devel­op­ers, who train their computers—and, in some cas­es, employees—to read their users’ emails, a Wall Street Jour­nal exam­i­na­tion has found.

    One of those com­pa­nies is Return Path Inc., which col­lects data for mar­keters by scan­ning the inbox­es of more than two mil­lion peo­ple who have signed up for one of the free apps in Return Path’s part­ner net­work using a Gmail, Microsoft Corp. or Yahoo email address. Com­put­ers nor­mal­ly do the scan­ning, ana­lyz­ing about 100 mil­lion emails a day. At one point about two years ago, Return Path employ­ees read about 8,000 unredact­ed emails to help train the company’s soft­ware, peo­ple famil­iar with the episode say.

    In anoth­er case, employ­ees of Edi­son Soft­ware, anoth­er Gmail devel­op­er that makes a mobile app for read­ing and orga­niz­ing email, per­son­al­ly reviewed the emails of hun­dreds of users to build a new fea­ture, says Mikael Bern­er, the company’s CEO.

    Let­ting employ­ees read user emails has become “com­mon prac­tice” for com­pa­nies that col­lect this type of data, says Thede Loder, the for­mer chief tech­nol­o­gy offi­cer at eData­Source Inc., a rival to Return Path. He says engi­neers at eData­Source occa­sion­al­ly reviewed emails when build­ing and improv­ing soft­ware algo­rithms.

    “Some peo­ple might con­sid­er that to be a dirty secret,” says Mr. Loder. “It’s kind of real­i­ty.”

    Nei­ther Return Path nor Edi­son asked users specif­i­cal­ly whether it could read their emails. Both com­pa­nies say the prac­tice is cov­ered by their user agree­ments, and that they used strict pro­to­cols for the employ­ees who read emails. eData­Source says it pre­vi­ous­ly allowed employ­ees to read some email data but recent­ly end­ed that prac­tice to bet­ter pro­tect user pri­va­cy.

    Google, a unit of Alpha­bet Inc., says it pro­vides data only to out­side devel­op­ers it has vet­ted and to whom users have explic­it­ly grant­ed per­mis­sion to access email. Google’s own employ­ees read emails only “in very spe­cif­ic cas­es where you ask us to and give con­sent, or where we need to for secu­ri­ty pur­pos­es, such as inves­ti­gat­ing a bug or abuse,” the com­pa­ny said in a writ­ten state­ment.

    This exam­i­na­tion of email data pri­va­cy is based on inter­views with more than two dozen cur­rent and for­mer employ­ees of email app mak­ers and data com­pa­nies. The lat­i­tude out­side devel­op­ers have in han­dling user data shows how even as Google and oth­er tech giants have tout­ed efforts to tight­en pri­va­cy, they have left the door open to oth­ers with dif­fer­ent over­sight prac­tices.

    Face­book Inc. for years let out­side devel­op­ers gain access to its users’ data. That prac­tice, which Face­book has said it stopped by 2015, spawned a scan­dal when the social-media giant this year said it sus­pect­ed one devel­op­er of sell­ing data on tens of mil­lions of users to a research firm with ties to Pres­i­dent Don­ald Trump’s 2016 cam­paign. The episode led to renewed scruti­ny from law­mak­ers and reg­u­la­tors in the U.S. and Europe over how inter­net com­pa­nies pro­tect user infor­ma­tion.

    There is no indi­ca­tion that Return Path, Edi­son or oth­er devel­op­ers of Gmail add-ons have mis­used data in that fash­ion. Nev­er­the­less, pri­va­cy advo­cates and many tech indus­try exec­u­tives say open­ing access to email data risks sim­i­lar leaks.

    For com­pa­nies that want data for mar­ket­ing and oth­er pur­pos­es, tap­ping into email is attrac­tive because it con­tains shop­ping his­to­ries, trav­el itin­er­aries, finan­cial records and per­son­al com­mu­ni­ca­tions. Data-min­ing com­pa­nies com­mon­ly use free apps and ser­vices to hook users into giv­ing up access to their inbox­es with­out clear­ly stat­ing what data they col­lect and what they are doing with it, accord­ing to cur­rent and for­mer employ­ees of these com­pa­nies.

    Gmail is espe­cial­ly valu­able as the world’s dom­i­nant email ser­vice, with 1.4 bil­lion users. Near­ly two-thirds of all active email users glob­al­ly have a Gmail account, accord­ing to com­Score , and Gmail has more users than the next 25 largest email providers com­bined. The data min­ers gen­er­al­ly have access to oth­er email ser­vices besides Gmail, includ­ing those from Microsoft and Ver­i­zon Com­mu­ni­ca­tions Inc.’s Oath unit, formed after the com­pa­ny acquired email pio­neer Yahoo. Those are the next two largest email providers, accord­ing to com­Score.

    Oath says access to email data is con­sid­ered “on a case-by-case basis” and requires “express con­sent” from users. A Microsoft spokes­woman says it is com­mit­ted to pro­tect­ing cus­tomers’ pri­va­cy and that its terms of use for devel­op­ers pro­hib­it access­ing cus­tomer data with­out con­sent, and pro­vide guide­lines for how data can and can’t be used. Nei­ther company’s pri­va­cy or devel­op­er poli­cies men­tion allow­ing peo­ple to see user data.

    Google’s devel­op­er agree­ment pro­hibits expos­ing a user’s pri­vate data to any­one else “with­out explic­it opt-in con­sent from that user.” Its rules also bar app devel­op­ers from mak­ing per­ma­nent copies of user data and stor­ing them in a data­base.

    Devel­op­ers say Google does lit­tle to enforce those poli­cies. “I have not seen any evi­dence of human review” by Google employ­ees, says Zvi Band, the co-founder of Con­tac­tu­al­ly, an email app for real-estate agents. He says Con­tac­tu­al­ly has nev­er had employ­ees review emails with their own eyes.

    Google said it man­u­al­ly reviews every devel­op­er and appli­ca­tion request­ing access to Gmail. The com­pa­ny checks the domain name of the sender to look for any­one who has a his­to­ry of abus­ing Google poli­cies, and reads the pri­va­cy poli­cies to make sure they are clear. “If we ever run into areas where dis­clo­sures and prac­tices are unclear, Google takes quick action with the devel­op­er,” a spokesman said.

    Google says it lets any user revoke access to apps at any point. Busi­ness users of Gmail can also restrict access to cer­tain email apps to the employ­ees in their orga­ni­za­tion, the com­pa­ny said, “ensur­ing that only apps that have been vet­ted and are trust­ed by their orga­ni­za­tion are used.”

    Google has con­tend­ed with pri­va­cy con­cerns since it launched Gmail in 2004. The company’s soft­ware scanned email mes­sages and sold ads across the top of inbox­es relat­ed to their con­tent. That year, 31 pri­va­cy and con­sumer groups sent a let­ter to Google co-founders Lar­ry Page and Sergey Brin say­ing the prac­tice “vio­lates the implic­it trust of an email ser­vice provider.” Google respond­ed that oth­er email providers were already using com­put­ers to scan email to pro­tect against spam and hack­ers, and that show­ing ads helped off­set the cost of its free ser­vice.

    ...

    Between 2010 and 2016, Google faced at least three law­suits, brought by stu­dent users of Google apps as well as a broad­er set of email users, who accused it of vio­lat­ing fed­er­al wire­tap­ping laws. Google, in its legal defense, empha­sized that its pri­va­cy pol­i­cy for Gmail said that “no human reads your email to tar­get ads or relat­ed infor­ma­tion to you with­out your con­sent.” Google set­tled one of the law­suits; the oth­er two were dis­missed.

    In 2014, Google said it would stop scan­ning Gmail inbox­es of stu­dent, busi­ness and gov­ern­ment users. In June of last year, it said it was halt­ing all Gmail scan­ning for ads.

    Mean­while, Google in 2014 start­ed pro­mot­ing Gmail as a plat­form for devel­op­ers to lever­age the con­tents of users’ email to devel­op apps for such pro­duc­tiv­i­ty tasks as sched­ul­ing meet­ings. A new Gmail ver­sion launched this spring adds a link next to inbox­es to a curat­ed menu of 34 add-ons, includ­ing one that offers to track users’ out­go­ing emails to report whether recip­i­ents open them.

    Google says apps make Gmail more use­ful. Turn­ing Gmail into a plat­form emu­lates Microsoft’s Win­dows and Apple Inc.’s iPhone, which attract­ed out­side devel­op­ers to make their soft­ware more use­ful to cor­po­rate users.

    Google doesn’t dis­close how many apps have access to Gmail. The total num­ber of email apps in the top two mobile app stores, for Apple’s iOS and Android, jumped to 379 last year, from 142 five years ear­li­er, accord­ing to researcher App Annie. Most can link to Gmail and oth­er major providers.

    Almost any­one can build an app that con­nects to Gmail accounts using Google’s soft­ware called an appli­ca­tion pro­gram­ming inter­face, or API. When Gmail users open one of these apps, they are shown a but­ton ask­ing per­mis­sion to access their inbox. If they click it, Google grants the devel­op­er a key to access the entire con­tents of their inbox, includ­ing the abil­i­ty to read the con­tents of mes­sages and send and delete indi­vid­ual mes­sages on their behalf. Microsoft also offers API tools for email.

    With Gmail, the devel­op­ers who get this access range from one-per­son star­tups to large cor­po­ra­tions, and their process­es for pro­tect­ing data pri­va­cy vary.

    Return Path, based in New York, gains access to inbox­es when users sign up for one of its apps or one of the 163 apps offered by Return Path’s part­ners. Return Path gives the app mak­ers soft­ware tools for man­ag­ing email data in return for let­ting it peer into their users’ inbox­es.

    Return Path’s sys­tem is designed to check if com­mer­cial emails are read by their intend­ed recip­i­ents. It pro­vides cus­tomers includ­ing Overstock.com Inc. a dash­board where they can see which of their mar­ket­ing mes­sages reached the most cus­tomers. Over­stock didn’t respond to a request for com­ment.

    Mar­keters can view screen­shots of some actu­al emails—with names and address­es stripped out—to see what their com­peti­tors are send­ing. Return Path says it doesn’t let mar­keters tar­get emails specif­i­cal­ly to users.

    Navideh Forghani, 34 years old, of Phoenix, signed up this year for Earny Inc., a tool that com­pares receipts in inbox­es to prices across the web. When Earny finds a bet­ter price for items its users pur­chase, it auto­mat­i­cal­ly con­tacts the sell­ers and obtains refunds for the dif­fer­ence, which it shares with the users.

    Earny had a part­ner­ship with Return Path, which con­nect­ed its com­put­er scan­ners to Ms. Forghani’s email and began col­lect­ing and pro­cess­ing all of the new mes­sages that arrived in her inbox. Ms. Forghani says she didn’t read Earny’s pri­va­cy pol­i­cy close­ly and has nev­er heard of Return Path. “It is def­i­nite­ly con­cern­ing,” she says of the infor­ma­tion col­lec­tion.

    Matt Blum­berg, Return Path’s chief exec­u­tive, says users are giv­en clear notice that their email will be mon­i­tored. All of Return Path’s part­ner apps men­tion the email mon­i­tor­ing on their web­sites, he says, and Earny’s pri­va­cy pol­i­cy states that Return Path would “have access to your infor­ma­tion and will be per­mit­ted to use that infor­ma­tion accord­ing to their own pri­va­cy pol­i­cy.”

    Oded Vakrat, Earny’s CEO, says his com­pa­ny doesn’t sell or share data with any out­side com­pa­nies. Earny users can opt out of Return Path’s email mon­i­tor­ing, he says. “We are active­ly look­ing for ways to improve and go above and beyond with how we com­mu­ni­cate our pri­va­cy pol­i­cy,” he says.

    Return Path says its com­put­ers are sup­posed to strip out per­son­al emails from what it sends into its sys­tem by exam­in­ing senders’ domain names and search­ing for spe­cif­ic words, such as “grand­ma.” The com­put­ers are sup­posed to delete such emails.

    In 2016, Return Path dis­cov­ered its algo­rithm was mis­la­bel­ing many per­son­al emails as com­mer­cial, accord­ing to a per­son famil­iar with the mat­ter. That meant mil­lions of per­son­al mes­sages that should have been delet­ed were pass­ing through to Return Path’s servers, the per­son says.

    To cor­rect the prob­lem, Return Path assigned two data ana­lysts to spend sev­er­al days read­ing 8,000 emails and man­u­al­ly label­ing each one, the per­son says. The data helped train the company’s com­put­ers to bet­ter dis­tin­guish between per­son­al and com­mer­cial emails.

    Return Path declined to com­ment on details of the inci­dent, but said it some­times lets employ­ees see emails when fix­ing prob­lems with its algo­rithms. The com­pa­ny uses “extreme cau­tion” to safe­guard pri­va­cy by lim­it­ing access to a few engi­neers and data sci­en­tists and delet­ing all data after the work is com­plet­ed, says Mr. Blum­berg.

    Jules Polonet­sky, CEO of the non­prof­it Future of Pri­va­cy Forum, says he thinks users want to know specif­i­cal­ly whether humans are review­ing their data, and that apps should explain that clear­ly.

    At Edi­son Soft­ware, based in San Jose, Calif., exec­u­tives and engi­neers devel­op­ing a new fea­ture to sug­gest “smart replies” based on emails’ con­tent ini­tial­ly used their own emails for the process, but there wasn’t enough data to train the algo­rithm, says Mr. Bern­er, the CEO.

    Two of its arti­fi­cial-intel­li­gence engi­neers signed agree­ments not to share any­thing they read, Mr. Bern­er says. Then, work­ing on machines that pre­vent­ed them from down­load­ing infor­ma­tion to oth­er devices, they read the per­son­al email mes­sages of hun­dreds of users—with user infor­ma­tion already redacted—along with the system’s sug­gest­ed replies, man­u­al­ly indi­cat­ing whether each made sense.

    Nei­ther Return Path nor Edi­son men­tions the pos­si­bil­i­ty of humans view­ing users’ emails in their pri­va­cy poli­cies.

    Mr. Bern­er says he believes Edison’s pri­va­cy pol­i­cy cov­ers this prac­tice by telling users the com­pa­ny col­lects and stores per­son­al mes­sages to improve its arti­fi­cial-intel­li­gence algo­rithms. Edi­son users can opt out of data col­lec­tion, he says. The prac­tice, he says, is sim­i­lar to a tele­phone com­pa­ny tech­ni­cian lis­ten­ing to a phone line to make sure it is work­ing.

    ———-

    “Tech’s ‘Dirty Secret’: The App Developers Sifting Through Your Gmail” by Douglas MacMillan; The Wall Street Journal; 07/02/2018

    “Facebook Inc. for years let outside developers gain access to its users’ data. That practice, which Facebook has said it stopped by 2015, spawned a scandal when the social-media giant this year said it suspected one developer of selling data on tens of millions of users to a research firm with ties to President Donald Trump’s 2016 campaign. The episode led to renewed scrutiny from lawmakers and regulators in the U.S. and Europe over how internet companies protect user information.”

    Will the kind of public backlash Facebook is enduring over its shockingly loose data privacy policies for third-party app developers translate into a more general public backlash against third-party apps? We’ll find out. Probably soon. Because if learning that Google’s Gmail, the most popular free email service in the world, has a similar third-party app data privacy policy can’t generate that backlash, probably nothing will:

    ...
    Google said a year ago it would stop its com­put­ers from scan­ning the inbox­es of Gmail users for infor­ma­tion to per­son­al­ize adver­tise­ments, say­ing it want­ed users to “remain con­fi­dent that Google will keep pri­va­cy and secu­ri­ty para­mount.”

    But the inter­net giant con­tin­ues to let hun­dreds of out­side soft­ware devel­op­ers scan the inbox­es of mil­lions of Gmail users who signed up for email-based ser­vices offer­ing shop­ping price com­par­isons, auto­mat­ed trav­el-itin­er­ary plan­ners or oth­er tools. Google does lit­tle to police those devel­op­ers, who train their computers—and, in some cas­es, employees—to read their users’ emails, a Wall Street Jour­nal exam­i­na­tion has found.
    ...

    “Google does lit­tle to police those devel­op­ers, who train their computers—and, in some cas­es, employees—to read their users’ emails, a Wall Street Jour­nal exam­i­na­tion has found.”

    That sure sounds a lot like every one of the Facebook scandals we’ve seen of late: the company assures us that it has policies in place to prevent data abuses, but then we learn that those policies aren’t policed.

    And it’s not just Google. Apps for Microsoft and Yahoo email services have these data privacy issues too. And as the article notes, data-mining companies commonly use apps to gain access to inboxes without making that clear. It really is a remarkably dirty open secret:

    ...
    For com­pa­nies that want data for mar­ket­ing and oth­er pur­pos­es, tap­ping into email is attrac­tive because it con­tains shop­ping his­to­ries, trav­el itin­er­aries, finan­cial records and per­son­al com­mu­ni­ca­tions. Data-min­ing com­pa­nies com­mon­ly use free apps and ser­vices to hook users into giv­ing up access to their inbox­es with­out clear­ly stat­ing what data they col­lect and what they are doing with it, accord­ing to cur­rent and for­mer employ­ees of these com­pa­nies.

    Gmail is espe­cial­ly valu­able as the world’s dom­i­nant email ser­vice, with 1.4 bil­lion users. Near­ly two-thirds of all active email users glob­al­ly have a Gmail account, accord­ing to com­Score , and Gmail has more users than the next 25 largest email providers com­bined. The data min­ers gen­er­al­ly have access to oth­er email ser­vices besides Gmail, includ­ing those from Microsoft and Ver­i­zon Com­mu­ni­ca­tions Inc.’s Oath unit, formed after the com­pa­ny acquired email pio­neer Yahoo. Those are the next two largest email providers, accord­ing to com­Score.

    Oath says access to email data is con­sid­ered “on a case-by-case basis” and requires “express con­sent” from users. A Microsoft spokes­woman says it is com­mit­ted to pro­tect­ing cus­tomers’ pri­va­cy and that its terms of use for devel­op­ers pro­hib­it access­ing cus­tomer data with­out con­sent, and pro­vide guide­lines for how data can and can’t be used. Nei­ther company’s pri­va­cy or devel­op­er poli­cies men­tion allow­ing peo­ple to see user data.
    ...

    And in some cases there’s a single data-mining firm that partners with a large number of email app developers. Return Path has access to the inboxes of two million people via its network of app developer partners, and its algorithms scan around 100 million emails a day:

    ...
    One of those com­pa­nies is Return Path Inc., which col­lects data for mar­keters by scan­ning the inbox­es of more than two mil­lion peo­ple who have signed up for one of the free apps in Return Path’s part­ner net­work using a Gmail, Microsoft Corp. or Yahoo email address. Com­put­ers nor­mal­ly do the scan­ning, ana­lyz­ing about 100 mil­lion emails a day. At one point about two years ago, Return Path employ­ees read about 8,000 unredact­ed emails to help train the company’s soft­ware, peo­ple famil­iar with the episode say.
    ...

    Another app developer, Edison Software, personally examined hundreds of users’ emails to develop a new feature. It’s a reminder that the development of new features more or less acts as a permanent excuse for developers to have humans reading user emails:

    ...
    In anoth­er case, employ­ees of Edi­son Soft­ware, anoth­er Gmail devel­op­er that makes a mobile app for read­ing and orga­niz­ing email, per­son­al­ly reviewed the emails of hun­dreds of users to build a new fea­ture, says Mikael Bern­er, the company’s CEO.
    ...

    And according to the former chief technology officer of eDataSource, one of Return Path’s competitors, letting employees read user emails is “common practice”. As he put it, that common practice is a dirty secret and kind of reality:

    ...
    Let­ting employ­ees read user emails has become “com­mon prac­tice” for com­pa­nies that col­lect this type of data, says Thede Loder, the for­mer chief tech­nol­o­gy offi­cer at eData­Source Inc., a rival to Return Path. He says engi­neers at eData­Source occa­sion­al­ly reviewed emails when build­ing and improv­ing soft­ware algo­rithms.

    “Some peo­ple might con­sid­er that to be a dirty secret,” says Mr. Loder. “It’s kind of real­i­ty.”
    ...

    And notice how neither Return Path nor Edison Software explicitly told users that humans might be reading their emails. Instead, both say the practice is covered by their user agreements. In other words, a user agreement apparently doesn’t need to spell out that a third party is getting this kind of access to your emails:

    ...
    Nei­ther Return Path nor Edi­son asked users specif­i­cal­ly whether it could read their emails. Both com­pa­nies say the prac­tice is cov­ered by their user agree­ments, and that they used strict pro­to­cols for the employ­ees who read emails. eData­Source says it pre­vi­ous­ly allowed employ­ees to read some email data but recent­ly end­ed that prac­tice to bet­ter pro­tect user pri­va­cy.
    ...

    And yet Google claims that it reviews the pri­va­cy poli­cies of all apps to make sure they are clear:

    ...
    Google said it man­u­al­ly reviews every devel­op­er and appli­ca­tion request­ing access to Gmail. The com­pa­ny checks the domain name of the sender to look for any­one who has a his­to­ry of abus­ing Google poli­cies, and reads the pri­va­cy poli­cies to make sure they are clear. “If we ever run into areas where dis­clo­sures and prac­tices are unclear, Google takes quick action with the devel­op­er,” a spokesman said.
    ...

    “The company checks the domain name of the sender to look for anyone who has a history of abusing Google policies, and reads the privacy policies to make sure they are clear.”

    Notice how Google assures us that it checks the domains of the emails sent by app developers asking for access to this data, searching for anyone with a history of abusing Google policies. On the one hand, it’s good to hear that they are at least doing that. But if that’s the extent of the ‘vetting’, it’s not much vetting.

    Google also claims that it vets all outside developers who are going to get access to this kind of data and only grants that access for users who explicitly gave the developer permission to access their emails. So Google is taking the line that users are granting permission for this, despite everything we’re hearing about users not actually granting explicit permission. It’s another parallel with the Facebook scandal:

    ...
    Google, a unit of Alpha­bet Inc., says it pro­vides data only to out­side devel­op­ers it has vet­ted and to whom users have explic­it­ly grant­ed per­mis­sion to access email. Google’s own employ­ees read emails only “in very spe­cif­ic cas­es where you ask us to and give con­sent, or where we need to for secu­ri­ty pur­pos­es, such as inves­ti­gat­ing a bug or abuse,” the com­pa­ny said in a writ­ten state­ment.

    This exam­i­na­tion of email data pri­va­cy is based on inter­views with more than two dozen cur­rent and for­mer employ­ees of email app mak­ers and data com­pa­nies. The lat­i­tude out­side devel­op­ers have in han­dling user data shows how even as Google and oth­er tech giants have tout­ed efforts to tight­en pri­va­cy, they have left the door open to oth­ers with dif­fer­ent over­sight prac­tices.

    There is no indi­ca­tion that Return Path, Edi­son or oth­er devel­op­ers of Gmail add-ons have mis­used data in that fash­ion. Nev­er­the­less, pri­va­cy advo­cates and many tech indus­try exec­u­tives say open­ing access to email data risks sim­i­lar leaks.
    ...

    And note the implicit acknowledgment of trust issues with app developers: Google assures us that businesses have the option of restricting their employees’ Gmail accounts to trusted apps only. While that might have been intended to be reassuring, it sure sounds like Google acknowledging the validity of those trust issues:

    ...
    Google says it lets any user revoke access to apps at any point. Busi­ness users of Gmail can also restrict access to cer­tain email apps to the employ­ees in their orga­ni­za­tion, the com­pa­ny said, “ensur­ing that only apps that have been vet­ted and are trust­ed by their orga­ni­za­tion are used.”
    ...

    So what’s to stop these third-party app developers from taking the data they collect and reselling it? Well, in Google’s case the developer agreement does at least prohibit exposing a user’s private data to anyone else “without explicit opt-in consent from that user” and bars app developers from making permanent copies of user data and storing them in a database. But much like what we saw with Facebook and its similar policies, it’s not actually enforced:

    ...
    Google’s devel­op­er agree­ment pro­hibits expos­ing a user’s pri­vate data to any­one else “with­out explic­it opt-in con­sent from that user.” Its rules also bar app devel­op­ers from mak­ing per­ma­nent copies of user data and stor­ing them in a data­base.

    Devel­op­ers say Google does lit­tle to enforce those poli­cies. “I have not seen any evi­dence of human review” by Google employ­ees, says Zvi Band, the co-founder of Con­tac­tu­al­ly, an email app for real-estate agents. He says Con­tac­tu­al­ly has nev­er had employ­ees review emails with their own eyes.
    ...

    And note the interesting timing of Google promoting this kind of data access for app developers: it apparently started in 2014. Recall that 2014 is the same year Facebook began restricting the user data access it had been granting developers since 2007. So you have to wonder whether Google’s looser 2014 policy was actually intended to attract app developers bristling at the new Facebook policies:

    ...
    Google has con­tend­ed with pri­va­cy con­cerns since it launched Gmail in 2004. The company’s soft­ware scanned email mes­sages and sold ads across the top of inbox­es relat­ed to their con­tent. That year, 31 pri­va­cy and con­sumer groups sent a let­ter to Google co-founders Lar­ry Page and Sergey Brin say­ing the prac­tice “vio­lates the implic­it trust of an email ser­vice provider.” Google respond­ed that oth­er email providers were already using com­put­ers to scan email to pro­tect against spam and hack­ers, and that show­ing ads helped off­set the cost of its free ser­vice.

    ...

    Between 2010 and 2016, Google faced at least three law­suits, brought by stu­dent users of Google apps as well as a broad­er set of email users, who accused it of vio­lat­ing fed­er­al wire­tap­ping laws. Google, in its legal defense, empha­sized that its pri­va­cy pol­i­cy for Gmail said that “no human reads your email to tar­get ads or relat­ed infor­ma­tion to you with­out your con­sent.” Google set­tled one of the law­suits; the oth­er two were dis­missed.

    In 2014, Google said it would stop scan­ning Gmail inbox­es of stu­dent, busi­ness and gov­ern­ment users. In June of last year, it said it was halt­ing all Gmail scan­ning for ads.

    Mean­while, Google in 2014 start­ed pro­mot­ing Gmail as a plat­form for devel­op­ers to lever­age the con­tents of users’ email to devel­op apps for such pro­duc­tiv­i­ty tasks as sched­ul­ing meet­ings. A new Gmail ver­sion launched this spring adds a link next to inbox­es to a curat­ed menu of 34 add-ons, includ­ing one that offers to track users’ out­go­ing emails to report whether recip­i­ents open them.
    ...

    “Meanwhile, Google in 2014 started promoting Gmail as a platform for developers to leverage the contents of users’ email to develop apps for such productivity tasks as scheduling meetings. A new Gmail version launched this spring adds a link next to inboxes to a curated menu of 34 add-ons, including one that offers to track users’ outgoing emails to report whether recipients open them.”

    Wow, that sure seems like a dirty open secret.
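
    As an aside, the outgoing-email tracking that add-on offers generally works with an invisible “tracking pixel”. Here is a minimal sketch of the technique, with a hypothetical tracker.example domain standing in for whatever logging server a real vendor would run:

        # Minimal sketch of email open-tracking: embed a unique, invisible
        # image URL in the outgoing message; whoever fetches it "opened" the
        # mail. The tracker.example domain and token scheme are hypothetical.
        import uuid

        def add_tracking_pixel(html_body: str, recipient: str) -> str:
            token = uuid.uuid4().hex  # unique per message/recipient
            pixel = (
                f'<img src="https://tracker.example/open/{token}" '
                'width="1" height="1" alt="" style="display:none">'
            )
            # A real tracker would store token -> recipient so that a later
            # GET on that URL (fired when the mail client loads images)
            # registers as an "open" for that recipient.
            return html_body + pixel

        print(add_tracking_pixel("<p>Quarterly offer inside!</p>", "user@example.com"))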

    So how many email apps with this kind of data access are available to people today? Google won’t disclose that. Shouldn’t there be a list it provides somewhere? But in the top two mobile app stores, for Apple’s iOS and Android, the number of email apps more than doubled over the past five years, from 142 to 379:

    ...
    Google says apps make Gmail more use­ful. Turn­ing Gmail into a plat­form emu­lates Microsoft’s Win­dows and Apple Inc.’s iPhone, which attract­ed out­side devel­op­ers to make their soft­ware more use­ful to cor­po­rate users.

    Google doesn’t dis­close how many apps have access to Gmail. The total num­ber of email apps in the top two mobile app stores, for Apple’s iOS and Android, jumped to 379 last year, from 142 five years ear­li­er, accord­ing to researcher App Annie. Most can link to Gmail and oth­er major providers.
    ...

    And that number is only going to keep growing, because almost any company can sign up to be one of these developers. Including one-person companies:

    ...
    Almost any­one can build an app that con­nects to Gmail accounts using Google’s soft­ware called an appli­ca­tion pro­gram­ming inter­face, or API. When Gmail users open one of these apps, they are shown a but­ton ask­ing per­mis­sion to access their inbox. If they click it, Google grants the devel­op­er a key to access the entire con­tents of their inbox, includ­ing the abil­i­ty to read the con­tents of mes­sages and send and delete indi­vid­ual mes­sages on their behalf. Microsoft also offers API tools for email.

    With Gmail, the devel­op­ers who get this access range from one-per­son star­tups to large cor­po­ra­tions, and their process­es for pro­tect­ing data pri­va­cy vary.
    ...
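
    For a sense of how low that barrier is, here is a rough sketch of the consent-and-key exchange from the developer’s side, written against Google’s published Python client libraries (google-auth-oauthlib and google-api-python-client). The scope string is Gmail’s real full-access scope; everything else is a simplified illustration, not any particular vendor’s production code:

        # Sketch of the Gmail consent flow described above, using Google's
        # published Python client libraries. Simplified illustration only.
        from google_auth_oauthlib.flow import InstalledAppFlow
        from googleapiclient.discovery import build

        # Requesting this single scope is the "button asking permission":
        # once the user clicks through, the developer holds a key to the
        # whole mailbox, including reading, sending and deleting messages.
        SCOPES = ["https://mail.google.com/"]

        flow = InstalledAppFlow.from_client_secrets_file(
            "client_secret.json",  # the developer's own OAuth credentials
            scopes=SCOPES,
        )
        creds = flow.run_local_server(port=0)  # opens Google's consent screen

        gmail = build("gmail", "v1", credentials=creds)

        # With consent granted, listing and reading messages takes two calls.
        listing = gmail.users().messages().list(userId="me", maxResults=5).execute()
        for item in listing.get("messages", []):
            msg = gmail.users().messages().get(userId="me", id=item["id"]).execute()
            print(msg.get("snippet"))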

    And note how concentrated this practice appears to be. We just saw that there were 379 email apps in the Apple and Android (Google) app stores last year. And Return Path, the two-million-user company that checks whether you’ve clicked on a client’s marketing emails, has partnerships with 163 of these apps. That gives the company, which few people have ever heard of, potential access to a large share of the people who use email apps. Yikes:

    ...
    Return Path, based in New York, gains access to inbox­es when users sign up for one of its apps or one of the 163 apps offered by Return Path’s part­ners. Return Path gives the app mak­ers soft­ware tools for man­ag­ing email data in return for let­ting it peer into their users’ inbox­es.

    Return Path’s sys­tem is designed to check if com­mer­cial emails are read by their intend­ed recip­i­ents. It pro­vides cus­tomers includ­ing Overstock.com Inc. a dash­board where they can see which of their mar­ket­ing mes­sages reached the most cus­tomers. Over­stock didn’t respond to a request for com­ment.

    Mar­keters can view screen­shots of some actu­al emails—with names and address­es stripped out—to see what their com­peti­tors are send­ing. Return Path says it doesn’t let mar­keters tar­get emails specif­i­cal­ly to users.
    ...

    And that’s the lat­est ‘WTF Big Tech?!’ sto­ry. At least Face­book got to be a bystander on this one. By no means an inno­cent bystander, but still a bystander by virtue of the fact that Face­book does­n’t offer email ser­vices. Con­grats to Face­book.

    Posted by Pterrafractyl | July 7, 2018, 7:26 pm
  20. Following up on the creepy Wall Street Journal story about email app developers for popular email services like Gmail and Yahoo gaining access to people’s inboxes with minimal disclosure (ostensibly just to develop new features and fix bugs), here’s an article about another reason humans might be interacting with your app data and a whole lot of other data: pseudo-AI.

    What’s pseudo-AI? Well, it’s basically an ‘AI’ service that’s actually humans and AI working together, often with the humans acting as the primary actor. And as the following article makes clear, while pseudo-AI is nothing new and is appropriate for some applications, many companies are offering services they advertise as fully AI-driven that are actually pseudo-AI with humans involved. In some cases, companies use pseudo-AI to fool investors into thinking they’ve already developed an AI-powered product. In other cases, companies roll out human-powered ‘AI’ services, like human-powered ‘chatbot’ services, as part of the initial development of a product, with the intent of making it fully AI-run later. As one person put it, “It’s essentially prototyping the AI with human beings”, while others perhaps more accurately put it as “fake it till you make it”. And in virtually all of these cases the end user is left with the impression that they’re purely interacting with a computer.
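
    To make the “Wizard of Oz technique” concrete, here is a toy sketch of the pattern: the public interface looks like a bot, but a human in an operations queue writes the actual reply. This is my own illustration of the architecture the article describes, not any named company’s code:

        # Toy "pseudo-AI": the caller sees a bot-shaped function, but a human
        # operator composes the reply. Illustration of the pattern only.
        import queue
        import threading

        human_inbox: queue.Queue = queue.Queue()   # messages awaiting a person
        human_outbox: queue.Queue = queue.Queue()  # the person's typed replies

        def chatbot_reply(user_message: str) -> str:
            """What the end user calls; it never reveals who answered."""
            human_inbox.put(user_message)
            return human_outbox.get()  # blocks until an operator responds

        def human_operator_turn() -> None:
            """Run by a worker pretending to be the 'AI'."""
            message = human_inbox.get()
            human_outbox.put(f"Sure! I've scheduled that. (re: {message!r})")

        # One operator standing by, one "bot" request served.
        threading.Thread(target=human_operator_turn, daemon=True).start()
        print(chatbot_reply("Set up a call with Alex next Tuesday"))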

    This is, of course, all high­ly rel­e­vant to the scan­dal over third-par­ty devel­op­er access to per­son­al data because one of the pri­ma­ry assur­ances com­pa­nies give to end users is that it’s pri­mar­i­ly just AIs that are view­ing your per­son­al data and humans only come into the loop to fix tech­ni­cal issues. Which obvi­ous­ly isn’t going to be the case if those AI ser­vices are secret­ly pseu­do-AIs:

    The Guardian

    The rise of ‘pseu­do-AI’: how tech firms qui­et­ly use humans to do bots’ work

    Using what one expert calls a ‘Wiz­ard of Oz tech­nique’, some com­pa­nies keep their reliance on humans a secret from investors

    Olivia Solon in San Fran­cis­co
    Fri 6 Jul 2018 03.01 EDT
    Last mod­i­fied on Fri 6 Jul 2018 10.09 EDT

    It’s hard to build a ser­vice pow­ered by arti­fi­cial intel­li­gence. So hard, in fact, that some star­tups have worked out it’s cheap­er and eas­i­er to get humans to behave like robots than it is to get machines to behave like humans.

    “Using a human to do the job lets you skip over a load of tech­ni­cal and busi­ness devel­op­ment chal­lenges. It doesn’t scale, obvi­ous­ly, but it allows you to build some­thing and skip the hard part ear­ly on,” said Gre­go­ry Koberg­er, CEO of ReadMe, who says he has come across a lot of “pseu­do-AIs”.

    “It’s essen­tial­ly pro­to­typ­ing the AI with human beings,” he said.

    This prac­tice was brought to the fore this week in a Wall Street Jour­nal arti­cle high­light­ing the hun­dreds of third-par­ty app devel­op­ers that Google allows to access people’s inbox­es.

    In the case of the San Jose-based com­pa­ny Edi­son Soft­ware, arti­fi­cial intel­li­gence engi­neers went through the per­son­al email mes­sages of hun­dreds of users – with their iden­ti­ties redact­ed – to improve a “smart replies” fea­ture. The com­pa­ny did not men­tion that humans would view users’ emails in its pri­va­cy pol­i­cy.

    The third par­ties high­light­ed in the WSJ arti­cle are far from the first ones to do it. In 2008, Spin­vox, a com­pa­ny that con­vert­ed voice­mails into text mes­sages, was accused of using humans in over­seas call cen­tres rather than machines to do its work.

    In 2016, Bloomberg high­light­ed the plight of the humans spend­ing 12 hours a day pre­tend­ing to be chat­bots for cal­en­dar sched­ul­ing ser­vices such as X.ai and Clara. The job was so mind-numb­ing that human employ­ees said they were look­ing for­ward to being replaced by bots.

    In 2017, the busi­ness expense man­age­ment app Expen­si­fy admit­ted that it had been using humans to tran­scribe at least some of the receipts it claimed to process using its “smartscan tech­nol­o­gy”. Scans of the receipts were being post­ed to Amazon’s Mechan­i­cal Turk crowd­sourced labour tool, where low-paid work­ers were read­ing and tran­scrib­ing them.

    “I won­der if Expen­si­fy SmartScan users know MTurk work­ers enter their receipts,” said Rochelle LaPlante, a “Turk­er” and advo­cate for gig econ­o­my work­ers on Twit­ter. “I’m look­ing at someone’s Uber receipt with their full name, pick-up and drop-off address­es.”

    Even Face­book, which has invest­ed heav­i­ly in AI, relied on humans for its vir­tu­al assis­tant for Mes­sen­ger, M.

    In some cas­es, humans are used to train the AI sys­tem and improve its accu­ra­cy. A com­pa­ny called Scale offers a bank of human work­ers to pro­vide train­ing data for self-dri­ving cars and oth­er AI-pow­ered sys­tems. “Scalers” will, for exam­ple, look at cam­era or sen­sor feeds and label cars, pedes­tri­ans and cyclists in the frame. With enough of this human cal­i­bra­tion, the AI will learn to recog­nise these objects itself.

    In oth­er cas­es, com­pa­nies fake it until they make it, telling investors and users they have devel­oped a scal­able AI tech­nol­o­gy while secret­ly rely­ing on human intel­li­gence.

    How to start an AI startup
    1. Hire a bunch of minimum wage humans to pretend to be AI pretending to be human
    2. Wait for AI to be invented
    — Gregory Koberger (@gkoberger) March 1, 2016

    Ali­son Dar­cy, a psy­chol­o­gist and founder of Woe­bot, a men­tal health sup­port chat­bot, describes this as the “Wiz­ard of Oz design tech­nique”.

    “You sim­u­late what the ulti­mate expe­ri­ence of some­thing is going to be. And a lot of time when it comes to AI, there is a per­son behind the cur­tain rather than an algo­rithm,” she said, adding that build­ing a good AI sys­tem required a “ton of data” and that some­times design­ers want­ed to know if there was suf­fi­cient demand for a ser­vice before mak­ing the invest­ment.

    This approach was not appro­pri­ate in the case of a psy­cho­log­i­cal sup­port ser­vice like Woe­bot, she said.

    “As psy­chol­o­gists we are guid­ed by a code of ethics. Not deceiv­ing peo­ple is very clear­ly one of those eth­i­cal prin­ci­ples.”

    Research has shown that peo­ple tend to dis­close more when they think they are talk­ing to a machine, rather than a per­son, because of the stig­ma asso­ci­at­ed with seek­ing help for one’s men­tal health.

    A team from the Uni­ver­si­ty of South­ern Cal­i­for­nia test­ed this with a vir­tu­al ther­a­pist called Ellie. They found that vet­er­ans with post-trau­mat­ic stress dis­or­der were more like­ly to divulge their symp­toms when they knew that Ellie was an AI sys­tem ver­sus when they were told there was a human oper­at­ing the machine.

    Oth­ers think com­pa­nies should always be trans­par­ent about how their ser­vices oper­ate.

    “I don’t like it,” said LaPlante of com­pa­nies that pre­tend to offer AI-pow­ered ser­vices but actu­al­ly employ humans. “It feels dis­hon­est and decep­tive to me, nei­ther of which is some­thing I’d want from a busi­ness I’m using.

    “And on the work­er side, it feels like we’re being pushed behind a cur­tain. I don’t like my labour being used by a com­pa­ny that will turn around and lie to their cus­tomers about what’s real­ly hap­pen­ing.”

    ...

    ———-

    “The rise of ‘pseu­do-AI’: how tech firms qui­et­ly use humans to do bots’ work” by Olivia Solon; The Guardian; 07/06/2018

    “It’s hard to build a ser­vice pow­ered by arti­fi­cial intel­li­gence. So hard, in fact, that some star­tups have worked out it’s cheap­er and eas­i­er to get humans to behave like robots than it is to get machines to behave like humans.”

    Yes, it is indeed hard to build a service powered by artificial intelligence. Much harder than just hiring a bunch of humans to fake it, apparently.

    That said, it is true that ‘pseudo-AI’ could be useful for developing a new product, allowing companies to learn more about how people will actually use an AI service before making the potentially massive investment in developing the AI:

    ...
    “Using a human to do the job lets you skip over a load of tech­ni­cal and busi­ness devel­op­ment chal­lenges. It doesn’t scale, obvi­ous­ly, but it allows you to build some­thing and skip the hard part ear­ly on,” said Gre­go­ry Koberg­er, CEO of ReadMe, who says he has come across a lot of “pseu­do-AIs”.

    “It’s essen­tial­ly pro­to­typ­ing the AI with human beings,” he said.
    ...

    Or, as was the case with Edison Software, one of the companies that developed email apps profiled in the Wall Street Journal article, the ‘pseudo-AI’ might involve periodic human interaction (with personal information hopefully redacted) to develop or improve a feature. In other words, while it’s accepted that an AI will generally get better the more data you feed into it, there’s also a less acknowledged requirement in many cases for continual human refinement of the algorithms to make them really work or improve, and that potentially creates a permanent need for humans to be interacting with the data that’s assumed to be handled by AI systems. And that’s going to be true from tech giants like Google down to one-man app development operations. Someone is going to have to be looking over the data if products are going to get better (a minimal sketch of this human-in-the-loop pattern follows the excerpt). It’s non-ideal from a privacy standpoint, but that’s reality:

    ...
    This prac­tice was brought to the fore this week in a Wall Street Jour­nal arti­cle high­light­ing the hun­dreds of third-par­ty app devel­op­ers that Google allows to access people’s inbox­es.

    In the case of the San Jose-based com­pa­ny Edi­son Soft­ware, arti­fi­cial intel­li­gence engi­neers went through the per­son­al email mes­sages of hun­dreds of users – with their iden­ti­ties redact­ed – to improve a “smart replies” fea­ture. The com­pa­ny did not men­tion that humans would view users’ emails in its pri­va­cy pol­i­cy.
    ...
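
    None of these companies have published their internals, but the general human-in-the-loop pattern the article describes is easy to sketch: send each request to a model, and quietly divert it to a human work queue when the model is missing or unsure. A minimal sketch, with every name and threshold here being a hypothetical stand-in:

        # Hypothetical sketch of the human-in-the-loop "pseudo-AI" pattern: the user
        # always sees one answer and has no way to tell which path produced it.
        import queue

        human_queue = queue.Queue()  # stand-in for a crowdworker task board

        CONFIDENCE_FLOOR = 0.85  # below this, a human quietly answers instead

        def model_predict(text):
            """Stand-in for a real model; returns (answer, confidence)."""
            if 'receipt' in text:
                return ('Expense: $42.00', 0.95)
            return ('', 0.10)  # model has no idea

        def human_answer(text):
            """Simulates a crowdworker reading the user's (possibly personal) data."""
            human_queue.put(text)  # a real person now sees the raw content
            return 'Typed up by a human, presented as AI'

        def smart_reply(text):
            answer, confidence = model_predict(text)
            if confidence < CONFIDENCE_FLOOR:
                answer = human_answer(text)
            return answer  # the caller cannot distinguish the two sources

        print(smart_reply('photo of a receipt'))                # handled by the "AI"
        print(smart_reply('please reschedule my appointment'))  # silently handled by a human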

    But human software developers periodically looking at user data is just one example of ‘pseudo-AI’, and a far more understandable one from a technical standpoint. The far more egregious version is the ‘AI’ service that is essentially powered by large numbers of poorly paid humans. Like Spinvox, a company that allegedly used AI to convert voicemails into text messages but actually just used cheap overseas labor:

    ...
    The third par­ties high­light­ed in the WSJ arti­cle are far from the first ones to do it. In 2008, Spin­vox, a com­pa­ny that con­vert­ed voice­mails into text mes­sages, was accused of using humans in over­seas call cen­tres rather than machines to do its work.

    In 2016, Bloomberg high­light­ed the plight of the humans spend­ing 12 hours a day pre­tend­ing to be chat­bots for cal­en­dar sched­ul­ing ser­vices such as X.ai and Clara. The job was so mind-numb­ing that human employ­ees said they were look­ing for­ward to being replaced by bots.
    ...

    And since faking AI services with cheap labor is actually profitable, it should come as no surprise to learn that companies have already been caught using Amazon’s Mechanical Turk crowdsourced labor service to power their pseudo-AI services. The Mechanical Turk service is perfect for pseudo-AI (a sketch of how little plumbing this takes follows the excerpt). It’s also worth recalling that Cambridge Analytica actually used Amazon’s Mechanical Turk to pay the people who took the psychological survey used to test and calibrate its personalized micro-targeting services for the Trump campaign, highlighting how Mechanical Turk and other crowdsourced ‘micro-task’ labor platforms can both power the human side of pseudo-AI services and pay individuals to fuel real AI with data-rich test sets from large numbers of people. That’s what Cambridge Analytica did, and it seems like a likely growth area for Amazon:

    ...
    In 2017, the busi­ness expense man­age­ment app Expen­si­fy admit­ted that it had been using humans to tran­scribe at least some of the receipts it claimed to process using its “smartscan tech­nol­o­gy”. Scans of the receipts were being post­ed to Amazon’s Mechan­i­cal Turk crowd­sourced labour tool, where low-paid work­ers were read­ing and tran­scrib­ing them.

    “I won­der if Expen­si­fy SmartScan users know MTurk work­ers enter their receipts,” said Rochelle LaPlante, a “Turk­er” and advo­cate for gig econ­o­my work­ers on Twit­ter. “I’m look­ing at someone’s Uber receipt with their full name, pick-up and drop-off address­es.”
    ...
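
    For a sense of how little plumbing this takes, here’s a hedged sketch of farming a single ‘pseudo-AI’ task out to Mechanical Turk with Amazon’s boto3 client. The task title, question text, and reward are made up for illustration; create_hit is the real API call, and the sketch assumes AWS credentials are already configured:

        # Sketch: posting one pseudo-AI task (e.g. a receipt transcription) to
        # Amazon Mechanical Turk via boto3. Title, reward and question are illustrative.
        import boto3

        mturk = boto3.client('mturk', region_name='us-east-1')

        # HITs take a Question payload; this minimal QuestionForm asks for free text.
        QUESTION_XML = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
          <Question>
            <QuestionIdentifier>transcription</QuestionIdentifier>
            <QuestionContent><Text>Type the total shown on the attached receipt.</Text></QuestionContent>
            <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
          </Question>
        </QuestionForm>"""

        hit = mturk.create_hit(
            Title='Transcribe a receipt total',  # what the worker sees
            Description='Read a receipt image and type the total.',
            Reward='0.05',                       # five cents per task
            MaxAssignments=1,
            LifetimeInSeconds=3600,
            AssignmentDurationInSeconds=300,
            Question=QUESTION_XML,
        )
        print(hit['HIT']['HITId'])  # the 'AI' backend polls this HIT for the human's answer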

    And, of course, Facebook has already been caught using pseudo-AI for M, its Messenger virtual assistant chatbot. Like digital Soylent Green, Facebook’s ‘virtual assistant’ was people:

    ...
    Even Face­book, which has invest­ed heav­i­ly in AI, relied on humans for its vir­tu­al assis­tant for Mes­sen­ger, M.
    ...

    And as scandalous as this all should be from a data-privacy perspective, it’s also potentially quite scandalous from a business standpoint, because users aren’t the only ones potentially getting scammed. Investors looking for companies with functional AI products are ripe for the picking too. Now that AI with human-like qualities exists, a fascinating new area of investor scams has opened up: using humans to fake cutting-edge AI for potential investors:

    ...
    In some cas­es, humans are used to train the AI sys­tem and improve its accu­ra­cy. A com­pa­ny called Scale offers a bank of human work­ers to pro­vide train­ing data for self-dri­ving cars and oth­er AI-pow­ered sys­tems. “Scalers” will, for exam­ple, look at cam­era or sen­sor feeds and label cars, pedes­tri­ans and cyclists in the frame. With enough of this human cal­i­bra­tion, the AI will learn to recog­nise these objects itself.

    In oth­er cas­es, com­pa­nies fake it until they make it, telling investors and users they have devel­oped a scal­able AI tech­nol­o­gy while secret­ly rely­ing on human intel­li­gence.

    How to start an AI startup
    1. Hire a bunch of minimum wage humans to pretend to be AI pretending to be human
    2. Wait for AI to be invented
    — Gregory Koberger (@gkoberger) March 1, 2016

    Ali­son Dar­cy, a psy­chol­o­gist and founder of Woe­bot, a men­tal health sup­port chat­bot, describes this as the “Wiz­ard of Oz design tech­nique”.

    “You sim­u­late what the ulti­mate expe­ri­ence of some­thing is going to be. And a lot of time when it comes to AI, there is a per­son behind the cur­tain rather than an algo­rithm,” she said, adding that build­ing a good AI sys­tem required a “ton of data” and that some­times design­ers want­ed to know if there was suf­fi­cient demand for a ser­vice before mak­ing the invest­ment.
    ...

    And note one of the other potential applications of pseudo-AI: presenting a human-powered service as AI-driven in order to elicit greater honesty from users, a temptation the founder of the Woebot psychological support service says was not appropriate for her kind of service. As research has shown, people are more likely to open up about some kinds of medical conditions, like mental health issues that carry a stigma, if they think they are talking to a computer. And that’s merely one example of how maintaining the pretense that something is AI-driven with no human involvement can be advantageous:

    ...
    This approach was not appro­pri­ate in the case of a psy­cho­log­i­cal sup­port ser­vice like Woe­bot, she said.

    “As psy­chol­o­gists we are guid­ed by a code of ethics. Not deceiv­ing peo­ple is very clear­ly one of those eth­i­cal prin­ci­ples.”

    Research has shown that peo­ple tend to dis­close more when they think they are talk­ing to a machine, rather than a per­son, because of the stig­ma asso­ci­at­ed with seek­ing help for one’s men­tal health.

    A team from the Uni­ver­si­ty of South­ern Cal­i­for­nia test­ed this with a vir­tu­al ther­a­pist called Ellie. They found that vet­er­ans with post-trau­mat­ic stress dis­or­der were more like­ly to divulge their symp­toms when they knew that Ellie was an AI sys­tem ver­sus when they were told there was a human oper­at­ing the machine.

    Oth­ers think com­pa­nies should always be trans­par­ent about how their ser­vices oper­ate.

    “I don’t like it,” said LaPlante of com­pa­nies that pre­tend to offer AI-pow­ered ser­vices but actu­al­ly employ humans. “It feels dis­hon­est and decep­tive to me, nei­ther of which is some­thing I’d want from a busi­ness I’m using.

    “And on the work­er side, it feels like we’re being pushed behind a cur­tain. I don’t like my labour being used by a com­pa­ny that will turn around and lie to their cus­tomers about what’s real­ly hap­pen­ing.”
    ...

    And we can’t forget that this is more than just a privacy concern. It’s a labor market concern too, since the whole micro-task labor market is already demonstrably bad for working conditions. And tracking the growth of the pseudo-AI micro-task market is going to be complicated by the fact that 21 Inc (the company dedicated to making money by giving away free internet-connected things, like toasters, that mine bitcoins for 21 Inc when plugged in, and potentially spy on people) has already created a micro-tasking platform that pays people in bitcoins (tiny fractions of a bitcoin per task). Payment in bitcoins for micro-tasks is the perfect recipe for a stealth labor market, especially when so much of the labor is being sourced internationally over the internet.

    And for those kinds of pseu­do-AI ser­vices that don’t quite fit into the ‘micro-task’ cat­e­go­ry and require humans to have some­thing clos­er to nor­mal employ­ment, there’s also the new con­tract-employ­ment ser­vice being cre­at­ed by Ama­zon that will undoubt­ed­ly prove use­ful in the pseu­do-AI employ­ment realm.

    Also keep in mind that the wages these pseudo-AI jobs are going to pay will almost ensure that the employees have to be on welfare, because that’s how micro-task labor dynamics work. So we have to ask whether the welfare state will effectively end up subsidizing the pseudo-AI industry, making it even more economical to ‘fake it while you make it’. And we have to recall the proposal put forward by a pair of GOP policy wonks to use micro-tasks as the employment market for people facing work requirements for public welfare programs. The plan didn’t just make micro-tasks available for meeting welfare work-requirement conditions; it was intended to foster the growth of a large permanent pool of people available for micro-task work (online and offline), with job training programs that would help people move up the economic ladder within that micro-task labor market. So micro-tasks for welfare is already on the GOP menu.

    And while the micro-task labor market is global in nature, a lot of pseudo-AI services might be best done by people living in the same country as the unwitting pseudo-AI user. There’s an interesting trade-off for those unwitting users: if they’re asking the pseudo-AI about something that requires knowledge of their local area, they’re going to be unwittingly disclosing their personal information to someone who lives nearby. So we have to keep in mind that the growth of the pseudo-AI market could easily spark growing demand from companies for a large, cheap, domestic pseudo-AI micro-task labor force, which, in turn, will make people on welfare a potentially precious commercial commodity if the micro-task labor pool becomes the ‘default’ expectation under new safety-net work requirements. It’s sadly easy to imagine the US powering its fake AI services with people on welfare. With the way things are going in the US, that seems almost inevitable. And for other countries too.

    So get ready for a future of fake AI powered by the poor. And, yes, this means the humans who will be seeing the personal data you think is being handed over to an AI will be the working poor and people on welfare. Your local waitress making poverty wages, living in public housing and surviving on food stamps, might also be the magic behind the new AI ‘virtual assistant’ chatbot app you disclosed your personal secrets to last week. At least that’s going to be the case if employing poor people as secret AIs maximizes profits. Another gift to the public from the growing ‘gig’ economy.

    Another interesting dynamic introduced by the pseudo-AI commercial space is that it’s potentially going to make extremely life-like AI services suspicious. AIs won’t want to seem too real, because that might seem fake. The Uncanny Valley could be the commercial sweet spot. And as AIs become more and more conversational and rich, there’s going to be more and more demand for humans pretending to be AIs providing rich, nuanced commentary. The Turing test will become fascinating. The humans secretly powering that kind of ‘AI’ will have a fascinatingly tricky line to walk.

    So as we can see, if you find your­self inter­act­ing with a new ‘vir­tu­al assis­tant’ and it appears to be pass­ing the Tur­ing test with fly­ing col­ors, you prob­a­bly should­n’t be sur­prised and might not want to get too chat­ty about the per­son­al stuff.

    Posted by Pterrafractyl | July 8, 2018, 10:25 pm
  21. Uh oh, there’s anoth­er Cam­bridge Ana­lyt­i­ca off­shoot. Aus­pex. It’s a thing.

    Oh, wait, Aus­pex will have an eth­i­cal streak! No wor­ries!

    Those are the assurances we’re getting from Ahmad Al Khatib, one of the primary figures behind Auspex. Recall that Al Khatib was the mystery investor in “Emerdata”, the other new offshoot of Cambridge Analytica, whose board of directors included Al Khatib along with Steve Bannon, Robert and Rebekah Mercer, Alexander Nix, and Johnson Chun Shun Ko (a business partner of Erik Prince). So Auspex is an ostensibly different Cambridge Analytica offshoot involving Al Khatib.

    And Mark Turnbull, the former managing director of CA Political Global, is also part of Auspex.

    Recall that Turnbull was one of the figures caught on film in an undercover Channel 4 investigation, in which Turnbull, Alexander Nix (Cambridge Analytica’s CEO at the time), and Alex Tayler (Cambridge Analytica’s chief data officer) met with an undercover investigative journalist posing as a fixer for a client working to get candidates elected in Sri Lanka. It was during those meetings that Nix made a sales pitch that included getting Ukrainian women to entrap a political opponent in a compromising situation and recording it, as an example of the kind of dirty-tricks operation Cambridge Analytica offered. Nix also offered to spread outright lies as a service. When Nix described the scenario involving the Ukrainian women, he started the pitch by suggesting they send in Turnbull posing as a wealthy developer looking to exchange campaign finance for land. Turnbull remarked, “I’m a master of disguise.”

    So Turnbull is apparently willing to go undercover in dirty-tricks operations. But he wants to assure everyone that Auspex is going to have a strong ethical streak.

    In Turnbull’s defense, he only joined Cambridge Analytica and SCL in May 2016, to work on election campaigns. He spent the previous 18 years working at Bell Pottinger, a London public relations firm. Bell Pottinger was sanctioned in 2017 for stoking racial tensions in South Africa on behalf of its clients (the wealthy Gupta family, patrons of president Jacob Zuma) following the release of a report about “state capture” by the Guptas. The firm lobbied for the release of Pinochet in 1998, and its client list has included Belarus’s Alexander Lukashenko and FW de Klerk, when he ran against Nelson Mandela for president of South Africa. It is generally known as the kind of PR firm dictators and corrupt regimes can turn to for a professional image makeover (it seems like the perfect place for Paul Manafort).

    When asked about Cambridge Analytica’s work hacking a political opponent of a Nigerian client, Turnbull claimed he didn’t know about it. When asked if he has any remorse about the various criminal acts Cambridge Analytica is accused of, Turnbull assures us, “I’ve never been in a position where I’ve felt really, genuinely morally compromised.” So the guy who bragged about being a “master of disguise” in the undercover video wants to assure us that he’s never felt genuinely morally compromised at all, and that his new company Auspex is going to have a strong ethical streak.

    Ahmad Al Khat­ib, who only tech­ni­cal­ly joined the Cam­bridge Ana­lyt­i­ca crew in Jan­u­ary of this year when he joined Emer­data’s board of direc­tors, also assures us that he asked all his col­leagues about the alle­ga­tions of abus­es by Cam­bridge Ana­lyt­i­ca that he read about and they all assured him that there was no wrong­do­ing and he took their word for it.

    So we have “mas­ter of dis­guise” Mark Turn­bull, who denies any wrong­do­ing ever despite being caught on tape recent­ly, and Ahmad Al Khat­ib, who appears to be inca­pable of see­ing wrong­do­ing, assur­ing every­one that Aus­pex is noth­ing to wor­ry about and actu­al­ly on an altru­is­tic mis­sion:

    Forbes

    From Cam­bridge Analytica’s Ash­es, An Odd Pair Promis­es An ‘Eth­i­cal’ Start­up

    Thomas Fox-Brew­ster
    Forbes Staff
    Cyber­se­cu­ri­ty
    Aug 2, 2018, 09:00am

    Would you trust any­one caught up in one of the biggest pri­va­cy deba­cles in the his­to­ry of the inter­net with your data? Ahmad Al Khat­ib, a 29-year-old Egypt­ian-born entre­pre­neur who had front-row seats to the pub­lic sham­ing and down­fall of Cam­bridge Ana­lyt­i­ca, thinks you should. He’s just launched Aus­pex Inter­na­tion­al with a cadre of ex-Cam­bridge Ana­lyt­i­ca folk. It does much the same work, apply­ing data analy­sis to PR for its clien­tele, includ­ing polit­i­cal cam­paigns. But Aus­pex will have an eth­i­cal streak, says Al Khat­ib, who’s giv­ing Forbes the first in-depth inter­view on his big ven­ture.

    The events of recent months con­tin­ue to cast a long shad­ow, though. In par­tic­u­lar, a Chan­nel 4 under­cov­er sting of which his new busi­ness part­ner, Mark Turn­bull, was an unwit­ting star. In Forbes’ Lon­don office, there’s ten­sion in the air as Al Khat­ib ani­mat­ed­ly recalls the trau­ma of watch­ing the broad­cast, with Turn­bull sit­ting qui­et­ly oppo­site, smil­ing at his young col­league.

    Ahmad’s annus hor­ri­bilis

    Al Khat­ib starts at the begin­ning of the year. It was only in Jan­u­ary that he joined the board of a mys­te­ri­ous enti­ty called Emer­da­ta. The com­pa­ny was ulti­mate­ly just one of many ten­ta­cles of the Cam­bridge Ana­lyt­i­ca beast. It was reg­is­tered at the same address and run by many of the same peo­ple. The board includ­ed Jen­nifer and Rebekah Mer­cer, daugh­ters of Robert, the secre­tive, Trump-sup­port­ing hedge fund bil­lion­aire behind Cam­bridge Ana­lyt­i­ca. Alexan­der Nix, for­mer Cam­bridge Ana­lyt­i­ca chief, was also a direc­tor. (Al Khat­ib declined to say more about the direct rela­tion­ship between Cam­bridge Ana­lyt­i­ca and Emer­da­ta.)

    Then in March, The Observ­er exposed how Cam­bridge Ana­lyt­i­ca had obtained vast quan­ti­ties of Face­book users’ per­son­al data with­out their per­mis­sion. The ana­lyt­ics firm used that data for its pro­fil­ing of Amer­i­can vot­ers as part of its work for the Don­ald Trump cam­paign. After learn­ing of the abuse of its users’ data, Face­book sub­se­quent­ly banned the com­pa­ny.

    ...

    Accord­ing to his telling, he was next to his moth­er, broth­er and girl­friend as they sat down to watch the Chan­nel 4 News, a mat­ter of days after The Observ­er rev­e­la­tions. Hav­ing been told “some sil­ly things were said” to the under­cov­er reporters, Al-Khat­ib wasn’t expect­ing a hor­ror show. But by the end of the pro­gram, he says his moth­er was in tears, his girl­friend had stormed out the house and his pan­icked younger broth­er was afraid his sib­ling was about to be arrest­ed. Lat­er, Al Khat­ib says he received threats over Twit­ter, though Forbes could find no evi­dence of that.

    But now he wants to move on with Aus­pex Inter­na­tion­al. Al Khat­ib, who won’t reveal just how much of his cash he’s pumped into the firm, says Aus­pex will be an altru­is­tic, inter­ven­tion­ist force. It’ll apply big data to PR strat­e­gy just like Cam­bridge Ana­lyt­i­ca, but it’ll do so with­out any dirty tac­tics, accord­ing to his plan at least.

    “I’m an ide­al­ist,” Al Khat­ib, who is cofounder and chair­man at Aus­pex, tells me. “Do you know how many lit­tle kids get raped because some­one tells them they’re sol­diers of god, every sin­gle day? Nine-year-olds, ten-year-olds. Peo­ple say you should leave them to their own devices. I say no, this is some­thing that affects me and the peo­ple who are from my part of the world.”

    Turnbull’s tal­ent as a dis­as­ter artist

    Then there’s the enig­ma that is Turn­bull, patient­ly sit­ting through Al Khatib’s fran­tic rem­i­nisc­ing.

    In the sting, though he had more screen time than Nix, Turnbull’s rep­u­ta­tion may’ve come out in bet­ter shape. In one not so flat­ter­ing clip, Turn­bull speaks about the use of for­mer MI5 and MI6 staff to dig up dirt on clients’ hit list. In anoth­er, though, he por­trays Cam­bridge Ana­lyt­i­ca as a clean busi­ness, say­ing that it wasn’t “in the busi­ness of entrap­ment.” The com­pa­ny wouldn’t, he insists to the under­cov­er reporter, “send a pret­ty girl out to seduce a politi­cian and then film them in their bed­room and then release the film.”

    But then, in a lat­er meet­ing, Nix appears to sug­gest doing some­thing sim­i­lar to just that, sug­gest­ing Ukrain­ian women could be shipped in, before talk­ing about set­ting up fake iden­ti­ties to obtain intel­li­gence.

    Unlike Nix, Turn­bull is more than hap­py to remain in the pub­lic eye. Look­ing far more relaxed than the ani­mat­ed Al Khat­ib, he’s a walk­ing, talk­ing (and reclin­ing) case study in how to coast through what most would con­sid­er pub­lic rela­tions calami­ties. The col­lapse of Cam­bridge Ana­lyt­i­ca wasn’t Turnbull’s first rodeo; he was at Bell Pot­tinger, a PR firm that fell apart after it was report­ed­ly involved in stok­ing racial ten­sions on behalf of clients in South Africa. Turn­bull is proud of his 18 years at the com­pa­ny, where he worked on the post-war Iraq elec­tion, fol­lowed by antiter­ror and counter-rad­i­cal­iza­tion projects. Not that his for­mer employer’s work in the Mid­dle East wasn’t with­out con­tro­ver­sy. As report­ed by the Bureau of Inves­tiga­tive Jour­nal­ism in 2016, the Pen­ta­gon paid Bell Pot­tinger $500 mil­lion to pro­duce pro­pa­gan­da in Iraq, includ­ing fake insur­gent videos.

    Despite going down with two sink­ing ships, Turn­bull is still able to com­mand a high price. “He’s not the most afford­able human being,” Al-Khat­ib says, laugh­ing across the room at Turn­bull. “Thir­ty years expe­ri­ence is expen­sive.”

    Turn­bull is ner­vous about reporters too, slap­ping down a dic­ta­phone and record­ing the inter­view. He’s care­ful to avoid get­ting bogged down in the past. Now man­ag­ing direc­tor at Aus­pex, he also talks up its altru­is­tic mis­sion. “The best defence against fake news, all of its vari­ants, the whole para­pher­na­lia … is a well informed pop­u­lace. If we can con­tribute to that, that’s what we’ll do.”

    Aus­pex is only a week old and has a sin­gle client, an unnamed part­ner in Africa. The only detail Forbes is able to draw out of Al Khat­ib is that it involves “youth inclu­sion” and is more social than polit­i­cal.

    They’ll also ensure all client data was obtained with con­sent, Turn­bull and Al Khat­ib insist. They’ll go a step fur­ther and ask that the data own­ers will be informed just how their per­son­al infor­ma­tion will be used. And they don’t plan to own any cus­tomer data at all; they’ll sim­ply work on it at client sites.

    Any guilt?

    And yet, as they try to erase the stain of Cam­bridge Ana­lyt­i­ca, the pair decline knowl­edge of any wrong­do­ing at their pre­vi­ous employ­er.

    Turn­bull, despite his extra­or­di­nary career his­to­ry, shows no signs of remorse. “I’ve nev­er been in a posi­tion where I’ve felt real­ly, gen­uine­ly moral­ly com­pro­mised,” he says.

    Asked about some of the more moral­ly dubi­ous work report­ed­ly car­ried out at Cam­bridge Analytica—the alleged use of Israeli hack­ers in Nige­ria, for instance—Turnbull says he wasn’t there and wasn’t ever made aware of any such work.

    Al-Khat­ib offers much the same. It was only in Jan­u­ary that he joined the Emer­da­ta board. He says he ques­tioned col­leagues about all the reports of wrong­do­ing, and when they told him there was noth­ing unto­ward going on, he left it there. As if any­one under the age of 30 couldn’t have a mali­cious side, Al Khat­ib offers encomi­ums for his hard­work­ing, young for­mer Cam­bridge Ana­lyt­i­ca col­leagues, before ask­ing an appar­ent­ly rhetor­i­cal ques­tion: “How mali­cious can they pos­si­bly be?”

    In July, his own past was called into ques­tion in a Medi­um post from inde­pen­dent jour­nal­ist Wendy Siegel­man, who linked Al Khat­ib to busi­ness­es in Rus­sia and the U.A.E.. The report also made loose con­nec­tions between Al Khat­ib and Erik Prince, the Trump-sup­port­ing entre­pre­neur who found­ed con­tro­ver­sial pri­vate-mer­ce­nar­ies provider Black­wa­ter. Prince is now wrapped up in Robert Mueller’s FBI inves­ti­ga­tion into Russ­ian influ­ence on the 2016 elec­tion over claims he trav­elled to the Sey­chelles to try to set up a backchan­nel between Trump and Rus­sia. The Aus­pex cofounder sim­ply brushed off any impli­ca­tions in Siegel­man’s post. “None of the points the arti­cle is hint­ing towards are true,” he says in a What­sApp mes­sage after our meet.

    Al Khatib’s spleen, rather than being direct­ed at those who may have erred at Cam­bridge Ana­lyt­i­ca, is focused on the loss of over 100 jobs, a result of the company’s sud­den col­lapse. “I know a lot of them are strug­gling to find oth­er jobs now,” he says. “Who can those 100-plus peo­ple hold account­able?”

    Mov­ing onwards and upwards won’t be easy for Aus­pex or any ex-Cam­bridge Ana­lyt­i­ca folk. As the likes of the Fed­er­al Trade Com­mis­sion start their own probes, the Infor­ma­tion Commissioner’s Office (ICO), the U.K.’s chief data pro­tec­tion reg­u­la­tor, con­tin­ues its broad inves­ti­ga­tion into what hap­pened. Crim­i­nal charges, and career-end­ing judge­ments, loom.

    ———-

    “From Cam­bridge Analytica’s Ash­es, An Odd Pair Promis­es An ‘Eth­i­cal’ Start­up” by Thomas Fox-Brew­ster; Forbes; 08/02/2018

    “Would you trust any­one caught up in one of the biggest pri­va­cy deba­cles in the his­to­ry of the inter­net with your data? Ahmad Al Khat­ib, a 29-year-old Egypt­ian-born entre­pre­neur who had front-row seats to the pub­lic sham­ing and down­fall of Cam­bridge Ana­lyt­i­ca, thinks you should. He’s just launched Aus­pex Inter­na­tion­al with a cadre of ex-Cam­bridge Ana­lyt­i­ca folk. It does much the same work, apply­ing data analy­sis to PR for its clien­tele, includ­ing polit­i­cal cam­paigns. But Aus­pex will have an eth­i­cal streak, says Al Khat­ib, who’s giv­ing Forbes the first in-depth inter­view on his big ven­ture.”

    An ethical version of Cambridge Analytica, brought to you by the people who brought you Cambridge Analytica. That’s more or less the pitch from Ahmad Al Khatib. And from his partner Mark Turnbull, the “master of disguise” who offered to play roles in dirty-tricks operations involving using Ukrainian women to implicate an opponent and spreading outright lies:

    ...
    The events of recent months con­tin­ue to cast a long shad­ow, though. In par­tic­u­lar, a Chan­nel 4 under­cov­er sting of which his new busi­ness part­ner, Mark Turn­bull, was an unwit­ting star. In Forbes’ Lon­don office, there’s ten­sion in the air as Al Khat­ib ani­mat­ed­ly recalls the trau­ma of watch­ing the broad­cast, with Turn­bull sit­ting qui­et­ly oppo­site, smil­ing at his young col­league.
    ...

    Al Khat­ib also wants to assure us that Aus­pex will be an altru­is­tic, inter­ven­tion­ist force. It’ll apply big data to PR strat­e­gy just like Cam­bridge Ana­lyt­i­ca, but it’ll do so with­out any dirty tac­tics. In oth­er words, it will be Bizarro Cam­bridge Ana­lyt­i­ca:

    ...
    Ahmad’s annus hor­ri­bilis

    Al Khat­ib starts at the begin­ning of the year. It was only in Jan­u­ary that he joined the board of a mys­te­ri­ous enti­ty called Emer­da­ta. The com­pa­ny was ulti­mate­ly just one of many ten­ta­cles of the Cam­bridge Ana­lyt­i­ca beast. It was reg­is­tered at the same address and run by many of the same peo­ple. The board includ­ed Jen­nifer and Rebekah Mer­cer, daugh­ters of Robert, the secre­tive, Trump-sup­port­ing hedge fund bil­lion­aire behind Cam­bridge Ana­lyt­i­ca. Alexan­der Nix, for­mer Cam­bridge Ana­lyt­i­ca chief, was also a direc­tor. (Al Khat­ib declined to say more about the direct rela­tion­ship between Cam­bridge Ana­lyt­i­ca and Emer­da­ta.)

    Then in March, The Observ­er exposed how Cam­bridge Ana­lyt­i­ca had obtained vast quan­ti­ties of Face­book users’ per­son­al data with­out their per­mis­sion. The ana­lyt­ics firm used that data for its pro­fil­ing of Amer­i­can vot­ers as part of its work for the Don­ald Trump cam­paign. After learn­ing of the abuse of its users’ data, Face­book sub­se­quent­ly banned the com­pa­ny.

    ...

    Accord­ing to his telling, he was next to his moth­er, broth­er and girl­friend as they sat down to watch the Chan­nel 4 News, a mat­ter of days after The Observ­er rev­e­la­tions. Hav­ing been told “some sil­ly things were said” to the under­cov­er reporters, Al-Khat­ib wasn’t expect­ing a hor­ror show. But by the end of the pro­gram, he says his moth­er was in tears, his girl­friend had stormed out the house and his pan­icked younger broth­er was afraid his sib­ling was about to be arrest­ed. Lat­er, Al Khat­ib says he received threats over Twit­ter, though Forbes could find no evi­dence of that.

    But now he wants to move on with Aus­pex Inter­na­tion­al. Al Khat­ib, who won’t reveal just how much of his cash he’s pumped into the firm, says Aus­pex will be an altru­is­tic, inter­ven­tion­ist force. It’ll apply big data to PR strat­e­gy just like Cam­bridge Ana­lyt­i­ca, but it’ll do so with­out any dirty tac­tics, accord­ing to his plan at least.

    “I’m an ide­al­ist,” Al Khat­ib, who is cofounder and chair­man at Aus­pex, tells me. “Do you know how many lit­tle kids get raped because some­one tells them they’re sol­diers of god, every sin­gle day? Nine-year-olds, ten-year-olds. Peo­ple say you should leave them to their own devices. I say no, this is some­thing that affects me and the peo­ple who are from my part of the world.”
    ...

    And that “altruistic, interventionist force”, Al Khatib assures us, is going to be co-directed by Mark Turnbull. Bizarro Cambridge Analytica is bizarre on multiple levels:

    ...
    Turnbull’s tal­ent as a dis­as­ter artist

    Then there’s the enig­ma that is Turn­bull, patient­ly sit­ting through Al Khatib’s fran­tic rem­i­nisc­ing.

    In the sting, though he had more screen time than Nix, Turnbull’s rep­u­ta­tion may’ve come out in bet­ter shape. In one not so flat­ter­ing clip, Turn­bull speaks about the use of for­mer MI5 and MI6 staff to dig up dirt on clients’ hit list. In anoth­er, though, he por­trays Cam­bridge Ana­lyt­i­ca as a clean busi­ness, say­ing that it wasn’t “in the busi­ness of entrap­ment.” The com­pa­ny wouldn’t, he insists to the under­cov­er reporter, “send a pret­ty girl out to seduce a politi­cian and then film them in their bed­room and then release the film.”

    But then, in a lat­er meet­ing, Nix appears to sug­gest doing some­thing sim­i­lar to just that, sug­gest­ing Ukrain­ian women could be shipped in, before talk­ing about set­ting up fake iden­ti­ties to obtain intel­li­gence.
    ...

    Turnbull joined Cambridge Analytica in 2016 after 18 years at the Bell Pottinger public relations firm, where his work included the post-war Iraq election, followed by antiterror and counter-radicalization projects. Turnbull portrays Auspex as being on an “altruistic mission” focused on creating an informed populace. That sure sounds like a propagandistic way of describing propaganda services:

    ...
    Unlike Nix, Turn­bull is more than hap­py to remain in the pub­lic eye. Look­ing far more relaxed than the ani­mat­ed Al Khat­ib, he’s a walk­ing, talk­ing (and reclin­ing) case study in how to coast through what most would con­sid­er pub­lic rela­tions calami­ties. The col­lapse of Cam­bridge Ana­lyt­i­ca wasn’t Turnbull’s first rodeo; he was at Bell Pot­tinger, a PR firm that fell apart after it was report­ed­ly involved in stok­ing racial ten­sions on behalf of clients in South Africa. Turn­bull is proud of his 18 years at the com­pa­ny, where he worked on the post-war Iraq elec­tion, fol­lowed by antiter­ror and counter-rad­i­cal­iza­tion projects. Not that his for­mer employer’s work in the Mid­dle East wasn’t with­out con­tro­ver­sy. As report­ed by the Bureau of Inves­tiga­tive Jour­nal­ism in 2016, the Pen­ta­gon paid Bell Pot­tinger $500 mil­lion to pro­duce pro­pa­gan­da in Iraq, includ­ing fake insur­gent videos.

    Despite going down with two sink­ing ships, Turn­bull is still able to com­mand a high price. “He’s not the most afford­able human being,” Al-Khat­ib says, laugh­ing across the room at Turn­bull. “Thir­ty years expe­ri­ence is expen­sive.”

    Turn­bull is ner­vous about reporters too, slap­ping down a dic­ta­phone and record­ing the inter­view. He’s care­ful to avoid get­ting bogged down in the past. Now man­ag­ing direc­tor at Aus­pex, he also talks up its altru­is­tic mis­sion. “The best defence against fake news, all of its vari­ants, the whole para­pher­na­lia … is a well informed pop­u­lace. If we can con­tribute to that, that’s what we’ll do.”
    ...

    And both Turnbull and Al Khatib insist there will be minimal danger of privacy violations, because they will only work with client data and will ask that the data owners be informed of how their personal information is used. Ask, not demand. So they’re somehow going to offer Cambridge Analytica-style services that rely on big data from numerous sources, but they’re merely going to ask that individuals be informed about how their data is being used. And they’ll only be using client data. That’s the laughable sales pitch:

    ...
    Aus­pex is only a week old and has a sin­gle client, an unnamed part­ner in Africa. The only detail Forbes is able to draw out of Al Khat­ib is that it involves “youth inclu­sion” and is more social than polit­i­cal.

    They’ll also ensure all client data was obtained with con­sent, Turn­bull and Al Khat­ib insist. They’ll go a step fur­ther and ask that the data own­ers will be informed just how their per­son­al infor­ma­tion will be used. And they don’t plan to own any cus­tomer data at all; they’ll sim­ply work on it at client sites.
    ...

    And when questions come up about the past crimes and abuses Cambridge Analytica is accused of, both play dumb and innocent. Extremely innocent, in Turnbull’s case:

    ...
    Any guilt?

    And yet, as they try to erase the stain of Cam­bridge Ana­lyt­i­ca, the pair decline knowl­edge of any wrong­do­ing at their pre­vi­ous employ­er.

    Turn­bull, despite his extra­or­di­nary career his­to­ry, shows no signs of remorse. “I’ve nev­er been in a posi­tion where I’ve felt real­ly, gen­uine­ly moral­ly com­pro­mised,” he says.

    Asked about some of the more moral­ly dubi­ous work report­ed­ly car­ried out at Cam­bridge Analytica—the alleged use of Israeli hack­ers in Nige­ria, for instance—Turnbull says he wasn’t there and wasn’t ever made aware of any such work.
    ...

    Al-Khatib, whose Emerdata partners include many of the people accused of wrongdoing at Cambridge Analytica, assures us he’s been assured by them that nothing untoward was going on:

    ...
    Al-Khat­ib offers much the same. It was only in Jan­u­ary that he joined the Emer­da­ta board. He says he ques­tioned col­leagues about all the reports of wrong­do­ing, and when they told him there was noth­ing unto­ward going on, he left it there. As if any­one under the age of 30 couldn’t have a mali­cious side, Al Khat­ib offers encomi­ums for his hard­work­ing, young for­mer Cam­bridge Ana­lyt­i­ca col­leagues, before ask­ing an appar­ent­ly rhetor­i­cal ques­tion: “How mali­cious can they pos­si­bly be?”
    ...

    And back in July, a Medium post noted that Al-Khatib is a citizen of the Seychelles and raised questions about whether he was involved in the whole Erik Prince/UAE/Russia/Seychelles mystery meeting. It highlights what a man of mystery he is. Al Khatib is pumping money into both Emerdata and Auspex, but it’s unclear whether he’s a front for a coalition of Gulf backers:

    ...
    In July, his own past was called into ques­tion in a Medi­um post from inde­pen­dent jour­nal­ist Wendy Siegel­man, who linked Al Khat­ib to busi­ness­es in Rus­sia and the U.A.E.. The report also made loose con­nec­tions between Al Khat­ib and Erik Prince, the Trump-sup­port­ing entre­pre­neur who found­ed con­tro­ver­sial pri­vate-mer­ce­nar­ies provider Black­wa­ter. Prince is now wrapped up in Robert Mueller’s FBI inves­ti­ga­tion into Russ­ian influ­ence on the 2016 elec­tion over claims he trav­elled to the Sey­chelles to try to set up a backchan­nel between Trump and Rus­sia. The Aus­pex cofounder sim­ply brushed off any impli­ca­tions in Siegel­man’s post. “None of the points the arti­cle is hint­ing towards are true,” he says in a What­sApp mes­sage after our meet.
    ...

    So it looks like the world had better get ready for Bizarro Cambridge Analytica and its alleged altruistic mission of creating a more informed populace using data gathered with consent. Unless the “master of disguise” and his ‘see no evil’ partner are lying, in which case the world had better get ready for another Cambridge Analytica. A more aggressively farcical Cambridge Analytica.

    Posted by Pterrafractyl | August 5, 2018, 12:27 am
  22. Welp, Facebook had another privacy breach. But this time it’s different. Different in a much worse way: Facebook just revealed that 50 million users may have had their accounts taken over by unknown actors over the past year. And it’s an actual hack, unlike the rest of the privacy breaches, which were caused by Facebook basically giving the data away to app developers. Worse, it’s a hack that potentially gave the attackers the ability to log into any app that uses your Facebook account for its login.

    How did the hack work? It sounds like it was the result of separate bugs introduced into Facebook’s video uploader in July of 2017. First, Facebook created a “View As” feature that lets you view your Facebook profile as someone else; you can select the specific person you want to view your profile as. One bug caused the video uploader to show up when using the View As feature, where it shouldn’t appear at all.

    A second bug caused the erroneously displayed video uploader to create an “access token”, which is basically like a cookie stored on your computer that’s used to avoid having to re-login to Facebook or any apps that use your Facebook login. Except the access tokens created by the video uploader were the tokens for the person you were viewing your account as. In other words, if you used the View As feature to view your profile the way the “John Doe2” account sees your profile, the video uploader bug would create an access token for John Doe2’s account on your computer. And that access token would allow the hackers to log into John Doe2’s Facebook account and potentially any other app online that happens to use John Doe2’s Facebook account to log in.
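
    Facebook hasn’t published the buggy code, so the following is only a toy model of the logic error as described: the uploader mints its token from the profile being impersonated instead of from the session owner. Every function and field name here is hypothetical:

        # Toy model of the "View As" token bug: the video uploader mints its token
        # from the wrong identity. All names here are hypothetical, not Facebook code.
        import secrets

        tokens = {}  # token -> the user the token authenticates as

        def issue_token(user_id):
            token = secrets.token_hex(16)
            tokens[token] = user_id
            return token

        def render_view_as(session_user, view_as_user):
            """Render session_user's profile as seen by view_as_user."""
            page = {'profile_of': session_user, 'seen_by': view_as_user}
            # Bug 1: the video uploader is included on a page where it doesn't belong.
            # Bug 2 (the killer): its token is minted for the *impersonated* user,
            # not for the person actually logged in.
            page['uploader_token'] = issue_token(view_as_user)  # should be session_user
            return page

        page = render_view_as(session_user='alice', view_as_user='john_doe2')
        leaked = page['uploader_token']
        print(tokens[leaked])  # 'john_doe2' -- whoever scrapes the page is now John Doe2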

    So this is an unusu­al­ly big deal for Face­book users. But how big a deal it is remains unclear. For instance, while Face­book assures us that no cred­it card infor­ma­tion was obtained by the hack­ers, they aren’t giv­ing the same assur­ances about hack­ers read­ing your pri­vate mes­sages. Face­book is sim­ply say­ing at this point that they think it’s unlike­ly that pri­vate mes­sages were accessed, which is an answer that implies the hack­ers poten­tial­ly could have done so. And if they could have done so, it’s hard to imag­ine they did­n’t. Why would­n’t they? That’s poten­tial­ly the most valu­able reser­voir of infor­ma­tion to steal.

    And note that while Facebook says in the following article that it hasn’t observed any accounts being improperly accessed, the company also acknowledges that it only came across this vulnerability when it observed an unusual spike in activity on September 16 of this year. That sure sounds like an acknowledgment that it was observing at least one hacker exploiting the vulnerability. And since Facebook reset the logins of 50 million directly impacted users and another 40 million users ‘just to be safe’, it sure sounds like the company is aware of activity indicating hacker activity against those 50 million users and the potential for hacker activity against another 40 million accounts.
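
    Facebook hasn’t said what ‘a spike in unusual activity’ means operationally, but the simplest version of that kind of tripwire is a rolling baseline plus a deviation threshold. A toy sketch, purely illustrative of the technique rather than of Facebook’s actual monitoring:

        # Toy anomaly tripwire: flag an hour whose request count sits far above the
        # rolling mean. Purely illustrative; Facebook's monitoring is not public.
        from statistics import mean, stdev

        def spikes(counts, window=24, threshold=4.0):
            """Yield indexes where a count exceeds mean + threshold*stdev of the prior window."""
            for i in range(window, len(counts)):
                base = counts[i - window:i]
                mu, sigma = mean(base), stdev(base)
                if sigma and counts[i] > mu + threshold * sigma:
                    yield i

        hourly_api_calls = [1000, 1020, 990, 1010] * 6 + [9000]  # 24 normal hours, then a burst
        print(list(spikes(hourly_api_calls)))  # [24] -- the burst that triggers a human look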

    So, for now, the extent of the impact remains ambiguous; it just looks like it’s going to be really bad in the long run. We don’t know for sure yet. As we’ve seen from the steady drip of increasingly bad disclosures about the Cambridge Analytica scandal, the official statements from Facebook will likely remain ambiguous for as long as possible while getting steadily less ambiguous as the full extent of the impact is slowly revealed to the public, one bad revelation at a time. But, for now, we’re at the early phase of this scandal, so the official story is that Facebook found a vulnerability that might be really, really, really bad, but they haven’t confirmed that yet:

    Tech Crunch

    Every­thing you need to know about Facebook’s data breach affect­ing 50M users

    Sarah Perez, Zack Whit­tak­er
    09/28/2018

    Facebook is cleaning up after a major security incident exposed the account data of millions of users. In what’s already been a rocky year after the Cambridge Analytica scandal, the company is scrambling to regain its users’ trust after another security incident exposed user data.

    Here’s every­thing you need to know so far.

    What hap­pened?

    Facebook says at least 50 million users’ data were confirmed at risk after attackers exploited a vulnerability that allowed them access to personal data. The company also preventively secured 40 million additional accounts out of an abundance of caution.

    What data were the hack­ers after?

    Face­book CEO Mark Zucker­berg said that the com­pa­ny has not seen any accounts com­pro­mised and improp­er­ly accessed — although it’s ear­ly days and that may change. But Zucker­berg said that the attack­ers were using Face­book devel­op­er APIs to obtain some infor­ma­tion, like “name, gen­der, and home­towns” that’s linked to a user’s pro­file page.

    What data wasn’t tak­en?

    Face­book said that it looks unlike­ly that pri­vate mes­sages were accessed. No cred­it card infor­ma­tion was tak­en in the breach, Face­book said. Again, that may change as the company’s inves­ti­ga­tion con­tin­ues.

    What’s an access token? Do I need to change my pass­word?

    When you enter your username and password on most sites and apps, including Facebook, your browser or device is given an access token. This keeps you logged in, without you having to enter your credentials every time you log in. But the token doesn’t store your password, so there’s no need to change your password.

    ...

    When did this attack hap­pen?

    The vul­ner­a­bil­i­ty was intro­duced on the site in July 2017, but Face­book didn’t know about it until this month, on Sep­tem­ber 16, 2018, when it spot­ted a spike in unusu­al activ­i­ty. That means the hack­ers could have had access to user data for a long time, as Face­book is not sure right now when the attack began.

    Who would do this?

    Face­book doesn’t know who attacked the site, but the FBI is inves­ti­gat­ing, it says.

    How­ev­er, Face­book has in the past found evi­dence of Russia’s attempts to med­dle in Amer­i­can democ­ra­cy and influ­ence our elec­tions — but it’s not to say that Rus­sia is behind this new attack. Attri­bu­tion is incred­i­bly dif­fi­cult and takes a lot of time and effort. It recent­ly took the FBI more than two years to con­firm that North Korea was behind the Sony hack in 2016 — so we might be in for a long wait.

    How did the attack­ers get in?

    Not one, but three bugs led to the data expo­sure.

    In July 2017, Face­book inad­ver­tent­ly intro­duced three vul­ner­a­bil­i­ties in its video uploader, said Guy Rosen, Facebook’s vice pres­i­dent of prod­uct man­age­ment, in a call with reporters. When using the “View As” fea­ture to view your pro­file as some­one else, the video uploader would occa­sion­al­ly appear when it shouldn’t dis­play at all. When it appeared, it gen­er­at­ed an access token using the per­son who the pro­file page was being viewed as. If that token was obtained, an attack­er could log into the account of the oth­er per­son.

    Is the prob­lem fixed?

    Face­book says it fixed the vul­ner­a­bil­i­ty on Sep­tem­ber 27, and then began reset­ting the access tokens of peo­ple to pro­tect the secu­ri­ty of their accounts.

    Did this affect What­sApp and Insta­gram accounts?

    Facebook said that it’s not yet sure if Instagram accounts are affected, but said they were automatically secured once Facebook access tokens were revoked. Affected Instagram users will have to unlink and relink their Facebook accounts in Instagram in order to cross post to Facebook.

    On a call with reporters, Face­book said there is no impact on What­sApp users at all.

    Are sites that use Face­book Login also affect­ed?

    If an attacker obtained your Facebook access token, it not only gave them access to your Facebook account as if they were you, but to any other site that you’ve used Facebook to log in with, like dating apps, games, or streaming services.

    Will Face­book be fined or pun­ished?

    If Face­book is found to have breached Euro­pean data pro­tec­tion rules — the new­ly imple­ment­ed Gen­er­al Data Pro­tec­tion Reg­u­la­tion (GDPR) — the com­pa­ny can face fines of up to four per­cent of its glob­al rev­enue.

    How­ev­er, that fine can’t be levied until Face­book knows more about the nature of the breach and the risk to users.

    Another data breach of this scale – especially coming in the wake of the Cambridge Analytica scandal and other data leaks – has some in Congress calling for the social network to be regulated. Sen. Mark Warner (D‑VA) issued a stern reprimand to Facebook over today’s news, and again pushed his proposal for regulating companies holding large data sets as “information fiduciaries” with additional consequences for improper security.

    FTC Com­mis­sion­er Rohit Chopra also tweet­ed that “I want answers” regard­ing the Face­book hack. It’s rea­son­able to assume that there could be inves­ti­ga­tors in both the U.S. and Europe to fig­ure out what hap­pened.

    Can I check to see if my account was improp­er­ly accessed?

    You can. Once you log back into your Face­book account, you can go to your account’s secu­ri­ty and login page, which lets you see where you’ve logged in. If you had your access tokens revoked and had to log in again, you should see only the devices that you logged back in with.

    Should I delete my Face­book account?

    That’s up to you! But you may want to take some precautions like changing your password and turning on two-factor authentication, if you haven’t done so already. If you weren’t impacted by this, you may want to take the time to delete some of the personal information you’ve shared to Facebook to reduce your risk of exposure in future attacks, if they were to occur.

    ————

    “Every­thing you need to know about Facebook’s data breach affect­ing 50M users” by Sarah Perez, Zack Whit­tak­er; Tech Crunch; 09/28/2018

    “Facebook says at least 50 million users’ data were confirmed at risk after attackers exploited a vulnerability that allowed them access to personal data. The company also preventively secured 40 million additional accounts out of an abundance of caution.”

    50 mil­lion user accounts “at risk” and anoth­er 40 mil­lion secured out of “an abun­dance of cau­tion”. But no con­firmed hacks. It’s a curi­ous admis­sion. Like, aren’t the 40 mil­lion accounts implic­it­ly “at risk” too? What dif­fer­en­ti­ates those accounts from the 50 mil­lion accounts? At this point we have no idea.

    And note the con­tra­dic­to­ry lan­guage: Face­book says it has­n’t seen any accounts com­pro­mised and improp­er­ly accessed and yet Mark Zucker­berg lit­er­al­ly refers to hack­ers who were using this exploit:

    ...
    What data were the hack­ers after?

    Face­book CEO Mark Zucker­berg said that the com­pa­ny has not seen any accounts com­pro­mised and improp­er­ly accessed — although it’s ear­ly days and that may change. But Zucker­berg said that the attack­ers were using Face­book devel­op­er APIs to obtain some infor­ma­tion, like “name, gen­der, and home­towns” that’s linked to a user’s pro­file page.
    ...

    Also note how Face­book explic­it­ly says that cred­it card infor­ma­tion could­n’t have been stolen using this hack, but when it comes to the pri­vate mes­sages Face­book mere­ly says that it thinks it’s “unlike­ly” that pri­vate mes­sages were accessed. That sure sounds like pri­vate mes­sages could have been accessed:

    ...
    What data wasn’t tak­en?

    Face­book said that it looks unlike­ly that pri­vate mes­sages were accessed. No cred­it card infor­ma­tion was tak­en in the breach, Face­book said. Again, that may change as the company’s inves­ti­ga­tion con­tin­ues.
    ...

    And this vulnerability had been in place since July of 2017 and wasn’t discovered until September 16, 2018. And it was only discovered by Facebook observing a spike in unusual activity, which basically means they caught someone using this exploit in a BIG way (a toy sketch of that kind of spike check follows the excerpt below). There’s no reason to assume this wasn’t being used in a much more targeted manner (one that doesn’t create a spike) before that:

    ...
    When did this attack hap­pen?

    The vul­ner­a­bil­i­ty was intro­duced on the site in July 2017, but Face­book didn’t know about it until this month, on Sep­tem­ber 16, 2018, when it spot­ted a spike in unusu­al activ­i­ty. That means the hack­ers could have had access to user data for a long time, as Face­book is not sure right now when the attack began.
    ...
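    Facebook hasn’t described how its anomaly detection actually works, so the following is a purely generic illustration: a traffic-spike check of the kind that would catch a bulk harvester (but miss a careful, targeted one) can be as simple as flagging request counts that sit several standard deviations above a rolling baseline. A minimal Python sketch, with every number hypothetical:

```python
# Generic traffic-spike check: flag counts far above a rolling baseline.
# Purely illustrative -- Facebook has not described its actual method.
from statistics import mean, stdev

def spike_indices(counts, window=24, threshold=4.0):
    """Return indices where a count exceeds the rolling mean of the
    preceding `window` samples by `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical hourly calls to the vulnerable endpoint: flat noise, then a burst.
hourly = [100, 104, 98, 101, 97, 103, 99, 102] * 6 + [2500]
print(spike_indices(hourly))  # -> [48]: only the bulk burst stands out
```

    A targeted attacker staying inside that baseline would never trip an alarm like this, which is the point of the caveat above.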

    The hack itself appears to be the result of a series of bugs that resulted in other users’ access tokens (login cookies) getting generated. Bugs that, ironically, are part of the “View As” feature, which is intended to give users more control over their privacy:

    ...
    How did the attack­ers get in?

    Not one, but three bugs led to the data expo­sure.

    In July 2017, Face­book inad­ver­tent­ly intro­duced three vul­ner­a­bil­i­ties in its video uploader, said Guy Rosen, Facebook’s vice pres­i­dent of prod­uct man­age­ment, in a call with reporters. When using the “View As” fea­ture to view your pro­file as some­one else, the video uploader would occa­sion­al­ly appear when it shouldn’t dis­play at all. When it appeared, it gen­er­at­ed an access token using the per­son who the pro­file page was being viewed as. If that token was obtained, an attack­er could log into the account of the oth­er per­son.
    ...

    And those access tokens don’t just give access to Face­book accounts. They give access to any oth­er app that uses the Face­book login sys­tem. That’s what makes this such a mas­sive breach. The impact is almost impos­si­ble to assess because it’s basi­cal­ly a hack of one of the biggest login sys­tems used on the inter­net. Any app that offered a Face­book login option was vul­ner­a­ble from July 2017 until a few days ago:

    ...
    Did this affect What­sApp and Insta­gram accounts?

    Facebook said that it’s not yet sure if Instagram accounts are affected, but said they were automatically secured once Facebook access tokens were revoked. Affected Instagram users will have to unlink and relink their Facebook accounts in Instagram in order to cross-post to Facebook.

    On a call with reporters, Face­book said there is no impact on What­sApp users at all.

    Are sites that use Face­book Login also affect­ed?

    If an attacker obtained your Facebook access token, it gives them access not only to your Facebook account as if they were you, but also to any other site that you’ve used Facebook to log in with, like dating apps, games, or streaming services.
    ...
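    To see why a leaked access token matters so much, note that it’s a bearer credential: whoever presents it is treated as the account owner, both by Facebook’s own Graph API and, by extension, by sites that accept a Facebook login. A minimal sketch of that bearer-token property (the token value here is obviously hypothetical):

```python
# A Facebook access token is a bearer credential: any holder can call the
# Graph API as the user it was minted for. The token value is hypothetical.
import requests

STOLEN_TOKEN = "EAAB...hypothetical..."  # e.g. a token minted by the View As bug

# The Graph API identifies the caller purely from the token -- no password,
# no second factor. "name, gender, and hometown" are the fields Zuckerberg
# said the attackers queried.
resp = requests.get(
    "https://graph.facebook.com/me",
    params={"fields": "id,name,gender,hometown", "access_token": STOLEN_TOKEN},
)
print(resp.json())  # with a valid token: the victim's profile fields

# Sites using Facebook Login validate the same kind of token server-side,
# so the holder can authenticate to them as the victim too.
```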

    So is Face­book going to be pun­ished by US and EU reg­u­la­tors and face new reg­u­la­tions after a mas­sive pri­va­cy f#ck like this? At this point that’s a big maybe:

    ...
    Will Face­book be fined or pun­ished?

    If Face­book is found to have breached Euro­pean data pro­tec­tion rules — the new­ly imple­ment­ed Gen­er­al Data Pro­tec­tion Reg­u­la­tion (GDPR) — the com­pa­ny can face fines of up to four per­cent of its glob­al rev­enue.

    How­ev­er, that fine can’t be levied until Face­book knows more about the nature of the breach and the risk to users.

    Another data breach of this scale – especially coming in the wake of the Cambridge Analytica scandal and other data leaks – has some in Congress calling for the social network to be regulated. Sen. Mark Warner (D‑VA) issued a stern reprimand to Facebook over today’s news, and again pushed his proposal for regulating companies holding large data sets as “information fiduciaries” with additional consequences for improper security.

    FTC Com­mis­sion­er Rohit Chopra also tweet­ed that “I want answers” regard­ing the Face­book hack. It’s rea­son­able to assume that there could be inves­ti­ga­tors in both the U.S. and Europe to fig­ure out what hap­pened.
    ...
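    For a sense of scale on that four percent cap: Facebook’s reported 2017 revenue was roughly $40.65 billion, so the theoretical maximum GDPR fine would land around $1.63 billion. The back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope GDPR exposure: fines are capped at 4% of global
# annual turnover. Revenue figure is Facebook's reported 2017 total (approx.).
FB_2017_REVENUE = 40.65e9   # USD
GDPR_CAP = 0.04             # 4% of global annual turnover

print(f"Theoretical maximum fine: ${FB_2017_REVENUE * GDPR_CAP / 1e9:.2f} billion")
# -> Theoretical maximum fine: $1.63 billion
```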

    But let’s not for­get the one group that could deal the great­est pun­ish­ment to Face­book: Face­book’s users, who can just delete their accounts at this point:

    ...
    Should I delete my Face­book account?

    That’s up to you! But you may want to take some precautions like changing your password and turning on two-factor authentication, if you haven’t done so already. If you weren’t impacted by this, you may want to take the time to delete some of the personal information you’ve shared to Facebook to reduce your risk of exposure in future attacks, if they were to occur.
    ...

    And whether or not you decide to delete your account, it seems clear at this point that the widespread assumption that a company as big as Facebook would be able to avoid a vulnerability of this nature is an erroneous assumption. Facebook just experienced a hack that didn’t just leave Facebook accounts open to anyone but also the accounts of all sorts of third-party apps all across the web. That’s like the worst hack ever by some measures. And it happened because Facebook didn’t catch bugs. Bugs that should have been easy to spot, like the video uploader showing up in “View As” mode. How did Facebook not catch that for over a year? It’s not like it was a subtle bug that’s hard to spot. It’s kind of amazing. But it happened. And now we get to spend the next year getting updates from Facebook about how this hack was actually worse than they earlier disclosed.

    On the plus side, the Cam­bridge Ana­lyt­i­ca pri­va­cy night­mare does­n’t seem quite as bad any­more, rel­a­tive­ly speak­ing.

    Posted by Pterrafractyl | September 30, 2018, 7:29 pm
  23. With the 2018 US mid-terms less than a month away, and the GOP appearing to have settled on a strategy of stoking right-wing rage over the growing conspiracy theory that the accusations of sexual harassment and assault against Brett Kavanaugh were made up as part of a liberal plot, it’s pretty clear that an entity like Cambridge Analytica, with experience in promoting conspiracy theories to targeted audiences and stoking outrage, would be very useful to the GOP in this final stretch. So it’s worth noting that it was reported back in June that the GOP appears to have already hired a Cambridge Analytica-like entity to do work for it in the 2018 mid-terms. Specifically, it hired Data Propria, which employs four ex-Cambridge Analytica employees, including Cambridge Analytica’s chief data scientist. Cambridge Analytica’s former head of product, Matt Oczkowski, leads Data Propria. Oczkowski led the Cambridge Analytica team that worked for Trump’s 2016 campaign.

    Brad Parscale, the head of Trump’s 2016 dig­i­tal cam­paign, is an own­er of Data Pro­pri­a’s par­ent com­pa­ny, Cloud Com­merce. Parscale has already signed on to lead Trump’s 2020 dig­i­tal cam­paign efforts and is cur­rent­ly work­ing for the GOP on the mid-terms. So Data Pro­pria is an excep­tion­al­ly well-con­nect­ed com­pa­ny just in terms of who is behind it.

    As the following article also makes clear, Data Propria is clearly very interested in hiding from the public the nature of the work it’s doing and the fact that it’s working for the GOP at all. Oczkowski denied any links to the Trump campaign, but acknowledged that Data Propria is doing 2018 campaign work for the Republican National Committee. But AP reporters actually overheard conversations between Oczkowski and a prospective client, held in public, in which Oczkowski talked about how he and Parscale were already working on Trump’s 2020 campaign.

    Oczkowski also previously told the AP that Data Propria had no intention of seeking political clients. But after the AP told him that its reporters had overheard him directly discussing campaign work, Oczkowski claimed the company had changed course and that whatever he’d said about the 2020 campaign would have been speculative. So he appears to continue to deny Trump’s 2020 campaign work even after getting caught talking about it, which isn’t particularly surprising given how bad it looks for Trump to continue working with Cambridge Analytica veterans after everything that happened in 2016.

    But you also have to wonder if the reason they want to avoid any connection to Trump’s 2020 campaign is that they have plans for some very dirty tactics. And that’s not just a medium-term concern about 2020. Because if those dirty tactics are going to be employed in 2020, they’re probably being honed in the 2018 elections. Right now:

    Asso­ci­at­ed Press

    AP: Trump 2020 work­ing with ex-Cam­bridge Ana­lyt­i­ca staffers

    By JEFF HORWITZ
    Jun. 15, 2018

    WASHINGTON (AP) — A com­pa­ny run by for­mer offi­cials at Cam­bridge Ana­lyt­i­ca, the polit­i­cal con­sult­ing firm brought down by a scan­dal over how it obtained Face­book users’ pri­vate data, has qui­et­ly been work­ing for Pres­i­dent Don­ald Trump’s 2020 re-elec­tion effort, The Asso­ci­at­ed Press has learned.

    The AP con­firmed that at least four for­mer Cam­bridge Ana­lyt­i­ca employ­ees are affil­i­at­ed with Data Pro­pria, a new com­pa­ny spe­cial­iz­ing in vot­er and con­sumer tar­get­ing work sim­i­lar to Cam­bridge Analytica’s efforts before its col­lapse. The company’s for­mer head of prod­uct, Matt Oczkows­ki, leads the new firm, which also includes Cam­bridge Analytica’s for­mer chief data sci­en­tist.

    Oczkows­ki denied a link to the Trump cam­paign, but acknowl­edged that his new firm has agreed to do 2018 cam­paign work for the Repub­li­can Nation­al Com­mit­tee. Oczkows­ki led the Cam­bridge Ana­lyt­i­ca data team which worked on Trump’s suc­cess­ful 2016 cam­paign.

    The AP learned of Data Propria’s role in Trump’s re-elec­tion effort as a result of con­ver­sa­tions held with polit­i­cal con­tacts and prospec­tive clients in recent weeks by Oczkows­ki. In one such con­ver­sa­tion, which took place in a pub­lic place and was over­heard by two AP reporters, Oczkows­ki said he and Trump’s 2020 cam­paign man­ag­er, Brad Parscale, were “doing the president’s work for 2020.”

    In addi­tion, a per­son famil­iar with Data Propria’s Wash­ing­ton efforts, who spoke on con­di­tion of anonymi­ty to pro­tect busi­ness rela­tion­ships, con­firmed to the AP that Trump-relat­ed 2020 work already had begun at the firm along the lines of Cam­bridge Analytica’s 2016 work.

    Both Oczkows­ki and Parscale told the AP that no Trump re-elec­tion work by Data Pro­pria was even planned, but con­firmed that Parscale had helped Data Pro­pria line up a suc­cess­ful bid on 2018 midterm polling-relat­ed work for the RNC, award­ed ear­li­er this week. Oczkows­ki called the con­tract mod­est.

    Oczkows­ki had pre­vi­ous­ly told the AP the firm had no inten­tion of seek­ing polit­i­cal clients. After being informed the AP had over­heard him direct­ly dis­cussing cam­paign work, he said his young com­pa­ny had changed course and that what­ev­er he’d said about the 2020 cam­paign would have been spec­u­la­tive.

    “I’m obvi­ous­ly open to any work that would become avail­able,” Oczkows­ki said, not­ing that he and Parscale had worked togeth­er close­ly dur­ing Trump’s 2016 cam­paign.

    Parscale told the AP that he has not even begun award­ing con­tracts for the 2020 cam­paign, which he was appoint­ed to man­age in March.

    “I am laser-focused on the 2018 midterms and hold­ing the House and increas­ing our seats in the Sen­ate,” he said. “Once we do those things, I’ll start work­ing on re-elect­ing Pres­i­dent Trump.”

    ...

    In May, Cambridge Analytica filed for bankruptcy and said it was “ceasing all operations.” A British investigation of Cambridge Analytica and its parent company will continue despite the shutdown, the U.K.’s Information Commissioner’s Office said last month.

    The descrip­tion of Data Propria’s efforts over­heard by the AP reporters tracks close­ly with the ser­vices Cam­bridge Ana­lyt­i­ca pro­vid­ed to both com­mer­cial clients and Trump’s 2016 cam­paign, includ­ing pro­fil­ing vot­ers based on data about them in a process known as “psy­chog­ra­phy.” The tech­nique clas­si­fies peo­ple accord­ing to their atti­tudes, aspi­ra­tions and oth­er psy­cho­log­i­cal cri­te­ria to tai­lor adver­tise­ments or mar­ket­ing strate­gies.

    Oczkows­ki told the AP that three of the peo­ple on Data Propria’s 10-per­son team are Cam­bridge Ana­lyt­i­ca alum­ni, but said they were focused on cam­paign oper­a­tions and data analy­sis — not behav­ioral psy­chol­o­gy.

    Data Pro­pria is “not going down the psy­cho­met­rics side of things,” he said.

    Among the for­mer Cam­bridge Ana­lyt­i­ca employ­ees is David Wilkin­son, a British cit­i­zen who was the company’s lead data sci­en­tist. Dur­ing the 2016 cam­paign, Wilkin­son helped over­see the vot­er data mod­el­ing that informed Trump’s focus on the Rust Belt, accord­ing to a Cam­bridge Ana­lyt­i­ca press release issued after the elec­tion.

    Fed­er­al elec­tion law bars for­eign nation­als from “direct­ing, con­trol­ling or direct­ly or indi­rect­ly par­tic­i­pat­ing in the deci­sion-mak­ing process” of U.S. cam­paigns. The pub­lic advo­ca­cy group Com­mon Cause filed a com­plaint with the FEC in March alleg­ing that Cam­bridge Analytica’s for­eign employ­ees broke that law, though the com­plaint did not name Wilkin­son.

    Oczkows­ki told the AP that the Lon­don-based Wilkin­son is a con­trac­tor and will not be involved in Data Propria’s U.S. polit­i­cal work.

    Anoth­er issue raised by Data Propria’s work on Trump’s re-elec­tion effort is the firm’s finan­cial links to Parscale, Trump’s cam­paign man­ag­er.

    Parscale is a part own­er of Data Propria’s par­ent com­pa­ny, a pub­licly trad­ed firm called Cloud Com­merce that bought his dig­i­tal mar­ket­ing busi­ness in August. Over the last year, Cloud Com­merce has large­ly rebuilt itself around Parscale’s for­mer com­pa­ny, now rebrand­ed Parscale Dig­i­tal. Parscale sits on Cloud Commerce’s board of direc­tors and pro­vides the com­pa­ny with the major­i­ty of its $2.9 mil­lion in rev­enue, accord­ing to the company’s most recent Secu­ri­ties and Exchange Com­mis­sion fil­ing.

    By work­ing with a Cloud Com­merce sub­sidiary, the Trump cam­paign could be help­ing Parscale prof­it beyond his $15,000 month­ly cam­paign retain­er and the com­mis­sions he has been col­lect­ing on Trump’s dig­i­tal adver­tis­ing spend­ing.

    While Parscale’s per­son­al busi­ness still works for the cam­paign, it’s unclear how that work may be chang­ing now that he has become Trump’s offi­cial cam­paign man­ag­er.

    Under one con­tract between Parscale and Cloud Com­merce, he receives a 5 per­cent cut of every dol­lar col­lect­ed by Parscale Dig­i­tal — which is large­ly com­posed of the web mar­ket­ing busi­ness Parscale sold to Cloud Com­merce last year. In SEC fil­ings, Cloud Com­merce has esti­mat­ed that Parscale’s cut of those rev­enues, exclud­ing pass through pay­ments, would total between $850,000 and $1.3 mil­lion. Parscale Dig­i­tal would not be direct­ly receiv­ing funds from the RNC or the cam­paign.

    Even though Parscale is not direct­ly receiv­ing mon­ey from Data Pro­pria work, the firms pro­vide each oth­er with busi­ness and Data Propria’s suc­cess would help Cloud Com­merce pay off the mon­ey it owes Parscale.

    A sec­ond agree­ment oblig­ates Cloud Com­merce to pay Parscale $85,150 a month as part of its sep­a­rate $1 mil­lion pur­chase of his for­mer web-host­ing busi­ness. Parscale earns anoth­er $3,000 per month from leas­ing com­put­ers and office fur­ni­ture to Cloud Com­merce.

    Trevor Pot­ter, a Repub­li­can who once head­ed the Fed­er­al Elec­tion Com­mis­sion and now leads the non­prof­it Cam­paign Legal Cen­ter, said it was unusu­al for an incum­bent president’s cam­paign to direct large amounts of busi­ness to out­side firms tied to his cam­paign man­ag­er.

    Such arrange­ments are more com­mon for long-shot can­di­dates in need of exper­tise, he said.

    “Top-notch can­di­dates have bar­gain­ing pow­er and are less like­ly to put up with that,” Pot­ter said. “It sounds like a very rich oppor­tu­ni­ty for Mr. Parscale, but that’s real­ly the candidate’s call.”

    Aside from the ties to Parscale, Cloud Commerce’s par­ent com­pa­ny is an unusu­al can­di­date for blue chip polit­i­cal work. Found­ed in 1999, the firm has repeat­ed­ly changed its name and busi­ness mod­el, and the company’s most recent audit “expressed sub­stan­tial doubt about our abil­i­ty to con­tin­ue as a going con­cern” with­out con­tin­u­ing infu­sions of cash.

    An AP inves­ti­ga­tion of Cloud Com­merce in March found that a for­mer CEO of its pre­de­ces­sor firm plead­ed guilty to stock fraud in 2008 and remained active in Cloud Commerce’s affairs until at least 2015. Cloud Com­merce says the man has had no con­nec­tion with its busi­ness since at least 2011.

    The AP also found dis­crep­an­cies in the pro­fes­sion­al biog­ra­phy of cur­rent Cloud Com­merce chief exec­u­tive Andrew Van Noy, who has told investors that he worked for Mor­gan Stan­ley and was a pri­vate equi­ty exec­u­tive in the years imme­di­ate­ly pre­ced­ing his arrival at Cloud Com­merce in 2011.

    Van Noy’s August 2010 Utah bank­rupt­cy fil­ing con­flicts with that por­tray­al, show­ing he spent most of the pri­or two and a half years unem­ployed. The fil­ings also reveal that Van Noy had been accused of sell­ing unli­censed secu­ri­ties and using $100,000 of an investor’s mon­ey for per­son­al pur­pos­es.

    “Luck­i­ly I have not had to go to the home­less shel­ter,” Van Noy wrote to that investor in 2011 after the man asked what had hap­pened to his invest­ment. After the man sued for fraud, Van Noy agreed to pay him $105,000.

    A year lat­er, he became pres­i­dent of Cloud Com­merce.

    ———-

    “AP: Trump 2020 work­ing with ex-Cam­bridge Ana­lyt­i­ca staffers” By JEFF HORWITZ; Asso­ci­at­ed Press; 06/15/2018

    “The AP confirmed that at least four former Cambridge Analytica employees are affiliated with Data Propria, a new company specializing in voter and consumer targeting work similar to Cambridge Analytica’s efforts before its collapse. The company’s former head of product, Matt Oczkowski, leads the new firm, which also includes Cambridge Analytica’s former chief data scientist.”

    Meet the new polit­i­cal data ana­lyt­ics night­mare com­pa­ny, same as the old polit­i­cal data ana­lyt­ics night­mare com­pa­ny.

    And while Data Propria acknowledges doing work for the RNC in 2018, the company curiously denies having a contract to work on Trump’s 2020 campaign:

    ...
    Oczkows­ki denied a link to the Trump cam­paign, but acknowl­edged that his new firm has agreed to do 2018 cam­paign work for the Repub­li­can Nation­al Com­mit­tee. Oczkows­ki led the Cam­bridge Ana­lyt­i­ca data team which worked on Trump’s suc­cess­ful 2016 cam­paign.

    ...

    Both Oczkows­ki and Parscale told the AP that no Trump re-elec­tion work by Data Pro­pria was even planned, but con­firmed that Parscale had helped Data Pro­pria line up a suc­cess­ful bid on 2018 midterm polling-relat­ed work for the RNC, award­ed ear­li­er this week. Oczkows­ki called the con­tract mod­est.
    ...

    But those denials proved to be lies. Of course. The AP’s reporters overheard Oczkowski bragging about “doing the president’s work for 2020” with Brad Parscale. And this was anonymously confirmed by someone familiar with the company. But when confronted with the fact that AP reporters had overheard him talking about that 2020 contract, Oczkowski acted like the company had changed its mind and that talk of 2020 contracts was purely speculative:

    ...
    The AP learned of Data Propria’s role in Trump’s re-elec­tion effort as a result of con­ver­sa­tions held with polit­i­cal con­tacts and prospec­tive clients in recent weeks by Oczkows­ki. In one such con­ver­sa­tion, which took place in a pub­lic place and was over­heard by two AP reporters, Oczkows­ki said he and Trump’s 2020 cam­paign man­ag­er, Brad Parscale, were “doing the president’s work for 2020.”

    In addi­tion, a per­son famil­iar with Data Propria’s Wash­ing­ton efforts, who spoke on con­di­tion of anonymi­ty to pro­tect busi­ness rela­tion­ships, con­firmed to the AP that Trump-relat­ed 2020 work already had begun at the firm along the lines of Cam­bridge Analytica’s 2016 work.

    ...

    Oczkows­ki had pre­vi­ous­ly told the AP the firm had no inten­tion of seek­ing polit­i­cal clients. After being informed the AP had over­heard him direct­ly dis­cussing cam­paign work, he said his young com­pa­ny had changed course and that what­ev­er he’d said about the 2020 cam­paign would have been spec­u­la­tive.

    “I’m obvi­ous­ly open to any work that would become avail­able,” Oczkows­ki said, not­ing that he and Parscale had worked togeth­er close­ly dur­ing Trump’s 2016 cam­paign.

    Parscale told the AP that he has not even begun award­ing con­tracts for the 2020 cam­paign, which he was appoint­ed to man­age in March.

    “I am laser-focused on the 2018 midterms and hold­ing the House and increas­ing our seats in the Sen­ate,” he said. “Once we do those things, I’ll start work­ing on re-elect­ing Pres­i­dent Trump.”
    ...

    So we’ve just learned about the existence of Data Propria and the company is already engaging in blatantly false denials that can be easily disproven. Again: meet the new political data analytics nightmare company, same as the old political data analytics nightmare company.

    And just to be clear, it sounds like the services offered by Data Propria include “psychography”: creating psychometric profiles of individuals and micro-targeting them with ads based on those profiles. But, of course, Oczkowski denies this too:

    ...

    The descrip­tion of Data Propria’s efforts over­heard by the AP reporters tracks close­ly with the ser­vices Cam­bridge Ana­lyt­i­ca pro­vid­ed to both com­mer­cial clients and Trump’s 2016 cam­paign, includ­ing pro­fil­ing vot­ers based on data about them in a process known as “psy­chog­ra­phy.” The tech­nique clas­si­fies peo­ple accord­ing to their atti­tudes, aspi­ra­tions and oth­er psy­cho­log­i­cal cri­te­ria to tai­lor adver­tise­ments or mar­ket­ing strate­gies.

    Oczkows­ki told the AP that three of the peo­ple on Data Propria’s 10-per­son team are Cam­bridge Ana­lyt­i­ca alum­ni, but said they were focused on cam­paign oper­a­tions and data analy­sis — not behav­ioral psy­chol­o­gy.

    Data Pro­pria is “not going down the psy­cho­met­rics side of things,” he said.
    ...
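    Mechanically, there’s nothing exotic about “psychography”: the published academic work the field grew out of (the Kosinski/Stillwell studies scoring Big Five traits from Facebook likes) used simple linear models. A toy, dependency-free sketch of the idea, with entirely made-up page names and weights:

```python
# Toy trait scorer in the spirit of the published likes-to-Big-Five research.
# Page names and weights are invented; real models were fit on millions of
# labeled profiles, but the arithmetic is this simple.
import math

NEUROTICISM_WEIGHTS = {                 # hypothetical per-like weights
    "page:conspiracy_watch": 0.9,
    "page:late_night_worries": 0.7,
    "page:gardening_tips": -0.4,
    "page:team_sports": -0.6,
}

def neuroticism_score(likes, bias=-0.2):
    """Logistic score in [0, 1] from a user's page likes."""
    z = bias + sum(NEUROTICISM_WEIGHTS.get(like, 0.0) for like in likes)
    return 1.0 / (1.0 + math.exp(-z))

user_likes = ["page:conspiracy_watch", "page:late_night_worries"]
print(round(neuroticism_score(user_likes), 2))  # -> 0.8: a targeting candidate
```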

    As we should have also expected, it appears that Data Propria may be flouting US election laws. Specifically, federal election law bars foreign nationals from “directing, controlling or directly or indirectly participating in the decision-making process” of U.S. campaigns. Cambridge Analytica’s lead data scientist, David Wilkinson, is now working for Data Propria. He also happens to be a British citizen. So if Wilkinson is working for the GOP, that’s potentially a violation of the law. So, of course, Oczkowski is claiming that Wilkinson is actually just a contractor who won’t be doing any work on US campaigns:

    ...
    Among the for­mer Cam­bridge Ana­lyt­i­ca employ­ees is David Wilkin­son, a British cit­i­zen who was the company’s lead data sci­en­tist. Dur­ing the 2016 cam­paign, Wilkin­son helped over­see the vot­er data mod­el­ing that informed Trump’s focus on the Rust Belt, accord­ing to a Cam­bridge Ana­lyt­i­ca press release issued after the elec­tion.

    Fed­er­al elec­tion law bars for­eign nation­als from “direct­ing, con­trol­ling or direct­ly or indi­rect­ly par­tic­i­pat­ing in the deci­sion-mak­ing process” of U.S. cam­paigns. The pub­lic advo­ca­cy group Com­mon Cause filed a com­plaint with the FEC in March alleg­ing that Cam­bridge Analytica’s for­eign employ­ees broke that law, though the com­plaint did not name Wilkin­son.

    Oczkows­ki told the AP that the Lon­don-based Wilkin­son is a con­trac­tor and will not be involved in Data Propria’s U.S. polit­i­cal work.
    ...

    In addition, there are questions about whether or not Data Propria is acting as a back-door way for Brad Parscale to earn even more money from the Trump campaign. Potentially A LOT of money. It’s described as an unusual situation for a presidential incumbent to direct large sums of money to outside firms tied to the campaign manager, because presidents can generally drive a harder bargain than that. In other words, Trump is apparently allowing Parscale to turn the 2020 campaign into a personal enrichment project. Which is unusual, but also very Trumpian, so maybe it’s not that unusual in this context:

    ...
    Anoth­er issue raised by Data Propria’s work on Trump’s re-elec­tion effort is the firm’s finan­cial links to Parscale, Trump’s cam­paign man­ag­er.

    Parscale is a part own­er of Data Propria’s par­ent com­pa­ny, a pub­licly trad­ed firm called Cloud Com­merce that bought his dig­i­tal mar­ket­ing busi­ness in August. Over the last year, Cloud Com­merce has large­ly rebuilt itself around Parscale’s for­mer com­pa­ny, now rebrand­ed Parscale Dig­i­tal. Parscale sits on Cloud Commerce’s board of direc­tors and pro­vides the com­pa­ny with the major­i­ty of its $2.9 mil­lion in rev­enue, accord­ing to the company’s most recent Secu­ri­ties and Exchange Com­mis­sion fil­ing.

    By work­ing with a Cloud Com­merce sub­sidiary, the Trump cam­paign could be help­ing Parscale prof­it beyond his $15,000 month­ly cam­paign retain­er and the com­mis­sions he has been col­lect­ing on Trump’s dig­i­tal adver­tis­ing spend­ing.

    While Parscale’s per­son­al busi­ness still works for the cam­paign, it’s unclear how that work may be chang­ing now that he has become Trump’s offi­cial cam­paign man­ag­er.

    Under one con­tract between Parscale and Cloud Com­merce, he receives a 5 per­cent cut of every dol­lar col­lect­ed by Parscale Dig­i­tal — which is large­ly com­posed of the web mar­ket­ing busi­ness Parscale sold to Cloud Com­merce last year. In SEC fil­ings, Cloud Com­merce has esti­mat­ed that Parscale’s cut of those rev­enues, exclud­ing pass through pay­ments, would total between $850,000 and $1.3 mil­lion. Parscale Dig­i­tal would not be direct­ly receiv­ing funds from the RNC or the cam­paign.

    Even though Parscale is not direct­ly receiv­ing mon­ey from Data Pro­pria work, the firms pro­vide each oth­er with busi­ness and Data Propria’s suc­cess would help Cloud Com­merce pay off the mon­ey it owes Parscale.

    A sec­ond agree­ment oblig­ates Cloud Com­merce to pay Parscale $85,150 a month as part of its sep­a­rate $1 mil­lion pur­chase of his for­mer web-host­ing busi­ness. Parscale earns anoth­er $3,000 per month from leas­ing com­put­ers and office fur­ni­ture to Cloud Com­merce.

    Trevor Pot­ter, a Repub­li­can who once head­ed the Fed­er­al Elec­tion Com­mis­sion and now leads the non­prof­it Cam­paign Legal Cen­ter, said it was unusu­al for an incum­bent president’s cam­paign to direct large amounts of busi­ness to out­side firms tied to his cam­paign man­ag­er.

    Such arrange­ments are more com­mon for long-shot can­di­dates in need of exper­tise, he said.

    “Top-notch can­di­dates have bar­gain­ing pow­er and are less like­ly to put up with that,” Pot­ter said. “It sounds like a very rich oppor­tu­ni­ty for Mr. Parscale, but that’s real­ly the candidate’s call.”
    ...
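    A quick sanity check on those SEC-filed numbers: if Parscale’s 5 percent cut is projected at $850,000 to $1.3 million, the implied Parscale Digital revenue (excluding pass-through payments) runs from $17 million to $26 million, on top of his fixed monthly payments:

```python
# Implied Parscale Digital revenue from the SEC-filed estimates of
# Parscale's 5% cut (excluding pass-through payments).
CUT_RATE = 0.05
cut_low, cut_high = 850_000, 1_300_000

print(f"Implied revenue: ${cut_low / CUT_RATE / 1e6:.0f}M to "
      f"${cut_high / CUT_RATE / 1e6:.0f}M")
# -> Implied revenue: $17M to $26M

# Plus the fixed obligations the AP reported:
monthly = 85_150 + 3_000   # web-hosting buyout installment + equipment leases
print(f"Fixed payments: ${monthly:,} per month")  # -> $88,150 per month
```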

    Keep in mind that if Data Propria is structured in a way that could financially benefit Parscale at the expense of the Trump campaign, we should probably be asking who else is getting a cut. Is Parscale’s digital marketing empire going to double as a way for Trumpers to turn campaign contributions into personal profits? Who knows, but the fact that Cloud Commerce, Data Propria’s parent company, has a rather shady background does nothing to dispel such suspicions. For instance, a former CEO of its predecessor firm pleaded guilty to stock fraud in 2008 but remained active in the company’s affairs until at least 2015:

    ...
    Aside from the ties to Parscale, Cloud Commerce’s par­ent com­pa­ny is an unusu­al can­di­date for blue chip polit­i­cal work. Found­ed in 1999, the firm has repeat­ed­ly changed its name and busi­ness mod­el, and the company’s most recent audit “expressed sub­stan­tial doubt about our abil­i­ty to con­tin­ue as a going con­cern” with­out con­tin­u­ing infu­sions of cash.

    An AP inves­ti­ga­tion of Cloud Com­merce in March found that a for­mer CEO of its pre­de­ces­sor firm plead­ed guilty to stock fraud in 2008 and remained active in Cloud Commerce’s affairs until at least 2015. Cloud Com­merce says the man has had no con­nec­tion with its busi­ness since at least 2011.
    ...

    And the current CEO of Cloud Commerce, Andrew Van Noy, appears to have faked his professional biography:

    ...
    The AP also found dis­crep­an­cies in the pro­fes­sion­al biog­ra­phy of cur­rent Cloud Com­merce chief exec­u­tive Andrew Van Noy, who has told investors that he worked for Mor­gan Stan­ley and was a pri­vate equi­ty exec­u­tive in the years imme­di­ate­ly pre­ced­ing his arrival at Cloud Com­merce in 2011.

    Van Noy’s August 2010 Utah bank­rupt­cy fil­ing con­flicts with that por­tray­al, show­ing he spent most of the pri­or two and a half years unem­ployed. The fil­ings also reveal that Van Noy had been accused of sell­ing unli­censed secu­ri­ties and using $100,000 of an investor’s mon­ey for per­son­al pur­pos­es.

    “Luck­i­ly I have not had to go to the home­less shel­ter,” Van Noy wrote to that investor in 2011 after the man asked what had hap­pened to his invest­ment. After the man sued for fraud, Van Noy agreed to pay him $105,000.

    A year lat­er, he became pres­i­dent of Cloud Com­merce.
    ...

    Again, how Trumpian.

    So Data Propria has all the markings of a shady operation run by shady people. Shady people with an expertise in manipulative mass propaganda. And it’s currently using that expertise for the GOP’s benefit in the 2018 mid-terms. Mid-terms where the GOP is poised to spend the final month spreading conspiracy theories designed to stoke right-wing outrage while practicing for 2020.

    Posted by Pterrafractyl | October 7, 2018, 6:59 pm
  24. Remember the lawsuit brought against Facebook by the app developer Six4Three? That was the lawsuit alleging that, contrary to Facebook’s denials, the exploitation of the “friends permissions” policies — which allowed Cambridge Analytica’s personality quiz app to collect large amounts of information on the 87 million Facebook ‘friends’ of the ~300 thousand people who actually downloaded the app — was actually aggressively pushed on app developers as a means of enticing them to make apps for Facebook. Six4Three further alleges that Facebook then ran a kind of shakedown operation on the most successful apps, looking for ways to extract more fees from their developers and using the threat of cutting off access to all of that ‘friends permissions’ data in the negotiations. Six4Three goes on to allege that during the period of 2014–2015, when Facebook started phasing out the ‘friends permissions’ option, the top Facebook executives themselves were involved in deciding which app developers to target with the threat of cutting off access. Finally, Six4Three charges that Mark Zuckerberg himself was directly involved with this scheme. In all, Six4Three claims 40,000 app developer companies were targeted by Facebook with threats of cutting off data access unless they allowed Facebook to co-opt them. In other words, this lawsuit is a potential nightmare for Facebook and an even bigger nightmare for Mark Zuckerberg.
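    The amplification built into that ‘friends permissions’ arrangement is worth spelling out: roughly 300,000 consenting installs yielding 87 million harvested profiles means each install exposed, on average, about 290 friends who never agreed to anything:

```python
# Amplification factor of the "friends permissions" harvest, using the
# figures cited in the reporting on the Six4Three suit.
installers = 300_000             # people who actually downloaded the quiz app
profiles_harvested = 87_000_000  # Facebook friends swept up alongside them

print(f"~{profiles_harvested / installers:.0f} profiles per consenting install")
# -> ~290 profiles per consenting install
```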

    And it sounds like that public relations nightmare has indeed gotten bigger, thanks to the UK legal system. Or may have gotten bigger. We don’t know yet. What we do know is that the UK parliament has just gotten its hands on a number of internal Facebook documents that the company does not want released or discussed. The documents were actually obtained by Six4Three during the legal discovery process of its lawsuit. That lawsuit is taking place in the US legal system, and the documents in question are subject to an order of a Californian superior court and cannot be shared or made public, at risk of being found in contempt of court.

    So how did the UK parliament get its hands on Facebook documents held by Six4Three that were under a California court order to remain sealed from the public? Well, Damian Collins, the chair of the culture, media and sport select committee in the UK parliament, invoked a rare parliamentary mechanism to compel the founder of Six4Three to hand over the documents during a business trip to London. The parliament actually sent a serjeant at arms to his hotel with a final warning and a two-hour deadline to comply with its order. The founder still failed to hand over the documents and was escorted to parliament, where he turned them over under threat of fines and imprisonment.

    Why was the UK parliament so interested in these particular documents? Well, in their lawsuit against Facebook, Six4Three alleges that the documents more or less demonstrate that Facebook was not only aware of the ‘friends permissions’ loophole used by Cambridge Analytica but actively promoting it to developers as an incentive. So while it’s unclear if the documents are expected to contain evidence of all of Six4Three’s charges against Facebook, they’re at least expected to contain evidence of some of those charges. At this point we have no idea what exactly is in the documents, but now that the UK parliament has its hands on this cache of documents we might get an idea sooner or later. And it’s already very clear that Facebook isn’t happy about any of this, and that might be, in part, because it’s claimed that the documents include correspondence with Mark Zuckerberg. So it’s entirely possible that the UK parliament now has its hands on evidence that Zuckerberg was not just fully aware of the policies that led to the Cambridge Analytica data privacy nightmare but was actively using those policies to lead some sort of app developer shakedown system:

    The Guardian

    Par­lia­ment seizes cache of Face­book inter­nal papers

    Doc­u­ments alleged to con­tain rev­e­la­tions on data and pri­va­cy con­trols that led to Cam­bridge Ana­lyt­i­ca scan­dal

    Car­ole Cad­wal­ladr

    Sat 24 Nov 2018 16.00 EST
    Last mod­i­fied on Sat 24 Nov 2018 18.50 EST

    Par­lia­ment has used its legal pow­ers to seize inter­nal Face­book doc­u­ments in an extra­or­di­nary attempt to hold the US social media giant to account after chief exec­u­tive Mark Zucker­berg repeat­ed­ly refused to answer MPs’ ques­tions.

    The cache of doc­u­ments is alleged to con­tain sig­nif­i­cant rev­e­la­tions about Face­book deci­sions on data and pri­va­cy con­trols that led to the Cam­bridge Ana­lyt­i­ca scan­dal. It is claimed they include con­fi­den­tial emails between senior exec­u­tives, and cor­re­spon­dence with Zucker­berg.

    Dami­an Collins, the chair of the cul­ture, media and sport select com­mit­tee, invoked a rare par­lia­men­tary mech­a­nism to com­pel the founder of a US soft­ware com­pa­ny, Six4Three, to hand over the doc­u­ments dur­ing a busi­ness trip to Lon­don. In anoth­er excep­tion­al move, par­lia­ment sent a ser­jeant at arms to his hotel with a final warn­ing and a two-hour dead­line to com­ply with its order. When the soft­ware firm founder failed to do so, it’s under­stood he was escort­ed to par­lia­ment. He was told he risked fines and even impris­on­ment if he didn’t hand over the doc­u­ments.

    “We are in unchart­ed ter­ri­to­ry,” said Collins, who also chairs an inquiry into fake news. “This is an unprece­dent­ed move but it’s an unprece­dent­ed sit­u­a­tion. We’ve failed to get answers from Face­book and we believe the doc­u­ments con­tain infor­ma­tion of very high pub­lic inter­est.”

    The seizure is the lat­est move in a bit­ter bat­tle between the British par­lia­ment and the social media giant. The strug­gle to hold Face­book to account has raised con­cerns about lim­its of British author­i­ty over inter­na­tion­al com­pa­nies that now play a key role in the demo­c­ra­t­ic process.

    Face­book, which has lost more than $100bn in val­ue since March when the Observ­er exposed how Cam­bridge Ana­lyt­i­ca had har­vest­ed data from 87m US users, faces anoth­er poten­tial PR cri­sis. It is believed the doc­u­ments will lay out how user data deci­sions were made in the years before the Cam­bridge Ana­lyt­i­ca breach, includ­ing what Zucker­berg and senior exec­u­tives knew.

    MPs lead­ing the inquiry into fake news have repeat­ed­ly tried to sum­mon Zucker­berg to explain the company’s actions. He has repeat­ed­ly refused. Collins said this reluc­tance to tes­ti­fy, plus mis­lead­ing tes­ti­mo­ny from an exec­u­tive at a hear­ing in Feb­ru­ary, had forced MPs to explore oth­er options for gath­er­ing infor­ma­tion about Face­book oper­a­tions.

    “We have very seri­ous ques­tions for Face­book. It mis­led us about Russ­ian involve­ment on the plat­form. And it has not answered our ques­tions about who knew what, when with regards to the Cam­bridge Ana­lyt­i­ca scan­dal,” he said.

    “We have fol­lowed this court case in Amer­i­ca and we believed these doc­u­ments con­tained answers to some of the ques­tions we have been seek­ing about the use of data, espe­cial­ly by exter­nal devel­op­ers.”

    The documents seized were obtained during a legal discovery process by Six4Three. It took action against the social media giant after investing $250,000 in an app. Six4Three alleges the cache shows Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data. That raised the interest of Collins and his committee.

    ...

    The files are sub­ject to an order of a Cal­i­forn­ian supe­ri­or court, so can­not be shared or made pub­lic, at risk of being found in con­tempt of court. Because the MPs’ sum­mons was issued in Lon­don where par­lia­ment has juris­dic­tion, it is under­stood the com­pa­ny founder, although a US cit­i­zen, had no choice but to com­ply. It is under­stood that Six4Three have informed both the court in Cal­i­for­nia and Facebook’s lawyers.

    Face­book said: “The mate­ri­als obtained by the DCMS com­mit­tee are sub­ject to a pro­tec­tive order of the San Mateo Supe­ri­or Court restrict­ing their dis­clo­sure. We have asked the DCMS com­mit­tee to refrain from review­ing them and to return them to coun­sel or to Face­book. We have no fur­ther com­ment.”

    It is unclear what, if any, legal moves Face­book can make to pre­vent pub­li­ca­tion. UK, Cana­da, Ire­land, Argenti­na, Brazil, Sin­ga­pore and Latvia will all have rep­re­sen­ta­tives join­ing what looks set to be a high-stakes encounter between Face­book and politi­cians.

    Richard Allan, vice-pres­i­dent for pol­i­cy who will tes­ti­fy at the spe­cial ses­sion after Zucker­berg declined to attend, said the com­pa­ny takes its respon­si­bil­i­ty around “a num­ber of impor­tant issues around pri­va­cy, safe­ty and democ­ra­cy ... very seri­ous­ly”.

    ———-

    “Par­lia­ment seizes cache of Face­book inter­nal papers” by Car­ole Cad­wal­ladr; The Guardian; 11/24/2018

    “Face­book, which has lost more than $100bn in val­ue since March when the Observ­er exposed how Cam­bridge Ana­lyt­i­ca had har­vest­ed data from 87m US users, faces anoth­er poten­tial PR cri­sis. It is believed the doc­u­ments will lay out how user data deci­sions were made in the years before the Cam­bridge Ana­lyt­i­ca breach, includ­ing what Zucker­berg and senior exec­u­tives knew.”

    Documents revealing what Mark Zuckerberg and other senior executives knew: that sounds like a big “uh oh!” for more people than just Mark Zuckerberg, but it’s still the biggest “uh oh” for Zuckerberg.

    And it’s so ominous specifically because of what Six4Three alleges: that Facebook was actively using the offer of ‘friends permissions’ data on unsuspecting Facebook users as a means of first enticing developers into making apps for Facebook and then threatening those developers with cutting off access to that data unless they gave Facebook a bigger cut of their revenues. So the fact that the UK parliament got its hands on these documents is a pretty big nightmare for Facebook, but it’s really just an extension of the existing nightmare that is the Six4Three lawsuit. It’s a reflection of how successful Six4Three’s legal threat has been thus far:

    ...
    The documents seized were obtained during a legal discovery process by Six4Three. It took action against the social media giant after investing $250,000 in an app. Six4Three alleges the cache shows Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data. That raised the interest of Collins and his committee.
    ...

    And now that the UK parliament has these documents, it’s unclear what, if anything, Facebook can do about it. Facebook may have run out of legal tricks:

    ...
    The files are sub­ject to an order of a Cal­i­forn­ian supe­ri­or court, so can­not be shared or made pub­lic, at risk of being found in con­tempt of court. Because the MPs’ sum­mons was issued in Lon­don where par­lia­ment has juris­dic­tion, it is under­stood the com­pa­ny founder, although a US cit­i­zen, had no choice but to com­ply. It is under­stood that Six4Three have informed both the court in Cal­i­for­nia and Facebook’s lawyers.

    Face­book said: “The mate­ri­als obtained by the DCMS com­mit­tee are sub­ject to a pro­tec­tive order of the San Mateo Supe­ri­or Court restrict­ing their dis­clo­sure. We have asked the DCMS com­mit­tee to refrain from review­ing them and to return them to coun­sel or to Face­book. We have no fur­ther com­ment.”

    It is unclear what, if any, legal moves Face­book can make to pre­vent pub­li­ca­tion. UK, Cana­da, Ire­land, Argenti­na, Brazil, Sin­ga­pore and Latvia will all have rep­re­sen­ta­tives join­ing what looks set to be a high-stakes encounter between Face­book and politi­cians.
    ...

    And that lack of further legal options, in turn, raises the question of what Facebook’s answers are going to be to future requests to have Mark Zuckerberg testify in front of the parliament. Is Facebook still going to refuse to have Zuckerberg testify if it turns out those seized documents are indeed filled with confirmations of all of Six4Three’s scandalous allegations, including confirmations of Zuckerberg’s personal involvement in a shakedown scheme exploiting user data?

    It raises a bigger question about the future of Mark Zuckerberg at the company. Uber had to ditch its founder Travis Kalanick after it became obvious that he had an amoral public image. Will ditching Zuckerberg be the price Facebook ends up having to pay to earn the public’s trust back? That would depend on how extensively Zuckerberg himself was involved in Facebook’s breaches of trust and how deep those breaches ultimately were. And that’s all part of what makes this story so ominous for Facebook and especially Zuckerberg: the public doesn’t yet know how deep the breaches of trust were and who at Facebook was involved with them. But the UK parliament might now know. And that just might include now knowing about a scheme, personally orchestrated by Mark Zuckerberg, involving the exploitation of user data to shake down app developers.

    Posted by Pterrafractyl | November 25, 2018, 8:42 pm
  25. Christopher Wylie, the former head of research at Cambridge Analytica who became one of the key insider whistle-blowers about how Cambridge Analytica operated and the extent of Facebook’s knowledge about it, gave an interview last month to Campaign Magazine about his thoughts on artificial intelligence and the risks big data and AI pose to human creativity and their potential to suppress good ideas. It’s an interesting interview for people interested in the potential impact of artificial intelligence on human civilization, with Wylie concluding that regulation of the use of artificial intelligence is going to be required.

    But there are a few points Wylie makes about the psychological warfare techniques used by Cambridge Analytica that seem like the kind of thing everyone should be interested in. Because, of course, Wylie was talking about psychological warfare techniques that Cambridge Analytica used on the public.

    Specif­i­cal­ly, Wylie recounts how, as direc­tor of research at Cam­bridge Ana­lyt­i­ca, his orig­i­nal role was to deter­mine how the com­pa­ny could use the infor­ma­tion war­fare tech­niques used by SCL Group — Cam­bridge Ana­lyt­i­ca’s par­ent com­pa­ny and a defense con­trac­tor pro­vid­ing psy op ser­vices for the British mil­i­tary. Wylie’s job was to adapt the psy­cho­log­i­cal war­fare strate­gies that SCL had been using on the bat­tle­field to the online space.

    It was the use of mil­i­tary psy op tech­niques on the gen­er­al pub­lic that Wylie says start­ed giv­ing him sec­ond thoughts about his work. As Wylie put it, “When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my...But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists.”

    Wylie also draws parallels between the psychological operations used on democratic audiences and the battlefield techniques used to build an insurgency. It starts with targeting people more prone to having erratic traits, paranoia or conspiratorial thinking, and getting them to “like” a group on social media. The information you’re feeding this target audience may or may not be real. The important thing is that it’s content they already agree with, so that “it feels good to see that information”. Keep in mind that one of the goals of the ‘psychographic profiling’ Cambridge Analytica engaged in was to identify traits like neuroticism.

    Wylie goes on to describe the next step in this insurgency-building technique: keep building up interest in the social media group that you’re directing this target audience towards until it hits around 1,000–2,000 people. Then set up a real-life event dedicated to the chosen disinformation topic in some local area and try to get as many of your target audience to show up as possible. Even if only 5 percent of them show up, that’s still 50–100 people converging on some local coffee shop or whatever. The people meet each other in real life and start talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”. This target audience starts believing that no one else is talking about this stuff because “they don’t want you to know what the truth is”. As Wylie puts it, “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”
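    Wylie’s turnout arithmetic is easy to check: at a 5 percent show-up rate, a group of 1,000–2,000 members puts 50–100 people in one physical room:

```python
# Wylie's insurgency-seeding arithmetic: a small conversion rate on a
# targeted online group still produces a visible real-world crowd.
SHOW_UP_RATE = 0.05  # "even if only 5 percent of them show up"

for members in (1_000, 2_000):
    print(f"{members} members -> {int(members * SHOW_UP_RATE)} at the meetup")
# -> 1000 members -> 50 at the meetup
# -> 2000 members -> 100 at the meetup
```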

    Wylie goes on to make an important distinction between the kind of targeted digital advertising used by the Barack Obama presidential campaigns and that used by the Trump campaign. Wylie worked with Obama’s former national director of targeting, so he really does have the kind of experience needed to make this comparison. And according to Wylie, the two campaigns took very different approaches. “When the Obama campaign put out information, it was clear it was a campaign ad, and the messaging, within the realm of politics, was honest and genuine. The Obama campaign did not use coercive, manipulative disinformation as the basis of its campaign, full stop. So, it’s a false equivalency and people who say that [it is equivalent] don’t really understand what they’re talking about,” as Wylie put it, which, of course, is a reminder that the Trump campaign did in fact use manipulative disinformation as the basis of its campaign. Those were the services Cambridge Analytica was providing.

    So based on Wylie’s recount­ing of his expe­ri­ences as the head of research for Cam­bridge Ana­lyt­i­ca, it appears that the insur­gency-build­ing tech­niques used by the mil­i­tary that rely on pump­ing out dis­in­for­ma­tion to audi­ences iden­ti­fied as neu­rot­ic and con­spir­a­to­r­i­al-mind­ed are great for polit­i­cal cam­paigns. It seems like the kind of thing every­one should know. Espe­cial­ly the neu­rot­ic and con­spir­a­to­r­i­al-mind­ed:

    Cam­paign Mag­a­zine

    Cam­bridge Ana­lyt­i­ca whistle­blow­er Christo­pher Wylie: It’s time to save cre­ativ­i­ty

    Wylie became a house­hold name when he blew the whis­tle on the Cam­bridge Ana­lyt­i­ca data scan­dal. We talk to him about the effect it has had on his life, his fears for the future of cre­ativ­i­ty and the need for reg­u­la­tion of social media.

    Kate Magee
    Novem­ber 05, 2018

    In the ear­ly hours of 17 March 2018, the 28-year-old Christo­pher Wylie tweet­ed: “Here we go….”

    Lat­er that day, The Observ­er and The New York Times pub­lished the sto­ry of Cam­bridge Analytica’s mis­use of Face­book data, which sent shock­waves around the world, caused mil­lions to #Delete­Face­book, and led the UK Infor­ma­tion Commissioner’s Office to fine the site the max­i­mum penal­ty for fail­ing to pro­tect users’ infor­ma­tion. Six weeks after the sto­ry broke, Cam­bridge Ana­lyt­i­ca closed.

    Wylie was the key source in the year-long inves­ti­ga­tion. In the months fol­low­ing pub­li­ca­tion, he has been var­i­ous­ly described as “the mil­len­ni­als’ first great whistle­blow­er”, a “fan­ta­sist char­la­tan” and, as he calls him­self, the “gay, Cana­di­an veg­an” who was respon­si­ble for cre­at­ing a “psy­cho­log­i­cal war­fare mind­fuck tool”.

    Now, as atten­tion has shift­ed to this month’s US midterm elec­tions as a test of mean­ing­ful change at social-media com­pa­nies, the bright-orange-haired Wylie is sit­ting under Campaign’s lens. He talks about his Face­book ban, the need for reg­u­la­tion and his love of the John Lewis ads: “The cre­ative is just bril­liant. Any time I see those ads I think John Lewis should run the nation!”

    He is artic­u­late, pas­sion­ate, style-con­scious and, per­haps sur­pris­ing­ly for some­one who is a data sci­en­tist, he is a huge advo­cate for human cre­ativ­i­ty. “I don’t believe in data-dri­ven any­thing, it’s the most stu­pid phrase. Data should always serve peo­ple, peo­ple should nev­er serve data,” he says.

    He believes that poor use of data is killing good ideas. And that, unless effec­tive reg­u­la­tion is enact­ed, society’s wor­ship of algo­rithms, unchecked data cap­ture and use, and the like­ly spread of AI to all parts of our lives is caus­ing us to sleep­walk into a bleak future.

    Not only are such cir­cum­stances a threat to adland – why do you need an ad to tell you about a prod­uct if an algo­rithm is choos­ing it for you? – it is a threat to human free will. “Cur­rent­ly, the only moral­i­ty of the algo­rithm is to opti­mise you as a con­sumer and, in many cas­es, you become the prod­uct. There are very few exam­ples in human his­to­ry of indus­tries where peo­ple them­selves become prod­ucts and those are scary indus­tries – slav­ery and the sex trade. And now, we have social media,” Wylie says.

    “The prob­lem with that, and what makes it inher­ent­ly dif­fer­ent to sell­ing, say, tooth­paste, is that you’re sell­ing parts of peo­ple or access to peo­ple. Peo­ple have an innate moral worth. If we don’t respect that, we can cre­ate indus­tries that do ter­ri­ble things to peo­ple. We are [head­ing] blind­ly and quick­ly into an envi­ron­ment where this men­tal­i­ty is going to be ampli­fied through AI every­where. We’re humans, we should be think­ing about peo­ple first.”

    His words car­ry weight, because he’s been on the dark side. He has seen what can hap­pen when data is used to spread mis­in­for­ma­tion, cre­ate insur­gen­cies and prey on the worst of people’s char­ac­ters.

    The polit­i­cal bat­tle­field

    A quick refresh­er on the scan­dal, in Wylie’s words: Cam­bridge Ana­lyt­i­ca was a com­pa­ny spun out of SCL Group, a British mil­i­tary con­trac­tor that worked in infor­ma­tion oper­a­tions for armed forces around the world. It was con­duct­ing research on how to scale and digi­tise infor­ma­tion war­fare – the use of infor­ma­tion to con­fuse or degrade the effi­ca­cy of an ene­my.

    Wylie was a 24-year-old fash­ion-trend-fore­cast­ing stu­dent who also worked with the Lib­er­al Democ­rats on its tar­get­ing. A con­tact intro­duced him to SCL.

    As direc­tor of research, Wylie’s orig­i­nal role was to map out how the com­pa­ny would take tra­di­tion­al infor­ma­tion oper­a­tions tac­tics into the online space – in par­tic­u­lar, by pro­fil­ing peo­ple who would be sus­cep­ti­ble to cer­tain mes­sag­ing.

    This mor­phed into the polit­i­cal are­na. After Wylie left, the com­pa­ny worked on Don­ald Trump’s US pres­i­den­tial cam­paign and – pos­si­bly – the UK’s Euro­pean Union ref­er­en­dum. In Feb­ru­ary 2016, Cam­bridge Analytica’s for­mer chief exec­u­tive, Alexan­der Nix, wrote in Cam­paign that his com­pa­ny had “already helped super­charge Leave.EU’s social-media cam­paign”. Nix has stren­u­ous­ly denied this since, includ­ing to MPs.

    It was this shift from the bat­tle­field to pol­i­tics that made Wylie uncom­fort­able. “When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my,” he says.

    “But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists.”

    One of the rea­sons these tech­niques are so insid­i­ous is that being a tar­get of a dis­in­for­ma­tion cam­paign is “usu­al­ly a plea­sur­able expe­ri­ence”, because you are being fed con­tent with which you are like­ly to agree. “You are being guid­ed through some­thing that you want to be true,” Wylie says.

    To build an insur­gency, he explains, you first tar­get peo­ple who are more prone to hav­ing errat­ic traits, para­noia or con­spir­a­to­r­i­al think­ing, and get them to “like” a group on social media. They start engag­ing with the con­tent, which may or may not be true; either way “it feels good to see that infor­ma­tion”.

    When the group reach­es 1,000 or 2,000 mem­bers, an event is set up in the local area. Even if only 5% show up, “that’s 50 to 100 peo­ple flood­ing a local cof­fee shop”, Wylie says. This, he adds, val­i­dates their opin­ion because oth­er peo­ple there are also talk­ing about “all these things that you’ve been see­ing online in the depths of your den and get­ting angry about”.

    Peo­ple then start to believe the rea­son it’s not shown on main­stream news chan­nels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What start­ed out as a fan­ta­sy online gets port­ed into the tem­po­ral world and becomes real to you because you see all these peo­ple around you.”

    Some con­ser­v­a­tives have argued that the Trump cam­paign has been unfair­ly crit­i­cised for its use of data, while for­mer Pres­i­dent Barack Oba­ma and his dig­i­tal agency Blue State Dig­i­tal were laud­ed for their use of social-media data in his suc­cess­ful 2008 elec­tion cam­paign.

    But Wylie, who has worked with Obama’s for­mer nation­al direc­tor of tar­get­ing, claims the two cam­paigns took dif­fer­ent approach­es. For exam­ple, the Oba­ma cam­paign used data to iden­ti­fy peo­ple who were eli­gi­ble to vote but had not reg­is­tered.

    “When the Oba­ma cam­paign put out infor­ma­tion, it was clear it was a cam­paign ad, and the mes­sag­ing, with­in the realm of pol­i­tics, was hon­est and gen­uine. The Oba­ma cam­paign did not use coer­cive, manip­u­la­tive dis­in­for­ma­tion as the basis of its cam­paign, full stop. So, it’s a false equiv­a­len­cy and peo­ple who say that [it is equiv­a­lent] don’t real­ly under­stand what they’re talk­ing about.”

    There’s a dif­fer­ence between per­sua­sion, and manip­u­la­tion and coer­cion, he adds – and between an opin­ion and prov­able dis­in­for­ma­tion. “Data is moral­ly neu­tral, in the same way that I can take a knife and hand it to a Miche­lin-starred chef to make the most amaz­ing meal of your life, or I can mur­der some­one with it. The tool is moral­ly neu­tral, it’s the appli­ca­tion that mat­ters,” he says.

    Psy­cho­graph­ic poten­tial

    One such appli­ca­tion was Cam­bridge Analytica’s use of psy­cho­graph­ic pro­fil­ing, a form of seg­men­ta­tion that will be famil­iar to mar­keters, although not in com­mon use.

    The com­pa­ny used the OCEAN mod­el, which judges peo­ple on scales of the Big Five per­son­al­i­ty traits: open­ness to expe­ri­ences, con­sci­en­tious­ness, extra­ver­sion, agree­able­ness and neu­roti­cism.

    Wylie believes the method could be useful in the commercial space. For example, a fashion brand that creates bold, colourful, patterned clothes might want to segment wealthy women by extraversion because they will be more likely to buy bold items, he says.

    Scep­tics say Cam­bridge Analytica’s approach may not be the dark mag­ic that Wylie claims. Indeed, when speak­ing to Cam­paign in June 2017, Nix unchar­ac­ter­is­ti­cal­ly played down the method, claim­ing the com­pa­ny used “pret­ty bland data in a pret­ty enter­pris­ing way”.

    But Wylie argues that peo­ple under­es­ti­mate what algo­rithms allow you to do in pro­fil­ing. “I can take pieces of infor­ma­tion about you that seem innocu­ous, but what I’m able to do with an algo­rithm is find pat­terns that cor­re­late to under­ly­ing psy­cho­log­i­cal pro­files,” he explains.

    “I can ask whether you lis­ten to Justin Bieber, and you won’t feel like I’m invad­ing your pri­va­cy. You aren’t nec­es­sar­i­ly aware that when you tell me what music you lis­ten to or what TV shows you watch, you are telling me some of your deep­est and most per­son­al attrib­ut­es.”

    This is where mat­ters stray into the ques­tion of ethics. Wylie believes that as long as the com­mu­ni­ca­tion you are send­ing out is clear, not coer­cive or manip­u­la­tive, it’s fine, but it all depends on con­text. “If you are a beau­ty com­pa­ny and you use facets of neu­roti­cism – which Cam­bridge Ana­lyt­i­ca did – and you find a seg­ment of young women or men who are more prone to body dys­mor­phia, and one of the proac­tive actions they take is to buy more skin cream, you are exploit­ing some­thing which is unhealthy for that per­son and doing dam­age,” he says. “The ethics of using psy­cho­me­t­ric data real­ly depend on whether it is pro­por­tion­al to the ben­e­fit and util­i­ty that the cus­tomer is get­ting.”

    ...

    A bleak future?

    Wylie is con­cerned that tech devel­op­ments – such as the rise of AI – could fun­da­men­tal­ly dam­age soci­ety. Google Home has recent­ly launched an option to give only good news to its users, for instance.

    “What they are actu­al­ly start­ing to do is warp that person’s per­spec­tive from the very begin­ning of their day,” Wylie points out. Once you have AI in every part of your life, it would be every­where mak­ing deci­sions about you and for you.

    Wylie believes we could get to the point where AI replaces cre­atives in 20 years. “If your def­i­n­i­tion of cre­ativ­i­ty is the gen­er­a­tion of nov­el out­puts, then you can have ‘cre­ative algo­rithms’,” he says. “This is why as a com­mu­ni­ty we need to come up with prin­ci­ples of how to engage with tech­nol­o­gy. Just because we can do some­thing, doesn’t mean we should.”

    He adds: “If we replace every­one with robots, what’s the point of human­i­ty, then? Shall we all just sit in those float­ing chairs they have in the film WALL‑E and be fed through a tube and enter­tained through AI-gen­er­at­ed TV shows which are hyper-per­son­alised to my pro­file? What a shit future that would be, right? We shouldn’t be endeav­our­ing to replace human cre­ativ­i­ty with arti­fi­cial cre­ativ­i­ty.”

    Reg­u­la­tion – a glim­mer of hope

    Aside from repri­ori­tis­ing cre­ativ­i­ty over data, Wylie is adamant that reg­u­la­tion is the answer to end immoral prac­tices on the inter­net.

    “As a soci­ety we reg­u­late things we come into con­tact with that could cause us harm, such as air trav­el, doc­tors or elec­tric­i­ty. Cur­rent­ly, soft­ware tech­nol­o­gy, social media and online adver­tis­ing is the Wild West,” he says.

    “You eat food four or five times a day, you check your phone on aver­age 150 times a day. Peo­ple sleep with their phones more than they sleep with peo­ple,” he adds. “The fact that peo­ple are engag­ing so much more now with adver­tis­ing and online con­tent war­rants a dis­cus­sion on whether there should be statu­to­ry rules that are enforce­able as to the con­duct and behav­iour both of social-media and tech plat­forms and the adver­tis­ers that use them.”

    Wylie argues that reg­u­la­tion is not a bad thing for com­mer­cial via­bil­i­ty. After all, seat belts and air bags haven’t stopped peo­ple buy­ing cars. It will, he says, also help cre­ate con­sumer trust and con­fi­dence in the long run and pre­vent a back­lash.

    It will also cre­ate a lev­el play­ing field, where those that behave eth­i­cal­ly are not at a dis­ad­van­tage if com­peti­tors do not adhere to the same prin­ci­ples.

    He adds that: “A lot of tech com­pa­nies have their backs up. They’re like a dog in the cor­ner. They’re going through these exis­ten­tial con­ver­sa­tions like ‘OMG what’s hap­pen­ing?’” He goes on to argue that the sec­tor relies on every­one behav­ing well to main­tain itself. “If I were them, I’d be talk­ing about how we can help each oth­er do bet­ter.”

    The bar­ri­ers to reg­u­la­tion include the inter­na­tion­al nature of tech com­pa­nies, the con­cern that gov­ern­ments are too far behind the tech com­pa­nies and that con­sumers don’t real­ly care about pri­va­cy.

    Wylie rebuts each of these. There are com­mon rules for oth­er inter­na­tion­al indus­tries – such as reg­u­lat­ing air­port codes, aero­planes tak­ing off and land­ing in dif­fer­ent coun­tries and send­ing post around the world. He describes the sug­ges­tion that MPs don’t under­stand the indus­try well enough to act mean­ing­ful­ly as “a bull­shit argu­ment”.

    He con­tin­ues: “Tell me what con­gress­man or MP under­stands how aero­planes fly or can­cer med­i­cines work, and what is safe and not safe? Or what is the appro­pri­ate lev­el of pes­ti­cides to use on farms? They don’t. These are all high­ly tech­ni­cal, high­ly com­pli­cat­ed, ever-mov­ing indus­tries, and before they were reg­u­lat­ed they were using the same argu­ments.”

    Clash­es with Face­book

    Wylie is opposed to self-reg­u­la­tion, because indus­tries won’t become con­sumer cham­pi­ons – they are, he says, too con­flict­ed.

    “Face­book has known about what Cam­bridge Ana­lyt­i­ca was up to from the very begin­ning of those projects,” Wylie claims. “They were noti­fied, they autho­rised the appli­ca­tions, they were giv­en the terms and con­di­tions of the app that said explic­it­ly what it was doing. They hired peo­ple who worked on build­ing the app. I had legal cor­re­spon­dence with their lawyers where they acknowl­edged it hap­pened as far back as 2016.”

    He wants to cre­ate a set of endur­ing prin­ci­ples that are hand­ed over to a tech­ni­cal­ly com­pe­tent reg­u­la­tor to enforce. “Cur­rent­ly, the indus­try is not respond­ing to some pret­ty fun­da­men­tal things that have hap­pened on their watch. So I think it is the right place for gov­ern­ment to step in,” he adds.

    Facebook in particular, he argues, is “the most obstinate and belligerent in recognising the harm that has been done and actually doing something about it”.

    ...

    Despite this, Wylie insists he is not anti-social media. “I don’t believe that peo­ple should have to delete Face­book. I’m not a sup­port­er of #Delete­Face­book because it’s like say­ing if you don’t want to get elec­tro­cut­ed, get rid of elec­tric­i­ty. It’s stu­pid. No, demand bet­ter stan­dards for your elec­tric­i­ty so you don’t get elec­tro­cut­ed,” he says.

    “Social media is now an essen­tial part of most people’s lives. You can’t apply for most jobs now with­out LinkedIn. You can’t com­mu­ni­cate prac­ti­cal­ly with friends if you don’t have a form of social media. What job can you get if you say to an employ­er: ‘I’m real­ly great but because I want to enforce my pri­va­cy stan­dards and main­tain my men­tal health, I refuse to use any­thing that touch­es Google’s ser­vices’?

    “So the solu­tion is not to delete these plat­forms, or attack them and make them the ene­my, it’s to make sure they are doing their job to make a safe envi­ron­ment for peo­ple.”

    By com­ing for­ward, Wylie has put his own safe­ty at risk. “I’ve had quite a few threats, there have been some inci­dents, but I’ve had a lot of sup­port from my lawyers and the police,” he says.

    “You can’t live your life run­ning away from some­thing. I actu­al­ly have this ten­den­cy to run at things. You can’t dwell on stuff, oth­er­wise you’ll just paral­yse your­self.”

    The sto­ry shows no signs of slow­ing down. Ofcom, the ICO and the DCMS Com­mit­tee as well as the US author­i­ties are run­ning ongo­ing inves­ti­ga­tions. Mean­while, Wylie is fly­ing across the world to give speech­es and inter­views.

    When asked what key mes­sage he would like the indus­try to remem­ber, he offers up this: “Don’t be dicks. Under­stand that we should all be serv­ing and help­ing peo­ple in any­thing that we do. You don’t need to be coer­cive, manip­u­la­tive or creepy. If you can’t win over vot­ers or con­sumers through being hon­est, then you shouldn’t work in adver­tis­ing or mar­ket­ing because you’re clear­ly not good enough.”

    ———-

    “Cam­bridge Ana­lyt­i­ca whistle­blow­er Christo­pher Wylie: It’s time to save cre­ativ­i­ty” by Kate Magee; Cam­paign; 11/05/2018

    “He believes that poor use of data is killing good ideas. And that, unless effec­tive reg­u­la­tion is enact­ed, society’s wor­ship of algo­rithms, unchecked data cap­ture and use, and the like­ly spread of AI to all parts of our lives is caus­ing us to sleep­walk into a bleak future.”

    The future is indeed bleak. That’s the view from the inside of the Psy Op AI Big Data complex. A Psy Op AI Big Data complex in the process of taking the psychological warfare lessons of the battlefield and porting them over to the digital domain for commercial and political use, as Christopher Wylie would know. Porting the battlefield lessons used by SCL Group into the digital domain for military clients was his first job as Cambridge Analytica’s research director:

    ...
    His words car­ry weight, because he’s been on the dark side. He has seen what can hap­pen when data is used to spread mis­in­for­ma­tion, cre­ate insur­gen­cies and prey on the worst of people’s char­ac­ters.

    The polit­i­cal bat­tle­field

    A quick refresh­er on the scan­dal, in Wylie’s words: Cam­bridge Ana­lyt­i­ca was a com­pa­ny spun out of SCL Group, a British mil­i­tary con­trac­tor that worked in infor­ma­tion oper­a­tions for armed forces around the world. It was con­duct­ing research on how to scale and digi­tise infor­ma­tion war­fare – the use of infor­ma­tion to con­fuse or degrade the effi­ca­cy of an ene­my.

    Wylie was a 24-year-old fash­ion-trend-fore­cast­ing stu­dent who also worked with the Lib­er­al Democ­rats on its tar­get­ing. A con­tact intro­duced him to SCL.

    As direc­tor of research, Wylie’s orig­i­nal role was to map out how the com­pa­ny would take tra­di­tion­al infor­ma­tion oper­a­tions tac­tics into the online space – in par­tic­u­lar, by pro­fil­ing peo­ple who would be sus­cep­ti­ble to cer­tain mes­sag­ing.
    ...

    But then the job shift­ed to a new focus that start­ed giv­ing Wylie sec­ond thoughts: offer­ing those new­ly devel­oped dig­i­tal psy op tech­niques to polit­i­cal clients. As Wylie saw it, this shift was like “treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists”:

    ...
    This mor­phed into the polit­i­cal are­na. After Wylie left, the com­pa­ny worked on Don­ald Trump’s US pres­i­den­tial cam­paign and – pos­si­bly – the UK’s Euro­pean Union ref­er­en­dum. In Feb­ru­ary 2016, Cam­bridge Analytica’s for­mer chief exec­u­tive, Alexan­der Nix, wrote in Cam­paign that his com­pa­ny had “already helped super­charge Leave.EU’s social-media cam­paign”. Nix has stren­u­ous­ly denied this since, includ­ing to MPs.

    It was this shift from the bat­tle­field to pol­i­tics that made Wylie uncom­fort­able. “When you are work­ing in infor­ma­tion oper­a­tions projects, where your tar­get is a com­bat­ant, the auton­o­my or agency of your tar­gets is not your pri­ma­ry con­sid­er­a­tion. It is fair game to deny and manip­u­late infor­ma­tion, coerce and exploit any men­tal vul­ner­a­bil­i­ties a per­son has, and to bring out the very worst char­ac­ter­is­tics in that per­son because they are an ene­my,” he says.

    “But if you port that over to a demo­c­ra­t­ic sys­tem, if you run cam­paigns designed to under­mine people’s abil­i­ty to make free choic­es and to under­stand what is real and not real, you are under­min­ing democ­ra­cy and treat­ing vot­ers in the same way as you are treat­ing ter­ror­ists.”
    ...

    So as we can see, Wylie was in an unusually good position to recognize how Cambridge Analytica was utilizing psychological warfare techniques on the US and UK populaces: his job was literally to develop online psy-op techniques, first for military clients and then for political clients.

    And as Wylie describes, Cambridge Analytica was adapting the disinformation techniques used to build insurgencies to the task of changing political attitudes. The first step in that political insurgency-building campaign was identifying a target audience selected for psychological profiles featuring erratic traits, paranoia or conspiratorial thinking. That target audience is then propagandized on social media until the group grows large enough to arrange real-life meetups.

    And if that disinformation propaganda campaign succeeds in getting enough people to ‘like’ a group’s social media page, the next step is to get those people to meet up in real life. And for those who do meet up, their belief in the disinformation campaign is deepened, along with a sense that the disinformation is truth being intentionally censored by ‘them’. In other words, it’s a system for radicalizing a target audience with disinformation and turning them into genuine political insurgents:

    ...
    One of the rea­sons these tech­niques are so insid­i­ous is that being a tar­get of a dis­in­for­ma­tion cam­paign is “usu­al­ly a plea­sur­able expe­ri­ence”, because you are being fed con­tent with which you are like­ly to agree. “You are being guid­ed through some­thing that you want to be true,” Wylie says.

    To build an insur­gency, he explains, you first tar­get peo­ple who are more prone to hav­ing errat­ic traits, para­noia or con­spir­a­to­r­i­al think­ing, and get them to “like” a group on social media. They start engag­ing with the con­tent, which may or may not be true; either way “it feels good to see that infor­ma­tion”.

    When the group reach­es 1,000 or 2,000 mem­bers, an event is set up in the local area. Even if only 5% show up, “that’s 50 to 100 peo­ple flood­ing a local cof­fee shop”, Wylie says. This, he adds, val­i­dates their opin­ion because oth­er peo­ple there are also talk­ing about “all these things that you’ve been see­ing online in the depths of your den and get­ting angry about”.

    Peo­ple then start to believe the rea­son it’s not shown on main­stream news chan­nels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What start­ed out as a fan­ta­sy online gets port­ed into the tem­po­ral world and becomes real to you because you see all these peo­ple around you.”
    ...

    “What start­ed out as a fan­ta­sy online gets port­ed into the tem­po­ral world and becomes real to you because you see all these peo­ple around you.”

    As we can see, Christopher Wylie had some very understandable reasons to become uncomfortable with his job. And note that he doesn’t view Obama’s data-heavy digital campaign the same way, because Obama was using that information for things like identifying unregistered voters and encouraging them to register, not psychologically profiling people in order to micro-target an array of disinformation campaigns:

    ...
    Some conservatives have argued that the Trump campaign has been unfairly criticised for its use of data, while former President Barack Obama and his digital agency Blue State Digital were lauded for their use of social-media data in his successful 2008 election campaign.

    But Wylie, who has worked with Obama’s for­mer nation­al direc­tor of tar­get­ing, claims the two cam­paigns took dif­fer­ent approach­es. For exam­ple, the Oba­ma cam­paign used data to iden­ti­fy peo­ple who were eli­gi­ble to vote but had not reg­is­tered.

    “When the Oba­ma cam­paign put out infor­ma­tion, it was clear it was a cam­paign ad, and the mes­sag­ing, with­in the realm of pol­i­tics, was hon­est and gen­uine. The Oba­ma cam­paign did not use coer­cive, manip­u­la­tive dis­in­for­ma­tion as the basis of its cam­paign, full stop. So, it’s a false equiv­a­len­cy and peo­ple who say that [it is equiv­a­lent] don’t real­ly under­stand what they’re talk­ing about.”

    There’s a dif­fer­ence between per­sua­sion, and manip­u­la­tion and coer­cion, he adds – and between an opin­ion and prov­able dis­in­for­ma­tion. “Data is moral­ly neu­tral, in the same way that I can take a knife and hand it to a Miche­lin-starred chef to make the most amaz­ing meal of your life, or I can mur­der some­one with it. The tool is moral­ly neu­tral, it’s the appli­ca­tion that mat­ters,” he says.
    ...

    And note Wylie’s observation about how personality traits can be revealed by the kinds of questions people are asked all the time by the marketing industry. Questions like whether or not you’re a big fan of Justin Bieber. Answer enough seemingly innocuous questions and a psychological profile of you can be built:

    ...
    Psy­cho­graph­ic poten­tial

    One such appli­ca­tion was Cam­bridge Analytica’s use of psy­cho­graph­ic pro­fil­ing, a form of seg­men­ta­tion that will be famil­iar to mar­keters, although not in com­mon use.

    The com­pa­ny used the OCEAN mod­el, which judges peo­ple on scales of the Big Five per­son­al­i­ty traits: open­ness to expe­ri­ences, con­sci­en­tious­ness, extra­ver­sion, agree­able­ness and neu­roti­cism.

    Wylie believes the method could be useful in the commercial space. For example, a fashion brand that creates bold, colourful, patterned clothes might want to segment wealthy women by extraversion because they will be more likely to buy bold items, he says.

    Scep­tics say Cam­bridge Analytica’s approach may not be the dark mag­ic that Wylie claims. Indeed, when speak­ing to Cam­paign in June 2017, Nix unchar­ac­ter­is­ti­cal­ly played down the method, claim­ing the com­pa­ny used “pret­ty bland data in a pret­ty enter­pris­ing way”.

    But Wylie argues that peo­ple under­es­ti­mate what algo­rithms allow you to do in pro­fil­ing. “I can take pieces of infor­ma­tion about you that seem innocu­ous, but what I’m able to do with an algo­rithm is find pat­terns that cor­re­late to under­ly­ing psy­cho­log­i­cal pro­files,” he explains.

    “I can ask whether you lis­ten to Justin Bieber, and you won’t feel like I’m invad­ing your pri­va­cy. You aren’t nec­es­sar­i­ly aware that when you tell me what music you lis­ten to or what TV shows you watch, you are telling me some of your deep­est and most per­son­al attrib­ut­es.”

    This is where mat­ters stray into the ques­tion of ethics. Wylie believes that as long as the com­mu­ni­ca­tion you are send­ing out is clear, not coer­cive or manip­u­la­tive, it’s fine, but it all depends on con­text. “If you are a beau­ty com­pa­ny and you use facets of neu­roti­cism – which Cam­bridge Ana­lyt­i­ca did – and you find a seg­ment of young women or men who are more prone to body dys­mor­phia, and one of the proac­tive actions they take is to buy more skin cream, you are exploit­ing some­thing which is unhealthy for that per­son and doing dam­age,” he says. “The ethics of using psy­cho­me­t­ric data real­ly depend on whether it is pro­por­tion­al to the ben­e­fit and util­i­ty that the cus­tomer is get­ting.”
    ...

    And that potential ability to infer psychological traits from answers to consumer-preference questions like whether or not you’re a Belieber is important to keep in mind in the context of these revelations about Cambridge Analytica. Recall how Cambridge Analytica’s ability to psychologically profile Facebook users centered on paying ~270,000 Facebook users to download a psychological testing app developed by Aleksandr Kogan and then using the “friends permissions” feature Facebook offered app developers to pull detailed profiles on the ~87 million Facebook ‘friends’ of those ~270,000 people. Cambridge Analytica then used that group of ~270,000 psychological profiles as a training data set to develop algorithms that could infer psychological traits from the rest of the profile data obtained through the ‘friends permissions’ loophole. That included data like the pages users ‘liked’, which was apparently available to app developers for all ~87 million people. So inferring psychological traits from seemingly unrelated pieces of data about someone is something Wylie has direct experience with.
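    To make the mechanics concrete, here is a minimal, hypothetical sketch of that ‘train on the app users, infer on the friends’ pipeline. All of the data below is synthetic, and the model choice (a ridge regression over a sparse user-by-page ‘likes’ matrix) is an assumption for illustration, in the spirit of the published academic work on likes-based profiling; it is not Cambridge Analytica’s actual code:

        # Hypothetical sketch: inferring a Big Five trait score from page "likes".
        # Synthetic data stands in for the ~270,000 surveyed app users (who have
        # questionnaire-derived OCEAN scores) and the friends (who do not).
        import numpy as np
        from scipy.sparse import random as sparse_random
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_users, n_pages = 5000, 2000   # scaled-down stand-ins

        # Sparse binary matrix: X[i, j] = 1 if user i "liked" page j.
        X = sparse_random(n_users, n_pages, density=0.01, format="csr", random_state=0)
        X.data[:] = 1.0

        # Pretend a handful of pages genuinely correlate with, say, neuroticism.
        true_weights = np.zeros(n_pages)
        true_weights[:50] = rng.normal(0.0, 1.0, 50)
        y = X @ true_weights + rng.normal(0.0, 0.5, n_users)  # survey-derived scores

        # Fit on the "app users" who actually took the questionnaire...
        X_train, X_friends, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)
        model = Ridge(alpha=1.0).fit(X_train, y_train)

        # ...then score the "friends", who never answered a single survey question.
        inferred = model.predict(X_friends)
        print("inferred trait scores for five 'friends':", inferred[:5])

    The point of the sketch is the asymmetry Wylie keeps returning to: only the small surveyed group ever answers a personality questionnaire, but once the model is fit, a trait score can be attached to anyone whose ‘likes’ are visible.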

    So while it might seem like replicating what Cambridge Analytica did would be difficult because Cambridge Analytica had psychological profiling data that most other organizations wouldn’t have, don’t forget that inferring psychological profiles from ordinary consumer data is already happening. The age of personalized propaganda is here.

    Finally, note how Wylie reiterates that Facebook was very aware of what Cambridge Analytica was up to, something we’ve seen evidence of over and over as this scandal has unfolded:

    ...
    Clash­es with Face­book

    Wylie is opposed to self-reg­u­la­tion, because indus­tries won’t become con­sumer cham­pi­ons – they are, he says, too con­flict­ed.

    “Face­book has known about what Cam­bridge Ana­lyt­i­ca was up to from the very begin­ning of those projects,” Wylie claims. “They were noti­fied, they autho­rised the appli­ca­tions, they were giv­en the terms and con­di­tions of the app that said explic­it­ly what it was doing. They hired peo­ple who worked on build­ing the app. I had legal cor­re­spon­dence with their lawyers where they acknowl­edged it hap­pened as far back as 2016.”

    ...

    So if you’re wondering how many similar mass disinformation influence operations using insurgency-building techniques are operating on Facebook, note that Facebook had no problem with Cambridge Analytica running its mass disinformation political insurgency-building techniques on its platform. That points towards plenty of other such campaigns going on right now: disinformation campaigns targeting the most neurotic and paranoid people who can be identified, designed to get people meeting up in real life and mutually reinforcing a belief in the disinformation.

    And given that the modern Republican Party has been descending into an Alex Jonesian, far right John Birch-style disinfotainment paradigm for years now, there’s no reason we shouldn’t expect the application of these kinds of weaponized psychological warfare techniques to grow in coming years. Especially since, again, Facebook seems fine with this. Alex Jones really is the heart and soul of the modern American conservative movement at the grassroots level these days, and that makes disinformation campaigns targeting American conservative audiences a basic feature of day-to-day politics in America. There’s plenty of disinformation targeting all American audiences as it is, but the media targeting right-wing Americans has been exceptionally conspiratorial in recent years, and that shows no sign of abatement in the age of Trump.

    The digital disinformation techniques for making an idea go viral and organizing people in the real world are only going to get more and more refined. Especially as the ability to infer psychological profiles on all of us becomes more refined and we all become much easier to predict.

    It’s all one big reason why it’s going to be important for Americans in particular to realize that one of the consequences of unlimited political spending in post-Citizens United America is that Americans are living in a sea of right-wing propaganda. Everyone is a target of that propaganda machine, but conservative audiences are especially targeted by its disinformation campaigns. The story of Cambridge Analytica is just one story about how that propaganda machine operates using targeted disinformation campaigns. And the story told by Christopher Wylie is a story of that machine employing military-grade psychological warfare techniques centered on feeding disinformation to targeted audiences. It’s an absolutely vital story for Americans today, especially the conservative Americans who are the key targets of these psychological warfare campaigns.

    The kind of disinformation campaign described by Wylie comes in the form of propaganda telling you that anyone who doesn’t support the propaganda is actually part of a conspiracy against you, and that the propaganda is the truth being censored by ‘Them’. And that’s a key characteristic of contemporary right-wing media content: selling audiences on information that’s supposedly being censored by ‘Them’, where ‘They’ are a supposedly all-powerful liberal establishment of academia and Hollywood. It’s a narrative that’s absolutely perfect for exactly the kind of disinformation campaign Wylie described. The Fox News/Alex Jonesification of the American right wing has an eerie resemblance to a giant disinformation campaign because that’s what it is. And Trump is this right-wing disinformation campaign’s God King.

    That’s why public awareness of sophisticated disinformation campaign techniques is so important these days. The identification of disinformation campaigns is a necessary basic civic skill for democracy to function. We are being unwittingly propagandized. It’s the sad truth. But while ‘They’ do indeed effectively censor important stories, not every story on Facebook claiming to be too truthy to handle is actually true. That basic truth has sadly become exceptionally important in contemporary America, especially for right-wing conspiracy-minded Americans, who are being targeted by insurgency-building psychological warfare disinformation campaigns right now by a lot more actors than just Cambridge Analytica.

    Posted by Pterrafractyl | December 9, 2018, 11:10 pm
  26. There was more news about the inner work­ings of Face­book recent­ly as a result of the law­suit by Face­book app devel­op­er Six4Three. It was, of course, bad news. Bad for Face­book and bad for Face­book’s users.

    First recall how Six4Three charged Face­book with threat­en­ing to cut off access to user data for app devel­op­ers as part of a scheme to crush its com­pe­ti­tion and extract mon­ey from the app devel­op­ers. Also recall how the inter­nal Face­book doc­u­ments that Six4Three obtained as part of its law­suit were recent­ly obtained by British MPs.

    Well, those inter­nal doc­u­ments were just released by the British par­lia­ment. And, sur­prise!, it turns out they’re filled with inter­nal Face­book dis­cus­sions that basi­cal­ly con­firm peo­ple’s worst assump­tions about how the com­pa­ny viewed its trove of per­son­al data. The rev­e­la­tions include:

    * Back in 2012, Facebook debated changing its rules for app developers in order to extract more money from them. The idea was that developers would have to pay Facebook for the user data they were already extracting (as we’ve learned from the Cambridge Analytica scandal). But developers wouldn’t directly pay for access to that data as long as they were making money for Facebook in other ways, specifically through the revenue-sharing model Facebook had with developers and through developers buying Facebook ads. The revenues shared with Facebook and the ads purchased on Facebook would count towards this fee, with the expectation that the vast majority of app developers wouldn’t end up having to pay anything to maintain access to the user data. In this way, Facebook could effectively sell user data without directly selling it.

    * Face­book real­ly was treat­ing app devel­op­ers very dif­fer­ent­ly depend­ing on the per­ceived com­pet­i­tive threat they posed. For exam­ple, com­pa­nies like Airbnb, Lyft and Net­flix were get­ting whitelist­ed for spe­cial access to user data at the same time Vine, an app devel­oped by Twit­ter and seen as a poten­tial com­peti­tor to Face­book’s func­tions, got its access to user data cut off.

    * Remember the stories about Facebook quietly grabbing call and text logs off of smartphones running the Android operating system? Well, it turns out Facebook was debating whether or not to bypass Android’s permissions system so this data could be grabbed while giving users as little notice as possible.

    And that’s just some of what was revealed in the doc­u­ments released by the UK par­lia­ment:

    Engad­get

    Facebook’s inter­nal doc­u­ments show its ruth­less­ness
    Zucker­berg: “That [app] may be good for the world but it’s not good for us unless peo­ple also share back to Face­book...”

    Edgar Alvarez, @abcdedgar
    12.05.18 in Inter­net

    As expect­ed, the UK Par­lia­ment has released a set of inter­nal Face­book emails that were seized as part of its inves­ti­ga­tion into the com­pa­ny’s data-pri­va­cy prac­tices. The 250-page doc­u­ment, which includes con­ver­sa­tions between Face­book CEO Mark Zucker­berg and oth­er high-lev­el exec­u­tives, is a win­dow into the social media giant’s ruth­less think­ing from 2012 to 2015 — a peri­od of time when it was grow­ing (and col­lect­ing user data) at an unstop­pable rate. While Face­book was white-list­ing com­pa­nies like Airbnb, Lyft and Net­flix to get spe­cial access to peo­ple’s infor­ma­tion in 2013, it went out of its way to block com­peti­tors such as Vine from using its tools.

    When Twit­ter launched Vine, the app had access to Face­book’s Friends API, which let Vine users see which of their Face­book friends were using the then-new app. But after approval from Zucker­berg him­self, that access was cut off. “Unless any­one rais­es objec­tions, we will shut down their [Vine’s] Friends API access today. We’ve pre­pared reac­tive PR, and I will let Jana know our deci­sion,” Justin Osof­sky, Face­book’s vice pres­i­dent of glob­al oper­a­tions and media part­ner­ships, said in an email at the time. Zucker­berg replied, “Yup, go for it.”

    The UK’s Digital, Culture, Media and Sport Committee is using this as an example of Facebook’s anticompetitive nature, which is further highlighted in more of the internal files. In November 2012, in an email about reciprocity and data value, Zuckerberg talked about how Facebook’s goal was to let people “share everything they want.” Developers on the site, he said, could build apps to let users do exactly that, but Facebook needed to be wary of them becoming a competitor in the social media space.

    “Some­times the best way to enable peo­ple to share some­thing is to have a devel­op­er build a spe­cial pur­pose app or net­work for that type of con­tent and to make that app social by hav­ing Face­book plug into it,” Zucker­berg said. “How­ev­er, that may be good for the world but it’s not good for us unless peo­ple also share back to Face­book and that con­tent increas­es the val­ue of our net­work. So ulti­mate­ly, I think the pur­pose of plat­form — even the read side — is to increase shar­ing back into Face­book.”

    Although Zucker­berg may have changed his views on the world since then, it’s clear that at the time all he cared about was what was good for Face­book and not any­one else. But that’s some­thing he’s prob­a­bly think­ing more about today, as his com­pa­ny con­tin­ues to face scruti­ny over its mis­han­dling of user data — espe­cial­ly after the Cam­bridge Ana­lyt­i­ca data-pri­va­cy scan­dal from ear­li­er this year. In 2012, how­ev­er, Zucker­berg dis­missed the risks of shar­ing user data, since it seems he could­n’t imag­ine that the inci­dent caused by Cam­bridge Ana­lyt­i­ca was even pos­si­ble.

    In Octo­ber 2012, Zucker­berg sent an email to Sam Lessin, Face­book’s for­mer direc­tor of prod­uct man­age­ment, to say he was get­ting “more on board” with lock­ing down some access to devel­op­ers on the site, includ­ing Friends data and email address­es for mobile apps. That said, Zucker­berg told Lessin he was “gen­er­al­ly skep­ti­cal that there is as much data leak strate­gic risk as you think,” and that he agreed there was “a clear risk on the adver­tis­er side” but had­n’t fig­ured out how that relat­ed to the rest of the Face­book plat­form. “I think we leak info to devel­op­ers,” Zucker­berg added, “but I just can’t think [of] any instances where that data has leaked from devel­op­er to devel­op­er and caused a real issue for us. Do you have exam­ples of this?”

    Of course, as we now know, that’s basi­cal­ly what hap­pened with Cam­bridge Ana­lyt­i­ca after it took peo­ple’s data with­out Face­book’s knowl­edge and then used it for polit­i­cal research. Unfor­tu­nate­ly for Face­book and its users, that exam­ple Zucker­berg want­ed came too late. Around the same time, the Face­book CEO dis­cussed sell­ing user data to devel­op­ers who spent mon­ey on the site.

    “If we make it so devs can generate revenue for us in different ways, then it makes it more acceptable for us to charge them quite a bit more for using platform,” he said. “The basic idea is that any other revenue you generate for us earns you a credit towards whatever fees you owe us for using platform. For most developers this would probably cover cost completely. So instead of every [developer] paying us directly, they’d just use our payments or ads products.” Zuckerberg said the basic model for that could be letting developers use the Login with Facebook API for free (as it is today), but if they wanted access to things like someone’s Friends list, then they’d have to pay $0.10 per user every year.

    The emails seized by the UK Par­lia­ment also allege that Face­book thought about bypass­ing an Android per­mis­sion screen that would ask for access to peo­ple’s call logs, which would obvi­ous­ly be a strong vio­la­tion of users’ pri­va­cy. Here’s a con­cern­ing email exchange between two Face­book exec­u­tives from Feb­ru­ary 2015:

    “Hey guys, as you know all the growth team is plan­ning on ship­ping a per­mis­sions update on Android at the end of this month. They are going to include the ‘read call log’ per­mis­sion, which will trig­ger the Android per­mis­sions dia­log on update, requir­ing users to accept the update. They will then pro­vide an in-app opt in NUX for a fea­ture that lets you con­tin­u­ous­ly upload your SMS and call log his­to­ry to Face­book to be used for improv­ing things like PYMK, coef­fi­cient cal­cu­la­tion, feed rank­ing etc. This is a pret­ty high risk thing to do from a PR per­spec­tive but it appears that the growth team will charge ahead and do it.”

    The reply is even worse:

    “The Growth team is now explor­ing a path where we only request Read Call Log per­mis­sion, and hold off on request­ing any oth­er per­mis­sions for now.

    Based on their ini­tial test­ing, it seems this would allow us to upgrade users with­out sub­ject­ing them to an Android per­mis­sions dia­log at all. It would still be a break­ing change, so users would have to click to upgrade, but no per­mis­sions dia­log screen.”

    In a state­ment, a Face­book spokesper­son told Engad­get that the doc­u­ments gath­ered as part of a law­suit between it and app devel­op­er Six4Three in Cal­i­for­nia are “only part of the sto­ry and are pre­sent­ed in a way that is very mis­lead­ing with­out addi­tion­al con­text.” The spokesper­son added, “We stand by the plat­form changes we made in 2015 to stop a per­son from shar­ing their friends’ data with devel­op­ers. Like any busi­ness, we had many inter­nal con­ver­sa­tions about the var­i­ous ways we could build a sus­tain­able busi­ness mod­el for our plat­form. But the facts are clear: we’ve nev­er sold peo­ple’s data.”

    ...

    ———–

    “Facebook’s inter­nal doc­u­ments show its ruth­less­ness” by Edgar Alvarez; Engad­get; 12/05/2018

    “As expected, the UK Parliament has released a set of internal Facebook emails that were seized as part of its investigation into the company’s data-privacy practices. The 250-page document, which includes conversations between Facebook CEO Mark Zuckerberg and other high-level executives, is a window into the social media giant’s ruthless thinking from 2012 to 2015 — a period of time when it was growing (and collecting user data) at an unstoppable rate. While Facebook was white-listing companies like Airbnb, Lyft and Netflix to get special access to people’s information in 2013, it went out of its way to block competitors such as Vine from using its tools.”

    Yep, this released cache of Facebook emails included conversations between Mark Zuckerberg and other high-level executives. This wasn’t just the musings of lower-level employees. And those high-level communications include Zuckerberg himself approving a strategy of cutting off access to user data for apps that Facebook viewed as potential competitors:

    ...
    When Twit­ter launched Vine, the app had access to Face­book’s Friends API, which let Vine users see which of their Face­book friends were using the then-new app. But after approval from Zucker­berg him­self, that access was cut off. “Unless any­one rais­es objec­tions, we will shut down their [Vine’s] Friends API access today. We’ve pre­pared reac­tive PR, and I will let Jana know our deci­sion,” Justin Osof­sky, Face­book’s vice pres­i­dent of glob­al oper­a­tions and media part­ner­ships, said in an email at the time. Zucker­berg replied, “Yup, go for it.”

    The UK’s Dig­i­tal, Cul­ture, Media and Sport Com­mit­tee is using this is an exam­ple of Face­book’s anti­com­pet­i­tive nature, which is fur­ther high­light­ed in more of the inter­nal files. In Novem­ber 2012, in an email about reci­procity and data val­ue, Zucker­berg talked about how Face­book’s goal was to let peo­ple “share every­thing they want.” Devel­op­ers on the site, he said, could build apps to let users do exact­ly that, but Face­book need­ed to be wary of them becom­ing a com­peti­tor in the social media space.

    “Some­times the best way to enable peo­ple to share some­thing is to have a devel­op­er build a spe­cial pur­pose app or net­work for that type of con­tent and to make that app social by hav­ing Face­book plug into it,” Zucker­berg said. “How­ev­er, that may be good for the world but it’s not good for us unless peo­ple also share back to Face­book and that con­tent increas­es the val­ue of our net­work. So ulti­mate­ly, I think the pur­pose of plat­form — even the read side — is to increase shar­ing back into Face­book.”
    ...

    We also have an email from Zuckerberg in 2012 where he expresses skepticism that there’s really much of a risk in giving app developers access to so much information about Facebook users. Risks like having that data leak out to the world. It’s an example of how wildly cavalier Facebook was about handing over so much user data to all sorts of different app developer companies: Zuckerberg apparently couldn’t imagine that data leaks, like the leaking of the data collected by Aleksandr Kogan’s psychological profiling app to Cambridge Analytica for use in political campaigns, would actually happen:

    ...
    In Octo­ber 2012, Zucker­berg sent an email to Sam Lessin, Face­book’s for­mer direc­tor of prod­uct man­age­ment, to say he was get­ting “more on board” with lock­ing down some access to devel­op­ers on the site, includ­ing Friends data and email address­es for mobile apps. That said, Zucker­berg told Lessin he was “gen­er­al­ly skep­ti­cal that there is as much data leak strate­gic risk as you think,” and that he agreed there was “a clear risk on the adver­tis­er side” but had­n’t fig­ured out how that relat­ed to the rest of the Face­book plat­form. “I think we leak info to devel­op­ers,” Zucker­berg added, “but I just can’t think [of] any instances where that data has leaked from devel­op­er to devel­op­er and caused a real issue for us. Do you have exam­ples of this?”

    Of course, as we now know, that’s basi­cal­ly what hap­pened with Cam­bridge Ana­lyt­i­ca after it took peo­ple’s data with­out Face­book’s knowl­edge and then used it for polit­i­cal research. Unfor­tu­nate­ly for Face­book and its users, that exam­ple Zucker­berg want­ed came too late. Around the same time, the Face­book CEO dis­cussed sell­ing user data to devel­op­ers who spent mon­ey on the site.
    ...

    And around the same time Zuckerberg was dismissing the dangers of data leaks, he was contemplating a scheme where app developers would effectively pay for continued access to that data, with the payments made indirectly in the form of buying Facebook ads or routing other revenue through the platform:

    ...
    “If we make it so devs can generate revenue for us in different ways, then it makes it more acceptable for us to charge them quite a bit more for using platform,” he said. “The basic idea is that any other revenue you generate for us earns you a credit towards whatever fees you owe us for using platform. For most developers this would probably cover cost completely. So instead of every [developer] paying us directly, they’d just use our payments or ads products.” Zuckerberg said the basic model for that could be letting developers use the Login with Facebook API for free (as it is today), but if they wanted access to things like someone’s Friends list, then they’d have to pay $0.10 per user every year.
    ...

    “So instead of every [devel­op­er] pay­ing us direct­ly, they’d just use our pay­ments or ads prod­ucts.” That’s a Zucker­berg quote to keep in mind the next time you hear Face­book claim­ing that it does­n’t sell user data.
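    As a back-of-the-envelope illustration of the credit model being described, here is some hypothetical arithmetic. The $0.10-per-user-per-year figure comes from Zuckerberg’s email; the developer numbers are invented:

        # Hypothetical arithmetic for the fee-credit model in Zuckerberg's email:
        # developers owe a per-user fee for friends-data access, but revenue they
        # already generate for Facebook (ad buys, payments rev-share) is credited.
        PER_USER_FEE = 0.10  # $0.10 per user per year, per the 2012 email

        def net_platform_fee(users: int, ad_spend: float, rev_share: float) -> float:
            """What a developer would still owe after credits (illustrative only)."""
            gross_fee = users * PER_USER_FEE
            return max(0.0, gross_fee - ad_spend - rev_share)

        # A mid-sized app: 2 million users, $150k/year of Facebook ads, $80k of
        # shared payments revenue (all invented). Credits cover the fee entirely.
        print(net_platform_fee(2_000_000, ad_spend=150_000.0, rev_share=80_000.0))  # 0.0

        # An app buying no ads would owe the fee directly.
        print(net_platform_fee(5_000_000, ad_spend=0.0, rev_share=0.0))  # 500000.0

    Under that arithmetic, most developers would never cut Facebook a cheque for the data itself, which is precisely what lets the company say the data was never “sold”.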

    And then there were the discussions about bypassing an Android permission screen that would ask for access to smartphone call and text logs, a massive potential privacy violation. Note how Facebook’s primary concern was that it was “a pretty high risk thing to do from a PR perspective”:

    ...
    The emails seized by the UK Par­lia­ment also allege that Face­book thought about bypass­ing an Android per­mis­sion screen that would ask for access to peo­ple’s call logs, which would obvi­ous­ly be a strong vio­la­tion of users’ pri­va­cy. Here’s a con­cern­ing email exchange between two Face­book exec­u­tives from Feb­ru­ary 2015:

    “Hey guys, as you know all the growth team is plan­ning on ship­ping a per­mis­sions update on Android at the end of this month. They are going to include the ‘read call log’ per­mis­sion, which will trig­ger the Android per­mis­sions dia­log on update, requir­ing users to accept the update. They will then pro­vide an in-app opt in NUX for a fea­ture that lets you con­tin­u­ous­ly upload your SMS and call log his­to­ry to Face­book to be used for improv­ing things like PYMK, coef­fi­cient cal­cu­la­tion, feed rank­ing etc. This is a pret­ty high risk thing to do from a PR per­spec­tive but it appears that the growth team will charge ahead and do it.”

    The reply is even worse:

    “The Growth team is now explor­ing a path where we only request Read Call Log per­mis­sion, and hold off on request­ing any oth­er per­mis­sions for now.

    Based on their ini­tial test­ing, it seems this would allow us to upgrade users with­out sub­ject­ing them to an Android per­mis­sions dia­log at all. It would still be a break­ing change, so users would have to click to upgrade, but no per­mis­sions dia­log screen.”

    ...

    But Face­book wants to assure us that “we’ve nev­er sold peo­ple’s data”:

    ...
    In a state­ment, a Face­book spokesper­son told Engad­get that the doc­u­ments gath­ered as part of a law­suit between it and app devel­op­er Six4Three in Cal­i­for­nia are “only part of the sto­ry and are pre­sent­ed in a way that is very mis­lead­ing with­out addi­tion­al con­text.” The spokesper­son added, “We stand by the plat­form changes we made in 2015 to stop a per­son from shar­ing their friends’ data with devel­op­ers. Like any busi­ness, we had many inter­nal con­ver­sa­tions about the var­i­ous ways we could build a sus­tain­able busi­ness mod­el for our plat­form. But the facts are clear: we’ve nev­er sold peo­ple’s data.”
    ...

    So given Facebook’s response that, sure, it thought about selling user data but didn’t actually do it, it’s worth keeping in mind that the arrangement Facebook already had in place, where app developers could get access to all of this precious user data in exchange for developing apps for Facebook, was itself a form of selling the data. The ‘payment’ for the data came in the form of developing the app which, in turn, made people more likely to use Facebook.

    It’s also worth not­ing how much Face­book was think­ing about charg­ing for access to this data: $250,000 spent on Face­book ads:

    USA Today

    Face­book emails sug­gest com­pa­ny explored sell­ing peo­ple’s data despite pledges not to

    Jes­si­ca Guynn, USA TODAY Pub­lished 11:59 a.m. ET Dec. 5, 2018 | Updat­ed 4:36 p.m. ET Dec. 5, 2018

    SAN FRANCISCO – Inter­nal Face­book emails pub­lished online by U.K. law­mak­ers, some involv­ing CEO Mark Zucker­berg, paint a pic­ture of a com­pa­ny aggres­sive­ly hunt­ing for ways to make mon­ey from the reams of per­son­al infor­ma­tion it was col­lect­ing from users.

    Wednes­day’s release of some 250 pages of emails from 2012 to 2015 – a peri­od of dra­mat­ic growth for the new­ly pub­licly trad­ed com­pa­ny – pro­vides a rare glimpse into Face­book’s inter­nal con­ver­sa­tions, sug­gest­ing the social media giant gave pref­er­en­tial access to some third-par­ty app devel­op­ers such as Airbnb, Lyft and Net­flix, while restrict­ing access for oth­ers. It also con­sid­ered charg­ing app devel­op­ers for access to data, despite pledges that it would nev­er do so.

    There is no indi­ca­tion that Face­book went for­ward with a pro­pos­al to charge app devel­op­ers for access to the per­son­al infor­ma­tion of Face­book users. On Wednes­day, Zucker­berg denied Face­book ever sold or con­sid­ered sell­ing the data of its more than 2 bil­lion users.

    ...

    “I’ve been think­ing about plat­form busi­ness mod­el a lot this weekend....if we make it so (devel­op­ers) can gen­er­ate rev­enue for us in dif­fer­ent ways, then it makes it more accept­able for us to charge them quite a bit more for using plat­form,” Zucker­berg wrote in one email.

    ...

    Among the details in the Face­book emails:

    Face­book staffers explored how to use access to Face­book users’ data to get com­pa­nies to spend more on adver­tis­ing. In 2012, Face­book staffers debat­ed remov­ing restric­tions on user data for devel­op­ers who spent $250,000 or more on ads.

    Face­book’s response: “We explored mul­ti­ple ways to build a sus­tain­able busi­ness with devel­op­ers who were build­ing apps that were use­ful to peo­ple. ... We ulti­mate­ly set­tled on a mod­el where devel­op­ers did not need to pur­chase adver­tis­ing.”

    ...

    Face­book used its secu­ri­ty app, Ona­vo, to gath­er infor­ma­tion on how many peo­ple used cer­tain apps and how often they used them to help Face­book decide which com­pa­nies it should acquire, includ­ing mes­sag­ing app What­sApp for $19 bil­lion, and which to view as a com­pet­i­tive threat.

    Face­book’s response: “We’ve always been clear when peo­ple down­load Ona­vo about the infor­ma­tion that is col­lect­ed and how it is used, includ­ing by Face­book. ... Peo­ple can opt-out via the con­trol in their set­tings and their data won’t be used for any­thing oth­er than to pro­vide, improve and devel­op Ona­vo prod­ucts and ser­vices.”

    ———-

    “Face­book emails sug­gest com­pa­ny explored sell­ing peo­ple’s data despite pledges not to” by Jes­si­ca Guynn; USA TODAY; 12/05/2018

    “Face­book staffers explored how to use access to Face­book users’ data to get com­pa­nies to spend more on adver­tis­ing. In 2012, Face­book staffers debat­ed remov­ing restric­tions on user data for devel­op­ers who spent $250,000 or more on ads.”

    So that gives us an idea of how much Face­book thought its user data would be worth to app devel­op­ers. $250,000...in the form of Face­book ad spend­ing so no one could accuse Face­book of direct­ly sell­ing user data.

    And true to form, Mark Zuckerberg responded to the release of these documents showing Facebook staffers and Zuckerberg himself considering this pay-to-play scheme by assuring the world that Facebook never even considered selling the data of its users:

    ...
    There is no indi­ca­tion that Face­book went for­ward with a pro­pos­al to charge app devel­op­ers for access to the per­son­al infor­ma­tion of Face­book users. On Wednes­day, Zucker­berg denied Face­book ever sold or con­sid­ered sell­ing the data of its more than 2 bil­lion users.

    ...

    “I’ve been think­ing about plat­form busi­ness mod­el a lot this weekend....if we make it so (devel­op­ers) can gen­er­ate rev­enue for us in dif­fer­ent ways, then it makes it more accept­able for us to charge them quite a bit more for using plat­form,” Zucker­berg wrote in one email.
    ...

    Then there’s Face­book’s use of its Ona­vo secu­ri­ty app to spy on which apps peo­ple use in order to deter­mine which com­pa­nies Face­book should buy. So the Face­book secu­ri­ty app dou­bled as spy­ware:

    ...
    Face­book used its secu­ri­ty app, Ona­vo, to gath­er infor­ma­tion on how many peo­ple used cer­tain apps and how often they used them to help Face­book decide which com­pa­nies it should acquire, includ­ing mes­sag­ing app What­sApp for $19 bil­lion, and which to view as a com­pet­i­tive threat.

    Face­book’s response: “We’ve always been clear when peo­ple down­load Ona­vo about the infor­ma­tion that is col­lect­ed and how it is used, includ­ing by Face­book. ... Peo­ple can opt-out via the con­trol in their set­tings and their data won’t be used for any­thing oth­er than to pro­vide, improve and devel­op Ona­vo prod­ucts and ser­vices.”
    ...

    Final­ly, the released doc­u­ments include ref­er­ences to Face­book giv­ing some app devel­op­ers for big com­pa­nies (Airbnb, Lyft, Net­flix, etc) pref­er­en­tial access to user data:

    ...
    Wednes­day’s release of some 250 pages of emails from 2012 to 2015 – a peri­od of dra­mat­ic growth for the new­ly pub­licly trad­ed com­pa­ny – pro­vides a rare glimpse into Face­book’s inter­nal con­ver­sa­tions, sug­gest­ing the social media giant gave pref­er­en­tial access to some third-par­ty app devel­op­ers such as Airbnb, Lyft and Net­flix, while restrict­ing access for oth­ers. It also con­sid­ered charg­ing app devel­op­ers for access to data, despite pledges that it would nev­er do so.
    ...

    So what was that preferential access to user data provided to this select group of app developers? Well, according to the following article, it included giving them extended access to ‘friends’ user data in 2015, after most other apps had been cut off, along with additional information on those ‘friends’, including phone numbers:

    Ars Tech­ni­ca

    Face­book let select com­pa­nies have “spe­cial access” to user data, per report
    Such data shar­ing was sup­posed to have been ful­ly cut off in 2015, but it was­n’t.

    Cyrus Fari­var — 6/8/2018, 5:19 PM

    Face­book main­tained secret deals with a hand­ful of com­pa­nies, allow­ing them to gain “spe­cial access to user records,” long after it cut off most devel­op­ers’ access to such user data back in 2015, accord­ing to a new Fri­day report by the Wall Street Jour­nal, cit­ing court doc­u­ments it did not pub­lish and oth­er unnamed sources.

    These arrange­ments, which were known as “whitelists,” report­ed­ly allowed “cer­tain com­pa­nies to access addi­tion­al infor­ma­tion about a user’s Face­book friends,” includ­ing phone num­bers.

    Numer­ous com­pa­nies, includ­ing the Roy­al Bank of Cana­da and Nis­san Motor Com­pa­ny, appar­ent­ly main­tained such deals.

    Ime Archi­bong, Facebook’s vice pres­i­dent of prod­uct part­ner­ships, told the Jour­nal that the com­pa­ny had allowed some com­pa­nies to have “short-term exten­sions” to this user data.

    “But oth­er than that, things were shut down,” he said.

    The new report on Face­book is sep­a­rate from the oth­er dis­clo­sure of data shar­ing with 60 device mak­ers, and the oth­er recent rev­e­la­tion that a “bug” made pri­vate posts of 14 mil­lion users pub­lic.

    ...

    ———-

    “Face­book let select com­pa­nies have “spe­cial access” to user data, per report” by Cyrus Fari­var; Ars Tech­ni­ca; 06/08/2018

    “Face­book main­tained secret deals with a hand­ful of com­pa­nies, allow­ing them to gain “spe­cial access to user records,” long after it cut off most devel­op­ers’ access to such user data back in 2015, accord­ing to a new Fri­day report by the Wall Street Jour­nal, cit­ing court doc­u­ments it did not pub­lish and oth­er unnamed sources.”

    So we are told that these select companies were given “special access to user records” long after it was cut off to most developers back in 2015. How long this special access lasted is unclear, and Facebook insists the companies only got short-term extensions.

    ...
    Numer­ous com­pa­nies, includ­ing the Roy­al Bank of Cana­da and Nis­san Motor Com­pa­ny, appar­ent­ly main­tained such deals.

    Ime Archi­bong, Facebook’s vice pres­i­dent of prod­uct part­ner­ships, told the Jour­nal that the com­pa­ny had allowed some com­pa­nies to have “short-term exten­sions” to this user data.

    “But oth­er than that, things were shut down,” he said.
    ...

    So we have Face­book flat­ly con­tra­dict­ing this report, leav­ing us with a ‘should we believe Face­book or the jour­nal­ists?’ deci­sion to make. Keep in mind that Face­book’s pat­tern for vir­tu­al­ly all of these scan­dals has been to deny them until they become unde­ni­able, so it looks like we might have anoth­er one of those sit­u­a­tions with this ‘com­pa­ny white-list’ sto­ry.

    And it sounds like this sto­ry could end up get­ting a lot worse for Face­book as we learn more. Because while we don’t know the full scope of the “spe­cial access” these devel­op­ers were giv­en to user data, it sounds like it at least includes the phone num­bers of the Face­book ‘friends’ of app users:

    ...
    These arrange­ments, which were known as “whitelists,” report­ed­ly allowed “cer­tain com­pa­nies to access addi­tion­al infor­ma­tion about a user’s Face­book friends,” includ­ing phone num­bers.
    ...

    So what oth­er “addi­tion­al infor­ma­tion about a user’s Face­book friends” were these com­pa­nies giv­en access to? At this point we don’t know. But it’s worth not­ing one of the handy things hav­ing those phone num­bers would have allowed these ‘white-list­ed’ app devel­op­ers to do: tar­get these ‘friends’ with Face­book ads.

    Recall how the 2016 Trump campaign made extensive use of the “Custom Audiences” Facebook ad feature, where advertisers could feed Facebook a list of email addresses or phone numbers for the purpose of directly targeting Facebook users with ads. So by handing over information like friends’ phone numbers, Facebook was making it much easier for these companies to directly target the friends of app users with Facebook ads. In other words, Facebook was likely handing over user data for the purpose of selling more Facebook ads.
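    For a sense of the mechanics, here is a minimal sketch of how an advertiser prepares a contact list for a Custom Audiences-style upload. The general approach (normalize the identifiers, SHA-256 hash them, let Facebook match the hashes against accounts) is the documented one; the normalization details shown are simplified assumptions:

    ```python
    # A minimal sketch of preparing contact data for a Custom Audiences-style
    # upload: identifiers are normalized, then SHA-256 hashed, and the hashes
    # are matched server-side against Facebook accounts. Normalization rules
    # here are simplified; treat the details as assumptions.
    import hashlib

    def normalize_email(email: str) -> str:
        # Trim and lowercase so identical addresses hash identically.
        return email.strip().lower()

    def normalize_phone(phone: str) -> str:
        # Digits only, country code included, so formatting differences vanish.
        return "".join(ch for ch in phone if ch.isdigit())

    def hash_identifier(value: str) -> str:
        # The SHA-256 digest is what actually gets uploaded, not the raw value.
        return hashlib.sha256(value.encode("utf-8")).hexdigest()

    contacts = [("email", "Jane.Doe@Example.com "), ("phone", "+1 (555) 123-4567")]
    upload = [
        hash_identifier(normalize_email(v)) if kind == "email"
        else hash_identifier(normalize_phone(v))
        for kind, v in contacts
    ]
    print(upload)
    ```

    With friends’ phone numbers in hand from a whitelist deal, a company could feed them straight into this kind of pipeline and put ads in front of people who never installed its app.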

    But rest assured, Face­book nev­er ever sold user data. ;)

    Posted by Pterrafractyl | December 13, 2018, 9:25 pm
  27. Now that Facebook has been outed as the Grinch of data privacy in 2018, the company presumably didn’t have the merriest of Christmases this year. If you ignore all the profits. So it’s perhaps fitting that what is arguably the worst of the Facebook data privacy revelations of 2018 came last: The New York Times reported last week on previously undisclosed arrangements Facebook has had with around 150 major corporations. The arrangements started in 2010 and many continue today.

    Crit­i­cal­ly, these data shar­ing arrange­ments were defined by Face­book inter­nal­ly to not fall under the 2011 con­sent agree­ment Face­book signed with the Fed­er­al Trade Com­mis­sion (FTC) to not dis­close user data with­out get­ting user per­mis­sions first.

    The 2011 consent agreement was made by Facebook after it was discovered that Facebook was handing out user data without consent following some policy changes Facebook made in 2009. Facebook called this 2009 privacy change “instant personalization.” It affected about 400 million users and made some of their information accessible to the entire internet. That 2009 change also involved sharing additional information, like users’ locations and religious and political leanings, with Microsoft and other partners. As we’re going to see, Microsoft admits it was building profiles of Facebook users.

    But the most scandalous of the new revelations involves a relative handful of companies: it turns out companies like Netflix, Spotify, and the Royal Bank of Canada were given read/write/delete privileges to Facebook users’ private messages. The ostensible reason for this access was so Facebook’s messaging functionality could be incorporated into third party apps. Adding to the scandal is that Netflix and the Royal Bank of Canada apparently retained access to private messages long after they stopped using that feature in their apps.

    Recall that this isn’t the first time we’ve learned about Face­book giv­ing third-par­ties access to pri­vate mes­sages. Black­ber­ry was appar­ent­ly scoop­ing up pri­vate mes­sages as part of the per­mis­sions it got as a device mak­er. And app devel­op­ers were also poten­tial­ly giv­en this per­mis­sion, includ­ing the devel­op­er of the Cam­bridge Ana­lyt­i­ca app. So the access to pri­vate mes­sages giv­en to third-par­ties isn’t a new rev­e­la­tion, but it’s clear­ly still a grow­ing rev­e­la­tion as these scan­dals con­tin­ue to trick­le out.

    We’re also learning more about how Facebook was able to engage in this kind of behavior despite signing a consent decree with the FTC in 2011: the auditing of Facebook’s privacy policies was basically outsourced to PricewaterhouseCoopers, Facebook largely dictated the terms of the audits, and the only thing PricewaterhouseCoopers did was verify that Facebook told them it was internally policing its privacy policies.

    So we’re learning that Facebook was forced to sign a 2011 consent decree with the FTC following its “instant personalization” scandal of 2009. The decree mandated that Facebook get user permission before handing out user data. And Facebook got around the decree by defining large numbers of companies as “extensions of Facebook” that, for some reason, therefore didn’t require user permission before user data was shared with them. This included giving some of these large companies access to information like private messages. And in some cases these companies were given this access to private messages long after they had dropped the features in their apps that used them. At this point we should expect new bad news about Facebook, but even by Facebook standards this was pretty awful:

    The New York Times

    As Face­book Raised a Pri­va­cy Wall, It Carved an Open­ing for Tech Giants

    Inter­nal doc­u­ments show that the social net­work gave Microsoft, Ama­zon, Spo­ti­fy and oth­ers far greater access to people’s data than it has dis­closed.

    By Gabriel J.X. Dance, Michael LaFor­gia and Nicholas Con­fes­sore
    Dec. 18, 2018

    For years, Face­book gave some of the world’s largest tech­nol­o­gy com­pa­nies more intru­sive access to users’ per­son­al data than it has dis­closed, effec­tive­ly exempt­ing those busi­ness part­ners from its usu­al pri­va­cy rules, accord­ing to inter­nal records and inter­views.

    The spe­cial arrange­ments are detailed in hun­dreds of pages of Face­book doc­u­ments obtained by The New York Times. The records, gen­er­at­ed in 2017 by the company’s inter­nal sys­tem for track­ing part­ner­ships, pro­vide the most com­plete pic­ture yet of the social network’s data-shar­ing prac­tices. They also under­score how per­son­al data has become the most prized com­mod­i­ty of the dig­i­tal age, trad­ed on a vast scale by some of the most pow­er­ful com­pa­nies in Sil­i­con Val­ley and beyond.

    The exchange was intend­ed to ben­e­fit every­one. Push­ing for explo­sive growth, Face­book got more users, lift­ing its adver­tis­ing rev­enue. Part­ner com­pa­nies acquired fea­tures to make their prod­ucts more attrac­tive. Face­book users con­nect­ed with friends across dif­fer­ent devices and web­sites. But Face­book also assumed extra­or­di­nary pow­er over the per­son­al infor­ma­tion of its 2.2 bil­lion users — con­trol it has wield­ed with lit­tle trans­paren­cy or out­side over­sight.

    Face­book allowed Microsoft’s Bing search engine to see the names of vir­tu­al­ly all Face­book users’ friends with­out con­sent, the records show, and gave Net­flix and Spo­ti­fy the abil­i­ty to read Face­book users’ pri­vate mes­sages.

    The social net­work per­mit­ted Ama­zon to obtain users’ names and con­tact infor­ma­tion through their friends, and it let Yahoo view streams of friends’ posts as recent­ly as this sum­mer, despite pub­lic state­ments that it had stopped that type of shar­ing years ear­li­er.

    Face­book has been reel­ing from a series of pri­va­cy scan­dals, set off by rev­e­la­tions in March that a polit­i­cal con­sult­ing firm, Cam­bridge Ana­lyt­i­ca, improp­er­ly used Face­book data to build tools that aid­ed Pres­i­dent Trump’s 2016 cam­paign. Acknowl­edg­ing that it had breached users’ trust, Face­book insist­ed that it had insti­tut­ed stricter pri­va­cy pro­tec­tions long ago. Mark Zucker­berg, the chief exec­u­tive, assured law­mak­ers in April that peo­ple “have com­plete con­trol” over every­thing they share on Face­book.

    But the doc­u­ments, as well as inter­views with about 50 for­mer employ­ees of Face­book and its cor­po­rate part­ners, reveal that Face­book allowed cer­tain com­pa­nies access to data despite those pro­tec­tions. They also raise ques­tions about whether Face­book ran afoul of a 2011 con­sent agree­ment with the Fed­er­al Trade Com­mis­sion that barred the social net­work from shar­ing user data with­out explic­it per­mis­sion.

    In all, the deals described in the doc­u­ments ben­e­fit­ed more than 150 com­pa­nies — most of them tech busi­ness­es, includ­ing online retail­ers and enter­tain­ment sites, but also automak­ers and media orga­ni­za­tions. Their appli­ca­tions sought the data of hun­dreds of mil­lions of peo­ple a month, the records show. The deals, the old­est of which date to 2010, were all active in 2017. Some were still in effect this year.

    In an inter­view, Steve Sat­ter­field, Facebook’s direc­tor of pri­va­cy and pub­lic pol­i­cy, said none of the part­ner­ships vio­lat­ed users’ pri­va­cy or the F.T.C. agree­ment. Con­tracts required the com­pa­nies to abide by Face­book poli­cies, he added.

    Still, Face­book exec­u­tives have acknowl­edged mis­steps over the past year. “We know we’ve got work to do to regain people’s trust,” Mr. Sat­ter­field said. “Pro­tect­ing people’s infor­ma­tion requires stronger teams, bet­ter tech­nol­o­gy and clear­er poli­cies, and that’s where we’ve been focused for most of 2018.” He said that the part­ner­ships were “one area of focus” and that Face­book was in the process of wind­ing many of them down.

    Face­book has found no evi­dence of abuse by its part­ners, a spokes­woman said. Some of the largest part­ners, includ­ing Ama­zon, Microsoft and Yahoo, said they had used the data appro­pri­ate­ly, but declined to dis­cuss the shar­ing deals in detail. Face­book did say that it had mis­man­aged some of its part­ner­ships, allow­ing cer­tain com­pa­nies’ access to con­tin­ue long after they had shut down the fea­tures that required the data.

    With most of the part­ner­ships, Mr. Sat­ter­field said, the F.T.C. agree­ment did not require the social net­work to secure users’ con­sent before shar­ing data because Face­book con­sid­ered the part­ners exten­sions of itself — ser­vice providers that allowed users to inter­act with their Face­book friends. The part­ners were pro­hib­it­ed from using the per­son­al infor­ma­tion for oth­er pur­pos­es, he said. “Facebook’s part­ners don’t get to ignore people’s pri­va­cy set­tings.”

    Data pri­va­cy experts dis­put­ed Facebook’s asser­tion that most part­ner­ships were exempt­ed from the reg­u­la­to­ry require­ments, express­ing skep­ti­cism that busi­ness­es as var­ied as device mak­ers, retail­ers and search com­pa­nies would be viewed alike by the agency. “The only com­mon theme is that they are part­ner­ships that would ben­e­fit the com­pa­ny in terms of devel­op­ment or growth into an area that they oth­er­wise could not get access to,” said Ashkan Soltani, for­mer chief tech­nol­o­gist at the F.T.C.

    Mr. Soltani and three for­mer employ­ees of the F.T.C.’s con­sumer pro­tec­tion divi­sion, which brought the case that led to the con­sent decree, said in inter­views that its data-shar­ing deals had prob­a­bly vio­lat­ed the agree­ment.

    “This is just giv­ing third par­ties per­mis­sion to har­vest data with­out you being informed of it or giv­ing con­sent to it,” said David Vladeck, who for­mer­ly ran the F.T.C.’s con­sumer pro­tec­tion bureau. “I don’t under­stand how this uncon­sent­ed-to data har­vest­ing can at all be jus­ti­fied under the con­sent decree.”

    Details of the agree­ments are emerg­ing at a piv­otal moment for the world’s largest social net­work. Face­book has been ham­mered with ques­tions about its data shar­ing from law­mak­ers and reg­u­la­tors in the Unit­ed States and Europe. The F.T.C. this spring opened a new inquiry into Facebook’s com­pli­ance with the con­sent order, while the Jus­tice Depart­ment and Secu­ri­ties and Exchange Com­mis­sion are also inves­ti­gat­ing the com­pa­ny.

    ...

    This month, a British par­lia­men­tary com­mit­tee inves­ti­gat­ing inter­net dis­in­for­ma­tion released inter­nal Face­book emails, seized from the plain­tiff in anoth­er law­suit against Face­book. The mes­sages dis­closed some part­ner­ships and depict­ed a com­pa­ny pre­oc­cu­pied with growth, whose lead­ers sought to under­mine com­peti­tors and briefly con­sid­ered sell­ing access to user data.
    [Photo: Richard Allan, a Facebook vice president, testifying before Parliament last month next to Mr. Zuckerberg’s vacant seat. The company is under fire from both American and European lawmakers. Credit: Agence France-Presse — Getty Images]

    As Face­book has bat­tled one cri­sis after anoth­er, the company’s crit­ics, includ­ing some for­mer advis­ers and employ­ees, have sin­gled out the data-shar­ing as cause for con­cern.

    “I don’t believe it is legit­i­mate to enter into data-shar­ing part­ner­ships where there is not pri­or informed con­sent from the user,” said Roger McNamee, an ear­ly investor in Face­book. “No one should trust Face­book until they change their busi­ness mod­el.”

    An Engine for Growth

    Per­son­al data is the oil of the 21st cen­tu­ry, a resource worth bil­lions to those who can most effec­tive­ly extract and refine it. Amer­i­can com­pa­nies alone are expect­ed to spend close to $20 bil­lion by the end of 2018 to acquire and process con­sumer data, accord­ing to the Inter­ac­tive Adver­tis­ing Bureau.

    Few com­pa­nies have bet­ter data than Face­book and its rival, Google, whose pop­u­lar prod­ucts give them an inti­mate view into the dai­ly lives of bil­lions of peo­ple — and allow them to dom­i­nate the dig­i­tal adver­tis­ing mar­ket.

    Face­book has nev­er sold its user data, fear­ful of user back­lash and wary of hand­ing would-be com­peti­tors a way to dupli­cate its most prized asset. Instead, inter­nal doc­u­ments show, it did the next best thing: grant­i­ng oth­er com­pa­nies access to parts of the social net­work in ways that advanced its own inter­ests.

    Face­book began form­ing data part­ner­ships when it was still a rel­a­tive­ly young com­pa­ny. Mr. Zucker­berg was deter­mined to weave Facebook’s ser­vices into oth­er sites and plat­forms, believ­ing it would stave off obso­les­cence and insu­late Face­book from com­pe­ti­tion. Every cor­po­rate part­ner that inte­grat­ed Face­book data into its online prod­ucts helped dri­ve the platform’s expan­sion, bring­ing in new users, spurring them to spend more time on Face­book and dri­ving up adver­tis­ing rev­enue. At the same time, Face­book got crit­i­cal data back from its part­ners.

    The part­ner­ships were so impor­tant that deci­sions about form­ing them were vet­ted at high lev­els, some­times by Mr. Zucker­berg and Sheryl Sand­berg, the chief oper­at­ing offi­cer, Face­book offi­cials said. While many of the part­ner­ships were announced pub­licly, the details of the shar­ing arrange­ments typ­i­cal­ly were con­fi­den­tial.

    By 2013, Face­book had entered into more such part­ner­ships than its midlev­el employ­ees could eas­i­ly track, accord­ing to inter­views with two for­mer employ­ees. (Like the more than 30 oth­er for­mer employ­ees inter­viewed for this arti­cle, they spoke on the con­di­tion of anonymi­ty because they had signed nondis­clo­sure agree­ments or still main­tained rela­tion­ships with top Face­book offi­cials.)

    So they built a tool that did the tech­ni­cal work of turn­ing spe­cial access on and off and also kept records on what are known inter­nal­ly as “capa­bil­i­ties” — the spe­cial priv­i­leges enabling com­pa­nies to obtain data, in some cas­es with­out ask­ing per­mis­sion.

    The Times reviewed more than 270 pages of reports generated by the system — records that reflect just a portion of Facebook’s wide-ranging deals. Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”

    The fea­ture, intro­duced in 2008, con­tin­ues even though some Face­book users have object­ed to it, unset­tled by its knowl­edge of their real-world rela­tion­ships. Giz­mo­do and oth­er news out­lets have report­ed cas­es of the tool’s rec­om­mend­ing friend con­nec­tions between patients of the same psy­chi­a­trist, estranged fam­i­ly mem­bers, and a harass­er and his vic­tim.

    Face­book, in turn, used con­tact lists from the part­ners, includ­ing Ama­zon, Yahoo and the Chi­nese com­pa­ny Huawei — which has been flagged as a secu­ri­ty threat by Amer­i­can intel­li­gence offi­cials — to gain deep­er insight into people’s rela­tion­ships and sug­gest more con­nec­tions, the records show.

    Some of the access deals described in the doc­u­ments were lim­it­ed to shar­ing non-iden­ti­fy­ing infor­ma­tion with research firms or enabling game mak­ers to accom­mo­date huge num­bers of play­ers. These raised no pri­va­cy con­cerns. But agree­ments with about a dozen com­pa­nies did. Some enabled part­ners to see users’ con­tact infor­ma­tion through their friends — even after the social net­work, respond­ing to com­plaints, said in 2014 that it was strip­ping all appli­ca­tions of that pow­er.

    As of 2017, Sony, Microsoft, Ama­zon and oth­ers could obtain users’ email address­es through their friends.

    Face­book also allowed Spo­ti­fy, Net­flix and the Roy­al Bank of Cana­da to read, write and delete users’ pri­vate mes­sages, and to see all par­tic­i­pants on a thread — priv­i­leges that appeared to go beyond what the com­pa­nies need­ed to inte­grate Face­book into their sys­tems, the records show. Face­book acknowl­edged that it did not con­sid­er any of those three com­pa­nies to be ser­vice providers. Spokes­peo­ple for Spo­ti­fy and Net­flix said those com­pa­nies were unaware of the broad pow­ers Face­book had grant­ed them. A spokesman for Net­flix said Wednes­day that it had used the access only to enable cus­tomers to rec­om­mend TV shows and movies to their friends.

    “Beyond these rec­om­men­da­tions, we nev­er accessed anyone’s per­son­al mes­sages and would nev­er do that,” he said.

    A Roy­al Bank of Cana­da spokesman dis­put­ed that the bank had had any such access. (Aspects of some shar­ing part­ner­ships, includ­ing those with the Roy­al Bank of Cana­da and Bing, were first report­ed by The Wall Street Jour­nal.)

    Spo­ti­fy, which could view mes­sages of more than 70 mil­lion users a month, still offers the option to share music through Face­book Mes­sen­ger. But Net­flix and the Cana­di­an bank no longer need­ed access to mes­sages because they had deac­ti­vat­ed fea­tures that incor­po­rat­ed it.

    These were not the only com­pa­nies that had spe­cial access longer than they need­ed it. Yahoo, The Times and oth­ers could still get Face­book users’ per­son­al infor­ma­tion in 2017.

    Yahoo could view real-time feeds of friends’ posts for a fea­ture that the com­pa­ny had end­ed in 2012. A Yahoo spokesman declined to dis­cuss the part­ner­ship in detail but said the com­pa­ny did not use the infor­ma­tion for adver­tis­ing. The Times — one of nine media com­pa­nies named in the doc­u­ments — had access to users’ friend lists for an arti­cle-shar­ing appli­ca­tion it had dis­con­tin­ued in 2011. A spokes­woman for the news orga­ni­za­tion said it was not obtain­ing any data.

    Facebook’s inter­nal records also revealed more about the extent of shar­ing deals with over 60 mak­ers of smart­phones, tablets and oth­er devices, agree­ments first report­ed by The Times in June.

    Face­book empow­ered Apple to hide from Face­book users all indi­ca­tors that its devices were ask­ing for data. Apple devices also had access to the con­tact num­bers and cal­en­dar entries of peo­ple who had changed their account set­tings to dis­able all shar­ing, the records show.

    Apple offi­cials said they were not aware that Face­book had grant­ed its devices any spe­cial access. They added that any shared data remained on the devices and was not avail­able to any­one oth­er than the users.

    Face­book offi­cials said the com­pa­ny had dis­closed its shar­ing deals in its pri­va­cy pol­i­cy since 2010. But the lan­guage in the pol­i­cy about its ser­vice providers does not spec­i­fy what data Face­book shares, and with which com­pa­nies. Mr. Sat­ter­field, Facebook’s pri­va­cy direc­tor, also said its part­ners were sub­ject to “rig­or­ous con­trols.”

    Yet Face­book has an imper­fect track record of polic­ing what out­side com­pa­nies do with its user data. In the Cam­bridge Ana­lyt­i­ca case, a Cam­bridge Uni­ver­si­ty psy­chol­o­gy pro­fes­sor cre­at­ed an appli­ca­tion in 2014 to har­vest the per­son­al data of tens of mil­lions of Face­book users for the con­sult­ing firm.

    Pam Dixon, exec­u­tive direc­tor of the World Pri­va­cy Forum, a non­prof­it pri­va­cy research group, said that Face­book would have lit­tle pow­er over what hap­pens to users’ infor­ma­tion after shar­ing it broad­ly. “It trav­els,” Ms. Dixon said. “It could be cus­tomized. It could be fed into an algo­rithm and deci­sions could be made about you based on that data.”

    400 Mil­lion Exposed

    Unlike Europe, where social media com­pa­nies have had to adapt to stricter reg­u­la­tion, the Unit­ed States has no gen­er­al con­sumer pri­va­cy law, leav­ing tech com­pa­nies free to mon­e­tize most kinds of per­son­al infor­ma­tion as long as they don’t mis­lead their users. The F.T.C., which reg­u­lates trade, can bring enforce­ment actions against com­pa­nies that deceive their cus­tomers.

    Besides Face­book, the F.T.C. has con­sent agree­ments with Google and Twit­ter stem­ming from alleged pri­va­cy vio­la­tions.

    Facebook’s agree­ment with reg­u­la­tors is a result of the company’s ear­ly exper­i­ments with data shar­ing. In late 2009, it changed the pri­va­cy set­tings of the 400 mil­lion peo­ple then using the ser­vice, mak­ing some of their infor­ma­tion acces­si­ble to all of the inter­net. Then it shared that infor­ma­tion, includ­ing users’ loca­tions and reli­gious and polit­i­cal lean­ings, with Microsoft and oth­er part­ners.

    Face­book called this “instant per­son­al­iza­tion” and pro­mot­ed it as a step toward a bet­ter inter­net, where oth­er com­pa­nies would use the infor­ma­tion to cus­tomize what peo­ple saw on sites like Bing. But the fea­ture drew com­plaints from pri­va­cy advo­cates and many Face­book users that the social net­work had shared the infor­ma­tion with­out per­mis­sion.

    The F.T.C. inves­ti­gat­ed and in 2011 cit­ed the pri­va­cy changes as a decep­tive prac­tice. Caught off guard, Face­book offi­cials stopped men­tion­ing instant per­son­al­iza­tion in pub­lic and entered into the con­sent agree­ment.

    Under the decree, the social net­work intro­duced a “com­pre­hen­sive pri­va­cy pro­gram” charged with review­ing new prod­ucts and fea­tures. It was ini­tial­ly over­seen by two chief pri­va­cy offi­cers, their lofty title an appar­ent sign of Facebook’s com­mit­ment. The com­pa­ny also hired Price­wa­ter­house­C­oop­ers to assess its pri­va­cy prac­tices every two years.

    But the pri­va­cy pro­gram faced some inter­nal resis­tance from the start, accord­ing to four for­mer Face­book employ­ees with direct knowl­edge of the company’s efforts. Some engi­neers and exec­u­tives, they said, con­sid­ered the pri­va­cy reviews an imped­i­ment to quick inno­va­tion and growth. And the core team respon­si­ble for coor­di­nat­ing the reviews — num­ber­ing about a dozen peo­ple by 2016 — was moved around with­in Facebook’s sprawl­ing orga­ni­za­tion, send­ing mixed sig­nals about how seri­ous­ly the com­pa­ny took it, the ex-employ­ees said.

    Crit­i­cal­ly, many of Facebook’s spe­cial shar­ing part­ner­ships were not sub­ject to exten­sive pri­va­cy pro­gram reviews, two of the for­mer employ­ees said. Exec­u­tives believed that because the part­ner­ships were gov­erned by busi­ness con­tracts requir­ing them to fol­low Face­book data poli­cies, they did not require the same lev­el of scruti­ny. The pri­va­cy team had lim­it­ed abil­i­ty to review or sug­gest changes to some of those data-shar­ing agree­ments, which had been nego­ti­at­ed by more senior offi­cials at the com­pa­ny.

    Face­book offi­cials said that mem­bers of the pri­va­cy team had been con­sult­ed on the shar­ing agree­ments, but that the lev­el of review “depend­ed on the spe­cif­ic part­ner­ship and the time it was cre­at­ed.”

    In 2014, Face­book end­ed instant per­son­al­iza­tion and walled off access to friends’ infor­ma­tion. But in a pre­vi­ous­ly unre­port­ed agree­ment, the social network’s engi­neers con­tin­ued allow­ing Bing; Pan­do­ra, the music stream­ing ser­vice; and Rot­ten Toma­toes, the movie and tele­vi­sion review site, access to much of the data they had got­ten for the dis­con­tin­ued fea­ture. Bing had access to the infor­ma­tion through last year, the records show, and the two oth­er com­pa­nies did as of late sum­mer, accord­ing to tests by The Times.

    Face­book offi­cials said the data shar­ing did not vio­late users’ pri­va­cy because it allowed access only to pub­lic data — though that includ­ed data that the social net­work had made pub­lic in 2009. They added that the social net­work made a mis­take in allow­ing the access to con­tin­ue for the three com­pa­nies, but declined to elab­o­rate. Spokes­women for Pan­do­ra and Rot­ten Toma­toes said the busi­ness­es were not aware of any spe­cial access.

    Face­book also declined to dis­cuss the oth­er capa­bil­i­ties Bing was giv­en, includ­ing the abil­i­ty to see all users’ friends.

    Microsoft offi­cials said that Bing was using the data to build pro­files of Face­book users on Microsoft servers. They declined to pro­vide details, oth­er than to say the infor­ma­tion was used in “fea­ture devel­op­ment” and not for adver­tis­ing. Microsoft has since delet­ed the data, the offi­cials said.

    Com­pli­ance Ques­tions

    For some advo­cates, the tor­rent of user data flow­ing out of Face­book has called into ques­tion not only Facebook’s com­pli­ance with the F.T.C. agree­ment, but also the agency’s approach to pri­va­cy reg­u­la­tion.

    “There has been an end­less bar­rage of how Face­book has ignored users’ pri­va­cy set­tings, and we tru­ly believed that in 2011 we had solved this prob­lem,” said Marc Roten­berg, head of the Elec­tron­ic Pri­va­cy Infor­ma­tion Cen­ter, an online pri­va­cy group that filed one of the first com­plaints about Face­book with fed­er­al reg­u­la­tors. “We brought Face­book under the reg­u­la­to­ry author­i­ty of the F.T.C. after a tremen­dous amount of work. The F.T.C. has failed to act.”

    Accord­ing to Face­book, most of its data part­ner­ships fall under an exemp­tion to the F.T.C. agree­ment. The com­pa­ny argues that the part­ner com­pa­nies are ser­vice providers — com­pa­nies that use the data only “for and at the direc­tion of” Face­book and func­tion as an exten­sion of the social net­work.

    But Mr. Vladeck and oth­er for­mer F.T.C. offi­cials said that Face­book was inter­pret­ing the exemp­tion too broad­ly. They said the pro­vi­sion was intend­ed to allow Face­book to per­form the same every­day func­tions as oth­er com­pa­nies, such as send­ing and receiv­ing infor­ma­tion over the inter­net or pro­cess­ing cred­it card trans­ac­tions, with­out vio­lat­ing the con­sent decree.

    When The Times report­ed last sum­mer on the part­ner­ships with device mak­ers, Face­book used the term “inte­gra­tion part­ners” to describe Black­Ber­ry, Huawei and oth­er man­u­fac­tur­ers that pulled Face­book data to pro­vide social-media-style fea­tures on smart­phones. All such inte­gra­tion part­ners, Face­book assert­ed, were cov­ered by the ser­vice provider exemp­tion.

    Since then, as the social net­work has dis­closed its data shar­ing deals with oth­er kinds of busi­ness­es — includ­ing inter­net com­pa­nies such as Yahoo — Face­book has labeled them inte­gra­tion part­ners, too.

    Face­book even recat­e­go­rized one com­pa­ny, the Russ­ian search giant Yan­dex, as an inte­gra­tion part­ner.

    Face­book records show Yan­dex had access in 2017 to Facebook’s unique user IDs even after the social net­work stopped shar­ing them with oth­er appli­ca­tions, cit­ing pri­va­cy risks. A spokes­woman for Yan­dex, which was accused last year by Ukraine’s secu­ri­ty ser­vice of fun­nel­ing its user data to the Krem­lin, said the com­pa­ny was unaware of the access and did not know why Face­book had allowed it to con­tin­ue. She added that the Ukrain­ian alle­ga­tions “have no mer­it.”

    In Octo­ber, Face­book said Yan­dex was not an inte­gra­tion part­ner. But in ear­ly Decem­ber, as The Times was prepar­ing to pub­lish this arti­cle, Face­book told con­gres­sion­al law­mak­ers that it was.

    An F.T.C. spokes­woman declined to com­ment on whether the com­mis­sion agreed with Facebook’s inter­pre­ta­tion of the ser­vice provider excep­tion, which is like­ly to fig­ure in the F.T.C.’s ongo­ing Face­book inves­ti­ga­tion. She also declined to say whether the com­mis­sion had ever received a com­plete list of part­ners that Face­book con­sid­ered ser­vice providers.

    But fed­er­al reg­u­la­tors had rea­son to know about the part­ner­ships — and to ques­tion whether Face­book was ade­quate­ly safe­guard­ing users’ pri­va­cy. Accord­ing to a let­ter that Face­book sent this fall to Sen­a­tor Ron Wyden, the Ore­gon Demo­c­rat, Price­wa­ter­house­C­oop­ers reviewed at least some of Facebook’s data part­ner­ships.

    The first assess­ment, sent to the F.T.C. in 2013, found only “lim­it­ed” evi­dence that Face­book had mon­i­tored those part­ners’ use of data. The find­ing was redact­ed from a pub­lic copy of the assess­ment, which gave Facebook’s pri­va­cy pro­gram a pass­ing grade over all.

    Mr. Wyden and oth­er crit­ics have ques­tioned whether the assess­ments — in which the F.T.C. essen­tial­ly out­sources much of its day-to-day over­sight to com­pa­nies like Price­wa­ter­house­C­oop­ers — are effec­tive. As with oth­er busi­ness­es under con­sent agree­ments with the F.T.C., Face­book pays for and large­ly dic­tat­ed the scope of its assess­ments, which are lim­it­ed most­ly to doc­u­ment­ing that Face­book has con­duct­ed the inter­nal pri­va­cy reviews it claims it had.

    How close­ly Face­book mon­i­tored its data part­ners is uncer­tain. Most of Facebook’s part­ners declined to dis­cuss what kind of reviews or audits Face­book sub­ject­ed them to. Two for­mer Face­book part­ners, whose deals with the social net­work dat­ed to 2010, said they could find no evi­dence that Face­book had ever audit­ed them. One was Black­Ber­ry. The oth­er was Yan­dex.

    Face­book offi­cials said that while the social net­work audit­ed part­ners only rarely, it man­aged them close­ly.

    “These were high-touch rela­tion­ships,” Mr. Sat­ter­field said.

    ———-

    “As Face­book Raised a Pri­va­cy Wall, It Carved an Open­ing for Tech Giants” by Gabriel J.X. Dance, Michael LaFor­gia and Nicholas Con­fes­sore; The New York Times; 12/18/2018

    “In all, the deals described in the documents benefited more than 150 companies — most of them tech businesses, including online retailers and entertainment sites, but also automakers and media organizations. Their applications sought the data of hundreds of millions of people a month, the records show. The deals, the oldest of which date to 2010, were all active in 2017. Some were still in effect this year.”

    So starting in 2010, Facebook began working out deals involving the data of hundreds of millions of people a month with more than 150 companies, and some of those deals are still in effect today.

    What exactly was the nature of those data sharing arrangements? Well, since Facebook is clearly determined to reveal as little as possible and deny as much as possible, we have to rely on whatever sources of information we can get. In this case, The New York Times somehow got its hands on the 2017 records of Facebook’s internal system for tracking these data sharing agreements, giving us the best glimpse so far:

    ...
    The spe­cial arrange­ments are detailed in hun­dreds of pages of Face­book doc­u­ments obtained by The New York Times. The records, gen­er­at­ed in 2017 by the company’s inter­nal sys­tem for track­ing part­ner­ships, pro­vide the most com­plete pic­ture yet of the social network’s data-shar­ing prac­tices. They also under­score how per­son­al data has become the most prized com­mod­i­ty of the dig­i­tal age, trad­ed on a vast scale by some of the most pow­er­ful com­pa­nies in Sil­i­con Val­ley and beyond.

    The exchange was intend­ed to ben­e­fit every­one. Push­ing for explo­sive growth, Face­book got more users, lift­ing its adver­tis­ing rev­enue. Part­ner com­pa­nies acquired fea­tures to make their prod­ucts more attrac­tive. Face­book users con­nect­ed with friends across dif­fer­ent devices and web­sites. But Face­book also assumed extra­or­di­nary pow­er over the per­son­al infor­ma­tion of its 2.2 bil­lion users — con­trol it has wield­ed with lit­tle trans­paren­cy or out­side over­sight.
    ...

    It’s one of the grand ironies of the situation: there’s clearly a massive data privacy disaster that’s been going on for years, but Facebook’s secrecy regarding its data sharing policies is preventing us from knowing the full scope of that disaster.

    And yet Facebook assures us that none of these data sharing agreements violated user privacy at all and, more importantly for Facebook, that none of these arrangements violated the 2011 consent agreement Facebook signed with the US Federal Trade Commission (FTC) after previous privacy violations were discovered. That’s the consent agreement Facebook had to sign after it suddenly changed the privacy settings of 400 million users in 2009, made some of their information accessible to the entire internet, and started sharing information like user locations and religious and political leanings with companies like Microsoft. Facebook dubbed this scheme “instant personalization” and touted it under the banner of making the internet experience better. But after the FTC concluded this policy change was a deceptive practice, Facebook agreed to sign the consent decree, agreeing to introduce a “comprehensive privacy program” that involved reviewing new products and features. Facebook also hired PricewaterhouseCoopers to audit its privacy policies every two years. So the current privacy disaster, which started in 2010, began shortly before the 2011 consent agreement Facebook was forced to enter into after its 2009 “instant personalization” privacy disaster:

    ...
    But the doc­u­ments, as well as inter­views with about 50 for­mer employ­ees of Face­book and its cor­po­rate part­ners, reveal that Face­book allowed cer­tain com­pa­nies access to data despite those pro­tec­tions. They also raise ques­tions about whether Face­book ran afoul of a 2011 con­sent agree­ment with the Fed­er­al Trade Com­mis­sion that barred the social net­work from shar­ing user data with­out explic­it per­mis­sion.

    ...

    400 Mil­lion Exposed

    Unlike Europe, where social media com­pa­nies have had to adapt to stricter reg­u­la­tion, the Unit­ed States has no gen­er­al con­sumer pri­va­cy law, leav­ing tech com­pa­nies free to mon­e­tize most kinds of per­son­al infor­ma­tion as long as they don’t mis­lead their users. The F.T.C., which reg­u­lates trade, can bring enforce­ment actions against com­pa­nies that deceive their cus­tomers.

    Besides Face­book, the F.T.C. has con­sent agree­ments with Google and Twit­ter stem­ming from alleged pri­va­cy vio­la­tions.

    Facebook’s agree­ment with reg­u­la­tors is a result of the company’s ear­ly exper­i­ments with data shar­ing. In late 2009, it changed the pri­va­cy set­tings of the 400 mil­lion peo­ple then using the ser­vice, mak­ing some of their infor­ma­tion acces­si­ble to all of the inter­net. Then it shared that infor­ma­tion, includ­ing users’ loca­tions and reli­gious and polit­i­cal lean­ings, with Microsoft and oth­er part­ners.

    Face­book called this “instant per­son­al­iza­tion” and pro­mot­ed it as a step toward a bet­ter inter­net, where oth­er com­pa­nies would use the infor­ma­tion to cus­tomize what peo­ple saw on sites like Bing. But the fea­ture drew com­plaints from pri­va­cy advo­cates and many Face­book users that the social net­work had shared the infor­ma­tion with­out per­mis­sion.

    The F.T.C. inves­ti­gat­ed and in 2011 cit­ed the pri­va­cy changes as a decep­tive prac­tice. Caught off guard, Face­book offi­cials stopped men­tion­ing instant per­son­al­iza­tion in pub­lic and entered into the con­sent agree­ment.

    Under the decree, the social net­work intro­duced a “com­pre­hen­sive pri­va­cy pro­gram” charged with review­ing new prod­ucts and fea­tures. It was ini­tial­ly over­seen by two chief pri­va­cy offi­cers, their lofty title an appar­ent sign of Facebook’s com­mit­ment. The com­pa­ny also hired Price­wa­ter­house­C­oop­ers to assess its pri­va­cy prac­tices every two years.
    ...

    And, of course, the “comprehensive privacy program” that Facebook agreed to as part of the 2011 consent agreement was basically a farce. There was immediate resistance from within Facebook. But, more importantly, Facebook found a loophole. A farcical, subjective loophole, but a loophole nonetheless: Facebook’s executives convinced themselves that these ‘special relationships’ with more than 150 companies didn’t fall under the comprehensive privacy program because they were already governed by business contracts requiring them to follow Facebook’s data policies. The fact that Facebook’s data policies are the very problem the “comprehensive privacy program” was supposed to address doesn’t appear to have been an issue for these Facebook executives. And Facebook assures us that the privacy team was indeed reviewing these special relationships. But the level of review “depended on the specific partnership and the time it was created.” So Facebook was determining on a case-by-case basis what kind of privacy reviews the privacy team could make, which, at this point, implies that the special agreements involving the biggest privacy violations probably got the weakest reviews:

    ...
    But the pri­va­cy pro­gram faced some inter­nal resis­tance from the start, accord­ing to four for­mer Face­book employ­ees with direct knowl­edge of the company’s efforts. Some engi­neers and exec­u­tives, they said, con­sid­ered the pri­va­cy reviews an imped­i­ment to quick inno­va­tion and growth. And the core team respon­si­ble for coor­di­nat­ing the reviews — num­ber­ing about a dozen peo­ple by 2016 — was moved around with­in Facebook’s sprawl­ing orga­ni­za­tion, send­ing mixed sig­nals about how seri­ous­ly the com­pa­ny took it, the ex-employ­ees said.

    Crit­i­cal­ly, many of Facebook’s spe­cial shar­ing part­ner­ships were not sub­ject to exten­sive pri­va­cy pro­gram reviews, two of the for­mer employ­ees said. Exec­u­tives believed that because the part­ner­ships were gov­erned by busi­ness con­tracts requir­ing them to fol­low Face­book data poli­cies, they did not require the same lev­el of scruti­ny. The pri­va­cy team had lim­it­ed abil­i­ty to review or sug­gest changes to some of those data-shar­ing agree­ments, which had been nego­ti­at­ed by more senior offi­cials at the com­pa­ny.

    Face­book offi­cials said that mem­bers of the pri­va­cy team had been con­sult­ed on the shar­ing agree­ments, but that the lev­el of review “depend­ed on the spe­cif­ic part­ner­ship and the time it was cre­at­ed.”
    ...

    Keep in mind that Facebook started these special arrangements in 2010, before the 2011 consent agreement. So it would be interesting to know whether the special agreements made in 2010 got the weakest privacy reviews because they involved the greatest privacy violations.

    We also learn that this special exemption from the “comprehensive privacy program” covers the special data sharing arrangements Facebook had with the 60+ device makers that we learned about this summer, arrangements Facebook justified by arguing that the device makers are “extensions of Facebook.” The exemption from the comprehensive privacy program also appears to exempt Facebook from having to get user consent before sharing the data. So the special arrangement the device makers got from Facebook, where they were treated as extensions of Facebook, wasn’t limited to device makers. Device makers were just one example of the kinds of businesses that Facebook considered an extension of itself. It’s all very special:

    ...
    With most of the part­ner­ships, Mr. Sat­ter­field said, the F.T.C. agree­ment did not require the social net­work to secure users’ con­sent before shar­ing data because Face­book con­sid­ered the part­ners exten­sions of itself — ser­vice providers that allowed users to inter­act with their Face­book friends. The part­ners were pro­hib­it­ed from using the per­son­al infor­ma­tion for oth­er pur­pos­es, he said. “Facebook’s part­ners don’t get to ignore people’s pri­va­cy set­tings.”

    ...

    Com­pli­ance Ques­tions

    For some advo­cates, the tor­rent of user data flow­ing out of Face­book has called into ques­tion not only Facebook’s com­pli­ance with the F.T.C. agree­ment, but also the agency’s approach to pri­va­cy reg­u­la­tion.

    “There has been an end­less bar­rage of how Face­book has ignored users’ pri­va­cy set­tings, and we tru­ly believed that in 2011 we had solved this prob­lem,” said Marc Roten­berg, head of the Elec­tron­ic Pri­va­cy Infor­ma­tion Cen­ter, an online pri­va­cy group that filed one of the first com­plaints about Face­book with fed­er­al reg­u­la­tors. “We brought Face­book under the reg­u­la­to­ry author­i­ty of the F.T.C. after a tremen­dous amount of work. The F.T.C. has failed to act.”

    Accord­ing to Face­book, most of its data part­ner­ships fall under an exemp­tion to the F.T.C. agree­ment. The com­pa­ny argues that the part­ner com­pa­nies are ser­vice providers — com­pa­nies that use the data only “for and at the direc­tion of” Face­book and func­tion as an exten­sion of the social net­work.

    But Mr. Vladeck and oth­er for­mer F.T.C. offi­cials said that Face­book was inter­pret­ing the exemp­tion too broad­ly. They said the pro­vi­sion was intend­ed to allow Face­book to per­form the same every­day func­tions as oth­er com­pa­nies, such as send­ing and receiv­ing infor­ma­tion over the inter­net or pro­cess­ing cred­it card trans­ac­tions, with­out vio­lat­ing the con­sent decree.

    When The Times report­ed last sum­mer on the part­ner­ships with device mak­ers, Face­book used the term “inte­gra­tion part­ners” to describe Black­Ber­ry, Huawei and oth­er man­u­fac­tur­ers that pulled Face­book data to pro­vide social-media-style fea­tures on smart­phones. All such inte­gra­tion part­ners, Face­book assert­ed, were cov­ered by the ser­vice provider exemp­tion.

    Since then, as the social net­work has dis­closed its data shar­ing deals with oth­er kinds of busi­ness­es — includ­ing inter­net com­pa­nies such as Yahoo — Face­book has labeled them inte­gra­tion part­ners, too.
    ...

    As we should expect, data pri­va­cy experts dis­agree with Face­book’s cav­a­lier inter­pre­ta­tion of what can fall out­side of its con­sent agree­ment with the FTC. And those experts include for­mer employ­ees of the FTC’s con­sumer pro­tec­tion divi­sion:

    ...
    Data pri­va­cy experts dis­put­ed Facebook’s asser­tion that most part­ner­ships were exempt­ed from the reg­u­la­to­ry require­ments, express­ing skep­ti­cism that busi­ness­es as var­ied as device mak­ers, retail­ers and search com­pa­nies would be viewed alike by the agency. “The only com­mon theme is that they are part­ner­ships that would ben­e­fit the com­pa­ny in terms of devel­op­ment or growth into an area that they oth­er­wise could not get access to,” said Ashkan Soltani, for­mer chief tech­nol­o­gist at the F.T.C.

    Mr. Soltani and three for­mer employ­ees of the F.T.C.’s con­sumer pro­tec­tion divi­sion, which brought the case that led to the con­sent decree, said in inter­views that its data-shar­ing deals had prob­a­bly vio­lat­ed the agree­ment.

    “This is just giv­ing third par­ties per­mis­sion to har­vest data with­out you being informed of it or giv­ing con­sent to it,” said David Vladeck, who for­mer­ly ran the F.T.C.’s con­sumer pro­tec­tion bureau. “I don’t under­stand how this uncon­sent­ed-to data har­vest­ing can at all be jus­ti­fied under the con­sent decree.”

    ...

    “I don’t believe it is legit­i­mate to enter into data-shar­ing part­ner­ships where there is not pri­or informed con­sent from the user,” said Roger McNamee, an ear­ly investor in Face­book. “No one should trust Face­book until they change their busi­ness mod­el.”
    ...

    “No one should trust Face­book until they change their busi­ness mod­el.” Good advice.

    Beyond that, we learn that Facebook allowed Apple to hide from Facebook users the fact that their Apple devices were asking for data, and that Apple devices still had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing. It highlights how this isn’t just a Facebook scandal. All of those partners, especially the device manufacturers that hid the sharing of this data, are also implicated in this:

    ...
    Facebook’s inter­nal records also revealed more about the extent of shar­ing deals with over 60 mak­ers of smart­phones, tablets and oth­er devices, agree­ments first report­ed by The Times in June.

    Face­book empow­ered Apple to hide from Face­book users all indi­ca­tors that its devices were ask­ing for data. Apple devices also had access to the con­tact num­bers and cal­en­dar entries of peo­ple who had changed their account set­tings to dis­able all shar­ing, the records show.

    Apple offi­cials said they were not aware that Face­book had grant­ed its devices any spe­cial access. They added that any shared data remained on the devices and was not avail­able to any­one oth­er than the users.

    Face­book offi­cials said the com­pa­ny had dis­closed its shar­ing deals in its pri­va­cy pol­i­cy since 2010. But the lan­guage in the pol­i­cy about its ser­vice providers does not spec­i­fy what data Face­book shares, and with which com­pa­nies. Mr. Sat­ter­field, Facebook’s pri­va­cy direc­tor, also said its part­ners were sub­ject to “rig­or­ous con­trols.”
    ...

    And as we’ve learned over and over in these Facebook data sharing scandals, while Facebook claimed to have overhauled its data sharing policies in 2014, the exception was the rule, in the sense that a large number of the biggest companies were getting exceptions to those rule changes. In addition to phasing out the “friends permissions” feature in 2014 — the feature at the heart of the Cambridge Analytica scandal that allowed the company to scoop up profile data on ~87 million Facebook users who were “friends” with the ~270,000 people who actually downloaded the Cambridge Analytica app — the “instant personalization” feature was also phased out that year. Except it wasn’t phased out for companies like Pandora, Rotten Tomatoes, Sony, Amazon, Yahoo, and Microsoft’s search engine Bing. And Microsoft admits that Bing was using this data to build profiles of Facebook users, which is a clear admission that this information wasn’t simply being used to provide some sort of service for Facebook users:

    ...
    Some of the access deals described in the doc­u­ments were lim­it­ed to shar­ing non-iden­ti­fy­ing infor­ma­tion with research firms or enabling game mak­ers to accom­mo­date huge num­bers of play­ers. These raised no pri­va­cy con­cerns. But agree­ments with about a dozen com­pa­nies did. Some enabled part­ners to see users’ con­tact infor­ma­tion through their friends — even after the social net­work, respond­ing to com­plaints, said in 2014 that it was strip­ping all appli­ca­tions of that pow­er.

    ...

    In 2014, Face­book end­ed instant per­son­al­iza­tion and walled off access to friends’ infor­ma­tion. But in a pre­vi­ous­ly unre­port­ed agree­ment, the social network’s engi­neers con­tin­ued allow­ing Bing; Pan­do­ra, the music stream­ing ser­vice; and Rot­ten Toma­toes, the movie and tele­vi­sion review site, access to much of the data they had got­ten for the dis­con­tin­ued fea­ture. Bing had access to the infor­ma­tion through last year, the records show, and the two oth­er com­pa­nies did as of late sum­mer, accord­ing to tests by The Times.

    Face­book offi­cials said the data shar­ing did not vio­late users’ pri­va­cy because it allowed access only to pub­lic data — though that includ­ed data that the social net­work had made pub­lic in 2009. They added that the social net­work made a mis­take in allow­ing the access to con­tin­ue for the three com­pa­nies, but declined to elab­o­rate. Spokes­women for Pan­do­ra and Rot­ten Toma­toes said the busi­ness­es were not aware of any spe­cial access.

    Face­book also declined to dis­cuss the oth­er capa­bil­i­ties Bing was giv­en, includ­ing the abil­i­ty to see all users’ friends.

    Microsoft offi­cials said that Bing was using the data to build pro­files of Face­book users on Microsoft servers. They declined to pro­vide details, oth­er than to say the infor­ma­tion was used in “fea­ture devel­op­ment” and not for adver­tis­ing. Microsoft has since delet­ed the data, the offi­cials said.

    ...

    The social net­work per­mit­ted Ama­zon to obtain users’ names and con­tact infor­ma­tion through their friends, and it let Yahoo view streams of friends’ posts as recent­ly as this sum­mer, despite pub­lic state­ments that it had stopped that type of shar­ing years ear­li­er.

    ...

    As of 2017, Sony, Microsoft, Ama­zon and oth­ers could obtain users’ email address­es through their friends.
    ...

    And while large companies like Microsoft were given ongoing access to the names and email addresses of Facebook users’ friends, far more scandalous is the revelation that Netflix, Spotify, and the Royal Bank of Canada were given the ability to read Facebook users’ private messages, which is arguably the biggest Facebook privacy violation we know of so far. The justification for this wild privacy violation was that Facebook’s private messaging features could be incorporated into these companies’ own websites and apps, and yet Netflix and the Royal Bank of Canada kept this access to private messages even after they stopped offering the features that used it:

    ...
    Face­book allowed Microsoft’s Bing search engine to see the names of vir­tu­al­ly all Face­book users’ friends with­out con­sent, the records show, and gave Net­flix and Spo­ti­fy the abil­i­ty to read Face­book users’ pri­vate mes­sages.

    ...

    Face­book also allowed Spo­ti­fy, Net­flix and the Roy­al Bank of Cana­da to read, write and delete users’ pri­vate mes­sages, and to see all par­tic­i­pants on a thread — priv­i­leges that appeared to go beyond what the com­pa­nies need­ed to inte­grate Face­book into their sys­tems, the records show. Face­book acknowl­edged that it did not con­sid­er any of those three com­pa­nies to be ser­vice providers. Spokes­peo­ple for Spo­ti­fy and Net­flix said those com­pa­nies were unaware of the broad pow­ers Face­book had grant­ed them. A spokesman for Net­flix said Wednes­day that it had used the access only to enable cus­tomers to rec­om­mend TV shows and movies to their friends.

    “Beyond these rec­om­men­da­tions, we nev­er accessed anyone’s per­son­al mes­sages and would nev­er do that,” he said.

    A Roy­al Bank of Cana­da spokesman dis­put­ed that the bank had had any such access. (Aspects of some shar­ing part­ner­ships, includ­ing those with the Roy­al Bank of Cana­da and Bing, were first report­ed by The Wall Street Jour­nal.)

    Spo­ti­fy, which could view mes­sages of more than 70 mil­lion users a month, still offers the option to share music through Face­book Mes­sen­ger. But Net­flix and the Cana­di­an bank no longer need­ed access to mes­sages because they had deac­ti­vat­ed fea­tures that incor­po­rat­ed it.

    ...

    Face­book has found no evi­dence of abuse by its part­ners, a spokes­woman said. Some of the largest part­ners, includ­ing Ama­zon, Microsoft and Yahoo, said they had used the data appro­pri­ate­ly, but declined to dis­cuss the shar­ing deals in detail. Face­book did say that it had mis­man­aged some of its part­ner­ships, allow­ing cer­tain com­pa­nies’ access to con­tin­ue long after they had shut down the fea­tures that required the data.
    ...

    So Face­book was lit­er­al­ly giv­ing large com­pa­nies access to read Face­book users’ pri­vate mes­sages. For years. Even after these com­pa­nies stopped offer­ing the fea­tures that osten­si­bly used that data. It’s kind of amaz­ing Face­book even con­sid­ered this move, let alone exe­cut­ed it.

    But don’t for­get that we’ve already learned that Black­ber­ry was appar­ent­ly scoop­ing up pri­vate mes­sages as part of the per­mis­sions it got as a device mak­er. And app devel­op­ers were also poten­tial­ly giv­en this per­mis­sion, includ­ing the devel­op­er of the Cam­bridge Ana­lyt­i­ca app. This is just the lat­est update in an ongo­ing pri­vate mes­sages scan­dal.

    But then we learn that Face­book was­n’t just giv­ing away data to all of these large com­pa­nies. Face­book was also col­lect­ing data from them and using that data for creepy fea­tures like the “Peo­ple You May Know” friends sug­ges­tion fea­ture:

    ...
    Face­book began form­ing data part­ner­ships when it was still a rel­a­tive­ly young com­pa­ny. Mr. Zucker­berg was deter­mined to weave Facebook’s ser­vices into oth­er sites and plat­forms, believ­ing it would stave off obso­les­cence and insu­late Face­book from com­pe­ti­tion. Every cor­po­rate part­ner that inte­grat­ed Face­book data into its online prod­ucts helped dri­ve the platform’s expan­sion, bring­ing in new users, spurring them to spend more time on Face­book and dri­ving up adver­tis­ing rev­enue. At the same time, Face­book got crit­i­cal data back from its part­ners.

    ...

    The Times reviewed more than 270 pages of reports generated by the system — records that reflect just a portion of Facebook’s wide-ranging deals. Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”

    The fea­ture, intro­duced in 2008, con­tin­ues even though some Face­book users have object­ed to it, unset­tled by its knowl­edge of their real-world rela­tion­ships. Giz­mo­do and oth­er news out­lets have report­ed cas­es of the tool’s rec­om­mend­ing friend con­nec­tions between patients of the same psy­chi­a­trist, estranged fam­i­ly mem­bers, and a harass­er and his vic­tim.

    Face­book, in turn, used con­tact lists from the part­ners, includ­ing Ama­zon, Yahoo and the Chi­nese com­pa­ny Huawei — which has been flagged as a secu­ri­ty threat by Amer­i­can intel­li­gence offi­cials — to gain deep­er insight into people’s rela­tion­ships and sug­gest more con­nec­tions, the records show.
    ...
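
    To make the mechanics concrete before turning to the FTC angle: a friend-suggestion tool fed by partner contact lists can be as simple as counting how often two people show up in the same uploaded address book. Here’s a minimal sketch of that idea; the data, the email addresses, and the scoring rule are all made up for illustration, and this is not a claim about how “People You May Know” actually works:

        from collections import Counter

        # Hypothetical address books obtained from partners. Toy data only.
        contact_books = [
            ["alice@x.com", "bob@x.com", "carol@x.com"],
            ["alice@x.com", "bob@x.com", "dave@x.com"],
            ["bob@x.com", "carol@x.com", "dave@x.com"],
        ]

        def suggest_friends(target, books):
            """Rank people who co-occur with `target` across contact lists."""
            scores = Counter()
            for book in books:
                if target in book:
                    for other in book:
                        if other != target:
                            scores[other] += 1  # one shared address book = one signal
            return scores.most_common()

        print(suggest_friends("alice@x.com", contact_books))
        # [('bob@x.com', 2), ('carol@x.com', 1), ('dave@x.com', 1)]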

    So what is the FTC going to do about this ever growing scandal? Well, that’s unclear, because it appears that the FTC isn’t actually directly involved in the oversight of Facebook’s data privacy policies! That task has been effectively outsourced to PricewaterhouseCoopers. And Facebook pays for and largely dictates the scope of these audits, which are limited mostly to documenting that Facebook has conducted the internal privacy reviews it claims it had. So Facebook signs a consent decree in 2011 with the FTC, and the enforcement of that decree largely came down to Facebook paying PricewaterhouseCoopers to confirm that Facebook was conducting the internal privacy reviews it said it was conducting. And, of course, the key lesson we’ve learned from this report is that Facebook internally decided that its special data sharing arrangements with all of these large companies weren’t actually subject to the 2011 consent agreement. So it’s not just a scandal involving Facebook and its various data sharing partners. The FTC and PricewaterhouseCoopers are also part of the scandal:

    ...
    Pam Dixon, exec­u­tive direc­tor of the World Pri­va­cy Forum, a non­prof­it pri­va­cy research group, said that Face­book would have lit­tle pow­er over what hap­pens to users’ infor­ma­tion after shar­ing it broad­ly. “It trav­els,” Ms. Dixon said. “It could be cus­tomized. It could be fed into an algo­rithm and deci­sions could be made about you based on that data.”

    ...

    An F.T.C. spokes­woman declined to com­ment on whether the com­mis­sion agreed with Facebook’s inter­pre­ta­tion of the ser­vice provider excep­tion, which is like­ly to fig­ure in the F.T.C.’s ongo­ing Face­book inves­ti­ga­tion. She also declined to say whether the com­mis­sion had ever received a com­plete list of part­ners that Face­book con­sid­ered ser­vice providers.

    But fed­er­al reg­u­la­tors had rea­son to know about the part­ner­ships — and to ques­tion whether Face­book was ade­quate­ly safe­guard­ing users’ pri­va­cy. Accord­ing to a let­ter that Face­book sent this fall to Sen­a­tor Ron Wyden, the Ore­gon Demo­c­rat, Price­wa­ter­house­C­oop­ers reviewed at least some of Facebook’s data part­ner­ships.

    The first assess­ment, sent to the F.T.C. in 2013, found only “lim­it­ed” evi­dence that Face­book had mon­i­tored those part­ners’ use of data. The find­ing was redact­ed from a pub­lic copy of the assess­ment, which gave Facebook’s pri­va­cy pro­gram a pass­ing grade over all.

    Mr. Wyden and oth­er crit­ics have ques­tioned whether the assess­ments — in which the F.T.C. essen­tial­ly out­sources much of its day-to-day over­sight to com­pa­nies like Price­wa­ter­house­C­oop­ers — are effec­tive. As with oth­er busi­ness­es under con­sent agree­ments with the F.T.C., Face­book pays for and large­ly dic­tat­ed the scope of its assess­ments, which are lim­it­ed most­ly to doc­u­ment­ing that Face­book has con­duct­ed the inter­nal pri­va­cy reviews it claims it had.

    How close­ly Face­book mon­i­tored its data part­ners is uncer­tain. Most of Facebook’s part­ners declined to dis­cuss what kind of reviews or audits Face­book sub­ject­ed them to. Two for­mer Face­book part­ners, whose deals with the social net­work dat­ed to 2010, said they could find no evi­dence that Face­book had ever audit­ed them. One was Black­Ber­ry. The oth­er was Yan­dex.
    ...

    So how many of these spe­cial arrange­ments are still in place today? We don’t get to know and Face­book isn’t telling. We are only told that Face­book is cur­rent­ly ‘in the process of wind­ing many of them down’:

    ...
    In an inter­view, Steve Sat­ter­field, Facebook’s direc­tor of pri­va­cy and pub­lic pol­i­cy, said none of the part­ner­ships vio­lat­ed users’ pri­va­cy or the F.T.C. agree­ment. Con­tracts required the com­pa­nies to abide by Face­book poli­cies, he added.

    Still, Face­book exec­u­tives have acknowl­edged mis­steps over the past year. “We know we’ve got work to do to regain people’s trust,” Mr. Sat­ter­field said. “Pro­tect­ing people’s infor­ma­tion requires stronger teams, bet­ter tech­nol­o­gy and clear­er poli­cies, and that’s where we’ve been focused for most of 2018.” He said that the part­ner­ships were “one area of focus” and that Face­book was in the process of wind­ing many of them down.
    ...

    Yes, Face­book would like to assure us that this time it’s actu­al­ly end­ing these data shar­ing arrange­ments. For real! Trust us!

    Beyond that, as the following article makes clear, Facebook wants to assure us that these kinds of secret data sharing agreements were actually very clear to users. That’s the explanation Ime Archibong, Facebook’s vice president of product partnerships, gave on the company’s blog following the above report, in response to the public outcry over the revelation that companies like Netflix, Spotify, and the Royal Bank of Canada were given read/write/delete privileges to private messages. According to Archibong, when Facebook users signed into services like Netflix or Spotify using their Facebook login, they were effectively giving permission for this private message data sharing.

    Keep in mind one of the key revelations in the above article: Facebook concluded that it didn’t actually need to get users’ permission for these data sharing agreements, or even inform users they were happening, because Facebook determined that these third parties, like device manufacturers, were effectively extensions of Facebook. So now that there’s an outcry, we are told by Facebook that, actually, users were giving their permission for these data sharing arrangements when they used features like the ‘log in with Facebook’ feature.

    Also keep in mind that offering the option of signing into an online service using your Facebook login is a popular and nearly ubiquitous feature on the internet these days. The above report covered Facebook’s data sharing arrangements with more than 150 companies, but there are a lot more than 150 companies that use the ‘log in with Facebook’ option. So if Facebook is quietly bundling all sorts of user permissions into the use of these ‘log in with Facebook’ options, there are probably all sorts of data sharing arrangements with other websites that we have yet to learn about.
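
    For readers unfamiliar with how a login button can double as a data-sharing grant: ‘log in with Facebook’-style buttons are built on OAuth 2.0, where the app lists the permission “scopes” it wants as part of the login redirect. The sketch below assembles a generic OAuth-style authorization URL; the endpoint, client ID, and scope names are hypothetical placeholders rather than Facebook’s real values, but they show how extra permissions can ride along with a simple sign-in:

        from urllib.parse import urlencode

        # Hypothetical OAuth 2.0 authorization request. The endpoint and
        # scope names are illustrative placeholders, not a real API.
        AUTH_ENDPOINT = "https://social-network.example/oauth/authorize"

        def build_login_url(client_id, redirect_uri, scopes):
            """Assemble the URL a 'Log in with...' button sends the user to."""
            params = {
                "client_id": client_id,
                "redirect_uri": redirect_uri,
                "response_type": "code",
                # Every scope listed here is granted in one click if the
                # user accepts the prompt: this is where bundling happens.
                "scope": " ".join(scopes),
            }
            return AUTH_ENDPOINT + "?" + urlencode(params)

        # A user who just wants to sign in may also be granting message
        # access if the app quietly asks for a broader scope list:
        print(build_login_url(
            "partner-app-123",
            "https://partner.example/callback",
            ["public_profile", "friends_list", "read_private_messages"],
        ))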

    Amusingly, Archibong points to the language used by the Royal Bank of Canada’s 2013 press release about the new Facebook features being integrated into its app — which had private message read/write/delete privileges — as an example of how it was clear to users that the Royal Bank of Canada was going to get access to this kind of information. But as the article points out, that press release language doesn’t mention anything about the need to read and delete Facebook messages, and the words “private” and “privacy” appear nowhere in the press release.

    Archibong also disputes the idea that Facebook was “shipping” users’ private messages to companies, clarifying that it was actually an automated process. Apparently having this system work in an automated way (which is the only way it would realistically work anyway) is supposed to make it ok.

    Finally, the article notes that in the above New York Times report, the companies listed as having private message access all deny that they used this access or even knew about it. And yet Archibong asserts that “We worked with them to build messaging integrations into their apps so people could send messages to their Facebook friends.” It’s another reminder that while this is a Facebook-centric scandal, it involves a lot more companies than just Facebook. And the more we learn about how Facebook was using the ‘log in with Facebook’ feature as a means of getting users to agree to data sharing agreements, the more companies this scandal is going to involve:

    NBC News

    Face­book tries to explain why com­pa­nies could erase your mes­sages
    “In the past day, we’ve been accused of dis­clos­ing peo­ple’s pri­vate mes­sages to part­ners with­out their knowl­edge,” Face­book said. “That’s not true.”

    Dec. 20, 2018 / 12:30 AM CST
    By Alex John­son

    Face­book Inc. took a sec­ond stab at con­vinc­ing its 2.3 bil­lion users that it did­n’t allow more than 150 oth­er com­pa­nies to mis­use their per­son­al data on Wednes­day night after its val­u­a­tion fell by more than $28 bil­lion on the stock mar­ket.

    “In the past day, we’ve been accused of dis­clos­ing peo­ple’s pri­vate mes­sages to part­ners with­out their knowl­edge,” Ime Archi­bong, Face­book’s vice pres­i­dent of prod­uct part­ner­ships, said in a post on the com­pa­ny’s blog. “That’s not true — and we want­ed to pro­vide more facts about our mes­sag­ing part­ner­ships.”

    The blog post — the second since The New York Times reported Tuesday that Facebook for many years gave more than 150 companies extensive access to personal data — focused narrowly on the contention in the Times report that emerged as the most controversial: that Facebook gave four companies access to read, write and delete users’ messages.

    Face­book stock fell by more than 7 per­cent Wednes­day in the wake of the Times arti­cle and a fed­er­al law­suit that was sep­a­rate­ly filed against the com­pa­ny over its han­dling of users’ data, wip­ing more than $28 bil­lion off of its mar­ket cap val­u­a­tion.

    “Many news sto­ries imply we were ship­ping over pri­vate mes­sages to part­ners, which is not cor­rect,” Archi­bong said Wednes­day night.

    Instead, he said, the com­pa­nies — Spo­ti­fy Ltd., Net­flix Inc., Drop­box Inc. and the Roy­al Bank of Scot­land — were grant­ed auto­mat­ed access to users’ mes­sages so Face­book users could send Face­book mes­sages to oth­er Face­book users with­out leav­ing the Spo­ti­fy, Net­flix, Drop­box or Roy­al Bank apps.

    The Times report­ed that the Roy­al Bank dis­put­ed that it had access, while Spo­ti­fy and Net­flix said they were unaware that they even had such broad access. “At no time did we access peo­ple’s pri­vate mes­sages on Face­book, or ask for the abil­i­ty to do so,” Net­flix said in a state­ment, while Spo­ti­fy said, “We have no evi­dence that Spo­ti­fy ever accessed users’ pri­vate Face­book mes­sages.”

    Far from being a nefar­i­ous leak­ing of pri­vate data, the read/write/delete access “was the point of this fea­ture,” Archi­bong said.

    “We worked with them to build messaging integrations into their apps so people could send messages to their Facebook friends,” he said.

    “In order for you to write a mes­sage to a Face­book friend from with­in Spo­ti­fy, for instance, we need­ed to give Spo­ti­fy ‘write access.’ For you to be able to read mes­sages back, we need­ed Spo­ti­fy to have ‘read access.’ ‘Delete access’ meant that if you delet­ed a mes­sage from with­in Spo­ti­fy, it would also delete from Face­book,” he said.

    Your per­mis­sion was grant­ed when you signed in to the Spo­ti­fy, Net­flix, Drop­box or Roy­al Bank apps using your Face­book cre­den­tials, accord­ing to Archi­bong, who said those “expe­ri­ences” were pub­licly dis­cussed and “clear to users.”

    As an exam­ple of how the per­mis­sion grants were dis­cussed pub­licly, Archi­bong linked to a press release that the Roy­al Bank of Scot­land issued in 2013 announc­ing its inte­gra­tion of Face­book into the bank’s mon­ey-trans­fer ser­vices.

    “The receiv­er will receive a mes­sage in their Mes­sen­ger inbox, and will be direct­ed to log in to the Finan­cial Insti­tu­tion of their choice to deposit the funds,” the press release says.

    How­ev­er — even though Face­book cit­ed it as an exam­ple of the trans­paren­cy of the data-shar­ing part­ner­ships — the Roy­al Bank’s press release does­n’t say any­thing about its need to read and delete Face­book mes­sages. In fact, the words “pri­vate” and “pri­va­cy” appear nowhere in the bank’s state­ment, which empha­sizes how “seam­less­ly inte­grat­ed” Face­book and the bank’s ser­vices were.

    Archi­bong did­n’t address The Times’ oth­er dis­clo­sure about Face­book’s agree­ments with the four com­pa­nies — that they could also see the iden­ti­ties of all of the par­tic­i­pants in a Face­book user’s mes­sag­ing threads, which it described as “priv­i­leges that appeared to go beyond what the com­pa­nies need­ed to inte­grate Face­book into their sys­tems.”

    Archi­bong said that “these part­ner­ships were agreed via exten­sive nego­ti­a­tions and doc­u­men­ta­tion, detail­ing how the third par­ty would use the API, and what data they could and could­n’t access.” (“API” is short for “appli­ca­tion pro­gram inter­face” — the set of tools and rules that com­pa­nies use to reg­u­late how dif­fer­ent kinds of soft­ware inter­act with each oth­er.)

    “No third par­ty was read­ing your pri­vate mes­sages, or writ­ing mes­sages to your friends with­out your per­mis­sion,” Archi­bong stressed Wednes­day night.

    Face­book has said that per­mis­sion is grant­ed when users sign in to the third-par­ty ser­vice using their Face­book cre­den­tials. Nei­ther Archi­bong nor, in a sep­a­rate state­ment late Tues­day, Kon­stan­ti­nos Papamil­tiadis, Face­book’s direc­tor of devel­op­er plat­forms and pro­grams, sug­gest­ed that Face­book had any data indi­cat­ing how many Face­book users actu­al­ly knew that.

    ...

    ———-

    “Face­book tries to explain why com­pa­nies could erase your mes­sages” By Alex John­son; NBC News; 12/20/2018

    ““In the past day, we’ve been accused of dis­clos­ing peo­ple’s pri­vate mes­sages to part­ners with­out their knowl­edge,” Ime Archi­bong, Face­book’s vice pres­i­dent of prod­uct part­ner­ships, said in a post on the com­pa­ny’s blog. “That’s not true — and we want­ed to pro­vide more facts about our mes­sag­ing part­ner­ships.””

    That had to be a fun blog post for Facebook’s vice president of product partnerships to write. And in this blog post, he basically tells Facebook users that they did in fact have knowledge that Facebook was sending these companies private messages. Or at least that they should have known, because they granted those permissions when they signed in to the Spotify, Netflix, Dropbox or Royal Bank apps with their Facebook credentials:

    ...
    Your per­mis­sion was grant­ed when you signed in to the Spo­ti­fy, Net­flix, Drop­box or Roy­al Bank apps using your Face­book cre­den­tials, accord­ing to Archi­bong, who said those “expe­ri­ences” were pub­licly dis­cussed and “clear to users.”

    ...

    “No third par­ty was read­ing your pri­vate mes­sages, or writ­ing mes­sages to your friends with­out your per­mis­sion,” Archi­bong stressed Wednes­day night.

    Face­book has said that per­mis­sion is grant­ed when users sign in to the third-par­ty ser­vice using their Face­book cre­den­tials. Nei­ther Archi­bong nor, in a sep­a­rate state­ment late Tues­day, Kon­stan­ti­nos Papamil­tiadis, Face­book’s direc­tor of devel­op­er plat­forms and pro­grams, sug­gest­ed that Face­book had any data indi­cat­ing how many Face­book users actu­al­ly knew that.
    ...

    “No third par­ty was read­ing your pri­vate mes­sages, or writ­ing mes­sages to your friends with­out your per­mis­sion,” Archi­bong stressed Wednes­day night.

    You, the Facebook user, clearly gave your permission for Spotify, Netflix, Dropbox and the Royal Bank to get access to your private messages. That appears to be the official line coming out of Facebook. And as evidence of how users were made well aware of the private message sharing arrangements, Archibong points to the Royal Bank’s press release (as if a press release is a meaningful way of informing users), and yet that press release in no way makes it clear:

    ...
    As an exam­ple of how the per­mis­sion grants were dis­cussed pub­licly, Archi­bong linked to a press release that the Roy­al Bank of Scot­land issued in 2013 announc­ing its inte­gra­tion of Face­book into the bank’s mon­ey-trans­fer ser­vices.

    “The receiv­er will receive a mes­sage in their Mes­sen­ger inbox, and will be direct­ed to log in to the Finan­cial Insti­tu­tion of their choice to deposit the funds,” the press release says.

    How­ev­er — even though Face­book cit­ed it as an exam­ple of the trans­paren­cy of the data-shar­ing part­ner­ships — the Roy­al Bank’s press release does­n’t say any­thing about its need to read and delete Face­book mes­sages. In fact, the words “pri­vate” and “pri­va­cy” appear nowhere in the bank’s state­ment, which empha­sizes how “seam­less­ly inte­grat­ed” Face­book and the bank’s ser­vices were.

    ...

    Archibong also sets up a kind of straw man argument to shoot down by disputing the idea that Facebook was “shipping over private messages to partners”, insisting that it was actually all automated. As if that’s a meaningful distinction:

    ...
    “Many news sto­ries imply we were ship­ping over pri­vate mes­sages to part­ners, which is not cor­rect,” Archi­bong said Wednes­day night.

    Instead, he said, the com­pa­nies — Spo­ti­fy Ltd., Net­flix Inc., Drop­box Inc. and the Roy­al Bank of Scot­land — were grant­ed auto­mat­ed access to users’ mes­sages so Face­book users could send Face­book mes­sages to oth­er Face­book users with­out leav­ing the Spo­ti­fy, Net­flix, Drop­box or Roy­al Bank apps.
    ...

    And while Roy­al Bank, Spo­ti­fy, and Net­flix all deny they had any involve­ment in this scan­dal, Archi­bong points out that Face­book “worked with them to build mes­sag­ing inte­gra­tions into their apps”:

    ...
    The Times report­ed that the Roy­al Bank dis­put­ed that it had access, while Spo­ti­fy and Net­flix said they were unaware that they even had such broad access. “At no time did we access peo­ple’s pri­vate mes­sages on Face­book, or ask for the abil­i­ty to do so,” Net­flix said in a state­ment, while Spo­ti­fy said, “We have no evi­dence that Spo­ti­fy ever accessed users’ pri­vate Face­book mes­sages.”

    ...

    Far from being a nefar­i­ous leak­ing of pri­vate data, the read/write/delete access “was the point of this fea­ture,” Archi­bong said.

    “We worked with them to build messaging integrations into their apps so people could send messages to their Facebook friends,” he said.

    “In order for you to write a mes­sage to a Face­book friend from with­in Spo­ti­fy, for instance, we need­ed to give Spo­ti­fy ‘write access.’ For you to be able to read mes­sages back, we need­ed Spo­ti­fy to have ‘read access.’ ‘Delete access’ meant that if you delet­ed a mes­sage from with­in Spo­ti­fy, it would also delete from Face­book,” he said.
    ...

    Don’t for­get what we learned in the pre­vi­ous arti­cle: Face­book con­tin­ued giv­ing Spo­ti­fy and Net­flix access to pri­vate mes­sages even after they removed those fea­tures from their apps.
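
    As a way of picturing what ‘read access’, ‘write access’ and ‘delete access’ amount to in practice, here’s a toy model of a partner-side messaging client. Facebook’s partner APIs aren’t public, so the class and method names below are invented for illustration; the only point is how the three permission tiers Archibong describes map onto operations:

        # Toy model of tiered message permissions. Everything here is
        # hypothetical; it is not a real Facebook API.
        class MessagingPartnerClient:
            def __init__(self, granted_scopes):
                self.granted_scopes = set(granted_scopes)
                self.inbox = []  # stand-in for the user's message thread

            def _require(self, scope):
                if scope not in self.granted_scopes:
                    raise PermissionError("partner was not granted " + scope)

            def read_messages(self):
                self._require("read")   # needed to display replies in the partner app
                return list(self.inbox)

            def send_message(self, text):
                self._require("write")  # needed to send a song/show to a friend
                self.inbox.append(text)

            def delete_message(self, index):
                self._require("delete")  # deleting in the partner app also
                del self.inbox[index]    # deletes on the platform side

        # A partner granted all three tiers can do all three operations,
        # whether or not the user realizes the grant is that broad.
        client = MessagingPartnerClient(["read", "write", "delete"])
        client.send_message("Check out this playlist")
        print(client.read_messages())
        client.delete_message(0)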

    We also learned that Face­book was receiv­ing infor­ma­tion from a num­ber of their data shar­ing part­ners.

    And those fun facts, com­bined with the denials by Spo­ti­fy, Net­flix, and Roy­al Bank, raise an obvi­ous ques­tion: How much worse is this scan­dal going to get? Because it appears that these kinds of mas­sive Face­book data shar­ing prac­tices have been such an open secret for years. An open secret about how peo­ple’s secrets — at least any secrets found in those pri­vate mes­sages — have lit­er­al­ly been open to a large num­ber of com­pa­nies for years.

    Iron­i­cal­ly, while Face­book is try­ing to place the blame for all of this on users — claim­ing that users were, in fact, giv­ing their per­mis­sions for these kinds of pri­va­cy vio­la­tions — that blame-the-users excuse is actu­al­ly going to be a some­what valid excuse going for­ward. Because at this point, after all of these scan­dals and Face­book’s denials, obfus­ca­tions, and attempts to pin the blame on users, if you’re hand­ing your data over to Face­book you real­ly should­n’t expect that data to remain pri­vate. You’ve been warned. Face­book itself may not have direct­ly warned you like they claim, but you’ve def­i­nite­ly been warned.

    Posted by Pterrafractyl | December 26, 2018, 8:46 pm
  28. Birds of a feather tweet together, and tweet very similarly in a predictable manner. That’s the gist of a new study that adds a new dimension to our understanding of the potential impact of the Cambridge Analytica scandal and of the privacy risks involved with social media in general: According to a study conducted by academic researchers, the social media posts of just 8 or 9 of your social media “friends” on Facebook and Twitter can be used to predict with 95 percent accuracy the posts you will make on the social network. Even if you’ve never had a Facebook or Twitter account, they can still predict what you probably would post if you did have an account.

    First, recall the system Cambridge Analytica set up to make inferences about the political views of Facebook users: Cambridge Analytica had ~270,000 people download an app that had them take a psychological profiling quiz. Then, using the extensive data on users that Facebook made available to developers, like the posts people “liked” on Facebook, Cambridge Analytica developed an algorithm for predicting psychological profiles based on people’s “likes”. Then, using the “friends permissions” feature that Facebook offered to developers at that time, which allowed app developers to get similar information about “likes” from all of the Facebook ‘friends’ of the people who downloaded an app, Cambridge Analytica obtained the “likes” of all of the friends of those ~270,000 app users, which totaled around ~87 million Facebook users. Cambridge Analytica then used that “friends permission” data to develop a psychological/political profile on those 87 million users. It was one big example of how the actions of your social media ‘friends’ could end up compromising your privacy.
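
    To see why a couple hundred thousand consenting installers could fan out to tens of millions of harvested profiles, here’s a minimal sketch of that ‘friends permissions’ model. The friend graph and the numbers are made up; the point is simply that each installer exposes every friend, none of whom consented:

        # Toy fan-out under a "friends permissions" model. Made-up graph.
        friends = {
            "installer_1": {"f1", "f2", "f3"},
            "installer_2": {"f3", "f4"},
        }

        installers = ["installer_1", "installer_2"]  # the only people who consented

        harvested = set(installers)
        for person in installers:
            harvested |= friends[person]  # friends are scraped without consent

        print(sorted(harvested))
        # ['f1', 'f2', 'f3', 'f4', 'installer_1', 'installer_2']
        # 2 consents yield 6 profiles; at Facebook scale, ~270,000 consents
        # fanned out to tens of millions of profiles.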

    But this new research takes pri­va­cy risks posed by social media friends to a whole new lev­el. Because it’s not lim­it­ed to your social media friends. It’s about your real life friends too who might hap­pen to have social media pro­files even if you don’t. As long as you have enough real life friends on these social media plat­forms, enti­ties with infor­ma­tion about what 8 or 9 of your friends post will be able to pre­dict what you post and build a pro­file on your likes, inter­ests and per­son­al­i­ty on social media. And as the Cam­bridge Ana­lyt­i­ca scan­dal demon­strat­ed, when you know what some­one posts and likes you can poten­tial­ly make edu­cat­ed guess­es about their psy­chol­o­gy and polit­i­cal views, so this research is show­ing how even peo­ple who stay off of social media can effec­tive­ly be pro­filed in a man­ner sim­i­lar to what Cam­bridge Ana­lyt­i­ca did.

    So if you were hop­ing that not hav­ing a Face­book or Twit­ter account will elim­i­nate the risk of hav­ing Cam­bridge Ana­lyt­i­ca-style psy­cho­log­i­cal pro­fil­ing done on you, think again. Because as long as enough of your friends are on Face­book or Twit­ter, tweet­ing and ‘lik­ing’ away, you can still be pro­filed by the com­pa­ny you keep:

    CNet

    Even if you’re off social media, your friends could be ruin­ing your pri­va­cy

    Social net­works could use friends to pre­dict what a per­son would post, researchers find.

    by Alfred Ng
    Jan­u­ary 22, 2019 1:06 PM PST

    You don’t have to post any­thing for social net­works to learn about you. Your friends are doing all the work already.

    A new study from researchers at the Uni­ver­si­ty of Ver­mont and the Uni­ver­si­ty of Ade­laide found that they could pre­dict a per­son­’s posts on social media with 95 per­cent accu­ra­cy — even if they nev­er had an account to begin with.

    The sci­en­tists got all the infor­ma­tion they need­ed from a per­son­’s friends, using posts from few­er than 10 con­tacts to build a mir­ror image of a per­son not even on the social net­work.

    The study, pub­lished Mon­day in the jour­nal Nature Human Behav­ior, looked at more than 30.8 mil­lion tweets from 13,905 accounts. Using that data, they used machine learn­ing to accu­rate­ly pre­dict what a per­son would post based on what their con­tacts have post­ed.

    So even if you nev­er post­ed on Face­book or Twit­ter, it only takes about eight or nine of your friends to build a pro­file on your likes, inter­ests and per­son­al­i­ty on social media, the researchers found.

    “You alone don’t con­trol your pri­va­cy on social media plat­forms,” Uni­ver­si­ty of Ver­mont pro­fes­sor Jim Bagrow said in a state­ment. “Your friends have a say too.”

    ...

    The research comes as social net­works like Face­book and Twit­ter have expe­ri­enced a back­lash over pri­va­cy con­cerns. As scan­dals like Cam­bridge Ana­lyt­i­ca made head­lines, a move­ment to leave Face­book arose online.

    But the new study sug­gests that even when you delete your social media accounts, if your friends are still there, tech giants are able to build pro­files on you. This is already a con­cern that pri­va­cy advo­cates have about Face­book, called “shad­ow pro­files.”

    In April, Face­book CEO Mark Zucker­berg told law­mak­ers that the social net­work col­lect­ed data on nonusers for “secu­ri­ty pur­pos­es.” That includes peo­ple’s con­tact list when they use Face­book’s mobile app, which the com­pa­ny uses to sug­gest friend rec­om­men­da­tions, it explained.

    In response to the study, a Face­book spokes­woman said the com­pa­ny does­n’t build pro­files on nonusers, even if it’s col­lect­ing data on them.

    “If you aren’t a Face­book user, we can’t iden­ti­fy you based on this infor­ma­tion, or use it to learn who you are,” the com­pa­ny said in a state­ment.

    The study shows there’s only so much you can con­trol in terms of your own pri­va­cy and secu­ri­ty online.

    In mul­ti­ple cas­es over the last year, it was often friends on social net­works that led to data leaks. With Face­book’s Cam­bridge Ana­lyt­i­ca scan­dal, the This is Your Dig­i­tal Life app gave the UK firm infor­ma­tion on not just the per­son that clicked on it, but all of that per­son­’s friends too.

    In Aus­tralia, only 53 peo­ple actu­al­ly clicked on the app, but it har­vest­ed data on 310,000 friends through that ini­tial con­tact.

    The same hap­pened with Face­book’s mas­sive breach affect­ing 29 mil­lion peo­ple. It start­ed with 400,000 peo­ple with­in the hack­ers’ net­work, and quick­ly spread to friends of friends.

    As care­ful as you are online, the study sug­gests that you’re only as pri­vate as your friends have been.

    “There’s no place to hide in a social net­work,” Lewis Mitchell, the study’s co-author and a senior lec­tur­er in applied math­e­mat­ics at the Uni­ver­si­ty of Ade­laide, said in a state­ment.

    ———-

    “Even if you’re off social media, your friends could be ruin­ing your pri­va­cy” by Alfred Ng; CNet; 01/22/2019

    “As care­ful as you are online, the study sug­gests that you’re only as pri­vate as your friends have been.”

    You’re only as pri­vate as your friends have been. More specif­i­cal­ly, you’re only as pri­vate as your least pri­vate 8 or 9 friends have been. Even if you’ve nev­er been on Face­book or Twit­ter. And accord­ing to these researchers, they could build a pro­file on your likes, inter­ests and per­son­al­i­ty on social media. Or pre­dict it if you aren’t on social media yet:

    ...
    A new study from researchers at the Uni­ver­si­ty of Ver­mont and the Uni­ver­si­ty of Ade­laide found that they could pre­dict a per­son­’s posts on social media with 95 per­cent accu­ra­cy — even if they nev­er had an account to begin with.

    The sci­en­tists got all the infor­ma­tion they need­ed from a per­son­’s friends, using posts from few­er than 10 con­tacts to build a mir­ror image of a per­son not even on the social net­work.

    The study, pub­lished Mon­day in the jour­nal Nature Human Behav­ior, looked at more than 30.8 mil­lion tweets from 13,905 accounts. Using that data, they used machine learn­ing to accu­rate­ly pre­dict what a per­son would post based on what their con­tacts have post­ed.

    So even if you nev­er post­ed on Face­book or Twit­ter, it only takes about eight or nine of your friends to build a pro­file on your likes, inter­ests and per­son­al­i­ty on social media, the researchers found.
    ...

    And as the Cambridge Analytica story demonstrated, we are already in a world where the micro-targeting of the masses with disinformation campaigns fueled by profiles of people’s likes, interests and personality on social media is a reality. So if these researchers can do this, we shouldn’t assume they’re the only ones trying to do this kind of thing.
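
    The study itself measured predictability with information-theoretic entropy estimates, but here’s a much cruder stand-in that captures the idea: build a word model from a few friends’ posts and score how “surprising” the target’s post is under it. The posts and the unigram model below are toy assumptions, not the paper’s actual method:

        import math
        from collections import Counter

        # Toy stand-in for the study's entropy analysis. Made-up posts.
        friend_posts = [
            "great rally downtown tonight",
            "rally downtown was great",
            "tonight we march downtown",
        ]
        target_post = "great march downtown tonight"

        counts = Counter(w for post in friend_posts for w in post.split())
        total = sum(counts.values())

        def cross_entropy(text, floor=1e-6):
            """Average surprise (bits per word) under the friends' word model."""
            words = text.split()
            bits = 0.0
            for w in words:
                p = counts.get(w, 0) / total or floor  # floor for unseen words
                bits += -math.log2(p)
            return bits / len(words)

        print(round(cross_entropy(target_post), 2))
        # Low bits-per-word means the friends' posts make the target's
        # post highly predictable; that is the study's core finding.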

    Of course, if companies don’t know who your online friends are, they won’t be able to make these kinds of inferences. But as we’ve seen before, the collection of personal data profiles on individuals is so ubiquitous that Facebook even has “shadow profiles” on non-Facebook users. Shadow profiles that Facebook generates using information from the vast data-brokerage industry, which also has various types of profiles on all of us. Shadow profiles that Facebook was developing, in part, with its policy of grabbing people’s smartphone contact lists whenever people downloaded the Facebook mobile app. And Facebook obviously isn’t the only company developing “shadow profiles”. There are all sorts of versions of profiles on all of us for sale, especially in the US where this industry is barely regulated. So one question raised by this research is the extent to which those shadow profiles on individuals for sale in the data-brokerage industry include information like the identities of your real life friends:

    ...
    But the new study sug­gests that even when you delete your social media accounts, if your friends are still there, tech giants are able to build pro­files on you. This is already a con­cern that pri­va­cy advo­cates have about Face­book, called “shad­ow pro­files.”

    In April, Face­book CEO Mark Zucker­berg told law­mak­ers that the social net­work col­lect­ed data on nonusers for “secu­ri­ty pur­pos­es.” That includes peo­ple’s con­tact list when they use Face­book’s mobile app, which the com­pa­ny uses to sug­gest friend rec­om­men­da­tions, it explained.

    In response to the study, a Face­book spokes­woman said the com­pa­ny does­n’t build pro­files on nonusers, even if it’s col­lect­ing data on them.

    “If you aren’t a Face­book user, we can’t iden­ti­fy you based on this infor­ma­tion, or use it to learn who you are,” the com­pa­ny said in a state­ment.
    ...

    So do those “shad­ow pro­files” avail­able on all of us in the data-bro­ker­age indus­try include infor­ma­tion like who your online friends are in real life? If so, Face­book, and any oth­er enti­ty with access to infor­ma­tion about your friends’ social media activ­i­ty, can apply the same approach these researchers used to make all sorts of edu­cat­ed guess­es about you whether you use Face­book or not.
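
    Here’s a minimal sketch of how such a shadow profile could accrete for someone with no account at all, assuming only that users upload their phone contact books. Every name, field, and email address below is hypothetical:

        from collections import defaultdict

        # Hypothetical contact books uploaded by three users of a mobile app.
        # "dana@x.com" has no account, yet a partial profile emerges anyway.
        uploads = {
            "user_a": [{"email": "dana@x.com", "name": "Dana R.", "phone": "555-0101"}],
            "user_b": [{"email": "dana@x.com", "name": "Dana Roe"}],
            "user_c": [{"email": "dana@x.com", "phone": "555-0101", "city": "Chicago"}],
        }

        registered_users = {"user_a", "user_b", "user_c"}

        shadow = defaultdict(lambda: {"fields": {}, "linked_to": set()})
        for uploader, book in uploads.items():
            for entry in book:
                key = entry["email"]
                if key not in registered_users:          # a non-user: shadow record
                    shadow[key]["fields"].update(entry)  # merge whatever fields we got
                    shadow[key]["linked_to"].add(uploader)  # and note who knows them

        print(shadow["dana@x.com"]["fields"])
        print(shadow["dana@x.com"]["linked_to"])  # the non-user's inferred social circle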

    And when Facebook claims that, “If you aren’t a Facebook user, we can’t identify you based on this information, or use it to learn who you are,” keep in mind that they were responding to the revelations that Facebook is tracking internet users across the web at every site that uses Facebook’s apps. And while it may or may not be true that Facebook doesn’t have the capability to identify the non-Facebook users it was tracking across the web, that’s no reason to assume the information Facebook was collecting on non-Facebook users across the web isn’t really useful for identifying those users when combined with other information. Like the third-party information on you that Facebook, and any other company, can buy commercially in the giant data brokerage industry.

    This is also a good time to remind ourselves about another one of the many Facebook scandals to emerge last year: the revelation that Facebook was giving over 60 device makers — companies like Apple, Amazon, BlackBerry, Microsoft and Samsung — access to extensive data about Facebook users, including lists of all of your Facebook friends if you use Facebook on one of their devices. Some device makers were given the ability to retrieve data like Facebook users’ relationship status, religion, political leaning and upcoming events. And Blackberry was given access to private Facebook direct messages and second-degree Facebook friend lists. So we shouldn’t be surprised if the maker of your smartphone has enough Facebook data to make educated guesses about the views of its customers (the people who buy its devices) and their non-customer friends. For example, Samsung can presumably use the Facebook user data from this device maker data sharing arrangement to build predictive profiles about not just Facebook-using customers but all of their friends too, whether they use Facebook or not and whether they own a Samsung device or not. Samsung would just have to somehow determine who those non-Facebook-using real life friends of its customers are, which is presumably the kind of information available in the large third-party data-brokerage industry.

    Also keep in mind that plenty of other privacy-violating technologies we’re learning about gather the kind of data that could be used to make good guesses about the identities of your real life friends. Remember all those stories about Google secretly using very precise location tracking techniques with Android smartphones? And how about the third-party market for cellphone location data that cellphone companies in the US have been making available? Those seem like the kinds of technologies that will be really useful for making guesses about real-world friends. And what kind of friend-detecting capabilities will Soli — the radar technology for everyday devices that Google is working on, with the ability to map the objects in a room — make available to device makers?

    And, of course, Google and all the other email providers can just read our emails and learn all sorts of information about our friends. And then there’s all the information Google and other search engine providers collect. The growing number of data points getting collected in our daily lives is getting to the point where avoiding the mass collection of the identities of our personal friends will require not having personal friends.

    But while Facebook is by no means unique in this commercial data profile ecosystem, it’s worth noting that Facebook does play a unique role. Whether it’s the device makers or the app developers, Facebook has made the acquisition of personal profiles and networks of relationship connections easier than ever for commercial entities. As this academic study demonstrated, relationship data really is useful when combined with a database of personal profile data, and Facebook provides both. It’s one of Facebook’s more ironic legacies: turning our friendships and sharing into a liability.

    So when Face­book tries to explain away one scan­dal after anoth­er by pro­claim­ing that the com­pa­ny mere­ly wants to get every­one ‘con­nect­ed’, keep in mind this study and how poten­tial­ly lucra­tive and pow­er­ful those ‘con­nec­tions’ real­ly are to Face­book and all the third-par­ty enti­ties Face­book is shar­ing that data with.

    Posted by Pterrafractyl | January 23, 2019, 11:20 pm
  29. Here’s a pair of articles that point towards an area where the mass deployment of facial recognition technology and AI collides with the public’s legal protections against abuses of that technology in interesting and important ways. First, here’s an article from last month about Google winning a lawsuit in Illinois, which happens to be the state with the strongest public protections against abuses of biometric technology, so it was a big win for Google from a legal precedent standpoint. This is all due to the Illinois Biometric Information Privacy Act (BIPA), passed in 2008.

    Texas and Washington are the only other states with laws regulating how private companies may use biometric data, but Illinois is still the only state that authorizes statutory damages for violations. So the legal precedents from Illinois’s legal battles over consumer biometric privacy protections have limited application at the moment, since almost all other states just default to federal law. But this is an area where legal challenges are inevitably going to come in the future, so all of these cases coming out of Illinois are going to be something to watch.

    The lawsuit against Google that was dismissed last month relied on the fact that BIPA bans the collection and storage of biometric data without someone’s consent. And that includes “faceprinting” someone with facial recognition technology. “Faceprinting” involves collecting the biometric data on an individual that can be used to identify them in images and video via facial recognition. The woman who brought the lawsuit made the point that she was getting “faceprinted” by Google without her permission, and without ever signing up for Google’s services, as a result of Google’s application of facial recognition technology to the millions of photos uploaded to Google’s cloud-based Google Photos service.

    Much to the relief of Google and the rest of the growing number of companies that are using facial recognition technology on the public, the Illinois judge dismissed the case on the grounds that the plaintiff in the case did not suffer “concrete injuries.” Illinois’s BIPA doesn’t appear to apply to the collection of faceprints, at least in this commercial context of people giving photos and videos to services like Google. So a real legal challenge to the unhindered use of facial recognition on the public just got shot down. And that’s a green light to companies not just in Illinois but across the US to proceed ahead with faceprinting the public with facial recognition technology without fear of legal repercussions. So smile: you’re on a growing number of cameras, and those cameras know who you are and record it in a database:

    Giz­mo­do

    Google Has Law­suit in Illi­nois Over Facial Recog­ni­tion Scan­ning in Google Pho­tos Dis­missed

    Tom McK­ay
    12/29/18 4:30pm

    Google has had a law­suit in Illi­nois over its facial-recog­ni­tion soft­ware thrown out, with a judge dis­miss­ing the case on the grounds that the plain­tiff in the case did not suf­fer “con­crete injuries,” Bloomberg report­ed on Sat­ur­day. The rul­ing puts to rest one of three law­suits against major tech com­pa­nies for alleged vio­la­tions of the state’s Bio­met­ric Infor­ma­tion Pri­va­cy Act (BIPA), with the Verge not­ing that cas­es against Face­book and Snapchat are still pend­ing.

    Indi­vid­u­als in Illi­nois who believe their rights under BIPA, the nation’s strongest bio­met­rics pri­va­cy law, have been vio­lat­ed can sue for dam­ages.

    Bloomberg wrote that plain­tiffs in this case alleged that Google vio­lat­ed BIPA by col­lect­ing facial recog­ni­tion data with­out express user con­sent, specif­i­cal­ly by extract­ing mil­lions of “face tem­plates” from images uploaded to the cloud-based Google Pho­tos ser­vice. The plain­tiffs fur­ther alleged that Google scanned the faces of peo­ple who had nev­er signed up for Google Pho­tos, but instead sim­ply had images of them­selves uploaded there by oth­er means. From a 2016 Inter­na­tion­al Busi­ness Times arti­cle on the case:

    [Plain­tiff Lind­abeth Rivera] claims she does not use Google Pho­tos or even own an Android phone, there­by mak­ing her an unwit­ting par­tic­i­pant in what her lawyers describe as an ambi­tious data-col­lec­tion scheme on the part of the Moun­tain View, Cal­i­for­nia, search giant.

    “The use of facial recog­ni­tion tech­nol­o­gy in the com­mer­cial con­text presents numer­ous con­sumer pri­va­cy con­cerns,” lawyers for Rivera wrote.

    ...

    How­ev­er, U.S. Dis­trict Judge Edmond E. Chang was skep­ti­cal that the plain­tiffs had proven any harm arose from the prac­tice. A sum­ma­ry of the rul­ing obtained by Bloomberg said the case “is dis­missed for lack of sub­ject mat­ter juris­dic­tion, because Plain­tiffs have not alleged an injury in-fact.”

    This year, Google became an active par­tic­i­pant in try­ing to alter the law, which has also attract­ed crit­i­cism from indus­try play­ers who believe most of the law­suits it has gen­er­at­ed impact busi­ness­es that use bio­met­rics for jus­ti­fi­able employ­ment or safe­ty and secu­ri­ty pur­pos­es. The tech giant sup­port­ed an amend­ment to BIPA that would have exempt­ed pho­tos from the law after a sep­a­rate effort by Face­book failed, Bloomberg report­ed in April 2018, but no fur­ther action has been tak­en on the pro­posed changes since short­ly after that time, accord­ing to the state’s leg­is­la­tion-track­ing web­site.

    Theme park oper­a­tor Six Flags is also fac­ing a law­suit that threat­ens to under­mine BIPA by redefin­ing the nature of vio­la­tions to only include inci­dents involv­ing harm (such as data breach­es or unau­tho­rized release of bio­met­ric infor­ma­tion). The plain­tiff in that case alleged the com­pa­ny fin­ger­print­ed her son, a minor, with­out obtain­ing her per­mis­sion first; the case is still mak­ing its way through the Illi­nois Supreme Court.

    ———–

    “Google Has Law­suit in Illi­nois Over Facial Recog­ni­tion Scan­ning in Google Pho­tos Dis­missed” by Tom McK­ay; Giz­mo­do; 12/29/2018

    “Indi­vid­u­als in Illi­nois who believe their rights under BIPA, the nation’s strongest bio­met­rics pri­va­cy law, have been vio­lat­ed can sue for dam­ages.”

    Thanks to the 2008 BIPA law, Illi­nois is a won­der­ful headache for Big Tech and the only real poten­tial hur­dle for the busi­ness of bio­met­rics in the Unit­ed States. So it’s a pret­ty big deal for the future of mass facial recog­ni­tion data col­lec­tion in the US that this Illi­nois judge threw out this law­suit. The col­lec­tion of “face tem­plate” cat­a­logs by com­pa­nies like Google remains unhin­dered:

    ...
    Bloomberg wrote that plain­tiffs in this case alleged that Google vio­lat­ed BIPA by col­lect­ing facial recog­ni­tion data with­out express user con­sent, specif­i­cal­ly by extract­ing mil­lions of “face tem­plates” from images uploaded to the cloud-based Google Pho­tos ser­vice. The plain­tiffs fur­ther alleged that Google scanned the faces of peo­ple who had nev­er signed up for Google Pho­tos, but instead sim­ply had images of them­selves uploaded there by oth­er means. From a 2016 Inter­na­tion­al Busi­ness Times arti­cle on the case:

    [Plain­tiff Lind­abeth Rivera] claims she does not use Google Pho­tos or even own an Android phone, there­by mak­ing her an unwit­ting par­tic­i­pant in what her lawyers describe as an ambi­tious data-col­lec­tion scheme on the part of the Moun­tain View, Cal­i­for­nia, search giant.

    “The use of facial recog­ni­tion tech­nol­o­gy in the com­mer­cial con­text presents numer­ous con­sumer pri­va­cy con­cerns,” lawyers for Rivera wrote.

    ...

    How­ev­er, U.S. Dis­trict Judge Edmond E. Chang was skep­ti­cal that the plain­tiffs had proven any harm arose from the prac­tice. A sum­ma­ry of the rul­ing obtained by Bloomberg said the case “is dis­missed for lack of sub­ject mat­ter juris­dic­tion, because Plain­tiffs have not alleged an injury in-fact.”
    ...

    And note that one of the other Illinois BIPA lawsuits — a lawsuit against Six Flags over the fingerprint scanning used for season pass entry at the park, which a woman’s son was offered and used without the mom’s permission being obtained in advance — was actually just ruled in favor of the mom, and may pave the way for class action lawsuits over the collection of fingerprints. So BIPA is still scoring legal victories:

    ...
    Theme park oper­a­tor Six Flags is also fac­ing a law­suit that threat­ens to under­mine BIPA by redefin­ing the nature of vio­la­tions to only include inci­dents involv­ing harm (such as data breach­es or unau­tho­rized release of bio­met­ric infor­ma­tion). The plain­tiff in that case alleged the com­pa­ny fin­ger­print­ed her son, a minor, with­out obtain­ing her per­mis­sion first; the case is still mak­ing its way through the Illi­nois Supreme Court.
    ...

    And that’s part of why Google and Facebook are both trying to carve out exemptions in BIPA through amendments that would head off these facial recognition lawsuits. Like the amendment Google and Facebook are pushing that would allow employers to use facial recognition for tracking employees. It’s also why Google backed a 2016 proposed amendment to BIPA that would have made it apply only to scanned physical photographs and not uploaded photos. Until Big Tech is able to get BIPA overturned, crafting loopholes is the next best thing:

    ...
    This year, Google became an active par­tic­i­pant in try­ing to alter the law, which has also attract­ed crit­i­cism from indus­try play­ers who believe most of the law­suits it has gen­er­at­ed impact busi­ness­es that use bio­met­rics for jus­ti­fi­able employ­ment or safe­ty and secu­ri­ty pur­pos­es. The tech giant sup­port­ed an amend­ment to BIPA that would have exempt­ed pho­tos from the law after a sep­a­rate effort by Face­book failed, Bloomberg report­ed in April 2018, but no fur­ther action has been tak­en on the pro­posed changes since short­ly after that time, accord­ing to the state’s leg­is­la­tion-track­ing web­site.
    ...

    So facial recognition on uploaded photos got a thumbs up from Illinois’s courts, but fingerprinting without permission got a thumbs down. On net, it’s a big win for business, given that the growing applications of facial recognition far outstrip the applications of fingerprint scanning.

    We’ll see what’s next for Illinois’s BIPA lawsuits, but we probably shouldn’t be too surprised if it involves the technology described in the next article: It turns out Walgreens is testing out new ‘smart coolers’ for selling chilled foods. The coolers will be equipped with cameras and will have capabilities like iris-tracking. The coolers will use facial recognition-like approaches to analyzing customers’ faces. But, crucially from a BIPA standpoint, the coolers won’t be trying to identify people. Instead, the smart coolers will analyze customers for demographic data, like age, gender, and race. So it’s going to be a highly invasive, privacy-violating technology for Walgreens customers, whose eyes are literally tracked, but no actual attempt to match the customers with faceprint data and identify them will take place, so it technically won’t violate BIPA. That’s what Walgreens is trying out in Illinois: BIPA-compliant mass facial analysis smart coolers for convenience stores:

    The Atlantic

    Now Your Gro­ceries See You, Too

    Wal­greens is explor­ing new tech that turns your pur­chas­es, your move­ments, even your gaze, into data.

    Sid­ney Fussell
    1/25/2019 2:15 PM ET

    Wal­greens is pilot­ing a new line of “smart cool­ers”—fridges equipped with cam­eras that scan shop­pers’ faces and make infer­ences on their age and gen­der. On Jan­u­ary 14, the com­pa­ny announced its first tri­al at a store in Chica­go in Jan­u­ary, and plans to equip stores in New York and San Fran­cis­co with the tech.

    Demo­graph­ic infor­ma­tion is key to retail shop­ping. Retail­ers want to know what peo­ple are buy­ing, seg­ment­ing shop­pers by gen­der, age, and income (to name a few char­ac­ter­is­tics) and then tar­get­ing them pre­cise­ly. To that end, these smart cool­ers are a mar­vel.

    If, for exam­ple, Pep­si launched an ad cam­paign tar­get­ing young women, it could use smart-cool­er data to see if its cam­paign was work­ing. These machines can draw all kinds of use­ful infer­ences: Maybe young men buy more Sprite if it’s dis­played next to Moun­tain Dew. Maybe old­er women buy more ice cream on Thurs­day nights than any oth­er day of the week. The tech also has “iris track­ing” capa­bil­i­ties, mean­ing the com­pa­ny can col­lect data on which dis­played items are the most looked at.

    Cru­cial­ly, the “Cool­er Screens” sys­tem does not use facial recog­ni­tion. Shop­pers aren’t iden­ti­fied when the fridge cam­eras scan their face. Instead, the cam­eras ana­lyze faces to make infer­ences about shop­pers’ age and gen­der. First, the cam­era takes their pic­ture, which an AI sys­tem will mea­sure and ana­lyze, say, the width of someone’s eyes, the dis­tance between their lips and nose, and oth­er micro mea­sure­ments. From there, the sys­tem can esti­mate if the per­son who opened the door is, say, a woman in her ear­ly 20s or a male in his late 50s. It’s analy­sis, not recog­ni­tion.

    The distinction between the two is very important. In Illinois, facial recognition in public is outlawed under BIPA, the Biometric Privacy Act. For two years, Google and Facebook fought class-action suits filed under the law, after plaintiffs claimed the companies obtained their facial data without their consent. Home-security cams with facial-recognition abilities, such as Nest or Amazon’s Ring, also have those features disabled in the state; even Google’s viral “art selfie” app is banned. The suit against Facebook was dismissed in January, but privacy advocates champion BIPA as a would-be template for a world where facial recognition is federally regulated.

    ...

    The smart cool­er is just one of dozens of track­ing tech­nolo­gies emerg­ing in retail. At Ama­zon Go stores, for example—which do not have cashiers or self-check­out stations—sensors make note of shop­pers’ pur­chas­es and charge them to their Ama­zon account; the result­ing data are part of the feed­back loop the com­pa­ny uses to tar­get ads at cus­tomers, mak­ing it more mon­ey.

    ———–

    “Now Your Gro­ceries See You, Too” by Sid­ney Fussell; The Atlantic; 01/25/2019

    “Wal­greens is pilot­ing a new line of “smart coolers”—fridges equipped with cam­eras that scan shop­pers’ faces and make infer­ences on their age and gen­der. On Jan­u­ary 14, the com­pa­ny announced its first tri­al at a store in Chica­go in Jan­u­ary, and plans to equip stores in New York and San Fran­cis­co with the tech.”

    Chicago gets to be the new smart cooler testing ground. Will iris scanning be BIPA-compliant when there’s no attempt to know whose eyes they are? We’ll see:

    ...
    Demo­graph­ic infor­ma­tion is key to retail shop­ping. Retail­ers want to know what peo­ple are buy­ing, seg­ment­ing shop­pers by gen­der, age, and income (to name a few char­ac­ter­is­tics) and then tar­get­ing them pre­cise­ly. To that end, these smart cool­ers are a mar­vel.

    If, for exam­ple, Pep­si launched an ad cam­paign tar­get­ing young women, it could use smart-cool­er data to see if its cam­paign was work­ing. These machines can draw all kinds of use­ful infer­ences: Maybe young men buy more Sprite if it’s dis­played next to Moun­tain Dew. Maybe old­er women buy more ice cream on Thurs­day nights than any oth­er day of the week. The tech also has “iris track­ing” capa­bil­i­ties, mean­ing the com­pa­ny can col­lect data on which dis­played items are the most looked at.
    ...

    And while Walgreens doesn’t say it chose Chicago specifically to see if its coolers were BIPA-compliant, that seems likely given that the coolers specifically don’t try to identify people, making them a perfect opportunity for business to see what BIPA will allow them to get away with:

    ...
    Cru­cial­ly, the “Cool­er Screens” sys­tem does not use facial recog­ni­tion. Shop­pers aren’t iden­ti­fied when the fridge cam­eras scan their face. Instead, the cam­eras ana­lyze faces to make infer­ences about shop­pers’ age and gen­der. First, the cam­era takes their pic­ture, which an AI sys­tem will mea­sure and ana­lyze, say, the width of someone’s eyes, the dis­tance between their lips and nose, and oth­er micro mea­sure­ments. From there, the sys­tem can esti­mate if the per­son who opened the door is, say, a woman in her ear­ly 20s or a male in his late 50s. It’s analy­sis, not recog­ni­tion.

    The distinction between the two is very important. In Illinois, facial recognition in public is outlawed under BIPA, the Biometric Privacy Act. For two years, Google and Facebook fought class-action suits filed under the law, after plaintiffs claimed the companies obtained their facial data without their consent. Home-security cams with facial-recognition abilities, such as Nest or Amazon’s Ring, also have those features disabled in the state; even Google’s viral “art selfie” app is banned. The suit against Facebook was dismissed in January, but privacy advocates champion BIPA as a would-be template for a world where facial recognition is federally regulated.
    ...
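
    To make that “analysis, not recognition” distinction concrete, here’s a minimal Python sketch of what a pipeline like the one described above might look like. To be clear, this is a hypothetical illustration, not Cooler Screens’s actual system: the measurement names and the thresholds standing in for a trained model are made up. The point is just that the output is a coarse demographic bucket, and no image, face template, or identity is ever retained:

```python
from dataclasses import dataclass

@dataclass
class FaceMeasurements:
    """Hypothetical micro-measurements of the kind the article describes,
    normalized against the detected face's bounding box."""
    eye_width: float
    lip_to_nose: float
    jaw_width: float

def estimate_demographics(m: FaceMeasurements) -> dict:
    """Map measurements to coarse demographic buckets.

    A real system would use a trained model; these thresholds are
    placeholders that only show the shape of the output."""
    age_bucket = "early 20s" if m.jaw_width < 0.45 else "late 50s"
    gender_guess = "female" if m.eye_width > 0.30 else "male"
    return {"age": age_bucket, "gender": gender_guess}

def process_door_open(m: FaceMeasurements) -> dict:
    # Only the inference leaves this function: no photo, embedding, or
    # face template is stored, which is what makes this "analysis"
    # rather than "recognition".
    return estimate_demographics(m)

print(process_door_open(FaceMeasurements(0.33, 0.21, 0.40)))
# {'age': 'early 20s', 'gender': 'female'}
```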

    What kinds of BIPA-related legal challenges will these smart coolers bring? That’s going to be interesting to see.

    But it’s worth keeping in mind that customers routinely identify themselves when they pay with credit and debit cards. And thanks to the data brokerage industry it’s possible to buy demographic data on all of us. So it’s technically quite possible for Walgreens to run non-personally-identifying smart coolers and still identify a large number of the people after the fact, simply by combining the names and demographic information collected from the credit and debit sales with the demographic information anonymously collected by the smart cooler.
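
    As a rough illustration of how trivial that after-the-fact re-identification could be, here’s a minimal Python sketch. It assumes two hypothetical data feeds: “anonymous” cooler events (a timestamp plus the inferred demographics) and card transactions (a timestamp plus the cardholder’s name, with demographic attributes purchased from a data broker). A simple join on time window and demographic bucket does the rest:

```python
from datetime import datetime, timedelta

# Hypothetical anonymized smart-cooler events: when the door opened
# and what the camera inferred about the shopper.
cooler_events = [
    {"time": datetime(2019, 1, 24, 18, 2), "age": "20s", "gender": "female"},
]

# Hypothetical card transactions from the same store: checkout time plus
# the identity the payment network already knows. The demographic fields
# here stand in for data bought from a broker and keyed to the name.
card_sales = [
    {"time": datetime(2019, 1, 24, 18, 5), "name": "J. Doe",
     "age": "20s", "gender": "female"},
]

def reidentify(events, sales, window=timedelta(minutes=10)):
    """Join 'anonymous' cooler events to named card sales by matching
    demographic buckets within a short time window."""
    matches = []
    for e in events:
        for s in sales:
            same_window = abs(e["time"] - s["time"]) <= window
            same_profile = (e["age"], e["gender"]) == (s["age"], s["gender"])
            if same_window and same_profile:
                matches.append((e["time"], s["name"]))
    return matches

print(reidentify(cooler_events, card_sales))
# [(datetime.datetime(2019, 1, 24, 18, 2), 'J. Doe')]
```

    In a busy store the match wouldn’t always be unique, but every additional attribute (payment terminal location, purchase contents, repeat visits) narrows it further. That’s the general weakness of “anonymized” data joined against identified data.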

    Also keep in mind that once people get used to smart coolers that analyze your face but don’t identify you, it’s just a matter of time before companies start using smart coolers that do identify you and serve up some sort of personally customized sales pitch. And the more detailed the personal data file on you, the more sophisticated and personalized that customized smart-cooler experience can be. Imagine Cambridge Analytica-style personalization that factors in the psychological profile Walgreens has on you.

    It’s a reminder that a growing part of what makes the loss of anonymity troubling is that it allows the increasingly detailed and sophisticated personal profiles that exist on all of us, in both the commercial and government spheres, to be applied to us in real time. The profit potential of customized marketing creates an incentive for the personal databases being collected on all of us to be accessible by everyday objects. Like coolers. Coolers that will probably one day be smarter than us. Super-AI coolers that serve up super-sophisticated sales pitches. That’s probably going to be a thing someday.

    And no one has bigger personal profile databases, covering our likes and dislikes and tastes, than Google and Facebook. So it’s worth keeping in mind that the stuff you post on Facebook will probably one day be known by the smart cooler at your local Walgreens and every other Walgreens. And it’s going to use its superior knowledge of you and its super-AI prowess to sell you things (except in Illinois, maybe!). And the way things are going, that future is probably coming sooner than you think. So stay strong and watch out for the specials on ice cream.

    Posted by Pterrafractyl | January 26, 2019, 1:01 am
  30. Here’s a quick update on Data Propria, the Cambridge Analytica offshoot created by Brad Parscale’s company CloudCommerce. Recall the reports from back in June about how the GOP was hiring the services of Data Propria for the 2018 mid-terms. Data Propria employs four ex-Cambridge Analytica employees, including Cambridge Analytica’s chief data scientist. Cambridge Analytica’s former head of product, Matt Oczkowski, leads Data Propria. Oczkowski led the Cambridge Analytica team that worked for Trump’s 2016 campaign and was reportedly overheard bragging to a prospective client about how he was already working on Trump’s 2020 campaign (which he subsequently denied). Also recall how Brad Parscale ran the Trump 2016 campaign’s extensive digital operations, which included extensive micro-targeting of individuals outside of the Cambridge Analytica efforts.

    So the fact that these ex-Cambridge Analytica employees, including key employees, were hired by a subsidiary of Brad Parscale’s and were allegedly already working on Trump’s 2020 campaign appeared to be something the Trump campaign wanted to keep under wraps. Well, that just got a lot harder to deny following the announcement that Matt Oczkowski is now running Parscale Digital in addition to Data Propria. Recall how Parscale Digital is the rebranded version of Parscale’s old marketing company. As the following article notes, Parscale sold his half of Giles Parscale, which became Parscale Digital, in August 2017, receiving $9 million in CloudCommerce stock and a seat on the CloudCommerce board in return. August 2017 is also the same month Parscale Digital was sold to CloudCommerce. So Parscale is a co-owner of CloudCommerce, which is the owner of Parscale Digital. And now Matt Oczkowski, the former head of product for Cambridge Analytica, is running Parscale Digital:

    Texas Pub­lic Radio

    Cam­bridge Ana­lyt­i­ca Alum To Run Anoth­er San Anto­nio-Based Firm

    By Paul Flahive • Feb 5, 2019

    Parscale Dig­i­tal, a San Anto­nio-based dig­i­tal mar­ket­ing firm best known for its name­sake and for­mer own­er Brad Parscale, Pres­i­dent Don­ald Trump’s 2020 cam­paign chair­man, is now being run by a for­mer exec­u­tive at Cam­bridge Ana­lyt­i­ca.

    Cam­bridge Ana­lyt­i­ca, the now defunct data analy­sis com­pa­ny known for work­ing on Pres­i­dent Trump’s 2016 cam­paign, declared bank­rupt­cy in 2018 after news broke that data from more than 80 mil­lion Face­book users was shared with it.

    Matt Oczkows­ki was head of prod­uct for Cam­bridge Ana­lyt­i­ca before form­ing Data Pro­pria in San Anto­nio with at least three oth­er Cam­bridge Ana­lyt­i­ca alums, includ­ing its chief data sci­en­tist, David Wilkin­son.

    Oczkows­ki will take on the dual role run­ning Parscale and Data Pro­pria, which are both owned by Cloud­Com­merce.

    Brad Parscale sits on the board of par­ent com­pa­ny Cloud Com­merce.

    CloudCommerce President Andrew Van Noy announced the change Jan. 25 in an email obtained by TPR.

    “Matt will con­tin­ue to lead the Data Pro­pria team and will now focus on stream­lin­ing the offer­ings and build­ing out the teams between the two brands,” he said.

    Accord­ing to the email from Van Noy, Oczkows­ki will serve in an inter­im capac­i­ty.

    Parscale Dig­i­tal has been with­out a leader since for­mer pres­i­dent Adam Brecht left the posi­tion in June 2018 after just a few months.

    Data Pro­pria report­ed­ly assist­ed the Repub­li­can Nation­al Com­mit­tee with midterm race polling last year and is work­ing on Trump’s 2020 cam­paign.

    Brad Parscale sold his half of Giles Parscale, which would become Parscale Dig­i­tal, in August 2017. At the time he took $9 mil­lion in Cloud­Com­merce stock and a seat on the board for his com­pa­ny assets.

    Rep­re­sen­ta­tives of Cloud­Com­merce didn’t respond to repeat­ed requests for com­ment.

    Accord­ing to fil­ings with the Secu­ri­ties and Exchange Com­mis­sion, Brad Parscale was invoiced more than $729,000 by Parscale Dig­i­tal for work done for the cam­paign manager’s polit­i­cal­ly ori­ent­ed con­sult­ing firm, Flori­da-based Parscale Strat­e­gy LLC.

    Don­ald J. Trump For Pres­i­dent, Inc paid Parscale Strat­e­gy LLC more than $3.4 mil­lion in 2018 for dig­i­tal con­sult­ing and online adver­tis­ing, accord­ing to the Fed­er­al Elec­tion Com­mis­sion.

    ...

    ———-

    “Cam­bridge Ana­lyt­i­ca Alum To Run Anoth­er San Anto­nio-Based Firm” by Paul Flahive; Texas Pub­lic Radio; 02/05/2019

    “Oczkowski will take on the dual role running Parscale and Data Propria, which are both owned by CloudCommerce.”

    Matt Oczkowski, Cambridge Analytica’s former head of product, is turning out to be quite an important member of Trump’s digital campaign. Although we’re told Oczkowski’s position leading Parscale Digital is just in an interim capacity. It will be interesting to see how long that interim turns out to be, given that the 2020 election cycle is already upon us:

    ...

    CloudCommerce President Andrew Van Noy announced the change Jan. 25 in an email obtained by TPR.

    “Matt will con­tin­ue to lead the Data Pro­pria team and will now focus on stream­lin­ing the offer­ings and build­ing out the teams between the two brands,” he said.

    Accord­ing to the email from Van Noy, Oczkows­ki will serve in an inter­im capac­i­ty.

    Parscale Dig­i­tal has been with­out a leader since for­mer pres­i­dent Adam Brecht left the posi­tion in June 2018 after just a few months.
    ...

    And to make it clear, Parscale Digital does indeed do political work for the Trump campaign, but as a subcontractor for Parscale Strategy LLC, which is Parscale’s political consulting firm (he’s got a lot of firms, as we can see). According to filings with the SEC and FEC, Donald J. Trump For President, Inc paid Parscale Strategy LLC more than $3.4 million in 2018 for digital consulting and online advertising, and Parscale Digital invoiced Parscale Strategy more than $729,000:

    ...
    Brad Parscale sits on the board of par­ent com­pa­ny Cloud Com­merce.

    ...

    Data Pro­pria report­ed­ly assist­ed the Repub­li­can Nation­al Com­mit­tee with midterm race polling last year and is work­ing on Trump’s 2020 cam­paign.

    Brad Parscale sold his half of Giles Parscale, which would become Parscale Dig­i­tal, in August 2017. At the time he took $9 mil­lion in Cloud­Com­merce stock and a seat on the board for his com­pa­ny assets.

    ...

    Accord­ing to fil­ings with the Secu­ri­ties and Exchange Com­mis­sion, Brad Parscale was invoiced more than $729,000 by Parscale Dig­i­tal for work done for the cam­paign manager’s polit­i­cal­ly ori­ent­ed con­sult­ing firm, Flori­da-based Parscale Strat­e­gy LLC.

    Don­ald J. Trump For Pres­i­dent, Inc paid Parscale Strat­e­gy LLC more than $3.4 mil­lion in 2018 for dig­i­tal con­sult­ing and online adver­tis­ing, accord­ing to the Fed­er­al Elec­tion Com­mis­sion.
    ...

    So given the prior attempts by Oczkowski to deny that he was doing any work for the Trump campaign, keep in mind that the work Parscale Digital and Data Propria do for the 2020 Trump campaign will probably be subcontracting work of this nature, which will allow the Trump campaign to deny that it ever directly hired the services of these firms managed and staffed by ex-Cambridge Analytica employees. And yes, those denials are implausible when you actually look at the structure of these companies, but doubling down on implausible deniability is kind of a Trump specialty, so we shouldn’t be too surprised if that’s what happens.

    Posted by Pterrafractyl | February 7, 2019, 12:42 pm
  31. Here’s one of those Cambridge Analytica stories that could have implications going far beyond the Cambridge Analytica scandal. It’s a story about how David Carroll, an American academic, sued Cambridge Analytica’s parent company, SCL, back in 2017 for a copy of all of the data that Cambridge Analytica held about him. And he won. He had received a profile on him, but not the entire profile, and SCL refused his request for the rest. In May of 2018 the British government ruled that SCL was required to give Carroll everything it had on him. But SCL continued to refuse and chose to be fined instead. Carroll continues to sue. And the fact that he won this case at all has potentially big implications. British law mandates that UK citizens have a right to be told what information a company holds about them, but whether Americans and anyone else outside the UK also have that right under British law wasn’t really established. Thanks to David Carroll’s successful lawsuit it appears that, yes, Americans and other non-UK citizens can sue UK companies for that information. The companies might not actually hand over the information, choosing to be fined instead, but at least you can successfully sue for it. And if Carroll wins his ongoing lawsuit against SCL and manages to actually get a complete profile, all of the other 87 million Facebook users whose profiles Cambridge Analytica collected will have a much easier time making such requests for themselves:

    Wired

    One Man’s Obses­sive Fight to Reclaim His Cam­bridge Ana­lyt­i­ca Data

    David Car­roll has been locked in a legal war to force the infa­mous com­pa­ny to turn over its files on him. He’s won a bat­tle, but the strug­gle con­tin­ues.

    Author: Issie Lapowsky
    1.25.19 06:00 am

    It’s 8 on a Wednes­day morn­ing in Jan­u­ary, and David Carroll’s Brook­lyn apart­ment, a sun­ny, wood-beamed beau­ty con­vert­ed from an old sand­pa­per fac­to­ry, is buzzing.

    ...

    For most every­one in Carroll’s bustling house­hold, today is a morn­ing like any oth­er. Not for Car­roll. This morn­ing, he rolled out of bed at 6 am to news that the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, the now defunct inter­na­tion­al con­glom­er­ate, had pled guilty to crim­i­nal charges of dis­obey­ing a British data reg­u­la­tor.

    The sto­ry of how the data ana­lyt­ics firm and for­mer Trump cam­paign con­sul­tant mis­ap­pro­pri­at­ed the Face­book data of tens of mil­lions of Amer­i­cans before the 2016 elec­tion is by now well known. But the company’s guilty plea wasn’t real­ly about all those head­lines you’ve seen splat­tered in the news over the past year. Instead, their crime was defy­ing a gov­ern­ment order to hand over all of the data they had ever col­lect­ed on just one per­son: David Car­roll.

    For more than two years, Car­roll, a pro­fes­sor of media design at The New School in Man­hat­tan, has been on an obses­sive, epi­cal­ly nerdy, and ulti­mate­ly valu­able quest to retrieve his data from Cam­bridge Ana­lyt­i­ca. Dur­ing the 2016 elec­tion, when the firm worked for both the Trump cam­paign and sen­a­tor Ted Cruz’s cam­paign, its lead­ers bragged open­ly about hav­ing col­lect­ed thou­sands of data points to build detailed per­son­al­i­ty pro­files on every adult in the Unit­ed States. They said they used these pro­files to tar­get peo­ple with more per­sua­sive ads, and when Pres­i­dent Trump won the White House, they hun­gri­ly accept­ed cred­it.

    A year ago, Car­roll filed a legal claim against the Lon­don-based con­glom­er­ate, demand­ing to see what was in his pro­file. Because, with few excep­tions, British data pro­tec­tion laws allow peo­ple to request data on them that’s been processed in the UK, Car­roll believed that even as an Amer­i­can, he had a right to that infor­ma­tion. He just had to prove it.

    Car­roll shuf­fles past me bare­foot, a mug of cof­fee in one hand, his phone in the oth­er. “Enjoy the moment,” he says, read­ing a mes­sage from his lawyer, Ravi Naik, who’s been feed­ing him updates from Lon­don all morn­ing. About an hour lat­er, an email floats into Carroll’s inbox from the British Infor­ma­tion Commissioner’s Office, the reg­u­la­tor that brought the charges. Car­roll turns his phone toward me to reveal the news. Cam­bridge Analytica’s par­ent com­pa­ny, SCL, is being fined the equiv­a­lent of rough­ly $27,000. Carroll’s cut? About $222.

    He couldn’t help but laugh. The sum is insignif­i­cant. The moment, any­thing but.

    When he start­ed out, Car­roll was an under­dog, fac­ing off against a cor­po­ra­tion with ties to the pres­i­dent of the Unit­ed States and backed by bil­lion­aire donor Robert Mer­cer. If he lost, Car­roll would be on the hook for the oppos­ing team’s legal fees, which he wasn’t quite sure how he’d pay.

    But if he won, Car­roll believed he could prove an invalu­able point. He could use that trove of infor­ma­tion he received to show the world just how pow­er­less Amer­i­cans are over their pri­va­cy. He could offer up a con­crete exam­ple of how one man’s information—his super­mar­ket punch card, his online shop­ping habits, his vot­ing patterns—can be bought and sold and weaponized by cor­po­ra­tions and even for­eign enti­ties try­ing to influ­ence elec­tions.

    But more impor­tant­ly, he could show what’s pos­si­ble in coun­tries like the UK where peo­ple actu­al­ly have the right to reclaim some of that pow­er. He could prove why peo­ple in the Unit­ed States, who have no such rights, deserve those same pro­tec­tions.

    Much has changed since David Car­roll picked this fight with Goliath. Fol­low­ing a relent­less flood of scan­dals last spring, SCL shut­tered and is now going through insol­ven­cy pro­ceed­ings in the UK. The Cam­bridge Ana­lyt­i­ca scan­dal spurred just the kind of pri­va­cy awak­en­ing in the US that Car­roll was seek­ing. Face­book tight­ened its hold on user data and has been increas­ing­ly asked to answer for all the ways it gave that data away in the first place. A strict data pro­tec­tion law passed unan­i­mous­ly in Cal­i­for­nia last sum­mer, and mem­bers of Con­gress have begun float­ing plans for broad­er fed­er­al pri­va­cy leg­is­la­tion.

    Car­roll, mean­while, has emerged as a cult hero of pri­va­cy hawks, who fol­low every turn in his case, Twit­ter fin­gers itch­ing. This week, he’ll become a movie star, appear­ing as a cen­tral char­ac­ter in a fea­ture-length doc­u­men­tary called The Great Hack, pre­mier­ing at the Sun­dance Film Fes­ti­val. “We hope this film sheds light on what it means to sign the terms and con­di­tions that we agree to every day,” the film­mak­ers, Jehane Nou­jaim and Karim Amer, explained in an email. “What does it mean when we actu­al­ly become a com­mod­i­ty being mined?”

    But for all that’s changed dur­ing these past two years, so much has stayed the same. Despite SCL’s guilty plea, Car­roll still hasn’t got­ten his data. And Amer­i­cans today have no more legal rights to pri­va­cy than they did when Carroll’s cru­sade began two years ago. That could change this year. With a strict data pro­tec­tion law set to go into effect in Cal­i­for­nia next Jan­u­ary, even tech giants have begun push­ing for fed­er­al reg­u­la­tion that would set rules for busi­ness­es across the coun­try. Now more than ever, Car­roll says, hav­ing that infor­ma­tion in hand could help illus­trate exact­ly how this new economy—so often mis­un­der­stood and dis­cussed in the abstract—works. Which is why, near­ly a year after the Cam­bridge Ana­lyt­i­ca sto­ry broke, and many months after its name has fall­en out of the dai­ly head­lines, Car­roll keeps fight­ing.

    If you know Car­roll from Twitter—where, as @profcarroll, he spends his days tweet­ing bom­bas­ti­cal­ly about Facebook’s duplic­i­ty or stri­dent­ly skew­er­ing obscure fig­ures from the Trump cam­paign in long, snarky, and inscrutable threads—then you couldn’t pos­si­bly imag­ine the ner­vous, affa­ble guy I first met in a down­town Man­hat­tan cof­fee shop back in 2017.

    ...

    Car­roll hadn’t always been in acad­e­mia. Dur­ing the dot­com boom and bust, he worked in dig­i­tal mar­ket­ing and watched as adver­tis­ing evolved from the sort of broad brand­ing exer­cise that had been the domin­ion of tele­vi­sion and print to an indus­try dom­i­nat­ed by Google, which used infi­nite quan­ti­ties of user data to hyper-tar­get ads. When he left his mar­ket­ing career to teach full time, Car­roll, who has an MFA in design and tech­nol­o­gy, trans­formed from an indus­try par­tic­i­pant to a chief crit­ic, lec­tur­ing stu­dents on what he calls the “myth” that adver­tis­ing doesn’t work if it’s not tar­get­ed.

    In 2014, while he was on sab­bat­i­cal, Car­roll began work­ing on a start­up called Glossy, which inte­grat­ed with Face­book to rec­om­mend arti­cles from mag­a­zine archives based on users’ inter­ests. The idea nev­er took off; Car­roll couldn’t get fund­ing, and his ear­ly employ­ees got quick­ly poached by tech giants. But he got just far enough to see how much user data Face­book was will­ing to give away in the name of growth. At the time, the social net­work­ing giant allowed devel­op­ers to slurp up data not just from their own users but from their users’ friends, all with­out their friends’ aware­ness or explic­it con­sent. Face­book did­n’t offi­cial­ly end this pol­i­cy until April 2015, and con­tin­ued to give some devel­op­ers access even after that.

    “I saw how the sausage got made and how easy it was to amass data and cre­ate a sur­veil­lance infra­struc­ture,” Car­roll says.

    Around that same time, across the Atlantic Ocean, anoth­er young pro­fes­sor at the Uni­ver­si­ty of Cam­bridge named Alek­san­dr Kogan was build­ing an app of his own. It used a per­son­al­i­ty quiz to col­lect users’ pro­file infor­ma­tion, includ­ing their loca­tion, gen­der, name, and Page likes, and then spit out pre­dic­tions about their per­son­al­i­ty types. Like Car­roll, Kogan knew that when Face­book users took the quizzes, not only would their data be free for the tak­ing, so would the data belong­ing to mil­lions of their friends. Unlike Car­roll, Kogan viewed that not as an inva­sion of pri­va­cy but as an oppor­tu­ni­ty.

    “It didn’t even dawn on us that peo­ple could react this way,” Kogan says.

    Begin­ning in 2014, Kogan paid about 270,000 US Face­book users to take the quiz, which Kogan has said unlocked access to some 30 mil­lion people’s data. But Kogan wasn’t just work­ing on his own. He was col­lect­ing this infor­ma­tion on behalf of SCL, which had big plans to use it to influ­ence Amer­i­can elec­tions. Kogan sold the data and his pre­dic­tions to the com­pa­ny, and though he didn’t know it then, lit the fuse of a time bomb that would det­o­nate three years down the line.

    Car­roll knew none of this at the time. But his expe­ri­ence build­ing Glossy made him enough of a self-pro­claimed “pri­va­cy nerd” that by the time the 2016 elec­tion rolled around he was keep­ing a close eye on the pres­i­den­tial cam­paigns and their dig­i­tal strate­gies. He was watch­ing SCL’s spin­off Cam­bridge Ana­lyt­i­ca, in par­tic­u­lar, because it had tak­en cred­it for help­ing sen­a­tor Ted Cruz win the Iowa pri­ma­ry, using so-called psy­cho­graph­ic tar­get­ing tech­niques. But it wasn’t until Pres­i­dent Trump’s upset vic­to­ry, in a cam­paign that had been buoyed by Cam­bridge Ana­lyt­i­ca data sci­en­tists and con­sul­tants, that Car­roll, a Demo­c­rat, began to wor­ry about what this firm could real­ly do with mil­lions of Amer­i­cans’ infor­ma­tion.

    He wasn’t the only one. Thou­sands of miles away, in Gene­va, Switzer­land, a researcher named Paul-Olivi­er Dehaye, who now runs a dig­i­tal rights non­prof­it called PersonalData.IO, was deep into a months-long inves­ti­ga­tion of SCL. At the time, he was try­ing to answer a fun­da­men­tal ques­tion about the com­pa­ny that was also rumored to have played a hand in pro­mot­ing the Brex­it ref­er­en­dum: Did Cam­bridge Ana­lyt­i­ca real­ly know as much as it claimed? Or was it just sell­ing snake oil? One way to answer that ques­tion con­clu­sive­ly, Dehaye believed, would be to see what infor­ma­tion the com­pa­ny actu­al­ly held.

    The UK’s Data Pro­tec­tion Act guar­an­tees the right to access data that’s processed with­in the UK. But in the past, it was main­ly British res­i­dents who had exer­cised that right. Few had ever test­ed whether the law applied to peo­ple out­side of the coun­try as well. Dehaye believed this would be the per­fect oppor­tu­ni­ty to try, so he began reach­ing out to Amer­i­can aca­d­e­mics, activists, and jour­nal­ists, urg­ing them to sub­mit what is known as a “sub­ject access request” to the com­pa­ny. It was Amer­i­cans’ data, after all, that Cam­bridge Ana­lyt­i­ca seemed most inter­est­ed in. Car­roll was one of Dehaye’s tar­gets.

    “David was very vocal on Twit­ter, and he already knew a lot about ad tech,” Dehaye says. “That’s why I thought I had a chance to con­vince him.”

    He was right. Car­roll was one of a hand­ful of peo­ple who accept­ed the chal­lenge. He says he viewed the project as an aca­d­e­m­ic exper­i­ment at first, and, he says, a good use of his tenure. “I can’t get fired for what I do,” he said. “My job gives me the free­dom to pur­sue these things. If I don’t do it, who’s going to?”

    In ear­ly 2017, Car­roll sub­mit­ted his request, along with a copy of his driver’s license, his elec­tric bill, and a £10 fee, which Dehaye paid. Then he wait­ed. Dehaye nev­er real­ly expect­ed Car­roll to receive a response. In fact the sto­ry may have end­ed there, had SCL denied that Car­roll had the right to his data from the out­set. “They could have just said UK law doesn’t apply to you because you’re an Amer­i­can,” Dehaye says.

    Instead, one Mon­day morn­ing about a month lat­er, as Car­roll sat alone in his apart­ment, sip­ping cof­fee at the din­ing room table, an email land­ed in his inbox from the data com­pli­ance team at SCL Group. It includ­ed a let­ter signed by the company’s chief oper­at­ing offi­cer, Julian Wheat­land, and an Excel file lay­ing out in neat­ly arranged rows and columns exact­ly who Car­roll is—where he lives, how he’s vot­ed, and, most inter­est­ing­ly to Car­roll, how much he cares about issues like the nation­al debt, immi­gra­tion, and gun rights, on a scale of one to 10. Car­roll had no way of know­ing what infor­ma­tion informed those rank­ings; the thou­sands of data points Cam­bridge Ana­lyt­i­ca sup­pos­ed­ly used to build these pre­dic­tions were nowhere to be found.

    “I felt very invad­ed per­son­al­ly, but then I also saw it was such a pub­lic inter­est issue,” Car­roll says.

    He prompt­ly tweet­ed out his find­ings. To Car­roll, his file seemed woe­ful­ly incom­plete. But to Dehaye and oth­er experts of the inter­net, it seemed like exact­ly what he need­ed to prove a case. In answer­ing Car­roll at all, Dehaye argued, SCL con­ced­ed that even as an Amer­i­can, he was enti­tled to his data. But in show­ing him only the small­est slice of that data, Car­roll and Dehaye believed, SCL had bro­ken the law.

    Dehaye put Car­roll in touch with Ravi Naik, a British human rights lawyer, who had worked on data rights cas­es in the past. “Imme­di­ate­ly, he was like, ‘This is going to be a mas­sive case. It’s going to set prece­dents,’” Car­roll says.

    Still, Naik was cau­tious, know­ing that the case law regard­ing for­eign­ers gain­ing access to their data was extreme­ly lim­it­ed, rest­ing on just two cas­es where death row inmates from Thai­land and Kenya had attempt­ed to get their data from the British police. But Naik also viewed Carroll’s case as the begin­ning of a new civ­il rights move­ment. “It’s real­ly bal­anc­ing the rights of indi­vid­u­als against those with mass pow­er,” Naik says.

    In April 2017, Car­roll and Naik sent what’s known as a “pre-action” let­ter to SCL, lay­ing out a legal claim. In the UK, these let­ters are used to deter­mine if lit­i­ga­tion can be avoid­ed. In the let­ter, Naik and Car­roll argued that not only had SCL vio­lat­ed the UK’s Data Pro­tec­tion Act by fail­ing to give Car­roll all of the under­ly­ing data, the com­pa­ny hadn’t received the prop­er con­sent to process data relat­ed to his polit­i­cal views to begin with. Under the law, polit­i­cal opin­ions are con­sid­ered sen­si­tive data.

    Once again, Car­roll got no addi­tion­al data in return. Accord­ing to Alexan­der Nix, Cam­bridge Ana­lyt­i­ca’s then-CEO, the com­pa­ny shared cer­tain data with Car­roll as a ges­ture of “good faith,” but received legal advice that for­eign­ers did­n’t have rights under the Data Pro­tec­tion Act. Asked why the com­pa­ny did­n’t share more of that data, Nix said, “There was no legal rea­son to com­ply with this request, and it might be open­ing ... a bot­tom­less pit of sub­ject access requests in the Unit­ed States that we would be unable to ful­fill just through the sheer vol­ume of requests in com­par­i­son to the size of the com­pa­ny.” (After answer­ing WIRED’s ques­tions, Nix retroac­tive­ly asked for these answers to be off the record. WIRED declined.)

    Car­roll wasn’t the only per­son who had tried and failed to get his data from SCL. Ini­tial­ly, Naik says, about 20 peo­ple around the world were on board. But when it came time to bring the case to court, he says, they need­ed only one com­plainant, and it was Car­roll who was most will­ing to take the risk. “It says a lot about David that he’s will­ing to stand by not just his own rights but also the rights of every­one affect­ed, to work out what this com­pa­ny was doing,” Naik says.

    Car­roll and Naik spent the bulk of 2017 prepar­ing the case and hedg­ing their bets against worst case sce­nar­ios, of which there were many. In the British legal sys­tem, who­ev­er los­es a legal case winds up pay­ing the win­ning side’s fees. Car­roll wor­ried that could amount to hun­dreds of thou­sands of dol­lars, the kind of costs he couldn’t bear on his own. So that fall, Car­roll launched his own legal defense fund on Crowd­Jus­tice and announced his plans to file the com­plaint in The Guardian. Sud­den­ly, he was flood­ed with sup­port from strangers who’d grown sim­i­lar­ly sus­pi­cious of Cam­bridge Ana­lyt­i­ca. He raised near­ly $33,000 in a mat­ter of weeks. Today, he’s raised anoth­er $10,000 more.

    But for all the encour­age­ment Car­roll received, almost as soon as he went pub­lic with his plans he also got more than a few words of warn­ing. Once, Car­roll says, a Cam­bridge Ana­lyt­i­ca employ­ee approached him after a film screen­ing at The New School, shook his hand for a few beats too long, and told him to drop the case. Anoth­er time, Car­roll got a mys­te­ri­ous email about a British jour­nal­ist who had sup­pos­ed­ly been inves­ti­gat­ing SCL when he died sud­den­ly falling down the stairs. “Please don’t for­get how pow­er­ful these indi­vid­u­als are,” the email read.

    It was almost cer­tain­ly a coin­ci­dence, and Car­roll nev­er fol­lowed up with the woman who sent the email. “I didn’t want her to talk me out of it,” Car­roll says. But he still couldn’t help but feel spooked. In the fall of 2017, he right­ly felt like he had a lot to lose.

    The night we met in the cof­fee shop, I asked Car­roll whether all these risks he was tak­ing wor­ried him. He smiled anx­ious­ly and said, “It scares the shit out of me.”

    A few months lat­er, I spot­ted Car­roll across a crowd­ed audi­to­ri­um at Putin­Con, a gath­er­ing of reporters, for­eign pol­i­cy experts, intel­li­gence offi­cials, and pro­fes­sion­al para­noiacs being held in an undis­closed loca­tion in Man­hat­tan. The express pur­pose of the con­fer­ence was to dis­cuss “how Rus­sia is crip­pled by total­i­tar­i­an rule” and explore how Russ­ian pres­i­dent Vladimir Putin’s pow­er “is based in fear, mys­tery, and pro­pa­gan­da.”

    But Car­roll had oth­er mat­ters on his mind. That day, March 16, 2018, his lawyers in Lon­don were final­ly serv­ing SCL with a for­mal legal claim, request­ing dis­clo­sure of his data and lay­ing out their inten­tion to sue for dam­ages. The request had been more than a year in the mak­ing, and Car­roll spent much of the morn­ing dart­ing out to the hall­way, exchang­ing Sig­nal mes­sages with Naik, even though he was scared that any venue host­ing some­thing called Putin­Con must have been hacked.

    After Naik’s col­league served SCL with the paper­work, Car­roll stood look­ing at his phone in sat­is­fied dis­be­lief. “It’s final­ly real,” he told me. “It’s not just an idea any­more.”

    There was one oth­er thing. Car­roll said he’d heard “rum­blings” from British jour­nal­ist Car­ole Cad­wal­ladr that some big news regard­ing Cam­bridge Ana­lyt­i­ca was com­ing from The Guardian and The New York Times. “It’s going to make Face­book look real­ly bad,” he said.

    Less than 24 hours lat­er, Car­roll turned out to be more right than he even knew. The next morn­ing, pho­tos of a pink-haired, self-styled whistle­blow­er and for­mer SCL con­trac­tor named Christo­pher Wylie were splashed across pages of The New York Times and The Guardian. “Revealed: 50 mil­lion Face­book pro­files har­vest­ed for Cam­bridge Ana­lyt­i­ca in major data breach,” read the Guardian head­line. “How Trump Con­sul­tants Exploit­ed the Face­book Data of Mil­lions,” read the Times’. The night before, Face­book had tried to pre­empt the sto­ries, announc­ing it was sus­pend­ing Wylie, Cam­bridge Ana­lyt­i­ca, SCL, and Alek­san­dr Kogan for vio­lat­ing its poli­cies against shar­ing Face­book data with third par­ties.

    That hard­ly helped Facebook’s case. The news did more than make Face­book look bad. It did what his­to­ry may judge to be irrepara­ble dam­age to a com­pa­ny at the peak of its unprece­dent­ed pow­er. Facebook’s stock price plum­met­ed. Zucker­berg was sum­moned to Con­gress. The com­pa­ny gave itself the impos­si­ble task of audit­ing apps that had access to mass amounts of data, and began cut­ting off oth­er devel­op­ers from col­lect­ing even more. Google search­es for “how to delete Face­book” spiked.

    In the end, Face­book CEO Mark Zucker­berg acknowl­edged that as many as 87 mil­lion peo­ple may have been affect­ed by the data intru­sion. Even­tu­al­ly, the Fed­er­al Trade Com­mis­sion launched an inves­ti­ga­tion into whether Face­book vio­lat­ed a 2011 con­sent decree regard­ing its data pri­va­cy prac­tices. “I start­ed Face­book, and at the end of the day I’m respon­si­ble for what hap­pens on our plat­form,” Zucker­berg wrote on Face­book days after the news broke. “While this spe­cif­ic issue involv­ing Cam­bridge Ana­lyt­i­ca should no longer hap­pen with new apps today, that does­n’t change what hap­pened in the past.”

    Ear­li­er this month, The Wash­ing­ton Post report­ed that the FTC is con­sid­er­ing “impos­ing a record-set­ting fine” against Face­book.

    As bad as things were for Face­book, they soon got worse for Cam­bridge Ana­lyt­i­ca. Days after Wylie’s sto­ry first made head­lines, Britain’s Chan­nel 4 News began air­ing a series of dev­as­tat­ing under­cov­er videos that showed the firm’s once sought-after CEO, Alexan­der Nix, dis­cussing using dirty tricks like bribery and black­mail on behalf of clients. In one case, Nix boast­ed that using Ukrain­ian women to entrap politi­cians “works very well.”

    Nix has since denied that the com­pa­ny engages in those prac­tices. “That was just a lie to impress the peo­ple I was talk­ing to,” he told a par­lia­men­tary com­mit­tee last sum­mer. But almost as soon as the videos aired, Nix was replaced as CEO. By May, buried under an avalanche of neg­a­tive press, SCL Group announced it was shut­ting down com­plete­ly and fil­ing for bank­rupt­cy and insol­ven­cy in the US and the UK. Today, just one of its many cor­po­rate properties—SCL Insights—is still up and run­ning.

    As SCL was crum­bling, Carroll’s case took on a new sense of urgency. He was thrown into the media firestorm, criss­cross­ing Man­hat­tan as he dis­cussed his claim on an alpha­bet soup of tele­vi­sion net­works. Sud­den­ly, this wasn’t just a wonky aca­d­e­m­ic endeav­or to retrieve data from some com­pa­ny. It was a sto­ry about res­cu­ing that data from the one com­pa­ny the pub­lic had decid­ed, and Face­book had claimed, was sin­gu­lar­ly sin­is­ter. “Chris Wylie took the sto­ry that I knew was a big deal for a long time and made it a world­wide sto­ry, a house­hold name,” Car­roll says.

    When the news broke in March, the UK’s Infor­ma­tion Commissioner’s Office was already inves­ti­gat­ing SCL for its refusal to hand over Carroll’s data. Car­roll and Naik had filed a com­plaint with the ICO in 2017. But for months, SCL told the reg­u­la­tor that as an Amer­i­can, Car­roll had no more rights to his data “than a mem­ber of the Tal­iban sit­ting in a cave in the remotest cor­ner of Afghanistan.” The ICO dis­agreed. In May, days after SCL declared bank­rupt­cy, the reg­u­la­tor issued an order, direct­ing the firm to give Car­roll his data once and for all. Fail­ure to com­ply with­in 30 days, they warned, would result in crim­i­nal charges.

    SCL nev­er com­plied. Julian Wheat­land, direc­tor of SCL Group, told me he thinks the guilty plea the com­pa­ny issued in Jan­u­ary is a “shame” and says it mere­ly rep­re­sent­ed the path of least resis­tance for SCL’s liq­uida­tors, who over­see the insol­ven­cy pro­ceed­ings and are duty bound to max­i­mize the company’s assets. “There was lit­tle option but to plead guilty, as the cost of fight­ing the case would far out­weigh the cost of plead­ing guilty,” Wheat­land says. SCL’s admin­is­tra­tors declined WIRED’s request for com­ment.

    The ICO fine was ulti­mate­ly measly. Carroll’s slice of it couldn’t buy him more than a Metro­Card and a bag of gro­ceries. It’s also no guar­an­tee he’ll get his data. Naik is still wag­ing that bat­tle on Carroll’s behalf, as SCL’s insol­ven­cy pro­ceed­ings progress. Mean­while, an ICO spokesper­son con­firmed that the office now has access to SCL’s servers and is “assess­ing the mate­r­i­al on them,” which could help to bring Carroll’s infor­ma­tion to light.

    But the ICO’s charges were mean­ing­ful nonethe­less. It clear­ly under­scored the fact that peo­ple out­side the UK had these rights to begin with. “This pros­e­cu­tion, the first against Cam­bridge Ana­lyt­i­ca, is a warn­ing that there are con­se­quences for ignor­ing the law,” the infor­ma­tion com­mis­sion­er, Eliz­a­beth Den­ham, said in a state­ment fol­low­ing the hear­ing. “Wher­ev­er you live in the world, if your data is being processed by a UK com­pa­ny, UK data pro­tec­tion laws apply.”

    By the time I inter­viewed Car­roll in June 2018, about a month after SCL announced it was shut­ting down and just days after the ICO’s dead­line had ticked by, the fear Car­roll felt that first time we met had almost evap­o­rat­ed. We were in Lon­don to hear Cam­bridge Analytica’s fall­en CEO Alexan­der Nix state his case before a com­mit­tee of British par­lia­men­tar­i­ans. It seemed as if the entire cast of char­ac­ters involved in the sto­ry had set­tled into the hear­ing room’s green uphol­stered chairs. There was Cad­wal­ladr, The Guardian reporter who’d cracked the sto­ry open, and Wylie, the pink-haired source who’d helped her do it. Car­roll sat to my right, busi­ly tweet­ing every tense exchange between a defi­ant and defen­sive Nix and his inquisi­tors.

    There was also a doc­u­men­tary film crew, sta­tioned toward the back of the room. They’d been trail­ing Car­roll for months.

    When hus­band and wife team Jehane Nou­jaim and Karim Amer ini­tial­ly set out to make what is now The Great Hack, back in 2015, they planned to fol­low the sto­ry of the Sony Pic­tures breach that had exposed the film studio’s secrets in what US intel­li­gence offi­cials said was an attack by North Korea. But as time went on, their atten­tion, like that of the public’s, shift­ed focus from the ways in which pri­vate infor­ma­tion is straight-up stolen to all of the lit­tle ways we give it away to pow­er­ful cor­po­ra­tions, often with­out real­iz­ing it or know­ing what will hap­pen to it—and cer­tain­ly with­out any way to claw it back.

    That led them to Car­roll. “We were ini­tial­ly drawn to David’s sto­ry because his mis­sion to reclaim his data sum­ma­rized the com­plex­i­ties of this world into a sin­gle ques­tion: What do you know about me?” the direc­tors, who were nom­i­nat­ed for an Acad­e­my Award for their film The Square, wrote in an email. “One of the things that has become increas­ing­ly clear is that whether David gets his data back or not, his case has brought to light some of the largest ques­tions around data pri­va­cy.”

    This week­end, Car­roll will head to Park City, Utah, to see him­self on the big screen. For Naik, the fact that a film like this is debut­ing to main­stream audi­ences rep­re­sents an “aston­ish­ing step in the data rights move­ment.” “This shows a rapid change in inter­est in this field and the inter­est in data rights as a real and enforce­able facet of human rights,” he says.

    Over the past few years, these rights have expand­ed dras­ti­cal­ly. Last May, Europe’s Gen­er­al Data Pro­tec­tion Reg­u­la­tion went into effect across the Euro­pean Union, giv­ing Euro­peans the right to request and delete their data, and requir­ing busi­ness­es to receive informed con­sent before col­lect­ing that data. The law also estab­lished stricter report­ing pro­to­cols around data breach­es and cre­at­ed harsh new penal­ties for those who vio­late them.

    Last sum­mer, the state of Cal­i­for­nia unan­i­mous­ly passed its own pri­va­cy law, which lets res­i­dents of the state see the infor­ma­tion busi­ness­es col­lect on them and request that it be delet­ed. It also enables peo­ple to see which com­pa­nies have pur­chased their data and direct busi­ness­es to stop sell­ing it at all.

    Some of the most influ­en­tial busi­ness lead­ers in the world have simul­ta­ne­ous­ly ral­lied around the cause. In Brus­sels last year, Apple CEO Tim Cook con­demned what he called the “data indus­tri­al com­plex” and called for a fed­er­al law that would pre­vent per­son­al infor­ma­tion from being “weaponized against us with mil­i­tary effi­cien­cy.” Even data gob­blers like Google, Ama­zon, and Face­book have final­ly come out in sup­port of a fed­er­al pri­va­cy law, part­ly due to the fact that such leg­is­la­tion could pre­vent the stricter Cal­i­for­nia bill from tak­ing effect in 2020.

    If Con­gress is ever going to make good on its recent promis­es to crack down on ram­pant data min­ing, this could be the year. So far, sen­a­tor Ron Wyden (D‑Oregon) has float­ed some draft leg­is­la­tion. Sen­a­tor Mar­co Rubio (R‑Florida) pro­posed a bill that would task the FTC with draft­ing new rules. And in Decem­ber, sen­a­tor Bri­an Schatz (D‑Hawaii) intro­duced a bill of his own, cospon­sored by 14 oth­er Democ­rats, that requires com­pa­nies to “rea­son­ably secure” per­son­al­ly iden­ti­fy­ing infor­ma­tion and promise not to use it for harm. It would also force busi­ness­es and the third par­ties they work with to noti­fy users of data breach­es and gives the FTC new author­i­ty to fine vio­la­tors.

    “Just as doc­tors and lawyers are expect­ed to pro­tect and respon­si­bly use the per­son­al data they hold, online com­pa­nies should be required to do the same,” Schatz said in a state­ment when the bill was announced. “Our bill will help make sure that when peo­ple give online com­pa­nies their infor­ma­tion, it won’t be exploit­ed.”

    Car­roll isn’t so sure. He says bills like this hard­ly address the under­ly­ing prob­lem. If data is the new oil pow­er­ing the econ­o­my, then what Schatz is propos­ing is a process for clean­ing up the next oil spill. It’s not a set of safe­ty pro­ce­dures to pre­vent that spill from hap­pen­ing in the first place. That’s what Car­roll says the Unit­ed States, whose home­grown tech giants con­trol so much of the world’s data, des­per­ate­ly needs. He con­tin­ues to believe his SCL file would prove how bad­ly it’s need­ed.

    Cam­bridge Ana­lyt­i­ca may have tak­en data from Face­book that it didn’t have the right to, and Face­book may have made that data too easy to access. But the most over­looked fact in the whole saga is that Cam­bridge Ana­lyt­i­ca wasn’t alone. From data bro­kers that track your every pur­chase to mobile phone car­ri­ers that sell your loca­tion to social media com­pa­nies that give far more detail to devel­op­ers than is nec­es­sary, there’s an invis­i­ble, unreg­u­lat­ed mar­ket­place of per­son­al infor­ma­tion in the Unit­ed States. And it’s no longer just being used to sell us new boots or con­nect us with high school class­mates. It’s being used to influ­ence deci­sions about who the most pow­er­ful peo­ple in the world get to be.

    “They’re not the only ones by any stretch of the imag­i­na­tion,” Car­roll says of SCL. “It’s a dirty busi­ness, but sun­light is the best dis­in­fec­tant.”

    This is, per­haps, the one issue on which Car­roll and Wheat­land, SCL’s direc­tor, see eye to eye. Wheat­land pre­dictably dis­agrees with the broad char­ac­ter­i­za­tion of his com­pa­ny, whose very name has become a proxy for every­thing wrong with the data trade. He says Cam­bridge Ana­lyt­i­ca was a “light­ning rod” for a con­flu­ence of feel­ings about Pres­i­dent Trump’s elec­tion, Face­book, Brex­it, and the ris­ing use of data. “We found our­selves at the nexus of all of those and became the whip­ping boy,” he says.

    He doesn’t find many sym­pa­thet­ic audi­ences for that mes­sage these days. But he, too, says that reg­u­la­tion is imper­a­tive, giv­en the “huge pow­er” of data mod­el­ing. And he too says there’s a risk to cast­ing Cam­bridge Ana­lyt­i­ca as some­how unique. “This is an issue that is much big­ger than one com­pa­ny,” he says. “If we take this vil­lain men­tal­i­ty and think that we’ve moved on, we haven’t. We’ve lost Cam­bridge Ana­lyt­i­ca, but we haven’t moved on at all.”

    ...

    ———-

    “One Man’s Obses­sive Fight to Reclaim His Cam­bridge Ana­lyt­i­ca Data” by Issie Lapowsky; Wired; 01/25/2019

    “A year ago, Carroll filed a legal claim against the London-based conglomerate, demanding to see what was in his profile. Because, with few exceptions, British data protection laws allow people to request data on them that’s been processed in the UK, Carroll believed that even as an American, he had a right to that information. He just had to prove it.”

    Yep, British data protection laws allow people to request data on them that’s been processed in the UK. Does that include all people anywhere in the world? That’s what David Carroll was testing with his lawsuit. And sure enough, in January of 2019 Carroll learned that SCL had pled guilty to defying the British Information Commissioner’s Office order to hand over all of the data it held on him, and SCL was fined. It wasn’t a big fine, but it established some important legal precedents, including the precedent that non-UK citizens have a right to this data too. And in doing so, Carroll implicitly highlighted the kinds of rights US citizens currently don’t have in their own country:

    ...
    Car­roll shuf­fles past me bare­foot, a mug of cof­fee in one hand, his phone in the oth­er. “Enjoy the moment,” he says, read­ing a mes­sage from his lawyer, Ravi Naik, who’s been feed­ing him updates from Lon­don all morn­ing. About an hour lat­er, an email floats into Carroll’s inbox from the British Infor­ma­tion Commissioner’s Office, the reg­u­la­tor that brought the charges. Car­roll turns his phone toward me to reveal the news. Cam­bridge Analytica’s par­ent com­pa­ny, SCL, is being fined the equiv­a­lent of rough­ly $27,000. Carroll’s cut? About $222.

    He couldn’t help but laugh. The sum is insignif­i­cant. The moment, any­thing but.

    When he start­ed out, Car­roll was an under­dog, fac­ing off against a cor­po­ra­tion with ties to the pres­i­dent of the Unit­ed States and backed by bil­lion­aire donor Robert Mer­cer. If he lost, Car­roll would be on the hook for the oppos­ing team’s legal fees, which he wasn’t quite sure how he’d pay.

    But if he won, Car­roll believed he could prove an invalu­able point. He could use that trove of infor­ma­tion he received to show the world just how pow­er­less Amer­i­cans are over their pri­va­cy. He could offer up a con­crete exam­ple of how one man’s information—his super­mar­ket punch card, his online shop­ping habits, his vot­ing patterns—can be bought and sold and weaponized by cor­po­ra­tions and even for­eign enti­ties try­ing to influ­ence elec­tions.

    But more impor­tant­ly, he could show what’s pos­si­ble in coun­tries like the UK where peo­ple actu­al­ly have the right to reclaim some of that pow­er. He could prove why peo­ple in the Unit­ed States, who have no such rights, deserve those same pro­tec­tions.
    ...

    It also turns out Carroll has personal experience with another side of the Facebook data privacy nightmare: in 2014, he went on sabbatical and created a startup that developed a Facebook app. And that’s where he learned just how much data Facebook was giving away to app developers, including via the “friends’ permissions” feature exploited by Cambridge Analytica, which allowed it to use ~270,000 app users to grab ~87 million detailed profiles. It’s a reminder that the Cambridge Analytica scandal wasn’t actually a secret to the thousands of companies on the receiving end of all that data:

    ...
    Car­roll hadn’t always been in acad­e­mia. Dur­ing the dot­com boom and bust, he worked in dig­i­tal mar­ket­ing and watched as adver­tis­ing evolved from the sort of broad brand­ing exer­cise that had been the domin­ion of tele­vi­sion and print to an indus­try dom­i­nat­ed by Google, which used infi­nite quan­ti­ties of user data to hyper-tar­get ads. When he left his mar­ket­ing career to teach full time, Car­roll, who has an MFA in design and tech­nol­o­gy, trans­formed from an indus­try par­tic­i­pant to a chief crit­ic, lec­tur­ing stu­dents on what he calls the “myth” that adver­tis­ing doesn’t work if it’s not tar­get­ed.

    In 2014, while he was on sab­bat­i­cal, Car­roll began work­ing on a start­up called Glossy, which inte­grat­ed with Face­book to rec­om­mend arti­cles from mag­a­zine archives based on users’ inter­ests. The idea nev­er took off; Car­roll couldn’t get fund­ing, and his ear­ly employ­ees got quick­ly poached by tech giants. But he got just far enough to see how much user data Face­book was will­ing to give away in the name of growth. At the time, the social net­work­ing giant allowed devel­op­ers to slurp up data not just from their own users but from their users’ friends, all with­out their friends’ aware­ness or explic­it con­sent. Face­book did­n’t offi­cial­ly end this pol­i­cy until April 2015, and con­tin­ued to give some devel­op­ers access even after that.

    “I saw how the sausage got made and how easy it was to amass data and cre­ate a sur­veil­lance infra­struc­ture,” Car­roll says.
    ...
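
    For anyone who never saw that pre-2015 developer platform in action, here’s a schematic Python sketch of roughly how the friends-data grab worked. The endpoint paths and permission behavior follow the old v1.0-era Graph API as commonly described, but treat the details as illustrative assumptions rather than a faithful reconstruction of any particular app:

```python
import requests

GRAPH = "https://graph.facebook.com"  # v1.0-era endpoints, long since locked down

def harvest(user_token: str) -> dict:
    """Sketch of the pre-April-2015 pattern: one consenting app user's
    access token could unlock data about all of that user's friends."""
    profiles = {}
    # The app user's own profile: they did consent, via the login dialog.
    me = requests.get(f"{GRAPH}/me", params={"access_token": user_token}).json()
    profiles[me["id"]] = me
    # Their friend list, then each friend's likes, exposed by the old
    # 'friends_*'-style permissions the friends themselves never granted.
    friends = requests.get(
        f"{GRAPH}/me/friends", params={"access_token": user_token}
    ).json().get("data", [])
    for friend in friends:
        likes = requests.get(
            f"{GRAPH}/{friend['id']}/likes", params={"access_token": user_token}
        ).json()
        profiles[friend["id"]] = {"name": friend.get("name"), "likes": likes}
    return profiles
```

    One consenting quiz-taker with a few hundred friends is the multiplier that turned ~270,000 app installs into tens of millions of harvested profiles.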

    Carroll ended up suing for the data after teaming up with a researcher, Paul-Olivier Dehaye, who was already investigating SCL over its role in the Brexit campaign. Dehaye wanted to see if SCL really had the detailed data profiles it claimed, but he also specifically wanted to test whether British courts would find that non-UK residents have the right to make a data request. So he started reaching out to American academics, and that put him in contact with Carroll:

    ...
    He wasn’t the only one. Thou­sands of miles away, in Gene­va, Switzer­land, a researcher named Paul-Olivi­er Dehaye, who now runs a dig­i­tal rights non­prof­it called PersonalData.IO, was deep into a months-long inves­ti­ga­tion of SCL. At the time, he was try­ing to answer a fun­da­men­tal ques­tion about the com­pa­ny that was also rumored to have played a hand in pro­mot­ing the Brex­it ref­er­en­dum: Did Cam­bridge Ana­lyt­i­ca real­ly know as much as it claimed? Or was it just sell­ing snake oil? One way to answer that ques­tion con­clu­sive­ly, Dehaye believed, would be to see what infor­ma­tion the com­pa­ny actu­al­ly held.

    The UK’s Data Pro­tec­tion Act guar­an­tees the right to access data that’s processed with­in the UK. But in the past, it was main­ly British res­i­dents who had exer­cised that right. Few had ever test­ed whether the law applied to peo­ple out­side of the coun­try as well. Dehaye believed this would be the per­fect oppor­tu­ni­ty to try, so he began reach­ing out to Amer­i­can aca­d­e­mics, activists, and jour­nal­ists, urg­ing them to sub­mit what is known as a “sub­ject access request” to the com­pa­ny. It was Amer­i­cans’ data, after all, that Cam­bridge Ana­lyt­i­ca seemed most inter­est­ed in. Car­roll was one of Dehaye’s tar­gets.
    ...

    Initially, in early 2017, SCL appeared to try to placate Carroll by emailing him an Excel spreadsheet of information that included a range of different metrics for Carroll’s political beliefs. But it still seemed incomplete, so Carroll sued, arguing that giving him an incomplete set of the data SCL held on him represented a breach of UK law:

    ...
    In ear­ly 2017, Car­roll sub­mit­ted his request, along with a copy of his driver’s license, his elec­tric bill, and a £10 fee, which Dehaye paid. Then he wait­ed. Dehaye nev­er real­ly expect­ed Car­roll to receive a response. In fact the sto­ry may have end­ed there, had SCL denied that Car­roll had the right to his data from the out­set. “They could have just said UK law doesn’t apply to you because you’re an Amer­i­can,” Dehaye says.

    Instead, one Mon­day morn­ing about a month lat­er, as Car­roll sat alone in his apart­ment, sip­ping cof­fee at the din­ing room table, an email land­ed in his inbox from the data com­pli­ance team at SCL Group. It includ­ed a let­ter signed by the company’s chief oper­at­ing offi­cer, Julian Wheat­land, and an Excel file lay­ing out in neat­ly arranged rows and columns exact­ly who Car­roll is—where he lives, how he’s vot­ed, and, most inter­est­ing­ly to Car­roll, how much he cares about issues like the nation­al debt, immi­gra­tion, and gun rights, on a scale of one to 10. Car­roll had no way of know­ing what infor­ma­tion informed those rank­ings; the thou­sands of data points Cam­bridge Ana­lyt­i­ca sup­pos­ed­ly used to build these pre­dic­tions were nowhere to be found.

    “I felt very invad­ed per­son­al­ly, but then I also saw it was such a pub­lic inter­est issue,” Car­roll says.

    He prompt­ly tweet­ed out his find­ings. To Car­roll, his file seemed woe­ful­ly incom­plete. But to Dehaye and oth­er experts of the inter­net, it seemed like exact­ly what he need­ed to prove a case. In answer­ing Car­roll at all, Dehaye argued, SCL con­ced­ed that even as an Amer­i­can, he was enti­tled to his data. But in show­ing him only the small­est slice of that data, Car­roll and Dehaye believed, SCL had bro­ken the law.
    ...

    Dehaye then put Carroll in contact with a British human rights lawyer, Ravi Naik, and in April of 2017 they sent SCL the legal letter stating their case that the Excel file Carroll received was a violation of the law. SCL refused to give additional data. And when asked why SCL refused Carroll’s request, then-CEO Alexander Nix said the company had no legal obligation and was concerned about opening up a “bottomless pit” of requests that the company lacked the resources to deal with:

    ...
    Dehaye put Car­roll in touch with Ravi Naik, a British human rights lawyer, who had worked on data rights cas­es in the past. “Imme­di­ate­ly, he was like, ‘This is going to be a mas­sive case. It’s going to set prece­dents,’” Car­roll says.

    Still, Naik was cau­tious, know­ing that the case law regard­ing for­eign­ers gain­ing access to their data was extreme­ly lim­it­ed, rest­ing on just two cas­es where death row inmates from Thai­land and Kenya had attempt­ed to get their data from the British police. But Naik also viewed Carroll’s case as the begin­ning of a new civ­il rights move­ment. “It’s real­ly bal­anc­ing the rights of indi­vid­u­als against those with mass pow­er,” Naik says.

    In April 2017, Car­roll and Naik sent what’s known as a “pre-action” let­ter to SCL, lay­ing out a legal claim. In the UK, these let­ters are used to deter­mine if lit­i­ga­tion can be avoid­ed. In the let­ter, Naik and Car­roll argued that not only had SCL vio­lat­ed the UK’s Data Pro­tec­tion Act by fail­ing to give Car­roll all of the under­ly­ing data, the com­pa­ny hadn’t received the prop­er con­sent to process data relat­ed to his polit­i­cal views to begin with. Under the law, polit­i­cal opin­ions are con­sid­ered sen­si­tive data.

    Once again, Car­roll got no addi­tion­al data in return. Accord­ing to Alexan­der Nix, Cam­bridge Ana­lyt­i­ca’s then-CEO, the com­pa­ny shared cer­tain data with Car­roll as a ges­ture of “good faith,” but received legal advice that for­eign­ers did­n’t have rights under the Data Pro­tec­tion Act. Asked why the com­pa­ny did­n’t share more of that data, Nix said, “There was no legal rea­son to com­ply with this request, and it might be open­ing ... a bot­tom­less pit of sub­ject access requests in the Unit­ed States that we would be unable to ful­fill just through the sheer vol­ume of requests in com­par­i­son to the size of the com­pa­ny.” (After answer­ing WIRED’s ques­tions, Nix retroac­tive­ly asked for these answers to be off the record. WIRED declined.)
    ...

    Carroll even had to deal with a weird intimidation attempt by a Cambridge Analytica employee, and in the fall of 2017 he received a warning about a journalist who had been investigating SCL and died suddenly falling down a flight of stairs. So if Carroll drops dead between now and when his ongoing lawsuit is resolved, that's going to look extra suspicious:

    ...
    But for all the encour­age­ment Car­roll received, almost as soon as he went pub­lic with his plans he also got more than a few words of warn­ing. Once, Car­roll says, a Cam­bridge Ana­lyt­i­ca employ­ee approached him after a film screen­ing at The New School, shook his hand for a few beats too long, and told him to drop the case. Anoth­er time, Car­roll got a mys­te­ri­ous email about a British jour­nal­ist who had sup­pos­ed­ly been inves­ti­gat­ing SCL when he died sud­den­ly falling down the stairs. “Please don’t for­get how pow­er­ful these indi­vid­u­als are,” the email read.

    It was almost cer­tain­ly a coin­ci­dence, and Car­roll nev­er fol­lowed up with the woman who sent the email. “I didn’t want her to talk me out of it,” Car­roll says. But he still couldn’t help but feel spooked. In the fall of 2017, he right­ly felt like he had a lot to lose.

    The night we met in the cof­fee shop, I asked Car­roll whether all these risks he was tak­ing wor­ried him. He smiled anx­ious­ly and said, “It scares the shit out of me.”
    ...

    Flash forward to March 16, 2018, days before the Cambridge Analytica scandal erupts, and Carroll serves SCL with the formal legal claim of his intent to sue. In May, the British Information Commissioner's Office (ICO) rules in Carroll's favor, giving SCL 30 days to comply. The 30 days go by without compliance and SCL gets fined. It's a small sum, $27,000, of which only $222 went to Carroll. But as a legal precedent it could be quite significant, especially when it comes to giving Americans a taste of what having meaningful data privacy laws feels like:

    ...
    A few months lat­er, I spot­ted Car­roll across a crowd­ed audi­to­ri­um at Putin­Con, a gath­er­ing of reporters, for­eign pol­i­cy experts, intel­li­gence offi­cials, and pro­fes­sion­al para­noiacs being held in an undis­closed loca­tion in Man­hat­tan. The express pur­pose of the con­fer­ence was to dis­cuss “how Rus­sia is crip­pled by total­i­tar­i­an rule” and explore how Russ­ian pres­i­dent Vladimir Putin’s pow­er “is based in fear, mys­tery, and pro­pa­gan­da.”

    But Car­roll had oth­er mat­ters on his mind. That day, March 16, 2018, his lawyers in Lon­don were final­ly serv­ing SCL with a for­mal legal claim, request­ing dis­clo­sure of his data and lay­ing out their inten­tion to sue for dam­ages. The request had been more than a year in the mak­ing, and Car­roll spent much of the morn­ing dart­ing out to the hall­way, exchang­ing Sig­nal mes­sages with Naik, even though he was scared that any venue host­ing some­thing called Putin­Con must have been hacked.

    After Naik’s col­league served SCL with the paper­work, Car­roll stood look­ing at his phone in sat­is­fied dis­be­lief. “It’s final­ly real,” he told me. “It’s not just an idea any­more.”

    There was one oth­er thing. Car­roll said he’d heard “rum­blings” from British jour­nal­ist Car­ole Cad­wal­ladr that some big news regard­ing Cam­bridge Ana­lyt­i­ca was com­ing from The Guardian and The New York Times. “It’s going to make Face­book look real­ly bad,” he said.

    Less than 24 hours lat­er, Car­roll turned out to be more right than he even knew. The next morn­ing, pho­tos of a pink-haired, self-styled whistle­blow­er and for­mer SCL con­trac­tor named Christo­pher Wylie were splashed across pages of The New York Times and The Guardian. “Revealed: 50 mil­lion Face­book pro­files har­vest­ed for Cam­bridge Ana­lyt­i­ca in major data breach,” read the Guardian head­line. “How Trump Con­sul­tants Exploit­ed the Face­book Data of Mil­lions,” read the Times’. The night before, Face­book had tried to pre­empt the sto­ries, announc­ing it was sus­pend­ing Wylie, Cam­bridge Ana­lyt­i­ca, SCL, and Alek­san­dr Kogan for vio­lat­ing its poli­cies against shar­ing Face­book data with third par­ties.

    ...

    When the news broke in March, the UK’s Infor­ma­tion Commissioner’s Office was already inves­ti­gat­ing SCL for its refusal to hand over Carroll’s data. Car­roll and Naik had filed a com­plaint with the ICO in 2017. But for months, SCL told the reg­u­la­tor that as an Amer­i­can, Car­roll had no more rights to his data “than a mem­ber of the Tal­iban sit­ting in a cave in the remotest cor­ner of Afghanistan.” The ICO dis­agreed. In May, days after SCL declared bank­rupt­cy, the reg­u­la­tor issued an order, direct­ing the firm to give Car­roll his data once and for all. Fail­ure to com­ply with­in 30 days, they warned, would result in crim­i­nal charges.

    SCL nev­er com­plied. Julian Wheat­land, direc­tor of SCL Group, told me he thinks the guilty plea the com­pa­ny issued in Jan­u­ary is a “shame” and says it mere­ly rep­re­sent­ed the path of least resis­tance for SCL’s liq­uida­tors, who over­see the insol­ven­cy pro­ceed­ings and are duty bound to max­i­mize the company’s assets. “There was lit­tle option but to plead guilty, as the cost of fight­ing the case would far out­weigh the cost of plead­ing guilty,” Wheat­land says. SCL’s admin­is­tra­tors declined WIRED’s request for com­ment.

    The ICO fine was ulti­mate­ly measly. Carroll’s slice of it couldn’t buy him more than a Metro­Card and a bag of gro­ceries. It’s also no guar­an­tee he’ll get his data. Naik is still wag­ing that bat­tle on Carroll’s behalf, as SCL’s insol­ven­cy pro­ceed­ings progress. Mean­while, an ICO spokesper­son con­firmed that the office now has access to SCL’s servers and is “assess­ing the mate­r­i­al on them,” which could help to bring Carroll’s infor­ma­tion to light.

    But the ICO’s charges were mean­ing­ful nonethe­less. It clear­ly under­scored the fact that peo­ple out­side the UK had these rights to begin with. “This pros­e­cu­tion, the first against Cam­bridge Ana­lyt­i­ca, is a warn­ing that there are con­se­quences for ignor­ing the law,” the infor­ma­tion com­mis­sion­er, Eliz­a­beth Den­ham, said in a state­ment fol­low­ing the hear­ing. “Wher­ev­er you live in the world, if your data is being processed by a UK com­pa­ny, UK data pro­tec­tion laws apply.”
    ...

    So it’s going to be inter­est­ing to see how the abil­i­ty to sue UK-based com­pa­nies impacts the US data pri­va­cy reg­u­la­to­ry envi­ron­ment, an envi­ron­ment that basi­cal­ly has no reg­u­la­tions. It’s cer­tain­ly going to com­pli­cate the use of UK-based dig­i­tal psy­cho­log­i­cal war­fare firms like SCL and Cam­bridge Ana­lyt­i­ca for polit­i­cal dirty tricks oper­a­tions.

    And we can’t for­get that we still have no idea how much Cam­bridge Ana­lyt­i­ca actu­al­ly knows about Car­roll. All he’s received is an Excel sheet that he was con­fi­dent was incom­plete and SCL has refused to give him the com­plete pro­file and the law­suit is ongo­ing. You have to won­der just how big that full pro­file real­ly is. And now that Car­roll has estab­lished that Amer­i­cans have a legal right to this data, it’s hard to imag­ine there aren’t going to be a lot more requests made going for­ward.

    If Cam­bridge Ana­lyt­i­ca’s pro­fessed night­mare sce­nario emerges and it gets mass requests for the full data pro­files, that also rais­es the ques­tion of how that could change how Amer­i­cans view them­selves when they get what will prob­a­bly be the first high­ly detailed mar­ket­ing pro­file on them­selves. A pro­file that knows more about you than you do. Imag­ine if the full pro­file, which pre­sum­ably was a com­pi­la­tion of what Cam­bridge Ana­lyt­i­ca col­lect­ed from the Face­book app com­bined with what they could pur­chase about peo­ple in the large data bro­ker­age mar­ket, is just a real­ly com­pelling read for peo­ple. The Cam­bridge Ana­lyt­i­ca scan­dal could rep­re­sent a valu­able way for peo­ple to learn what that larg­er mar­ket­ing indus­try knows about them because Cam­bridge Ana­lyt­i­ca com­bined third-par­ty data with its own data. Will Amer­i­cans be shocked by the rich­ness of those details known about them or shrug it off? We’ll see, but it’s going to be worth keep­ing in mind that thanks to David Car­rol­l’s law­suit the Cam­bridge Ana­lyt­i­ca scan­dals now rep­re­sents a rare oppor­tu­ni­ty to teach the pub­lic some very impor­tant lessons about the scale of the per­son­al data bro­ker­age indus­try. If the full Cam­bridge Anlyt­i­ca pro­files Car­roll is still suing to get released end up being a com­pelling read for peo­ple it’s a great oppor­tu­ni­ty to teach the pub­lic about the kinds of data being col­lect­ed on them. We don’t know why SCL con­tin­ues to refuse to give Car­roll the full pro­file they hold on him. But one obvi­ous pos­si­bil­i­ty is that the full scope of what they know about him would shock the pub­lic and that’s why they’re refus­ing. And a shock­ing­ly mas­sive and detailed pro­file on you would prob­a­bly be a pret­ty com­pelling read for you so it seems pos­si­ble we could see a lot of peo­ple request their pro­files if David Car­rol­l’s suit wins out. Assum­ing SCL does­n’t destroy the data or finds some oth­er excuse for not releas­ing it.

    Given that the Cambridge Analytica scandal in the US has from the beginning been rooted in the larger scandal of a US data collection industry that operates with few regulatory restraints, it's perhaps appropriate if the one stakeholder in the data brokerage industry that normally never gets to see the data, the public, finally gets to see themselves the way the panopticon sees them. Transparency for the panopticon: it may not be the ultimate regulatory solution, but it's a start.

    Posted by Pterrafractyl | February 18, 2019, 12:11 am
  32. Cambridge Analytica whistle-blower Christopher Wylie has a new book out about his experiences at the company, and based on the following excerpt it's sounding like a must-read. For starters, Wylie describes a scene where his team unveils their core product to Cambridge Analytica's investors and officers: a database of millions of Americans filled with details about their lives. During this meeting, they bring up records of random people and then proceed to test the accuracy of the data by calling these people on the phone and pretending to conduct a survey. The various investors, including Steve Bannon, all take turns calling random Americans to see if the data in their files about their likes and dislikes matches their answers on the phone.

    Another major detail in Wylie's description of this initial Cambridge Analytica database is that the harvested Facebook data was just one of many data sources in the Cambridge Analytica model. It included all sorts of commercially available databases and state databases, with information like mortgage applications or Google Maps satellite photos of people's homes. A cutting-edge, full-spectrum personal profile that included psychological profiles: that's what Cambridge Analytica offered.
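    To make that merging concrete, here's a minimal, purely illustrative sketch. The field values are drawn from Wylie's "Jenny Smith" demo quoted below, but the record layout and code are invented for illustration; this is not Cambridge Analytica's actual system:

        # Minimal sketch (invented layout) of the record linkage Wylie describes:
        # separate data sources keyed to the same person, merged into one profile.

        facebook = {"jenny_smith": {"likes": ["Katy Perry", "Game of Thrones"]}}
        commercial = {"jenny_smith": {"mortgage_application": True,
                                      "airline_flights_per_year": 9}}
        state = {"jenny_smith": {"voted_2012": "Romney"}}

        def merge_profiles(key, *sources):
            # Pull this person's record out of each source and fold the
            # fields into a single dictionary.
            merged = {}
            for source in sources:
                merged.update(source.get(key, {}))
            return merged

        print(merge_profiles("jenny_smith", facebook, commercial, state))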

    But probably the biggest revelation in this excerpt from Wylie's book involves Palantir. Because that revelation also involves a desire by US intelligence agencies to use Palantir to buy Cambridge Analytica's data as a means of mass harvesting the kind of information the US government and national security contractors like Palantir aren't legally allowed to collect. The loophole is that the collection is allowed if the data is freely volunteered by individuals or companies. So the government gets to use the data profiles Facebook was freely offering to everyone else by hiring Palantir to hire Cambridge Analytica to mass harvest them from Facebook users, and merge them with all the other "freely available" commercial databases. Recall that Cambridge Analytica's psychological profiles were built from an online test that grabbed profiles on about a quarter of a million people, and the "friends permissions" option then boosted that to at least 87 million Facebook users whose psychological profiles were inferred algorithmically. That's a pretty big new detail in the Cambridge Analytica scandal.
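    For a sense of the arithmetic behind that amplification, here's a back-of-the-envelope sketch. The ~300-friends-per-respondent figure comes from Wylie's excerpt below and the installer count from the earlier reporting; the rest is illustrative:

        # Back-of-the-envelope arithmetic for the "friends permissions" loophole.
        installers = 270_000             # people who actually installed the app
        avg_friends_per_installer = 300  # Wylie: one response ~300 friend records

        raw_friend_records = installers * avg_friends_per_installer
        print(f"{raw_friend_records:,} raw friend records")  # 81,000,000

        # Friend lists overlap, so the unique-profile count differs from the
        # raw product, but it lands in the same order of magnitude as the
        # 87 million unique profiles eventually confirmed.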

    On one lev­el, it’s not remote­ly sur­pris­ing that the gov­ern­ment would use the data Face­book makes avail­able to app devel­op­ers like Cam­bridge Ana­lyt­i­ca because Face­book is sell­ing that info to every­one else, includ­ing oth­er gov­ern­ments pre­sum­ably. Com­mer­cial­iz­ing that data is Face­book’s ulti­mate busi­ness mod­el and sell­ing ads is only one part of that com­mer­cial­iza­tion. But on anoth­er lev­el, if Face­book acts as a legal loop­hole that allows gov­ern­ment agen­cies and con­trac­tors like Palan­tir to incor­po­rate that mass har­vest­ed trea­sure trove of per­son­al data into gov­ern­ment data­bas­es that is actu­al­ly a very big deal. Face­book would­n’t be alone in offer­ing mass har­vest­ed data pro­files but Face­book is the leader in com­mer­cial­iz­ing mass har­vest­ed data pro­files so it’s sort of pro­vid­ing the cut­ting edge for gov­ern­ment use data pro­files on peo­ple.

    But even Face­book’s cut­ting edge pro­files would still just be one part of these super-pro­file data­bas­es that Cam­bridge Ana­lyt­i­ca was build­ing. That’s what was pow­er this wing of the Trump 2016 pro­pa­gan­da effort: super-data­bas­es on peo­ple so pow­er­ful that Palan­tir and the gov­ern­ment want­ed to rent it.

    And that’s just the case for the US gov­ern­ment. The gov­ern­ments all over have prob­a­bly set up all sorts of Face­book apps to mass har­vest pro­files on as many peo­ple as pos­si­ble all over the globe. But in this case in sounds like Cam­bridge Ana­lyt­i­ca was the com­pa­ny that was blaz­ing the trail for the US nation­al secu­ri­ty state for this kind of gov­ern­ment deep pro­file build­ing. We can throw that on the pile of Cam­bridge Ana­lyt­i­ca’s accom­plish­ments. It’s an exam­ple of why we should­n’t assume we’ve heard the worst of Cam­bridge Ana­lyt­i­ca. Each new twist is a new low:

    New York Mag­a­zine
    Intel­li­gencer

    How I Helped Hack Democ­ra­cy

    By Christo­pher Wylie
    Oct. 4, 2019

    From the book MINDF*CK: Cam­bridge Ana­lyt­i­ca and the Plot to Break Amer­i­ca, by Christo­pher Wylie. Copy­right © 2019 by Ver­be­na Lim­it­ed. Pub­lished by Ran­dom House, an imprint and divi­sion of Pen­guin Ran­dom House LLC. All rights reserved.

    At first, it was the most anti­cli­mac­tic project launch in his­to­ry. Noth­ing hap­pened. Five, ten, fif­teen min­utes went by, and peo­ple start­ed shuf­fling around in antic­i­pa­tion. “What the fuck is this?” Cam­bridge Analytica’s CEO, Alexan­der Nix, barked. “Why are we stand­ing here?”

    It was June 2014. Fresh out of uni­ver­si­ty the pre­vi­ous year, I had tak­en a job at a Lon­don firm called SCL Group, which was sup­ply­ing the U.K. Min­istry of Defence and NATO armies with exper­tise in infor­ma­tion oper­a­tions. West­ern mil­i­taries were grap­pling with how to tack­le rad­i­cal­iza­tion online, and the firm want­ed me to help build a team of data sci­en­tists to cre­ate new tools to iden­ti­fy and com­bat inter­net extrem­ism. It was fas­ci­nat­ing, chal­leng­ing, and excit­ing all at once. We thought we would break new ground for the cyber defens­es of Britain, Amer­i­ca, and their allies and con­front bub­bling insur­gen­cies with data, algo­rithms, and tar­get­ed nar­ra­tives online. Then bil­lion­aire Robert Mer­cer acquired our project. His invest­ment was used to fund an off­shoot of SCL, which Steve Ban­non named Cam­bridge Ana­lyt­i­ca.

    By now peo­ple are famil­iar with the com­pa­ny: They have heard sto­ries about how it used per­son­al­i­ty pro­files built from Face­book inter­ac­tions to tar­get and sway poten­tial vot­ers; seen it debat­ed before Con­gress; or read that it recent­ly inspired Face­book to sus­pend tens of thou­sands of apps for improp­er­ly access­ing data. Some have claimed CA helped sway the elec­tion for Trump, while oth­ers have said com­pa­ny exec­u­tives exag­ger­at­ed their influ­ence. I first met Mer­cer in Novem­ber 2013, in a meet­ing held at his daugh­ter Rebekah’s apart­ment on the Upper West Side. Over the years, the hedge-fund CEO had donat­ed mil­lions of dol­lars to con­ser­v­a­tive cam­paigns. But in the months lead­ing up to the launch, I believed Mercer’s inter­est in our work was pri­mar­i­ly for its com­mer­cial poten­tial, not pol­i­tics. If we could copy everyone’s data pro­files and repli­cate soci­ety in a com­put­er — like the game The Sims but with real people’s data — we could sim­u­late and fore­cast what would hap­pen in soci­ety and the mar­ket. If you can pre­dict what peo­ple will buy or not buy, or see a crash com­ing, you have the all-see­ing orb for soci­ety. You might make bil­lions overnight.

    We had spent sev­er­al weeks cal­i­brat­ing every­thing, mak­ing sure the app worked, that it would pull in the right data, and that every­thing matched when it inject­ed the data into the inter­nal data­bas­es. We were stand­ing by the com­put­er in Lon­don, and Dr. Alek­san­dr Kogan, a pro­fes­sor who spe­cial­ized in com­pu­ta­tion­al mod­el­ing of psy­cho­log­i­cal traits, was in Cam­bridge. Kogan launched the app, and some­one said, “Yay.” With that, we were live.

    The app worked in con­cert with Ama­zon Mechan­i­cal Turk, or MTurk. Researchers would invite MTurk mem­bers to take a short test, in exchange for a small pay­ment. But in order to get paid, they would have to down­load our app on Face­book and input a spe­cial code. The app, which we called “This Is Your Dig­i­tal Life,” would take all the respons­es from the sur­vey and put those into one table. It would then pull all of the user’s Face­book data and put it into a sec­ond table. And then it would pull all the data for all the person’s Face­book friends and put that into anoth­er table.

    One person’s response would, on aver­age, pro­duce the records of three hun­dred oth­er peo­ple. Each of those peo­ple would have, say, a cou­ple hun­dred likes that we could ana­lyze. We need­ed to orga­nize and track all of those likes. How many pos­si­ble items, pho­tos, links, and pages are there to like across all of Face­book? Tril­lions. A Face­book page for some ran­dom band in Okla­homa, for exam­ple, might have 28 likes in the whole coun­try, but it still counts as its own like in the fea­ture set. We put $100,000 into the account to start recruit­ing peo­ple via MTurk, then wait­ed.

    I knew that it would take a bit of time for peo­ple to see the sur­vey on MTurk, fill it out, then install the app to get paid. Not long after the under­whelm­ing launch, we saw our first hit.

    Then the flood came. We got our first record, then two, then 20, then 100, then 1,000 — all with­in sec­onds. Chief tech­nol­o­gy offi­cer Tadas Jucikas added a ran­dom beep­ing sound to a record counter, and his com­put­er start­ed going boop-boop-boop as the num­bers went insane. The incre­ments of zeroes just kept build­ing, grow­ing the tables at expo­nen­tial rates as friend pro­files were added to the data­base. This was excit­ing for every­one, but for the data sci­en­tists among us, it was like an injec­tion of pure adren­a­line.

    **********

    Ban­non start­ed trav­el­ing to Lon­don more fre­quent­ly, to check on our progress. One of those vis­its hap­pened to be not long after we launched the app. We all went into the board­room again, with the giant screen at the front of the room. Jucikas made a brief pre­sen­ta­tion before turn­ing to Ban­non.

    “Give me a name.”

    Ban­non looked bemused and gave a name.

    “Okay. Now give me a state.”

    “I don’t know,” he said. “Nebras­ka.”

    Jucikas typed in a query, and a list of links popped up. He clicked on one of the many peo­ple who went by that name in Nebras­ka — and there was every­thing about her, right up on the screen. Here’s her pho­to, here’s where she works, here’s her house. Here are her kids, this is where they go to school, this is the car she dri­ves. She vot­ed for Mitt Rom­ney in 2012, she loves Katy Per­ry, she dri­ves an Audi. And not only did we have all her Face­book data, but we were merg­ing it with all the com­mer­cial and state bureau data we’d bought as well. And impu­ta­tions made from the U.S. Cen­sus. We had data about her mort­gage appli­ca­tions, we knew how much mon­ey she made, whether she owned a gun. We had infor­ma­tion from her air­line mileage pro­grams, so we knew how often she flew. We could see if she was mar­ried (she wasn’t). And we had a satel­lite pho­to of her house, eas­i­ly obtained from Google Earth. We had re-cre­at­ed her life in our com­put­er. She had no idea.

    “Give me anoth­er,” said Jucikas. And he did it again. And again. And by the third pro­file, Nix sud­den­ly sat up very straight.

    “Wait,” he said, his eyes widen­ing behind his black-rimmed glass­es. “How many of these do we have?”

    “We’re in the tens of mil­lions now,” said Jucikas. “At this pace, we could get to 200 mil­lion by the end of the year with enough fund­ing.”

    “Do we have their phone num­bers?” Nix asked. I told him we did. And then he reached for the speak­er­phone and asked for the num­ber. As Jucikas relayed it to him, he punched in the num­ber.

    After a cou­ple of rings, some­one picked up. We heard a woman say, “Hel­lo?” and Nix, in his most posh accent, said, “Hel­lo, ma’am. I’m ter­ri­bly sor­ry to both­er you, but I’m call­ing from the Uni­ver­si­ty of Cam­bridge. We are con­duct­ing a sur­vey. Might I speak with Ms. Jen­ny Smith, please?” The woman con­firmed that she was Jen­ny, and Nix start­ed ask­ing her ques­tions based on what we knew from her data.

    “Ms. Smith, I’d like to know, what is your opin­ion of the tele­vi­sion show Game of Thrones?” Jen­ny raved about it — just as she had on Face­book. “Did you vote for Mitt Rom­ney in the last elec­tion?” Jen­ny con­firmed that she had. Nix asked whether her kids went to such-and-such ele­men­tary school, and Jen­ny con­firmed that, too. When I looked over at Ban­non, he had a huge grin on his face.

    After Nix hung up with Jen­ny, Ban­non said, “Let me do one!” We went around the room, all of us tak­ing a turn. It was sur­re­al to think that these peo­ple were sit­ting in their kitchen in Iowa or Okla­homa or Indi­ana, talk­ing to a bunch of guys in Lon­don who were look­ing at satel­lite pic­tures of where they lived, fam­i­ly pho­tos, all of their per­son­al infor­ma­tion. Look­ing back, it’s crazy to think that Ban­non — who then was a total unknown, still more than a year away from gain­ing infamy as an advis­er to Don­ald Trump — sat in our office call­ing ran­dom Amer­i­cans to ask them per­son­al ques­tions. And peo­ple were more than hap­py to answer him.

    We had done it. We had recon­struct­ed tens of mil­lions of Amer­i­cans inside of a com­put­er, with poten­tial­ly hun­dreds of mil­lions more to come. This was an epic moment. I was proud that we had cre­at­ed some­thing so pow­er­ful. I felt sure it was some­thing that peo­ple would be talk­ing about for decades.

    ************

    By August 2014, just two months after we launched the app, Cam­bridge Ana­lyt­i­ca had col­lect­ed the com­plete Face­book accounts of more than 87 mil­lion users, most­ly from Amer­i­ca. They soon exhaust­ed the list of MTurk users and had to engage anoth­er com­pa­ny, Qualtrics, a sur­vey plat­form based in Utah. Almost imme­di­ate­ly, CA became one of their top clients and start­ed receiv­ing bags of Qualtrics-brand­ed good­ies. CA would get invoic­es sent from Pro­vo, billing them each time for 20,000 new users in their “Face­book Data Har­vest Project.”

    As soon as CA start­ed col­lect­ing this Face­book data, exec­u­tives from Palan­tir, Peter Thiel’s data-min­ing firm, start­ed mak­ing inquiries; their inter­est was appar­ent­ly piqued when they found out how much data the team was gath­er­ing — and that Face­book was just let­ting CA do it. The exec­u­tives CA met with want­ed to know how the project worked, and soon they approached our team about get­ting access to the data them­selves.

    Palan­tir was still doing work for the NSA and GCHQ. Staffers there said work­ing with Cam­bridge Ana­lyt­i­ca could poten­tial­ly open an inter­est­ing legal loop­hole: Gov­ern­ment secu­ri­ty agen­cies, along with con­trac­tors like Palan­tir, couldn’t legal­ly mass-har­vest per­son­al data on Amer­i­can cit­i­zens, but polling com­pa­nies, social net­works, and pri­vate com­pa­nies could. And despite the ban on direct­ly sur­veilling Amer­i­cans, I was told that U.S. intel­li­gence agen­cies were nonethe­less able to make use of infor­ma­tion on Amer­i­can cit­i­zens that was “freely vol­un­teered” by U.S. indi­vid­u­als or com­pa­nies. I didn’t think any­one was actu­al­ly being seri­ous, but I soon real­ized that I under­es­ti­mat­ed everyone’s inter­est in access­ing this data (which was sur­pris­ing­ly easy to acquire through Face­book, with Facebook’s loose­ly super­vised per­mis­sion­ing pro­ce­dures).

    Some of the staff work­ing at Palan­tir real­ized that Face­book had the poten­tial to become the best dis­creet sur­veil­lance tool imag­in­able for the NSA — that is, if that data was “freely vol­un­teered” by anoth­er enti­ty. To be clear, these con­ver­sa­tions were spec­u­la­tive, and it is unclear if Palan­tir itself was actu­al­ly aware of the par­tic­u­lars of these dis­cus­sions, or if the com­pa­ny received any CA data. The staff sug­gest­ed to Nix that if Cam­bridge Ana­lyt­i­ca gave them access to the har­vest­ed data, they could then, at least in the­o­ry, legal­ly pass it along to the NSA. One lead data sci­en­tist from Palan­tir began mak­ing reg­u­lar trips to the Cam­bridge Ana­lyt­i­ca office to work with the data sci­ence team on build­ing pro­fil­ing mod­els. He was occa­sion­al­ly accom­pa­nied by col­leagues, but the entire arrange­ment was kept secret from the rest of the CA teams — and per­haps Palan­tir itself. (It wasn’t clear whether these Palan­tir exec­u­tives were vis­it­ing CA offi­cial­ly or “unof­fi­cial­ly,” and Palan­tir has since assert­ed that it was only a sin­gle staff mem­ber who worked at CA in a “per­son­al capac­i­ty.”)

    By late spring 2014, Mercer’s invest­ment had spurred a hir­ing spree of psy­chol­o­gists, data sci­en­tists, and researchers. Nix brought on a new team of man­agers to orga­nize the fast-grow­ing research oper­a­tions. Although I remained the tit­u­lar direc­tor of research, the new oper­a­tions man­agers were now giv­en con­trol over direct over­sight and plan­ning of this rapid­ly grow­ing exer­cise. New projects seemed to pop up each day, and some­times it was unclear how or why projects were being approved to go to field. At this point, I did start to feel weird about every­thing, but when­ev­er I spoke with oth­er peo­ple at the firm, we all man­aged to calm one anoth­er down and ratio­nal­ize every­thing. And after Mer­cer installed Ban­non, I over­looked or explained away things that, in hind­sight, were obvi­ous red flags. Ban­non had his “niche” polit­i­cal inter­ests, but Mer­cer seemed to be too seri­ous a char­ac­ter to dab­ble in Bannon’s trashy polit­i­cal sideshows. At the time, many on the team sim­ply assumed that to jus­ti­fy tak­ing such a high finan­cial risk on our ideas, Mer­cer must have expect­ed that the research had the chance of mak­ing tons of mon­ey at his hedge fund.

    After Kogan joined, I had pro­fes­sors at the Uni­ver­si­ty of Cam­bridge con­stant­ly fawn­ing over the ground­break­ing poten­tial that the project could have for advanc­ing psy­chol­o­gy and soci­ol­o­gy, which made me feel like I was on a mis­sion. And if their col­leagues at uni­ver­si­ties like Har­vard or Stan­ford were also get­ting inter­est­ed in our work, I thought that sure­ly we must be onto some­thing. As corny as this might sound, it real­ly felt like I was work­ing on some­thing impor­tant — not just for Mer­cer or the com­pa­ny, but for sci­ence.

    **********

    The firm became a revolv­ing door of for­eign politi­cians, fix­ers, secu­ri­ty agen­cies, and busi­ness­men with their scant­i­ly clad pri­vate sec­re­taries in tow. It was obvi­ous that many of these men were asso­ciates of Russ­ian oli­garchs who want­ed to influ­ence a for­eign gov­ern­ment, but their inter­est in for­eign pol­i­tics was rarely ide­o­log­i­cal. Rather, they were usu­al­ly either seek­ing help to stash mon­ey some­where dis­creet, or to retrieve mon­ey that was sit­ting in a frozen account some­where in the world. Staff were told to just ignore the com­ings and goings of these men and not ask too many ques­tions, but staff would joke about it on inter­nal chat logs, and the vis­it­ing Rus­sians in par­tic­u­lar were usu­al­ly the more eccen­tric vari­ety of clients we would encounter. We hired a man named Sam Pat­ten, who had lived a col­or­ful life as a polit­i­cal oper­a­tive for hire all over the world. He had just fin­ished a project for pro-Russ­ian polit­i­cal par­ties in Ukraine work­ing with a man named Kon­stan­tin Kil­imnik, a for­mer offi­cer of Russia’s Main Intel­li­gence Direc­torate (the GRU). Although Pat­ten denies that he gave his Russ­ian part­ner any data, it was lat­er revealed that Paul Man­afort, who was for sev­er­al months Don­ald Trump’s cam­paign man­ag­er, did pass along vot­er polling data to Kil­imnik in a sep­a­rate instance.

    Pat­ten was a per­fect fit to nav­i­gate the world of shady inter­na­tion­al influ­ence oper­a­tions, and he was also well con­nect­ed among the grow­ing num­ber of Repub­li­cans join­ing Cam­bridge Ana­lyt­i­ca. When CA launched, the Democ­rats were far ahead of the Repub­li­cans in using data effec­tive­ly. For years, they had main­tained a cen­tral data sys­tem in VAN, which any Demo­c­ra­t­ic cam­paign in the coun­try could tap into. The Repub­li­cans had noth­ing com­pa­ra­ble. CA would close that gap.

    First we used focus groups and qual­i­ta­tive obser­va­tion to unpack the per­cep­tions of a giv­en pop­u­la­tion and learn what peo­ple cared about — term lim­its, the deep state, drain­ing the swamp, guns, and the con­cept of walls to keep out immi­grants were all explored in 2014, years before the Trump cam­paign. We then came up with hypothe­ses for how to sway opin­ions. CA test­ed these hypothe­ses with tar­get seg­ments in online pan­els or exper­i­ments to see whether they per­formed as the team expect­ed, based on the data. We also pulled Face­book pro­files, look­ing for pat­terns in order to build a neur­al-net­work algo­rithm that would help us make pre­dic­tions. Cam­bridge Ana­lyt­i­ca would tar­get those who were more prone to impul­sive anger or con­spir­a­to­r­i­al think­ing than aver­age cit­i­zens, intro­duc­ing nar­ra­tives via Face­book groups, ads, or arti­cles that the firm knew from inter­nal test­ing were like­ly to inflame the very nar­row seg­ments of peo­ple with these traits. CA want­ed to pro­voke peo­ple, to get them to engage.

    We began devel­op­ing fake pages on Face­book and oth­er plat­forms that looked like real forums, groups, and news sources, with vague names like Smith Coun­ty Patri­ots or I Love My Coun­try. When users joined CA’s fake groups, it would post videos and arti­cles that would fur­ther pro­voke and inflame them. Con­ver­sa­tions would rage on the group page, with peo­ple com­mis­er­at­ing about how ter­ri­ble or unfair some­thing was. CA broke down social bar­ri­ers, cul­ti­vat­ing rela­tion­ships across groups. And all the while it was test­ing and refin­ing mes­sages, to achieve max­i­mum engage­ment.

    Lots of report­ing on Cam­bridge Ana­lyt­i­ca gave the impres­sion that every­one was tar­get­ed. In fact, not that many peo­ple were tar­get­ed at all. CA didn’t need to cre­ate a big tar­get uni­verse, because most elec­tions are zero-sum games: If you get one more vote than the oth­er guy or girl, you win the elec­tion. Cam­bridge Ana­lyt­i­ca need­ed to infect only a nar­row sliv­er of the pop­u­la­tion, and then it could watch the nar­ra­tive spread.

    Mer­cer looked at win­ning elec­tions as a social-engi­neer­ing prob­lem. The way to “fix soci­ety” was by cre­at­ing sim­u­la­tions: If we could quan­ti­fy soci­ety inside a com­put­er, opti­mize that sys­tem, and then repli­cate that opti­miza­tion out­side the com­put­er, we could remake Amer­i­ca in his image. Beyond the tech­nol­o­gy and the grander cul­tur­al strat­e­gy, invest­ing in CA was a clever polit­i­cal move. At the time, I was told that because he was back­ing a pri­vate com­pa­ny rather than a PAC, Mer­cer wouldn’t have to report his sup­port as a polit­i­cal dona­tion. He would get the best of both worlds: CA would be work­ing to sway elec­tions, but with­out any of the cam­paign-finance restric­tions that gov­ern U.S. elec­tions. His giant foot­prints would remain hid­den.

    CA’s client list grew into a who’s who of the Amer­i­can right wing. The Trump and Ted Cruz cam­paigns paid more than $5 mil­lion apiece to the firm. In the autumn of 2014, Jeb Bush paid a vis­it to the office. He began by telling Nix that if he decid­ed to run for pres­i­dent, he want­ed to be able to do it on his terms, with­out hav­ing to “court the cra­zies” in his par­ty.

    “Of course, of course,” Nix answered. When it was over, he was so excit­ed at the pos­si­bil­i­ty of sign­ing up anoth­er big Amer­i­can client, he insist­ed on imme­di­ate­ly call­ing the Mer­cers with the good news, hav­ing appar­ent­ly for­got­ten that the Mer­cers had told him on count­less occa­sions of their sup­port for Ted Cruz.

    ...

    For most of the time I was at SCL and Cam­bridge Ana­lyt­i­ca, none of what we were doing felt real, part­ly because so many of the peo­ple I met seemed almost car­toon­ish. The job became more of an intel­lec­tu­al adven­ture, like play­ing a video game with esca­lat­ing lev­els of dif­fi­cul­ty. What hap­pens if I do this? Can I make this char­ac­ter turn from blue to red, or red to blue? Sit­ting in an office, star­ing at a screen, it was easy to spi­ral down into a deep­er, dark­er place, to lose sight of what I was actu­al­ly involved in.

    But I couldn’t ignore what was right in front of my eyes. Weird PACs start­ed show­ing up. The super-PAC of future nation­al-secu­ri­ty advis­er John Bolton paid Cam­bridge Ana­lyt­i­ca more than $1 mil­lion to explore how to increase mil­i­tarism in Amer­i­can youth. Bolton was wor­ried that mil­len­ni­als were a “moral­ly weak” gen­er­a­tion that would not want to go to war with Iran or oth­er “evil” coun­tries.

    Even­tu­al­ly, I was feel­ing more and more as if I was a part of some­thing that I did not under­stand and could not con­trol, and that was, at its core, deeply unsa­vory. The deep­er I got into SCL’s projects, the more the office cul­ture seemed to be cloud­ing my judg­ment. Over time, I was accli­ma­tiz­ing to their cor­rup­tion and moral dis­re­gard. Every­one was excit­ed about the dis­cov­er­ies we were mak­ing, but how far were we will­ing to go in the name of this new field of research? Was there a point at which some­one would final­ly say enough is enough? I didn’t know, and in truth, I didn’t want to think about it. Like so many peo­ple in tech­nol­o­gy, I stu­pid­ly fell for the hubris­tic allure of Facebook’s call to “move fast and break things.” I’ve nev­er regret­ted some­thing so much. I moved fast, I built things of immense pow­er, and I nev­er ful­ly appre­ci­at­ed what I was break­ing until it was too late.

    ———–

    “How I Helped Hack Democ­ra­cy” by Christo­pher Wylie; New York Mag­a­zine; 10/04/2019

    “Jucikas typed in a query, and a list of links popped up. He clicked on one of the many peo­ple who went by that name in Nebras­ka — and there was every­thing about her, right up on the screen. Here’s her pho­to, here’s where she works, here’s her house. Here are her kids, this is where they go to school, this is the car she dri­ves. She vot­ed for Mitt Rom­ney in 2012, she loves Katy Per­ry, she dri­ves an Audi. And not only did we have all her Face­book data, but we were merg­ing it with all the com­mer­cial and state bureau data we’d bought as well. And impu­ta­tions made from the U.S. Cen­sus. We had data about her mort­gage appli­ca­tions, we knew how much mon­ey she made, whether she owned a gun. We had infor­ma­tion from her air­line mileage pro­grams, so we knew how often she flew. We could see if she was mar­ried (she wasn’t). And we had a satel­lite pho­to of her house, eas­i­ly obtained from Google Earth. We had re-cre­at­ed her life in our com­put­er. She had no idea.

    They were recreating lives by merging all sorts of different databases. The Facebook data provided an important psychological dimension to the profiles, but it was just one dimension of many. And up to 200 million profiles were projected by the end of the year. Profiles that recreated lives. For use by Robert Mercer and Steve Bannon:

    ...
    After Nix hung up with Jen­ny, Ban­non said, “Let me do one!” We went around the room, all of us tak­ing a turn. It was sur­re­al to think that these peo­ple were sit­ting in their kitchen in Iowa or Okla­homa or Indi­ana, talk­ing to a bunch of guys in Lon­don who were look­ing at satel­lite pic­tures of where they lived, fam­i­ly pho­tos, all of their per­son­al infor­ma­tion. Look­ing back, it’s crazy to think that Ban­non — who then was a total unknown, still more than a year away from gain­ing infamy as an advis­er to Don­ald Trump — sat in our office call­ing ran­dom Amer­i­cans to ask them per­son­al ques­tions. And peo­ple were more than hap­py to answer him.

    We had done it. We had recon­struct­ed tens of mil­lions of Amer­i­cans inside of a com­put­er, with poten­tial­ly hun­dreds of mil­lions more to come. This was an epic moment. I was proud that we had cre­at­ed some­thing so pow­er­ful. I felt sure it was some­thing that peo­ple would be talk­ing about for decades.
    ...

    But it was­n’t lim­it­ed to Mer­cer and Ban­non and that ini­tial group of investors. They were offer­ing the ser­vices of these ser­vices to all sorts of Repub­li­can fig­ures. John Bolton’s super-PAC want­ed to fig­ure out how to increase mil­i­tarism in the Amer­i­can youth. Even Jeb Bush want­ed in but the Mer­cers would­n’t allow it. And recall how Sam Pat­ten worked for SCL on the 2015 Niger­ian cam­paign where polit­i­cal hack­ing oper­a­tions were employed. Also recall how Sam Pat­ten plead guilty to FARA vio­la­tions when he act­ed as a straw pur­chas­er of Trump inau­gu­ra­tion tick­ets for Ukrain­ian oli­garch and Man­afort asso­ciate Sergii Lovochkin (Lyovochkin). Also recall how SCL/Cambridge Ana­lyt­i­ca spin­off AIQ was doing con­sult­ing work for Ukrain­ian oli­garch Sergei Taru­ta, who, like Lovochkin, appears to be a Ukrain­ian oli­garch who strad­dles the East/West divide in the coun­try while gen­er­al­ly sup­port­ing mov­ing Ukraine towards the West. We don’t know if Pat­ten’s work with Cam­bridge Ana­lyt­i­ca was on behalf of a Ukrain­ian client but he had just been work­ing with Man­afort’s long-time part­ner Kon­stan­tin Kil­imnik so it would­n’t be sur­pris­ing:

    ...
    The firm became a revolv­ing door of for­eign politi­cians, fix­ers, secu­ri­ty agen­cies, and busi­ness­men with their scant­i­ly clad pri­vate sec­re­taries in tow. It was obvi­ous that many of these men were asso­ciates of Russ­ian oli­garchs who want­ed to influ­ence a for­eign gov­ern­ment, but their inter­est in for­eign pol­i­tics was rarely ide­o­log­i­cal. Rather, they were usu­al­ly either seek­ing help to stash mon­ey some­where dis­creet, or to retrieve mon­ey that was sit­ting in a frozen account some­where in the world. Staff were told to just ignore the com­ings and goings of these men and not ask too many ques­tions, but staff would joke about it on inter­nal chat logs, and the vis­it­ing Rus­sians in par­tic­u­lar were usu­al­ly the more eccen­tric vari­ety of clients we would encounter. We hired a man named Sam Pat­ten, who had lived a col­or­ful life as a polit­i­cal oper­a­tive for hire all over the world. He had just fin­ished a project for pro-Russ­ian polit­i­cal par­ties in Ukraine work­ing with a man named Kon­stan­tin Kil­imnik, a for­mer offi­cer of Russia’s Main Intel­li­gence Direc­torate (the GRU). Although Pat­ten denies that he gave his Russ­ian part­ner any data, it was lat­er revealed that Paul Man­afort, who was for sev­er­al months Don­ald Trump’s cam­paign man­ag­er, did pass along vot­er polling data to Kil­imnik in a sep­a­rate instance.

    Pat­ten was a per­fect fit to nav­i­gate the world of shady inter­na­tion­al influ­ence oper­a­tions, and he was also well con­nect­ed among the grow­ing num­ber of Repub­li­cans join­ing Cam­bridge Ana­lyt­i­ca. When CA launched, the Democ­rats were far ahead of the Repub­li­cans in using data effec­tive­ly. For years, they had main­tained a cen­tral data sys­tem in VAN, which any Demo­c­ra­t­ic cam­paign in the coun­try could tap into. The Repub­li­cans had noth­ing com­pa­ra­ble. CA would close that gap.

    ...

    CA’s client list grew into a who’s who of the Amer­i­can right wing. The Trump and Ted Cruz cam­paigns paid more than $5 mil­lion apiece to the firm. In the autumn of 2014, Jeb Bush paid a vis­it to the office. He began by telling Nix that if he decid­ed to run for pres­i­dent, he want­ed to be able to do it on his terms, with­out hav­ing to “court the cra­zies” in his par­ty.

    ...

    But I couldn’t ignore what was right in front of my eyes. Weird PACs start­ed show­ing up. The super-PAC of future nation­al-secu­ri­ty advis­er John Bolton paid Cam­bridge Ana­lyt­i­ca more than $1 mil­lion to explore how to increase mil­i­tarism in Amer­i­can youth. Bolton was wor­ried that mil­len­ni­als were a “moral­ly weak” gen­er­a­tion that would not want to go to war with Iran or oth­er “evil” coun­tries.
    ...

    But by far the most controversial Cambridge Analytica client would have to be Palantir. Palantir staffers wanted to use Cambridge Analytica to open an interesting legal loophole around the prohibition on government mass-harvesting of data on American citizens. That database that recreated the details of millions of lives was going to get sold to Big Brother:

    ...
    As soon as CA start­ed col­lect­ing this Face­book data, exec­u­tives from Palan­tir, Peter Thiel’s data-min­ing firm, start­ed mak­ing inquiries; their inter­est was appar­ent­ly piqued when they found out how much data the team was gath­er­ing — and that Face­book was just let­ting CA do it. The exec­u­tives CA met with want­ed to know how the project worked, and soon they approached our team about get­ting access to the data them­selves.

    Palan­tir was still doing work for the NSA and GCHQ. Staffers there said work­ing with Cam­bridge Ana­lyt­i­ca could poten­tial­ly open an inter­est­ing legal loop­hole: Gov­ern­ment secu­ri­ty agen­cies, along with con­trac­tors like Palan­tir, couldn’t legal­ly mass-har­vest per­son­al data on Amer­i­can cit­i­zens, but polling com­pa­nies, social net­works, and pri­vate com­pa­nies could. And despite the ban on direct­ly sur­veilling Amer­i­cans, I was told that U.S. intel­li­gence agen­cies were nonethe­less able to make use of infor­ma­tion on Amer­i­can cit­i­zens that was “freely vol­un­teered” by U.S. indi­vid­u­als or com­pa­nies. I didn’t think any­one was actu­al­ly being seri­ous, but I soon real­ized that I under­es­ti­mat­ed everyone’s inter­est in access­ing this data (which was sur­pris­ing­ly easy to acquire through Face­book, with Facebook’s loose­ly super­vised per­mis­sion­ing pro­ce­dures).

    Some of the staff work­ing at Palan­tir real­ized that Face­book had the poten­tial to become the best dis­creet sur­veil­lance tool imag­in­able for the NSA — that is, if that data was “freely vol­un­teered” by anoth­er enti­ty. To be clear, these con­ver­sa­tions were spec­u­la­tive, and it is unclear if Palan­tir itself was actu­al­ly aware of the par­tic­u­lars of these dis­cus­sions, or if the com­pa­ny received any CA data. The staff sug­gest­ed to Nix that if Cam­bridge Ana­lyt­i­ca gave them access to the har­vest­ed data, they could then, at least in the­o­ry, legal­ly pass it along to the NSA. One lead data sci­en­tist from Palan­tir began mak­ing reg­u­lar trips to the Cam­bridge Ana­lyt­i­ca office to work with the data sci­ence team on build­ing pro­fil­ing mod­els. He was occa­sion­al­ly accom­pa­nied by col­leagues, but the entire arrange­ment was kept secret from the rest of the CA teams — and per­haps Palan­tir itself. (It wasn’t clear whether these Palan­tir exec­u­tives were vis­it­ing CA offi­cial­ly or “unof­fi­cial­ly,” and Palan­tir has since assert­ed that it was only a sin­gle staff mem­ber who worked at CA in a “per­son­al capac­i­ty.”)
    ...

    “The staff suggested to Nix that if Cambridge Analytica gave them access to the harvested data, they could then, at least in theory, legally pass it along to the NSA.”

    Cam­bridge Ana­lyt­i­ca’s data­bas­es were so detailed even the NSA want­ed in. That’s what Christo­pher Wylie’s team built for Robert Mer­cer and Steve Ban­non. And then sold to a bunch of Repub­li­cans. And that’s what we’re learn­ing from just this excerpt of Wylie’s new book: that the Cam­bridge Ana­lyt­i­ca scan­dal was­n’t just a scan­dal about the har­vest­ing of Face­book data and psy­cho­log­i­cal pro­files get­ting sold to the Trump cam­paign. It’s the scan­dal of Cam­bridge Ana­lyt­i­ca cre­at­ing super data­bas­es that includes far more than just these Face­book pro­files and sell­ing it to all sorts of fig­ures, includ­ing pos­si­bly the gov­ern­ment, which seems like a much big­ger scan­dal.

    Posted by Pterrafractyl | October 9, 2019, 10:46 pm
  33. Face­book just dis­closed a new data ‘oop­sie’ involv­ing app devel­op­ers improp­er­ly access­ing infor­ma­tion they should­n’t have been access­ing (and yet some­how could access). Sur­prise!

    Face­book haven’t released very much infor­ma­tion about it yet. We’re sim­ply told that rough­ly 100 devel­op­ers may have improp­er­ly grabbed infor­ma­tion about peo­ple belong­ing to cer­tain Face­book Groups. We’re told the apps were pri­mar­i­ly involved with social media man­age­ment and video-stream­ing apps and these apps were able to access infor­ma­tion about Face­book Group mem­bers even after Face­book changed its poli­cies in April of 2018 in the wake of the Cam­bridge Ana­lyt­i­ca scan­dal. So we’re talk­ing about apps that were some­how able to access this infor­ma­tion of Face­book Group mem­bers for over a year and a half fol­low­ing the pol­i­cy change.

    Face­book won’t tell us what exact­ly the infor­ma­tion was that these apps could access oth­er than to say it includ­ed names and pho­tos. The com­pa­ny also won’t say how many peo­ple were impact­ed. And while Face­book is claim­ing that they haven’t seen any signs of devel­op­ers abus­ing the infor­ma­tion, they assure us they are ask­ing the devel­op­ers to delete the data and will be con­duct­ing an audit to con­firm the data is delet­ed. Keep in mind that Face­book’s audit can’t real­ly con­sist of much more than ask­ing these com­pa­nies if they real­ly delet­ed the data since it’s impos­si­ble for Face­book to tru­ly con­firm it. It’s a reminder that Face­book’s busi­ness mod­el is based on col­lect­ing mas­sive amounts of data and then com­plete­ly los­ing con­trol of the data when they sell/trade it away:

    Forbes

    Face­book Is Still Leak­ing Data More Than One Year After Cam­bridge Ana­lyt­i­ca

    Michael Nuñez
    Forbes Staff
    Nov 5, 2019, 07:53pm

    Face­book said late Tues­day that rough­ly 100 devel­op­ers may have improp­er­ly accessed user data, which includes the names and pro­file pic­tures of indi­vid­u­als in cer­tain Face­book Groups.

    The com­pa­ny explained in a blog post that devel­op­ers pri­mar­i­ly of social media man­age­ment and video-stream­ing apps retained the abil­i­ty to access Face­book Group mem­ber infor­ma­tion longer than the com­pa­ny intend­ed.

    The com­pa­ny did not detail the type of data that was improp­er­ly accessed beyond names and pho­tos, and it did not dis­close the num­ber of users affect­ed by the leak.

    Face­book restrict­ed its devel­op­er APIs—which pro­vide a way for apps to inter­face with Face­book data—in April 2018, after the Cam­bridge Ana­lyt­i­ca scan­dal broke the month before. The goal was to reduce the way in which devel­op­ers could gath­er large swaths of data from Face­book users.

    But the company’s sweep­ing changes have been rel­a­tive­ly inef­fec­tive. More than a year after the com­pa­ny restrict­ed API access, the com­pa­ny con­tin­ues to announce new­ly dis­cov­ered data leaks.

    “Although we’ve seen no evi­dence of abuse, we will ask them to delete any mem­ber data they may have retained and we will con­duct audits to con­firm that it has been delet­ed,” Face­book said in a state­ment.

    The social media giant says in its announce­ment that it reached out to 100 devel­op­er part­ners who may have improp­er­ly accessed user data and says that at least 11 devel­op­er part­ners accessed the user data with­in the last 60 days.

    ...

    The Fed­er­al Trade Com­mis­sion slapped Face­book with a $5 bil­lion fine as a result of the breach. As part of the 20-year agree­ment both par­ties reached, Face­book now faces new guide­lines for how it han­dles pri­va­cy leaks.

    “The new frame­work under our agree­ment with the FTC means more account­abil­i­ty and trans­paren­cy into how we build and main­tain prod­ucts,” Facebook’s direc­tor of plat­form part­ner­ships, Kon­stan­ti­nos Papamil­tiadis, wrote in a Face­book post.

    “As we work through this process we expect to find exam­ples like the Groups API of where we can improve; rest assured we are com­mit­ted to this work and sup­port­ing the peo­ple on our plat­form.”

    ———-

    “Face­book Is Still Leak­ing Data More Than One Year After Cam­bridge Ana­lyt­i­ca” by Michael Nuñez; Forbes; 11/05/2019

    “The com­pa­ny did not detail the type of data that was improp­er­ly accessed beyond names and pho­tos, and it did not dis­close the num­ber of users affect­ed by the leak.”

    We don’t know what was leaked and we don’t know how many peo­ple were affect­ed. We just know there’s a leak and it involves rough­ly 100 app devel­op­ers. There’s no doubt about it. That’s omi­nous. You don’t want ambi­gu­i­ty in a Face­book scan­dal. That just means it’s only going to get a lot worse. The Cam­bridge Ana­lyt­i­ca scan­dal made abun­dant­ly clear with one update after anoth­er of more peo­ple affect­ed, more data col­lect­ed, and a worse cor­po­rate cul­ture that made it all inevitable. That’s how Face­book does scan­dals. Drip-drip-drip­ping it along. This is just the first drip in this new scan­dal.

    And note how we’re told the rough­ly 100 app devel­op­ers were pri­mar­i­ly devel­op­ers of social media man­age­ment and video-stream­ing apps. The use of the word “pri­mar­i­ly” also implies there are oth­er types of apps involved. What types of apps might those be? Hope­ful­ly we’ll learn that in one of the future drips:

    ...
    The com­pa­ny explained in a blog post that devel­op­ers pri­mar­i­ly of social media man­age­ment and video-stream­ing apps retained the abil­i­ty to access Face­book Group mem­ber infor­ma­tion longer than the com­pa­ny intend­ed.
    ...

    And note one of the other implications of this story: it demonstrates that Facebook either can't or won't effectively implement policy changes. And when it comes to policy changes around data it has already given out to developers, it really can't enforce those changes. It can only ask the developers to please not use the data they already got from Facebook in ways that violate the new policies, and hope they comply. Now, yes, Facebook can theoretically enforce policy changes in how developers use already-collected data inside the Facebook apps they're developing. But there are many uses for Facebook data that don't involve Facebook apps, as Cambridge Analytica's psychological profiling of Facebook profiles for political purposes made abundantly clear. And Facebook has a history of knowing about these abuses, allowing them to happen, and then pretending it didn't know until it can't pretend anymore, as the Cambridge Analytica scandal also made clear.

    So if this Face­book Groups scan­dal ends up involv­ing the mass han­dover of a large num­ber of detailed pro­files, all that data will be out of Face­book’s con­trol. It’s just out there. Pos­si­bly get­ting passed around, sold and trad­ed. It’s anoth­er way Face­book con­nects peo­ple: through the data black mar­ket for all the data Face­book has qui­et­ly hand­ed out:

    ...
    But the company’s sweep­ing changes have been rel­a­tive­ly inef­fec­tive. More than a year after the com­pa­ny restrict­ed API access, the com­pa­ny con­tin­ues to announce new­ly dis­cov­ered data leaks.

    “Although we’ve seen no evi­dence of abuse, we will ask them to delete any mem­ber data they may have retained and we will con­duct audits to con­firm that it has been delet­ed,” Face­book said in a state­ment.
    ...

    And note the other ominous tidbit, one that's undoubtedly going to get explosively worse, tucked away in this initial drip: of the roughly 100 developer partners Facebook reached out to, at least 11 have been accessing the improperly harvested data within the last 60 days. They already have the data. And "at least 11" means that number is also going to go up in a future drip:

    ...
    The social media giant says in its announce­ment that it reached out to 100 devel­op­er part­ners who may have improp­er­ly accessed user data and says that at least 11 devel­op­er part­ners accessed the user data with­in the last 60 days.
    ...

    So we’ll see how much worse this sto­ry gets. At this point we just know it’s going to get a lot worse. But keep in mind that Face­book appears to be patho­log­i­cal­ly dri­ven to repeat the same Cam­bridge Ana­lyt­i­ca-style scan­dal over and over — where Face­book qui­et­ly max­i­mize their prof­its by fla­grant­ly sell­ing data and it keeps get­ting worse and worse until it explodes into a scan­dal that starts off as a mild sound­ing sto­ry that soon erupts into anoth­er Face­book mega-scan­dal — and if this is anoth­er sto­ry fol­low­ing that pathol­o­gy this is prob­a­bly going to look like the Cam­bridge Ana­lyt­i­ca scan­dal, where the mass har­vest­ing of 87 mil­lion detailed user pro­files was enabled via “friends per­mis­sion” fea­ture that Face­book allowed for app devel­op­ers from 2007–2014 that allowed apps to grab the pro­files of all app users’ friends too. If this scan­dal involves app devel­op­ers grab­bing detailed pro­files of all the mem­bers in a group this could end up dwarf­ing the Cam­bridge Ana­lyt­i­ca scan­dal. The Cam­bridge Ana­lyt­i­ca scan­dal was the sto­ry of a sin­gle app devel­op­er. This is at least 100.

    In related news, an anonymous White House source disclosed to NBC News that Mark Zuckerberg had a secret dinner with President Trump and Peter Thiel when Zuckerberg was in DC last month to pitch his Libra cryptocurrency scheme to Congress. It's a reminder that we probably shouldn't be surprised if this new data scandal involves Trump campaign Facebook Group apps, and if that's the case we'll probably learn about it after the 2020 election. Kind of like the Cambridge Analytica scandal. But probably worse. Because that's how Facebook does scandals. They just steadily get worse. Drip by drip. Scandal by scandal. A fire hose of drips.

    Posted by Pterrafractyl | November 23, 2019, 9:51 pm
  34. Here’s an inter­est­ing sto­ry that could actu­al­ly end up being incred­i­bly impact­ful on the out­come of the 2020 US elec­tion. Or it might end up being a reaf­fir­ma­tion of Face­book’s ded­i­ca­tion to mak­ing mon­ey help­ing Repub­li­cans spreads mis­in­for­ma­tion. We’ll see:

    Facebook hinted that it might be making changes to its advertising tools. Changes that limit the ability to microtarget ads. And that predictably has led to an outcry from the Trump team. Recall that Trump’s 2016 campaign relied heavily on Facebook’s microtargeting technologies, and that reliance is seen as one of the core elements of his victory. Microtargeting was a big part of the campaign’s ‘secret sauce’ in 2016, and all indications are that the Trump team is planning on using refined microtargeting techniques even more extensively in 2020. So it really could be a very big deal if Trump’s campaign can’t rely on microtargeting.

    But it remains extreme­ly unclear what, if any, changes Face­book is actu­al­ly going to make. The report­ing is based on an anony­mous indi­vid­ual famil­iar with Face­book’s think­ing on the mat­ter who claims that lim­it­ing micro­tar­get­ing is one of the changes Face­book is con­sid­er­ing. But on Mon­day, Facebook’s vice pres­i­dent of glob­al mar­ket­ing solu­tions assert­ed that the ad tar­get­ing tech­nolo­gies would­n’t be impact­ed by any upcom­ing changes. Lat­er she told Axios that a range of changes were still pos­si­ble.

    So maybe Facebook is limiting its microtargeting options and maybe it isn’t. But the very threat of that has the Trump campaign decrying that any limitations on microtargeting would suppress voter turnout and stifle free speech. It’s interesting spin, since there is some truth to the complaint that limiting microtargeting would limit voter turnout. But that’s only because, as the article notes, microtargeting tools enhance the ability to send people the kind of highly inflammatory and deceptive ads that will get them emotionally engaged enough to go out and vote. Recall how the Cambridge Analytica scandal involved the development of psychological profiles of users, based on their “Likes” and other Facebook profile information, and the use of those profiles to tailor the kinds of messages, often deceptive messages, that would emotionally move and inflame people. So if you limit the use of microtargeting, you do actually limit the ability to motivate people to get out and vote, by limiting the ability to deliver tailored ads designed to emotionally inflame them. Also recall how one of the other goals of the Cambridge Analytica microtargeting effort was to suppress the vote by encouraging left-leaning voters to stay home and not vote at all. Because, in the end, microtargeting can encourage voting or discourage it, because it’s fundamentally about micromanipulation. Micromanipulation that the Trump campaign absolutely needs for 2020:

    The Wash­ing­ton Post

    Trump cam­paign, spend­ing furi­ous­ly to counter impeach­ment inquiry, assails Face­book over poten­tial changes to polit­i­cal ad rules
    Lim­it­ing micro­tar­get­ing would strike at a major Trump ad strat­e­gy

    By Isaac Stan­ley-Beck­er and Tony Romm
    Novem­ber 20, 2019 at 4:41 p.m. CST

    The Trump cam­paign on Wednes­day lashed out at Face­book after com­pa­ny exec­u­tives said they were con­sid­er­ing changes to rules around polit­i­cal ads that could affect the campaign’s abil­i­ty to tar­get its sup­port­ers on the plat­form.

    The out­cry came as Trump’s reelec­tion team has under­tak­en a mas­sive spend­ing blitz on Face­book aimed at coun­ter­ing the House’s impeach­ment inquiry. Trump’s page alone pro­mot­ed more than $830,000 worth of ads in the sev­en days end­ing on Nov. 17, accord­ing to Facebook’s ad archive.

    Facebook’s micro­tar­get­ing tech­nolo­gies allow adver­tis­ers to home in on spe­cif­ic groups of users and deliv­er mes­sag­ing tai­lored to them — a strat­e­gy the Trump cam­paign has used pro­lif­i­cal­ly. Trump’s cam­paign direc­tor Brad Parscale has not­ed that the president’s team has test­ed thou­sands of vari­a­tions of polit­i­cal ads in an attempt to reach small groups of vot­ers, such as “15 peo­ple in the Flori­da Pan­han­dle that I would nev­er buy a TV com­mer­cial for.”

    The prospect that his reelec­tion cam­paign could lose access to some of those tools appears to be vex­ing his team. The campaign’s offi­cial Twit­ter account used siren emo­jis to sound the alarm, tag­ging Facebook’s account on Twit­ter as it warned that the com­pa­ny “wants to take impor­tant tools away from us for 2020.”

    [siren emoji] IMPORTANT [siren emoji] @facebook wants to take important tools away from us for 2020. Tools that help us reach more great Americans & lift voices the media & big tech choose to ignore! They want to raise prices to put more of your hard earned small dollar donations into their pockets. https://t.co/gJbFfTLnzW — Team Trump (@TeamTrump) November 20, 2019

    The dis­ap­proval came in response to rev­e­la­tions that Face­book exec­u­tives were con­sid­er­ing broad changes to adver­tis­ing rules, even as they have declined to fol­low Twitter’s lead in wash­ing their hands of the issue by ban­ning ads from polit­i­cal can­di­dates alto­geth­er.

    Face­book and Twit­ter did not imme­di­ate­ly respond to a request for com­ment. Nor did Google, where exec­u­tives also are con­sid­er­ing changes in the rules for polit­i­cal ads on YouTube, a video-stream­ing site where Trump also has attacked Democ­rats dur­ing the impeach­ment debate.

    Face­book has faced a tor­rent of crit­i­cism this fall after affirm­ing that it would not fact-check speech by politi­cians, argu­ing that it should not serve as the arbiter of truth online. Spark­ing that con­tro­ver­sy was one of the Trump campaign’s own ads, which assailed for­mer Vice Pres­i­dent Joe Biden with false claims about his deal­ings in Ukraine. Biden, who is seek­ing the Demo­c­ra­t­ic nom­i­na­tion for pres­i­dent, request­ed that Face­book remove the ad — and the social media giant declined.

    In the weeks since, Trump has con­tin­ued to make false claims in Face­book ads, includ­ing the asser­tion that the inves­ti­ga­tion by spe­cial coun­sel Robert S. Mueller III result­ed in a “total exon­er­a­tion,” which it did not.

    Many of the campaign’s most recent posts take aim at Rep. Adam Schiff (D‑Calif.), the chair­man of the House Intel­li­gence Com­mit­tee, and House Speak­er Nan­cy Pelosi (D‑Calif.), press­ing users to con­tribute to Trump’s cam­paign to sig­nal their oppo­si­tion to the “Fake Impeach­ment Tri­als” and the “Impeach­ment Hoax.”

    Mean­while, Face­book has expressed an open­ness to rethink­ing some of its poli­cies, which have drawn sharp rebuke from reg­u­la­tors in Wash­ing­ton and even the company’s own employ­ees, who asked Face­book CEO Mark Zucker­berg in an open let­ter to restrict tar­get­ing for polit­i­cal ads. Zucker­berg recent­ly acknowl­edged the tech giant is “con­tin­u­ing to look at how it might make sense to refine it in the future.”

    Among the changes under con­sid­er­a­tion, accord­ing to an indi­vid­ual famil­iar with Facebook’s delib­er­a­tions who was not autho­rized to speak pub­licly, is lim­it­ing micro­tar­get­ing.

    On Mon­day, Facebook’s vice pres­i­dent of glob­al mar­ket­ing solu­tions, Car­olyn Ever­son, said tar­get­ing tech­nolo­gies would not be affect­ed, promis­ing at the 2019 Code Media con­fer­ence, “We are not talk­ing about chang­ing the tar­get­ing.”

    Then, she walked those com­ments back, telling Axios that a range of mod­i­fi­ca­tions were still pos­si­ble.

    That refusal to affirm that micro­tar­get­ing was safe from pol­i­cy changes prompt­ed the Trump team’s out­cry. The campaign’s dig­i­tal direc­tor, Gary Coby, issued a series of tweets claim­ing that rein­ing in micro­tar­get­ing would sup­press vot­er turnout and sti­fle speech.

    “This would uneven­ly hurt the lit­tle guy, small­er voic­es, & issues the pub­lic is not aware of OR news is NOT cov­er­ing,” he wrote.

    Start­ing with the 2016 elec­tion, Trump has used Facebook’s pro­mo­tion tools to bypass main­stream gate­keep­ers and speak direct­ly to Amer­i­cans, fre­quent­ly spread­ing false­hoods to boost his case. Face­book does not reveal which users were tar­get­ed by any giv­en ad, dis­clos­ing only a rough demo­graph­ic pro­file of those who ulti­mate­ly viewed the mes­sag­ing.

    Many experts have raised con­cerns about micro­tar­get­ing, argu­ing that broad­er appeals force politi­cians to speak across demo­graph­ic groups and are more like­ly to gain the atten­tion of fact-check­ers.

    The more hyper-tar­get­ed an ad, said Shan­non McGre­gor, an assis­tant pro­fes­sor of com­mu­ni­ca­tions at the Uni­ver­si­ty of Utah, the “more like­ly the appeals are to be inflam­ma­to­ry.”

    The warn­ing from the Trump cam­paign illus­trates the par­ti­san fault lines of the debate over dig­i­tal adver­tis­ing, with nation­al Democ­rats clam­or­ing for the plat­forms to do more to weed out mis­in­for­ma­tion. Demo­c­ra­t­ic can­di­date Eliz­a­beth War­ren, for exam­ple, ran ads on Face­book in Octo­ber accus­ing the com­pa­ny of giv­ing “Don­ald Trump free rein to lie on his plat­form — and then to pay Face­book gobs of mon­ey to push out their lies to Amer­i­can vot­ers.”

    ...

    Trump and his allies reg­u­lar­ly have alleged that social-media giants are biased against the pres­i­dent and oth­er Repub­li­cans, though they have nev­er pre­sent­ed sys­tem­at­ic evi­dence of such cen­sor­ship — and the tech com­pa­nies strong­ly deny it.

    ———–

    “Trump cam­paign, spend­ing furi­ous­ly to counter impeach­ment inquiry, assails Face­book over poten­tial changes to polit­i­cal ad rules” by Isaac Stan­ley-Beck­er and Tony Romm; The Wash­ing­ton Post; 11/20/2019

    Facebook’s micro­tar­get­ing tech­nolo­gies allow adver­tis­ers to home in on spe­cif­ic groups of users and deliv­er mes­sag­ing tai­lored to them — a strat­e­gy the Trump cam­paign has used pro­lif­i­cal­ly. Trump’s cam­paign direc­tor Brad Parscale has not­ed that the president’s team has test­ed thou­sands of vari­a­tions of polit­i­cal ads in an attempt to reach small groups of vot­ers, such as “15 peo­ple in the Flori­da Pan­han­dle that I would nev­er buy a TV com­mer­cial for.”

    Searching for the precise message that will get those 15 people in the Florida Panhandle to get out and vote: that’s what the Trump team’s social media advertising campaign is going to be focused on, and it’s a strategy that can’t work without the ability to microtarget, hence the freakout by the Trump campaign. A freakout that just might work at cowing Facebook, because the company is making it very clear that it’s very unclear whether there are going to be any changes to the microtargeting tools:

    ...
    Among the changes under con­sid­er­a­tion, accord­ing to an indi­vid­ual famil­iar with Facebook’s delib­er­a­tions who was not autho­rized to speak pub­licly, is lim­it­ing micro­tar­get­ing.

    On Mon­day, Facebook’s vice pres­i­dent of glob­al mar­ket­ing solu­tions, Car­olyn Ever­son, said tar­get­ing tech­nolo­gies would not be affect­ed, promis­ing at the 2019 Code Media con­fer­ence, “We are not talk­ing about chang­ing the tar­get­ing.”

    Then, she walked those com­ments back, telling Axios that a range of mod­i­fi­ca­tions were still pos­si­ble.
    ...

    And note how the Trump team’s argument that limiting microtargeting is a limitation on free speech is true in the sense that it limits the ability to deliver targeted deceptive messages that are more likely to fly under the fact-checkers’ radar. Microtargeted free speech unfortunately includes the freedom to tell lies designed to emotionally inflame a specific person based on a psychological profile you’ve built of them, as the Trump team keeps making clear:

    ...
    That refusal to affirm that micro­tar­get­ing was safe from pol­i­cy changes prompt­ed the Trump team’s out­cry. The campaign’s dig­i­tal direc­tor, Gary Coby, issued a series of tweets claim­ing that rein­ing in micro­tar­get­ing would sup­press vot­er turnout and sti­fle speech.

    “This would uneven­ly hurt the lit­tle guy, small­er voic­es, & issues the pub­lic is not aware of OR news is NOT cov­er­ing,” he wrote.

    Start­ing with the 2016 elec­tion, Trump has used Facebook’s pro­mo­tion tools to bypass main­stream gate­keep­ers and speak direct­ly to Amer­i­cans, fre­quent­ly spread­ing false­hoods to boost his case. Face­book does not reveal which users were tar­get­ed by any giv­en ad, dis­clos­ing only a rough demo­graph­ic pro­file of those who ulti­mate­ly viewed the mes­sag­ing.

    Many experts have raised con­cerns about micro­tar­get­ing, argu­ing that broad­er appeals force politi­cians to speak across demo­graph­ic groups and are more like­ly to gain the atten­tion of fact-check­ers.

    The more hyper-tar­get­ed an ad, said Shan­non McGre­gor, an assis­tant pro­fes­sor of com­mu­ni­ca­tions at the Uni­ver­si­ty of Utah, the “more like­ly the appeals are to be inflam­ma­to­ry.”
    ...

    So we’ll see if (more likely, how) Facebook eventually ends up capitulating to the Trump team’s demands. But it’s worth noting that we already have a pretty good idea of what particular types of microtargeted ads the Trump team will be using on Facebook next year if Facebook leaves it with that option: microtargeting old people with ads designed to scare them about immigration. As the following article from back in April describes, 44 percent of Trump’s Facebook advertising is spent on audiences 65 years and older (compared to 4 percent spent on the 18–34 crowd), and 54 percent of Trump’s Facebook ads use nativist language around immigration. And yet microtargeting is being used on top of that broad age targeting. It’s a reminder that the microtargeting the Trump team is engaged in is largely going to be microtargeting designed to deliver inflammatory white nationalist memes to each individual as effectively as possible:

    Vox

    The Trump 2020 cam­paign is going after old­er peo­ple with immi­gra­tion ads on Face­book

    Trump’s cam­paign is spend­ing 44 per­cent of its Face­book adver­tis­ing bud­get to tar­get users age 65 and up.

    By Emi­ly Stew­art
    Apr 16, 2019, 11:30am EDT

    Pres­i­dent Don­ald Trump’s reelec­tion cam­paign is prob­a­bly scar­ing your grand­ma about immi­grants on Face­book.

    Trump’s cam­paign is spend­ing 44 per­cent of its Face­book adver­tis­ing bud­get to tar­get users who are 65 and old­er, accord­ing to a report from Axios based on data from the polit­i­cal com­mu­ni­ca­tions agency Bul­ly Pul­pit Inter­ac­tive. That’s sig­nif­i­cant­ly more than the top 12 Demo­c­ra­t­ic 2020 can­di­dates, who are spend­ing an aver­age of 27 per­cent of their Face­book ad bud­gets on the over-65 crowd.

    And the mes­sage Trump is using to appeal to Face­book users is a famil­iar one: immi­gra­tion. Accord­ing to Axios, he uses “nativist lan­guage around immi­grants” in 54 per­cent of his ads. Democ­rats, on the oth­er hand, are talk­ing about fundrais­ing and oth­er pol­i­cy issues, but not immi­gra­tion. Bul­ly Pul­pit ana­lyzed ad data from March 23 to April 5.

    It’s not only the focus of Trump’s mes­sag­ing that’s notable, but also its size. Trump is out­spend­ing his poten­tial Demo­c­ra­t­ic rivals on Face­book and Google ads in a big way, accord­ing to sep­a­rate Bul­ly Pul­pit data released to Axios in March. For the first two and a half months of the year, Trump spent $4.5 mil­lion on Face­book and Google ads, sev­en and a half times more than the top-spend­ing Demo­c­rat, Sen. Eliz­a­beth War­ren (D‑MA), spent on Face­book and Google ads.

    The Trump cam­paign has made no secret of its Face­book-heavy focus. Brad Parscale, who was the dig­i­tal direc­tor for Trump’s 2016 bid and is now his 2020 cam­paign man­ag­er, said in an inter­view with CBS’s 60 Min­utes in Octo­ber 2017 that Face­book “was the method” for the for­mer real­i­ty tele­vi­sion star’s sur­prise polit­i­cal rise.

    “Face­book now lets you get to places — and places pos­si­bly that you would nev­er go with TV ads,” he said. “Now I can find, you know, 15 peo­ple in the Flori­da Pan­han­dle that I would nev­er buy a TV com­mer­cial for.”

    In an inter­view with PBS’s Front­line in Novem­ber 2018, Parscale again indi­cat­ed that Face­book ads would be a major plank of Trump’s reelec­tion bid. He added that changes Face­book made to make polit­i­cal adver­tis­ing on the plat­form were “a gift,” because its polit­i­cal ads archive that lets any­one search and view ads means peo­ple “see all my ads for free.”

    He also talked about the type of micro­tar­get­ing the reelec­tion cam­paign is like­ly doing in serv­ing old­er peo­ple ads about immi­gra­tion on Face­book right now:

    “When you decide you’re going to run for pres­i­dent of the Unit­ed States, now you have hard-matched data with con­sumer data, matched with vot­er his­to­ry, matched with very com­pre­hen­sive polling data from all over the coun­try,” Parscale said.

    “By the time all those pieces are put together, then you can actually pull out an audience. You can say, ‘I want to find everybody in this portion of Ohio that believes that the wall needs to be built, that thinks that possibly trade reform needs to happen,’ and so we want to show them [an ad] on trade and immigration.”

    Parscale called anoth­er Face­book ad tool, Looka­like Audi­ences, “one of the most pow­er­ful fea­tures of Face­book.” He said the tool allowed the cam­paign to expand its audi­ence and find peo­ple they didn’t already know. “Face­book Looka­like Audi­ences are pret­ty amaz­ing. I mean, it’s why the platform’s great.”

    If you go to Facebook’s ad tools page, you can see the types of ads peo­ple are being served — and whom and where they’re going to. Tues­day morn­ing, for exam­ple, I looked up one ad the cam­paign start­ed run­ning on April 14 that warns that “ille­gal aliens are com­ing across the Mex­i­can bor­der in record-break­ing num­bers” and that there were more than 100,000 arrests by Cus­toms and Bor­der Pro­tec­tion last month alone. Thus far, less than $100 has been spent on the ad, and it’s got­ten few­er than 1,000 impres­sions. But 49 per­cent of the users it’s been shown to are 65 and over.

    To be sure, the over-65 group isn’t the only one the Trump campaign is targeting, or even the biggest. According to Bully Pulpit, 51 percent of Trump’s political ad spend was targeted to people ages 36 to 64. Fifty-four percent of Democrats’ budgets went to that age group.

    Both the Trump cam­paign and Democ­rats are spend­ing the least amount of their Face­book ad bud­gets tar­get­ing peo­ple ages 18 to 35. Just 4 per­cent of the Trump Face­book ad spend is going there, and 19 per­cent among Democ­rats. But there’s also a lot of vari­a­tion among can­di­dates: Sen. Bernie Sanders (I‑VT) is spend­ing 49 per­cent of his Face­book ad bud­get on young peo­ple, com­pared to just 8 per­cent for Sen. Amy Klobuchar (D‑MN).

    Old­er peo­ple are also more sus­cep­ti­ble to fake news on Face­book

    Face­book is get­ting more pop­u­lar among old­er peo­ple, as younger groups move to oth­er apps, such as Insta­gram (which Face­book also owns). Old­er peo­ple are also like­li­er to spread fake news on Face­book, research shows.

    A study from Prince­ton and New York Uni­ver­si­ty researchers pub­lished in Sci­ence Advances in Jan­u­ary found that con­ser­v­a­tives and peo­ple over 65 were dis­pro­por­tion­ate­ly like­ly to share arti­cles from fake news domains dur­ing the 2016 pres­i­den­tial elec­tion. Researchers also found that regard­less of ide­ol­o­gy, Face­book users over 65 shared almost sev­en times as many fake news arti­cles as younger users.

    Researchers didn’t iden­ti­fy why, specif­i­cal­ly, old­er users were more sus­cep­ti­ble to fake news, but they sug­gest­ed it could be an issue with media lit­er­a­cy.

    While the Trump cam­paign isn’t spread­ing fake news with its cam­paign ads on immi­gra­tion, it may be pulling at a sim­i­lar thread in tar­get­ing old­er peo­ple who could be more like­ly to take what they see on Face­book at face val­ue. The strat­e­gy could be work­ing.

    ———-

    “The Trump 2020 cam­paign is going after old­er peo­ple with immi­gra­tion ads on Face­book” by Emi­ly Stew­art; Vox; 04/16/2019

    “Trump’s campaign is spending 44 percent of its Facebook advertising budget to target users who are 65 and older, according to a report from Axios based on data from the political communications agency Bully Pulpit Interactive. That’s significantly more than the top 12 Democratic 2020 candidates, who are spending an average of 27 percent of their Facebook ad budgets on the over-65 crowd.”

    It’s the Trumpian version of Big Data politics: scaring old people about immigrants with customized social media ads. And the Trump campaign is a BIG customer for these services. It’s something to keep in mind when the Trump campaign freaks out following reports of possible new microtargeting policies: the Trump campaign is probably Facebook’s biggest client for those services. It is planning on spending hundreds of millions of dollars on this, and the larger right-wing propaganda ecosystem will probably spend billions over the next year. Microtargeted white nationalist trolling is big money for Facebook in 2020:

    ...
    And the mes­sage Trump is using to appeal to Face­book users is a famil­iar one: immi­gra­tion. Accord­ing to Axios, he uses “nativist lan­guage around immi­grants” in 54 per­cent of his ads. Democ­rats, on the oth­er hand, are talk­ing about fundrais­ing and oth­er pol­i­cy issues, but not immi­gra­tion. Bul­ly Pul­pit ana­lyzed ad data from March 23 to April 5.

    It’s not only the focus of Trump’s mes­sag­ing that’s notable, but also its size. Trump is out­spend­ing his poten­tial Demo­c­ra­t­ic rivals on Face­book and Google ads in a big way, accord­ing to sep­a­rate Bul­ly Pul­pit data released to Axios in March. For the first two and a half months of the year, Trump spent $4.5 mil­lion on Face­book and Google ads, sev­en and a half times more than the top-spend­ing Demo­c­rat, Sen. Eliz­a­beth War­ren (D‑MA), spent on Face­book and Google ads.
    ...

    And note how Brad Parscale, Trump’s 2016 digital director and now campaign manager, has talked about the “Lookalike Audiences” tool that Facebook also offers to find people similar to a target list. Facebook’s ad system is basically set up to maximize microtargeting, which unfortunately doubles as a system for maximally effective delivery of inflammatory microtargeted disinformation (a conceptual sketch of the “lookalike” technique follows the excerpt below):

    ...
    He also talked about the type of micro­tar­get­ing the reelec­tion cam­paign is like­ly doing in serv­ing old­er peo­ple ads about immi­gra­tion on Face­book right now:

    “When you decide you’re going to run for pres­i­dent of the Unit­ed States, now you have hard-matched data with con­sumer data, matched with vot­er his­to­ry, matched with very com­pre­hen­sive polling data from all over the coun­try,” Parscale said.

    “By the time all those pieces are put together, then you can actually pull out an audience. You can say, ‘I want to find everybody in this portion of Ohio that believes that the wall needs to be built, that thinks that possibly trade reform needs to happen,’ and so we want to show them [an ad] on trade and immigration.”

    Parscale called anoth­er Face­book ad tool, Looka­like Audi­ences, “one of the most pow­er­ful fea­tures of Face­book.” He said the tool allowed the cam­paign to expand its audi­ence and find peo­ple they didn’t already know. “Face­book Looka­like Audi­ences are pret­ty amaz­ing. I mean, it’s why the platform’s great.”

    ...
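
    To give a sense of what “lookalike” expansion means in practice, here’s a conceptual sketch, and only that: Facebook’s actual system is proprietary, so this just illustrates the generic technique of treating users as feature vectors and ranking the wider population by similarity to a seed audience. The data, feature count, and similarity measure are all illustrative assumptions.

    import numpy as np

    def lookalike_audience(seed, population, k=3):
        """Rank the population by cosine similarity to the seed audience's
        average feature vector; return indices of the top k matches.
        Purely illustrative -- NOT Facebook's actual algorithm."""
        centroid = seed.mean(axis=0)
        sims = population @ centroid / (
            np.linalg.norm(population, axis=1) * np.linalg.norm(centroid) + 1e-9
        )
        return np.argsort(-sims)[:k]

    # Toy data: rows are users, columns are behavioral/demographic features.
    seed_users = np.array([[1.0, 0.9, 0.1], [0.9, 1.0, 0.2]])  # a known supporter list
    everyone = np.random.rand(1000, 3)                         # the wider user base
    expanded = lookalike_audience(seed_users, everyone)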

    And it happens to be the case that the elderly are the most susceptible to fake news on Facebook. Trump scaring grandma and grandpa with scary immigrant memes is the perfect storm for fake news. The researchers who found the elderly to be seven times more likely to share fake news than younger Facebook users suggested that media literacy deficits might be part of the issue. Which is undoubtedly true. The elderly who get right-wing disinformation on Facebook are also going to overlap heavily with Fox News viewers (the median Fox News viewer is around 68), and that’s an audience with self-evident media literacy issues. So preying on media literacy deficits is a major part of the Trump 2020 strategy:

    ...
    Old­er peo­ple are also more sus­cep­ti­ble to fake news on Face­book

    Face­book is get­ting more pop­u­lar among old­er peo­ple, as younger groups move to oth­er apps, such as Insta­gram (which Face­book also owns). Old­er peo­ple are also like­li­er to spread fake news on Face­book, research shows.

    A study from Prince­ton and New York Uni­ver­si­ty researchers pub­lished in Sci­ence Advances in Jan­u­ary found that con­ser­v­a­tives and peo­ple over 65 were dis­pro­por­tion­ate­ly like­ly to share arti­cles from fake news domains dur­ing the 2016 pres­i­den­tial elec­tion. Researchers also found that regard­less of ide­ol­o­gy, Face­book users over 65 shared almost sev­en times as many fake news arti­cles as younger users.

    Researchers didn’t iden­ti­fy why, specif­i­cal­ly, old­er users were more sus­cep­ti­ble to fake news, but they sug­gest­ed it could be an issue with media lit­er­a­cy.
    ...

    And it’s that ability to microtarget old people with messages about scary immigrants that the Trump campaign can’t afford to lose. It’s too important. An endless hurricane of inflammatory, microtargeted digital lies and white nationalist memes was the Trump campaign’s digital ‘secret sauce’ in 2016, and it’s going to be the next-generation secret sauce in 2020. Unless Facebook ends the microtargeting. That’s part of what makes this story of Facebook thinking about changing those rules something to keep an eye on. This is a very big deal for the Trump campaign. Personalized provocations and deception are what the digital operations for Trump 2020 are all about. Otherwise it’s back to more generic provocations and deception, which Facebook is still quite good at delivering, so the Trump campaign’s lies should be OK.

    Posted by Pterrafractyl | November 24, 2019, 10:12 pm
  35. It happened again. Again: Facebook just had another giant data leak. A security researcher found an unencrypted database on a Dark Web hacker forum containing Facebook account info on 267 million users. According to the researcher, the database had no password protection and was available for anyone on the hacker forum to download for about two weeks. It appears to cover mostly US users. Each entry in the database contained a Facebook user id, a full name, and a phone number. Importantly, it appears to be pretty up-to-date information, so it’s perfect for scam artists. The researcher concluded that the database was likely created by a criminal operation in Vietnam.

    Interestingly, while the researcher raised the possibility that this information was simply scraped from what Facebook users publicly make available on their profiles, he also suspects the information may have been grabbed via the Facebook API used by app developers. Facebook used to give app developers direct access to information like the phone numbers associated with a user account via the API, until the company restricted access to that information in 2018 following the Cambridge Analytica scandal, so it’s possible this information was all grabbed before those restrictions were put in place. But as the researcher notes, it’s also possible someone found a vulnerability in the updated Facebook API.
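
    To make the mechanics concrete, here’s a minimal sketch of what third-party field access through Facebook’s Graph API generally looked like. The API version, token, and field list here are illustrative assumptions, not a reconstruction of any particular app; the point is just that, before the 2018 restrictions, profile fields were one parameter away for an app holding a user-granted token:

    import requests

    # Hypothetical, illustrative values -- not a working recipe.
    ACCESS_TOKEN = "token-granted-when-the-user-installed-the-app"
    FIELDS = "id,name,phone"  # phone-type fields were cut off from third parties in 2018

    # Graph-API-style request: one GET with a field list and a token.
    resp = requests.get(
        "https://graph.facebook.com/v2.12/me",
        params={"fields": FIELDS, "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    profile = resp.json()
    print(profile.get("id"), profile.get("name"))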

    So are we looking at a new ‘bug’ that allowed for the mass collection of data, or is this the consequence of Facebook’s past policies? At this point we have no idea. But it’s worth recalling the scandal revealed last month, when Facebook admitted that at least 100 app developer partners may have improperly accessed user data from the members of Facebook Groups. We weren’t given any information about how many people were impacted, and Facebook only gave a vague description of the type of information developers were able to grab, admitting only that it included names and photos. So Facebook admitted just last month that there was a bug with the API it makes available to the developers of Facebook Group apps, but that’s about all it told us at the time. Might this latest leak be related to that Facebook Groups leak? Who knows. At this point there are so many reports of leaks that it seems plausible at least some of them are related. Either way, if you’re a Facebook user in the US and you suddenly start getting a bunch of scammy phone calls or texts from unknown numbers, you can probably thank Facebook for that:

    Forbes

    267 Mil­lion Names And Phone Num­bers Leaked Online — And They’re All From Face­book

    John Bran­don
    Dec 19, 2019, 04:01pm

    If you start receiv­ing more tele­mar­ket­ing calls, you can blame Face­book.

    Recent­ly, a secu­ri­ty researcher named Bob Diachenko found a data­base of user account info includ­ing their name and phone num­bers for 267 mil­lion Face­book users. It was avail­able in an unpro­tect­ed for­mat and copied to oth­er hack­er forums.

    Reports indi­cate that this presents a trea­sure trove of data for tele­mar­keters and spam pur­vey­ors because the data looks legit­i­mate and comes from the social net­work itself, not from an untrust­ed source. (In some cas­es, leaked data that is old and out­dat­ed doesn’t help would-be scam­mers because the names and num­bers are incor­rect.)

    Hav­ing this data means scam­mers can start new phish­ing scams and cor­re­late the data from the phone records to Face­book user pro­files.

    The ana­lyst says the data was poten­tial­ly com­pro­mised through an API that gives devel­op­ers access to back-end data, such as friend lists, groups, and pho­tos.

    He says at one time it was like­ly a pro­tect­ed, pri­vate data­base even on hack­er forums, but was set to pub­lic and was read­i­ly avail­able to any­one for about two weeks.

    Hack­ers rou­tine­ly down­load user infor­ma­tion like this or pur­chase it on the Dark Web, but the dif­fer­ence with this data is that it has some authen­tic­i­ty since it also con­tains Face­book user infor­ma­tion. And, because it con­tains phone num­bers, it means hack­ers might be able to set up more sophis­ti­cat­ed attacks that could include both a phone scam and an email scam.

    Face­book has come under fire in recent years because of what some per­ceive to be lax secu­ri­ty pro­to­cols.

    The most famous inci­dent is relat­ed to Cam­bridge Ana­lyt­i­ca and how that com­pa­ny had har­vest­ed user data from Face­book by using an app that appeared to be an aca­d­e­m­ic sur­vey.

    This lat­est breach is much larg­er in scope. The sur­vey col­lect­ed data from 87 mil­lion users but this lat­est leak, accord­ing to the researcher, totals 267 mil­lion accounts.

    ...

    ———–

    “267 Mil­lion Names And Phone Num­bers Leaked Online — And They’re All From Face­book” by John Bran­don; Forbes; 12/19/2019

    “Reports indicate that this presents a treasure trove of data for telemarketers and spam purveyors because the data looks legitimate and comes from the social network itself, not from an untrusted source. (In some cases, leaked data that is old and outdated doesn’t help would-be scammers because the names and numbers are incorrect.)”

    A treasure trove of data for telemarketers and spam purveyors covering 267 million people. That’s what someone was just giving away on this hacker forum earlier this month. Was this data collected by scraping information users make publicly available? Or are we looking at another Cambridge Analytica-style leak where Facebook was basically giving this information away to app developers? Or maybe it was an API bug. At this point we have no idea about the source of this leak. We just know from experience that there are a variety of explanations because we’ve seen so many different types of Facebook leaks (a generic sketch of what ‘scraping’ amounts to follows the excerpt below):

    ...
    The ana­lyst says the data was poten­tial­ly com­pro­mised through an API that gives devel­op­ers access to back-end data, such as friend lists, groups, and pho­tos.

    He says at one time it was like­ly a pro­tect­ed, pri­vate data­base even on hack­er forums, but was set to pub­lic and was read­i­ly avail­able to any­one for about two weeks.

    Hack­ers rou­tine­ly down­load user infor­ma­tion like this or pur­chase it on the Dark Web, but the dif­fer­ence with this data is that it has some authen­tic­i­ty since it also con­tains Face­book user infor­ma­tion. And, because it con­tains phone num­bers, it means hack­ers might be able to set up more sophis­ti­cat­ed attacks that could include both a phone scam and an email scam.
    ...
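
    As for the scraping possibility, here’s a generic sketch of how trivially a public profile page can be collected. The URL pattern and page structure are hypothetical placeholders, not Facebook’s actual markup; the point is that collection at scale is just fetch-and-parse in a loop:

    import requests
    from bs4 import BeautifulSoup

    def scrape_public_profile(user_id):
        # Hypothetical URL pattern and markup, for illustration only.
        html = requests.get(f"https://example.com/profile/{user_id}", timeout=10).text
        page = BeautifulSoup(html, "html.parser")
        name = page.find("h1")  # assume the display name sits in an <h1> tag
        return {"id": user_id, "name": name.get_text(strip=True) if name else None}

    # A bot just iterates over IDs and accumulates the records into a database.
    records = [scrape_public_profile(uid) for uid in ("1001", "1002", "1003")]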

    Sophisticated scams on 267 million people are now that much more accessible to random scammers. But the information in this database doesn’t appear to be limited to Facebook user ids, full names, and phone numbers. Based on a screenshot in the actual Comparitech report, the data also potentially includes information like date of birth, location, gender, relationship status, and email addresses. But in that screenshot, all of the fields other than full name, user id, timestamp, and phone number were set to null. And that suggests that whoever set this database up and made it public might have all of that additional information and intentionally scrubbed it so only the names, user ids, and phone numbers were released:

    Com­par­itech
    Blog

    Report: 267 mil­lion Face­book users IDs and phone num­bers exposed online

    Paul Bischoff
    TECH WRITER, PRIVACY ADVOCATE AND VPN EXPERT
    Decem­ber 19, 2019

    A data­base con­tain­ing more than 267 mil­lion Face­book user IDs, phone num­bers, and names was left exposed on the web for any­one to access with­out a pass­word or any oth­er authen­ti­ca­tion.

    Com­par­itech part­nered with secu­ri­ty researcher Bob Diachenko to uncov­er the Elas­tic­search clus­ter. Diachenko believes the trove of data is most like­ly the result of an ille­gal scrap­ing oper­a­tion or Face­book API abuse by crim­i­nals in Viet­nam, accord­ing to the evi­dence.

    The infor­ma­tion con­tained in the data­base could be used to con­duct large-scale SMS spam and phish­ing cam­paigns, among oth­er threats to end users.

    Diachenko imme­di­ate­ly noti­fied the inter­net ser­vice provider man­ag­ing the IP address of the serv­er so that access could be removed. How­ev­er, Diachenko says the data was also post­ed to a hack­er forum as a down­load.

    Time­line of the expo­sure

    The data­base was exposed for near­ly two weeks before access was removed. Here’s what we know:

    * Decem­ber 4 – The data­base was first indexed.
    * Decem­ber 12 – The data was post­ed as a down­load on a hack­er forum.
    * Decem­ber 14 – Diachenko dis­cov­ered the data­base and imme­di­ate­ly sent an abuse report to the ISP man­ag­ing the IP address of the serv­er.
    * Decem­ber 19 – The data­base is now unavail­able.

    Typ­i­cal­ly, when we find exposed per­son­al data like this, we take steps to noti­fy the own­er of the data­base. But because we believe this data belongs to a crim­i­nal orga­ni­za­tion, Diachenko went straight to the ISP.

    What data was exposed

    In total 267,140,436 records were exposed. Most of the affect­ed users were from the Unit­ed States. Diachenko says all of them seem to be valid. Each con­tained:

    * A unique Face­book ID
    * A phone num­ber
    * A full name
    * A time­stamp

    The serv­er includ­ed a land­ing page with a login dash­board and wel­come note.

    Face­book IDs are unique, pub­lic num­bers asso­ci­at­ed with spe­cif­ic accounts, which can be used to dis­cern an account’s user­name and oth­er pro­file info.

    [see screenshot of example Facebook user profile data that includes fields like date of birth, gender, relationship status, and email address all set to null]

    Face­book scrap­ing

    How crim­i­nals obtained the user IDs and phone num­bers isn’t entire­ly clear. One pos­si­bil­i­ty is that the data was stolen from Facebook’s devel­op­er API before the com­pa­ny restrict­ed access to phone num­bers in 2018. Facebook’s API is used by app devel­op­ers to add social con­text to their appli­ca­tions by access­ing users’ pro­files, friends list, groups, pho­tos, and event data. Phone num­bers were avail­able to third-par­ty devel­op­ers pri­or to 2018.

    Diachenko says Facebook’s API could also have a secu­ri­ty hole that would allow crim­i­nals to access user IDs and phone num­bers even after access was restrict­ed.

    Anoth­er pos­si­bil­i­ty is that the data was stolen with­out using the Face­book API at all, and instead scraped from pub­licly vis­i­ble pro­file pages.

    “Scrap­ing” is a term used to describe a process in which auto­mat­ed bots quick­ly sift through large num­bers of web pages, copy­ing data from each one into a data­base. It’s dif­fi­cult for Face­book and oth­er social media sites to pre­vent scrap­ing because they often can­not tell the dif­fer­ence between a legit­i­mate user and a bot. Scrap­ing is against Facebook’s–and most oth­er social networks’–terms of ser­vice.

    Many peo­ple have their Face­book pro­file vis­i­bil­i­ty set­tings set to pub­lic, which makes scrap­ing them triv­ial.

    This isn’t the first time such a data­base has been exposed. In Sep­tem­ber 2019, 419 mil­lion records across sev­er­al data­bas­es were exposed. These also includ­ed phone num­bers and Face­book IDs.

    Dan­gers of exposed data

    A data­base this big is like­ly to be used for phish­ing and spam, par­tic­u­lar­ly via SMS. Face­book users should be on the look­out for sus­pi­cious text mes­sages. Even if the sender knows your name or some basic infor­ma­tion about you, be skep­ti­cal of any unso­licit­ed mes­sages.

    Face­book users can min­i­mize the chances of their pro­files being scraped by strangers by adjust­ing their account pri­va­cy set­tings:

    * Open Facebook and go to Settings
    * Click Privacy
    * Set all relevant fields to Friends or Only me
    * Set “Do you want search engines outside of Facebook to link to your profile” to No

    This will reduce the chances of your pro­file being scraped by third par­ties, but the only way to ensure it nev­er hap­pens again is to com­plete­ly deac­ti­vate or delete your Face­book account.

    ...

    ———–

    “Report: 267 mil­lion Face­book users IDs and phone num­bers exposed online” by Paul Bischoff; Com­par­itech; 12/19/2019

    “How crim­i­nals obtained the user IDs and phone num­bers isn’t entire­ly clear. One pos­si­bil­i­ty is that the data was stolen from Facebook’s devel­op­er API before the com­pa­ny restrict­ed access to phone num­bers in 2018. Facebook’s API is used by app devel­op­ers to add social con­text to their appli­ca­tions by access­ing users’ pro­files, friends list, groups, pho­tos, and event data. Phone num­bers were avail­able to third-par­ty devel­op­ers pri­or to 2018.”

    Maybe the data was grabbed from the Facebook API back when Facebook was just giving information like phone numbers away to third-party app developers. Or maybe it’s an ongoing security vulnerability in the API allowing someone to still access that information. We don’t know. But based on the screenshot in this report, it looks like that database had separate fields for information like date of birth, location, gender, and relationship status, but those fields were all set to null (a rough reconstruction of one record’s shape follows the excerpt below):

    ...
    In total 267,140,436 records were exposed. Most of the affect­ed users were from the Unit­ed States. Diachenko says all of them seem to be valid. Each con­tained:

    * A unique Face­book ID
    * A phone num­ber
    * A full name
    * A time­stamp

    The serv­er includ­ed a land­ing page with a login dash­board and wel­come note.

    Face­book IDs are unique, pub­lic num­bers asso­ci­at­ed with spe­cif­ic accounts, which can be used to dis­cern an account’s user­name and oth­er pro­file info.

    [see screenshot of example Facebook user profile data that includes fields like date of birth, gender, relationship status, and email address all set to null]
    ...
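
    Putting the report’s description and screenshot together, a hedged reconstruction of one leaked record’s shape might look like the following. The values are invented placeholders, and the nulled fields are the ones that appear scrubbed in the screenshot:

    record = {
        "id": "100000000000001",     # public, unique Facebook user ID
        "name": "Jane Doe",
        "phone": "+1-555-0100",
        "timestamp": 1575400000,     # when the record was collected/indexed
        "birthday": None,            # fields present in the schema but nulled
        "location": None,
        "gender": None,
        "relationship_status": None,
        "email": None,
    }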

    In other words, whoever decided to leak this database to the world for free just might have a larger, more comprehensive database on these same 267 million people for sale. Might this leak be a means of letting the hacker community ‘taste’ this data set and verify that it’s legit so people will be willing to pay for the complete data set? Again, we have no idea. All we know is that someone decided to give this database away for free, and that people should be extra wary of odd phone calls and texts as a consequence. And people should probably delete their Facebook accounts, which we already knew.

    Posted by Pterrafractyl | December 19, 2019, 10:29 pm
  36. Brittany Kaiser, the Cambridge Analytica whistle-blower who appears to be the person releasing thousands of internal company documents via the @HindsightFiles Twitter account detailing the global scale of the Cambridge Analytica/SCL political influence operations, recently hinted at which parts of that global operation she’s going to be discussing in the future: Asia. It turns out Cambridge Analytica has been operating in Singapore, Taiwan, South Korea, Myanmar, and the Philippines. And according to Kaiser, the Philippines appears to be particularly susceptible to the type of personalized microtargeting operation Cambridge Analytica specialized in, because Facebook is extremely popular there and Filipino laws make it easy to access large amounts of personal information. As we’ll see in the second article below, the Philippines was also the country with the second largest number of Facebook users who had their profiles scraped by Cambridge Analytica (around 1.2 million), and Cambridge Analytica’s parent company, SCL, has been working with a political client in the country since 2013.

    SCL’s advice to the Filipino client revolved around rebranding a politician perceived as kind and honorable into a tough-on-crime ‘man of action’. No one knows who that political client was, but it doesn’t appear to have been Rodrigo Duterte, a politician who didn’t need to hire SCL to tell him to run as a ‘tough on crime’ politician in 2013. There is a list of suspected clients, but no one knew exactly who it was when Quartz first reported on this in April of 2018.

    We’ll see if Kaiser ends up revealing the mystery client or not. But the fact that there’s mystery about the identity of SCL’s Filipino client is a reminder that these political influencing services are generally purchased in secret, or at least very quietly, so there are probably a lot of mystery clients out there in this industry:

    Nikkei Asian Review

    Whistle­blow­er warns of Cam­bridge Ana­lyt­i­ca’s Asia reach

    For­mer employ­ees still active in elec­tions, Brit­tany Kaiser says

    KOSUKE TERAI, Nikkei staff writer
    Jan­u­ary 11, 2020 00:23 JST

    TOKYO — As Asia pre­pares for four nation­al elec­tions this year, a for­mer exec­u­tive at con­tro­ver­sial polit­i­cal con­sul­tan­cy Cam­bridge Ana­lyt­i­ca warned of the com­pa­ny’s per­sis­tent reach despite clos­ing its doors in 2018.

    “With hun­dreds of Cam­bridge Ana­lyt­i­ca rem­nants oper­at­ing around the world, the threat of pub­lic opin­ion manip­u­la­tion is grow­ing in Asia,” Brit­tany Kaiser told Nikkei. Her warn­ing comes as Sin­ga­pore, Tai­wan, South Korea, and Myan­mar all pre­pare to go to the polls in the com­ing months.

    Kaiser blew the whis­tle on the com­pa­ny’s alleged mis­use of Face­book user data dur­ing the Brex­it ref­er­en­dum and the 2016 U.S. pres­i­den­tial elec­tion, lead­ing to her tes­ti­fy­ing before the British par­lia­ment and U.S. inves­ti­ga­tors.

    “I feel like I kind of had blind­ers on because I want­ed to believe that this com­pa­ny was build­ing some­thing real­ly impor­tant,” she said about her time at Cam­bridge Ana­lyt­i­ca, which began when she was writ­ing her doc­tor­al the­sis on using real-time data to pre­dict and pre­vent mass vio­lence.

    Kaiser recalled her sur­prise when news reports revealed that the com­pa­ny had ille­gal­ly acquired Face­book data for com­mer­cial and polit­i­cal use, under the guise of aca­d­e­m­ic research. Accord­ing to Kaiser, Cam­bridge Ana­lyt­i­ca exec­u­tives, includ­ing CEO Alexan­der Nix, had spo­ken open­ly of the Face­book datasets in pre­sen­ta­tions to clients. She said anoth­er exec­u­tive, Alex Tayler, told her that the data was pur­chased legal­ly from an aca­d­e­m­ic who, accord­ing to Tayler, may have lied about how it was acquired.

    “Obvi­ous­ly, lat­er on, I found out that they were pret­ty well aware of what they were doing,” Kaiser said.

    Cam­bridge Ana­lyt­i­ca had deep ties to con­ser­v­a­tive polit­i­cal move­ments. It was part­ly owned by Amer­i­can bil­lion­aire Robert Mer­cer, a sup­port­er of con­ser­v­a­tive caus­es. Stephen Ban­non, who was chief strate­gist for Don­ald Trump’s pres­i­den­tial cam­paign, was an exec­u­tive at the com­pa­ny.

    “It is my per­son­al belief and due to the strate­gies and truth that I have seen [in] case stud­ies, that with­out Cam­bridge Ana­lyt­i­ca as part of the team, [the Brex­it and Trump cam­paigns] would prob­a­bly not have been suc­cess­ful,” Kaiser said.

    Polit­i­cal con­sul­tan­cies like Cam­bridge Ana­lyt­i­ca and data com­pa­nies like Google charge fees for their teams to assist polit­i­cal cam­paigns. Face­book, by con­trast, pro­vid­ed the ser­vice for free to its biggest clients, accord­ing to Kaiser. Hillary Clin­ton’s pres­i­den­tial cam­paign declined Face­book’s assis­tance, while the Trump cam­paign accept­ed it.

    “These types of ser­vices are avail­able to the high­est bid­der. It does­n’t mat­ter what the goals are of that indi­vid­ual,” she said.

    Accord­ing to Kaiser, Face­book did not send a con­tract or a rep­re­sen­ta­tive to ensure that Cam­bridge Ana­lyt­i­ca had delet­ed the ille­gal­ly acquired data of 87 mil­lion users. “All they did was send an email that said the data was delet­ed, which I think is pret­ty irre­spon­si­ble giv­en that Cam­bridge Ana­lyt­i­ca is run­ning elec­tions around the world,” Kaiser said, not­ing the con­sul­tan­cy’s involve­ment in elec­tions in Malaysia and the Philip­pines.

    Kaiser argued that elec­tions in the Philip­pines, due to the coun­try’s dig­i­tal infra­struc­ture, are par­tic­u­lar­ly sus­cep­ti­ble to online polit­i­cal manip­u­la­tion. “You can actu­al­ly pur­chase and license a lot of data on Fil­ipino cit­i­zens, and because there’s so much data in the Philip­pines, much more advanced tac­tics could be used there than in Malaysia,” she said, adding that she would soon release files on the Philip­pines for local jour­nal­ists to inves­ti­gate.

    Since becom­ing a whistle­blow­er, Kaiser has writ­ten a mem­oir of her time at Cam­bridge Ana­lyt­i­ca, in which she detailed the inner work­ings of the com­pa­ny and gave her account of the events sur­round­ing the 2016 U.S. elec­tion.

    “What I think is one of the biggest mis­con­cep­tions about the sit­u­a­tion was that Rus­sia and Cam­bridge Ana­lyt­i­ca pos­si­bly coor­di­nat­ed with each oth­er. What every­one needs to under­stand is that Face­book made it real­ly easy for Rus­sia to tar­get peo­ple with­out hav­ing to con­tact a data com­pa­ny like Cam­bridge Ana­lyt­i­ca,” Kaiser said.

    “In fact, Rus­sia, or peo­ple any­where in the world, legal­ly buy data on Amer­i­can cit­i­zens. There are no laws that stop that from hap­pen­ing,” she added.

    ...

    While such legislation is in the works, Kaiser said social media platforms like Facebook should ban political advertising. She applauded Twitter CEO Jack Dorsey for taking that step, while criticizing Facebook CEO Mark Zuckerberg for “hiding behind free speech.”

    “He can’t admit how big the prob­lem is and that he does­n’t have the abil­i­ty to fix it,” she said. “While they haven’t solved the prob­lem, it’s bet­ter to have no polit­i­cal com­mu­ni­ca­tions than to have com­mu­ni­ca­tions that are dan­ger­ous.”

    ———-

    “Whistle­blow­er warns of Cam­bridge Ana­lyt­i­ca’s Asia reach” by KOSUKE TERAI; Nikkei Asian Review; 01/11/2020

    ““With hun­dreds of Cam­bridge Ana­lyt­i­ca rem­nants oper­at­ing around the world, the threat of pub­lic opin­ion manip­u­la­tion is grow­ing in Asia,” Brit­tany Kaiser told Nikkei. Her warn­ing comes as Sin­ga­pore, Tai­wan, South Korea, and Myan­mar all pre­pare to go to the polls in the com­ing months.”

    Hundreds of Cambridge Analytica remnants operating around the world. That’s ominous. Yet that’s what Kaiser describes. She also describes the Philippines as being particularly susceptible to these operations due to the large volume of personal data made commercially available there:

    ...
    Accord­ing to Kaiser, Face­book did not send a con­tract or a rep­re­sen­ta­tive to ensure that Cam­bridge Ana­lyt­i­ca had delet­ed the ille­gal­ly acquired data of 87 mil­lion users. “All they did was send an email that said the data was delet­ed, which I think is pret­ty irre­spon­si­ble giv­en that Cam­bridge Ana­lyt­i­ca is run­ning elec­tions around the world,” Kaiser said, not­ing the con­sul­tan­cy’s involve­ment in elec­tions in Malaysia and the Philip­pines.

    Kaiser argued that elec­tions in the Philip­pines, due to the coun­try’s dig­i­tal infra­struc­ture, are par­tic­u­lar­ly sus­cep­ti­ble to online polit­i­cal manip­u­la­tion. “You can actu­al­ly pur­chase and license a lot of data on Fil­ipino cit­i­zens, and because there’s so much data in the Philip­pines, much more advanced tac­tics could be used there than in Malaysia,” she said, adding that she would soon release files on the Philip­pines for local jour­nal­ists to inves­ti­gate.

    ...

    “In fact, Rus­sia, or peo­ple any­where in the world, legal­ly buy data on Amer­i­can cit­i­zens. There are no laws that stop that from hap­pen­ing,” she added.
    ...

    Anyone anywhere in the world can potentially run a digital microtargeting operation in any other country as long as this vast marketplace of personal data remains a commercial product anyone can buy. In other words, a big part of what makes the Cambridge Analytica story so important is that it’s just one example of a global industry. It was a peek behind the curtain. And as the following April 2018 Quartz report describes, the peek behind the curtain of what SCL has been up to in the Philippines only revealed that the company had a political client, not the identity of that client. And while there are educated guesses about the identity of this mystery client, it’s still a mystery. Which is part of what makes Kaiser’s references to “hundreds of Cambridge Analytica remnants operating around the world” so disturbing. It implies there are a lot more mystery clients:

    Quartz

    Cam­bridge Ana­lyt­i­ca boast­ed about brand­ing a Fil­ipino politi­cian as tough on crime and “no-non­sense”

    By Josh Hor­witz & Devjy­ot Ghoshal
    April 9, 2018

    In May 2015, about one year before the Philip­pines went to the polls that brought Rodri­go Duterte to pow­er, Alexan­der Nix showed up at Manila’s Nation­al Press Club.

    The now-sus­pend­ed CEO of Cam­bridge Ana­lyt­i­ca (CA)—the British polit­i­cal con­sul­tan­cy that sur­rep­ti­tious­ly har­vest­ed the data of at least 87 mil­lion Face­book users—declared that the future of elec­tion cam­paigns were going to be about data and tech­nol­o­gy.

    “Instead of rely­ing heav­i­ly on polit­i­cal sur­veys, cam­paign strate­gists must use those data to influ­ence the behav­ior of the per­son,” Nix said, accord­ing to The Mani­la Times news­pa­per.

    On April 05, Face­book admit­ted some 1.2 mil­lion users in Philip­pines had their data improp­er­ly accessed by CA. That’s the high­est num­ber after the US, where the com­pa­ny used infor­ma­tion from over 70 mil­lion users to help Don­ald Trump’s pres­i­den­tial cam­paign strate­gi­cal­ly tar­get vot­ers.

    But CA’s foot­print in the Philippines—one of Facebook’s most active mar­kets—is at least five years old.

    Doc­u­ments issued around 2013 obtained by Quartz list out a num­ber of case stud­ies illus­trat­ing how SCL Elec­tions, Cam­bridge Analytica’s pre­de­ces­sor, helped politi­cians around the world gauge and increase their influ­ence.

    A short blurb on the Philip­pines reads:

    Fac­ing nation­al elec­tions, the incum­bent client was per­ceived as kind and hon­ourable – qual­i­ties his cam­paign team thought were elec­tion-win­ning. By con­trast, SCL’s research showed that many groups with­in the elec­torate were more like­ly to be swayed by qual­i­ties such as ‘tough’ and ‘deci­sive’. Using the cross-cut­ting issue of crime, SCL rebrand­ed the client as a strong, no-non­sense man of action.

    Unlike descrip­tions of its work in Indone­sia and Thai­land, SCL’s brief case study for the Philip­pines men­tions no names or dates. That makes it dif­fi­cult to pin­point the spe­cif­ic politician—the “incum­bent client”—SCL claims to have helped.

    Since the doc­u­ments were pub­lished around 2013, it is unlike­ly the client was Duterte. While the Fil­ipino pres­i­dent made crime a key part of his cam­paign, he did not estab­lish him­self as a can­di­date for any nation­al elec­tion until late 2015, when he launched his pres­i­den­tial cam­paign. Until 2016, he was the may­or of Davao City in the south­ern Philip­pines, and his rep­u­ta­tion for being tough on crime—per­haps too tough—was well cement­ed by the time of his pres­i­den­tial run.

    Beyond that, experts on Fil­ipino pol­i­tics are unde­cid­ed as to which politi­cian (or politi­cians) SCL might have helped—or even which election—judging sole­ly from the descrip­tion. Pri­or to the 2016 pres­i­den­tial elec­tions, the Philip­pines held mid-term elec­tions in 2013, and a pres­i­den­tial elec­tion in 2010 that brought Benig­no Aquino III to pow­er.

    Two analysts Quartz reached out to suggested that the client could be Mar Roxas, a pedigreed Filipino politician who ran for president in 2010 before withdrawing and running for vice president. Ultimately, he was appointed as a cabinet secretary under former president Aquino in 2011. Roxas again ran for president in 2016 but lost.

    One ana­lyst, request­ing anonymi­ty, said that Rox­as “start­ed to project him­self as a deci­sive ‘man of action’ with the help of Pres­i­dent Aquino start­ing in mid-2013.” This includes his role in the han­dling of the armed con­flict in Zam­boan­ga City in Sep­tem­ber 2013, the earth­quake that hit Bohol in Octo­ber 2013, and the relief and reha­bil­i­ta­tion work in the after­math of the super­ty­phoon Haiyan in Novem­ber 2013.

    It’s pos­si­ble that Rox­as worked with SCL around that time in antic­i­pa­tion of his 2016 run, accord­ing to anoth­er ana­lyst. “They tried to rebrand him [Rox­as], but it didn’t work,” the ana­lyst said, also request­ing anonymi­ty. “Of the pres­i­den­tial can­di­dates in 2016, he would best fit the mold. And he was also known to like data and info in dri­ving his cam­paign.”

    Nico Ravanil­la, assis­tant pro­fes­sor of polit­i­cal sci­ence at the Uni­ver­si­ty of Cal­i­for­nia, San Diego (UCSD) is skep­ti­cal that Rox­as fits the descrip­tion. “Mar Rox­as ran on a plat­form of ‘Mr. Palengke’ (Mr. Mar­ket, lit­er­al­ly), so he doesn’t fit the pro­file of some­one who rebrand­ed his cam­paign from ‘kind and hon­or­able’ to some­one who is a ‘strong, no-non­sense man of action.’”

    Alan Peter Cayetano, who now serves as the sec­re­tary of for­eign affairs under Duterte, also came up as a poten­tial fit. “If we exam­ined his cam­paign plat­form in 2013, he might be con­strued as some­one who is a strong, no-non­sense man of action,” said Ravanil­la.

    “Alan Cayetano was known to want to reach out to foreign consultants,” another analyst said. “He was proud of that.”
    But Cayetano did not run on a strong anti-crime plat­form until he teamed up with Duterte, Ravanil­la added.

    Ravanilla suggested that senator Richard Gordon also matches the description outlined in the SCL documents. Active in politics since the 1980s as mayor of Olongapo City, in the north, Gordon ended his first stint in the senate in 2010 to make a presidential bid that failed. Later, he campaigned for a return to the senate in the 2013 mid-term elections, calling himself “Action Gordon” in some ads.

    But, accord­ing to a dif­fer­ent ana­lyst, Gor­don may not have had the means to engage SCL’s ser­vices. “It would be too expen­sive for any can­di­date, par­tic­u­lar­ly for a pres­i­den­tial can­di­date, to get the ser­vices of a for­eign con­sul­tant unless the can­di­date has seri­ous chances of win­ning and has a deep pock­et for cam­paign expens­es,” the ana­lyst explained.

    The offices of Cayetano, Gordon, and Cambridge Analytica did not reply to Quartz’s requests for comment.

    A spokesper­son for Rox­as denied that SCL or CA were involved in his 2016 elec­tion cam­paign. “SCL/Cambridge Ana­lyt­i­ca were not retained nor did they pro­vide any ser­vices for the Rox­as cam­paign,” the spokesper­son said.

    There is anoth­er pos­si­bil­i­ty: that SCL exag­ger­at­ed or mis­rep­re­sent­ed its actu­al impact in the Philip­pines.

    Well before the Facebook scandal, the company had a reputation for overstating its talents. In 2000, the Wall Street Journal and the Observer each published pieces illustrating SCL’s operations in Indonesia supporting president Abdurrahman Wahid. They each depicted the company’s local offices as a spy’s lair, full of computers and flat-screen monitors, while the firm delivered few meaningful results beyond assuaging an unpopular regime’s insecurities. “It was just like a movie set to impress the clients, to calm down the family,” the Observer reported one Indonesian who visited the office saying. “They are really desperate.”

    ———-

    “Cam­bridge Ana­lyt­i­ca boast­ed about brand­ing a Fil­ipino politi­cian as tough on crime and “no-non­sense”” by Josh Hor­witz & Devjy­ot Ghoshal; Quartz; 04/09/2018

    “On April 05, Facebook admitted some 1.2 million users in the Philippines had their data improperly accessed by CA. That’s the highest number after the US, where the company used information from over 70 million users to help Donald Trump’s presidential campaign strategically target voters.”

    Keep in mind that the 1.2 million figure for the number of people’s profiles from the Philippines acquired by Cambridge Analytica is probably a dramatic understatement. 1.2 million is just what Facebook initially admitted to, and it’s almost guaranteed that Facebook’s first admission is an understatement. That’s how Facebook scandals always seem to go: an initial admission that’s alarming, but not nearly as alarming as the final admission. SCL had been operating in the Philippines since at least 2013, the same year Cambridge Analytica was started, and there’s every reason to believe the company would have been grabbing as many Facebook profiles in the country as possible. A 2018 report found nearly 76 million social media users in the Philippines, and 75 million of them were on Facebook. So there’s a very good chance Cambridge Analytica got a lot more than 1.2 million Filipino Facebook profiles:

    ...
    But CA’s foot­print in the Philippines—one of Facebook’s most active mar­kets—is at least five years old.

    Doc­u­ments issued around 2013 obtained by Quartz list out a num­ber of case stud­ies illus­trat­ing how SCL Elec­tions, Cam­bridge Analytica’s pre­de­ces­sor, helped politi­cians around the world gauge and increase their influ­ence.
    ...

    And note how one of the suspected mystery clients, senator Richard Gordon, fits the pattern of a Filipino politician who started portraying a ‘tough on crime’ political brand in 2013. But he’s dismissed as the mystery client by some analysts because it’s assumed that Gordon couldn’t possibly afford the fees. That’s assuming he’s the one paying. The secrecy of the client also means the buyer is a secret. It could be a foreign or domestic backer of Gordon that’s the mystery client. We have no idea, but it’s a reminder of how firms offering these kinds of political psy-op services to secret clients make it easy for foreign interests to secretly ‘invest’ in a candidate by paying for secret political consulting and influence/psy-op services:

    ...
    Since the doc­u­ments were pub­lished around 2013, it is unlike­ly the client was Duterte. While the Fil­ipino pres­i­dent made crime a key part of his cam­paign, he did not estab­lish him­self as a can­di­date for any nation­al elec­tion until late 2015, when he launched his pres­i­den­tial cam­paign. Until 2016, he was the may­or of Davao City in the south­ern Philip­pines, and his rep­u­ta­tion for being tough on crime—per­haps too tough—was well cement­ed by the time of his pres­i­den­tial run.

    ...

    Ravanilla suggested that senator Richard Gordon also matches the description outlined in the SCL documents. Active in politics since the 1980s as mayor of Olongapo City, in the north, Gordon ended his first stint in the senate in 2010 to make a presidential bid that failed. Later, he campaigned for a return to the senate in the 2013 mid-term elections, calling himself “Action Gordon” in some ads.

    But, accord­ing to a dif­fer­ent ana­lyst, Gor­don may not have had the means to engage SCL’s ser­vices. “It would be too expen­sive for any can­di­date, par­tic­u­lar­ly for a pres­i­den­tial can­di­date, to get the ser­vices of a for­eign con­sul­tant unless the can­di­date has seri­ous chances of win­ning and has a deep pock­et for cam­paign expens­es,” the ana­lyst explained.
    ...

    So that’s going to be some­thing to watch for as Kaiser starts reveal­ing more about Cam­bridge Ana­lyt­i­ca’s inter­nal oper­a­tions in com­ing months. Will we learn about the mys­tery can­di­date? And what about the “hun­dreds of rem­nants” of Cam­bridge Ana­lyt­i­ca oper­at­ing around the world that Kaiser warned about? Are these rem­nants still oper­at­ing under new­er firms like Emer­da­ta? Hope­ful­ly we find out in Kaiser’s trea­sure trove of files.

    Either way, it’s important to keep in mind that a big part of what makes this story so important is that it’s a notorious example of a larger industry. Cambridge Analytica is notable for operating at the cutting edge, on a massive scale thanks to Facebook’s lax policies, and for being run by and for fascists like Steve Bannon, Robert Mercer, and Donald Trump. But it’s that larger psy-op industry we need to be most worried about, because it’s the kind of industry that could grow really massive as it gets more effective. It’s the kind of industry that includes companies like Psy-Group, all potentially with their own mystery clients. Personalized persuasion technology is only going to get more and more persuasive, and the better it gets the more mystery clients it’s inevitably going to get too. Especially in the political space.

    And fascists will be there to exploit this kind of personal persuasion technology. That’s perhaps the most critical point of the Cambridge Analytica scandal: it’s not just a notorious example of an out-of-control personalized persuasion industry. It’s a notorious example of how that industry is inevitably going to be used to great effect to help fascists like Robert Mercer, Steve Bannon, and Donald Trump, because they thrive in the environment of lies and mistrust that these companies help promote. This is like fascist dream technology, and that’s who we find at the cutting edge of it. So the bigger picture story that people need to pay attention to on these matters is that there’s a larger industry offering global services that Cambridge Analytica was one notorious example of, but there’s also a particular danger of fascists abusing this personalized persuasion technology, which Cambridge Analytica is also a notorious example of. Cambridge Analytica is a good example of a lot of bad things, hence all the mystery.

    Posted by Pterrafractyl | January 13, 2020, 12:25 am
  37. The Trump administration’s response to the coronavirus has been so perfect that the criticisms of that perfect response are part of an elaborate hoax to take down President Trump. That’s literally the message the Trump administration and the broader right-wing media establishment have been doubling and tripling down on over the last few days. It’s the kind of darkly surreal ‘leadership’ on a significant public health issue that simultaneously feels unhinged and unprecedented and, at the same time, exactly like what we should expect from the contemporary GOP and allied media. But while this kind of ‘hoax’ rhetoric from the GOP, as a reflexive defense against criticisms of the Trump administration’s response (or lack of response), is no longer unprecedented and is sadly exactly what we should expect after three years of this madness, it’s still unprecedented to see the ‘hoax’ deflection strategy applied to an urgent and growing real-world viral pandemic.

    The US electorate is going to feel the impact of Trump’s meandering federal response to the coronavirus quite literally, in the form of a flu-like illness. Hopefully it will be a very mild illness for most people, or maybe no symptoms at all. But we’re all going to feel the impact of this virus at some point, directly or indirectly, as it spreads across the world, and dismissing criticisms of the Trump administration’s response as a ‘hoax’ probably isn’t going to play well with the people who actually get sick in coming months or see the economy stall. Some of the damage to health and the economy is going to be unavoidable, but it’s almost unavoidable that there’s going to be a lot of very avoidable damage done too. Because if there’s one thing Trump can’t avoid, it’s avoidable damage. That was always part of Trump’s ‘charm’: he was going to be a ‘bull in a china shop’ and charge in and break stuff. And he’s done that, including trying to break the US’s federal disease response capabilities. So we’re poised for both an epidemic of the COVID-19 coronavirus and an epidemic of right-wing grievance-media-complex blustering about how all of the criticisms, for moves like putting Mike Pence in charge of the response to politically muzzle the government’s coronavirus messaging, are part of a hoax and a deep state plot to take down Trump. It’s going to get awful and weird if the Trump team decides to get awful and weird and apocalyptic like they always do when they’re in trouble. The micro-targeting of apocalyptic garbage messaging is the kind of thing we should expect from Trump’s team if the COVID-19 outbreak comes to dominate the election. Especially if it looks like he’s going to lose and needs an excuse to postpone (cancel) the election. Super-flu may have been a PR disaster for Trump so far, but there’s still plenty of propaganda opportunity. Right now the Trump team is pushing the idea that there’s a deep state plot to make him look bad with unfair criticisms. If the COVID-19 virus gets really bad and temporarily shuts down cities, we’ll probably see a very different kind of deep state-themed message coming from the Trump administration, directed at the loyal base audience who will believe anything.

    So it’s worth recall­ing an inter­est­ing sto­ry in Salon from right before the Novem­ber 2018 mid-term elec­tions about the giant vot­er pro­file data­base cre­at­ed by the Koch Broth­ers (now Koch Broth­er) for gen­er­al use by Repub­li­can can­di­dates and right-wing inter­est groups like the NRA. As the arti­cle describes, they cre­at­ed a com­pa­ny, Themis, as part of their work on the 2010 Project REDMAP ini­tia­tive. Project REDMAP was the GOP project to win as many state-lev­el races as pos­si­ble to max­i­mize Repub­li­can pow­er in the once-a-decade cen­sus and redis­trict­ing process that became a Repub­li­can hyper-par­ti­san ger­ry­man­der­ing bonan­za. In 2011, they bought out a com­peti­tor, i360, merged it with Themis and kept the i360 name. Yep, the Koch Broth­ers’ i360 com­pa­ny that accu­mu­lat­ed the GOP super-data­base of detailed pro­files on vir­tu­al­ly every US vot­er was ini­tial­ly part of their 2010 state-lev­el super-ger­ry­man­der­ing schemes. It’s an indi­ca­tion of the lev­el of per­son­al gran­u­lar­i­ty they were using in draw­ing the GOP-ger­ry­man­dered maps. High­ly per­son­al pro­files on almost all Amer­i­cans. It’s not hard to imag­ine that would be use­ful for draw­ing par­ti­san dis­trict lines and, sure enough, it looks like that’s what the Repub­li­cans were doing in 2010, which is just a pre­lude for what’s in store for 2020.

    And as the article also described, this detailed personalized i360 database of every voter allowed the Republicans to manage their messaging during a different public health crisis in a very effective manner: the reelection of Ohio Republican Senator Rob Portman, who represents a state that’s been heavily hit by the opioid and heroin crises. Those intertwined public health crises were top concerns for voters but also very polarizing issues. Some voters wanted to see them treated as medical and public health issues and others wanted to see a more traditional criminal approach to the problems. So the Republicans used their i360 database to guess which voters would prefer Senator Portman treat the opioid and heroin crises as medical and public health problems and sent them messages about how Senator Portman wants to treat the problem as a public health and medical issue. Voters who would prefer Portman rely on the criminal justice system to deal with the opioid and heroin crises were delivered messages about his insistence that the criminal justice system be part of the solution. That was how they treated one of the key issues of the race: just determining what the voter wanted to hear and saying that. The trick was accurately predicting what voters wanted to hear, and it sounds like they did a good job. They built a “Heroin Model” and a “Heroin Treatment Model” as part of an overall persuasion model to help craft messages for voters, trying to predict each Ohio voter’s views on how the opioid epidemic should be handled and whether or not they were impacted by it personally. The Kochs’ operation is described as better than what the Democratic or Republican parties can do on their own, and the results of Portman’s race would support that assessment. Portman started out the race 9 points behind his Democratic opponent and ended up winning with 58 percent of the vote.
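
    To make the mechanics concrete, here’s a minimal sketch of what that kind of two-model message routing can look like in code. Everything in it is hypothetical: the synthetic data, features, labels, and message names are invented for illustration and are not i360’s actual system. The point is just how simply an ‘affected by the issue?’ classifier can be chained to a ‘preferred framing?’ classifier to decide which version of a candidate’s ‘position’ each voter sees.

    ```python
    """Minimal sketch of two-model message routing (hypothetical).

    The data is synthetic and the feature/label names are invented; in a
    real operation the labels would come from polling a voter sample and
    the features from a voter file.
    """
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in voter file: 1,000 voters, 20 attribute scores in the 1-10 style.
    X = rng.integers(1, 11, size=(1000, 20)).astype(float)

    # Synthetic training labels (stand-ins for polling results).
    y_affected = (X[:, 0] + rng.normal(0, 2, 1000) > 6).astype(int)
    y_health_framing = (X[:, 1] + rng.normal(0, 2, 1000) > 5).astype(int)

    # Model 1: was this voter likely affected by the epidemic?
    affected_model = LogisticRegression().fit(X, y_affected)
    # Model 2: do they prefer a healthcare or a criminal-justice framing?
    framing_model = LogisticRegression().fit(X, y_health_framing)

    def pick_message(voter_row: np.ndarray) -> str:
        """Route one voter to the message variant they're predicted to prefer."""
        row = voter_row.reshape(1, -1)
        if not affected_model.predict(row)[0]:
            return "generic record-of-accomplishment ad"
        if framing_model.predict(row)[0]:
            return "public-health/treatment ad"
        return "criminal-justice ad"

    print(pick_message(X[0]))
    ```

    The unsettling part isn’t the code, which is trivial. It’s the training data, which is why a profile with hundreds of fields on nearly every voter matters so much.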

    So if the COVID-19 coronavirus ends up becoming a major issue in the 2020 election, it seems likely that the Trump team and the Republican Party are going to want to deliver an “it’s all a hoax and everything is fine” message to the Republican base audience and a different “we are taking this very seriously and are very competent at handling it” message to everyone else. And they’ll be able to largely deliver those messages in a very targeted manner because they have a profile on everyone. That’s one of the Koch freebies for Republicans. It’s not quite free, but i360 is subsidized and a money-losing operation according to the article. It’s about giving Republican candidates and right-wing groups access to a very detailed database on all American voters. So if the COVID-19 situation gets kind of nuts and the Trump team gets desperate and decides, for example, to micro-target Republican evangelicals with End Times memes, keep in mind that they have the micro-targeting infrastructure set up to do that. That’s what they used for Senator Portman in Ohio in 2016 to send very different messages about his stances on the opioid and heroin crises, and that micro-targeting infrastructure is only going to be more sophisticated in 2020. We already know Trump and the Republicans are willing to be insanely irresponsible when it comes to what they are saying about the coronavirus situation, so it’s just a matter of how irresponsible they’ll ultimately get. And if they want to get really irresponsible in a micro-targeted manner, they can do that:

    Salon

    Koch broth­ers are watch­ing you: And new doc­u­ments reveal just how much they know
    Bil­lion­aire broth­ers have built per­son­al­i­ty pro­files of most Amer­i­cans, and use them to push right-wing pro­pa­gan­da

    Calvin Sloan
    Novem­ber 5, 2018 12:00PM (UTC)

    New doc­u­ments uncov­ered by the Cen­ter for Media and Democ­ra­cy show that the bil­lion­aire Koch broth­ers have devel­oped detailed per­son­al­i­ty pro­files on 89 per­cent of the U.S. pop­u­la­tion; and are using those pro­files to launch an unprece­dent­ed pri­vate pro­pa­gan­da offen­sive to advance Repub­li­can can­di­dates in the 2018 midterms.

    The doc­u­ments also show that the Kochs have devel­oped per­sua­sion mod­els — like their “Hero­in Mod­el” and “Hero­in Treat­ment Mod­el” — that tar­get vot­ers with tai­lored mes­sag­ing on select issues, and part­ner with cable and satel­lite TV providers to play those tai­lored mes­sages dur­ing “reg­u­lar” tele­vi­sion broad­casts.

    Over the last decade, big data and micro­tar­get­ing have rev­o­lu­tion­ized polit­i­cal com­mu­ni­ca­tions. And the Kochs, who are col­lec­tive­ly worth $120 bil­lion, now stand at the fore­front of that rev­o­lu­tion — invest­ing bil­lions in data aggre­ga­tion, machine learn­ing, soft­ware engi­neer­ing and Arti­fi­cial Intel­li­gence opti­miza­tion.

    In mod­ern elec­tions, incor­po­rat­ing AI into vot­er file main­te­nance has become a pre­req­ui­site to pro­duc­ing reli­able data. The Kochs’ polit­i­cal data firm, i360 states that it has “been prac­tic­ing AI for years. Our team of data sci­en­tists uses com­po­nents of Machine learn­ing, Deep Learn­ing and Pre­dic­tive Ana­lyt­ics, every day as they build and refine our pre­dic­tive mod­els.”

    Thanks to that invest­ment (and the Supreme Court’s cam­paign finance rul­ings that opened the flood­gates for super PACs), the Koch net­work is bet­ter posi­tioned than either the Demo­c­ra­t­ic Par­ty or the GOP to reach vot­ers with their indi­vid­u­al­ly tai­lored com­mu­ni­ca­tions.

    That is a dan­ger­ous devel­op­ment, with poten­tial­ly dra­mat­ic con­se­quences for our democ­ra­cy.

    The Kochs and i360

    The Kochs for­mal­ly entered the data space nine years ago, devel­op­ing the “Themis Trust” pro­gram for the 2010 midterms — an uncom­mon­ly impact­ful elec­tion cycle where Repub­li­can oper­a­tives exe­cut­ed their REDMAP pro­gram and algo­rith­mi­cal­ly ger­ry­man­dered con­gres­sion­al maps across the coun­try in their favor.

    In 2011, the Kochs fold­ed Themis into a data com­peti­tor it acquired, i360 LLC, which was found­ed by Michael Palmer, the for­mer chief tech­nol­o­gy offi­cer of Sen. John McCain’s 2008 pres­i­den­tial cam­paign. Palmer still leads the orga­ni­za­tion.

    Back then, as jour­nal­ists Ken­neth Vogel and Mike Allen doc­u­ment­ed, the Kochs’ long-term fund­ing com­mit­ments to i360 allowed the orga­ni­za­tion to think big­ger than their polit­i­cal com­peti­tors.

    “Right now, we’re talk­ing about and build­ing things that you won’t see in 2016, because it’s not going to be ready until 2018,” Michael Palmer said in the wake of the 2014 midterm cycle.

    Those pro­grams are now oper­a­tional. And accord­ing to a suc­cess­ful GOP cam­paign man­ag­er, i360 is the “best in the busi­ness” at pro­vid­ing Repub­li­cans with vot­er data.

    i360’s client list reflects that data supe­ri­or­i­ty. The country’s most noto­ri­ous and effec­tive polit­i­cal spenders, like the Nation­al Rifle Asso­ci­a­tion, use the plat­form to iden­ti­fy and influ­ence vot­ers, as do Repub­li­can par­ty com­mit­tees, and U.S. House and Sen­ate cam­paigns.


    i360 sweet­ens the deal to its clients by offer­ing its ser­vices at below-mar­ket rates. And once clients are locked into the i360 plat­form, they have access to the company’s vot­er file — the beat­ing heart of mod­ern polit­i­cal cam­paigns.

    Con­ser­v­a­tives agree that the Kochs are sub­si­diz­ing i360. The loss­es they sus­tain by under­charg­ing clients, how­ev­er, are a pit­tance com­pared to the down-stream pub­lic pol­i­cy returns and polit­i­cal pow­er the Kochs receive from oper­at­ing what amounts to a shad­ow polit­i­cal par­ty in the Unit­ed States — one that vig­i­lant­ly guards the fos­sil fuel sub­si­dies, dereg­u­la­to­ry schemes, and regres­sive tax struc­tures that enable Koch Indus­tries to bring in $115 bil­lion annu­al­ly in pri­vate rev­enue.

    Inside the i360 Vot­er File

    i360’s vot­er file iden­ti­fies “more than 199 mil­lion active vot­ers and 290 mil­lion U.S. con­sumers,” and pro­vides its users with up to 1,800 unique data points on each iden­ti­fied indi­vid­ual.

    As a result, i360 and the Kochs know your vitals, eth­nic­i­ty, reli­gion, occu­pa­tion, hob­bies, shop­ping habits, polit­i­cal lean­ings, finan­cial assets, mar­i­tal sta­tus and much more.

    They know if you enjoy fish­ing — and if you do, whether you pre­fer salt or fresh water. They know if you have blad­der con­trol dif­fi­cul­ty, get migraines or have osteo­poro­sis. They know which adver­tis­ing medi­ums (radio, TV, inter­net, email) are the most effec­tive. For you.

    i360 has the fol­low­ing attribute tags, among hun­dreds of oth­ers, ranked 1–10, or sub­di­vid­ed oth­er­wise in their vot­er file.

    [screenshots of i360 attribute tag examples omitted]

    ...

    But i360 attribute codes are not lim­it­ed to that 1–10 scale. Their knowl­edge of your finan­cial stand­ing is gran­u­lar, from how much equi­ty you have in your home to your net wealth and expend­able income.


    They know where you live, what your mort­gage sta­tus is and even how many bath­rooms are in your house.


    i360 has also cre­at­ed a set of 70 “clus­ter­codes” to human­ize its data for cam­paign oper­a­tives. These cat­e­gories range from “Fad­ed Blue Col­lars” to “Mean­der­ing Mil­len­ni­als,” and have flam­boy­ant descrip­tions that cor­re­spond with their attribute head­ings.

    Here are some exam­ples:

    [screenshots of i360 clustercode descriptions omitted]

    Koch Per­sua­sion Mod­els

    Addi­tion­al­ly, i360 has devel­oped a series of per­sua­sion mod­els for its vot­er file. These mod­els are often region­al­ly sen­si­tive — since vot­ers have region­al con­cerns — and are being used in fed­er­al elec­tions and down-bal­lot races to assist Repub­li­cans across the coun­try.

    In 2016, i360 cre­at­ed a set of region­al mod­els while work­ing with Sen. Rob Portman’s 2016 re-elec­tion cam­paign in Ohio. Port­man start­ed out the race polling nine points behind his Demo­c­ra­t­ic oppo­nent, Gov. Ted Strick­land, but ulti­mate­ly won with 58 per­cent of the vote.

    The com­pa­ny devel­oped a mod­el that could pre­dict whether a vot­er sup­port­ed Port­man or Strick­land with 89 per­cent accu­ra­cy, and oth­ers that pre­dict­ed vot­er pol­i­cy pref­er­ences. Well aware of the 2016 land­scape, i360 also made a Trump/Clinton mod­el, an Anti-Hillary mod­el, and a Tick­et Split­ter mod­el.

    Much of i360’s suc­cess in the race, how­ev­er, was linked to under­stand­ing (after con­duct­ing exten­sive polling) that a “key local issue fac­ing Ohio was the opi­oid epi­dem­ic.” In response, the com­pa­ny cre­at­ed a “hero­in mod­el” and a “hero­in treat­ment mod­el” that were par­tic­u­lar­ly effec­tive at con­vinc­ing vot­ers to sup­port Port­man.


    When describ­ing how they employed their “hero­in mod­el,” i360 was clear that Portman’s “posi­tion” on the cri­sis depend­ed on the vot­er, empha­siz­ing health care solu­tion com­mu­ni­ca­tions for some, and crim­i­nal jus­tice solu­tion com­mu­ni­ca­tions for oth­ers.

    Here is i360 on the sub­ject:

    the issue of opi­oid abuse was par­tic­u­lar­ly com­plex in that it was rel­a­tive­ly unknown whether it was con­sid­ered a health­care issue or a crim­i­nal jus­tice issue. The answer to this would dic­tate the most effec­tive mes­sag­ing. In addi­tion, this was a par­tic­u­lar­ly per­son­al issue affect­ing some vot­ers and not oth­ers.

    By leveraging two predictive models — the Heroin model identifying those constituents most likely to have been affected by the issue of opioid abuse and the Heroin Treatment model determining whether those individuals were more likely to view the issue as one of healthcare or of criminal justice — the campaign was able to effectively craft their messaging about Senator Portman’s extensive work in the Senate to be tailored to each individual according to their disposition on the topic.

    This manip­u­la­tion of the opi­oid cri­sis for polit­i­cal gain has a per­verse irony giv­en the Kochs’ long-run­ning work to pro­vide cor­po­rate inter­ests, includ­ing health care and phar­ma­ceu­ti­cal inter­ests, with undue polit­i­cal pow­er and influ­ence over pub­lic pol­i­cy deci­sions. The Kochs have gift­ed over a mil­lion dol­lars to ALEC, for exam­ple, an orga­ni­za­tion that counts Pur­due Phar­ma — the uncon­scionable man­u­fac­tur­er of Oxy­Con­tin — as a mem­ber.

    The com­pa­ny also stat­ed it joined Portman’s cam­paign 21 months before the elec­tion, and that, “Togeth­er, i360 and the cam­paign strate­gized a plan to exe­cute one of the most cus­tom-tar­get­ed, inte­grat­ed cam­paigns to date with a focus on get­ting the right mes­sage to the right vot­er wher­ev­er that might be.”

    This is notable because dur­ing the 2016 elec­tion, i360 also ran $11.7 mil­lion worth of “inde­pen­dent” expen­di­tures for the Nation­al Rifle Asso­ci­a­tion Polit­i­cal Vic­to­ry Fund, Free­dom Part­ners Action Fund, and Amer­i­cans for Pros­per­i­ty in Portman’s race.

    These out­side spenders, two of which are Koch-fund­ed groups, and Portman’s cam­paign all used i360 to coor­di­nate their dig­i­tal mar­ket­ing, phone banks and tele­vi­sion ad buys, in the same mar­ket, in the same elec­tion.

    Addi­tion­al­ly, i360 sup­plied Portman’s cam­paign with oth­er issue-based mod­els on gun con­trol, gay mar­riage and abor­tion that the com­pa­ny con­tin­ues to sup­ply to its clients in 2018.

    Here are some exam­ples of i360’s issue-based mod­els:

    [screenshots of i360 issue-based models omitted]

    The list goes on, but the struc­ture stays the same. The Kochs are tai­lor­ing their adver­tis­ing to you, because they know near­ly every­thing about you.

    ————

    “Koch broth­ers are watch­ing you: And new doc­u­ments reveal just how much they know” by Calvin Sloan; Salon; 11/05/2018

    “Thanks to that invest­ment (and the Supreme Court’s cam­paign finance rul­ings that opened the flood­gates for super PACs), the Koch net­work is bet­ter posi­tioned than either the Demo­c­ra­t­ic Par­ty or the GOP to reach vot­ers with their indi­vid­u­al­ly tai­lored com­mu­ni­ca­tions.”

    Better than the Democrats or Republicans. That was the status of the Kochs’ i360 voter micro-targeting capabilities as of the 2018 mid-terms: up to 1,800 unique data points on 199 million active voters and 290 million US consumers. That’s almost everyone, and it includes granular data like your mortgage status. They certainly aren’t unique in aggregating this kind of data, but they were apparently the best at it. And it’s a sure bet that whatever the Trump team builds will include the Kochs’ i360 data (a toy sketch of what a single voter-file record might look like follows this excerpt):

    ...
    Inside the i360 Vot­er File

    i360’s vot­er file iden­ti­fies “more than 199 mil­lion active vot­ers and 290 mil­lion U.S. con­sumers,” and pro­vides its users with up to 1,800 unique data points on each iden­ti­fied indi­vid­ual.

    As a result, i360 and the Kochs know your vitals, eth­nic­i­ty, reli­gion, occu­pa­tion, hob­bies, shop­ping habits, polit­i­cal lean­ings, finan­cial assets, mar­i­tal sta­tus and much more.

    They know if you enjoy fish­ing — and if you do, whether you pre­fer salt or fresh water. They know if you have blad­der con­trol dif­fi­cul­ty, get migraines or have osteo­poro­sis. They know which adver­tis­ing medi­ums (radio, TV, inter­net, email) are the most effec­tive. For you.

    i360 has the fol­low­ing attribute tags, among hun­dreds of oth­ers, ranked 1–10, or sub­di­vid­ed oth­er­wise in their vot­er file.

    [screenshots of i360 attribute tag examples omitted]

    ...

    But i360 attribute codes are not lim­it­ed to that 1–10 scale. Their knowl­edge of your finan­cial stand­ing is gran­u­lar, from how much equi­ty you have in your home to your net wealth and expend­able income.


    They know where you live, what your mort­gage sta­tus is and even how many bath­rooms are in your house.


    i360 has also cre­at­ed a set of 70 “clus­ter­codes” to human­ize its data for cam­paign oper­a­tives. These cat­e­gories range from “Fad­ed Blue Col­lars” to “Mean­der­ing Mil­len­ni­als,” and have flam­boy­ant descrip­tions that cor­re­spond with their attribute head­ings.
    ...
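
    To give a sense of the shape of this kind of voter file, here is a hypothetical sketch of a single record. The field names, tags, and values are all invented for illustration; they only mirror the kinds of attributes the article describes (1–10 attribute tags, granular financials, and the human-readable ‘clustercodes’).

    ```python
    """Hypothetical shape of one voter-file record.

    Every field name and value here is invented for illustration; this
    is not i360's schema, just a sketch of the kinds of attributes the
    article describes.
    """
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class VoterRecord:
        voter_id: str
        # Attribute tags scored 1-10 (e.g. issue leanings, hobbies)
        attribute_tags: Dict[str, int] = field(default_factory=dict)
        # Granular financials (e.g. home equity, expendable income)
        financials: Dict[str, float] = field(default_factory=dict)
        # One of ~70 human-readable segment labels
        clustercode: str = ""
        # Predicted most effective channel for reaching this person
        best_channel: str = "email"

    record = VoterRecord(
        voter_id="OH-0000001",
        attribute_tags={"second_amendment": 8, "fishing_saltwater": 2},
        financials={"home_equity": 55000.0, "expendable_income": 400.0},
        clustercode="Faded Blue Collars",
        best_channel="tv",
    )
    print(record.clustercode)
    ```

    Multiply something like that by 290 million people and up to 1,800 fields each, and you have the kind of asset the article is describing.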

    And in 2016, we got to see the i360 system go to work for Rob Portman’s US senate reelection bid in Ohio, where the opioid crisis was a key regional issue in the race. They developed models to predict whether individual voters would prefer to hear that Rob Portman supported health care solutions to the epidemic or criminal justice solutions, and delivered those personalized messages. When i360 started with Portman he was 9 points down in the polls, and he ended with 58 percent of the vote. It’s hard to say how much of that shift was due to the i360 modeling, but it’s hard to imagine it didn’t help Portman:

    ...
    In 2016, i360 cre­at­ed a set of region­al mod­els while work­ing with Sen. Rob Portman’s 2016 re-elec­tion cam­paign in Ohio. Port­man start­ed out the race polling nine points behind his Demo­c­ra­t­ic oppo­nent, Gov. Ted Strick­land, but ulti­mate­ly won with 58 per­cent of the vote.

    The com­pa­ny devel­oped a mod­el that could pre­dict whether a vot­er sup­port­ed Port­man or Strick­land with 89 per­cent accu­ra­cy, and oth­ers that pre­dict­ed vot­er pol­i­cy pref­er­ences. Well aware of the 2016 land­scape, i360 also made a Trump/Clinton mod­el, an Anti-Hillary mod­el, and a Tick­et Split­ter mod­el.

    Much of i360’s suc­cess in the race, how­ev­er, was linked to under­stand­ing (after con­duct­ing exten­sive polling) that a “key local issue fac­ing Ohio was the opi­oid epi­dem­ic.” In response, the com­pa­ny cre­at­ed a “hero­in mod­el” and a “hero­in treat­ment mod­el” that were par­tic­u­lar­ly effec­tive at con­vinc­ing vot­ers to sup­port Port­man.


    When describ­ing how they employed their “hero­in mod­el,” i360 was clear that Portman’s “posi­tion” on the cri­sis depend­ed on the vot­er, empha­siz­ing health care solu­tion com­mu­ni­ca­tions for some, and crim­i­nal jus­tice solu­tion com­mu­ni­ca­tions for oth­ers.

    Here is i360 on the sub­ject:

    the issue of opi­oid abuse was par­tic­u­lar­ly com­plex in that it was rel­a­tive­ly unknown whether it was con­sid­ered a health­care issue or a crim­i­nal jus­tice issue. The answer to this would dic­tate the most effec­tive mes­sag­ing. In addi­tion, this was a par­tic­u­lar­ly per­son­al issue affect­ing some vot­ers and not oth­ers.

    By leveraging two predictive models — the Heroin model identifying those constituents most likely to have been affected by the issue of opioid abuse and the Heroin Treatment model determining whether those individuals were more likely to view the issue as one of healthcare or of criminal justice — the campaign was able to effectively craft their messaging about Senator Portman’s extensive work in the Senate to be tailored to each individual according to their disposition on the topic.

    This manip­u­la­tion of the opi­oid cri­sis for polit­i­cal gain has a per­verse irony giv­en the Kochs’ long-run­ning work to pro­vide cor­po­rate inter­ests, includ­ing health care and phar­ma­ceu­ti­cal inter­ests, with undue polit­i­cal pow­er and influ­ence over pub­lic pol­i­cy deci­sions. The Kochs have gift­ed over a mil­lion dol­lars to ALEC, for exam­ple, an orga­ni­za­tion that counts Pur­due Phar­ma — the uncon­scionable man­u­fac­tur­er of Oxy­Con­tin — as a mem­ber.
    ...

    The Heroin model and Heroin Treatment model: a big part of Senator Portman’s successful come-from-behind messaging campaign devised by i360. Might there be a COVID-19 model being devised by i360 right now? That seems like a near certainty. We’ll probably find out after the election about elaborate COVID-19 models that involve all sorts of parameters. Like whether or not someone caught the virus themselves or had a household member who did. Or people with family members who died from it. There’s probably going to be all sorts of modeling involving the virus and how hard it hits various communities.

    And while it’s unclear where the Trump team would get its hands on information like who had COVID-19, keep in mind that we are living in the golden age of commercial personal data-brokerage databases. So making inferences about people from the meta-data about them is easier than ever. For example, the commercially available smartphone-based location information sold by cellphone providers might alone give campaigns enough information to make educated guesses about who got the coronavirus. Something like looking for people whose smartphone location information suddenly shows movement only between their bedroom and bathroom for a week. If you cross-referenced that with reports of COVID-19 outbreaks, you would have a good shot at guessing who got the COVID-19 illness and probably wants to hear a very different message about the COVID-19 outbreak response than someone who has yet to face the outbreak. Who knows what information source they ultimately use. The point is that there are so many to choose from that it’s just a matter of time before they find a source for what they’re looking for. It’s one of the features of the Information Age so far: an explosion of information captured and packaged for commercial sale without the public really realizing it, which means there’s a good chance whatever information you’re looking for is for sale somewhere. It’s the End of Privacy in the form of a giant information marketplace.
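
    Here’s a toy sketch of the kind of inference being described, assuming a hypothetical commercial location feed that reports how many distinct places a device visits each day. The data, thresholds, and column names are all invented; the point is only that flagging a week-long homebound streak inside a county with reported cases takes a few lines of analysis, not a surveillance agency.

    ```python
    """Toy sketch: guessing illness from location meta-data (hypothetical).

    All data, column names, and thresholds are invented for illustration.
    """
    import pandas as pd

    # Hypothetical commercial location feed: one row per device per day,
    # with the number of distinct places the device visited that day.
    pings = pd.DataFrame({
        "device_id": ["a"] * 14 + ["b"] * 14,
        "day": list(range(14)) * 2,
        "distinct_places": [5, 6, 4, 5, 1, 1, 1, 1, 1, 1, 1, 4, 5, 6,
                            5, 4, 6, 5, 5, 4, 6, 5, 4, 5, 6, 4, 5, 6],
        "county": ["outbreak_county"] * 14 + ["quiet_county"] * 14,
    })

    outbreak_counties = {"outbreak_county"}  # from public case reports

    def homebound_streak(places_per_day: pd.Series) -> int:
        """Longest run of days where the device stayed in one place."""
        streak = best = 0
        for v in places_per_day:
            streak = streak + 1 if v <= 1 else 0
            best = max(best, streak)
        return best

    flags = (
        pings.sort_values("day")
             .groupby(["device_id", "county"])["distinct_places"]
             .apply(homebound_streak)
             .reset_index(name="max_homebound_days")
    )
    # Educated guess: a week-long homebound streak inside an outbreak county.
    flags["possible_case"] = (flags["max_homebound_days"] >= 7) & (
        flags["county"].isin(outbreak_counties)
    )
    print(flags)
    ```

    Whether any campaign does exactly this is unknown. The point is that the raw material for this kind of guesswork is commercially available.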

    It’s all pretty impressive if you ignore the destruction: Trump’s maelstrom of disinformation surrounding the COVID-19 virus is set up to be turbocharged with the kind of sophisticated micro-targeting campaign that can figure out what each voter wants to hear and deliver that message to them, powered by that commercial marketplace of aggregated personal profiles on virtually every American. There will presumably be Cambridge Analytica-style psychographic profiling too. Whatever helps change minds and voting behavior. And those Trump campaign models will attempt to predict, for every US voter, whether they want to hear sobering messages about remaining calm or right-wing rants about how the deep state is trying to make Trump look bad. It’s all a reminder that while part of the hurricane of disinformation we’re going to experience coming out of the Trump administration over the COVID-19 virus will be due to Trump’s own personal inclination to lie constantly and almost pathologically, another part of that hurricane of disinformation will be coming from cutting-edge micro-targeting operations financed in large part by the Kochs.

    Posted by Pterrafractyl | March 2, 2020, 12:32 am
  38. Here’s just a quick followup on the legal repercussions of the Cambridge Analytica scandal: the company’s former director, Alexander Nix, was just issued his punishment from the UK government for his role as CEO of Cambridge Analytica. The ruling was made by the UK’s Insolvency Service, which cited a number of violations by Cambridge Analytica including “bribery or honey trap stings, voter disengagement campaigns, obtaining information to discredit political opponents and spreading information anonymously in political campaigns.” And the ruling wasn’t limited to Cambridge Analytica: Cambridge Analytica’s parent company, SCL Elections, “repeatedly offered shady political services to potential clients over a number of years,” according to the report.

    The report unfortunately doesn’t give examples of those specific charges, although that certainly sounds like a representative list of what we’ve already learned about the kinds of shady services the company offered clients. Services that Nix himself explicitly laid out in the now notorious undercover video with a journalist, where he bragged about tactics like hiring Ukrainian sex workers to discredit a client’s political opponent.

    So what was Nix’s punishment for directing a company that routinely offered shady services to potential clients for years? Nix is disqualified from running a company for seven years, until October 2027. That’s it:

    The Dai­ly Beast

    Trump Data Guru Offi­cial­ly Dis­qual­i­fied Over ‘Shady’ Cam­paign Tac­tics

    It’s the final humil­i­a­tion for the for­mer Cam­bridge Ana­lyt­i­ca boss whose fall from grace began when he was caught brag­ging that his com­pa­ny was to thank for Pres­i­dent Trump.

    Jamie Ross
    Reporter
    Updat­ed Sep. 24, 2020 5:03PM ET
    Pub­lished Sep. 24, 2020 11:01AM ET

    LONDON—Alexander Nix, the man who was run­ning Cam­bridge Ana­lyt­i­ca when it har­vest­ed the Face­book data of tens of mil­lions of vot­ers with­out their knowl­edge so it could be exploit­ed by the Trump 2016 cam­paign, has been banned from direct­ing any com­pa­nies for sev­en years.

    The now-defunct Cam­bridge Ana­lyt­i­ca was a U.K. dig­i­tal black-ops firm that col­lapsed in 2018 fol­low­ing rev­e­la­tions that it secret­ly col­lect­ed Face­book pro­file infor­ma­tion on 87 mil­lion peo­ple. The Dai­ly Beast revealed two years ago that Team Trump used audi­ence lists cre­at­ed by Cam­bridge Ana­lyt­i­ca to tar­get “dark ads” on Face­book dur­ing the final months of the 2016 cam­paign and until Trump’s inau­gu­ra­tion.

    Nix gained noto­ri­ety as the face of Cam­bridge Ana­lyt­i­ca when he inad­ver­tent­ly revealed the shock­ing extent of its dubi­ous oper­a­tions. The company’s for­mer chief exec­u­tive was secret­ly record­ed by Britain’s Chan­nel 4 blab­bing about his fir­m’s work for Trump and effec­tive­ly claim­ing that Cam­bridge Ana­lyt­i­ca was to thank for Trump becom­ing pres­i­dent.

    Nix said in the secret record­ing, “We did all the research, all the data, all the ana­lyt­ics, all the tar­get­ing, we ran all the dig­i­tal cam­paign, the tele­vi­sion cam­paign and our data informed all the strat­e­gy.”

    The footage, in which he bragged about apparently illegal campaign tactics used on jobs in other parts of the world, was the beginning of his and Cambridge Analytica’s swift and spectacular downfall. Nix was suspended as CEO when the tapes were broadcast in March 2018, and the company collapsed in May that year. It was thereafter forced into compulsory liquidation in April 2019.

    Now, Nix has been slapped with a new pun­ish­ment that will pre­vent him from direct­ing any com­pa­nies until Octo­ber 2027.

    The British government’s Insol­ven­cy Ser­vice con­firmed Thurs­day that Nix will be “dis­qual­i­fied for sev­en years from act­ing as a direc­tor or direct­ly or indi­rect­ly becom­ing involved, with­out the per­mis­sion of the court, in the pro­mo­tion, for­ma­tion or man­age­ment of a com­pa­ny.”

    In the state­ment, the gov­ern­ment agency con­demned Nix for allow­ing Cam­bridge Ana­lyt­i­ca to car­ry out what it called “uneth­i­cal ser­vices,” which it said includ­ed “bribery or hon­ey trap stings, vot­er dis­en­gage­ment cam­paigns, obtain­ing infor­ma­tion to dis­cred­it polit­i­cal oppo­nents and spread­ing infor­ma­tion anony­mous­ly in polit­i­cal cam­paigns.”

    Mark Bruce, the chief inves­ti­ga­tor for the Insol­ven­cy Ser­vice, said that Cam­bridge Analytica’s par­ent com­pa­ny, SCL Elec­tions, “repeat­ed­ly offered shady polit­i­cal ser­vices to poten­tial clients over a num­ber of years.”

    The chief inves­ti­ga­tor went on to say in his state­ment: “Alexan­der Nix’s actions did not meet the appro­pri­ate stan­dard for a com­pa­ny direc­tor and his dis­qual­i­fi­ca­tion from man­ag­ing lim­it­ed com­pa­nies for a sig­nif­i­cant amount of time is jus­ti­fied in the pub­lic inter­est.”

    ...

    Britain’s puni­tive action against Nix comes near­ly a year after the U.S. Fed­er­al Trade Com­mis­sion came to a set­tle­ment with him and Alek­san­dr Kogan, who devel­oped the app which allowed Cam­bridge Ana­lyt­i­ca to har­vest the per­son­al infor­ma­tion of mil­lions of Amer­i­cans.

    ————-

    “Trump Data Guru Offi­cial­ly Dis­qual­i­fied Over ‘Shady’ Cam­paign Tac­tics” by Jamie Ross; The Dai­ly Beast; 09/24/2020

    “The chief inves­ti­ga­tor went on to say in his state­ment: “Alexan­der Nix’s actions did not meet the appro­pri­ate stan­dard for a com­pa­ny direc­tor and his dis­qual­i­fi­ca­tion from man­ag­ing lim­it­ed com­pa­nies for a sig­nif­i­cant amount of time is jus­ti­fied in the pub­lic inter­est.””

    Yes, the disqualification of Nix from managing limited companies for a significant amount of time is indeed justified in the public interest. Assuming the punishment was meaningful enough to actually dissuade others from running unethical mass public propaganda outfits.

    Unless, of course, it’s such a lenient sentence that it acts as an incentive for everyone else working in this shady industry to continue doing so with impunity. And that raises the question: what about all of those Cambridge Analytica spin-offs like Emerdata? Or companies founded by former Cambridge Analytica employees, like Data Propria, which has been working with Brad Parscale on the Trump reelection campaign? What are the many remnants of Cambridge Analytica up to, and what about all of that data? How many entities around the world possess the Cambridge Analytica data trove, and what are they doing with that data today? These are the kinds of questions that need to be answered when attempting to assess whether or not Nix’s punishment was in any way a deterrent to future crimes of this nature. Questions that were posed to former Cambridge Analytica CEO and Emerdata founder Julian Wheatland in the following Fast Company article from July 2019. And if Wheatland’s implausible denials are any indication of whether or not that data is still being used in secret, some powerful deterrents are very necessary:

    Fast Com­pa­ny

    The strange after­life of Cam­bridge Ana­lyt­i­ca and the mys­te­ri­ous fate of its data
    As the U.S. announces a law­suit against Cam­bridge Ana­lyt­i­ca, the Mer­cer-con­trolled Emer­da­ta dis­clos­es that it now owns the dis­graced Trump data firm and its par­ent com­pa­ny.

    By Jesse Witt and Alex Paster­nack
    07/26/2019

    When the Fed­er­al Trade Com­mis­sion said on Wednes­day it would impose a long-await­ed $5 bil­lion penal­ty against Face­book and sue Cam­bridge Ana­lyt­i­ca, it appeared to be clos­ing anoth­er chap­ter on the dis­graced data firm that paid for data that had been scraped from 87 mil­lion unwit­ting social media users.

    After a series of bomb­shell rev­e­la­tions last spring—depicted in a dizzy­ing new doc­u­men­tary, The Great Hack—the Trump cam­paign con­trac­tor head­ed for bank­rupt­cy courts on both sides of the Atlantic, and its for­mer exec­u­tives went dark, leav­ing behind a trail of unan­swered ques­tions. For instance, what would hap­pen to the com­pa­nies, their employ­ees, and their assets?

    Julian Wheat­land, the for­mer CEO of Cam­bridge Ana­lyt­i­ca and for­mer direc­tor of a num­ber of SCL-con­nect­ed firms, told Fast Com­pa­ny this week that there were no plans to revive the com­pa­nies. “I’m pret­ty sure nobody’s think­ing of try­ing to start it up again under a dif­fer­ent guise,” he said.

    But the com­pa­nies’ fate—and the lega­cy of their exper­tise and troves of data—remains murky. A cor­po­rate fil­ing released this week, along with doc­u­ments and inter­views with for­mer exec­u­tives and employ­ees, pro­vide new insight into how a shift­ing, elu­sive group of direc­tors and own­ers have strate­gi­cal­ly man­aged the unwind­ing of Cam­bridge and oth­er firms con­nect­ed to its Lon­don-based par­ent SCL Group as they face bank­rupt­cy pro­ceed­ings, inves­ti­ga­tions, and law­suits.

    Last sum­mer, as some for­mer employ­ees of Cam­bridge and SCL scat­tered to a hand­ful of suc­ces­sor firms, the com­pa­nies were ful­ly acquired by a hold­ing com­pa­ny called Emer­da­ta Lim­it­ed. The com­pa­ny was incor­po­rat­ed in the U.K. in 2017 by for­mer direc­tors of Cam­bridge Ana­lyt­i­ca and mem­bers of the Mer­cer fam­i­ly, who pro­vid­ed the ini­tial fund­ing for Ana­lyt­i­ca four years ear­li­er. Describ­ing its busi­ness as “data pro­cess­ing, host­ing, and relat­ed activ­i­ties,” Emer­da­ta acquired most of the SCL com­pa­nies pri­or to their bank­rupt­cies, Wheat­land said. The pur­pose was to bring them under a sin­gle own­er­ship struc­ture for the pur­pose of refi­nanc­ing them, he said.

    Since last year, Emer­da­ta has been foot­ing the SCL com­pa­nies’ legal bills amid bank­rupt­cy pro­ceed­ings, inves­ti­ga­tions, and law­suits on both sides of the Atlantic. After both SCL Group and Cam­bridge Ana­lyt­i­ca ceased oper­a­tions and filed for bankruptcy–or, in British par­lance, “went into administration”–in May 2018, Emer­da­ta also paid mil­lions to acquire what remained of the com­pa­nies while they are being liq­ui­dat­ed.

    In an email after this sto­ry was pub­lished, Wheat­land sought to clar­i­fy that the hold­ing com­pa­ny did not seek to acquire the SCL com­pa­nies’ data or oth­er assets, which remain under the con­trol of the admin­is­tra­tors over­see­ing their bank­rupt­cies. “Emer­da­ta did not acquire or retain any assets from the bank­rupt com­pa­nies, and if Emer­da­ta had known in advance that the com­pa­nies were going bank­rupt, would nev­er have acquired them,” he wrote.

    Accord­ing to its lat­est cor­po­rate fil­ing, Emer­da­ta pur­chased 100% of the share cap­i­tal of SCL Group for £10,861,339 GBP, equiv­a­lent to around $13 mil­lion. Emer­da­ta also not­ed its 89.5% own­er­ship of Cam­bridge Ana­lyt­i­ca, just as it report­ed in pre­vi­ous fil­ings.

    A third and final whol­ly owned sub­sidiary list­ed on the doc­u­ment is Anaxi Hold­ings, a gov­ern­ment con­trac­tor that was reg­is­tered in Delaware ten days after the firestorm began to hit the com­pa­nies last March. Emer­da­ta has not acquired SCL Insight Lim­it­ed, anoth­er gov­ern­ment-focused com­pa­ny owned by Nigel Oakes, an SCL Group co-founder.

    David Car­roll, an Amer­i­can pro­fes­sor who sued Cam­bridge Ana­lyt­i­ca to obtain his per­son­al data, has alleged that Emer­da­ta had sur­rep­ti­tious­ly tried to shield the defunct firms from scruti­ny and lia­bil­i­ty. “Obvi­ous­ly, any­thing that you do relat­ed to data pro­tec­tion while the com­pa­ny is active becomes moot if they can just go out of busi­ness as soon as they get caught,” he said.

    An inves­ti­ga­tion into “dis­in­for­ma­tion” led a Par­lia­men­tary com­mit­tee to share Carroll’s con­cerns, as it stat­ed in one of its reports. “The trans­for­ma­tion of Cam­bridge Ana­lyt­i­ca into Emer­da­ta illus­trates how easy it is for dis­cred­it­ed com­pa­nies to rein­vent them­selves and poten­tial­ly use the same data and the same tac­tics to under­mine gov­ern­ments, includ­ing in the UK,” the pan­el wrote. “The indus­try needs clean­ing up.”

    In court, Carroll’s lawyers claimed that Cambridge’s court-appoint­ed admin­is­tra­tors, Crowe U.K. LLP, who were being paid by Emer­da­ta, were act­ing with prej­u­dice against Car­roll when they sought to liq­ui­date assets before a full inves­ti­ga­tion into the com­pa­nies could be con­duct­ed.

    Rather than com­ply with Carroll’s data request last year, Crowe chose instead to sub­ject SCL to a crim­i­nal inquiry by the U.K. Infor­ma­tion Commissioner’s Office, or ICO. In Jan­u­ary, SCL pled guilty to break­ing data laws and was fined a total of $26,000, but was not com­pelled to give Car­roll his data.

    In May, a U.K. court final­ly denied Carroll’s law­suit, paving the way for SCL Group to legal­ly dis­solve with­out turn­ing over his data. Car­roll is now rais­ing mon­ey for an appeal, and await­ing the find­ings of a report by the ICO based on data and pass­words it has seized from SCL. Last June, the reg­u­la­tor said it would “look close­ly at any suc­ces­sor com­pa­ny linked to SCL and Cam­bridge Ana­lyt­i­ca or their direc­tors.”

    Wheat­land, who helped launch Emer­da­ta in 2017 and remains a share­hold­er, dis­put­ed the idea that the hold­ing com­pa­ny was being used to shield the SCL firms from scruti­ny. Car­roll had “zero under­stand­ing” of Emer­da­ta, he said. “It’s all con­spir­a­cy the­o­ry.”

    Con­trary to spec­u­la­tion, Wheat­land said, “there was nev­er any attempt to trans­fer data any­where, either up to Emer­da­ta or to any oth­er orga­ni­za­tion, and any claims that there were are fan­ci­ful.”

    The hold­ing com­pa­ny now appears to be large­ly owned by con­ser­v­a­tive activist Rebekah Mer­cer and her sis­ter Jen­nifer, whose shares are held in trust and via their U.S.-based Cam­bridge Ana­lyt­i­ca Hold­ings LLC, accord­ing to Cambridge’s bank­rupt­cy fil­ing in New York. The remain­ing fifth of Emerdata’s shares are owned by sev­er­al pri­or SCL Group minor­i­ty investors and a num­ber of Hong Kong-based shell com­pa­nies. Inves­tiga­tive jour­nal­ist Wendy Siegel­man first report­ed on Emer­da­ta and its links to SCL last year.

    Emerdata’s newest director, Jacquelyn James-Varga, has also managed accounting over the last few years for the Mercer Family Foundation, the nonprofit run by Rebekah and her megadonor father, Robert. James-Varga formerly served as treasurer for the Mercer-helmed Make America Number 1 political action committee–a major supporter of both Ted Cruz’s and Donald Trump’s presidential candidacies–as well as for Making America Great, a dormant pro-Trump political action committee. Representatives for Emerdata and the Mercer family did not respond to requests for comment.

    Wheat­land declined to dis­cuss the Mer­cers’ involve­ment in the firm, but insist­ed that there were no plans to start a new Cam­bridge Ana­lyt­i­ca. “That was an idea that exist­ed in the media,” he said. Asked if the com­pa­ny would cease oper­at­ing once its sub­sidiaries’ assets are liq­ui­dat­ed, he said, “that would be my expec­ta­tion, but I also don’t think there’s any rush to do it, either.”

    Miss­ing data, poten­tial “claims” against for­mer employ­ees

    Wheat­land, one of Emerdata’s orig­i­nal direc­tors, resigned ear­li­er this year, he said, because the com­pa­ny “stopped doing any­thing.” Two oth­er for­mer Cam­bridge Ana­lyt­i­ca chief exec­u­tive offi­cers, Alexan­der Nix and Alexan­der Tayler, were also for a time list­ed as Emer­da­ta direc­tors, and have also since resigned. Nix was sus­pend­ed by Cam­bridge Analytica’s board last year after Britain’s Chan­nel 4 pub­lished an under­cov­er cam­era inves­ti­ga­tion in which he was caught dis­cussing dirty elec­tion tricks includ­ing black­mail­ing can­di­dates.

    The Finan­cial Times report­ed last June that Nix had appro­pri­at­ed $8 mil­lion from the company’s cof­fers in its final days. Nix told law­mak­ers in Par­lia­ment that month that the report was inac­cu­rate, but declined to com­ment fur­ther on the mat­ter.

    The sev­en-page Emer­da­ta fil­ing describes a com­pa­ny strug­gling with miss­ing data–seized by the British data reg­u­la­tor in its raid on SCL’s Lon­don office–and unco­op­er­a­tive and unnamed for­mer employ­ees, against whom it said it was con­sid­er­ing “claims.”

    “Due to [a U.K. Infor­ma­tion Commissioner’s Office] raid result­ing in com­pa­ny account­ing infor­ma­tion being seized, and the lack of coop­er­a­tion shown by for­mer direc­tors, advi­sors, and employ­ees of the Com­pa­ny (against whom, by virtue of the out­right loss of the Company’s investors’ cap­i­tal, the Com­pa­ny is inves­ti­gat­ing poten­tial claims), these accounts have been pre­pared based on lim­it­ed and incom­plete data,” the doc­u­ment reads.

    Cambridge’s admin­is­tra­tors also lost lap­tops and oth­er data, accord­ing to court doc­u­ments filed as part of Carroll’s law­suit against the com­pa­ny. “Employ­ees have refused to return lap­tops to the Admin­is­tra­tors, and oth­ers have been stolen from the Admin­is­tra­tors’ cus­tody,” Carroll’s lawyers wrote. “Adding to con­cerns that the Cam­bridge Ana­lyt­i­ca busi­ness con­tin­ues to be car­ried-on under anoth­er guise, as [of] Novem­ber 2018, for­mer employ­ees were appar­ent­ly still access­ing its cloud-based infra­struc­ture.”

    Crowe, Cambridge’s admin­is­tra­tors in the U.K., declined to com­ment, cit­ing what a spokesper­son said were “ongo­ing inves­tiga­tive mat­ters relat­ing to the UK Com­pa­nies now in liq­ui­da­tion.”

    Like their assets, the rest of the SCL com­pa­nies’ remain­ing data, said to total 700 ter­abytes, remains under the sole con­trol of Crowe and a U.S. lawyer admin­is­trat­ing the company’s bank­rupt­cy pro­ceed­ings there, said Wheat­land. “Emer­da­ta has access to noth­ing, Emer­da­ta nev­er had access to any­thing.”

    The U.K Infor­ma­tion Com­mis­sion­er is now exam­in­ing the SCL data as part of its inves­ti­ga­tion. But even when the data trove is returned, pos­si­bly lat­er this year, Wheat­land said it will not be sold or destroyed due to “hold­ing orders as a result of reg­u­la­to­ry and civ­il law­suits.”

    The former CEO said the infamous cache of Facebook data had long since been deleted, and no copies remained in the hands of any of the companies. After learning last year that a reporter had seen a copy of the data, he said, “I and [former Emerdata director Alexander Tayler,] the chief data officer at SCL, were completely mind-boggled as to what that could be or how that could be.”

    How­ev­er, Wheat­land could not dis­pute the pos­si­bil­i­ty that copies of the data had been made. Apart from for­mer employ­ees, a num­ber of oth­er peo­ple had access to ver­sions of the Face­book data set through Alek­san­dr Kogan, the researcher who col­lect­ed it.

    Its capa­bil­i­ties per­sist through new enti­ties

    As Cam­bridge Ana­lyt­i­ca and its relat­ed com­pa­nies con­tin­ue wind­ing down, the core promis­es of their much-debat­ed psy­cho­log­i­cal tar­get­ing meth­ods per­sist, both through relat­ed enti­ties and a grow­ing inter­est in merg­ing big data and behav­ioral sci­ence. Some for­mer Cam­bridge Ana­lyt­i­ca data experts now work for a new firm, the Texas-based behav­ioral-sci­ence mar­ket­ing com­pa­ny Data Pro­pria. The com­pa­ny was found­ed by Matt Oczkows­ki, a polit­i­cal strate­gist who served as Cambridge’s head of prod­uct dur­ing the Trump cam­paign. Ear­li­er this year, Oczkows­ki was also tapped to lead the oper­a­tions of Parscale Dig­i­tal, a data firm launched by Brad Parscale, who ran dig­i­tal media for Trump in 2016 and is man­ag­ing the President’s re-elec­tion cam­paign.

    Parscale Dig­i­tal and Data Pro­pria are both sub­sidiaries of Cloud­Com­merce, a com­pa­ny in which Parscale holds sig­nif­i­cant stock and a board seat. Last year, cam­paign finance experts expressed con­cern that Parscale firms were receiv­ing fund­ing from a pro-Trump super PAC in advance of Trump’s 2020 cam­paign, test­ing rules meant to keep cam­paigns from coor­di­nat­ing with out­side groups.

    Last June, the Asso­ci­at­ed Press report­ed that Data Pro­pria was also sup­port­ing the cam­paign, but Oczkows­ki denied doing any work on behalf of Trump. A per­son famil­iar with Data Propria’s busi­ness said the com­pa­ny worked on polling data for the Repub­li­can Nation­al Com­mit­tee ahead of the 2018 midterm elec­tions, but had since decid­ed to halt its polit­i­cal work in favor of com­mer­cial clients.

    Reports that Data Propria’s meth­ods and staff over­lapped with Cam­bridge Ana­lyt­i­ca prompt­ed mem­bers of Con­gress last June to seek assur­ances from the com­pa­ny that it is not using any of Cambridge’s improp­er­ly obtained Face­book data. Law­mak­ers have not heard back from Oczkows­ki, Data Pro­pria, or Cloud­Com­merce, a spokesper­son for the Ener­gy and Com­merce com­mit­tee said. The per­son famil­iar with Data Pro­pria con­tend­ed the com­pa­ny had no access to the Face­book data.

    Still, it is near­ly impos­si­ble to deter­mine which data cam­paigns are using to tar­get vot­ers, and how they are using it. Even deter­min­ing which firms are work­ing on polit­i­cal campaigns—especially dig­i­tal ones—is chal­leng­ing under exist­ing trans­paren­cy rules in the US.

    “There are a lot of prob­lems with the lack of trans­paren­cy around sub­ven­dor report­ing,” said Bren­dan Fis­ch­er, an attor­ney at the Cam­paign Legal Cen­ter. “The FEC gen­er­al­ly doesn’t require dis­clo­sure of pay­ments to sub­ven­dors, and only requires a min­i­mal descrip­tion of the pur­pose of the dis­burse­ment, which can allow cam­paigns to dis­guise much of their spend­ing.”

    Anoth­er com­pa­ny launched by for­mer SCL employ­ees, Aus­pex Inter­na­tion­al, is ded­i­cat­ed to using sim­i­lar influ­ence meth­ods to pro­vide what a state­ment called “eth­i­cal­ly based” con­sult­ing ser­vices in the Mid­dle East and Africa. The company’s founder and sole investor, a for­mer Emer­da­ta direc­tor named Ahmed Al-Khat­ib, said the com­pa­ny intend­ed to work on polit­i­cal and health cam­paigns and on tack­ling “the spread of extrem­ist ide­ol­o­gy, which has poi­soned my gen­er­a­tion, prey­ing on the minds of dis­en­fran­chised youth.”

    Only one of SCL’s sub­sidiaries sur­vives inde­pen­dent­ly of Emer­da­ta: SCL Insight Lim­it­ed, a gov­ern­ment con­trac­tor that com­plet­ed a “data ana­lyt­ics” con­tract for the U.K. Min­istry of Defence in April 2018. Around the same time, in the run-up to its U.S. bank­rupt­cy fil­ing, Cam­bridge Ana­lyt­i­ca absorbed its own gov­ern­ment-focused sub­sidiary, Anaxi Solu­tions. Orig­i­nal­ly named SCL Group Inc., Anaxi reg­is­tered as a busi­ness enti­ty in Vir­ginia, New York, Delaware, and D.C. dur­ing 2017, and was acquired by Emer­da­ta last August.

    In the new fis­cal fil­ing, Anaxi’s prin­ci­pal activ­i­ty is claimed as “Pro­vi­sion and com­mu­ni­ca­tion ser­vices for elec­tion cam­paigns.” Dur­ing its brief year in busi­ness, how­ev­er, Anaxi Solu­tions appears to have exclu­sive­ly sought gov­ern­ment con­tracts in the infor­ma­tion tech­nol­o­gy, defense, and law enforce­ment sec­tors. A busi­ness plan post­ed online in ear­ly 2018 point­ed to a num­ber of its con­tract bids, includ­ing those at the Depart­ment of Home­land Secu­ri­ty and the Marine Corps Forces Spe­cial Oper­a­tions Com­mand.

    Anaxi Solu­tions was led by Josh Weeras­inghe, who had pre­vi­ous­ly spent the major­i­ty of his career as a civil­ian work­er in the gov­ern­ment IT sec­tor. While staffed at the Office of the Direc­tor of Nation­al Intel­li­gence, Weeras­inghe served under Lt. Gen. Michael Fly­nn, who signed a state­ment of work with SCL for a month-long peri­od in late 2016. A few months after Cambridge’s ini­tial bank­rupt­cy fil­ings, sev­er­al of Anaxi’s data and behav­ioral sci­en­tists fol­lowed their for­mer CEO to an estab­lished gov­ern­ment con­trac­tor that main­tains active fed­er­al con­tracts.

    “We didn’t start the eth­i­cal bit”

    On Wednes­day, as the U.S. Fed­er­al Trade Com­mis­sion announced it would issue a record-break­ing $5 bil­lion fine to Face­book, the agency also said it filed a law­suit against Cam­bridge Ana­lyt­i­ca over “decep­tive prac­tices.” One new issue, the FTC dis­cov­ered, was that the com­pa­ny was for a time not meet­ing its oblig­a­tions under the EU-US Pri­va­cy Shield frame­work, which allows U.S. com­pa­nies to law­ful­ly process con­sumer data from Euro­pean Union coun­tries.

    Wheat­land said this was the result of an admin­is­tra­tive error that occurred in the chaos of the com­pa­nies’ dis­so­lu­tion. The task of respond­ing to the suit would fall to the company’s trustee in charge of its bank­rupt­cy pro­ceed­ings, he said.

    The law­suit, or admin­is­tra­tive com­plaint, seeks to impose new require­ments on Cam­bridge, or what remains of the com­pa­ny cur­rent­ly in bank­rupt­cy pro­ceed­ings: destroy any per­son­al infor­ma­tion col­lect­ed from con­sumers via the Face­book-scrap­ing app, turn over busi­ness doc­u­ments, and refrain from mak­ing false or decep­tive state­ments regard­ing the extent to which it col­lects or uses per­son­al infor­ma­tion, the FTC said.

    The FTC placed sim­i­lar restric­tions on for­mer Cam­bridge CEO Nix and app devel­op­er Kogan as part of sep­a­rate set­tle­ments the agency reached with them. Not named by the FTC was Joseph Chan­cel­lor, who helped Kogan gath­er the Face­book data for Cam­bridge Ana­lyt­i­ca short­ly before he was hired by Face­book as a researcher. Face­book has not explained how it came to hire Chan­cel­lor, who qui­et­ly left the com­pa­ny last Sep­tem­ber.

    Emer­da­ta was also not men­tioned by name in the com­plaint, but there may be room for the U.S. to penal­ize it in the future, giv­en its own­er­ship by the U.S.-based Cam­bridge Ana­lyt­i­ca Hold­ings LLC, Car­roll spec­u­lat­ed. “That would be great, as it would show that com­pa­nies can­not so eas­i­ly evade lia­bil­i­ty in bank­rupt­cy,” he said.

    But while the FTC com­plaint refers to Cam­bridge Ana­lyt­i­ca and its “suc­ces­sors and assigns,” that order is unlike­ly to apply to oth­er SCL-relat­ed com­pa­nies con­trolled by Emer­da­ta, Wheat­land said. The FTC did not respond to a request for clar­i­fi­ca­tion.

    ...

    ———–

    “The strange after­life of Cam­bridge Ana­lyt­i­ca and the mys­te­ri­ous fate of its data” by Jesse Witt and Alex Paster­nack; Fast Com­pa­ny; 07/26/2019

    “Julian Wheat­land, the for­mer CEO of Cam­bridge Ana­lyt­i­ca and for­mer direc­tor of a num­ber of SCL-con­nect­ed firms, told Fast Com­pa­ny this week that there were no plans to revive the com­pa­nies. “I’m pret­ty sure nobody’s think­ing of try­ing to start it up again under a dif­fer­ent guise,” he said.”

    LOL! Yes, Julian Wheatland, the former CEO of Cambridge Analytica, assures us that nobody's thinking of trying to start up Cambridge Analytica again. Wheatland is, of course, one of the original directors of Emerdata, along with Alexander Nix:

    ...
    In an email after this sto­ry was pub­lished, Wheat­land sought to clar­i­fy that the hold­ing com­pa­ny did not seek to acquire the SCL com­pa­nies’ data or oth­er assets, which remain under the con­trol of the admin­is­tra­tors over­see­ing their bank­rupt­cies. “Emer­da­ta did not acquire or retain any assets from the bank­rupt com­pa­nies, and if Emer­da­ta had known in advance that the com­pa­nies were going bank­rupt, would nev­er have acquired them,” he wrote.

    ...

    Miss­ing data, poten­tial “claims” against for­mer employ­ees

    Wheat­land, one of Emerdata’s orig­i­nal direc­tors, resigned ear­li­er this year, he said, because the com­pa­ny “stopped doing any­thing.” Two oth­er for­mer Cam­bridge Ana­lyt­i­ca chief exec­u­tive offi­cers, Alexan­der Nix and Alexan­der Tayler, were also for a time list­ed as Emer­da­ta direc­tors, and have also since resigned. Nix was sus­pend­ed by Cam­bridge Analytica’s board last year after Britain’s Chan­nel 4 pub­lished an under­cov­er cam­era inves­ti­ga­tion in which he was caught dis­cussing dirty elec­tion tricks includ­ing black­mail­ing can­di­dates.
    ...

    So one of Emerdata's original directors is maintaining that there are no plans to revive Cambridge Analytica. It's not exactly reassuring. Especially since Emerdata has been footing the legal bills for Cambridge Analytica and the rest of the SCL offshoots throughout their bankruptcy proceedings. It's the court-appointed administrator, Crowe U.K. LLP, that controls what remains of the companies and their data, which means the entity that controls all of that data is being paid by Emerdata. And when US academic David Carroll sued Cambridge Analytica to obtain his personal data, Crowe chose to subject SCL to a criminal inquiry rather than comply, a move that could be seen as an attempt to shield the nature of that data from public scrutiny:

    ...
    Last sum­mer, as some for­mer employ­ees of Cam­bridge and SCL scat­tered to a hand­ful of suc­ces­sor firms, the com­pa­nies were ful­ly acquired by a hold­ing com­pa­ny called Emer­da­ta Lim­it­ed. The com­pa­ny was incor­po­rat­ed in the U.K. in 2017 by for­mer direc­tors of Cam­bridge Ana­lyt­i­ca and mem­bers of the Mer­cer fam­i­ly, who pro­vid­ed the ini­tial fund­ing for Ana­lyt­i­ca four years ear­li­er. Describ­ing its busi­ness as “data pro­cess­ing, host­ing, and relat­ed activ­i­ties,” Emer­da­ta acquired most of the SCL com­pa­nies pri­or to their bank­rupt­cies, Wheat­land said. The pur­pose was to bring them under a sin­gle own­er­ship struc­ture for the pur­pose of refi­nanc­ing them, he said.

    Since last year, Emer­da­ta has been foot­ing the SCL com­pa­nies’ legal bills amid bank­rupt­cy pro­ceed­ings, inves­ti­ga­tions, and law­suits on both sides of the Atlantic. After both SCL Group and Cam­bridge Ana­lyt­i­ca ceased oper­a­tions and filed for bankruptcy–or, in British par­lance, “went into administration”–in May 2018, Emer­da­ta also paid mil­lions to acquire what remained of the com­pa­nies while they are being liq­ui­dat­ed.

    ...

    David Car­roll, an Amer­i­can pro­fes­sor who sued Cam­bridge Ana­lyt­i­ca to obtain his per­son­al data, has alleged that Emer­da­ta had sur­rep­ti­tious­ly tried to shield the defunct firms from scruti­ny and lia­bil­i­ty. “Obvi­ous­ly, any­thing that you do relat­ed to data pro­tec­tion while the com­pa­ny is active becomes moot if they can just go out of busi­ness as soon as they get caught,” he said.

    An inves­ti­ga­tion into “dis­in­for­ma­tion” led a Par­lia­men­tary com­mit­tee to share Carroll’s con­cerns, as it stat­ed in one of its reports. “The trans­for­ma­tion of Cam­bridge Ana­lyt­i­ca into Emer­da­ta illus­trates how easy it is for dis­cred­it­ed com­pa­nies to rein­vent them­selves and poten­tial­ly use the same data and the same tac­tics to under­mine gov­ern­ments, includ­ing in the UK,” the pan­el wrote. “The indus­try needs clean­ing up.”

    In court, Carroll’s lawyers claimed that Cambridge’s court-appoint­ed admin­is­tra­tors, Crowe U.K. LLP, who were being paid by Emer­da­ta, were act­ing with prej­u­dice against Car­roll when they sought to liq­ui­date assets before a full inves­ti­ga­tion into the com­pa­nies could be con­duct­ed.

    Rather than com­ply with Carroll’s data request last year, Crowe chose instead to sub­ject SCL to a crim­i­nal inquiry by the U.K. Infor­ma­tion Commissioner’s Office, or ICO. In Jan­u­ary, SCL pled guilty to break­ing data laws and was fined a total of $26,000, but was not com­pelled to give Car­roll his data.
    ...

    Adding to suspicions that Emerdata is sitting on that trove of 700 terabytes of Cambridge Analytica data is the simple fact that a reporter was able to view the data, leaving Wheatland “completely mind-boggled” as to how that could be. It's a lot less mind-boggling when you consider the obvious possibilities, like copies of the data having been made or former employees taking the data with them. Possibilities that Wheatland himself acknowledges:

    ...
    The sev­en-page Emer­da­ta fil­ing describes a com­pa­ny strug­gling with miss­ing data–seized by the British data reg­u­la­tor in its raid on SCL’s Lon­don office–and unco­op­er­a­tive and unnamed for­mer employ­ees, against whom it said it was con­sid­er­ing “claims.”

    “Due to [a U.K. Infor­ma­tion Commissioner’s Office] raid result­ing in com­pa­ny account­ing infor­ma­tion being seized, and the lack of coop­er­a­tion shown by for­mer direc­tors, advi­sors, and employ­ees of the Com­pa­ny (against whom, by virtue of the out­right loss of the Company’s investors’ cap­i­tal, the Com­pa­ny is inves­ti­gat­ing poten­tial claims), these accounts have been pre­pared based on lim­it­ed and incom­plete data,” the doc­u­ment reads.

    Cambridge’s admin­is­tra­tors also lost lap­tops and oth­er data, accord­ing to court doc­u­ments filed as part of Carroll’s law­suit against the com­pa­ny. “Employ­ees have refused to return lap­tops to the Admin­is­tra­tors, and oth­ers have been stolen from the Admin­is­tra­tors’ cus­tody,” Carroll’s lawyers wrote. “Adding to con­cerns that the Cam­bridge Ana­lyt­i­ca busi­ness con­tin­ues to be car­ried-on under anoth­er guise, as [of] Novem­ber 2018, for­mer employ­ees were appar­ent­ly still access­ing its cloud-based infra­struc­ture.”

    Crowe, Cambridge’s admin­is­tra­tors in the U.K., declined to com­ment, cit­ing what a spokesper­son said were “ongo­ing inves­tiga­tive mat­ters relat­ing to the UK Com­pa­nies now in liq­ui­da­tion.”

    Like their assets, the rest of the SCL com­pa­nies’ remain­ing data, said to total 700 ter­abytes, remains under the sole con­trol of Crowe and a U.S. lawyer admin­is­trat­ing the company’s bank­rupt­cy pro­ceed­ings there, said Wheat­land. “Emer­da­ta has access to noth­ing, Emer­da­ta nev­er had access to any­thing.”

    The U.K. Information Commissioner is now examining the SCL data as part of its investigation. But even when the data trove is returned, possibly later this year, Wheatland said it will not be sold or destroyed due to “holding orders as a result of regulatory and civil lawsuits.”

    The former CEO said the infamous cache of Facebook data had long since been deleted, and no copies remained in the hands of any of the companies. After learning last year that a reporter had seen a copy of the data, he said, “I and [former Emerdata director Alexander Tayler,] the chief data officer at SCL, were completely mind-boggled as to what that could be or how that could be.”

    How­ev­er, Wheat­land could not dis­pute the pos­si­bil­i­ty that copies of the data had been made. Apart from for­mer employ­ees, a num­ber of oth­er peo­ple had access to ver­sions of the Face­book data set through Alek­san­dr Kogan, the researcher who col­lect­ed it.
    ...

    But perhaps the biggest reason to suspect Emerdata is not just holding that data trove but actively planning on utilizing it is that the company is now largely owned by Rebekah Mercer and her sister Jennifer and is directed by a figure from the Mercer Family Foundation. The Mercers don't seem like the types to relinquish all of that data just because of some laws and court rulings:

    ...
    The hold­ing com­pa­ny now appears to be large­ly owned by con­ser­v­a­tive activist Rebekah Mer­cer and her sis­ter Jen­nifer, whose shares are held in trust and via their U.S.-based Cam­bridge Ana­lyt­i­ca Hold­ings LLC, accord­ing to Cambridge’s bank­rupt­cy fil­ing in New York. The remain­ing fifth of Emerdata’s shares are owned by sev­er­al pri­or SCL Group minor­i­ty investors and a num­ber of Hong Kong-based shell com­pa­nies. Inves­tiga­tive jour­nal­ist Wendy Siegel­man first report­ed on Emer­da­ta and its links to SCL last year.

    Emerdata's newest director, Jacquelyn James-Varga, has also managed accounting over the last few years for the Mercer Family Foundation, the nonprofit run by Rebekah and her megadonor father, Robert. James-Varga formerly served as treasurer for the Mercer-helmed Make America Number 1 political action committee–a major supporter of both Ted Cruz's and Donald Trump's presidential candidacies–as well as for Making America Great, a dormant pro-Trump political action committee. Representatives for Emerdata and the Mercer family did not respond to requests for comment.
    ...

    But concerns about how the data trove might be used going forward aren't limited to Emerdata. Other companies started by former Cambridge Analytica employees include Data Propria, founded by Matt Oczkowski, who now works for the Trump reelection campaign. And Auspex International, which sounds like a Cambridge Analytica dedicated to public persuasion campaigns in the Middle East and Africa. And there's even Virginia-based Anaxi Solutions, which specialized in government contracts:

    ...
    Its capa­bil­i­ties per­sist through new enti­ties

    As Cam­bridge Ana­lyt­i­ca and its relat­ed com­pa­nies con­tin­ue wind­ing down, the core promis­es of their much-debat­ed psy­cho­log­i­cal tar­get­ing meth­ods per­sist, both through relat­ed enti­ties and a grow­ing inter­est in merg­ing big data and behav­ioral sci­ence. Some for­mer Cam­bridge Ana­lyt­i­ca data experts now work for a new firm, the Texas-based behav­ioral-sci­ence mar­ket­ing com­pa­ny Data Pro­pria. The com­pa­ny was found­ed by Matt Oczkows­ki, a polit­i­cal strate­gist who served as Cambridge’s head of prod­uct dur­ing the Trump cam­paign. Ear­li­er this year, Oczkows­ki was also tapped to lead the oper­a­tions of Parscale Dig­i­tal, a data firm launched by Brad Parscale, who ran dig­i­tal media for Trump in 2016 and is man­ag­ing the President’s re-elec­tion cam­paign.

    Parscale Dig­i­tal and Data Pro­pria are both sub­sidiaries of Cloud­Com­merce, a com­pa­ny in which Parscale holds sig­nif­i­cant stock and a board seat. Last year, cam­paign finance experts expressed con­cern that Parscale firms were receiv­ing fund­ing from a pro-Trump super PAC in advance of Trump’s 2020 cam­paign, test­ing rules meant to keep cam­paigns from coor­di­nat­ing with out­side groups.

    Last June, the Asso­ci­at­ed Press report­ed that Data Pro­pria was also sup­port­ing the cam­paign, but Oczkows­ki denied doing any work on behalf of Trump. A per­son famil­iar with Data Propria’s busi­ness said the com­pa­ny worked on polling data for the Repub­li­can Nation­al Com­mit­tee ahead of the 2018 midterm elec­tions, but had since decid­ed to halt its polit­i­cal work in favor of com­mer­cial clients.

    ...

    Anoth­er com­pa­ny launched by for­mer SCL employ­ees, Aus­pex Inter­na­tion­al, is ded­i­cat­ed to using sim­i­lar influ­ence meth­ods to pro­vide what a state­ment called “eth­i­cal­ly based” con­sult­ing ser­vices in the Mid­dle East and Africa. The company’s founder and sole investor, a for­mer Emer­da­ta direc­tor named Ahmed Al-Khat­ib, said the com­pa­ny intend­ed to work on polit­i­cal and health cam­paigns and on tack­ling “the spread of extrem­ist ide­ol­o­gy, which has poi­soned my gen­er­a­tion, prey­ing on the minds of dis­en­fran­chised youth.”

    Only one of SCL’s sub­sidiaries sur­vives inde­pen­dent­ly of Emer­da­ta: SCL Insight Lim­it­ed, a gov­ern­ment con­trac­tor that com­plet­ed a “data ana­lyt­ics” con­tract for the U.K. Min­istry of Defence in April 2018. Around the same time, in the run-up to its U.S. bank­rupt­cy fil­ing, Cam­bridge Ana­lyt­i­ca absorbed its own gov­ern­ment-focused sub­sidiary, Anaxi Solu­tions. Orig­i­nal­ly named SCL Group Inc., Anaxi reg­is­tered as a busi­ness enti­ty in Vir­ginia, New York, Delaware, and D.C. dur­ing 2017, and was acquired by Emer­da­ta last August.
    ...

    Did any of these spin-offs happen to get their hands on that data trove? It's hard to imagine that not being the case. Secretly copying and utilizing those 700 terabytes of data is a technical triviality, after all. And it's hard to imagine Alexander Nix's punishment is acting as a significant deterrent. The only thing holding these spin-offs back is their own self-restraint and internal ethical standards. Which is why the real question of whether all of the Cambridge Analytica data is still being used today comes down to whether the people who made lots of money secretly using it in the first place would be willing to make lots more money secretly using it again.

    Posted by Pterrafractyl | September 27, 2020, 8:16 pm
  39. With the Trump 2020 reelection campaign now in a state of self-inflicted COVID turmoil less than a month before election day, the question only grows as to what sort of orchestrated counter-turmoil we should expect to see created by the Republican Party and its many affiliates. And when we have to ask questions about Republican dirty tricks, that obviously raises all sorts of questions about what exactly happened during the 2016 campaign and all of the dirty-trick mysteries that remain unresolved. So it's worth noting one of the interesting questions that we sort of got an answer to in the Senate Intelligence Committee report released back in August: the question of whether Psy Group first approached the Trump campaign or vice versa.

    First, recall how we previously learned that Psy Group was making pitches to the Trump campaign as early as March of 2016 to help Trump defeat Ted Cruz and win the GOP nomination. But it appeared at the time that it was Psy Group who approached the Trump team, raising all sorts of questions as to who prompted Psy Group to make the offer in the first place. The Saudis and UAE were obvious suspects for being behind the offer, since we also learned that Psy Group's services were offered again in early August of 2016, on behalf of the crown princes of Saudi Arabia and the UAE, to help the Trump campaign win the general election. Right-wing Israeli forces were also an obvious suspect. But we never really got an answer on who was behind that initial pitch.

    Well, tucked away in the Senate Intelligence Committee report we find this fun fact about the earliest reported contact between Psy Group and Republicans: it appears that Kory Bardash, the head of the group Republicans in Israel (also known as Republicans Overseas Israel), was the person who initially reached out, emailing political consultant George Birnbaum and Psy Group project manager Eitan Charnoff to introduce the two.

    Republicans in Israel is a non-profit founded by Marc Zell, a figure described as the head of the Republican Party in Israel. Zell is obviously on board with Republican politics in general, but to get a sense of just how completely on board he is with the contemporary GOP's Trumpian embrace of fascism, it's worth noting that Zell condemned the anti-Nazi counter-protesters over the 2017 violence in Charlottesville, Virginia, the violence that resulted in Trump's infamous “very fine people on both sides” declaration. Zell also called Confederate general Robert E. Lee a great man and made an equivocating statement of his own about how both the North and South had terrible sides during the Civil War. So based on the Senate Intelligence Committee's report, it sounds like it was basically an arm of the Republican Party that made the initial outreach to Psy Group.

    But keep in mind that this initial request for help was made while the Republican primary was still in full swing. So it wasn't really the Republican Party as a whole that reached out to Psy Group but instead a pro-Trump faction of it operating out of Zell's organization. And as we'll see in the second article below, part of what makes this a fascinating turn of events is that Zell himself was publicly against Trump winning the nomination. In early December 2015, Zell declared that Trump can't and won't be president. It wasn't until August of 2016, after Erik Prince and George Nader made their early August secret trip to Trump Tower to make the assistance offer on behalf of the crown princes of Saudi Arabia and the UAE, that Zell was openly talking about his change of heart on Trump after having opposed his nomination.

    It's a sequence of events that raises all sorts of interesting questions about the kinds of backroom negotiations that must have been taking place in late 2015 and early 2016, after it became clear that the Republican electorate strongly preferred Trump over figures like Ted Cruz or Marco Rubio. What kind of secret offers did Trump have to make to the Republican establishment in order to earn the support of figures like Zell? And, in turn, what role might those secret offers have played in the formation of the dirty tricks campaign that was carried out across the world in 2016 in favor of Trump? It's an example of how the more answers we get about what happened in 2016 the more questions get raised, because the cover-up is still ongoing:

    The Times of Israel

    Israel ducks blame for firm with ex-intel offi­cers that bid to ‘shape’ US vote
    Defense Min­istry tries to shrug off respon­si­bil­i­ty as Sen­ate report reveals activ­i­ties of Psy-Group; Israeli law pro­hibits unli­censed export of defense tech, knowl­edge, ser­vices

    By Simona Wein­glass
    25 August 2020, 2:31 pm

    A week after the US Sen­ate issued a report accus­ing an Israeli com­pa­ny, Psy-Group, of attempt­ing to med­dle in US elec­tions and, sep­a­rate­ly, of work­ing on behalf of a Russ­ian agent, Israel’s Defense Min­istry said it is not aware of the company’s activ­i­ties and has nev­er con­sid­ered itself respon­si­ble for reg­u­lat­ing them.

    Sev­er­al of Psy-Group’s employ­ees were grad­u­ates of Israeli intel­li­gence units, includ­ing a for­mer senior IDF intel­li­gence offi­cer, how­ev­er, and Israel’s own export con­trol leg­is­la­tion bars pri­vate firms from export­ing intel­li­gence knowl­edge and ser­vices with­out a license.

    On August 18, the US Sen­ate Select Com­mit­tee on Intel­li­gence released the fifth vol­ume of its Rus­sia inves­ti­ga­tion, which exam­ines Russia’s attempts to med­dle in US pol­i­tics dur­ing the 2016 elec­tions. An entire sec­tion of the report is devot­ed to the Israeli cyber-intel­li­gence com­pa­ny, Psy-Group, which the report con­cludes pitched, but may not have car­ried out, covert influ­ence ser­vices on behalf of the Trump pres­i­den­tial elec­tion cam­paign. “Psy Group rep­re­sen­ta­tives engaged with Trump Cam­paign senior offi­cials in 2016 for a con­tract to per­form work on behalf of the Cam­paign,” it states. “These engage­ments… pur­port­ed­ly nev­er mate­ri­al­ized into any Cam­paign work.”

    The report also says that Psy-Group on sep­a­rate occa­sions worked for at least two Russ­ian oli­garchs, includ­ing Oleg Deri­pas­ka, whom the report describes as “a proxy for the Russ­ian state and intel­li­gence ser­vices.”

    Israel’s Defense Export Con­trol Law strict­ly pro­hibits Israeli com­pa­nies from export­ing defense equip­ment, knowl­edge, tech­nol­o­gy, or ser­vices, includ­ing intel­li­gence knowl­edge and ser­vices, with­out a license from the Min­istry of Defense.

    But when The Times of Israel con­tact­ed the Defense Min­istry to ask whether Psy-Group had obtained a license to car­ry out covert influ­ence cam­paigns in the Unit­ed States or to work on behalf of Russ­ian state actors, a spokes­woman replied, “Psy-Group does not appear on any of our lists. What this means is that they do not have a defense prod­uct that requires reg­u­la­tion. They are not on our list and it is not our respon­si­bil­i­ty to over­see them.”

    An Israeli expert con­sult­ed by The Times of Israel strong­ly con­test­ed this Defense Min­istry asser­tion.

    Dr. Avn­er Barnea, research fel­low at the Nation­al Secu­ri­ty Stud­ies Cen­ter of the Uni­ver­si­ty of Haifa and a for­mer senior offi­cial with the Israel Secu­ri­ty Agency (Shin Bet), said, after read­ing the Sen­ate Intel­li­gence report, that the Min­istry of Defense should have been reg­u­lat­ing Psy-Group and pre­vent­ing it from doing busi­ness with any­one con­nect­ed to Rus­sia or oth­er for­eign intel­li­gence agen­cies.

    “The peo­ple in the Defense Export Con­trol Agency are not seri­ous, unfor­tu­nate­ly,” he told The Times of Israel. “They’re bureau­crats who lack tech­no­log­i­cal under­stand­ing. They should have checked if Psy-Group was using knowl­edge or tech­nol­o­gy it acquired in the mil­i­tary. Who is mak­ing sure that these intel­li­gence secrets don’t leak out of Israel?”

    Accord­ing to the Sen­ate com­mit­tee report, Psy-Group was one of three influ­ence com­pa­nies with for­eign ties that the com­mit­tee inves­ti­gat­ed because it ini­tial­ly sus­pect­ed that the com­pa­ny may have “played a role in shap­ing the out­come of the 2016 US pres­i­den­tial elec­tion.” The oth­er two com­pa­nies were Cam­bridge Ana­lyt­i­ca and Colt Ven­tures. Psy-Group was the brand name used by Invop Ltd., an Israeli com­pa­ny that went into liq­ui­da­tion in April 2018.

    Royi Burstien, the CEO of Psy-Group at the time of the activ­i­ties described in the Sen­ate report, is a lieu­tenant colonel (res.) in Israeli mil­i­tary intel­li­gence.

    The Sen­ate Committee’s descrip­tion of Psy-Group reveals its con­cerns that the com­pa­ny may have used or offered to use meth­ods that the firm’s employ­ees honed in Israeli intel­li­gence.

    “The Com­mit­tee exam­ined these spe­cif­ic com­pa­nies and their activ­i­ties relat­ed to the 2016 US elec­tion to bet­ter under­stand how for­eign influ­ence, includ­ing the use of tech­niques and method­olo­gies honed by for­eign gov­ern­ments and intel­li­gence ser­vices, may have been exert­ed in 2016,” the report stat­ed.

    Accord­ing to the Sen­ate report, “[Psy-Group founder Joel] Zamel described Burstien’s back­ground as includ­ing work in the intel­li­gence field, con­duct­ing influ­ence oper­a­tions. The pre­cise nature of Burstien’s work in the intel­li­gence field is not known to the Com­mit­tee.”

    The com­mit­tee also cit­ed Psy-Group’s own lit­er­a­ture and its empha­sis on intel­li­gence capa­bil­i­ties.

    “The Com­mit­tee reviewed sev­er­al doc­u­ments that described the suite of ser­vices offered by Psy Group. One cor­po­rate overview, enti­tled ‘Shap­ing Real­i­ty through Intel­li­gence and Influ­ence,’ sent from Psy Group to Amer­i­can inter­na­tion­al polit­i­cal con­sul­tant George Birn­baum in May 2016, high­light­ed Psy Group’s capa­bil­i­ties in ‘influ­ence’ and ‘intel­li­gence.’”

    The com­mit­tee quot­ed anoth­er sec­tion of Psy-Group’s mar­ket­ing mate­r­i­al that said “Psy Group’s ‘intel­li­gence offer­ing’ includ­ed a ‘mul­ti-lev­el approach to intel­li­gence col­lec­tion’ that com­bined open source research, cyber oper­a­tions includ­ing social engi­neer­ing and ‘honeypots…to extract required infor­ma­tion from the right sources,’ and ‘covert tech­niques and capa­bil­i­ties in the phys­i­cal world.’”

    The com­mit­tee also inter­viewed Birn­baum, an inter­na­tion­al polit­i­cal con­sul­tant who helped put Psy-Group in touch with the Trump cam­paign. He told the com­mit­tee that Psy-Group’s capa­bil­i­ties were unique, as a result of their mil­i­tary pedi­gree.

    “These guys came out of the [Israeli] mil­i­tary intel­li­gence army unit,” Birn­baum told the com­mit­tee, “and it’s like com­ing out with a triple Ph.D. from MIT. The amount of knowl­edge these guys have in terms of cyber­se­cu­ri­ty, cyber-intel­li­gence . . . they come out of a unit in which their minds in terms of under­stand­ing cyber­se­cu­ri­ty — the algo­rithms that they can cre­ate — it’s just so beyond what you could get [with] a nor­mal edu­ca­tion that it’s just unique…there are hun­dreds and hun­dreds of Israeli start-up com­pa­nies that the founders are guys who came out of this unit.”

    Famil­iar names

    The intel­li­gence com­mit­tee report cites many names that may be famil­iar to con­sumers of Israel-relat­ed news.

    Accord­ing to the report, Psy-Group ini­tial­ly got in touch with the Trump cam­paign in March 2016, when Kory Bar­dash, the head of Repub­li­cans in Israel, emailed Birn­baum, as well as Eitan Charnoff, a project man­ag­er at Psy-Group.

    Eitan Charnoff served as the direc­tor of the wide­ly pub­li­cized orga­ni­za­tion iVote Israel, a pur­port­ed­ly non-par­ti­san group that encour­ages Amer­i­cans in Israel to vote in US elec­tions, but that was accused by some vot­ers of flub­bing their absen­tee bal­lot requests. He ran iVote Israel in 2016, while he was employed at Psy-Group.

    Accord­ing to the Sen­ate report, Bar­dash sent an email to Birn­baum and Charnoff say­ing “I have spo­ken to both of you about the oth­er. Hope­ful­ly, you can have a mutu­al­ly ben­e­fi­cial chat.”

    Lat­er that spring and sum­mer, accord­ing to the intel­li­gence com­mit­tee, Psy-Group pitched two influ­ence and intel­li­gence projects to the Trump cam­paign. These includ­ed offers to do oppo­si­tion research on Hillary Clin­ton, offers to use fake social media pro­files to covert­ly influ­ence Repub­li­can Nation­al Con­ven­tion del­e­gates, and offers to tar­get minor­i­ty com­mu­ni­ties, sub­ur­ban female vot­ers, and unde­cid­ed vot­ers with covert mes­sag­ing.

    In inter­nal com­pa­ny emails, employ­ees also dis­cussed the use of “hun­dreds of avatars dri­ving neg­a­tive mes­sag­ing,” and “phys­i­cal world ops like counter protests, heck­lers, etc.”

    Accord­ing to Zamel, none of these cam­paigns were ever car­ried out: “Not a tweet, not a char­ac­ter, noth­ing,” Zamel told the com­mit­tee. How­ev­er, the com­mit­tee not­ed that Zamel was paid over $1 mil­lion by George Nad­er, an advis­er to the Unit­ed Arab Emi­rates, with high-lev­el Russ­ian ties.

    The Sen­ate report men­tions the names of oth­er Psy-Group employ­ees who were includ­ed in emails dis­cussing pos­si­ble influ­ence cam­paigns on behalf of Trump. One of these is Paul Vese­ly, an Aus­tralian-Israeli, who, in Novem­ber 2017, post­ed an inter­view he gave on his LinkedIn pro­file con­cern­ing dis­in­for­ma­tion cam­paigns dur­ing the 2016 pres­i­den­tial elec­tion.

    In the inter­view, Vese­ly dis­cuss­es such cam­paigns in a knowl­edge­able way, although it is not clear whether he is refer­ring to work that may have specif­i­cal­ly been car­ried out by Psy-Group.

    “The reason the techniques used on social media were so effective during the 2016 US presidential elections,” he is quoted as saying, “was because the messaging was segmented perfectly per target audience. This created both engaging and sometimes infuriating content tailored to appeal to specific segments of the US population. This simple yet effective marketing technique allowed millions of Americans to not just digest but crave the narrative these fake news accounts were spurting out. The second reason why the disinformation campaign was so successful was because it seemed to come from grassroots supporters even though it was being led by avatars. There is no greater power of influence than over people in a segmented group with similar interests [reinforcing] one another's outlook in an effective echo chamber. This echo chamber allowed real people to be involved with avatars and social group administrators who directed conversation and released information that varied in spectrum from being loosely based on truth all the way to being completely fictitious. No matter how distant the content was from the truth, when agreed upon and repeated in a group, it was extremely effective.”

    The Sen­ate Intel­li­gence Com­mit­tee said it could find no con­vinc­ing evi­dence that Psy-Group car­ried out influ­ence oper­a­tions against the US on behalf of Rus­sia. Nor did it draw a con­clu­sion as to whether Psy-Group had car­ried out any influ­ence cam­paigns against Amer­i­cans at all.

    “The Com­mit­tee found no con­vinc­ing evi­dence that Russia’s gov­ern­ment or intel­li­gence ser­vices worked with or through any of these com­pa­nies in fur­ther­ance of Moscow’s 2016 US elec­tion inter­fer­ence. There are, how­ev­er, lim­i­ta­tions to the Committee’s under­stand­ing of this sub­ject,” the report said.

    The com­mit­tee did, how­ev­er, find that Psy-Group had worked for Russ­ian oli­garch Oleg Deri­pas­ka on anoth­er project involv­ing a busi­ness dis­pute in Aus­tria. Deri­pas­ka was intro­duced to Psy-Group by a man named Wal­ter Sori­ano, accord­ing to the report.

    “According to Burstien, Psy-Group undertook an "intelligence project" (codenamed "Project Starbucks") in probably 2015 for Oleg Deripaska involving a business dispute with a large Austrian company, possibly connected to real estate," the report said.

    ...

    The Times of Israel report­ed in May that Psy-Group alleged­ly car­ried out an online harass­ment cam­paign against pro-democ­ra­cy activists in Ukraine, a cam­paign that would have strong­ly aligned with Russ­ian gov­ern­ment inter­ests.

    ————

    “Israel ducks blame for firm with ex-intel offi­cers that bid to ‘shape’ US vote” by Simona Wein­glass; The Times of Israel; 08/25/2020

    “According to the report, Psy-Group initially got in touch with the Trump campaign in March 2016, when Kory Bardash, the head of Republicans in Israel, emailed Birnbaum, as well as Eitan Charnoff, a project manager at Psy-Group.”

    So based on the earliest available evidence, it was Kory Bardash, head of Republicans in Israel, who initiated the idea of hiring Psy Group to help the Trump campaign. Although in that email Bardash refers to having previously spoken with both Birnbaum and Psy Group's Charnoff about the other, so there's still the question of when those earlier conversations started and whether or not they included the topic of assisting Trump. At this point we can conclude that Republicans in Israel/Republicans Overseas Israel had made its decision to back Trump over more traditional figures like Ted Cruz or Marco Rubio by March of 2016 at the latest. And they were so keen on backing Trump that they were willing to hire a foreign social media manipulation specialist like Psy Group to help Trump secure the nomination. In other words, they hadn't simply warmed to Trump at that point. They were full Trump backers:

    ...
    Eitan Charnoff served as the direc­tor of the wide­ly pub­li­cized orga­ni­za­tion iVote Israel, a pur­port­ed­ly non-par­ti­san group that encour­ages Amer­i­cans in Israel to vote in US elec­tions, but that was accused by some vot­ers of flub­bing their absen­tee bal­lot requests. He ran iVote Israel in 2016, while he was employed at Psy-Group.

    Accord­ing to the Sen­ate report, Bar­dash sent an email to Birn­baum and Charnoff say­ing “I have spo­ken to both of you about the oth­er. Hope­ful­ly, you can have a mutu­al­ly ben­e­fi­cial chat.”

    Lat­er that spring and sum­mer, accord­ing to the intel­li­gence com­mit­tee, Psy-Group pitched two influ­ence and intel­li­gence projects to the Trump cam­paign. These includ­ed offers to do oppo­si­tion research on Hillary Clin­ton, offers to use fake social media pro­files to covert­ly influ­ence Repub­li­can Nation­al Con­ven­tion del­e­gates, and offers to tar­get minor­i­ty com­mu­ni­ties, sub­ur­ban female vot­ers, and unde­cid­ed vot­ers with covert mes­sag­ing.

    In inter­nal com­pa­ny emails, employ­ees also dis­cussed the use of “hun­dreds of avatars dri­ving neg­a­tive mes­sag­ing,” and “phys­i­cal world ops like counter protests, heck­lers, etc.”
    ...

    And note the chilling level of honesty from one of the Psy Group employees about how and why Psy Group's techniques for pushing lies on social media are so effective: as long as you can invade people's social media ‘echo chambers' with bots and avatars, people will believe what those bots and avatars tell them. It's diabolically simple:

    ...
    The Sen­ate report men­tions the names of oth­er Psy-Group employ­ees who were includ­ed in emails dis­cussing pos­si­ble influ­ence cam­paigns on behalf of Trump. One of these is Paul Vese­ly, an Aus­tralian-Israeli, who, in Novem­ber 2017, post­ed an inter­view he gave on his LinkedIn pro­file con­cern­ing dis­in­for­ma­tion cam­paigns dur­ing the 2016 pres­i­den­tial elec­tion.

    In the inter­view, Vese­ly dis­cuss­es such cam­paigns in a knowl­edge­able way, although it is not clear whether he is refer­ring to work that may have specif­i­cal­ly been car­ried out by Psy-Group.

    “The reason the techniques used on social media were so effective during the 2016 US presidential elections,” he is quoted as saying, “was because the messaging was segmented perfectly per target audience. This created both engaging and sometimes infuriating content tailored to appeal to specific segments of the US population. This simple yet effective marketing technique allowed millions of Americans to not just digest but crave the narrative these fake news accounts were spurting out. The second reason why the disinformation campaign was so successful was because it seemed to come from grassroots supporters even though it was being led by avatars. There is no greater power of influence than over people in a segmented group with similar interests [reinforcing] one another's outlook in an effective echo chamber. This echo chamber allowed real people to be involved with avatars and social group administrators who directed conversation and released information that varied in spectrum from being loosely based on truth all the way to being completely fictitious. No matter how distant the content was from the truth, when agreed upon and repeated in a group, it was extremely effective.”

    The Sen­ate Intel­li­gence Com­mit­tee said it could find no con­vinc­ing evi­dence that Psy-Group car­ried out influ­ence oper­a­tions against the US on behalf of Rus­sia. Nor did it draw a con­clu­sion as to whether Psy-Group had car­ried out any influ­ence cam­paigns against Amer­i­cans at all.
    ...

    Finally, regarding Psy Group's claims that it never actually carried out any of these services, as the article notes, the company was paid over $1 million by George Nader, who appeared to be acting on behalf of the crown prince of the UAE, for some sort of services. Also recall that George Nader himself has reportedly given a different account of the services Psy Group provided, although we aren't told what he said. So there really is an abundance of circumstantial evidence suggesting something was carried out by Psy Group on the Trump campaign's behalf. Something worth over a million dollars in payment:

    ...
    Accord­ing to Zamel, none of these cam­paigns were ever car­ried out: “Not a tweet, not a char­ac­ter, noth­ing,” Zamel told the com­mit­tee. How­ev­er, the com­mit­tee not­ed that Zamel was paid over $1 mil­lion by George Nad­er, an advis­er to the Unit­ed Arab Emi­rates, with high-lev­el Russ­ian ties.
    ...

    And now here's an August 2016 article about how Marc Zell, described as Israel's leading Republican, came around to not just accepting Trump but becoming a vocal supporter. Zell gives the typical explanation we heard at the time about how Trump was actually a very sober-minded and rational businessman behind the scenes, a lie that was far more plausible then than it is today. The article goes on to describe how some Republicans in Israel had yet to come around to Trump, along with Zell's prediction that they eventually would. Guess who the example is of a Republican in Israel who hadn't yet fully come around to Trump: Zell's co-chair Kory Bardash, the same guy who had made the secret outreach to Psy Group five months earlier to help secure Trump's nomination:

    The Times of Israel

    How Israel’s lead­ing Repub­li­can learned to love The Don­ald
    Trump should not become pres­i­dent, Marc Zell declared eight months ago. Now he sings the nominee’s prais­es, and accus­es the Clin­tons of rap­ing the Amer­i­can pub­lic

    By RAPHAEL AHREN
    11 August 2016, 7:41 pm

    In 1992, Marc Zell, who had just estab­lished an Israeli branch of Repub­li­cans Abroad, pub­licly rebelled against the par­ty. He felt George Bush’s sec­re­tary of state, James Bak­er, was an anti-Semi­te who treat­ed Israel like a door­mat. Rather than vot­ing for a sec­ond Bush term, Zell endorsed Bill Clin­ton.

    “I’m sor­ry I did, because Clin­ton turned out to be a dis­as­ter for Israel. But at that time Clin­ton was say­ing he wasn’t inter­est­ed in for­eign pol­i­cy,” Zell recalled this week.

    The 2016 pres­i­den­tial cam­paign also pre­sent­ed a dilem­ma for Zell. Dur­ing the Repub­li­can pri­maries, Zell sup­port­ed Flori­da Sen­a­tor Mar­co Rubio, and argued that Don­ald Trump lacked the tem­pera­ment to be pres­i­dent. But now, even as the real-estate developer’s cam­paign attracts grow­ing con­tro­ver­sy, Zell is endors­ing Trump.

    In Decem­ber, Zell declared: “The vot­ers under­stand that to lead the Unit­ed States, you need a per­son who knows more than how to sell prod­ucts, with all due respect to Don­ald Trump, and every­thing he has achieved in his career… In my opin­ion, he can­not be pres­i­dent of the Unit­ed States.”

    Eight months and a Repub­li­can pres­i­den­tial nom­i­na­tion for The Don­ald lat­er, Zell, an inter­na­tion­al lawyer based in Jerusalem, sings a dif­fer­ent tune. Still not very fond of Trump’s often brash demeanor, Zell backs the can­di­date and defends his choice by dif­fer­en­ti­at­ing between Trump’s pub­lic per­sona and his poli­cies.

    What turns a crit­ic into a staunch sup­port­er?

    At first, Zell admit­ted in an inter­view, he wres­tled with the New York billionaire’s unex­pect­ed suc­cess. “I said, after hav­ing attacked him in the pri­maries, how can I pos­si­bly rep­re­sent him and the par­ty to the media and else­where?”

    Zell went so far as to offer his res­ig­na­tion — both as chair­man of Repub­li­cans in Israel and as vice pres­i­dent of Repub­li­cans Over­seas. But the boards of both orga­ni­za­tions reject­ed it. “That forced me to have to come to terms with the sit­u­a­tion.”

    Zell, who grew up out­side of Wash­ing­ton, D.C. and immi­grat­ed to Israel in 1986, explained his dra­mat­ic change of heart by cit­ing the Kübler-Ross mod­el, which delin­eates sev­er­al stages of grief.

    “I went through a process that was not dis­sim­i­lar to a griev­ing process: You deny, then you get angry and depressed, and even­tu­al­ly you come to accep­tance,” he said. “And as I went through that process, I came to learn a few things that helped change my mind.”

    The fact that Trump garnered more votes than any Republican primary candidate in history, winning 37 states and knocking 16 other candidates from the race, tops Zell's long list of reasons for his about-face. “First and foremost, the people have spoken, and they have spoken in an unequivocal fashion,” he told The Times of Israel in his office on the 15th floor of a central Jerusalem high-rise, a “Trump 2016” pin shining on his lapel.

    Indi­cat­ing his change of heart is no case of polit­i­cal expe­di­en­cy but rather one of gen­uine, albeit new-found con­vic­tion, Zell pas­sion­ate­ly advo­cat­ed for a Trump pres­i­den­cy.

    For one thing, he hails Trump’s sup­port for Israel. While ini­tial­ly he wor­ried about the candidate’s posi­tion on the Israeli-Pales­tin­ian con­flict, he now says he no longer has any doubts that Trump will be bet­ter for Jerusalem than Demo­c­ra­t­ic rival Hillary Clin­ton. Specif­i­cal­ly, Zell cit­ed the Repub­li­can plat­form, which no longer men­tions a two-state solu­tion. “It was his­toric in its depar­ture from more watered-down Israel planks, like the Democ­rats have now,” he said.

    Him­self a res­i­dent of the West Bank set­tle­ment of Tekoa, Zell deemed it “bril­liant” that the GOP aban­doned calls for Pales­tin­ian state­hood. “I’m against unre­al­is­tic poli­cies. I am against try­ing to fit a square peg into a round hole. There’s no con­text in which a two-state solu­tion in today’s world would work.”

    For anoth­er, Zell also gushed over Trump’s run­ning mate, Mike Pence, say­ing he knows Pence per­son­al­ly and can attest to his bona fide pro-Israel cre­den­tials. Pence’s nom­i­na­tion was unex­pect­ed and speaks mas­sive­ly in favor of Trump, Zell argued, echo­ing the feel­ings of many Repub­li­can Jews in the Unit­ed States.

    Trump also deserved much cred­it for bring­ing sev­er­al sub­jects into the elec­tion spot­light that oth­er­wise would have remained off the radar, such as immi­gra­tion, America’s trade agree­ments, and its rela­tion­ship to NATO, Zell con­tin­ued. “Here’s a guy who, just by the unortho­dox man­ner of cam­paign­ing, intro­duced these top­ics into pub­lic debate. That’s fan­tas­tic. This shows already his abil­i­ty to influ­ence pub­lic opin­ion and to change the order of things.”

    Trump unique­ly rec­og­nized the electorate’s boil­ing anger over the polit­i­cal estab­lish­ment, added Zell, a grad­u­ate of Prince­ton and the Uni­ver­si­ty of Mary­land. “He under­stood this, either instinc­tive­ly or delib­er­ate­ly, and he tapped into that groundswell of dis­con­tent and rode it to (nom­i­na­tion) vic­to­ry.”

    This has noth­ing to do with pop­ulism or dem­a­goguery, insist­ed Zell, a father of eight and a grand­fa­ther of 14. “The sys­tem needs to be shak­en up, both domes­ti­cal­ly and in terms of for­eign pol­i­cy.”

    Anoth­er fac­tor that tipped the scale in Trump’s favor is the iden­ti­ty of that Demo­c­ra­t­ic oppo­nent. Hillary Clin­ton, Zell said, “has got all this expe­ri­ence, it’s true. But she lacks judge­ment.” Zell described the for­mer sec­re­tary of state as incom­pe­tent, dis­hon­est and cor­rupt. And he accused the Clin­ton fam­i­ly of noth­ing less than rap­ing the US pub­lic, cit­ing con­tro­ver­sies involv­ing the Clin­ton Foun­da­tion, Bill Clinton’s lucra­tive pub­lic speak­ing, Chelsea Clinton’s employ­ment, and oth­ers: “It’s a fam­i­ly busi­ness, and it’s been very suc­cess­ful. But they’re com­ing back, these influ­ence ped­dlers, to Wash­ing­ton, and they’re going to take advan­tage, they’re going to rape the Amer­i­can peo­ple again,” he said. “It’s not accept­able.”

    When asked if he real­ly meant to use that word, Zell acknowl­edged that it was strong lan­guage but stuck by it.

    Two Don­ald Trumps

    If oppo­si­tion to Clin­ton is to be expect­ed for a Repub­li­can, how does Zell, who not too long ago expressed con­cern over both Trump’s char­ac­ter and some of his sub­stan­tive poli­cies, defend the Repub­li­can nom­i­nee amid the ongo­ing con­tro­ver­sies sur­round­ing him?

    Zell’s answer: Trump’s osten­si­bly out­ra­geous pol­i­cy pro­pos­als might not be for­mu­lat­ed very ele­gant­ly, but behind them always stands a sound pol­i­cy. (The inter­view took place on Tues­day, before the erup­tion of the firestorm sur­round­ing com­ments by Trump seen by some as a call to vio­lence against Clin­ton.)

    For instance, one of the things that shook Zell out of his “griev­ing process” were sev­er­al dis­cus­sions he had with senior Repub­li­cans who also ini­tial­ly opposed Trump but then got to meet the man and even­tu­al­ly embraced him. One sen­a­tor told Zell how sur­prised he was to find Trump was very well pre­pared for a meet­ing with him, lis­tened care­ful­ly and asked intel­li­gent ques­tions. “This is com­plete­ly at odds with Trump’s pub­lic per­sona,” Zell said.

    Indeed, there are real­ly two Don­ald Trumps, Zell posit­ed. “There’s the pub­lic per­sona, which works as a brand­ing, mar­ket­ing, cam­paign­ing tac­tic. And then there’s a Don­ald Trump that knows how to run a busi­ness. He had his fail­ures, he had his suc­cess­es, but you don’t run a busi­ness by not lis­ten­ing. You got to lis­ten to your advis­ers, you got to make intel­li­gent deci­sions. That’s what he does.”

    For instance, he argued, Trump's proposal to ban Muslims from entering the United States — or “vet them,” as Zell says — is merely a copy of what is going on in Israel on a daily basis. “We profile. When we see a Muslim coming to Israel or leaving Israel, they're subject to special interrogations to make sure they're okay. It's not politically correct at all in the Western context. But it works.”

    The Oba­ma admin­is­tra­tion is unwill­ing to tack­le the issue head on, which is jeop­ar­diz­ing Amer­i­can lives, Zell charged. “That’s just wrong. He [Trump] is say­ing: fix it. He’s right. You don’t like the way he says it. I’m sor­ry. He doesn’t say it in a par­tic­u­lar­ly grace­ful way. But he’s right about sin­gling out this par­tic­u­lar group of peo­ple and check­ing them espe­cial­ly to make sure that they don’t have ter­ror­ist inten­tions.”

    Islam is not just anoth­er reli­gion, Zell went on, argu­ing that it is “no acci­dent” that over 90 per­cent of recent ter­ror­ist attacks were com­mit­ted by Mus­lims. “I didn’t like the tone of the state­ment. But he’s right about Mus­lims. He’s right about the need to pro­tect the home­land against this threat.”

    By refus­ing to speak about “rad­i­cal Islam,” the Oba­ma admin­is­tra­tion does not even acknowl­edge the nature of the prob­lem, and has thus “con­tributed, fun­da­men­tal­ly, to the exis­tence of this prob­lem,” Zell thun­dered. With his com­ments about Mus­lims, Trump was mere­ly stat­ing a sim­ple truth, he added. “Now you don’t like the way he artic­u­lates his views? I’ll tell you some­thing: I’m not that hap­py about the ways in which he artic­u­lates his views some­times. But he made these issues fun­da­men­tal­ly part of the debate.”

    Zell argues sim­i­lar­ly about Trump’s promise to build a wall and have Mex­i­co pay for it. It is debat­able whether Trump is talk­ing about a phys­i­cal wall or a sym­bol­ic one, but the idea behind his pro­pos­al — the need to stop the influx of ille­gal immi­grants — is praise­wor­thy.

    “It’s a beau­ti­ful image,” Zell said about the pro­posed bor­der bar­ri­er. “This guy, Trump, has an unbe­liev­able abil­i­ty to brand him­self and to mar­ket his per­sona. It’s not nec­es­sar­i­ly easy to hear some­times, but he does it, and he does it effec­tive­ly.”

    Zell’s line of defense for all of Trump’s scan­dals works accord­ing to the same prin­ci­ple: It might not sound pret­ty, but the guy is smarter than you think.

    “I wouldn’t talk that way,” he said, refer­ring to Trump’s often offen­sive rhetoric. “It’s not my style. I don’t like it. But there’s the oth­er Trump. The cool-head­ed guy that does his home­work, the behind-the-scenes deci­sion­mak­er. That’s the Trump I didn’t know.”

    What about scores of senior Repub­li­cans who are jump­ing ship, denounc­ing Trump and in some cas­es even endors­ing Clin­ton? Zell, who has nev­er met the GOP nom­i­nee, esti­mat­ed that many of those who ini­tial­ly sup­port­ed oth­er can­di­dates still have not com­plet­ed the griev­ing process. Most of them will even­tu­al­ly come around as well, he pre­dict­ed.

    Dis­uni­ty in the Israeli Repub­li­can camp

    Meanwhile, Zell is working on persuading Republicans in Israel to vote for the party's candidate, launching a get-out-the-vote campaign unprecedented in scope. Some 50 people, paid staff and volunteers, are currently trying to convince the undecided, he said.

    Most Republicans in Israel initially favored Rubio or Ted Cruz but will vote for Trump come November, Zell predicted. In past elections, between 80 and 85 percent of Israeli-Americans supported the GOP candidate (as opposed to Jews living in America, who overwhelmingly vote for Democrats), though he expects the number to be lower than usual this time. “There are still some who need coaxing,” he said.

    One of them might be Zell’s co-chair at the Repub­li­cans’ Israel branch, Kory Bar­dash.

    Asked about his view on Trump, Bar­dash pro­vid­ed The Times of Israel with the fol­low­ing state­ment: “I am cog­nizant of the fact that there are Repub­li­can vot­ers who are ambiva­lent about Mr. Trump being the nom­i­nee. How­ev­er, I would strong­ly rec­om­mend they go out and vote for Repub­li­can House and Sen­ate can­di­dates. When it comes to eco­nom­ic and for­eign pol­i­cy, hav­ing a Repub­li­can Con­gress can ensure improved leg­is­la­tion. Too many Demo­c­ra­t­ic elect­ed offi­cials have shown hos­til­i­ties toward free mar­ket eco­nom­ics and Israel.”

    ...

    ————-

    “How Israel’s lead­ing Repub­li­can learned to love The Don­ald” by RAPHAEL AHREN; The Times of Israel; 08/11/2016

    “Indicating his change of heart is no case of political expediency but rather one of genuine, albeit new-found conviction, Zell passionately advocated for a Trump presidency.”

    It wasn’t political expediency. It was a newfound genuine passionate conviction about Trump. That’s how Marc Zell was spinning his seemingly sudden support of Trump at the time. And while the kinds of statements Zell was making at the time seemed consistent with standard public relations spin that shouldn’t in any way be taken at face value, the fact that we’ve now learned that Zell’s organization was secretly reaching out to Psy Group to secure Trump’s nomination back in March of 2016 does lend credence to Zell’s claims. He really must have been very supportive of Trump to go as far as reaching out to Psy Group.

    And if Zell and his Republicans in Israel organization really were enthusiastically backing Trump in early 2016, you have to wonder whether the various Trump policy proposals Zell cited, like the ‘Muslim ban’ and building ‘the Wall’ with Mexico, really were major factors in convincing him that Trump was the superior candidate. The ‘Muslim ban’ and ‘the Wall’ are normally seen as the kinds of ‘red meat’ policies intended to appeal to the broader Republican voting base, not to elite Republican organizers like Zell. So the fact that Zell cites those as the policies that brought him around to supporting Trump raises the interesting question of how much Trump’s ‘red meat’ for the base is also highly appealing to the Republican elites on whose behalf the party is actually run:

    ...
    Two Don­ald Trumps

    If oppo­si­tion to Clin­ton is to be expect­ed for a Repub­li­can, how does Zell, who not too long ago expressed con­cern over both Trump’s char­ac­ter and some of his sub­stan­tive poli­cies, defend the Repub­li­can nom­i­nee amid the ongo­ing con­tro­ver­sies sur­round­ing him?

    Zell’s answer: Trump’s osten­si­bly out­ra­geous pol­i­cy pro­pos­als might not be for­mu­lat­ed very ele­gant­ly, but behind them always stands a sound pol­i­cy. (The inter­view took place on Tues­day, before the erup­tion of the firestorm sur­round­ing com­ments by Trump seen by some as a call to vio­lence against Clin­ton.)

    For instance, one of the things that shook Zell out of his “griev­ing process” were sev­er­al dis­cus­sions he had with senior Repub­li­cans who also ini­tial­ly opposed Trump but then got to meet the man and even­tu­al­ly embraced him. One sen­a­tor told Zell how sur­prised he was to find Trump was very well pre­pared for a meet­ing with him, lis­tened care­ful­ly and asked intel­li­gent ques­tions. “This is com­plete­ly at odds with Trump’s pub­lic per­sona,” Zell said.

    Indeed, there are real­ly two Don­ald Trumps, Zell posit­ed. “There’s the pub­lic per­sona, which works as a brand­ing, mar­ket­ing, cam­paign­ing tac­tic. And then there’s a Don­ald Trump that knows how to run a busi­ness. He had his fail­ures, he had his suc­cess­es, but you don’t run a busi­ness by not lis­ten­ing. You got to lis­ten to your advis­ers, you got to make intel­li­gent deci­sions. That’s what he does.”

    For instance, he argued, Trump’s proposal to ban Muslims from entering the United States — or “vet them,” as Zell says — is merely a copy of what is going on in Israel on a daily basis. “We profile. When we see a Muslim coming to Israel or leaving Israel, they’re subject to special interrogations to make sure they’re okay. It’s not politically correct at all in the Western context. But it works.”
    ...

    Final­ly, note the exam­ple giv­en of a Repub­li­can in Israel who might still need more con­vinc­ing to sup­port Trump: Kory Bar­dash, the same guy who reached out to Psy Group about sup­port­ing Trump months ear­li­er:

    ...
    Most Republicans in Israel initially favored Rubio or Ted Cruz but will vote for Trump come November, Zell predicted. In past elections, between 80 and 85 percent of Israeli-Americans supported the GOP candidate (as opposed to Jews living in America, who overwhelmingly vote for Democrats), though he expects the number to be lower than usual this time. “There are still some who need coaxing,” he said.

    One of them might be Zell’s co-chair at the Repub­li­cans’ Israel branch, Kory Bar­dash.

    Asked about his view on Trump, Bar­dash pro­vid­ed The Times of Israel with the fol­low­ing state­ment: “I am cog­nizant of the fact that there are Repub­li­can vot­ers who are ambiva­lent about Mr. Trump being the nom­i­nee. How­ev­er, I would strong­ly rec­om­mend they go out and vote for Repub­li­can House and Sen­ate can­di­dates. When it comes to eco­nom­ic and for­eign pol­i­cy, hav­ing a Repub­li­can Con­gress can ensure improved leg­is­la­tion. Too many Demo­c­ra­t­ic elect­ed offi­cials have shown hos­til­i­ties toward free mar­ket eco­nom­ics and Israel.”
    ...

    Notice how Bardash didn’t give any sort of full-throated support in his statement when contacted by the Times of Israel, as if he wanted to maintain the image of someone who was still only tepidly supportive of Trump. The same guy who was secretly trying to hire Psy Group on Trump’s behalf. And yet, to this day, we are told that it isn’t really known if Psy Group ever provided any services for the Trump campaign at all. It’s all a mystery that will apparently go unresolved forever.

    Also note that since Psy Group was an Israeli company it only makes sense that the Israeli branch of the Republican Party would be the group to reach out to them. But that doesn’t necessarily mean that the Psy Group operation was an Israeli Republican operation. The group of Republican elites (and their associates) behind the Psy Group plan could be much larger.

    So as the final month of the 2020 elec­tion clusterf*ck plays out, with all of the upcom­ing dirty tricks we can now con­fi­dent­ly expect from the GOP, it’s going to be impor­tant to keep in mind that the 2016 mys­tery of Psy Group now includes the mys­tery of which Repub­li­can Par­ty elites secret­ly tried to hire it. There’s also the ques­tion of whether or not this same group would be will­ing and able to engage in more dirty tricks in 2020, although that’s not real­ly a mys­tery.

    Posted by Pterrafractyl | October 4, 2020, 9:09 pm
  40. In light of the recent revelation from the Senate Intelligence Committee’s report on the 2016 Russia investigation regarding Psy Group, which raises all sorts of interesting questions about who, in addition to the crown princes of Saudi Arabia and the UAE, was behind the hiring of Psy Group in 2016 to help Donald Trump win the election (the revelation that it was the head of the Republican Party’s primary organization in Israel, Kory Bardash, who initiated the outreach to Psy Group in March of 2016 for the purpose of helping Donald Trump win the 2016 primary), it’s worth noting an earlier revelation about Psy Group that emerged in relation to a completely different scandalous case: according to a lawsuit by Canadian hedge fund West Face Capital Inc., Psy Group employees were caught working in tandem with Black Cube employees on a joint smear campaign project. The hedge fund sued rival Canadian investment fund Catalyst Capital Group Inc., alleging that Catalyst hired Psy Group and Black Cube to run a sting operation and defamation campaign against it. The fund also sued Psy Group and Black Cube for $500 million in damages.

    It sounds like Psy Group’s anti-BDS work was financed by private donors and focused on stigmatizing various pro-Palestinian BDS groups. West Face Capital charges that it was a target of a separate joint Psy Group/Black Cube project and lists the name of an Indian contractor hired by Psy Group to post defamatory content about the hedge fund online. And it’s according to this legal complaint that Psy Group and Black Cube employees were working in tandem with each other when publishing defamatory content, using sophisticated masking techniques to hide their tracks. The hedge fund learned it was a target of Black Cube and Psy Group when employees recognized the image of Black Cube employee Stella Penn Pechanac in news reports about Harvey Weinstein hiring Black Cube to investigate women accusing him of rape and assault.

    It wasn’t a particularly surprising revelation, if true, that Psy Group and Black Cube employees were working in tandem and using sophisticated techniques to hide their tracks. But part of what makes it relevant in the context of the new revelation that the outreach to Psy Group came from the head of Republicans in Israel is this: while Psy Group has long vociferously denied hacking the targets of its clients, Black Cube has been caught hacking the targets of its clients. So if Psy Group and Black Cube have a history of teaming up so closely that the companies’ employees work in tandem on joint projects, those denials that Psy Group would hack a target are pointless. Not that the denials had much weight anyway, since of course they would deny it. And if Kory Bardash, the head of the main Republican Party outreach group in Israel, was the figure who initially tried to hire Psy Group back in March of 2016, we have to ask whether those still-secret Trump backers tried to secretly hire Black Cube for the project too:

    The Times of Israel

    Israeli firm under FBI scruti­ny in Trump probe alleged­ly tar­get­ed BDS activists
    Accord­ing to a law­suit against the com­pa­ny, a Psy-Group oper­a­tive reg­is­tered outlawbds.com, a web­site that report­ed­ly named and shamed sup­port­ers of the anti-Israel boy­cott effort

    By Simona Wein­glass, David Horovitz and Raphael Ahren
    6 June 2018, 8:18 pm

    Psy-Group, a mys­te­ri­ous Israeli com­pa­ny that is report­ed­ly being inves­ti­gat­ed by the FBI in con­nec­tion with Spe­cial Coun­sel Robert Mueller’s probe into alleged ille­gal inter­fer­ence in the 2016 US pres­i­den­tial elec­tion, was also involved in covert anti-BDS efforts, accord­ing to a law­suit against the com­pa­ny and mul­ti­ple sources who spoke to The Times of Israel.

    BDS is a cam­paign by some pro-Pales­tin­ian activists encour­ag­ing peo­ple to boy­cott, divest from and sanc­tion Israel over what they call its ill-treat­ment of the Pales­tini­ans.

    Psy-Group, which oper­ates in Israel under the name Invop Ltd, is a self-styled leader in “intel­li­gence and influ­ence” which boasts in its mar­ket­ing mate­r­i­al of its covert tech­niques and capa­bil­i­ties. Its founder and co-own­er Joel Zamel was report­ed by the New York Times last month to have met with Don­ald Trump Jr. three months before the Novem­ber 2016 US pres­i­den­tial elec­tions, offer­ing to assist his father’s cam­paign, and the com­pa­ny was report­ed to have drawn up “a mul­ti­mil­lion-dol­lar pro­pos­al for a social media manip­u­la­tion effort to help elect Mr. Trump.”

    While a lawyer for Zamel denied that he or any of his companies had any involvement in the US election campaign, Bloomberg News has reported that Special Counsel Mueller’s team is investigating flows of money into Psy-Group’s Cyprus bank account, and also that Psy-Group formed an alliance with Cambridge Analytica, a (now collapsed) company that the Trump campaign consulted on social media issues, following Trump’s election.

    Psy-Group’s mar­ket­ing mate­ri­als high­light its “pro­fes­sion­al­ism, legal­i­ty and dis­cre­tion,” and lit­tle is pub­licly known of its activ­i­ties, but a Cana­di­an legal bat­tle has shed some light on its prac­tices, includ­ing its alleged work to counter anti-Israel BDS activ­i­ties.

    The outlawbds.com web­site

    Cana­di­an hedge fund West Face Cap­i­tal Inc. is cur­rent­ly involved in a legal dis­pute with rival Cana­di­an invest­ment fund Cat­a­lyst Cap­i­tal Group Inc. West Face alleges that Cat­a­lyst hired Psy-Group and a sec­ond pri­vate intel­li­gence com­pa­ny, Black Cube, to con­duct a sting oper­a­tion and defama­tion cam­paign against it. West Face is suing the two Israeli com­pa­nies and their oper­a­tives for $500 mil­lion in dam­ages. Cat­a­lyst, for its part, accus­es West Face of foul play.

    In the course of its inves­ti­ga­tion into the tac­tics alleged­ly used by Black Cube and Psy-Group, West Face dis­cov­ered that a Psy-Group oper­a­tive reg­is­tered the web­site outlawbds.com, West Face’s com­plaint alleges.

    Outlawbds.com has dis­ap­peared from the inter­net, but some BDS pro­po­nents have claimed that the site con­tained a “black­list” with pho­tos and email address­es of indi­vid­u­als believed to sup­port BDS.

    Oth­er BDS sup­port­ers report hav­ing received an email in Sep­tem­ber 2017 from the email address admin@outlawbds.com warn­ing them: “Be aware that you have been iden­ti­fied as a BDS pro­mot­er. Accord­ing to new leg­is­la­tion in New York State, indi­vid­u­als or orga­ni­za­tions that engage in or pro­mote BDS activ­i­ties with US allies will no longer receive pub­lic fund­ing or sup­port. More­over, the state and its agen­cies will no longer engage in busi­ness or hire these orga­ni­za­tions and indi­vid­u­als as they have been deemed prob­lem­at­ic and anti-Amer­i­can. You have been marked. You have been iden­ti­fied. You have a lim­it­ed win­dow of oppor­tu­ni­ty to cease and desist or face the con­se­quences of your actions in legal pro­ceed­ings. In case you have ceased your past wrong­do­ing, please con­tact us at admin@outlawbds.com for your pro­file to be removed from the Black­list.”

    The Times of Israel was told by mul­ti­ple sources that Psy-Group worked to counter BDS activists — and is one of sev­er­al such firms, set up by or employ­ing for­mer Israeli intel­li­gence oper­a­tives, that do so.

    Such work, The Times of Israel was told, is known to the Israeli gov­ern­ment, and specif­i­cal­ly the Min­istry of Strate­gic Affairs, but is not paid for by the gov­ern­ment.

    The sources said that such com­pa­nies engage in var­i­ous under­cov­er activ­i­ties against BDS lead­ers and activists. This work includes high­light­ing the sources of fund­ing for BDS activ­i­ties if such fund­ing is obtained from ter­ror­ist or oth­er banned orga­ni­za­tions, and mak­ing pub­lic instances where activists have expressed extrem­ist and/or anti-Semit­ic views. The goal can be to deter the activists from con­tin­u­ing their activ­i­ties, the sources explained.

    In the case of Psy-Group, The Times of Israel was told but could not inde­pen­dent­ly ver­i­fy, its anti-BDS work was financed by pri­vate donors, and the firm did not engage in ille­gal activ­i­ty, did not dis­sem­i­nate false infor­ma­tion, and did not engage in hack­ing.

    The West Face project

    The com­plaint in the law­suit filed by West Face devotes rel­a­tive­ly lit­tle atten­tion to Psy-Group’s alleged anti-BDS work.

    It briefly asserts that the Psy-Group oper­a­tive who reg­is­tered the out­lawbds web­site went on to hire one “Amin Razvi (‘Razvi’), an indi­vid­ual resid­ing in India,” to car­ry out work relat­ed to outlawbds.com.

    The main thrust of the doc­u­ment deals with the alleged “defama­tion cam­paign against West Face,” as car­ried out by Psy-Group and Black Cube. Razvi worked for Psy-Group on that cam­paign, the doc­u­ment claims, post­ing defam­a­to­ry con­tent about West Face online.

    Detail­ing the alleged activ­i­ties of the two Israeli com­pa­nies, the com­plaint claims that oper­a­tives from both Black Cube and Psy-Group “con­spired to pro­vide reporters, news agen­cies (includ­ing the Nation­al Post, Bloomberg News and the Asso­ci­at­ed Press), as well as oth­ers, with edit­ed, dis­tort­ed or oth­er­wise fal­si­fied record­ings and/or tran­scripts of meet­ings between oper­a­tives of Black Cube and its tar­gets, includ­ing cur­rent and for­mer employ­ees.”

    Accord­ing to the com­plaint, Psy-Group — whose oper­a­tives in the Cana­di­an project alleged­ly includ­ed for­mer Israeli tele­vi­sion jour­nal­ist Emmanuel Rosen — worked in tan­dem with Black Cube, pub­lish­ing defam­a­to­ry arti­cles and social media posts about West Face and using sophis­ti­cat­ed mask­ing tech­niques to hide their tracks.

    West Face learned of Black Cube and Psy-Group’s activ­i­ties when its employ­ees rec­og­nized the image of Stel­la Penn Pechanac in news reports about rape and harass­ment accu­sa­tions against Hol­ly­wood mogul Har­vey Wein­stein. Pechanac was a Black Cube employ­ee who report­ed­ly posed as a woman’s rights activist and who befriend­ed actress Rose McGowan in a bid to gath­er intel­li­gence on behalf of Wein­stein. Employ­ees of West Face rec­og­nized Pechanac as the same woman who had reached out to them.

    Sev­er­al affi­davits by employ­ees and for­mer employ­ees of West Face Cap­i­tal allege that Stel­la Penn Pechanac approached them, using var­i­ous alias­es, offer­ing unique and excit­ing employ­ment oppor­tu­ni­ties and in some cas­es fly­ing them to Lon­don for fake job inter­views. Black Cube oper­a­tives then alleged­ly plied some of these employ­ees with alco­hol in an attempt to extract infor­ma­tion that would ben­e­fit their client in its legal pro­ceed­ings.

    In one instance, a Black Cube oper­a­tive is alleged to have tried to entrap a retired Cana­di­an judge into mak­ing anti-Semit­ic com­ments. Accord­ing to the plain­tiffs, the hope was to dis­cred­it a ver­dict the judge had hand­ed down against Black Cube’s client, Cat­a­lyst, whose man­ag­ing part­ner is Jew­ish.

    In response to a Times of Israel request for com­ment, Black Cube sent the fol­low­ing state­ment: “It is Black Cube’s pol­i­cy to nev­er dis­cuss its clients with any third par­ty, and to nev­er con­firm or deny any spec­u­la­tion made with regard to the company’s work. Ref­er­enc­ing Black Cube has become an inter­na­tion­al sport dur­ing 2018. It is impor­tant to note that Black Cube always oper­ates in full com­pli­ance of the law in every juris­dic­tion in which it con­ducts its work, fol­low­ing legal advice from the world’s lead­ing law firms.”

    In a sep­a­rate mat­ter, Black Cube was accused last month of work­ing to slan­der Oba­ma admin­is­tra­tion offi­cials involved with the Iran nuclear deal as part of a covert effort to dis­cred­it the multi­na­tion­al accord. It denied the alle­ga­tion.

    Psy-Group clos­ing down

    Psy-Group is cur­rent­ly in bank­rupt­cy pro­ceed­ings in an Israeli court.

    The pub­licly avail­able bank­rupt­cy fil­ings, in which 29 of the Psy-Group’s employ­ees have request­ed that the firm be liq­ui­dat­ed, con­tain var­i­ous hith­er­to unpub­lished details about Psy-Group, its activ­i­ties and employ­ees.

    In Israel, Psy-Group oper­ates under the name Invop Ltd., which accord­ing to Israel’s cor­po­rate reg­istry was found­ed on Decem­ber 22, 2014. Its chief exec­u­tive offi­cer is Royi Burstien, the for­mer com­man­der of an Israeli psy­cho­log­i­cal war­fare unit, accord­ing to Bloomberg. All of Invop’s shares are owned by a Cypri­ot com­pa­ny called IOCO Ltd., which in turn is owned by a British Vir­gin Islands com­pa­ny called Pro­tex­er Lim­it­ed.

    Accord­ing to the peti­tion by Psy-Group employ­ees, dat­ed Feb­ru­ary 28, 2018, employ­ees were informed the pre­vi­ous week that they would not receive their Feb­ru­ary salaries because “two crit­i­cal busi­ness devel­op­ments had failed.” As a result, the com­pa­ny had fall­en into a cred­it crunch and would be unable to pay its employ­ees their Feb­ru­ary salaries, or, for that mat­ter, con­tin­ue to oper­ate. The company’s direc­tor, Burstien, did not object to the bank­rupt­cy pro­ceed­ings ini­ti­at­ed by employ­ees and on May 21, the court appoint­ed a spe­cial admin­is­tra­tor for the com­pa­ny.

    The major­i­ty of Psy-Group’s 29 employ­ees list­ed in the peti­tion appear to be young recent uni­ver­si­ty grad­u­ates. A perusal of their LinkedIn pages reveals that many attend­ed the Inter­dis­ci­pli­nary Cen­ter in Her­zliya, an elite pri­vate col­lege near Tel Aviv, and many served in intel­li­gence units in the Israel Defense Forces and oth­er Israeli secu­ri­ty ser­vices, although not nec­es­sar­i­ly in senior roles.

    Despite her relative youth, at least one employee of Psy-Group earned a salary that is more than three times the Israeli average of NIS 10,300 a month: The petition includes the pay stub of an employee whose job title was “project manager” and whose gross salary was NIS 33,036.

    A few names on the list stand out. One of the peti­tion­ers is Emmanuel Rosen, a for­mer Israeli jour­nal­ist. Con­tact­ed by The Times of Israel, he had no com­ment on this arti­cle.

    Also on the list of Psy-Group employ­ees is Eitan Charnoff, who in 2016 was the direc­tor of iVote Israel, a con­tro­ver­sial get-out-the-vote cam­paign for Amer­i­cans in Israel. He direct­ed iVote Israel at the same time that he worked at Psy-Group. “Eitan Charnoff is not cur­rent­ly involved with either inde­pen­dent orga­ni­za­tion so is not in a posi­tion to com­ment on any­thing relat­ed to either enti­ty,” said his attor­ney Car­rie H. Cohen of Mor­ri­son & Foer­ster LLP.

    Israeli-Aus­tralian Paul Vese­ly, who is not one of the peti­tion­ers, but who claimed in his LinkedIn page to have been one of the founders of Psy-Group, was pre­vi­ous­ly the chief oper­at­ing offi­cer at Veri­bo, a com­pa­ny that pro­vid­ed online rep­u­ta­tion man­age­ment ser­vices to bina­ry options com­pa­nies. He had not replied to a request for com­ment at the time of pub­li­ca­tion.

    In addi­tion, four pre­vi­ous Psy-Group employ­ees list­ed in Invop Ltd’s bank­rupt­cy fil­ings cur­rent­ly work for a com­pa­ny called Cyabra, whose stat­ed mis­sion is to help iden­ti­fy sock­pup­pets and fake Face­book pro­files for orga­ni­za­tions that have been tar­get­ed by fake com­ments. Cyabra had no com­ment on this arti­cle.

    ...

    ———-

    “Israeli firm under FBI scruti­ny in Trump probe alleged­ly tar­get­ed BDS activists” By Simona Wein­glass, David Horovitz and Raphael Ahren; The Times of Israel; 06/06/2018

    “Accord­ing to the com­plaint, Psy-Group — whose oper­a­tives in the Cana­di­an project alleged­ly includ­ed for­mer Israeli tele­vi­sion jour­nal­ist Emmanuel Rosen — worked in tan­dem with Black Cube, pub­lish­ing defam­a­to­ry arti­cles and social media posts about West Face and using sophis­ti­cat­ed mask­ing tech­niques to hide their tracks.”

    As the West Face lawsuit demonstrates, Psy Group and Black Cube are certainly willing to work together when a client asks. So we have to ask: when Kory Bardash reached out to Psy Group in 2016, was Black Cube in on the deal? Were a few hacks of the Democrats part of the requested service package? It’s a question especially relevant in the context of the ‘Russian hackers’ fiasco. Also recall that the outreach by Kory Bardash to Psy Group took place in March of 2016, the same month we are told ‘Fancy Bear’ GRU hackers started their hacking campaign against the Democrats.

    It’s also worth recall­ing that one of the dis­in­for­ma­tion cam­paigns Black Cube worked on that involved hack­ing was a cam­paign where it was hired by Cam­bridge Ana­lyt­i­ca to hack the polit­i­cal oppo­nent of Cam­bridge Ana­lyt­i­ca’s client, Niger­ian Pres­i­dent Good­luck Jonathan. Again, we have to ask, did Psy Group hire Black Cube for any Repub­li­can-relat­ed projects in 2016? How about Cam­bridge Ana­lyt­i­ca? Maybe a few hack­ing-relat­ed projects? We don’t know and we’ll pre­sum­ably nev­er find out. Which is all the more rea­son we have to ask.

    Posted by Pterrafractyl | October 7, 2020, 12:04 am
  41. Here’s an interesting story that directly relates to the ongoing, if belated, legal repercussions still emanating from the Cambridge Analytica scandal. But perhaps more importantly it relates to the decision President Trump needs to make over whether or not he’s going to pardon Steve Bannon. A decision that will presumably hinge directly on Bannon’s knowledge of Trump-related crimes and fears that he might be called in to testify about them. A decision that will also presumably be complicated by the fact that pardoning Bannon for a crime also eliminates his ability to assert his Fifth Amendment right against self-incrimination in US courts when asked to testify about those pardoned crimes:

    The US Federal Trade Commission (FTC) recently asked a federal court to force Steve Bannon to testify under oath as part of the FTC’s Cambridge Analytica investigation. The probe will also ask whether Bannon himself should be found personally liable for his role in the Cambridge Analytica scandal and associated data breaches. The FTC also reportedly wants to ask Bannon if copies of the Cambridge Analytica data exist and who might be in possession of them. Given that multiple copies of that treasure trove surely exist and were quite possibly used by the Trump 2020 campaign and all sorts of other Republican campaigns, this could be a highly explosive question for Bannon to answer. At least assuming he won’t just commit perjury with the expectation of a pardon.

    Bannon already agreed to an in-person interview in September, with the understanding that he would be invoking the Fifth Amendment. But when the day came, he simply didn’t show up. He’s already signaled that he’ll be invoking the Fifth and has already defied the FTC on the matter. It really is some sort of legal showdown. The FTC wants to place Bannon in a position where he potentially faces criminal charges, which in theory should make him a pretty compliant witness. Except, of course, Bannon knows he might be pardoned by Trump, but that pardoning window is only going to stay open until Trump leaves office. At the same time, if Trump pardons Bannon for any Cambridge Analytica crimes he might want to do it AFTER Bannon testifies, so Bannon can retain his Fifth Amendment right against self-incrimination during the testimony and also lie under oath if need be, since any perjury charges can be pardoned away too. So Trump needs to not only decide if he’s going to pardon Bannon but also decide when to pardon Bannon. Before or after the interview.

    The court has now scheduled a December 8 hearing on the FTC’s request, which is soon enough that if Trump wants to wait for Bannon to testify, letting Bannon invoke the Fifth Amendment, and only later decide whether to pardon him, that will be an option. Trump has about a week and a half to decide, just as Bannon has about a week and a half to decide whether he’s going to show up for this interview or defy a court order and keep holding out for that pardon:

    Politi­co

    FTC asks court to force Ban­non to tes­ti­fy on Cam­bridge Ana­lyt­i­ca scan­dal

    The FTC wants to inter­view Ban­non as part of a probe into whether he should be found per­son­al­ly liable for his involve­ment in the data breach.

    By LEAH NYLEN
    11/20/2020 03:43 PM EST

    The Fed­er­al Trade Com­mis­sion has asked a fed­er­al court to force for­mer Trump cam­paign CEO Steve Ban­non to tes­ti­fy under oath as part of the agency’s inves­ti­ga­tion into Facebook’s Cam­bridge Ana­lyt­i­ca data breach.

    FTC pros­e­cu­tors said they want to inter­view Ban­non as part of a probe into whether he should be found per­son­al­ly liable for his involve­ment in the breach, in which the now-defunct polit­i­cal data firm improp­er­ly obtained infor­ma­tion on about 50 mil­lion Face­book users. Before join­ing Don­ald Trump’s 2016 cam­paign team, Ban­non served as vice pres­i­dent and a board mem­ber of Cam­bridge Ana­lyt­i­ca, which also did work for the pres­i­den­t’s cam­paign.

    The FTC filed suit last year against Cam­bridge Ana­lyt­i­ca and two of its senior exec­u­tives: Alek­san­dr Kogan, who devel­oped the app involved in the breach, and CEO Alexan­der Nix, both of whom set­tled with the agency. The FTC also fined Face­book $5 bil­lion for fail­ing to pro­tect its users’ pri­va­cy.

    Court fil­ings: In court papers filed under seal last week, the FTC said Ban­non agreed to appear for an in-per­son inter­view at the com­mis­sion in Sep­tem­ber, but then didn’t show up. The FTC’s fil­ings were unsealed Fri­day and a fed­er­al judge sched­uled a hear­ing for Dec. 8 on the agency’s request.

    FTC lawyers said they want to ques­tion Ban­non about whether the Face­book user data col­lect­ed by Cam­bridge Ana­lyt­i­ca still exists and was shared with any­one else.

    “Any fur­ther delays in dis­cov­er­ing addi­tion­al infor­ma­tion about where the decep­tive­ly obtained Face­book pro­file data may be locat­ed, or with whom it may have been shared, would fur­ther harm con­sumers,” the FTC’s lawyers said.

    Pre­vi­ous dif­fi­cul­ties: The FTC had some dif­fi­cul­ty serv­ing Ban­non, the for­mer exec­u­tive chair­man of Bre­it­bart News, with a sub­poe­na last year, but the agency said he was for­mal­ly served in Novem­ber 2019. An inter­view in March was post­poned because of the coro­n­avirus pan­dem­ic. Bannon’s lawyers then nego­ti­at­ed a new inter­view for Sep­tem­ber, with the caveat that the for­mer Trump strate­gist would invoke his Fifth Amend­ment right against self-incrim­i­na­tion in response to any ques­tions.

    ...

    The day before his sched­uled inter­view in Sep­tem­ber, Bannon’s lawyers told the FTC he wouldn’t be com­ing. The FTC asked the court to require Ban­non to sit for an inter­view under oath.

    ———–

    “FTC asks court to force Ban­non to tes­ti­fy on Cam­bridge Ana­lyt­i­ca scan­dal” by LEAH NYLEN; Politi­co; 11/20/2020

    “FTC lawyers said they want to ques­tion Ban­non about whether the Face­book user data col­lect­ed by Cam­bridge Ana­lyt­i­ca still exists and was shared with any­one else.

    Who has the stolen Facebook data? That appears to be one of the key questions the FTC wants to ask Bannon, which is the kind of question that threatens far more people than just Bannon. The GOP has had at least 4 years to secretly learn how to utilize the combined data sets of Facebook’s personal data profiles and the Cambridge Analytica psychological profiles. What did Republican campaigns do with all that data and how might it have been used in 2018 or 2020? If anyone knows the answers to those questions it’s Steve Bannon, which is why pardoning him has to be sooooo tempting right now. Except for that pesky issue of pardons and the Fifth Amendment:

    ...
    Pre­vi­ous dif­fi­cul­ties: The FTC had some dif­fi­cul­ty serv­ing Ban­non, the for­mer exec­u­tive chair­man of Bre­it­bart News, with a sub­poe­na last year, but the agency said he was for­mal­ly served in Novem­ber 2019. An inter­view in March was post­poned because of the coro­n­avirus pan­dem­ic. Bannon’s lawyers then nego­ti­at­ed a new inter­view for Sep­tem­ber, with the caveat that the for­mer Trump strate­gist would invoke his Fifth Amend­ment right against self-incrim­i­na­tion in response to any ques­tions.

    ...

    The day before his sched­uled inter­view in Sep­tem­ber, Bannon’s lawyers told the FTC he wouldn’t be com­ing. The FTC asked the court to require Ban­non to sit for an inter­view under oath.
    ...

    What will Trump do? What about Ban­non? Will he even show up this time? These are the ques­tions we’ll get answered in about a week and a half. Unlike the ques­tions about what hap­pened to all the Cam­bridge Ana­lyt­i­ca data and stolen Face­book data, which will pre­sum­ably remain unan­swered one way or anoth­er.

    Posted by Pterrafractyl | November 29, 2020, 6:52 pm
  42. In light of the recent reports about how Mark Zuckerberg and Joel Kaplan have been personally intervening to protect figures like Alex Jones or outlets like Ben Shapiro’s Daily Wire from the consequences of breaking Facebook’s rules, here’s an article from Judd Legum’s Popular.info newsletter from back in June about another example of Facebook seemingly bending the rules in ways intended to maximize the reach and influence of Shapiro’s Daily Wire:

    First, recall that we just learned Facebook decided to continue allowing In Feed Recommendations (IFR) — a feature that inserts posts into people’s feeds from accounts they don’t follow, ostensibly to ‘foster new connections’ — to serve links to conservative personalities including Ben Shapiro despite rules against political IFR content. Why did Facebook continue serving up links to Ben Shapiro to people who hadn’t signed up for Shapiro content? Because, Kaplan’s content policy team argued, if they dropped the links to Shapiro and other conservatives that might trigger a new round of right-wing accusations about Facebook ‘shadow-banning’ them.

    Preemptive capitulation in the face of possible ‘shadow-banning’ charges was consistently used internally as an excuse to continue policies that help right-wing causes and personalities. So you have to wonder if that was also the internal excuse used when Facebook decided not to enforce the rules against the Daily Wire in another case of systematic rule-breaking discovered by Popular.info last year: it appears that the Daily Wire was secretly paying one of the most prolific super-spreaders of far right junk ‘news’ content on Facebook to promote the Daily Wire’s content. The network of high-profile webpages is run by Corey and Christy Pepple, who are best known as the creators of Mad World News. The network specializes in taking old, highly racially charged stories and recycling them (without indication they are years old) in ways designed to exploit Facebook’s algorithms. And yet there’s one source of content pushed by this network that isn’t recycled racist click-bait: the Daily Wire’s content, and the Daily Wire appears to be the only outside publisher with this arrangement.

    It turns out this kind of arrange­ment is in direct vio­la­tion of Face­book’s rules. And yet, when direct­ly con­front­ed with the evi­dence, Face­book refused to do any­thing about it and denied that the Dai­ly Wire was break­ing the rules at all.

    So what are the consequences of Facebook allowing the Daily Wire to flout its third-party promotion rules? Well, in May of 2020, the Daily Wire was the seventh-ranked publisher on Facebook, and on a per-article basis it receives far greater distribution than any other major publisher. The Daily Wire really is getting an enormous service from the Pepples’ network in the form of outsized Facebook traffic. A service that Facebook should be punishing both the Daily Wire and the Pepples for engaging in, and yet Facebook refuses to acknowledge that anything wrong even took place. So it looks like we can add ‘allowing the Daily Wire to piggy-back on a racist-click-bait empire even when it’s against the rules’ to the list of things Facebook has been doing to ensure it remains the greatest propagator of far right content ever known.

    But there’s another aspect of this story that should be pointed out in the context of the ongoing internet-driven radicalization of conservative audiences taking place around the world: the reliance by Mad World News LLC on old polarizing click-bait articles that inflame fears and prejudices isn’t just an example of amoral marketing tactics. It’s actually a means of attracting and keeping an extremist-minded audience. Extremist-minded in the most fundamental sense, according to some recently published research that examined the perceptual traits of extremists.

    The new research, which was led by Dr Leor Zmigrod’s lab at Cambridge University’s department of psychology, compared how people of different political orientations fundamentally perceived the world. The study gave 522 participants a battery of 37 cognitive tests and 22 personality surveys that focused on self-regulation and personality characteristics. The study was designed to ask the following questions: To what extent do the ideologies people espouse reflect their cognitive and personality characteristics? What are the commonalities and differences between the psychological underpinnings of diverse ideological orientations? What are the contributions of cognitive processes versus personality traits to the understanding of ideologies? And which psychological traits are associated with one’s likelihood of being attracted to particular ideologies?

    The surveys allowed the researchers to assess both the political ideologies and worldviews of the participants as well as their personal psychological traits (self-reported) and included self-reported questionnaires on nationalism, patriotism, social and economic conservatism, system justification, dogmatism, openness to revising one’s viewpoints, and engagement with religion. They then measured a variety of cognitive traits by asking participants to carry out tasks like viewing a dot moving on a screen and determining as quickly as possible whether the dot was moving left or right. They distilled the survey psychological information down to three core psychological dimensions (conservatism, dogmatism, and religiosity) and compared the fundamental cognitive traits of people along these dimensions. Perhaps not surprisingly, they found that people who self-reported higher levels of conservatism, dogmatism, and religiosity were literally slower and more cautious at making assessments about the physical world around them, like whether or not the dot was moving to the left or right.
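
    To make that pipeline concrete, here’s a minimal sketch of the kind of analysis described above: distill many survey scores into a few latent dimensions, then relate those dimensions to performance on a speeded perceptual task. The stand-in data, the variable names, and the use of scikit-learn’s FactorAnalysis are illustrative assumptions, not the study’s actual code:

        # Illustrative sketch only: distill survey responses into three latent
        # dimensions, then correlate each dimension with perceptual speed.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        n_participants, n_survey_items = 522, 22  # figures taken from the write-up above

        # Stand-ins for real data: survey item scores and mean reaction times (ms)
        # on the dot-motion task
        surveys = rng.normal(size=(n_participants, n_survey_items))
        reaction_time = rng.normal(loc=450, scale=60, size=n_participants)

        # Distill the survey battery into three latent dimensions (the study
        # labels its three dimensions conservatism, dogmatism, and religiosity)
        fa = FactorAnalysis(n_components=3, random_state=0)
        scores = fa.fit_transform(surveys)  # shape: (522, 3)

        # Relate each latent dimension to perceptual speed
        for name, column in zip(("conservatism", "dogmatism", "religiosity"), scores.T):
            r, p = pearsonr(column, reaction_time)
            print(f"{name}: r = {r:+.2f} (p = {p:.3f})")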

    Overall, they found that the political conservatism factor in their model, which reflects tendencies towards political conservatism and nationalism, was significantly associated with greater caution and temporal discounting and reduced strategic information processing in the cognitive domain, and with greater goal-directedness, impulsivity, and reward sensitivity, and reduced social risk-taking in the personality domain. They also found that people who tended towards extremism had poorer emotional self-regulation. It’s the kind of research that, if it pans out, highlights how insidiously manipulative Facebook’s relationship is with groups like Mad World News LLC and Shapiro’s Daily Wire, by revealing what appears to be a greater vulnerability of psychologically conservative-oriented people to manipulative media practices. The mutual relationship between Facebook and the right-wing disinfotainment media outlets that dominate it by pumping out highly emotionally charged content intended to trigger the fear and anxiety centers of the brain isn’t just a mass psychological manipulation campaign. It’s a mass psychological manipulation campaign that is systematically having a greater impact on the psychologically conservative segments of society. It’s a potentially diabolical method for societal polarization that operates at a subconscious level.

    But there’s another major twist in this story: while this is the kind of study that’s interesting on its own as an example of the research taking place these days examining the relationship between fundamental psychological and cognitive traits and our political orientation, part of what makes this new research so interesting in the context of Facebook is that it was carried out by Cambridge University’s psychology department. And it was that same department’s research that was at the heart of the Cambridge Analytica scandal, the department where Aleksandr Kogan worked. This new research shares a number of parallels with Kogan’s work. In particular, recall how Kogan’s research was similarly focused on discerning basic psychological characteristics of people based on information like their Facebook “Likes”, and then using those psychological profiles to predict people’s politics. This new research sounds a lot like an extension of Kogan’s research, and it’s coming out of the same department. Recall how Kogan’s research found that people who scored high on the “neuroticism” scale were also easier to manipulate with inflammatory content, and much of Cambridge Analytica’s actual services relied on identifying the most neurotic people and serving them provocative ads. This new research appears to more or less validate that political strategy. It’s a remarkable and scary fun fact.

    Another parallel between the lax attitude Facebook takes towards Mad World News serving up recycled old inflammatory stories and its lax attitude towards Cambridge Analytica’s 2016 voter micro-targeting campaign is that both Mad World News and Cambridge Analytica serve up inflammatory content to people not necessarily seeking it out. Mad World News is so ubiquitous you can’t help but come across it if you browse Facebook. Its size and reach make exposure to it somewhat inevitable, especially for conservative readers. And with the Cambridge Analytica scandal the end goal was manipulative micro-targeted political ads based on psychological profiles inferred from Facebook data. In both cases, Facebook was allowing itself to be used for extremist outreach. Targeted outreach.

    So we have new research out of Cambridge’s psychology department that sounds A LOT like the research the Cambridge Analytica scandal was based on, and this new research is confirming the relationship between political orientation and psychological and cognitive traits. But it’s also confirming the premise that the Cambridge Analytica effort was based on: that if you can identify someone’s basic psychological profile there’s a good chance you can predict their politics, and that the psychological profiles associated with conservative politics tend to be easier to manipulate with inflammatory and provocative content. Which, again, is why Facebook’s protection of the Daily Wire’s secret relationship with super-peddlers of deceptive, provocative right-wing content is so diabolical:

    Popular.info

    The dirty secret behind Ben Shapiro’s extra­or­di­nary suc­cess on Face­book

    Judd Legum and Tes­nim Zek­e­ria
    Jun 25, 2020

    The suc­cess of The Dai­ly Wire, the web­site run by right-wing pun­dit Ben Shapiro, on Face­book is mind-bog­gling. The site has a small staff and pri­mar­i­ly aggre­gates con­tent from Twit­ter and oth­er news out­lets. Typ­i­cal­ly, its arti­cles are very short, usu­al­ly less than 500 words, and con­tain no orig­i­nal report­ing.
    And yet, last month, The Dai­ly Wire was the sev­enth-ranked pub­lish­er on Face­book, accord­ing to the ana­lyt­ics ser­vice NewsWhip. Arti­cles pub­lished in The Dai­ly Wire attract­ed 60,616,745 engage­ments in May. Engage­ment is a com­bi­na­tion of shares, likes, and com­ments, and is a way of quan­ti­fy­ing dis­tri­b­u­tion on Face­book. The reach of The Dai­ly Wire’s arti­cles was equal to the New York Times (60,722,727) and more than the Wash­ing­ton Post (49,219,525).

    But that actu­al­ly under­states how well The Dai­ly Wire does on Face­book. While the New York Times pub­lished 15,587 arti­cles in May, and the Wash­ing­ton Post pub­lished 8,048, The Dai­ly Wire pub­lished just 1,141. On a per arti­cle basis, The Dai­ly Wire receives more dis­tri­b­u­tion than any oth­er major pub­lish­er. And it’s not close.

    What explains The Dai­ly Wire’s phe­nom­e­nal suc­cess on Face­book? Pop­u­lar Infor­ma­tion revealed part of the answer last Octo­ber. But the full sto­ry is much dark­er.

    Pop­u­lar Infor­ma­tion has dis­cov­ered a net­work of large Face­book pages — each built by exploit­ing racial bias, reli­gious big­otry, and vio­lence — that sys­tem­at­i­cal­ly pro­mote con­tent from The Dai­ly Wire. These pages, some of which have over 2 mil­lion fol­low­ers, do not dis­close a busi­ness rela­tion­ship with The Dai­ly Wire. But they all post con­tent from The Dai­ly Wire ten or more times each day. More­over, these pages post the exact same con­tent from The Dai­ly Wire at the exact same time.

    The undis­closed rela­tion­ship not only helps explain The Dai­ly Wire’s unlike­ly suc­cess on Face­book but also appears to vio­late Face­book’s rules.

    How to con­vert big­otry and fear into shares and likes

    The net­work of large Face­book pages pro­mot­ing The Dai­ly Wire are all run by Corey and Christy Pep­ple, who are best known as the cre­ators of Mad World News. Face­book pages con­trolled by the Pep­ples include Mad World News (2,176,003 fol­low­ers), The New Resis­tance (2,857,876 fol­low­ers), Right Stuff (610,809 fol­low­ers), Amer­i­ca First (577,753 fol­low­ers), and Amer­i­can Patri­ot (447,799 fol­low­ers).

    The reach of these pages is mas­sive. Con­tent post­ed to these five pages has gen­er­at­ed more than 31 mil­lion engage­ments on Face­book over the last three months, accord­ing to Crowd­Tan­gle, an ana­lyt­ics ser­vice owned by Face­book. To put that in per­spec­tive, the reach of the net­work over this time peri­od exceeds the New York Times (28 mil­lion engage­ments), the Wash­ing­ton Post (20 mil­lion engage­ments), and Huff­Post (19 mil­lion engage­ments).

    How did the pages like Mad World News and The New Resis­tance grow so big? They did it by exploit­ing racism, reli­gious big­otry, and vio­lence.

    Here is how it works. Most of the con­tent on the five pages in this net­work con­sists of links to MadWorldNews.com and TadHaps.com, two web­sites owned by the Pep­ples. These sites iden­ti­fy incen­di­ary sto­ries — that are fre­quent­ly months or years old — that prey on prej­u­dice and fear. The sites then rewrite the sto­ries with no indi­ca­tion that the sto­ry is old. This gen­er­ates a “new” link that is able to thrive in Face­book’s algo­rithm.

    For exam­ple, TadHaps.com pub­lished a sto­ry on June 19, 2020, with the head­line “Fam­i­ly Dis­plays ‘South­ern Pride’ Sign, Stranger Con­fronts Them With Gun.” The arti­cle describes how a man named Mark Wil­son was stand­ing on the side of the road with his fam­i­ly, wav­ing Con­fed­er­ate flags. Accord­ing to Wil­son, a man drove up and point­ed a gun at him and oth­er fam­i­ly mem­bers. The man then drove away with­out harm­ing any­one.

    It’s not mentioned in the TadHaps article, but the incident occurred five years ago, in 2015.

    On June 20, 2020, the Tad­Haps arti­cle was then post­ed to Mad World News, The New Resis­tance, Right Stuff, Amer­i­ca First, and Amer­i­can Patri­ot Face­book pages. It quick­ly racked up about 5,000 total engage­ments on Face­book.

    Other stories published on TadHaps in the last few days include a remorseless Black gang member who dragged a police officer behind a stolen car, a fast food restaurant that changed the names of menu items to be more respectful of Muslims, and a 13-year-old girl who was raped by five men. None of the stories mention that these incidents occurred months or years ago.

    The pur­pose of Tad­Haps is not to inform but to manip­u­late the Face­book algo­rithm by recy­cling old sto­ries that elic­it emo­tion­al reac­tions from con­ser­v­a­tives.

    The Dai­ly Wire’s tac­tics and Face­book’s rules

    Why do these tox­ic Face­book pages keep shar­ing con­tent from The Dai­ly Wire? Do the Pep­ples just real­ly like Ben Shapiro’s site? The Dai­ly Wire did not respond to a request for com­ment. But the behav­ior of these pages strong­ly sug­gests that The Dai­ly Wire and Mad World News, LLC, the com­pa­ny owned by Corey and Christy Pep­ple, have a busi­ness rela­tion­ship.

    The Dai­ly Wire is the only web­site out­side of those owned by the Pep­ples that is shared by these five pages. And each of the five Face­book pages shares at least ten Dai­ly Wire links every day. Con­spic­u­ous­ly, the Face­book pages share the exact same links from The Dai­ly Wire at the exact same time.

    ...

    The pat­tern repeats over and over again, ten times or more every day. It’s behav­ior that strong­ly sug­gests that Mad World News, LLC is being paid to pro­mote con­tent from The Dai­ly Wire.

    If that’s the case, The Dai­ly Wire could be vio­lat­ing Face­book’s rules. Face­book allows pages to be paid to post con­tent, but the spon­sor­ship must be dis­closed using Face­book’s brand­ed con­tent tool.

    We define brand­ed con­tent as a cre­ator or pub­lish­er’s con­tent that fea­tures or is influ­enced by a busi­ness part­ner for an exchange of val­ue. Cre­ators must use the brand­ed con­tent tool to tag the fea­tured third par­ty prod­uct, brand, or busi­ness part­ner with their pri­or con­sent. Brand­ed con­tent may only be post­ed by Face­book Pages and pro­files and Insta­gram accounts with access to the brand­ed con­tent tool.

    The activ­i­ty also appears to vio­late Face­book’s pro­hi­bi­tion on coor­di­nat­ed inau­then­tic behav­ior, which includes a ban on activ­i­ty to “arti­fi­cial­ly boost the pop­u­lar­i­ty of con­tent.”

    In response to an inquiry from Pop­u­lar Infor­ma­tion, a Face­book spokesper­son said it inves­ti­gat­ed the behav­ior of these pages and found no vio­la­tion of Face­book’s rules. The spokesper­son said Face­book could not deter­mine if there was a finan­cial rela­tion­ship between the pages con­trolled by Mad World News LLC and The Dai­ly Wire, and that the brand­ed con­tent pol­i­cy did not apply to post­ing links.

    The noto­ri­ous Mad World News

    There is no rea­son that the net­work of Face­book pages run by Corey and Christy Pep­ple should have flown beneath Face­book’s radar. Years ago, the Pep­ples became noto­ri­ous for exploit­ing Face­book with poi­so­nous con­tent.

    At first, Mad World News was effec­tive­ly the couple’s blog: they rewrote pub­lished arti­cles, added their own com­men­tary as “Chris­t­ian Con­ser­v­a­tives,” and shared their posts on Face­book. As the Pep­ples’ blog gained trac­tion on Face­book, they began includ­ing dig­i­tal ads and exper­i­ment­ing with the type of sto­ries they fea­tured. Divi­sive sto­ries, in par­tic­u­lar, per­formed dis­turbing­ly well. “We [all] like division…We thrive on it,” said Christy Pep­ple to The New York Times’ The Dai­ly in 2018. At the time, the site drew rough­ly 20 mil­lion views each month. One month the Pep­ples made more in dig­i­tal ad rev­enue from the site than their com­bined salaries in the pre­vi­ous year, accord­ing to The Dai­ly.

    Most of this rev­enue, how­ev­er, is gen­er­at­ed from decep­tive, if not out­right false, con­tent. News­Guard reports that Mad World News repeat­ed­ly makes “dis­tort­ed or mis­lead­ing claims, includ­ing about dis­cred­it­ed con­spir­a­cy the­o­ries.” In 2016, for exam­ple, an inac­cu­rate arti­cle on late-term abor­tions received 1.1 mil­lion Face­book engage­ments, “mak­ing it the most shared arti­cle about abor­tion on Face­book,” The New York Times report­ed.

    In May of this year, News­Guard flagged a Mad World News piece that accused Antho­ny Fau­ci of con­spir­ing with Bill Gates as part of “sin­is­ter plans to ‘set up’ Pres­i­dent Don­ald Trump.”

    The out­let fre­quent­ly runs sto­ries tar­get­ing Black, Mus­lim, and immi­grant pop­u­la­tions. Recent sto­ry head­lines include: “Atlanta Cops Walk Out In Protest Over Demo­c­rat DA’s ‘Sick Secret’ In Brooks Case,” “NYC Black Man Knocks Down 92-year-Old White Woman, Thanks to BLM & De Bla­sio,” and “Mil­lions Donat­ed to ‘Defund the Police’ Secret­ly Direct­ed To Biden’s Cam­paign.”

    Despite this, Mad World News’ sto­ries rarely elic­it “dis­put­ed” labels or dis­claimers from Face­book. On the site’s “About Us” page, the plat­form attempts to excuse itself of fact-check­ing pro­ce­dures, claim­ing that its con­tent “express­es a per­son­al opin­ion, advo­cates a point of view, or is self-pro­mo­tion­al” and should be treat­ed as such “for the pur­pose of fact-check­ing.”

    The Dai­ly Wire’s his­to­ry of play­ing by its own rules

    The Dai­ly Wire’s appar­ent busi­ness rela­tion­ship with Mad World News isn’t the first time the site has been caught flout­ing Face­book’s rules. Last Octo­ber, Pop­u­lar Infor­ma­tion revealed a clan­des­tine net­work of 14 large Face­book pages that pur­port­ed to be inde­pen­dent but exclu­sive­ly pro­mote con­tent from The Dai­ly Wire in a coor­di­nat­ed fash­ion.

    The net­work clear­ly vio­lat­ed Face­book’s pro­hi­bi­tion on “inau­then­tic behav­ior,” which includes con­ceal­ing “a Page’s pur­pose by mis­lead­ing users about the own­er­ship or con­trol of that Page.” But Face­book refused to take action. “Our inves­ti­ga­tion found that these are real pages run by real peo­ple in the U.S. and do not vio­late our poli­cies,” the com­pa­ny said.

    Months lat­er, after Face­book imple­ment­ed a new pol­i­cy around page own­er­ship, The Dai­ly Wire was forced to acknowl­edge it owned and con­trolled 13 of the 14 pages in the net­work. Face­book has still tak­en no action.

    Face­book CEO Mark Zucker­berg has a rela­tion­ship with Shapiro, who Zucker­berg has host­ed at his home. Accord­ing to a source who has spo­ken with Shapiro, Zucker­berg and Shapiro remain in direct com­mu­ni­ca­tion.

    ———–

    “The dirty secret behind Ben Shapiro’s extra­or­di­nary suc­cess on Face­book” by Judd Legum and Tes­nim Zek­e­ria; Popular.info; 06/25/2020

    “But that actu­al­ly under­states how well The Dai­ly Wire does on Face­book. While the New York Times pub­lished 15,587 arti­cles in May, and the Wash­ing­ton Post pub­lished 8,048, The Dai­ly Wire pub­lished just 1,141. On a per arti­cle basis, The Dai­ly Wire receives more dis­tri­b­u­tion than any oth­er major pub­lish­er. And it’s not close.
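
    To put numbers on “not close,” here’s the per-article arithmetic implied by the NewsWhip figures quoted above (a quick illustrative computation, not part of the original article):

        # Per-article Facebook engagement for May 2020, using the NewsWhip
        # figures quoted in the article above
        publishers = {
            "The Daily Wire":  (60_616_745, 1_141),
            "New York Times":  (60_722_727, 15_587),
            "Washington Post": (49_219_525, 8_048),
        }

        for name, (engagements, articles) in publishers.items():
            print(f"{name}: {engagements / articles:,.0f} engagements per article")

        # The Daily Wire:  ~53,126 engagements per article
        # New York Times:   ~3,896 engagements per article
        # Washington Post:  ~6,116 engagements per article

    That works out to roughly fourteen times the per-article reach of the New York Times.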

    Yes, The Daily Wire receives more distribution than any other major publisher on a per article basis, and it’s not close. Why is that? Oh right, cheating. Cheating with the help of one of the biggest peddlers of provocative right-wing click-bait trash on the internet, the Mad World News network of Corey and Christy Pepple:

    ...
    The net­work of large Face­book pages pro­mot­ing The Dai­ly Wire are all run by Corey and Christy Pep­ple, who are best known as the cre­ators of Mad World News. Face­book pages con­trolled by the Pep­ples include Mad World News (2,176,003 fol­low­ers), The New Resis­tance (2,857,876 fol­low­ers), Right Stuff (610,809 fol­low­ers), Amer­i­ca First (577,753 fol­low­ers), and Amer­i­can Patri­ot (447,799 fol­low­ers).

    The reach of these pages is mas­sive. Con­tent post­ed to these five pages has gen­er­at­ed more than 31 mil­lion engage­ments on Face­book over the last three months, accord­ing to Crowd­Tan­gle, an ana­lyt­ics ser­vice owned by Face­book. To put that in per­spec­tive, the reach of the net­work over this time peri­od exceeds the New York Times (28 mil­lion engage­ments), the Wash­ing­ton Post (20 mil­lion engage­ments), and Huff­Post (19 mil­lion engage­ments).

    How did pages like Mad World News and The New Resistance grow so big? They did it by exploiting racism, religious bigotry, and violence.

    Here is how it works. Most of the con­tent on the five pages in this net­work con­sists of links to MadWorldNews.com and TadHaps.com, two web­sites owned by the Pep­ples. These sites iden­ti­fy incen­di­ary sto­ries — that are fre­quent­ly months or years old — that prey on prej­u­dice and fear. The sites then rewrite the sto­ries with no indi­ca­tion that the sto­ry is old. This gen­er­ates a “new” link that is able to thrive in Face­book’s algo­rithm.

    For exam­ple, TadHaps.com pub­lished a sto­ry on June 19, 2020, with the head­line “Fam­i­ly Dis­plays ‘South­ern Pride’ Sign, Stranger Con­fronts Them With Gun.” The arti­cle describes how a man named Mark Wil­son was stand­ing on the side of the road with his fam­i­ly, wav­ing Con­fed­er­ate flags. Accord­ing to Wil­son, a man drove up and point­ed a gun at him and oth­er fam­i­ly mem­bers. The man then drove away with­out harm­ing any­one.

    It’s not mentioned in the TadHaps article, but the incident occurred five years ago, in 2015.

    ...

    The pur­pose of Tad­Haps is not to inform but to manip­u­late the Face­book algo­rithm by recy­cling old sto­ries that elic­it emo­tion­al reac­tions from con­ser­v­a­tives.
    ...

    It’s a sign of just how preva­lent click-bait trash tru­ly is on Face­book: in order to become one of the top pub­lish­ers, The Dai­ly Wire had to ride the coat­tails of MadWorldNews.com. And some­how The Dai­ly Wire is the only site pro­mot­ed by this net­work, a strong indi­ca­tion of a secret com­mer­cial arrange­ment:

    ...
    Why do these tox­ic Face­book pages keep shar­ing con­tent from The Dai­ly Wire? Do the Pep­ples just real­ly like Ben Shapiro’s site? The Dai­ly Wire did not respond to a request for com­ment. But the behav­ior of these pages strong­ly sug­gests that The Dai­ly Wire and Mad World News, LLC, the com­pa­ny owned by Corey and Christy Pep­ple, have a busi­ness rela­tion­ship.

    The Dai­ly Wire is the only web­site out­side of those owned by the Pep­ples that is shared by these five pages. And each of the five Face­book pages shares at least ten Dai­ly Wire links every day. Con­spic­u­ous­ly, the Face­book pages share the exact same links from The Dai­ly Wire at the exact same time.

    The pat­tern repeats over and over again, ten times or more every day. It’s behav­ior that strong­ly sug­gests that Mad World News, LLC is being paid to pro­mote con­tent from The Dai­ly Wire.
    ...
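
    It’s worth pausing on how mechanically detectable this behavior is. Given page-level posting data (the kind of thing a CrowdTangle-style export provides), flagging “same link, same time” coordination takes only a few lines of code. Here’s a minimal Python sketch of the idea; the record layout, the 60-second window, and the three-page minimum are illustrative assumptions, not Popular Information’s actual methodology:

        from collections import defaultdict
        from datetime import datetime, timedelta

        # Hypothetical posting records: (page_name, shared_url, post_time).
        # In practice these would come from a CrowdTangle-style export.
        posts = [
            ("Mad World News", "https://dailywire.com/story-a", datetime(2020, 6, 20, 14, 0, 5)),
            ("The New Resistance", "https://dailywire.com/story-a", datetime(2020, 6, 20, 14, 0, 9)),
            ("America First", "https://dailywire.com/story-a", datetime(2020, 6, 20, 14, 0, 12)),
            ("Right Stuff", "https://dailywire.com/story-b", datetime(2020, 6, 20, 18, 30, 0)),
        ]

        WINDOW = timedelta(seconds=60)  # "same time" tolerance (an assumption)

        def coordinated_shares(posts, window=WINDOW, min_pages=3):
            """Group posts by URL and flag URLs shared by >= min_pages
            distinct pages within `window` of each other."""
            by_url = defaultdict(list)
            for page, url, ts in posts:
                by_url[url].append((ts, page))
            flagged = {}
            for url, shares in by_url.items():
                shares.sort()  # order each URL's shares by timestamp
                first, last = shares[0][0], shares[-1][0]
                pages = {page for _, page in shares}
                if len(pages) >= min_pages and last - first <= window:
                    flagged[url] = sorted(pages)
            return flagged

        print(coordinated_shares(posts))
        # {'https://dailywire.com/story-a': ['America First', 'Mad World News', 'The New Resistance']}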

    And yet when faced with this evidence, Facebook dismissed it, first by suggesting that it couldn’t determine if there was a financial relationship between Mad World News LLC and The Daily Wire. But then Facebook went on to assert that its rules about “branded content” — which state that content a group was paid to post must be labeled as branded — don’t apply to paid links anyway, which is inaccurate. Facebook explicitly states that links count as branded content. So when presented with evidence of this relationship between The Daily Wire and Mad World News LLC, Facebook basically tried to lie to the journalists. Which is more or less how Facebook behaved when previously faced with evidence of The Daily Wire breaking Facebook’s rules:

    ...
    In response to an inquiry from Pop­u­lar Infor­ma­tion, a Face­book spokesper­son said it inves­ti­gat­ed the behav­ior of these pages and found no vio­la­tion of Face­book’s rules. The spokesper­son said Face­book could not deter­mine if there was a finan­cial rela­tion­ship between the pages con­trolled by Mad World News LLC and The Dai­ly Wire, and that the brand­ed con­tent pol­i­cy did not apply to post­ing links.

    ...

    The Dai­ly Wire’s appar­ent busi­ness rela­tion­ship with Mad World News isn’t the first time the site has been caught flout­ing Face­book’s rules. Last Octo­ber, Pop­u­lar Infor­ma­tion revealed a clan­des­tine net­work of 14 large Face­book pages that pur­port­ed to be inde­pen­dent but exclu­sive­ly pro­mote con­tent from The Dai­ly Wire in a coor­di­nat­ed fash­ion.

    The net­work clear­ly vio­lat­ed Face­book’s pro­hi­bi­tion on “inau­then­tic behav­ior,” which includes con­ceal­ing “a Page’s pur­pose by mis­lead­ing users about the own­er­ship or con­trol of that Page.” But Face­book refused to take action. “Our inves­ti­ga­tion found that these are real pages run by real peo­ple in the U.S. and do not vio­late our poli­cies,” the com­pa­ny said.

    Months lat­er, after Face­book imple­ment­ed a new pol­i­cy around page own­er­ship, The Dai­ly Wire was forced to acknowl­edge it owned and con­trolled 13 of the 14 pages in the net­work. Face­book has still tak­en no action.
    ...

    And, final­ly, note that if it seems like The Dai­ly Wire has been get­ting excep­tion­al treat­ment from Face­book, even by the lax stan­dards the plat­form has for con­ser­v­a­tive groups, that might have some­thing to do with Mark Zucker­berg’s per­son­al rela­tion­ship with Ben Shapiro:

    ...
    Face­book CEO Mark Zucker­berg has a rela­tion­ship with Shapiro, who Zucker­berg has host­ed at his home. Accord­ing to a source who has spo­ken with Shapiro, Zucker­berg and Shapiro remain in direct com­mu­ni­ca­tion.
    ...

    Now, here’s a Guardian piece on the recently published Cambridge University study examining how basic psychological and cognitive traits can affect your politics. Basic traits like how you take in and process information. And as the study found, the more difficulty you have perceiving and retaining information, the more likely you are to be politically conservative, with the implication being that deceptive media tactics are going to be more effective on psychologically conservative-minded individuals:

    The Guardian

    Peo­ple with extrem­ist views less able to do com­plex men­tal tasks, research sug­gests

    Cam­bridge Uni­ver­si­ty team say their find­ings could be used to spot peo­ple at risk from rad­i­cal­i­sa­tion

    Natal­ie Grover
    Sun 21 Feb 2021 19.01 EST

    Our brains hold clues for the ide­olo­gies we choose to live by, accord­ing to research, which has sug­gest­ed that peo­ple who espouse extrem­ist atti­tudes tend to per­form poor­ly on com­plex men­tal tasks.

    Researchers from the Uni­ver­si­ty of Cam­bridge sought to eval­u­ate whether cog­ni­tive dis­po­si­tion – dif­fer­ences in how infor­ma­tion is per­ceived and processed – sculpts ide­o­log­i­cal world-views such as polit­i­cal, nation­al­is­tic and dog­mat­ic beliefs, beyond the impact of tra­di­tion­al demo­graph­ic fac­tors like age, race and gen­der.

    The study, built on pre­vi­ous research, includ­ed more than 330 US-based par­tic­i­pants aged 22 to 63 who were exposed to a bat­tery of tests – 37 neu­ropsy­cho­log­i­cal tasks and 22 per­son­al­i­ty sur­veys – over the course of two weeks.

    The tasks were engi­neered to be neu­tral, not emo­tion­al or polit­i­cal – they involved, for instance, mem­o­ris­ing visu­al shapes. The researchers then used com­pu­ta­tion­al mod­el­ling to extract infor­ma­tion from that data about the participant’s per­cep­tion and learn­ing, and their abil­i­ty to engage in com­plex and strate­gic men­tal pro­cess­ing.

    Over­all, the researchers found that ide­o­log­i­cal atti­tudes mir­rored cog­ni­tive deci­sion-mak­ing, accord­ing to the study pub­lished in the jour­nal Philo­soph­i­cal Trans­ac­tions of the Roy­al Soci­ety B.

    A key find­ing was that peo­ple with extrem­ist atti­tudes tend­ed to think about the world in black and white terms, and strug­gled with com­plex tasks that required intri­cate men­tal steps, said lead author Dr Leor Zmi­grod at Cambridge’s depart­ment of psy­chol­o­gy.

    “Indi­vid­u­als or brains that strug­gle to process and plan com­plex action sequences may be more drawn to extreme ide­olo­gies, or author­i­tar­i­an ide­olo­gies that sim­pli­fy the world,” she said.

    She said anoth­er fea­ture of peo­ple with ten­den­cies towards extrem­ism appeared to be that they were not good at reg­u­lat­ing their emo­tions, mean­ing they were impul­sive and tend­ed to seek out emo­tion­al­ly evoca­tive expe­ri­ences. “And so that kind of helps us under­stand what kind of indi­vid­ual might be will­ing to go in and com­mit vio­lence against inno­cent oth­ers.”

    Par­tic­i­pants who are prone to dog­ma­tism – stuck in their ways and rel­a­tive­ly resis­tant to cred­i­ble evi­dence – actu­al­ly have a prob­lem with pro­cess­ing evi­dence even at a per­cep­tu­al lev­el, the authors found.

    “For exam­ple, when they’re asked to deter­mine whether dots [as part of a neu­ropsy­cho­log­i­cal task] are mov­ing to the left or to the right, they just took longer to process that infor­ma­tion and come to a deci­sion,” Zmi­grod said.

    In some cog­ni­tive tasks, par­tic­i­pants were asked to respond as quick­ly and as accu­rate­ly as pos­si­ble. Peo­ple who leant towards the polit­i­cal­ly con­ser­v­a­tive tend­ed to go for the slow and steady strat­e­gy, while polit­i­cal lib­er­als took a slight­ly more fast and furi­ous, less pre­cise approach.

    “It’s fas­ci­nat­ing, because con­ser­vatism is almost a syn­onym for cau­tion,” she said. “We’re see­ing that – at the very basic neu­ropsy­cho­log­i­cal lev­el – indi­vid­u­als who are polit­i­cal­ly con­ser­v­a­tive … sim­ply treat every stim­uli that they encounter with cau­tion.”

    The “psy­cho­log­i­cal sig­na­ture” for extrem­ism across the board was a blend of con­ser­v­a­tive and dog­mat­ic psy­cholo­gies, the researchers said.

    ...

    “What we found is that demo­graph­ics don’t explain a whole lot; they only explain rough­ly 8% of the vari­ance,” said Zmi­grod. “Where­as, actu­al­ly, when we incor­po­rate these cog­ni­tive and per­son­al­i­ty assess­ments as well, sud­den­ly, our capac­i­ty to explain the vari­ance of these ide­o­log­i­cal world-views jumps to 30% or 40%.”

    ————

    “Peo­ple with extrem­ist views less able to do com­plex men­tal tasks, research sug­gests” by Natal­ie Grover; The Guardian; 02/21/2021

    “A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.

    If you’re an extremist, you probably tend to view the world in black and white terms. It’s not a particularly surprising finding. But far more interesting is that if you’re an extremist you’re probably also more likely to struggle with complex tasks. If you have trouble processing reality at a fundamental level, you’re more likely to become an extremist. Again, it’s not too surprising, but it’s still a relatively new and important finding. And it’s an even more important finding if it turns out groups have been exploiting exactly these psychological vulnerabilities for years to radicalize people over Facebook and other social media platforms:

    ...
    She said anoth­er fea­ture of peo­ple with ten­den­cies towards extrem­ism appeared to be that they were not good at reg­u­lat­ing their emo­tions, mean­ing they were impul­sive and tend­ed to seek out emo­tion­al­ly evoca­tive expe­ri­ences. “And so that kind of helps us under­stand what kind of indi­vid­ual might be will­ing to go in and com­mit vio­lence against inno­cent oth­ers.”

    Par­tic­i­pants who are prone to dog­ma­tism – stuck in their ways and rel­a­tive­ly resis­tant to cred­i­ble evi­dence – actu­al­ly have a prob­lem with pro­cess­ing evi­dence even at a per­cep­tu­al lev­el, the authors found.

    “For exam­ple, when they’re asked to deter­mine whether dots [as part of a neu­ropsy­cho­log­i­cal task] are mov­ing to the left or to the right, they just took longer to process that infor­ma­tion and come to a deci­sion,” Zmi­grod said.

    In some cog­ni­tive tasks, par­tic­i­pants were asked to respond as quick­ly and as accu­rate­ly as pos­si­ble. Peo­ple who leant towards the polit­i­cal­ly con­ser­v­a­tive tend­ed to go for the slow and steady strat­e­gy, while polit­i­cal lib­er­als took a slight­ly more fast and furi­ous, less pre­cise approach.

    “It’s fas­ci­nat­ing, because con­ser­vatism is almost a syn­onym for cau­tion,” she said. “We’re see­ing that – at the very basic neu­ropsy­cho­log­i­cal lev­el – indi­vid­u­als who are polit­i­cal­ly con­ser­v­a­tive … sim­ply treat every stim­uli that they encounter with cau­tion.”

    The “psy­cho­log­i­cal sig­na­ture” for extrem­ism across the board was a blend of con­ser­v­a­tive and dog­mat­ic psy­cholo­gies, the researchers said.
    ...

    And note how they found that demo­graph­ics, like race and gen­der, were far less impor­tant in pre­dict­ing your pol­i­tics than these basic psy­cho­log­i­cal and cog­ni­tive traits. It’s the kind of find­ing that could prove to be impor­tant in all sorts of areas, but espe­cial­ly when it comes to polit­i­cal adver­tis­ing:

    ...
    “What we found is that demo­graph­ics don’t explain a whole lot; they only explain rough­ly 8% of the vari­ance,” said Zmi­grod. “Where­as, actu­al­ly, when we incor­po­rate these cog­ni­tive and per­son­al­i­ty assess­ments as well, sud­den­ly, our capac­i­ty to explain the vari­ance of these ide­o­log­i­cal world-views jumps to 30% or 40%.”
    ...

    Wel­come to the future of polit­i­cal manip­u­la­tion. After all, if psy­cho­log­i­cal traits are as pre­dic­tive of pol­i­tics as this study sug­gests, it would be fool­ish not to incor­po­rate that infor­ma­tion into your polit­i­cal mar­ket­ing prac­tices, which is part of why the devel­op­ment of mass data­bas­es of con­sumer psy­cho­log­i­cal pro­files by com­pa­nies like Face­book is so scan­dalous. Or at least should be scan­dalous. That’s why this kind of research is poten­tial­ly so sig­nif­i­cant. It’s the foun­da­tion for the next gen­er­a­tion of Cam­bridge Ana­lyt­i­ca scan­dals:

    Philosophical Transactions of the Royal Society B

    The cog­ni­tive and per­cep­tu­al cor­re­lates of ide­o­log­i­cal atti­tudes: a data-dri­ven approach

    Leor Zmi­grod, Ian W. Eisen­berg, Patrick G. Bis­sett, Trevor W. Rob­bins and Rus­sell A. Pol­drack

    Published: 22 February 2021
    https://doi.org/10.1098/rstb.2020.0424

    Abstract

    Although human exis­tence is enveloped by ide­olo­gies, remark­ably lit­tle is under­stood about the rela­tion­ships between ide­o­log­i­cal atti­tudes and psy­cho­log­i­cal traits. Even less is known about how cog­ni­tive dispositions—individual dif­fer­ences in how infor­ma­tion is per­ceived and processed— sculpt indi­vid­u­als’ ide­o­log­i­cal world­views, pro­cliv­i­ties for extrem­ist beliefs and resis­tance (or recep­tiv­i­ty) to evi­dence. Using an unprece­dent­ed num­ber of cog­ni­tive tasks (n = 37) and per­son­al­i­ty sur­veys (n = 22), along with data-dri­ven analy­ses includ­ing drift-dif­fu­sion and Bayesian mod­el­ling, we uncov­ered the spe­cif­ic psy­cho­log­i­cal sig­na­tures of polit­i­cal, nation­al­is­tic, reli­gious and dog­mat­ic beliefs. Cog­ni­tive and per­son­al­i­ty assess­ments con­sis­tent­ly out­per­formed demo­graph­ic pre­dic­tors in account­ing for indi­vid­ual dif­fer­ences in ide­o­log­i­cal pref­er­ences by 4 to 15-fold. Fur­ther­more, data-dri­ven analy­ses revealed that indi­vid­u­als’ ide­o­log­i­cal atti­tudes mir­rored their cog­ni­tive deci­sion-mak­ing strate­gies. Con­ser­vatism and nation­al­ism were relat­ed to greater cau­tion in per­cep­tu­al deci­sion-mak­ing tasks and to reduced strate­gic infor­ma­tion pro­cess­ing, while dog­ma­tism was asso­ci­at­ed with slow­er evi­dence accu­mu­la­tion and impul­sive ten­den­cies. Reli­gios­i­ty was impli­cat­ed in height­ened agree­able­ness and risk per­cep­tion. Extreme pro-group atti­tudes, includ­ing vio­lence endorse­ment against out­groups, were linked to poor­er work­ing mem­o­ry, slow­er per­cep­tu­al strate­gies, and ten­den­cies towards impul­siv­i­ty and sensation-seeking—reflecting over­laps with the psy­cho­log­i­cal pro­files of con­ser­vatism and dog­ma­tism. Cog­ni­tive and per­son­al­i­ty sig­na­tures were also gen­er­at­ed for ide­olo­gies such as author­i­tar­i­an­ism, sys­tem jus­ti­fi­ca­tion, social dom­i­nance ori­en­ta­tion, patri­o­tism and recep­tiv­i­ty to evi­dence or alter­na­tive view­points; elu­ci­dat­ing their under­pin­nings and high­light­ing avenues for future research. Togeth­er these find­ings sug­gest that ide­o­log­i­cal world­views may be reflec­tive of low-lev­el per­cep­tu­al and cog­ni­tive func­tions.

    This arti­cle is part of the theme issue ‘The polit­i­cal brain: neu­rocog­ni­tive and com­pu­ta­tion­al mech­a­nisms’.

    1. Intro­duc­tion

    One of the most powerful metaphors in political psychology has been that of elective affinities—the notion that there is a mutual attraction between ‘the structure and contents of belief systems and the underlying needs and motives of individuals and groups who subscribe to them’ [1]. With roots in Enlightenment philosophy and Max Weber’s sociology, this metaphor contends that certain ideologies resonate with the psychological predispositions of certain people. So, we can elucidate psycho-political processes by logically tracing these coherences, these elective affinities between ideas and interests. This analogy has inspired rich theories about the epistemic, relational and existential motivations that drive individuals to adhere to political ideologies (e.g. [2]), highlighting the role of needs for coherence, connectedness and certainty in structuring ideological attitudes (e.g. [3–5]).

    Nonethe­less, the method­olo­gies employed to study these ques­tions have been most­ly of a social psy­cho­log­i­cal nature, rely­ing pri­mar­i­ly on self-report mea­sures of needs for order, cog­ni­tive clo­sure, rigid­i­ty and oth­ers (e.g. [2]). This has skewed the aca­d­e­m­ic con­ver­sa­tion towards the needs and inter­ests that ide­olo­gies sat­is­fy, and obscured the role of cog­ni­tive dis­po­si­tions that can pro­mote (or sup­press) ide­o­log­i­cal think­ing [6]. In fact, it is only recent­ly that researchers have begun to employ neu­rocog­ni­tive tasks and ana­lyt­ic approach­es from cog­ni­tive sci­ence in order to tack­le the ques­tion: which cog­ni­tive traits shape an indi­vid­u­al’s ide­o­log­i­cal world­views? In this inves­ti­ga­tion, we sought to apply cog­ni­tive method­olo­gies and ana­lyt­ic tools in order to iden­ti­fy the cog­ni­tive and per­son­al­i­ty cor­re­lates of ide­o­log­i­cal atti­tudes in a data-dri­ven fash­ion. Bor­row­ing meth­ods from cog­ni­tive psy­chol­o­gy, which have estab­lished sophis­ti­cat­ed tech­niques to mea­sure and analyse per­cep­tu­al and cog­ni­tive process­es in an objec­tive and implic­it way, and imple­ment­ing these in the study of ide­ol­o­gy can facil­i­tate the con­struc­tion of a more wholis­tic and rig­or­ous cog­ni­tive sci­ence of ide­ol­o­gy. This can push the anal­o­gy of ‘elec­tive affini­ties’ into the realm of per­cep­tion and cog­ni­tion to allow us to tack­le the ques­tion: are there par­al­lels between indi­vid­u­als’ ide­olo­gies and their gen­er­al per­cep­tu­al or cog­ni­tive styles and strate­gies?

    Furthermore, owing to limited resources and siloed research disciplines, many studies in social psychology frequently focus on a single ideological domain (e.g. political conservatism) or a single psychological domain (e.g. analytical thinking). While an in-depth focus on a specific domain is essential for theoretical development, the selection of hypotheses and methodologies can at times suffer from problems of bias and a lack of conceptual integration across different ideological and psychological domains. Indeed, a growing concern has emerged among researchers that psychologists of politics, nationalism and religion generate hypotheses and develop study designs that confirm their prior beliefs about the origins of social discord [7–12]. It is, therefore, valuable to complement theory-driven research with data-driven approaches, which can help to overcome these methodological challenges, as well as offer a wholistic view of these complex relationships by ‘letting the data speak’. Perhaps most importantly, data-driven research can help validate or challenge theory-driven findings and consequently offer directions for future research.

    The present inves­ti­ga­tion, there­fore, aimed to har­ness nov­el cog­ni­tive approach­es, a data-dri­ven study design, a mix of fre­quen­tist and Bayesian ana­lyt­ic approach­es and a wide-rang­ing assess­ment of both psy­cho­log­i­cal traits and ide­o­log­i­cal domains. It was moti­vat­ed by the ques­tions: to what extent do the ide­olo­gies peo­ple espouse reflect their cog­ni­tive and per­son­al­i­ty char­ac­ter­is­tics? What are the com­mon­al­i­ties and dif­fer­ences between the psy­cho­log­i­cal under­pin­nings of diverse ide­o­log­i­cal ori­en­ta­tions? What are the con­tri­bu­tions of cog­ni­tive process­es ver­sus per­son­al­i­ty traits to the under­stand­ing of ide­olo­gies? and which psy­cho­log­i­cal traits are asso­ci­at­ed with one’s like­li­hood of being attract­ed to par­tic­u­lar ide­olo­gies?

    Importantly, although a rigorous cognitive science of ideology may be in its infancy, these questions are not entirely new—scholars across the sciences and humanities have long theorized about the psychological origins of citizens’ political, nationalistic and religious attitudes [2,13]. A fertile literature has revealed that individuals’ ideological inclinations are related to various psychological traits, such as their personal needs for order and structure [3–5], cognitive flexibility [6,14–18], metacognition and learning styles [19,20] and even perceptual reactivity to negative information [21–24]. The advent of political neuroscience [25], illustrating the neural structures and processes that underpin (political) ideology [26–32], spurs even more profound questions about the ways in which cognitive mechanisms may mediate between the brain and belief.

    Ideologies can be generally described as doctrines that rigidly prescribe epistemic and relational norms or forms of hostility [33]. The present investigation espouses a domain-general outlook towards the definition of ideology—focusing on the factors associated with thinking ideologically in multiple domains, such as politics, nationalism and religion. This includes dogmatism, which can be conceptualized as a content-free dimension of ideological thought reflecting the certainty with which ideological beliefs are held and the intolerance displayed towards alternative or opposing beliefs [34–36]. Evaluating the psychological similarities and differences between diverse ideological orientations in concert facilitates a comprehensive overview of the nature of ideological cognition. Here, we seek to map out the psychological landscape of these ideological orientations by investigating which psychological factors among those measured by a large battery of cognitive tasks and personality surveys are most predictive of an individual’s ideological inclinations. This work aims to bridge methodologies across the cognitive and political sciences, identify key foci for future research, and illustrate the use of incorporating cognitive and personality assessments when predicting ideological convictions.

    The current study builds on recent work by Eisenberg et al. [37,38], in which a large sample of participants (n = 522) completed an extensive set of 37 well-established cognitive tasks and 22 self-report surveys focused on self-regulation and personality characteristics. The process of selecting these measures from the relevant literatures was described in detail by Eisenberg et al. [37], but importantly, this was completed prior to and with no relation to the question of ideologies (figure 1). Through factor analysis, Eisenberg et al. [38] constructed data-driven ontologies of cognition and personality, identifying a 5-factor structure for the cognitive task variables and a 12-factor structure for the personality survey variables. The power of these ontologies to predict real-world health outcomes was evaluated [38]. A study of test–retest reliabilities demonstrated that the ontology factor scores possessed high stability over time [38,39] (four-month mean test–retest reliability across factors of cognitive task ontology: M = 0.82; personality survey ontology: M = 0.86; n = 150); this reliability helps to address the challenges of obtaining robust individual differences from cognitive paradigms [39–41]. In the present investigation, we successfully recruited 334 participants (49.4% female; age: M = 37.07, s.d. = 8.49, range = 22–63, all United States (US) residents) from Eisenberg et al.’s original sample [37] and administered surveys pertaining to various political, nationalistic and religious ideological beliefs, as well as dogmatism and its conceptual inverse, intellectual humility (figure 2). This allowed us to address the question: what psychological factors are most predictive of individuals’ ideological orientations?

    The 5-factor cognitive ontology was created by decomposing each of the 37 cognitive tasks into multiple dependent measures that reflected psychologically meaningful variables, such as accuracy scores (e.g. in the case of the Keep Track task that requires working memory), contrasts between different task conditions (e.g. in a task-switching task, including task-switch cost and cue-switch costs) and fitted model parameters used to capture speeded decision-making processes [38]. Wherever appropriate, performance on two-choice tasks was modelled using the drift-diffusion model (DDM), which transforms accuracy and reaction time data into interpretable latent variables including drift rate (corresponding to the average rate of evidence accumulation), threshold (corresponding to response caution in terms of speed-accuracy trade-off) and non-decision time (corresponding to the speed of perceptual stimulus processing and motor execution). This resulted in a total of 129 dependent cognitive measures, which exploratory factor analysis and model selection based on the Bayesian information criterion (BIC) reduced to five primary cognitive factors labelled according to their strongest loading variables: (i) Caution (capturing the DDM threshold parameter), (ii) Perceptual Processing Time (capturing the DDM non-decision time parameter and stop-signal reaction times associated with response inhibition processes), (iii) Speed of Evidence Accumulation (capturing the DDM drift rate parameter and other related processes), (iv) Temporal Discounting (reflecting variables associated with the ability to delay immediate gratification for a larger future reward), and (v) Strategic Information Processing (reflecting variables associated with working memory capacity, planning, cognitive flexibility and other higher-order strategies occurring at a longer time-scale than the speeded decisions modelled by the DDM). Detailed information on the nature of the ontology and its constituent elements can be found in papers by Eisenberg et al. [37–39].
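
    (As an aside: the drift-diffusion model described above is easy to make concrete in simulation. Noisy evidence accumulates at some average rate, the drift, until it crosses a decision threshold, and the observed reaction time is that accumulation time plus a fixed non-decision time. Here’s a minimal Python sketch of that generative process, with illustrative parameter values rather than the paper’s fitted estimates:)

        import random

        def simulate_ddm_trial(drift=0.3, threshold=1.0, ndt=0.3, dt=0.001, noise=1.0):
            """Simulate one two-choice trial of a drift-diffusion process.

            drift     : average rate of evidence accumulation
            threshold : evidence required before committing (response caution)
            ndt       : non-decision time, seconds (encoding + motor execution)
            """
            evidence, t = 0.0, 0.0
            while abs(evidence) < threshold:
                # Euler step: mean drift plus Gaussian noise scaled by sqrt(dt).
                evidence += drift * dt + random.gauss(0.0, noise) * (dt ** 0.5)
                t += dt
            choice = "correct" if evidence > 0 else "error"
            return choice, ndt + t  # reaction time in seconds

        random.seed(1)
        trials = [simulate_ddm_trial() for _ in range(500)]
        accuracy = sum(c == "correct" for c, _ in trials) / len(trials)
        mean_rt = sum(rt for _, rt in trials) / len(trials)
        print(f"accuracy={accuracy:.2f}, mean RT={mean_rt:.2f}s")
        # Raising `threshold` (more caution) increases both accuracy and RT.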

    The same method­ol­o­gy was applied to the 22 self-report per­son­al­i­ty sur­veys, result­ing in 64 depen­dent mea­sures that were reduced to 12 fac­tors using oblique explorato­ry fac­tor analy­sis (fig­ure 3). These per­son­al­i­ty fac­tors were asso­ci­at­ed with spe­cif­ic mea­sure­ment scales aimed at assess­ing var­i­ous psy­cho­log­i­cal con­structs, for exam­ple, Social Risk-Tak­ing and Impul­siv­i­ty. The result­ing 12 per­son­al­i­ty fac­tors were labelled based on their asso­ci­at­ed mea­sures as index­ing: (i) goal-direct­ed­ness, (ii) impul­siv­i­ty, (iii) reward sen­si­tiv­i­ty, (iv) sen­sa­tion-seek­ing, (v) emo­tion­al con­trol, (vi) agree­able­ness, (vii) eth­i­cal risk-tak­ing, (viii) risk per­cep­tion, (ix) eat­ing con­trol, (x) mind­ful­ness, (xi) finan­cial risk-tak­ing, and (xii) social risk-tak­ing. The orig­i­nal selec­tion of sur­veys and tasks was guid­ed by a focus on mea­sures intend­ed to cap­ture self-reg­u­la­tion and goal-direct­ed behav­iour [37]. Notably, per­son­al­i­ty was here broad­ly con­strued in terms of self-report­ed psy­cho­log­i­cal traits mea­sured with estab­lished sur­veys that aim to tap into sta­ble indi­vid­ual dif­fer­ences, and so per­son­al­i­ty was not defined in terms of any par­tic­u­lar mod­el of per­son­al­i­ty (e.g. the Big Five, though a mea­sure of the Big Five traits was includ­ed in the cre­ation of the sur­vey ontol­ogy, see fig­ure 3).

    By frac­tion­at­ing indi­vid­ual dif­fer­ences in psy­cho­log­i­cal traits into self-report­ed per­son­al­i­ty and behav­ioural­ly assessed cog­ni­tion, we address the diver­si­ty in assess­ment meth­ods used by social and cog­ni­tive psy­chol­o­gists to mea­sure ‘cog­ni­tive style’ [5,17]. Indeed, recent stud­ies have shown that self-report and behav­iour­al mea­sures of psy­cho­log­i­cal traits may tap into dif­fer­ent process­es [37,38,42], and that the rela­tion­ship between ide­o­log­i­cal lean­ings and cog­ni­tive style may be stronger when the lat­ter is mea­sured with self-report ques­tion­naires rather than behav­iour­al tasks [5]. A clear method­olog­i­cal dis­tinc­tion can, there­fore, illu­mi­nate the rela­tion­ships between psy­cho­log­i­cal dis­po­si­tions and ide­o­log­i­cal beliefs.

    We measured participants’ ideological inclinations across multiple domains by administering 16 established surveys of ideological orientations, which were selected for inclusion following a literature review [43] that examined constructs across social and political psychology and prioritized constructs that were theoretically influential in the field (e.g. system justification, social dominance orientation and authoritarianism [44,45]), widely used and have undergone extensive scale validation (e.g. intellectual humility [46] and the social and economic conservatism scale [47]). Decisions regarding controversial or conceptually overlapping ideological measures had to be taken on balance, and led, for example, to the assessment of authoritarianism but not right-wing authoritarianism (which has been criticized for its conflation with fundamentalism or conservatism, e.g. [48–51]).

    As depict­ed in fig­ure 1, par­tic­i­pants com­plet­ed the ide­o­log­i­cal atti­tudes bat­tery approx­i­mate­ly 25 months after the ini­tial psy­cho­log­i­cal assess­ment. The ini­tial assess­ments did not con­tain mea­sures direct­ly per­tain­ing to ide­o­log­i­cal atti­tudes. The ide­o­log­i­cal atti­tudes sur­veys includ­ed self-report­ed ques­tion­naires on nation­al­ism, patri­o­tism, social and eco­nom­ic con­ser­vatism, sys­tem jus­ti­fi­ca­tion, dog­ma­tism, open­ness to revis­ing one’s view­points and engage­ment with reli­gion (see Mate­ri­als and meth­ods; the elec­tron­ic sup­ple­men­tary mate­r­i­al tables S1 and S2 and fig­ure S1). Explorato­ry fac­tor analy­sis was con­duct­ed to reduce the dimen­sion­al­i­ty of these ide­o­log­i­cal ori­en­ta­tions, reveal­ing a 3‑factor struc­ture cor­re­spond­ing to the fol­low­ing ide­o­log­i­cal fac­tors: polit­i­cal con­ser­vatism, reli­gios­i­ty and dog­ma­tism. We used the fac­tor scores of each par­tic­i­pant from this explorato­ry fac­tor analy­sis to val­i­date and con­dense the find­ings obtained via the 16 ide­o­log­i­cal ori­en­ta­tions (see Meth­ods and mate­ri­als; elec­tron­ic sup­ple­men­tary mate­r­i­al, fig­ure S4 and table S3). For the sake of brevi­ty and clar­i­ty, the focus of the analy­sis is on these ide­o­log­i­cal fac­tor scores, but the analy­ses and data for the con­stituent ide­o­log­i­cal ori­en­ta­tions are avail­able as well in the elec­tron­ic sup­ple­men­tary mate­r­i­al.
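
    (Aside: the dimensionality-reduction step described above, 16 ideology surveys condensed to 3 factor scores, is standard exploratory factor analysis. Here’s a minimal Python sketch of the flavor of the technique on synthetic data, using scikit-learn’s FactorAnalysis with varimax rotation; the paper’s actual pipeline differs in its details, so treat this as illustration only:)

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)

        # Synthetic stand-in for 334 participants x 16 ideology survey scores,
        # generated here from 3 latent factors plus noise.
        n, n_surveys, n_factors = 334, 16, 3
        latent = rng.normal(size=(n, n_factors))
        loadings = rng.normal(size=(n_factors, n_surveys))
        surveys = latent @ loadings + rng.normal(scale=0.5, size=(n, n_surveys))

        fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
        factor_scores = fa.fit_transform(surveys)   # per-participant factor scores
        print(factor_scores.shape)                  # (334, 3)
        print(fa.components_.shape)                 # (3, 16) estimated loadings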

    A mul­ti­tude of ana­lyt­ic strate­gies were employed with the aim of rig­or­ous­ly test­ing the rela­tion­ships between cog­ni­tion, per­son­al­i­ty and ide­ol­o­gy. This involved fre­quen­tist regres­sion analy­ses and dimen­sion­al­i­ty reduc­tion, as well as Bayesian mod­el­ling and Bayesian Mod­el Aver­ag­ing in order to quan­ti­fy the evi­den­tial strength for the con­tri­bu­tion of the cog­ni­tive and per­son­al­i­ty traits. This allowed us to elu­ci­date which psy­cho­log­i­cal traits were most strong­ly tied to the diverse ide­olo­gies exam­ined, and to con­struct robust sig­na­tures and pre­dic­tive mod­els that can be used by researchers in both the cog­ni­tive and polit­i­cal sci­ences to move the field for­ward towards more informed the­o­ries of what makes a mind ide­o­log­i­cal.

    ...

    3. Results

    In order to under­stand the cog­ni­tive and per­son­al­i­ty bases of these ide­o­log­i­cal ori­en­ta­tions, we com­put­ed a series of mul­ti­ple regres­sion analy­ses on each of the 16 mea­sured ide­o­log­i­cal ori­en­ta­tions, as well as the three sum­ma­tive ide­o­log­i­cal fac­tors. Two lin­ear mul­ti­ple regres­sion analy­ses were con­duct­ed for each ide­o­log­i­cal out­come vari­able, where­by each analy­sis con­sist­ed of regres­sors asso­ci­at­ed with one of the fol­low­ing fea­ture matri­ces: (i) 5‑factor cog­ni­tive ontol­ogy, (ii) the 12-fac­tor per­son­al­i­ty ontol­ogy. We used the stan­dard­ized beta coef­fi­cients of the lin­ear regres­sion mod­els to gen­er­ate a ‘cog­ni­tive sig­na­ture’ and ‘per­son­al­i­ty sig­na­ture’ of each ide­o­log­i­cal ori­en­ta­tion. Fig­ure 4 depicts the stan­dard­ized esti­mates of the cog­ni­tive and per­son­al­i­ty ontol­ogy scores for each of the three sum­ma­tive ide­o­log­i­cal fac­tors (see the elec­tron­ic sup­ple­men­tary mate­r­i­al, fig­ures S5–S8 for the psy­cho­log­i­cal sig­na­tures of all the ide­o­log­i­cal ori­en­ta­tions).
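
    (Aside: a “signature” here is just the vector of standardized regression coefficients. One way to compute them: z-score each predictor and the outcome, fit an ordinary linear regression, and read off the coefficients. A minimal Python sketch, with synthetic data standing in for the real factor scores:)

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def standardized_betas(X, y):
            """Fit y ~ X after z-scoring everything; returns standardized betas."""
            Xz = (X - X.mean(axis=0)) / X.std(axis=0)
            yz = (y - y.mean()) / y.std()
            return LinearRegression().fit(Xz, yz).coef_

        rng = np.random.default_rng(0)
        cognition = rng.normal(size=(334, 5))   # toy 5-factor cognitive ontology
        conservatism = (0.4 * cognition[:, 0] - 0.3 * cognition[:, 4]
                        + rng.normal(size=334))  # toy outcome
        print(np.round(standardized_betas(cognition, conservatism), 2))
        # e.g. a positive beta on a "caution"-like factor and a negative beta on
        # a "strategic information processing"-like factor would mirror figure 4.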

    The results reveal both diver­si­ty and speci­fici­ty in the psy­cho­log­i­cal cor­re­lates of polit­i­cal con­ser­vatism, dog­ma­tism and reli­gios­i­ty. The polit­i­cal con­ser­vatism fac­tor, which reflects ten­den­cies towards polit­i­cal con­ser­vatism and nation­al­ism, was sig­nif­i­cant­ly asso­ci­at­ed with greater cau­tion and tem­po­ral dis­count­ing and reduced strate­gic infor­ma­tion pro­cess­ing in the cog­ni­tive domain, and by greater goal-direct­ed­ness, impul­siv­i­ty, and reward sen­si­tiv­i­ty, and reduced social risk-tak­ing in the per­son­al­i­ty domain. As an illus­tra­tion, fig­ure 5 demon­strates the cog­ni­tive cor­re­lates of all the ide­o­log­i­cal ori­en­ta­tions cap­tured by the polit­i­cal con­ser­vatism fac­tor, reveal­ing that the con­ser­v­a­tive-lean­ing polit­i­cal ide­olo­gies were con­sis­tent­ly relat­ed to greater cau­tion on speed­ed tasks and reduced strate­gic infor­ma­tion pro­cess­ing, with some vari­abil­i­ty in the role of tem­po­ral dis­count­ing, per­cep­tu­al pro­cess­ing time and speed of evi­dence accu­mu­la­tion. The dog­ma­tism fac­tor was sig­nif­i­cant­ly asso­ci­at­ed with reduced speed of evi­dence accu­mu­la­tion in the cog­ni­tive domain and by reduced social risk-tak­ing and agree­able­ness as well as height­ened impul­siv­i­ty and eth­i­cal risk-tak­ing in the per­son­al­i­ty domain. Sim­i­lar­ly to polit­i­cal con­ser­vatism, the reli­gios­i­ty fac­tor was also sig­nif­i­cant­ly asso­ci­at­ed with greater cau­tion on speed­ed tasks, and reduced strate­gic infor­ma­tion pro­cess­ing and social risk-tak­ing, but in con­trast to dog­ma­tism and polit­i­cal con­ser­vatism, reli­gios­i­ty was asso­ci­at­ed with greater agree­able­ness and risk per­cep­tion.

    Next, we inves­ti­gat­ed the rel­a­tive roles of demo­graph­ic vari­ables, self-report­ed per­son­al­i­ty and cog­ni­tion to ide­o­log­i­cal atti­tudes. As evi­dent in fig­ure 6b, for the polit­i­cal con­ser­vatism fac­tor, demo­graph­ic vari­ables alone explained 7.43% of the vari­ance, while demo­graph­ics and the psy­cho­log­i­cal vari­ables togeth­er explained 32.5% of the vari­ance (4.4‑fold increase). For the reli­gios­i­ty fac­tor and the dog­ma­tism fac­tor, demo­graph­ics explained 2.90% and 1.53% of the vari­ance, respec­tive­ly, while the com­bined mod­el explained 23.35% and 23.60% of the vari­ance, respec­tive­ly (cor­re­spond­ing to an 8‑fold and 15-fold increase, respec­tive­ly). Con­se­quent­ly, includ­ing the cog­ni­tive and per­son­al­i­ty vari­ables led to a con­sid­er­able increase in the explana­to­ry pow­er of these mod­els.
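
    (Aside: the fold increases are simply the ratio of the combined model’s explained variance to the demographics-only model’s. A quick check of the reported numbers in Python:)

        # Variance explained (R^2, %) as reported in the paper.
        variance = {
            "political conservatism": (7.43, 32.5),
            "religiosity": (2.90, 23.35),
            "dogmatism": (1.53, 23.60),
        }
        for factor, (demo_only, combined) in variance.items():
            print(f"{factor}: {combined / demo_only:.1f}-fold increase")
        # political conservatism: 4.4-fold increase
        # religiosity: 8.1-fold increase
        # dogmatism: 15.4-fold increase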

    To fur­ther exam­ine the evi­den­tial strength for the roles of demo­graph­ic vari­ables, self-report­ed per­son­al­i­ty and behav­ioural­ly assessed cog­ni­tion to the three ide­o­log­i­cal atti­tude fac­tors, we com­put­ed Bayes fac­tors, which express the rel­a­tive like­li­hood of two regres­sion mod­els giv­en the data and pri­or expec­ta­tions. To cal­cu­late Bayes fac­tors using Bayesian regres­sion, we relied on a default Bayesian approach pro­mot­ed by Wet­zels et al. [64], Roud­er & Morey [65] and Liang et al. [66], and com­pu­ta­tion­al­ly spec­i­fied in the R pack­age BayesFac­tor [67] (using the default Cauchy pri­ors). We com­put­ed Bayes fac­tors, rel­a­tive to the null hypoth­e­sis (BF10), for the regres­sion mod­els con­sist­ing of the dif­fer­ent pre­dic­tor types: (i) demo­graph­ic vari­ables (age, gen­der, edu­ca­tion­al attain­ment and income), (ii) cog­ni­tive ontol­ogy, (iii) per­son­al­i­ty ontol­ogy, (iv) the psy­cho­log­i­cal vari­ables (i.e. the cog­ni­tive and per­son­al­i­ty ontolo­gies com­bined), and (v) the com­bined demo­graph­ic and psy­cho­log­i­cal vari­ables. Final­ly, mod­els con­tain­ing the ‘best pre­dic­tors’ out of the com­bined vari­able set were built using Bayesian Mod­el Aver­ag­ing, as described below.

    As evi­dent in fig­ure 6a, there was deci­sive evi­dence for all mod­els con­sist­ing of both cog­ni­tive and per­son­al­i­ty vari­ables. The demo­graph­ics-only regres­sion mod­el was sub­stan­tial­ly more like­ly than a null mod­el giv­en the present data for the polit­i­cal con­ser­vatism fac­tor (BF10 = 78.26) but there was strong evi­dence in favour of the null mod­el for the dog­ma­tism fac­tor (BF10 = 0.01354) and the reli­gios­i­ty fac­tor (BF10 = 0.081655; fig­ure 6a). This sug­gests that demo­graph­ic vari­ables play a key role in explain­ing ide­o­log­i­cal atti­tudes in the realm of pol­i­tics, but do not explain reli­gios­i­ty or dog­ma­tism in the cur­rent dataset.

    The Bayes fac­tor analy­sis fur­ther illus­trates that there is sub­stan­tial evi­dence in favour of the role of cog­ni­tion in reli­gios­i­ty, and deci­sive evi­dence in favour of its role in polit­i­cal ide­ol­o­gy. By con­trast, there is anec­do­tal evi­dence in favour of the null hypoth­e­sis mod­el rel­a­tive to a cog­ni­tion-only mod­el in the case of dog­ma­tism, sug­gest­ing that adding cog­ni­tive fea­tures does not pro­vide added explana­to­ry pow­er over the inter­cept-only mod­el after tak­ing into account addi­tion­al mod­el com­plex­i­ty. Across all three ide­o­log­i­cal fac­tors, there is deci­sive evi­dence in the cur­rent data in favour of the role of per­son­al­i­ty vari­ables, as well as for mod­els pre­dict­ed by both per­son­al­i­ty and cog­ni­tion, and for a com­bined mod­el with all the psy­cho­log­i­cal and demo­graph­ic vari­ables. In line with past research [5], the per­son­al­i­ty sur­vey ontol­ogy was more pre­dic­tive of ide­o­log­i­cal atti­tudes than the cog­ni­tive task ontol­ogy (fig­ure 6); an effect that was more pro­nounced for dog­ma­tism and reli­gios­i­ty than polit­i­cal con­ser­vatism, high­light­ing the impor­tance of both mea­sure­ment types.

    Additionally, to evaluate the strength of the evidence for the psychological models (containing cognitive and personality regressors) relative to a model based solely on demographic variables, we also computed Bayes factors for all the regression models relative to the demographic-only model (BF1D; see electronic supplementary material, figure S9). This corroborated the findings obtained using the BF10, as the data was extremely more likely to occur under models containing only cognitive and personality variables than a demographics-only model (political conservatism factor: BF1D = 1.975 × 10^8; dogmatism factor: BF1D = 5.248 × 10^7; religiosity factor: BF1D = 3.345 × 10^5).
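
    (Aside: the paper computes its Bayes factors with the R BayesFactor package’s default Cauchy priors. A cruder but common approximation conveys the same idea using each fitted model’s BIC, BF10 ≈ exp((BIC0 - BIC1)/2). A minimal Python sketch under that assumption, on synthetic data:)

        import numpy as np
        import statsmodels.api as sm

        def bic_bayes_factor(y, X1, X0=None):
            """Approximate the Bayes factor for model 1 over model 0 via
            exp((BIC0 - BIC1) / 2). X0=None means an intercept-only null.
            BIC approximation only -- not the Cauchy-prior Bayes factors
            actually used in the paper."""
            bic1 = sm.OLS(y, sm.add_constant(X1)).fit().bic
            design0 = np.ones((len(y), 1)) if X0 is None else sm.add_constant(X0)
            bic0 = sm.OLS(y, design0).fit().bic
            return np.exp((bic0 - bic1) / 2.0)

        rng = np.random.default_rng(0)
        demographics = rng.normal(size=(334, 4))    # toy, unrelated to outcome
        psychology = rng.normal(size=(334, 17))     # toy cognitive + personality
        y = psychology @ rng.normal(scale=0.2, size=17) + rng.normal(size=334)

        print(bic_bayes_factor(y, psychology))                # ~BF10: psych vs null
        print(bic_bayes_factor(y, psychology, demographics))  # ~BF1D: psych vs demo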

    To assess the pre­dic­tive pow­er of these vari­ables, we per­formed an out-of-sam­ple pre­dic­tion using 10-fold cross-val­i­da­tion with L2-reg­u­lar­ized lin­ear regres­sion to pre­dict par­tic­i­pants’ ide­o­log­i­cal ori­en­ta­tions and ide­o­log­i­cal fac­tor scores using the cog­ni­tive and per­son­al­i­ty ontolo­gies. This con­trasts with nor­mal in-sam­ple lin­ear regres­sion, which involves iden­ti­cal mod­els but which are fit on the whole dataset and then fit to the same dataset, rather than to a dif­fer­ent dataset or a sub­set of the data. Con­duct­ing out-of-sam­ple cross-val­i­da­tion thus helps avoid prob­lems of over­fit­ting and is a more gen­uine mea­sure­ment of ‘pre­dic­tion’ than stan­dard regres­sion meth­ods (e.g. [68]). As evi­dent in elec­tron­ic sup­ple­men­tary mate­r­i­al, fig­ure S10, the cross-val­i­dat­ed find­ings were con­sis­tent with the in-sam­ple lin­ear mul­ti­ple regres­sion find­ings; the cog­ni­tive and per­son­al­i­ty ontolo­gies were sig­nif­i­cant­ly pre­dic­tive of par­tic­i­pants’ ide­o­log­i­cal atti­tudes.
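
    (Aside: the out-of-sample step described above corresponds to something like the following scikit-learn sketch, with synthetic factor scores standing in for the real ontologies:)

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # 5 cognitive + 12 personality factor scores per participant (synthetic).
        features = rng.normal(size=(334, 17))
        ideology = features @ rng.normal(scale=0.2, size=17) + rng.normal(size=334)

        # L2-regularized linear regression, scored out-of-sample with 10-fold CV.
        r2_scores = cross_val_score(Ridge(alpha=1.0), features, ideology,
                                    cv=10, scoring="r2")
        print(f"mean out-of-sample R^2: {r2_scores.mean():.2f}")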

    We further sought to identify the ‘best’ model for each of the three ideological factors using a Bayesian Model Averaging approach (implemented in the bic.glm function in the bma R package [69]) for all possible linear additive models using the cognitive task variables, personality survey variables and demographic variables (age, gender, educational attainment and income) as regressors. The bic.glm function fits generalized linear models with the ‘leaps and bounds’ algorithm and the BIC approximation to Bayes factors [69]. In Bayesian Model Averaging, inference about each variable is based on the averaging of posterior distributions of all considered models—rather than a single selected model—given the present data (see the electronic supplementary material, figure S11 for all included models in the Bayesian Model Averaging). We used a Gaussian error distribution and defined selected variables as having a posterior probability above 75% in line with past guidelines [63,70]. For each of the three ideological factors, we then obtained the Bayes factors for the regression model composed of these selected variables. This approach excludes unnecessary predictors and allows us to generate the Bayesian regression that exhibits the best combination of fit and parsimony. As depicted in figures 6 and 7, each ideological factor was best predicted by a different set of variables, all of which were consistent with the results of the standardized estimates from the multiple linear regression (figure 4). These ‘best’ models all possessed the highest level of evidential strength relative to an intercept-only null model (BF10) and relative to a demographics-only (BF1D) model (Political Conservatism: BF10 = 1.428 × 10^13, BF1D = 1.825 × 10^11; Dogmatism: BF10 = 1.877 × 10^9, BF1D = 1.386 × 10^11; Religiosity: BF10 = 1.049 × 10^8, BF1D = 1.285 × 10^9).
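
    (Aside: the core idea of bic.glm-style Bayesian Model Averaging, enumerate candidate models, weight each by exp(-BIC/2), and average over them, can be sketched in Python for a small predictor set. The exhaustive enumeration below is the toy version of what the leaps-and-bounds algorithm does efficiently; variable names are illustrative:)

        import itertools
        import numpy as np
        import statsmodels.api as sm

        def bma_inclusion_probs(y, X, names):
            """Posterior inclusion probability of each predictor via
            BIC-approximated Bayesian Model Averaging (exhaustive search)."""
            k = X.shape[1]
            bics, included = [], []
            for r in range(k + 1):
                for subset in itertools.combinations(range(k), r):
                    design = (sm.add_constant(X[:, list(subset)]) if subset
                              else np.ones((len(y), 1)))
                    bics.append(sm.OLS(y, design).fit().bic)
                    included.append(set(subset))
            bics = np.array(bics)
            # Model weight ~ exp(-BIC/2); subtract the min BIC for stability.
            weights = np.exp(-(bics - bics.min()) / 2.0)
            weights /= weights.sum()
            return {name: float(sum(w for w, s in zip(weights, included) if j in s))
                    for j, name in enumerate(names)}

        rng = np.random.default_rng(0)
        X = rng.normal(size=(334, 4))
        y = 0.5 * X[:, 0] - 0.4 * X[:, 2] + rng.normal(size=334)
        print(bma_inclusion_probs(y, X, ["caution", "discounting",
                                         "strategic", "social_risk"]))
        # Predictors with posterior probability above ~0.75 count as "selected",
        # mirroring the 75% threshold used in the paper.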

    4. Dis­cus­sion

    While the field of polit­i­cal psy­chol­o­gy has expand­ed and flour­ished over the past two decades, to the best of our knowl­edge there has been no data-dri­ven and well-pow­ered analy­sis of the con­tri­bu­tion of a large set of psy­cho­log­i­cal traits to a wide array of ide­o­log­i­cal beliefs. By admin­is­ter­ing an unprece­dent­ed num­ber of cog­ni­tive tasks and per­son­al­i­ty sur­veys and employ­ing a data-dri­ven men­tal ontol­ogy [37,38], we were able to eval­u­ate the rela­tion­ships between indi­vid­u­als’ cog­ni­tion and per­son­al­i­ty and their ide­o­log­i­cal incli­na­tions. This data-dri­ven approach revealed strik­ing par­al­lels between indi­vid­u­als’ low-lev­el cog­ni­tive dis­po­si­tions and their high-lev­el polit­i­cal, social and dog­mat­ic atti­tudes.

    The exam­i­na­tion of a range of ide­o­log­i­cal atti­tudes per­tain­ing to pol­i­tics, nation­al­ism, reli­gion and dog­ma­tism exposed remark­able sim­i­lar­i­ties and dif­fer­ences between the psy­cho­log­i­cal cor­re­lates of diverse ide­o­log­i­cal ori­en­ta­tions, demon­strat­ing that there may be core psy­cho­log­i­cal under­pin­nings of ide­o­log­i­cal think­ing across domains (such as the con­sis­tent roles of strate­gic infor­ma­tion pro­cess­ing and social risk-tak­ing; fig­ures 4, 5 and 7, and elec­tron­ic sup­ple­men­tary mate­r­i­al, fig­ures S5–S8) as well as speci­fici­ty that depends on the con­tent of the ide­o­log­i­cal domain (such as the dif­fer­ing con­tri­bu­tions of cau­tion, evi­dence accu­mu­la­tion rate, impul­siv­i­ty and agree­able­ness). Bayesian analy­sis high­light­ed that the most par­si­mo­nious and pre­dic­tive mod­els of polit­i­cal con­ser­vatism include both behav­ioural­ly assessed cog­ni­tive vari­ables and self-report­ed per­son­al­i­ty vari­ables (fig­ures 4, 6 and 7), sug­gest­ing that both mea­sure­ment types are valu­able for pre­dict­ing ide­o­log­i­cal behav­iour and should be treat­ed as com­ple­men­tary sources of explained vari­ance.

    Dogmatic participants were slower to accumulate evidence in speeded decision-making tasks but were also more impulsive and willing to take ethical risks (figure 4 and electronic supplementary material, figure S6). This combination of traits—impulsivity in conjunction with slow and impaired accumulation of evidence from the decision environment—may result in the dogmatic tendency to discard evidence prematurely and to resist belief updating in light of new information. This psychological signature is novel and should inspire further research on the effect of dogmatism on perceptual decision-making processes. It is noteworthy that impulsivity differs here from caution (implicated in political conservatism and religiosity) in terms of measurement method (self-report survey versus behavioural task) and its relationship to self-control: caution here is operationalized as a trade-off between speed and accuracy under conditions where both are emphasized and so is under the influence of some strategic control, whereas impulsivity can be conceptualized as a deficit in inhibitory control rather than a strategic trade-off [71]. Consequently, dogmatic individuals may possess reduced inhibition that could be compounded by slower information uptake, leading to impulsive decisions based on imperfectly processed evidence. There has been remarkably little contemporary research on the cognitive basis of dogmatism, with a few exceptions [17–19,72,73], and so we hope these findings will stimulate further in-depth research on the perceptual underpinnings of dogmatic thinking styles.

    Political conservatism was best explained by reduced strategic information processing, heightened response caution in perceptual decision-making paradigms, and an aversion to social risk-taking (figures 4, 5 and 7). These three predictors were consistently implicated in the general political conservatism factor (figure 4), as well as the specific political-ideological orientations studied, such as nationalism, authoritarianism and social conservatism (figure 5 and electronic supplementary material, figure S5). These data-driven findings are remarkably congruent with existing theoretical and empirical accounts within political psychology and also add important insights. Firstly, the finding that political and nationalistic conservatism is associated with reduced strategic information processing (reflecting variables associated with working memory capacity, planning, cognitive flexibility and other higher-order strategies) is consistent with a large body of literature [2,5] indicating that right-wing ideologies are frequently associated with reduced analytical thinking [74,75] and cognitive flexibility [6,15,17]. Additionally, conservative political ideology was characterized by a diminished tendency to take social risks (figure 4 and electronic supplementary material, figure S4) such as disagreeing with authority, starting a new career mid-life and speaking publicly about a controversial topic. This corroborates research showing that political conservatives tend to emphasize values of conformity, ingroup loyalty and traditionalism [76–80]. These empirical consistencies between the current data-driven findings and past theory-motivated research endow the present line of work with further credibility.

    A polit­i­cal­ly con­ser­v­a­tive out­look was asso­ci­at­ed with greater cau­tion in ide­o­log­i­cal­ly neu­tral speed­ed deci­sion-mak­ing tasks, as oper­a­tional­ized in terms of the DDM para­me­ter for the amount of evi­dence required before com­mit­ting to a deci­sion. Specif­i­cal­ly, the cau­tion with which indi­vid­u­als process and respond to polit­i­cal­ly neu­tral infor­ma­tion was relat­ed to the con­ser­vatism with which they eval­u­ate socio-polit­i­cal infor­ma­tion (fig­ures 4 and 5). It, there­fore, appears that cau­tion may be a time-scale inde­pen­dent deci­sion strat­e­gy: indi­vid­u­als who are polit­i­cal­ly con­ser­v­a­tive may be per­cep­tu­al­ly cau­tious as well. This find­ing sup­ports the idea of ‘elec­tive affini­ties’ [1] between cog­ni­tive dis­po­si­tions and ide­o­log­i­cal incli­na­tions and is com­pat­i­ble with the per­spec­tive that polit­i­cal con­ser­vatism is asso­ci­at­ed with height­ened moti­va­tions to sat­is­fy dis­po­si­tion­al needs for cer­tain­ty and secu­ri­ty [2,3,81,82]. Nonethe­less, to the best of our knowl­edge, ide­o­log­i­cal atti­tudes have nev­er before been inves­ti­gat­ed in rela­tion to cau­tion as mea­sured with cog­ni­tive tasks and drift-dif­fu­sion para­me­ters. The present results, there­fore, offer a nov­el addi­tion to this lit­er­a­ture by sug­gest­ing that polit­i­cal con­ser­vatism may be a man­i­fes­ta­tion of a cau­tious strat­e­gy in pro­cess­ing and respond­ing to infor­ma­tion that is both time-invari­ant and ide­o­log­i­cal­ly neu­tral, and can be man­i­fest even in rapid per­cep­tu­al deci­sion-mak­ing process­es. This is rel­e­vant to the wealth of nov­el research on the role of uncer­tain­ty in the neur­al under­pin­nings of polit­i­cal process­es [26,27,31,83].

    The findings reveal further unexplored dynamics by highlighting that ideological orientations which have been widely studied and debated in political psychology exhibit both uniformity and variability in their cognitive and personality predictors. For example, although social and economic conservatism possessed many overlapping correlates (such as heightened goal-directedness and caution; figure 5 and electronic supplementary material, figure S5), economic conservatism was associated with enhanced sensation-seeking, whereas social conservatism was not, and in turn, social conservatism was related to heightened agreeableness and risk perception, while economic conservatism was not (electronic supplementary material, figure S5). This bears on recent debates regarding the need to fractionate conservatism into its social and economic components in order to effectively and comprehensively understand its psychological underpinnings [17,43,84–87], and highlights sensation-seeking and risk perception as potential candidates for future study. The results can also help to disambiguate past debates about the conceptual overlaps between ideological orientations such as social dominance orientation, system justification and authoritarianism [44] and their differential predictive power in relation to real-world outcomes such as prejudice [88–90] and policy attitudes [91]. Here, we found that each of these ideologies exhibited a different cognitive and personality signature.

    The psychological signature of religiosity consisted of heightened caution and reduced strategic information processing in the cognitive domain (similarly to conservatism), and enhanced agreeableness, risk perception and aversion to social risk-taking, in the personality domain (figure 4 and electronic supplementary material, figure S6). The finding that religious participants exhibited elevated caution and risk perception is particularly informative to researchers investigating the theory that threat, risk and disgust sensitivity are linked to moral and religious convictions [92–97], and that these cognitive and emotional biases may have played a role in the cultural origins of large-scale organized religions [98,99]. The results support the notion that experiencing risks as more salient and probable may facilitate devotion to religious ideologies that offer explanations of these risks (by supernatural accounts) and ways to mitigate them (via religious devotion and communities).

    The present data-driven analysis reveals the ways in which perceptual decision-making strategies can percolate into high-level ideological beliefs, suggesting that a dissection of the cognitive anatomy of ideologies is a productive and illuminating endeavour. It elucidates both the cognitive vulnerabilities to toxic ideologies as well as the traits that make individuals more intellectually humble, receptive to evidence and ultimately resilient to extremist rhetoric. Interestingly, the psychological profile of individuals who endorsed extreme pro-group actions, such as ideologically motivated violence against outgroups, was a mix of the political conservatism signature and the dogmatism signature (figure 5 and electronic supplementary material, figure S5). This may offer key insights for nuanced educational programmes aimed at fostering humility and social understanding [100].

    By adopting research practices such as relying on comprehensive measurement approaches, integrating assessment methods from cognitive and social psychology, using both frequentist and Bayesian analytic techniques, and temporally separating the collection of psychological and ideological data, the current investigation was able to overcome many methodological concerns in social and political psychology regarding biased hypothesis generation and reproducibility [8]. The convergence between these data-driven results and past theory-driven research helps to validate existing findings and to highlight the degree to which human ideological inclinations are rooted in cognitive dispositions. Moreover, this data-driven approach generated notable novel insights that will help guide future research, such as the role of evidence accumulation rates and impulsivity in dogmatism, or the manifest relationship between political conservatism and cognitive caution in speeded perceptual decisions (figures 4 and 5). These findings underscore the fruitfulness of examining the relationships between high-level ideological attitudes and low-level cognitive processes, and suggest that ideological beliefs are amenable to careful cognitive and computational analysis [20,101]. Additionally, the results support predictive models of ideological orientations that incorporate cognitive and personality factors (figures 4, 6 and 7), carving the way for more interdisciplinary dialogue in terms of psychological methodology.

    Future cumulative research will need to elucidate the question of causality and translate these findings to more diverse and representative samples [102] that address the role of context in these relationships [103,104]. Recent accounts suggest that not only do psychological processes underlie ideological attitudes, attitudes also guide behaviour and decision-making across domains in ways that can shape perception, cognition and personality [6,33,105]. A holistic, domain-general approach to the relationship between ideology and cognition can, therefore, offer a valuable foundation for research on the psychological roots of intergroup attitudes, xenophobia and ideological extremism—illustrating the myriad ways in which subtle variations in mental processes can predispose individuals to ideological worldviews.

    ...

    ————

    “The cognitive and perceptual correlates of ideological attitudes: a data-driven approach” by Leor Zmigrod, Ian W. Eisenberg, Patrick G. Bissett, Trevor W. Robbins and Russell A. Poldrack; Philosophical Transactions of the Royal Society B; 02/22/2021

    “Ideologies can be generally described as doctrines that rigidly prescribe epistemic and relational norms or forms of hostility [33]. The present investigation espouses a domain-general outlook towards the definition of ideology—focusing on the factors associated with thinking ideologically in multiple domains, such as politics, nationalism and religion. This includes dogmatism, which can be conceptualized as a content-free dimension of ideological thought reflecting the certainty with which ideological beliefs are held and the intolerance displayed towards alternative or opposing beliefs [34–36]. Evaluating the psychological similarities and differences between diverse ideological orientations in concert facilitates a comprehensive overview of the nature of ideological cognition. Here, we seek to map out the psychological landscape of these ideological orientations by investigating which psychological factors among those measured by a large battery of cognitive tasks and personality surveys are most predictive of an individual’s ideological inclinations. This work aims to bridge methodologies across the cognitive and political sciences, identify key foci for future research, and illustrate the use of incorporating cognitive and personality assessments when predicting ideological convictions.

    A map of the socio-political psychological landscape. Just like the work that went into the Cambridge Analytica scandal, except in that case it was all about leveraging Facebook profile information. This study appears to be far more in-depth and general, which means it’s the kind of research that will be the foundation for future Cambridge Analytica-style scandals. And what they found was that when you distilled the psychological traits they measured down to three dimensions — political conservatism, dogmatism, and religiosity — there was a remarkably strong correlation between those psychological traits and fundamental cognitive traits like the caution exhibited in the speeded cognitive tests. Based on these findings, there really does appear to be a significant correlation between politics and these fundamental dimensions of personality.

    Again, these findings aren’t super surprising. These associations between politics and fundamental cognitive processes have long been suspected. But this is the kind of research that allows people to translate those suspicions into actionable strategies of mass manipulation, as the Cambridge Analytica scandal made clear. That’s part of why this research is so important. It’s the kind of research that will enable more sophisticated Cambridge Analytica-style manipulation campaigns in the future, but it’s also the kind of research that can help us detect those campaigns:

    ...
    The results reveal both diver­si­ty and speci­fici­ty in the psy­cho­log­i­cal cor­re­lates of polit­i­cal con­ser­vatism, dog­ma­tism and reli­gios­i­ty. The polit­i­cal con­ser­vatism fac­tor, which reflects ten­den­cies towards polit­i­cal con­ser­vatism and nation­al­ism, was sig­nif­i­cant­ly asso­ci­at­ed with greater cau­tion and tem­po­ral dis­count­ing and reduced strate­gic infor­ma­tion pro­cess­ing in the cog­ni­tive domain, and by greater goal-direct­ed­ness, impul­siv­i­ty, and reward sen­si­tiv­i­ty, and reduced social risk-tak­ing in the per­son­al­i­ty domain. As an illus­tra­tion, fig­ure 5 demon­strates the cog­ni­tive cor­re­lates of all the ide­o­log­i­cal ori­en­ta­tions cap­tured by the polit­i­cal con­ser­vatism fac­tor, reveal­ing that the con­ser­v­a­tive-lean­ing polit­i­cal ide­olo­gies were con­sis­tent­ly relat­ed to greater cau­tion on speed­ed tasks and reduced strate­gic infor­ma­tion pro­cess­ing, with some vari­abil­i­ty in the role of tem­po­ral dis­count­ing, per­cep­tu­al pro­cess­ing time and speed of evi­dence accu­mu­la­tion. The dog­ma­tism fac­tor was sig­nif­i­cant­ly asso­ci­at­ed with reduced speed of evi­dence accu­mu­la­tion in the cog­ni­tive domain and by reduced social risk-tak­ing and agree­able­ness as well as height­ened impul­siv­i­ty and eth­i­cal risk-tak­ing in the per­son­al­i­ty domain. Sim­i­lar­ly to polit­i­cal con­ser­vatism, the reli­gios­i­ty fac­tor was also sig­nif­i­cant­ly asso­ci­at­ed with greater cau­tion on speed­ed tasks, and reduced strate­gic infor­ma­tion pro­cess­ing and social risk-tak­ing, but in con­trast to dog­ma­tism and polit­i­cal con­ser­vatism, reli­gios­i­ty was asso­ci­at­ed with greater agree­able­ness and risk per­cep­tion.

    ...

    The exam­i­na­tion of a range of ide­o­log­i­cal atti­tudes per­tain­ing to pol­i­tics, nation­al­ism, reli­gion and dog­ma­tism exposed remark­able sim­i­lar­i­ties and dif­fer­ences between the psy­cho­log­i­cal cor­re­lates of diverse ide­o­log­i­cal ori­en­ta­tions, demon­strat­ing that there may be core psy­cho­log­i­cal under­pin­nings of ide­o­log­i­cal think­ing across domains (such as the con­sis­tent roles of strate­gic infor­ma­tion pro­cess­ing and social risk-tak­ing; fig­ures 4, 5 and 7, and elec­tron­ic sup­ple­men­tary mate­r­i­al, fig­ures S5–S8) as well as speci­fici­ty that depends on the con­tent of the ide­o­log­i­cal domain (such as the dif­fer­ing con­tri­bu­tions of cau­tion, evi­dence accu­mu­la­tion rate, impul­siv­i­ty and agree­able­ness). Bayesian analy­sis high­light­ed that the most par­si­mo­nious and pre­dic­tive mod­els of polit­i­cal con­ser­vatism include both behav­ioural­ly assessed cog­ni­tive vari­ables and self-report­ed per­son­al­i­ty vari­ables (fig­ures 4, 6 and 7), sug­gest­ing that both mea­sure­ment types are valu­able for pre­dict­ing ide­o­log­i­cal behav­iour and should be treat­ed as com­ple­men­tary sources of explained vari­ance.

    Dogmatic participants were slower to accumulate evidence in speeded decision-making tasks but were also more impulsive and willing to take ethical risks (figure 4 and electronic supplementary material, figure S6). This combination of traits—impulsivity in conjunction with slow and impaired accumulation of evidence from the decision environment—may result in the dogmatic tendency to discard evidence prematurely and to resist belief updating in light of new information. This psychological signature is novel and should inspire further research on the effect of dogmatism on perceptual decision-making processes. It is noteworthy that impulsivity differs here from caution (implicated in political conservatism and religiosity) in terms of measurement method (self-report survey versus behavioural task) and its relationship to self-control: caution here is operationalized as a trade-off between speed and accuracy under conditions where both are emphasized and so is under the influence of some strategic control, whereas impulsivity can be conceptualized as a deficit in inhibitory control rather than a strategic trade-off [71]. Consequently, dogmatic individuals may possess reduced inhibition that could be compounded by slower information uptake, leading to impulsive decisions based on imperfectly processed evidence. There has been remarkably little contemporary research on the cognitive basis of dogmatism, with a few exceptions [17–19,72,73], and so we hope these findings will stimulate further in-depth research on the perceptual underpinnings of dogmatic thinking styles.

    Political conservatism was best explained by reduced strategic information processing, heightened response caution in perceptual decision-making paradigms, and an aversion to social risk-taking (figures 4, 5 and 7). These three predictors were consistently implicated in the general political conservatism factor (figure 4), as well as the specific political-ideological orientations studied, such as nationalism, authoritarianism and social conservatism (figure 5 and electronic supplementary material, figure S5). These data-driven findings are remarkably congruent with existing theoretical and empirical accounts within political psychology and also add important insights. Firstly, the finding that political and nationalistic conservatism is associated with reduced strategic information processing (reflecting variables associated with working memory capacity, planning, cognitive flexibility and other higher-order strategies) is consistent with a large body of literature [2,5] indicating that right-wing ideologies are frequently associated with reduced analytical thinking [74,75] and cognitive flexibility [6,15,17]. Additionally, conservative political ideology was characterized by a diminished tendency to take social risks (figure 4 and electronic supplementary material, figure S4) such as disagreeing with authority, starting a new career mid-life and speaking publicly about a controversial topic. This corroborates research showing that political conservatives tend to emphasize values of conformity, ingroup loyalty and traditionalism [76–80]. These empirical consistencies between the current data-driven findings and past theory-motivated research endow the present line of work with further credibility.

    A polit­i­cal­ly con­ser­v­a­tive out­look was asso­ci­at­ed with greater cau­tion in ide­o­log­i­cal­ly neu­tral speed­ed deci­sion-mak­ing tasks, as oper­a­tional­ized in terms of the DDM para­me­ter for the amount of evi­dence required before com­mit­ting to a deci­sion. Specif­i­cal­ly, the cau­tion with which indi­vid­u­als process and respond to polit­i­cal­ly neu­tral infor­ma­tion was relat­ed to the con­ser­vatism with which they eval­u­ate socio-polit­i­cal infor­ma­tion (fig­ures 4 and 5). It, there­fore, appears that cau­tion may be a time-scale inde­pen­dent deci­sion strat­e­gy: indi­vid­u­als who are polit­i­cal­ly con­ser­v­a­tive may be per­cep­tu­al­ly cau­tious as well. This find­ing sup­ports the idea of ‘elec­tive affini­ties’ [1] between cog­ni­tive dis­po­si­tions and ide­o­log­i­cal incli­na­tions and is com­pat­i­ble with the per­spec­tive that polit­i­cal con­ser­vatism is asso­ci­at­ed with height­ened moti­va­tions to sat­is­fy dis­po­si­tion­al needs for cer­tain­ty and secu­ri­ty [2,3,81,82]. Nonethe­less, to the best of our knowl­edge, ide­o­log­i­cal atti­tudes have nev­er before been inves­ti­gat­ed in rela­tion to cau­tion as mea­sured with cog­ni­tive tasks and drift-dif­fu­sion para­me­ters. The present results, there­fore, offer a nov­el addi­tion to this lit­er­a­ture by sug­gest­ing that polit­i­cal con­ser­vatism may be a man­i­fes­ta­tion of a cau­tious strat­e­gy in pro­cess­ing and respond­ing to infor­ma­tion that is both time-invari­ant and ide­o­log­i­cal­ly neu­tral, and can be man­i­fest even in rapid per­cep­tu­al deci­sion-mak­ing process­es. This is rel­e­vant to the wealth of nov­el research on the role of uncer­tain­ty in the neur­al under­pin­nings of polit­i­cal process­es [26,27,31,83].
    ...

    And note the interesting finding distinguishing social conservatives (those scoring high on both political conservatism and religiosity) from economic conservatives (those scoring high on political conservatism but not religiosity): the social conservatives tend to have a heightened agreeableness and risk perception that the economic conservatives (i.e. libertarians) lacked, while the economic conservatives had an enhanced sensation-seeking that the social conservatives lacked. It’s the kind of fundamental psychological divide that represents the kind of wedge potential that could shape the future of politics. Which, again, is why this kind of research is so important: this is the kind of knowledge the future of political campaigning is going to be based on:

    ...
    The findings reveal further unexplored dynamics by highlighting that ideological orientations which have been widely studied and debated in political psychology exhibit both uniformity and variability in their cognitive and personality predictors. For example, although social and economic conservatism possessed many overlapping correlates (such as heightened goal-directedness and caution; figure 5 and electronic supplementary material, figure S5), economic conservatism was associated with enhanced sensation-seeking, whereas social conservatism was not, and in turn, social conservatism was related to heightened agreeableness and risk perception, while economic conservatism was not (electronic supplementary material, figure S5). This bears on recent debates regarding the need to fractionate conservatism into its social and economic components in order to effectively and comprehensively understand its psychological underpinnings [17,43,84–87], and highlights sensation-seeking and risk perception as potential candidates for future study. The results can also help to disambiguate past debates about the conceptual overlaps between ideological orientations such as social dominance orientation, system justification and authoritarianism [44] and their differential predictive power in relation to real-world outcomes such as prejudice [88–90] and policy attitudes [91]. Here, we found that each of these ideologies exhibited a different cognitive and personality signature.

    The psychological signature of religiosity consisted of heightened caution and reduced strategic information processing in the cognitive domain (similarly to conservatism), and enhanced agreeableness, risk perception and aversion to social risk-taking, in the personality domain (figure 4 and electronic supplementary material, figure S6). The finding that religious participants exhibited elevated caution and risk perception is particularly informative to researchers investigating the theory that threat, risk and disgust sensitivity are linked to moral and religious convictions [92–97], and that these cognitive and emotional biases may have played a role in the cultural origins of large-scale organized religions [98,99]. The results support the notion that experiencing risks as more salient and probable may facilitate devotion to religious ideologies that offer explanations of these risks (by supernatural accounts) and ways to mitigate them (via religious devotion and communities).
    ...

    Of course, future political machinations are only going to rely on this kind of research if it pans out and can reliably produce the desired results. Which remains an open question. But the question of whether or not political campaigns can successfully persuade voters based on psychological profiles isn’t going to remain open forever. The longer Facebook keeps itself ripe for crass psycho-political manipulation, the more empirical data we’re going to have on whether or not this stuff works. It’s never been entirely clear if the Cambridge Analytica scandal actually succeeded in ultimately changing voter attitudes. But we’ll find out sooner or later, thanks to the actions of Facebook, or the lack of actions. Typically, actions by Mark Zuckerberg or Joel Kaplan that result in a lack of action by the rules enforcement division of the company. As long as that general pattern of malign, activist neglect continues at Facebook, the world is going to find out if this stuff actually works. Sooner rather than later. Or at least the political strategists of the future will find out if it actually works. The people being manipulated by it will presumably remain in the dark, filled with the kind of artificially high levels of emotional angst and confusion that could make them vote for the Trumps of the future.

    So it turns out the future might suck much worse than necessary, but at least some of the biggest victims of that sucky future — the psychologically conservative voters who are the most vulnerable to cutting-edge psychological manipulation campaigns and are effectively tricked into voting against their best interests — won’t necessarily be fully aware of the mass suckiness, because they’ll be too distracted by the cutting-edge propaganda.

    Finally, note that one of the implications of this new Cambridge University research is that trashy right-wing sites like Mad World News that recycle emotionally charged content intended to play on people’s fears and bigotries could arguably be characterized as cutting-edge propaganda and the future of politics. In other words, we should fully expect to see more of it. Much more. Because it works on a visceral level. Mad World News is a prophetic name. A self-fulfilling prophetic name. Facebook really is driving the world mad with recycled bigoted trash, and that trash actually moves people and keeps them emotionally engaged, which is why Mad World News is such a powerhouse. Mad World News and Cambridge Analytica are the future of right-wing politics. At least that’s what recent research out of Cambridge University’s Psychology Department suggests. Have fun pondering that.

    Posted by Pterrafractyl | March 7, 2021, 10:47 pm
  43. What did Mark Zuckerberg know and when did he know it? Those are the questions posed by lawsuits made public last week, waged by groups of Facebook shareholders angered over what they characterize as a wildly expensive corporate bribe paid by Facebook to the US Federal Trade Commission (FTC) in the form of a $5 billion settlement over the Cambridge Analytica scandal. According to the lawsuits, the FTC said in court that Facebook’s fine would have been closer to $106 million, but the company agreed to pay $5 billion as a kind of quid pro quo to avoid having Zuckerberg or Sheryl Sandberg deposed and to shield Zuckerberg from any personal liability. Yes, Facebook apparently got to overpay the FTC in order to allow Zuckerberg to avoid not just liability but even being deposed. In February 2019, the FTC sent Facebook’s lawyers a draft complaint that named both the company and Zuckerberg personally as a defendant, according to the suit, so it would appear the large settlement wasn’t preemptive but in response to the direct threat to Zuckerberg’s personal reputation.

    The suit also alleges Zuckerberg and Sandberg both declined to be interviewed in relation to a previous FTC investigation that resulted in a 2012 settlement. In that case, PricewaterhouseCoopers was hired to audit Facebook’s privacy compliance as part of the 2012 FTC settlement. Instead of having its top executives interviewed, the company allowed other managers to provide untrue statements about the company’s practices.

    It’s quite an explo­sive array of charges, if true. For starters, is over­pay­ing fines in order to excul­pate CEOs from lia­bil­i­ty or depo­si­tions even an option? Does that hap­pen? Because if so, that’s not just a Face­book scan­dal.

    But there’s one aspect of the suit that raises all sorts of fascinating questions. Questions that we probably should have been asking all along: the suit notes that part of the motivation for paying the enormous fine was to avoid the public humiliation that Zuckerberg and Sandberg would have had to endure, and that Zuckerberg reportedly has political ambitions. It’s a suspicion many had after Zuckerberg announced his national ‘listening tour’ in 2017.

    And that prospect of Zucker­berg’s polit­i­cal ambi­tions rais­es the obvi­ous ques­tion we real­ly should have been ask­ing all along: if Zucker­berg is plan­ning on run­ning for office, would­n’t he want Face­book to remain a plat­form ripe for exact­ly the kind of psy­cho-polit­i­cal manip­u­la­tion the Cam­bridge Ana­lyt­i­ca scan­dal was all about? Who would be bet­ter posi­tioned to exploit the pow­er of Face­book’s abil­i­ty to manip­u­late vot­ers than Zucker­berg? After all, all indi­ca­tions are he approved of most of these scan­dalous poli­cies him­self. The guy knows where the bod­ies are buried. He buried many of them per­son­al­ly. It’s what this whole law­suit is all about:

    Politi­co

    Face­book paid bil­lions extra to the FTC to spare Zucker­berg in data suit, share­hold­ers allege

    The com­pa­ny alleged­ly agreed to over­pay on the orig­i­nal $106 mil­lion penal­ty so its CEO and COO could avoid depo­si­tion and per­son­al lia­bil­i­ty.

    By LEAH NYLEN

    09/21/2021 03:24 PM EDT

    Face­book con­di­tioned its $5 bil­lion pay­ment to the Fed­er­al Trade Com­mis­sion to resolve the Cam­bridge Ana­lyt­i­ca data leak probe on the agency drop­ping plans to sue Face­book CEO Mark Zucker­berg indi­vid­u­al­ly, share­hold­ers allege in a law­suit.

    In suits made pub­lic Tues­day, two groups of share­hold­ers claimed that mem­bers of Facebook’s board allowed the com­pa­ny to over­pay on its fine in order to pro­tect Zucker­berg, the company’s founder and largest share­hold­er. The com­plaints, which cite inter­nal dis­cus­sions among Facebook’s board mem­bers, were filed in Delaware Court of Chancery last month.

    “Zucker­berg, Sand­berg, and oth­er Face­book direc­tors agreed to autho­rize a mul­ti-bil­lion set­tle­ment with the FTC as an express quid pro quo to pro­tect Zucker­berg from being named in the FTC’s com­plaint, made sub­ject to per­son­al lia­bil­i­ty, or even required to sit for a depo­si­tion,” one of the suits alleged.

    The FTC has nev­er dis­closed that it orig­i­nal­ly planned to name Zucker­berg per­son­al­ly in the law­suit, and the agen­cy’s two Democ­rats at the time vot­ed against the set­tle­ment in part because of the lack of per­son­al lia­bil­i­ty for the CEO.

    ...

    New­ly pub­lic: The groups orig­i­nal­ly filed their suits last year. They amend­ed the com­plaints last month after receiv­ing inter­nal files about the board’s dis­cus­sions on pri­va­cy, which a fed­er­al judge had ordered Face­book to pro­vide.

    How it went down: In Feb­ru­ary 2019, the FTC sent Facebook’s lawyers a draft com­plaint that named both the com­pa­ny and Zucker­berg per­son­al­ly as a defen­dant, the share­hold­ers said. The FTC also said in court that Facebook’s fine would have been clos­er to $106 mil­lion, but the com­pa­ny agreed to the $5 bil­lion penal­ty to avoid hav­ing Zucker­berg or Chief Oper­at­ing Offi­cer Sheryl Sand­berg deposed and any lia­bil­i­ty for the CEO, the suit alleged.

    “The Board has nev­er pro­vid­ed a seri­ous check on Zuckerberg’s unfet­tered author­i­ty,” one set of share­hold­ers said. “Instead, it has enabled him, defend­ed him, and paid bil­lions of dol­lars from Facebook’s cor­po­rate cof­fers to make his prob­lems go away.”

    They also alleged that Zucker­berg and Sand­berg both declined to be inter­viewed by Price­wa­ter­house­C­oop­ers, the firm hired to audit Facebook’s pri­va­cy com­pli­ance as part of a 2012 set­tle­ment with the FTC, allowed oth­er man­agers to pro­vide untrue state­ments about the company’s prac­tices and nev­er pro­vid­ed the board with copies of PwC’s audits.

    The account­ing firm con­clud­ed that Face­book didn’t have enough con­trols in place to pro­tect user data, the sec­ond suit alleged, and “that Facebook’s pri­va­cy con­trols were not oper­at­ing with suf­fi­cient effec­tive­ness to pro­vide rea­son­able assur­ance to pro­tect the pri­va­cy of cov­ered infor­ma­tion.”

    ————

    “Face­book paid bil­lions extra to the FTC to spare Zucker­berg in data suit, share­hold­ers allege” By LEAH NYLEN; Politi­co; 09/21/2021

    How it went down: In Feb­ru­ary 2019, the FTC sent Facebook’s lawyers a draft com­plaint that named both the com­pa­ny and Zucker­berg per­son­al­ly as a defen­dant, the share­hold­ers said. The FTC also said in court that Facebook’s fine would have been clos­er to $106 mil­lion, but the com­pa­ny agreed to the $5 bil­lion penal­ty to avoid hav­ing Zucker­berg or Chief Oper­at­ing Offi­cer Sheryl Sand­berg deposed and any lia­bil­i­ty for the CEO, the suit alleged.

    Facebook f*#%ed up so badly with the Cambridge Analytica scandal that Zuckerberg himself was facing possible liabilities. Personally. So the company paid whatever it took to make Mark’s problems go away. And it apparently took an extra $4.9 billion or so (the $5 billion settlement versus the roughly $106 million maximum fine the FTC cited in court), a price this company was willing to pay to protect one person. Sure, that one person is the founder and CEO, but still, that’s not really how publicly traded corporations are supposed to behave.

    It’s the kind of behavior that’s so strange from a corporate standpoint that it doesn’t just raise questions about the influence Zuckerberg has over the board, which would have had to approve this FTC deal. It also raises questions about just how much of a liability Zuckerberg’s scandalous knowledge is to the value of everyone else’s shares in Facebook. In other words, while it would generally be improper for a publicly traded company to pay billions of dollars to protect the CEO, the question of what’s right or wrong from a corporate fiduciary standpoint gets rather complicated when you think about how much damage Mark Zuckerberg could have potentially done to shareholders had he been deposed. How much would Facebook’s shareholder value have fallen if Zuckerberg, Facebook personified, had been deposed? That’s the question Facebook’s board had to be thinking about when it approved this deal.

    But it’s not just Zuckerberg who is being protected. Both Zuckerberg and Sheryl Sandberg declined to be interviewed by the auditors looking into Facebook’s compliance with the 2012 settlement. Keeping Facebook’s top executives out of the line of direct questioning appears to be a top corporate priority:

    ...
    “Zucker­berg, Sand­berg, and oth­er Face­book direc­tors agreed to autho­rize a mul­ti-bil­lion set­tle­ment with the FTC as an express quid pro quo to pro­tect Zucker­berg from being named in the FTC’s com­plaint, made sub­ject to per­son­al lia­bil­i­ty, or even required to sit for a depo­si­tion,” one of the suits alleged.

    ...

    They also alleged that Zucker­berg and Sand­berg both declined to be inter­viewed by Price­wa­ter­house­C­oop­ers, the firm hired to audit Facebook’s pri­va­cy com­pli­ance as part of a 2012 set­tle­ment with the FTC, allowed oth­er man­agers to pro­vide untrue state­ments about the company’s prac­tices and nev­er pro­vid­ed the board with copies of PwC’s audits.
    ...

    It’s the kind of picture that hints at a corporate culture where the top executives are on board with all the sleaziest stuff, everyone knows it, and therefore everyone knows they can’t allow those top executives to be deposed. It’s in everyone’s interest. At least everyone who is somehow benefiting from these scandalous policies. It’s one of the dark ironies of this shareholder lawsuit: paying $5 billion to keep Zuckerberg off the stand may have been the net value-saving move in that situation, even for the suing shareholders, who would have seen more of their stock value wiped away had Zuckerberg been forced to face investigators. If the company paid $5 billion to bribe the FTC, there’s presumably a greater-than-$5 billion scandal waiting to be exposed. Perhaps quite a few greater-than-$5 billion scandals. In a way, the idea that the company paid $5 billion to protect Zuckerberg’s public image in anticipation of a run for public office is a far more benign explanation:

    The Guardian

    Face­book ‘over­paid in data set­tle­ment to avoid nam­ing Zucker­berg’

    Law­suit alleges set­tle­ment in Cam­bridge Ana­lyt­i­ca case dri­ven by desire to pro­tect founder

    Dan Milmo, Global technology editor
    Fri 24 Sep 2021 11.12 EDT
    Last mod­i­fied on Fri 24 Sep 2021 11.39 EDT

    Face­book paid $4.9bn more than nec­es­sary to the US Fed­er­al Trade Com­mis­sion in a set­tle­ment over the Cam­bridge Ana­lyt­i­ca scan­dal in order to pro­tect Mark Zucker­berg, a law­suit has claimed.

    The law­suit alleges that the size of the $5bn set­tle­ment was dri­ven by a desire to pro­tect Facebook’s founder and chief exec­u­tive from being named in the FTC com­plaint.

    Face­book was fined by the FTC in 2019 for “deceiv­ing” users about its abil­i­ty to keep per­son­al infor­ma­tion pri­vate, after a year-long inves­ti­ga­tion into the Cam­bridge Ana­lyt­i­ca data breach, where a UK analy­sis firm har­vest­ed mil­lions of Face­book pro­files of US vot­ers.

    “Zucker­berg, [chief oper­at­ing offi­cer Sheryl] Sand­berg, and oth­er Face­book direc­tors agreed to autho­rise a multi­bil­lion set­tle­ment with the FTC as an express quid pro quo to pro­tect Zucker­berg from being named in the FTC’s com­plaint, made sub­ject to per­son­al lia­bil­i­ty, or even required to sit for a depo­si­tion,” said the share­hold­er law­suit filed in Delaware last month but made pub­lic this week.

    The suit quotes a com­mis­sion­er on the FTC, Rohit Chopra, who said the gov­ern­ment “essen­tial­ly trad­ed get­ting more mon­ey, so that an indi­vid­ual did not have to sub­mit to sworn tes­ti­mo­ny and I just think that’s fun­da­men­tal­ly wrong”.

    The law­suit claims that the set­tle­ment was approx­i­mate­ly $4.9bn more than Facebook’s “max­i­mum expo­sure under the applic­a­ble statute”.

    If Zucker­berg had been per­son­al­ly named in the com­plaint he could have faced sub­stan­tial fines for future vio­la­tions and would have suf­fered “exten­sive rep­u­ta­tion­al harm”, the suit claims. It adds: “The risk would have been high­ly mate­r­i­al to Zucker­berg, who is extra­or­di­nar­i­ly sen­si­tive about his pub­lic image and has been report­ed to have polit­i­cal ambi­tions.”

    The suit also accus­es Face­book of a lax approach to cor­po­rate gov­er­nance, par­tic­u­lar­ly regard­ing its founder. “The board has nev­er pro­vid­ed a seri­ous check on Zuckerberg’s unfet­tered author­i­ty. Instead, it has enabled him, defend­ed him, and paid bil­lions of dol­lars from Facebook’s cor­po­rate cof­fers to make his prob­lems go away.”

    ...

    ————

    “Facebook ‘overpaid in data settlement to avoid naming Zuckerberg’” by Dan Milmo; The Guardian; 09/24/2021

    “If Zucker­berg had been per­son­al­ly named in the com­plaint he could have faced sub­stan­tial fines for future vio­la­tions and would have suf­fered “exten­sive rep­u­ta­tion­al harm”, the suit claims. It adds: “The risk would have been high­ly mate­r­i­al to Zucker­berg, who is extra­or­di­nar­i­ly sen­si­tive about his pub­lic image and has been report­ed to have polit­i­cal ambi­tions.”

    It’s hard to ignore what the plaintiffs point out: Zuckerberg really is extraordinarily sensitive about his public image. And it’s undeniable that speculation about Zuckerberg running for higher office was rampant throughout 2017 and early 2018. It was the Cambridge Analytica scandal in the spring of 2018 that really killed that meme. There’s a lot of circumstantial evidence backing what these shareholder plaintiffs are alleging.

    But did Facebook make these moves on behalf of Mark Zuckerberg? Or because of Mark Zuckerberg? Was allowing Zuckerberg to get deposed ever really a realistic option for the company? Or would it have ultimately cost shareholders far more than $5 billion in value had it allowed Zuckerberg to be deposed and reveal high-level knowledge of all the wrongdoing? That’s one of the big questions these shareholders face in making the case that Facebook paid billions to make Zuckerberg’s problems go away, as opposed to Zuckerberg’s and Facebook’s shared problems:

    ...
    The suit also accus­es Face­book of a lax approach to cor­po­rate gov­er­nance, par­tic­u­lar­ly regard­ing its founder. “The board has nev­er pro­vid­ed a seri­ous check on Zuckerberg’s unfet­tered author­i­ty. Instead, it has enabled him, defend­ed him, and paid bil­lions of dol­lars from Facebook’s cor­po­rate cof­fers to make his prob­lems go away.”
    ...

    Not that these are mutually exclusive scenarios. We could be looking at a situation where Facebook’s board agreed that it was worth the money to pay $5 billion in fines in order to avoid the potential fallout of Zuckerberg being deposed, because that could have ultimately cost the company far more than $5 billion if the scope of Zuckerberg’s awareness of wrongdoing was revealed to regulators. And at the same time, Zuckerberg might have ongoing political ambitions that the board is more than happy to corruptly help protect.

    That’s all part of what’s going to make this lawsuit a fascinating legal story to watch play out. Zuckerberg’s personal culpability in a broad range of Facebook scandals is Facebook’s best defense against charges that it improperly paid billions just to protect Zuckerberg’s personal reputation. Good luck to the shareholders. It’ll be interesting to see if Zuckerberg ends up getting deposed. Or if Facebook ends up settling for a surprisingly large sum without a Zuckerberg deposition.

    Posted by Pterrafractyl | September 29, 2021, 9:43 pm
