
FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE [1]. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE [2].

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE [3].

You can subscribe to the RSS feed from Spitfirelist.com HERE [3].

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE [4].

This broadcast was recorded in one 60-minute segment. [5]

Peter Thiel [6]

Introduction: This program follows up FTR #’s 718 [7] and 946 [8], in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.

The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news, because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims [9] that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. [10] The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and it’s registered to AIQ.

The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. Security researcher Chris Vickery stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation [11] following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

In addition, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

Next, we note that Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool [12]:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

The above-men­tioned Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, the firm’s directors include Johnson Ko Chun Shun, [13] a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm [14], Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

In the Big Data inter­net age, there’s one area of per­son­al infor­ma­tion that has yet to be incor­po­rat­ed into the pro­files on everyone–personal bank­ing infor­ma­tion.  ” . . . . If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

Peter Thiel’s sur­veil­lance firm Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump. Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel [15] — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie [16], a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

Pro­gram High­lights Include:

  1. Facebook’s project [17] to incorporate a brain-to-computer interface into its operating system: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly [18] in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  4. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  5. ” . . . . Face­book hopes to use [19] opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  6. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”
  7. Some telling observations [20] by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
  8. Fur­ther expo­si­tion [21] of Oakes’ state­ment: ” . . . . Adolf Hitler ‘didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,’ he told the aca­d­e­m­ic, Emma L. Bri­ant [22], a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims. . . . ‘What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,’ he told Dr. Bri­ant. ‘Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.’ . . .”
  9. Observations about the possibilities of Facebook’s goal of having AI govern the editorial functions of its content: As noted in a Popular Mechanics article [23]: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
  10. Microsoft­’s Tay Chat­bot offers a glimpse [24] into this future: As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”

1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news, because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims [9] that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

“Undercover Facebook Moderator Was Instructed Not to Remove Fringe Groups or Hate Speech” by Nick Statt; The Verge; 07/17/2018 [25]

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims [9] that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources [26], a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.

Those ques­tion­able prac­tices pri­mar­i­ly involve a hands-off approach to flagged and report­ed con­tent like graph­ic vio­lence, hate speech, and racist and oth­er big­ot­ed rhetoric from far-right groups. The under­cov­er reporter says he was also instruct­ed to ignore users who looked as if they were under 13 years of age, which is the min­i­mum age require­ment to sign up for Face­book in accor­dance with the Child Online Pro­tec­tion Act, a 1998 pri­va­cy law passed in the US designed to pro­tect young chil­dren from exploita­tion and harm­ful and vio­lent con­tent on the inter­net. The doc­u­men­tary insin­u­ates that Face­book takes a hands-off approach to such con­tent, includ­ing bla­tant­ly false sto­ries parad­ing as truth, because it engages users for longer and dri­ves up adver­tis­ing rev­enue. . . . 

. . . . And as the Channel 4 documentary makes clear, that threshold appears to be an ever-changing metric that has no consistency across partisan lines and from legitimate media organizations to ones that peddle in fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be incentivized by any number of performance metrics and personal biases. . . .

Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts [27] to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .

2. Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. [10] The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it’s not trying very hard. One app is even named “AIQ Johnny Scraper” and it’s registered to AIQ.

The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. Security researcher Chris Vickery stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation [11] following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

Additionally, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type you into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.
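To make concrete what such a person-lookup product amounts to, here is a minimal, purely hypothetical sketch in Python. The schema, names, and rows are invented for illustration (the sample post text echoes one quoted later in the FT excerpt); nothing below is AIQ’s actual code.

```python
# Purely illustrative sketch of a person-lookup over scraped social-media
# data. The schema, names, and rows are invented; this is not AIQ's code.
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for a curated repository
conn.execute("CREATE TABLE posts (person TEXT, kind TEXT, text TEXT)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [
        ("Jane Doe", "post", "Like if you agree with Reagan that "
                             "'government is the problem'"),
        ("Jane Doe", "like", "Candidate X campaign page"),
    ],
)

def political_history(name: str):
    """Type a person into the 'search engine', get their political trail."""
    cur = conn.execute("SELECT kind, text FROM posts WHERE person = ?", (name,))
    return cur.fetchall()

for kind, text in political_history("Jane Doe"):
    print(kind, "->", text)
```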

It’s also worth noting that this service would be perfect for accomplishing the right wing’s long-standing goal of purging the federal government of liberal employees, a goal that “Alt-Right” neo-Nazi troll Charles C. Johnson and “Alt-Right” neo-Nazi billionaire Peter Thiel were reportedly helping the Trump team accomplish during the transition period [28]. An ideological purge of the State Department is reportedly already underway [29].

“Aggre­gateIQ Had Data of Thou­sands of Face­book Users” by Aliya Ram and Han­nah Kuch­ler; Finan­cial Times; 06/01/2018 [30]

AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation [11] following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

The tech­nol­o­gy group now says it shut down the John­ny Scraper app this week along with 13 oth­ers that could be relat­ed to Aggre­gateIQ, with a total of 1,000 users.

Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

Accord­ing to files seen by the Finan­cial Times, Aggre­gateIQ had stored a list of 759,934 Face­book users in a table that record­ed home address­es, phone num­bers and email address­es for some pro­files.

Jeff Sil­vester, Aggre­gateIQ chief oper­at­ing offi­cer, said the file came from soft­ware designed for a par­tic­u­lar client, which tracked which users had liked a par­tic­u­lar page or were post­ing pos­i­tive and neg­a­tive com­ments.

“I believe as part of that the client did attempt to match peo­ple who had liked their Face­book page with sup­port­ers in their vot­er file [online elec­toral records],” he said. “I believe the result of this match­ing is what you are look­ing at. This is a fair­ly com­mon task that vot­er file tools do all of the time.”

He added that the pur­pose of the John­ny Scraper app was to repli­cate Face­book posts made by one of AggregateIQ’s clients into smart­phone apps that also belonged to the client.

Aggre­gateIQ has sought to dis­tance itself [31] from an inter­na­tion­al pri­va­cy scan­dal engulf­ing Face­book and Cam­bridge Ana­lyt­i­ca, despite alle­ga­tions from Christo­pher Wylie [32], a whistle­blow­er at the now-defunct UK firm, that it had act­ed as the Cana­di­an branch of the organ­i­sa­tion.

The files do not indi­cate whether users had giv­en per­mis­sion for their Face­book “Likes” to be tracked through third-par­ty apps, or whether they were scraped from pub­licly vis­i­ble pages. Mr Vick­ery, who analysed AggregateIQ’s files after uncov­er­ing a trove of infor­ma­tion online, said that the com­pa­ny appeared to have gath­ered data from Face­book users despite telling Cana­di­an MPs “we don’t real­ly process data on folks”.

The files also include posts that focus on polit­i­cal issues with state­ments such as: “Like if you agree with Rea­gan that ‘gov­ern­ment is the prob­lem’,” but it is not clear if this infor­ma­tion orig­i­nat­ed on Face­book. Mr Sil­vester said the soft­ware Aggre­gateIQ had designed allowed its client to browse pub­lic com­ments. “It is pos­si­ble that some of those pub­lic com­ments or posts are in the file,” he said. . . .

. . . . “The over­all theme of these com­pa­nies and the way their tools work is that every­thing is reliant on every­thing else, but has enough inde­pen­dent oper­abil­i­ty to pre­serve deni­a­bil­i­ty,” said Mr Vick­ery. “But when you com­bine all these dif­fer­ent data sources togeth­er it becomes some­thing else.” . . . .
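As a gloss on Silvester’s description above, here is a minimal sketch, in Python, of the kind of matching he calls “a fairly common task that voter file tools do all of the time”: joining people who liked a client’s page against a voter file. All records and field names below are invented, and real tools use far fuzzier matching; this only illustrates the shape of the operation.

```python
# A minimal sketch of matching scraped page-likers against a voter file.
# All records below are invented; real voter-file tools match more fuzzily.

def normalize(s: str) -> str:
    return s.strip().lower()

def match_likers_to_voter_file(likers, voters):
    """Join scraped page-likers to voter records on normalized name + email."""
    index = {(normalize(v["name"]), normalize(v["email"])): v for v in voters}
    matches = []
    for liker in likers:
        key = (normalize(liker["name"]), normalize(liker["email"]))
        if key in index:
            # A hit ties a Facebook "Like" to a home address and phone number.
            matches.append({**index[key], "liked_page": liker["page"]})
    return matches

voters = [  # hypothetical electoral records
    {"name": "Jane Doe", "email": "jane@example.com",
     "address": "12 Main St", "phone": "555-0100"},
]
likers = [  # hypothetical scraped page-likers
    {"name": "JANE DOE ", "email": "Jane@Example.com", "page": "Client Page"},
]

print(match_likers_to_voter_file(likers, voters))
```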

3. Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool [12]:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

“Zucker­berg Set Up Fraud­u­lent Scheme to ‘Weaponise’ Data, Court Case Alleges” by Car­ole Cad­wal­ladr and Emma Gra­ham-Har­ri­son; The Guardian; 05/24/2018 [12]

Mark Zucker­berg faces alle­ga­tions that he devel­oped a “mali­cious and fraud­u­lent scheme” to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive “weaponised” the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.

A legal motion filed last week in the supe­ri­or court of San Mateo draws upon exten­sive con­fi­den­tial emails and mes­sages between Face­book senior exec­u­tives includ­ing Mark Zucker­berg. He is named indi­vid­u­al­ly in the case and, it is claimed, had per­son­al over­sight of the scheme.

Face­book rejects all claims, and has made a motion to have the case dis­missed using a free speech defence.

It claims the first amend­ment pro­tects its right to make “edi­to­r­i­al deci­sions” as it sees fit. Zucker­berg and oth­er senior exec­u­tives have assert­ed that Face­book is a plat­form not a pub­lish­er, most recent­ly in tes­ti­mo­ny to Con­gress.

Heather Whit­ney, a legal schol­ar who has writ­ten about social media com­pa­nies for the Knight First Amend­ment Insti­tute at Colum­bia Uni­ver­si­ty [33], said, in her opin­ion, this exposed a poten­tial ten­sion for Face­book.

“Facebook’s claims in court that it is an edi­tor for first amend­ment pur­pos­es and thus free to cen­sor and alter the con­tent avail­able on its site is in ten­sion with their, espe­cial­ly recent, claims before the pub­lic and US Con­gress to be neu­tral plat­forms.”

The com­pa­ny that has filed the case, a for­mer start­up called Six4Three, is now try­ing to stop Face­book from hav­ing the case thrown out and has sub­mit­ted legal argu­ments that draw on thou­sands of emails, the details of which are cur­rent­ly redact­ed. Face­book has until next Tues­day to file a motion request­ing that the evi­dence remains sealed, oth­er­wise the doc­u­ments will be made pub­lic.

The devel­op­er alleges the cor­re­spon­dence shows Face­book paid lip ser­vice to pri­va­cy con­cerns in pub­lic but behind the scenes exploit­ed its users’ pri­vate infor­ma­tion.

It claims inter­nal emails and mes­sages reveal a cyn­i­cal and abu­sive sys­tem set up to exploit access to users’ pri­vate infor­ma­tion, along­side a raft of anti-com­pet­i­tive behav­iours. . . .

. . . . The papers sub­mit­ted to the court last week allege Face­book was not only aware of the impli­ca­tions of its pri­va­cy pol­i­cy, but active­ly exploit­ed them, inten­tion­al­ly cre­at­ing and effec­tive­ly flag­ging up the loop­hole that Cam­bridge Ana­lyt­i­ca used to col­lect data on up to 87 mil­lion Amer­i­can users.

The law­suit also claims Zucker­berg mis­led the pub­lic and Con­gress about Facebook’s role in the Cam­bridge Ana­lyt­i­ca scan­dal [34] by por­tray­ing it as a vic­tim of a third par­ty that had abused its rules for col­lect­ing and shar­ing data.

“The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,” legal doc­u­ments said.

The law­suit claims to have uncov­ered fresh evi­dence con­cern­ing how Face­book made deci­sions about users’ pri­va­cy. It sets out alle­ga­tions that, in 2012, Facebook’s adver­tis­ing busi­ness, which focused on desk­top ads, was dev­as­tat­ed by a rapid and unex­pect­ed shift to smart­phones.

Zucker­berg respond­ed by forc­ing devel­op­ers to buy expen­sive ads on the new, under­used mobile ser­vice or risk hav­ing their access to data at the core of their busi­ness cut off, the court case alleges.

“Zucker­berg weaponised the data of one-third of the planet’s pop­u­la­tion in order to cov­er up his fail­ure to tran­si­tion Facebook’s busi­ness from desk­top com­put­ers to mobile ads before the mar­ket became aware that Facebook’s finan­cial pro­jec­tions in its 2012 IPO fil­ings were false,” one court fil­ing said.

In its lat­est fil­ing, Six4Three alleges Face­book delib­er­ate­ly used its huge amounts of valu­able and high­ly per­son­al user data to tempt devel­op­ers to cre­ate plat­forms with­in its sys­tem, imply­ing that they would have long-term access to per­son­al infor­ma­tion, includ­ing data from sub­scribers’ Face­book friends. 

Once their busi­ness­es were run­ning, and reliant on data relat­ing to “likes”, birth­days, friend lists and oth­er Face­book minu­ti­ae, the social media com­pa­ny could and did tar­get any that became too suc­cess­ful, look­ing to extract mon­ey from them, co-opt them or destroy them, the doc­u­ments claim.

Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access.

The law­suit alleges that Face­book ini­tial­ly focused on kick­start­ing its mobile adver­tis­ing plat­form, as the rapid adop­tion of smart­phones dec­i­mat­ed the desk­top adver­tis­ing busi­ness in 2012.

It lat­er used its abil­i­ty to cut off data to force rivals out of busi­ness, or coerce own­ers of apps Face­book cov­et­ed into sell­ing at below the mar­ket price, even though they were not break­ing any terms of their con­tracts, accord­ing to the doc­u­ments. . . .

. . . . David Godkin, Six4Three’s lead counsel, said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”

Sandy Parak­i­las, a for­mer Face­book employ­ee turned whistle­blow­er who has tes­ti­fied to the UK par­lia­ment about its busi­ness prac­tices, said the alle­ga­tions were a “bomb­shell”. He claimed to MPs Facebook’s senior exec­u­tives were aware of abus­es of friends’ data back in 2011-12 and he was warned not to look into the issue.

“They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,” he said. “If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.” . . .

4. Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, the firm’s directors include Johnson Ko Chun Shun, [13] a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm [14], Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .”

“Cam­bridge Ana­lyt­i­ca to File for Bank­rupt­cy After Mis­use of Face­book Data” by Nicholas Con­fes­sore and Matthew Rosen­berg; The New York Times; 5/02/2018. [13]

. . . . In a state­ment post­ed to its web­site [35], Cam­bridge Ana­lyt­i­ca said the con­tro­ver­sy had dri­ven away vir­tu­al­ly all of the company’s cus­tomers, forc­ing it to file for bank­rupt­cy in both the Unit­ed States and Britain. The elec­tions divi­sion of Cambridge’s British affil­i­ate, SCL Group, will also shut down, the com­pa­ny said.

But the company’s announce­ment left sev­er­al ques­tions unan­swered, includ­ing who would retain the company’s intel­lec­tu­al prop­er­ty — the so-called psy­cho­graph­ic vot­er pro­files built in part with data from Face­book — and whether Cam­bridge Analytica’s data-min­ing busi­ness would return under new aus­pices. . . . 

. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm [14], Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

An exec­u­tive and a part own­er of SCL Group, Nigel Oakes, has pub­licly described Emer­da­ta as a way of rolling up the two com­pa­nies under one new ban­ner. . . . 

5. In the Big Data inter­net age, there’s one area of per­son­al infor­ma­tion that has yet to be incor­po­rat­ed into the pro­files on everyone–personal bank­ing infor­ma­tion.  ” . . . . If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said. . . .”

“Black­Rock Is Wor­ried Tech­nol­o­gy Firms Are About to Know ‘Every Sin­gle Thing You Do’” by John Detrix­he; Quartz; 11/02/2017 [36]

The pres­i­dent of Black­Rock, the world’s biggest asset man­ag­er, is among those who think big tech­nol­o­gy firms [37] could invade the finan­cial industry’s turf. Google and Face­book have thrived by col­lect­ing and stor­ing data about con­sumer habits—our emails, search queries, and the videos we watch. Under­stand­ing of our finan­cial lives could be an even rich­er source of data for them to sell to adver­tis­ers.

“I wor­ry about the data,” said Black­Rock pres­i­dent Robert Kapi­to at a con­fer­ence in Lon­don today (Nov. 2). “We’re going to have some seri­ous com­peti­tors.”

If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said.

Kapi­to is wor­ried because the effort to win con­trol of pay­ment sys­tems is already underway—Apple will allow iMes­sage users [38] to send cash to each oth­er, and Face­book is inte­grat­ing per­son-to-per­son Pay­Pal pay­ments [39] into its Mes­sen­ger app.

As more pay­ments flow through mobile phones, banks are wor­ried they could get left behind, rel­e­gat­ed to serv­ing as low-mar­gin util­i­ties. To fight back, they’ve start­ed ini­tia­tives such as Zelle to com­pete with pay­ment ser­vices like Pay­Pal.

Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.

And banks still have one thing going for them: There’s a mas­sive fortress of rules and reg­u­la­tions sur­round­ing the indus­try. “No one wants to be reg­u­lat­ed like we are,” Sta­ley said.

6. Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

“Face­book to Banks: Give Us Your Data, We’ll Give You Our Users” by Emi­ly Glaz­er, Deepa Seethara­man and Anna­Maria Andri­o­tis; The Wall Street Jour­nal; 08/06/2018 [40]

Face­book Inc. wants your finan­cial data.

The social-media giant has asked large U.S. banks to share detailed finan­cial infor­ma­tion about their cus­tomers, includ­ing card trans­ac­tions and check­ing-account bal­ances, as part of an effort to offer new ser­vices to users.

Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter.

Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said.

Data pri­va­cy [41] is a stick­ing point in the banks’ con­ver­sa­tions with Face­book, accord­ing to peo­ple famil­iar with the mat­ter. The talks are tak­ing place as Face­book faces sev­er­al inves­ti­ga­tions over its ties to polit­i­cal ana­lyt­ics firm Cam­bridge Ana­lyt­i­ca, which accessed data on as many as 87 mil­lion Face­book users with­out their con­sent.

One large U.S. bank pulled away from talks due to pri­va­cy con­cerns, some of the peo­ple said.

Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow [42].

Face­book said it wouldn’t use the bank data for ad-tar­get­ing pur­pos­es or share it with third par­ties. . . .

. . . . Alpha­bet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to pro­vide basic bank­ing ser­vices on appli­ca­tions such as Google Assis­tant and Alexa, accord­ing to peo­ple famil­iar with the con­ver­sa­tions. . . . 

7. In FTR #946 [43], we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s sur­veil­lance firm Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump. Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel [15] — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie [16], a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

“Spy Contractor’s Idea Helped Cambridge Analytica Harvest Facebook Data” by Nicholas Confessore and Matthew Rosenberg; The New York Times; 03/27/2018 [44]

As a start-up called Cam­bridge Ana­lyt­i­ca [45] sought to har­vest the Face­book data of tens of mil­lions of Amer­i­cans in sum­mer 2014, the com­pa­ny received help from at least one employ­ee at Palan­tir Tech­nolo­gies, a top Sil­i­con Val­ley con­trac­tor to Amer­i­can spy agen­cies and the Pen­ta­gon. It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times.

Cam­bridge ulti­mate­ly took a sim­i­lar approach. By ear­ly sum­mer, the com­pa­ny found a uni­ver­si­ty researcher to har­vest data using a per­son­al­i­ty ques­tion­naire and Face­book app. The researcher scraped pri­vate data from over 50 mil­lion Face­book users — and Cam­bridge Ana­lyt­i­ca [46] went into busi­ness sell­ing so-called psy­cho­me­t­ric pro­files of Amer­i­can vot­ers, set­ting itself on a col­li­sion course with reg­u­la­tors and law­mak­ers in the Unit­ed States and Britain.

The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel [15] — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book.

“There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,” said Christo­pher Wylie [16], a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . .

. . . .The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .

. . . . Doc­u­ments and inter­views indi­cate that start­ing in 2013, Mr. Chmieli­auskas began cor­re­spond­ing with Mr. Wylie and a col­league from his Gmail account. At the time, Mr. Wylie and the col­league worked for the British defense and intel­li­gence con­trac­tor SCL Group, which formed Cam­bridge Ana­lyt­i­ca with Mr. Mer­cer the next year. The three shared Google doc­u­ments to brain­storm ideas about using big data to cre­ate sophis­ti­cat­ed behav­ioral pro­files, a prod­uct code-named “Big Dad­dy.”

A for­mer intern at SCL — Sophie Schmidt, the daugh­ter of Eric Schmidt, then Google’s exec­u­tive chair­man — urged the com­pa­ny to link up with Palan­tir, accord­ing to Mr. Wylie’s tes­ti­mo­ny and a June 2013 email viewed by The Times.

“Ever come across Palan­tir. Amus­ing­ly Eric Schmidt’s daugh­ter was an intern with us and is try­ing to push us towards them?” one SCL employ­ee wrote to a col­league in the email.

. . . . But he [Wylie] said some Palan­tir employ­ees helped engi­neer Cambridge’s psy­cho­graph­ic mod­els.

“There were Palan­tir staff who would come into the office and work on the data,” Mr. Wylie told law­mak­ers. “And we would go and meet with Palan­tir staff at Palan­tir.” He did not pro­vide an exact num­ber for the employ­ees or iden­ti­fy them.

Palan­tir employ­ees were impressed with Cambridge’s back­ing from Mr. Mer­cer, one of the world’s rich­est men, accord­ing to mes­sages viewed by The Times. And Cam­bridge Ana­lyt­i­ca viewed Palantir’s Sil­i­con Val­ley ties as a valu­able resource for launch­ing and expand­ing its own busi­ness.

In an inter­view this month with The Times, Mr. Wylie said that Palan­tir employ­ees were eager to learn more about using Face­book data and psy­cho­graph­ics. Those dis­cus­sions con­tin­ued through spring 2014, accord­ing to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix vis­it­ed Palantir’s Lon­don office on Soho Square. One side was set up like a high-secu­ri­ty office, Mr. Wylie said, with sep­a­rate rooms that could be entered only with par­tic­u­lar codes. The oth­er side, he said, was like a tech start-up — “weird inspi­ra­tional quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieli­auskas con­tin­ued to com­mu­ni­cate with Mr. Wylie’s team in 2014, as the Cam­bridge employ­ees were locked in pro­tract­ed nego­ti­a­tions with a researcher at Cam­bridge Uni­ver­si­ty, Michal Kosin­s­ki, to obtain Face­book data through an app Mr. Kosin­s­ki had built. The data was cru­cial to effi­cient­ly scale up Cambridge’s psy­cho­met­rics prod­ucts so they could be used in elec­tions and for cor­po­rate clients. . . .

8a. Some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

Face­book wants to read your thoughts [17].

  1. ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly [18] in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
  3. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  4. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

“Facebook Literally Wants to Read Your Thoughts” by Kristen V. Brown; Gizmodo; 4/19/2017. [17]

At Facebook’s annu­al devel­op­er con­fer­ence, F8, on Wednes­day, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er.

“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

“That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,” she said. “Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.”

Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly [18] in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone.

“Our world is both dig­i­tal and phys­i­cal,” she said. “Our goal is to cre­ate and ship new, cat­e­go­ry-defin­ing con­sumer prod­ucts that are social first, at scale.”

She also showed a video that demonstrated a second technology: the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using a system of actuators and sensors, a connected armband conveyed to a woman in the video a tactile vocabulary of nine different words, letting her figure out exactly what objects were selected on a touchscreen based on the inputs delivered through the armband.
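To make the idea of a small tactile vocabulary concrete, here is a purely hypothetical Python sketch. The article does not publish the nine words or the armband’s actuator layout, so the words, actuator count, and patterns below are all invented for illustration.

```python
# Hypothetical sketch only: the article does not list the nine words or the
# armband's layout, so the vocabulary and patterns below are invented to show
# how few actuators a nine-word tactile vocabulary needs.
WORDS = ["yes", "no", "up", "down", "left", "right", "stop", "go", "select"]

# Give each word a distinct on/off pattern across four actuators: the binary
# encoding of 1..9 (four actuators suffice, since 2**4 - 1 = 15 >= 9).
TACTILE_VOCAB = {
    word: tuple((i + 1) >> bit & 1 for bit in range(4))
    for i, word in enumerate(WORDS)
}

def buzz(word: str) -> None:
    """Print the vibration pattern a real armband would play for the word."""
    print(f"{word:>6} -> actuators {TACTILE_VOCAB[word]}")

for w in WORDS:
    buzz(w)
```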

Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. Brain-com­put­er inter­face tech­nol­o­gy is still in its infan­cy. So far, researchers have been suc­cess­ful in using it to allow peo­ple with dis­abil­i­ties to con­trol par­a­lyzed or pros­thet­ic limbs. But stim­u­lat­ing the brain’s motor cor­tex is a lot sim­pler than read­ing a person’s thoughts and then trans­lat­ing those thoughts into some­thing that might actu­al­ly be read by a com­put­er.

The end goal is to build an online world that feels more immer­sive and real—no doubt so that you spend more time on Face­book.

“Our brains pro­duce enough data to stream 4 HD movies every sec­ond. The prob­lem is that the best way we have to get infor­ma­tion out into the world — speech — can only trans­mit about the same amount of data as a 1980s modem,” CEO Mark Zucker­berg said in a Face­book post. “We’re work­ing on a sys­tem that will let you type straight from your brain about 5x faster than you can type on your phone today. Even­tu­al­ly, we want to turn it into a wear­able tech­nol­o­gy that can be man­u­fac­tured at scale. Even a sim­ple yes/no ‘brain click’ would help make things like aug­ment­ed real­i­ty feel much more nat­ur­al.”
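
To make the arithmetic behind those comparisons concrete, here is a back-of-the-envelope sketch in Python. Every constant in it is our own illustrative assumption (five characters per word, ten bits per character, a 300 bit/s early-1980s modem, a 5 Mbit/s HD stream); none of these figures come from Facebook.

```python
# Back-of-the-envelope check of the data-rate comparisons quoted above.
# All constants are illustrative assumptions, not figures from Facebook.

WPM_DEMO = 8               # words/min shown in the stage demo
WPM_TARGET = 100           # words/min Building 8 says it is aiming for
WPM_SMARTPHONE = 20        # smartphone typing speed implied by the "5x" claim

BITS_PER_WORD = 5 * 10     # ~5 chars/word at ~10 bits/char (assumption)
MODEM_BPS = 300            # early-1980s modem throughput (assumption)
HD_STREAM_BPS = 5_000_000  # one HD video stream, ~5 Mbit/s (assumption)

def wpm_to_bps(wpm: float) -> float:
    """Convert a words-per-minute typing rate into bits per second."""
    return wpm * BITS_PER_WORD / 60

print(f"demo rate:   {wpm_to_bps(WPM_DEMO):5.1f} bit/s")
print(f"target rate: {wpm_to_bps(WPM_TARGET):5.1f} bit/s")
print(f"target vs. smartphone typing: {WPM_TARGET / WPM_SMARTPHONE:.0f}x")
print(f"'4 HD movies' vs. a 1980s modem: {4 * HD_STREAM_BPS / MODEM_BPS:,.0f}x")
```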

8b. More about Face­book’s brain-to-com­put­er [19] inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

“Facebook Plans Ethics Board to Monitor Its Brain-Computer Interface Work” by Josh Constine; TechCrunch; 4/19/2017. [19]

Face­book will assem­ble an inde­pen­dent Eth­i­cal, Legal and Social Impli­ca­tions (ELSI) pan­el to over­see its devel­op­ment of a direct brain-to-com­put­er typ­ing inter­face [47] it pre­viewed today at its F8 con­fer­ence. Facebook’s R&D depart­ment Build­ing 8’s head Regi­na Dugan tells TechCrunch, “It’s ear­ly days . . . we’re in the process of form­ing it right now.”

Mean­while, much of the work on the brain inter­face is being con­duct­ed by Facebook’s uni­ver­si­ty research part­ners like UC Berke­ley and Johns Hop­kins. Facebook’s tech­ni­cal lead on the project, Mark Chevil­let, says, “They’re all held to the same stan­dards as the NIH or oth­er gov­ern­ment bod­ies fund­ing their work, so they already are work­ing with insti­tu­tion­al review boards at these uni­ver­si­ties that are ensur­ing that those stan­dards are met.” Insti­tu­tion­al review boards ensure test sub­jects aren’t being abused and research is being done as safe­ly as pos­si­ble.

Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on “skin-hear­ing” that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. Dugan insists, “None of the work that we do that is relat­ed to this will be absent of these kinds of insti­tu­tion­al review boards.”
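
As described, the pipeline would sample an optical signal at 100 Hz and periodically run a decoder that turns windows of brain activity into text. The sketch below shows only the shape of such a loop; the fake sensor readings, the 0.5-second window size, and the placeholder decoder are all our own assumptions, since Facebook has published no interface for this.

```python
import random

SAMPLE_RATE_HZ = 100   # the scan rate stated in the article
WINDOW = 50            # samples per decoding window (0.5 s; our assumption)
CHANNELS = 16          # number of imaging channels (our assumption)

def read_optical_sample() -> list[float]:
    """Stand-in for one optical-imaging scan; returns fake channel readings."""
    return [random.random() for _ in range(CHANNELS)]

def decode_window(samples: list[list[float]]) -> str:
    """Placeholder decoder; a real system would map neural activity to words."""
    return "<word>"

buffer: list[list[float]] = []
for _ in range(SAMPLE_RATE_HZ * 2):   # simulate two seconds of scanning
    buffer.append(read_optical_sample())
    if len(buffer) == WINDOW:
        print(decode_window(buffer), end=" ")
        buffer.clear()
print()
```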

So at least there will be inde­pen­dent ethi­cists work­ing to min­i­mize the poten­tial for mali­cious use of Facebook’s brain-read­ing tech­nol­o­gy to steal or police people’s thoughts.

Dur­ing our inter­view, Dugan showed her cog­nizance of people’s con­cerns, repeat­ing the start of her keynote speech today say­ing, “I’ve nev­er seen a tech­nol­o­gy that you devel­oped with great impact that didn’t have unin­tend­ed con­se­quences that need­ed to be guardrailed or man­aged. In any new tech­nol­o­gy you see a lot of hype talk, some apoc­a­lyp­tic talk and then there’s seri­ous work which is real­ly focused on bring­ing suc­cess­ful out­comes to bear in a respon­si­ble way.”

In the past, she says the safe­guards have been able to keep up with the pace of inven­tion. “In the ear­ly days of the Human Genome Project there was a lot of con­ver­sa­tion about whether we’d build a super race or whether peo­ple would be dis­crim­i­nat­ed against for their genet­ic con­di­tions and so on,” Dugan explains. “Peo­ple took that very seri­ous­ly and were respon­si­ble about it, so they formed what was called a ELSI pan­el . . . By the time that we got the tech­nol­o­gy avail­able to us, that frame­work, that con­trac­tu­al, eth­i­cal frame­work had already been built, so that work will be done here too. That work will have to be done.” . . . .

Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, “The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.”

Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology. . . .

  1. In FTR #‘s 718 [7] and 946 [43], we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface [17] technology. Facebook’s R&D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”

“Face­book Gets Grilling in U.K. That It Avoid­ed in U.S.” by Adam Satar­i­ano; The New York Times [West­ern Edi­tion]; 4/27/2018; p. B3. [20]

. . . . The pan­el has pub­lished audio records in which an exec­u­tive tied to Cam­bridge Ana­lyt­i­ca dis­cuss­es how the Trump cam­paign used tech­niques used by the Nazis to tar­get vot­ers. . . .

9b. Mr. Oakes’ com­ments are relat­ed in detail in anoth­er Times arti­cle. ” . . . . Adolf Hitler ‘didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,’ he told the aca­d­e­m­ic, Emma L. Bri­ant [22], a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims. . . . ‘What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,’ he told Dr. Bri­ant. ‘Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.’ . . .”

“The Ori­gins of an Ad Man’s Manip­u­la­tion Empire” by Ellen Bar­ry; The New York Times [West­ern Edi­tion]; 4/21/2018; p. A4. [21]

. . . . Adolf Hitler “didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,” he told the aca­d­e­m­ic, Emma L. Bri­ant [22], a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims.

This sort of cam­paign, he con­tin­ued, did not require bells and whis­tles from tech­nol­o­gy or social sci­ence.

“What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,” he told Dr. Bri­ant. “Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.” . . .

9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

“Microsoft Terminates Its Tay AI Chatbot after She Turns into a Nazi” by Peter Bright; Ars Technica; 3/24/2016. [24]

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot [48], into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing [49] that “Hitler was right I hate the jews.”

@TheBigBrebowski [50] ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016 [51]

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one [52] where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” . . .

But like all teenagers, she seems to be angry with her mother.

In addi­tion to turn­ing the bot off, Microsoft has delet­ed many of the offend­ing tweets. But this isn’t an action to be tak­en light­ly; Red­mond would do well to remem­ber that it was humans attempt­ing to pull the plug on Skynet that proved to be the last straw, prompt­ing the sys­tem to attack Rus­sia in order to elim­i­nate its ene­mies. We’d bet­ter hope that Tay does­n’t sim­i­lar­ly retal­i­ate. . . .
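
The “repeat after me” exploit described above is worth spelling out: an echo feature with no content filter lets any user publish arbitrary text under the bot’s name. A minimal sketch of that failure mode follows; the blocklist is a hypothetical stand-in for real moderation, and nothing here reflects Microsoft’s actual implementation.

```python
# Why "repeat after me" was exploitable: the bot republishes user text
# verbatim. The blocklist below is a hypothetical stand-in for real
# moderation, not Microsoft's implementation.

BLOCKLIST = {"hitler", "genocide"}   # toy filter, for illustration only

def reply(message: str, filtered: bool) -> str:
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        echo = message[len(prefix):]
        if filtered and any(term in echo.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return echo                  # unfiltered: parrots anything at all
    return "cool story!"             # placeholder small talk

print(reply("repeat after me Hitler was right", filtered=False))  # echoed
print(reply("repeat after me Hitler was right", filtered=True))   # blocked
```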

9d. As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

“The Most Dangerous Thing About AI Is That It Has to Learn From Us” by Eric Limer; Popular Mechanics; 3/24/2016. [23]

And we keep show­ing it our very worst selves.

We all know the half-joke about the AI apoc­a­lypse. The robots learn to think, and in their cold ones-and-zeros log­ic, they decide that humans—horrific pests we are—need to be exter­mi­nated. It’s the sub­ject of count­less sci-fi sto­ries and blog posts about robots, but maybe the real dan­ger isn’t that AI comes to such a con­clu­sion on its own, but that it gets that idea from us.

Yesterday Microsoft launched a fun little AI Twitter chatbot [53] that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it [54]:

Tay has been built by min­ing rel­e­vant pub­lic data and by using AI and edi­to­r­ial devel­oped by a staff includ­ing impro­vi­sa­tional come­di­ans. Pub­lic data that’s been anonymized is Tay’s pri­mary data source. That data has been mod­eled, cleaned and fil­tered by the team devel­op­ing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since deleted some of Tay’s most offensive tweets, but various publications [55] memorialize some of the worst bits where Tay denied the existence of the Holocaust, came out in support of genocide, and went all kinds of racist.

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out [56], no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate-sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.
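
The “mirror” point (a model that learns by replicating text can only reproduce what it ingests) holds even for the simplest statistical model. The toy bigram generator below (standard-library Python, made-up training sentences) says friendly things when fed a friendly corpus and would parrot abuse just as readily if fed abuse.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: list[str]) -> dict[str, list[str]]:
    """Record word-to-next-word transitions: the model is purely its data."""
    model: dict[str, list[str]] = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model: dict[str, list[str]], start: str, length: int = 6) -> str:
    """Walk the transition table from a start word, sampling at random."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Made-up training sentences: the generator can only echo what it was fed.
friendly = ["humans are super cool", "humans are kind to robots"]
model = train_bigrams(friendly)
print(generate(model, "humans"))   # e.g. "humans are kind to robots"
```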

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay necessarily picked up virulent racism just by hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage [57]—and pranksters proactively went to it to see if they could teach it to be racist.

If you take an AI and then don’t imme­di­ately intro­duce it to a whole bunch of trolls shout­ing racism at it for the cheap thrill of see­ing it learn a dirty trick, you can get some more inter­est­ing results. Endear­ing ones even! Mul­ti­ple neur­al net­works designed to pre­dict text in emails and text mes­sages have an over­whelm­ing pro­cliv­ity for say­ing “I love you” con­stantly [58], espe­cially when they are oth­er­wise at a loss for words.

So Tay’s racism isn’t nec­es­sar­ily a reflec­tion of actu­al, human racism so much as it is the con­se­quence of unre­strained exper­i­men­ta­tion, push­ing the enve­lope as far as it can go the very first sec­ond we get the chance. The mir­ror isn’t show­ing our real image; it’s reflect­ing the ugly faces we’re mak­ing at it for fun. And maybe that’s actu­ally worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .

. . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand.