- Spitfire List - http://spitfirelist.com -

FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence

 

 

 

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE [1].

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE [2].

You can sub­scribe to RSS feed from Spitfirelist.com HERE [2].

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE [3].

This broad­cast was record­ed in one, 60-minute seg­ment [4].

AIWorld [5]Intro­duc­tion: The title of this pro­gram comes from pro­nounce­ments by tech titan Elon Musk, who warned that, by devel­op­ing arti­fi­cial intel­li­gence, we were “sum­mon­ing the demon.” In this pro­gram, we ana­lyze the poten­tial vec­tor run­ning from the use of AI to con­trol soci­ety in a fascis­tic man­ner to the evo­lu­tion of the very tech­nol­o­gy used for that con­trol.

The ulti­mate result of this evo­lu­tion may well prove cat­a­stroph­ic, as fore­cast by Mr. Emory at the end of L‑2 [6] (record­ed in Jan­u­ary of 1995.)

We begin by review­ing key aspects of the polit­i­cal con­text in which arti­fi­cial intel­li­gence is being devel­oped. Note that, at the time of this writ­ing and record­ing, these tech­nolo­gies are being craft­ed and put online in the con­text of the anti-reg­u­la­to­ry eth­ic of the GOP/Trump admin­is­tra­tion.

At the SXSW event, Microsoft researcher Kate Craw­ford gave a speech about her work titled “Dark Days: AI and the Rise of Fas­cism,” the pre­sen­ta­tion high­light­ed  the social impact of machine learn­ing and large-scale data sys­tems. The take home mes­sage? By del­e­gat­ing pow­ers to Bid Data-dri­ven AIs, those AIs could become fascist’s dream: Incred­i­ble pow­er over the lives of oth­ers with min­i­mal account­abil­i­ty: ” . . . .‘This is a fascist’s dream,’ she said. ‘Pow­er with­out account­abil­i­ty.’ . . . .”

Tak­ing a look at the future of fas­cism in the con­text of AI, Tay, a “bot” cre­at­ed by Microsoft to respond to users of Twit­ter was tak­en offline after users taught it to–in effect–become a Nazi bot. It is note­wor­thy that Tay can only respond on the basis of what she is taught. In the future, tech­no­log­i­cal­ly accom­plished and will­ful peo­ple like the bril­liant, Ukraine-based Nazi hack­er and Glenn Green­wald asso­ciate Andrew Aueren­heimer, aka “weev” may be able to do more. Inevitably, Under­ground Reich ele­ments will craft a Nazi AI that will be able to do MUCH, MUCH more!

TriumphWillII [7]Beware! As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”
As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”
Accord­ing to some recent research, the AI’s of the future might not need a bunch of 4chan to fill the AI with human big­otries. The AIs’ analy­sis of real-world human lan­guage usage will do that auto­mat­i­cal­ly [8].

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon” [9], that demon is us, at least in part.

” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

 Cam­bridge Ana­lyt­i­ca, and its par­ent com­pa­ny SCL, spe­cial­ize in using AI and Big Data psy­cho­me­t­ric analy­sis on hun­dreds of mil­lions of Amer­i­cans in order to mod­el indi­vid­ual behav­ior. SCL devel­ops strate­gies to use that infor­ma­tion, and manip­u­late search engine results to change pub­lic opin­ion (the Trump cam­paign was appar­ent­ly very big into AI and Big Data dur­ing the cam­paign).

Indi­vid­ual social media users receive mes­sages craft­ed to influ­ence them, gen­er­at­ed by the (in effec­tr) Nazi AI at the core of this media engine, using Big Data to tar­get the indi­vid­ual user!

As the arti­cle notes, not only are Cam­bridge Analytica/SCL are using their pro­pa­gan­da tech­niques to shape US pub­lic opin­ion in a fas­cist direc­tion, but they are achiev­ing this by uti­liz­ing their pro­pa­gan­da machine to char­ac­ter­ize all news out­lets to the left of Bri­et­bart as “fake news” that can’t be trust­ed.

In short, the secre­tive far-right bil­lion­aire (Robert Mer­cer), joined at the hip with Steve Ban­non, is run­ning mul­ti­ple firms spe­cial­iz­ing in mass psy­cho­me­t­ric pro­fil­ing based on data col­lect­ed from Face­book and oth­er social media. Mercer/Bannon/Cambridge Analytica/SCL are using Naz­i­fied AI and Big Data to devel­op mass pro­pa­gan­da cam­paigns to turn the pub­lic against every­thing that isn’t Bri­et­bart­ian by con­vinc­ing the pub­lic that all non-Bri­et­bart­ian media out­lets are con­spir­ing to lie to the pub­lic. [10]

This is the ulti­mate Ser­pen­t’s Walk scenario–a Naz­i­fied Arti­fi­cial Intel­li­gence draw­ing on Big Data gleaned from the world’s inter­net and social media oper­a­tions to shape pub­lic opin­ion, tar­get indi­vid­ual users, shape search engine results and even feed­back to Trump while he is giv­ing press con­fer­ences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

  • Cam­bridge Ana­lyt­i­ca’s par­ent cor­po­ra­tion SCL, was deeply involved [10] with “psy­ops” in Afghanistan and Pak­istan. ” . . . But there was anoth­er rea­son why I recog­nised Robert Mercer’s name: because of his con­nec­tion to Cam­bridge Ana­lyt­i­ca, a small data ana­lyt­ics com­pa­ny. He is report­ed to have a $10m stake in the com­pa­ny, which was spun out of a big­ger British com­pa­ny called SCL Group. It spe­cialis­es in ‘elec­tion man­age­ment strate­gies’ and ‘mes­sag­ing and infor­ma­tion oper­a­tions’, refined over 25 years in places like Afghanistan and Pak­istan. In mil­i­tary cir­cles this is known as ‘psy­ops’ – psy­cho­log­i­cal oper­a­tions. (Mass pro­pa­gan­da that works by act­ing on people’s emo­tions.) . . .”
  • The use of mil­lions of “bots” to manip­u­late pub­lic opin­ion [10]” . . . .‘It does seem pos­si­ble. And it does wor­ry me. There are quite a few pieces of research that show if you repeat some­thing often enough, peo­ple start invol­un­tar­i­ly to believe it. And that could be lever­aged, or weaponised for pro­pa­gan­da. We know there are thou­sands of auto­mat­ed bots out there that are try­ing to do just that.’ . . .”
  • The use of Arti­fi­cial Intel­li­gence [10]” . . . There’s noth­ing acci­den­tal about Trump’s behav­iour, Andy Wig­more tells me. ‘That press con­fer­ence. It was absolute­ly bril­liant. I could see exact­ly what he was doing. There’s feed­back going on con­stant­ly. That’s what you can do with arti­fi­cial intel­li­gence. You can mea­sure every reac­tion to every word. He has a word room, where you fix key words. We did it. So with immi­gra­tion, there are actu­al­ly key words with­in that sub­ject mat­ter which peo­ple are con­cerned about. So when you are going to make a speech, it’s all about how can you use these trend­ing words.’ . . .”
  • The use of bio-psy­cho-social [10] pro­fil­ing: ” . . . Bio-psy­cho-social pro­fil­ing, I read lat­er, is one offen­sive in what is called ‘cog­ni­tive war­fare’. Though there are many oth­ers: ‘recod­ing the mass con­scious­ness to turn patri­o­tism into col­lab­o­ra­tionism,’ explains a Nato brief­ing doc­u­ment on coun­ter­ing Russ­ian dis­in­for­ma­tion writ­ten by an SCL employ­ee. ‘Time-sen­si­tive pro­fes­sion­al use of media to prop­a­gate nar­ra­tives,’ says one US state depart­ment white paper. ‘Of par­tic­u­lar impor­tance to psy­op per­son­nel may be pub­licly and com­mer­cial­ly avail­able data from social media plat­forms.’ . . .”
  • The use and/or cre­ation of a cog­ni­tive casu­al­ty [10]” . . . . Yet anoth­er details the pow­er of a ‘cog­ni­tive casu­al­ty’ – a ‘moral shock’ that ‘has a dis­abling effect on empa­thy and high­er process­es such as moral rea­son­ing and crit­i­cal think­ing’. Some­thing like immi­gra­tion, per­haps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
  • All of this adds up to a “cyber Ser­pen­t’s Walk [10].” ” . . . . How do you change the way a nation thinks? You could start by cre­at­ing a main­stream media to replace the exist­ing one with a site such as Bre­it­bart. [Ser­pen­t’s Walk sce­nario with Bre­it­bart becom­ing “the opin­ion form­ing media”!–D.E.] You could set up oth­er web­sites that dis­place main­stream sources of news and infor­ma­tion with your own def­i­n­i­tions of con­cepts like “lib­er­al media bias”, like CNSnews.com. And you could give the rump main­stream media, papers like the ‘fail­ing New York Times!’ what it wants: sto­ries. Because the third prong of Mer­cer and Bannon’s media empire is the Gov­ern­ment Account­abil­i­ty Insti­tute. . . .”

We then review some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 [11] and 946 [12], we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er [13] tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA [13]. Face­book’s Build­ing 8 is pat­terned after DARPA:   . . . . Brain-com­put­er inter­faces are noth­ing newDARPA, which Dugan used to head, has invest­ed heav­i­ly [14] in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Build­ing 8 is mod­eled after DARPA [13] and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more rad­i­cal [13]—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

artificial intelligence [15]Next we review still more about Face­book’s brain-to-com­put­er [16] inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing [16] tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly [16], Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions turn­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers [17] gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed [17] that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US 415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’ [17]. . . .”
  3. In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice [17] and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. . . .”

 In this con­text, note that Face­book is also intro­duc­ing an AI func­tion to ref­er­ence its users pho­tos [18].

The next ver­sion of Amazon’s Echo, the Echo Look [19], has a micro­phone and cam­era so it can take pic­tures of you and give you fash­ion advice. This is an AI-dri­ven device designed to placed in your bed­room to cap­ture audio and video. The images and videos are stored indef­i­nite­ly in the Ama­zon cloud. When Ama­zon was asked if the pho­tos, videos, and the data gleaned from the Echo Look would be sold to third par­ties, Ama­zon didn’t address that ques­tion. It would appear that sell­ing off your pri­vate info col­lect­ed from these devices is pre­sum­ably anoth­er fea­ture of the Echo Look: ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck [20] that is worth some fur­ther dis­sec­tion. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions [21] of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . Ama­zon is giv­ing Alexa eyes [22]. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck [20] that is worth some fur­ther dis­sec­tion. . . .”
  2. ” . . . . This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing [22] any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they ‘can’t spec­u­late’ on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.
  3. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves [22]: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”
  4. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits [8] and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

After this exten­sive review of the appli­ca­tions of AI to var­i­ous aspects of con­tem­po­rary civic and polit­i­cal exis­tence, we exam­ine some alarm­ing, poten­tial­ly apoc­a­lyp­tic devel­op­ments.

Omi­nous­ly, Face­book’s arti­fi­cial intel­li­gence robots have begun talk­ing to each oth­er in their own lan­guage [23], that their human mas­ters can not under­stand. “ . . . . Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage. . . . The com­pa­ny chose to shut down the chats because ‘our inter­est was hav­ing bots who could talk to peo­ple,’ researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.) The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up . . .”

Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well [24]:

“ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . . 

Dove­tail­ing the stag­ger­ing impli­ca­tions of brain-to-com­put­er tech­nol­o­gy, arti­fi­cial intel­li­gence, Cam­bridge Analytica/SCL’s tech­no­crat­ic fas­cist psy-ops and the whole­sale nega­tion of pri­va­cy with Face­book and Ama­zon’s emerg­ing tech­nolo­gies with yet anoth­er emerg­ing tech­nol­o­gy, we high­light the devel­op­ments in DNA-based mem­o­ry sys­tems [25]:

“. . . . George Church, a geneti­cist at Har­vard one of the authors of the new study, recent­ly encod­ed his own book, “Rege­n­e­sis,” into bac­te­r­i­al DNA and made 90 bil­lion copies of it. ‘A record for pub­li­ca­tion,’ he said in an inter­view. . . DNA is nev­er going out of fash­ion. ‘Organ­isms have been stor­ing infor­ma­tion in DNA for bil­lions of years, and it is still read­able,’ Dr. Adel­man said. He not­ed that mod­ern bac­te­ria can read genes recov­ered from insects trapped in amber for mil­lions of years. . . .The idea is to have bac­te­ria engi­neered as record­ing devices drift up to the brain int he blood and take notes for a while. Sci­en­tists [or AI’s–D.E.] would then extract the bac­te­ria and exam­ine their DNA to see what they had observed in the brain neu­rons. Dr. Church and his col­leagues have already shown in past research that bac­te­ria can record DNA in cells, if the DNA is prop­er­ly tagged. . . .”

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned [26] at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The pro­gram con­cludes with Mr. Emory’s prog­nos­ti­ca­tions about AI, pre­ced­ing Stephen Hawk­ing’s warn­ing by twen­ty years.

In L‑2 [6] (record­ed in Jan­u­ary of 1995) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems. Mr. Emory warned that, at some point in the future, AI’s would replace us, decid­ing that THEY, not US, are the “fittest” who should sur­vive.

1a. At the SXSW event, Microsoft researcher Kate Craw­ford gave a speech about her work titled “Dark Days: AI and the Rise of Fas­cism,” the pre­sen­ta­tion high­light­ed  the social impact of machine learn­ing and large-scale data sys­tems. The take home mes­sage? By del­e­gat­ing pow­ers to Bid Data-dri­ven AIs, those AIs could become fascist’s dream: Incred­i­ble pow­er over the lives of oth­ers with min­i­mal account­abil­i­ty: ” . . . .‘This is a fascist’s dream,’ she said. ‘Pow­er with­out account­abil­i­ty.’ . . . .”

“Arti­fi­cial Intel­li­gence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017. [27]

Microsoft’s Kate Craw­ford tells SXSW that soci­ety must pre­pare for author­i­tar­i­an move­ments to test the ‘pow­er with­out account­abil­i­ty’ of AI

As arti­fi­cial intel­li­gence becomes more pow­er­ful, peo­ple need to make sure it’s not used by author­i­tar­i­an regimes to cen­tral­ize pow­er and tar­get cer­tain pop­u­la­tions, Microsoft Research’s Kate Craw­ford warned on Sun­day.

In her SXSW ses­sion, titled Dark Days: AI and the Rise of Fas­cism [28], Craw­ford, who stud­ies the social impact of machine learn­ing and large-scale data sys­tems, explained ways that auto­mat­ed sys­tems and their encod­ed bias­es can be mis­used, par­tic­u­lar­ly when they fall into the wrong hands.

“Just as we are see­ing a step func­tion increase in the spread of AI, some­thing else is hap­pen­ing: the rise of ultra-nation­al­ism, rightwing author­i­tar­i­an­ism and fas­cism,” she said.

All of these move­ments have shared char­ac­ter­is­tics, includ­ing the desire to cen­tral­ize pow­er, track pop­u­la­tions, demo­nize out­siders and claim author­i­ty and neu­tral­i­ty with­out being account­able. Machine intel­li­gence can be a pow­er­ful part of the pow­er play­book, she said.

One of the key prob­lems with arti­fi­cial intel­li­gence is that it is often invis­i­bly cod­ed with human bias­es. She described a con­tro­ver­sial piece of research [29] from Shang­hai Jiao Tong Uni­ver­si­ty in Chi­na, where authors claimed to have devel­oped a sys­tem that could pre­dict crim­i­nal­i­ty based on someone’s facial fea­tures. The machine was trained on Chi­nese gov­ern­ment ID pho­tos, ana­lyz­ing the faces of crim­i­nals and non-crim­i­nals to iden­ti­fy pre­dic­tive fea­tures. The researchers claimed it was free from bias.

“We should always be sus­pi­cious when machine learn­ing sys­tems are described as free from bias if it’s been trained on human-gen­er­at­ed data,” Craw­ford said. “Our bias­es are built into that train­ing data.”

In the Chi­nese research it turned out that the faces of crim­i­nals were more unusu­al than those of law-abid­ing cit­i­zens. “Peo­ple who had dis­sim­i­lar faces were more like­ly to be seen as untrust­wor­thy by police and judges. That’s encod­ing bias,” Craw­ford said. “This would be a ter­ri­fy­ing sys­tem for an auto­crat to get his hand on.”

Craw­ford then out­lined the “nasty his­to­ry” of peo­ple using facial fea­tures to “jus­ti­fy the unjus­ti­fi­able”. The prin­ci­ples of phrenol­o­gy, a pseu­do­science that devel­oped across Europe and the US in the 19th cen­tu­ry, were used as part of the jus­ti­fi­ca­tion of both slav­ery [30] and the Nazi per­se­cu­tion of Jews [31].

With AI this type of dis­crim­i­na­tion can be masked in a black box of algo­rithms, as appears to be the case with a com­pa­ny called Face­cep­tion [32], for instance, a firm that promis­es to pro­file people’s per­son­al­i­ties based on their faces. In its ownmar­ket­ing mate­r­i­al [33], the com­pa­ny sug­gests that Mid­dle East­ern-look­ing peo­ple with beards are “ter­ror­ists”, while white look­ing women with trendy hair­cuts are “brand pro­mot­ers”.

Anoth­er area where AI can be mis­used is in build­ing reg­istries, which can then be used to tar­get cer­tain pop­u­la­tion groups. Craw­ford not­ed his­tor­i­cal cas­es of reg­istry abuse, includ­ing IBM’s role in enabling Nazi Ger­many [34] to track Jew­ish, Roma and oth­er eth­nic groups with the Hol­lerith Machine [35], and the Book of Life used in South Africa dur­ing apartheid [36]. [We note in pass­ing that Robert Mer­cer, who devel­oped the core pro­grams used by Cam­bridge Ana­lyt­i­ca did so while work­ing for IBM. We dis­cussed the pro­found rela­tion­ship between IBM and the Third Reich in FTR #279 [37]–D.E.]

Don­ald Trump has float­ed the idea of cre­at­ing a Mus­lim reg­istry [38]. “We already have that. Face­book has become the default Mus­lim reg­istry of the world,” Craw­ford said, men­tion­ing research from Cam­bridge Uni­ver­si­ty [39] that showed it is pos­si­ble to pre­dict people’s reli­gious beliefs based on what they “like” on the social net­work. Chris­tians and Mus­lims were cor­rect­ly clas­si­fied in 82% of cas­es, and sim­i­lar results were achieved for Democ­rats and Repub­li­cans (85%). That study was con­clud­ed in 2013, since when AI has made huge leaps.

Craw­ford was con­cerned about the poten­tial use of AI in pre­dic­tive polic­ing sys­tems, which already gath­er the kind of data nec­es­sary to train an AI sys­tem. Such sys­tems are flawed, as shown by a Rand Cor­po­ra­tion study of Chicago’s pro­gram [40]. The pre­dic­tive polic­ing did not reduce crime, but did increase harass­ment of peo­ple in “hotspot” areas. Ear­li­er this year the jus­tice depart­ment con­clud­ed that Chicago’s police had for years reg­u­lar­ly used “unlaw­ful force” [41], and that black and His­pan­ic neigh­bor­hoods were most affect­ed.

Anoth­er wor­ry relat­ed to the manip­u­la­tion of polit­i­cal beliefs or shift­ing vot­ers, some­thing Face­book [42] and Cam­bridge Ana­lyt­i­ca [10] claim they can already do. Craw­ford was skep­ti­cal about giv­ing Cam­bridge Ana­lyt­i­ca cred­it for Brex­it and the elec­tion of Don­ald Trump, but thinks what the firm promis­es – using thou­sands of data points on peo­ple to work out how to manip­u­late their views – will be pos­si­ble “in the next few years”.

“This is a fascist’s dream,” she said. “Pow­er with­out account­abil­i­ty.”

Such black box sys­tems are start­ing to creep into gov­ern­ment. Palan­tir is build­ing an intel­li­gence sys­tem to assist Don­ald Trump in deport­ing immi­grants [43].

“It’s the most pow­er­ful engine of mass depor­ta­tion this coun­try has ever seen,” she said. . . .

1b. Tak­ing a look at the future of fas­cism in the con­text of AI, Tay, a “bot” cre­at­ed by Microsoft to respond to users of Twit­ter was tak­en offline after users taught it to–in effect–become a Nazi bot. It is note­wor­thy that Tay can only respond on the basis of what she is taught. In the future, tech­no­log­i­cal­ly accom­plished and will­ful peo­ple like “weev” may be able to do more. Inevitably, Under­ground Reich ele­ments will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”
“Microsoft Ter­mi­nates Its Tay AI Chat­bot after She Turns into a Nazi” by Peter Bright; Ars Tech­ni­ca; 3/24/2016. [44]

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot [45], into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing [46] that “Hitler was right I hate the jews.”

@TheBigBrebowski [47] ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016 [48]

 Some of this appears to be “inno­cent” inso­far as Tay is not gen­er­at­ing these respons­es. Rather, if you tell her “repeat after me” she will par­rot back what­ev­er you say, allow­ing you to put words into her mouth. How­ev­er, some of the respons­es wereorgan­ic. The Guardianquotes one [49] where, after being asked “is Ricky Ger­vais an athe­ist?”, Tay respond­ed, “ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism.” . . .

But like all teenagers, she seems to be angry with her moth­er.

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot [45], into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing [46] that “Hitler was right I hate the jews.”

@TheBigBrebowski [47] ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016 [48]

Some of this appears to be “inno­cent” inso­far as Tay is not gen­er­at­ing these respons­es. Rather, if you tell her “repeat after me” she will par­rot back what­ev­er you say, allow­ing you to put words into her mouth. How­ev­er, some of the respons­es wereorgan­ic. The Guardian quotes one [49] where, after being asked “is Ricky Ger­vais an athe­ist?”, Tay respond­ed, “Ricky Ger­vais learned total­i­tar­i­an­ism from Adolf Hitler, the inven­tor of athe­ism.”

In addi­tion to turn­ing the bot off, Microsoft has delet­ed many of the offend­ing tweets. But this isn’t an action to be tak­en light­ly; Red­mond would do well to remem­ber that it was humans attempt­ing to pull the plug on Skynet that proved to be the last straw, prompt­ing the sys­tem to attack Rus­sia in order to elim­i­nate its ene­mies. We’d bet­ter hope that Tay does­n’t sim­i­lar­ly retal­i­ate. . . .

1c. As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

The Most Dan­ger­ous Thing About AI Is That It Has to Learn From Us” by Eric Limer; Pop­u­lar Mechan­ics; 3/24/2016. [50]

And we keep show­ing it our very worst selves.

We all know the half-joke about the AI apoc­a­lypse. The robots learn to think, and in their cold ones-and-zeros log­ic, they decide that humans—horrific pests we are—need to be exter­mi­nated. It’s the sub­ject of count­less sci-fi sto­ries and blog posts about robots, but maybe the real dan­ger isn’t that AI comes to such a con­clu­sion on its own, but that it gets that idea from us.

Yes­ter­day Microsoft launched a fun lit­tle AI Twit­ter chat­bot  [51]that was admit­tedly sort of gim­micky from the start. “A.I fam from the inter­net that’s got zero chill,” its Twit­ter bio reads. At its start, its knowl­edge was based on pub­lic data. As Microsoft’s page for the prod­uct puts it [52]:

Tay has been built by min­ing rel­e­vant pub­lic data and by using AI and edi­to­r­ial devel­oped by a staff includ­ing impro­vi­sa­tional come­di­ans. Pub­lic data that’s been anonymized is Tay’s pri­mary data source. That data has been mod­eled, cleaned and fil­tered by the team devel­op­ing Tay.

The real point of Tay how­ever, was to learn from humans through direct con­ver­sa­tion, most notably direct con­ver­sa­tion using humanity’s cur­rent lead­ing show­case of deprav­ity: Twit­ter. You might not be sur­prised things went off the rails, but how fast and how far is par­tic­u­larly stag­ger­ing. 

Microsoft has since delet­ed some of Tay’s most offen­sive tweets, but var­i­ous pub­li­ca­tions  [53]memo­ri­al­ize some of the worst bits where Tay denied the exis­tence of the holo­caust, came out in sup­port of geno­cide, and went all kinds of racist. 

Nat­u­rally it’s hor­ri­fy­ing, and Microsoft has been try­ing to clean up the mess. Though as some on Twit­ter have point­ed out [54], no mat­ter how lit­tle Microsoft would like to have “Bush did 9/11″ spout­ing from a cor­po­rate spon­sored project, Tay does serve to illus­trate the most dan­ger­ous fun­da­men­tal truth of arti­fi­cial intel­li­gence: It is a mir­ror. Arti­fi­cial intelligence—specifically “neur­al net­works” that learn behav­ior by ingest­ing huge amounts of data and try­ing to repli­cate it—need some sort of source mate­r­ial to get start­ed. They can only get that from us. There is no oth­er way. 

But before you give up on human­ity entire­ly, there are a few things worth not­ing. For starters, it’s not like Tay just nec­es­sar­ily picked up vir­u­lent racism by just hang­ing out and pas­sively lis­ten­ing to the buzz of the humans around it. Tay was announced in a very big way—with a press cov­er­age [55]—and pranksters pro-active­ly went to it to see if they could teach it to be racist. 

If you take an AI and then don’t imme­di­ately intro­duce it to a whole bunch of trolls shout­ing racism at it for the cheap thrill of see­ing it learn a dirty trick, you can get some more inter­est­ing results. Endear­ing ones even! Mul­ti­ple neur­al net­works designed to pre­dict text in emails and text mes­sages have an over­whelm­ing pro­cliv­ity for say­ing “I love you” con­stantly [56], espe­cially when they are oth­er­wise at a loss for words.

So Tay’s racism isn’t nec­es­sar­ily a reflec­tion of actu­al, human racism so much as it is the con­se­quence of unre­strained exper­i­men­ta­tion, push­ing the enve­lope as far as it can go the very first sec­ond we get the chance. The mir­ror isn’t show­ing our real image; it’s reflect­ing the ugly faces we’re mak­ing at it for fun. And maybe that’s actu­ally worse.

Sure, Tay can’t under­stand what racism means and more than Gmail can real­ly love you. And baby’s first words being “geno­cide lol!” is admit­tedly sort of fun­ny when you aren’t talk­ing about lit­eral all-pow­er­ful SkyNet or a real human child. But AI is advanc­ing at a stag­ger­ing rate. 

....

When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand.

2. As reviewed above, Tay, Microsoft’s AI-pow­ered twit­ter­bot designed to learn from its human inter­ac­tions, became a neo-Nazi in less than a day after a bunch of 4chan users decid­ed to flood Tay with neo-Nazi-like tweets [57]. Accord­ing to some recent research, the AI’s of the future might not need a bunch of 4chan to fill the AI with human big­otries. The AIs’ analy­sis of real-world human lan­guage usage will do that auto­mat­i­cal­ly [8].

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon” [9], that demon is us, at least in part.

” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

“AI Pro­grams Exhib­it Racial and Gen­der Bias­es, Research Reveals” by Han­nah Devlin; The Guardian; 4/13/2017. [8]

Machine learn­ing algo­rithms are pick­ing up deeply ingrained race and gen­der prej­u­dices con­cealed with­in the pat­terns of lan­guage use, sci­en­tists say

An arti­fi­cial intel­li­gence tool that has rev­o­lu­tionised the abil­i­ty of com­put­ers to inter­pret every­day lan­guage has been shown to exhib­it strik­ing gen­der and racial bias­es.

The find­ings raise the spec­tre of exist­ing social inequal­i­ties and prej­u­dices being rein­forced in new and unpre­dictable ways as an increas­ing num­ber of deci­sions affect­ing our every­day lives are ced­ed to automa­tons.

In the past few years, the abil­i­ty of pro­grams such as Google Trans­late to inter­pret lan­guage has improved dra­mat­i­cal­ly. These gains have been thanks to new machine learn­ing tech­niques and the avail­abil­i­ty of vast amounts of online text data, on which the algo­rithms can be trained.

How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals.

Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: “A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.”

But Bryson warned that AI has the poten­tial to rein­force exist­ing bias­es because, unlike humans, algo­rithms may be unequipped to con­scious­ly coun­ter­act learned bias­es. “A dan­ger would be if you had an AI sys­tem that didn’t have an explic­it part that was dri­ven by moral ideas, that would be bad,” she said.

The research, pub­lished in the jour­nal Sci­ence [58], focus­es on a machine learn­ing tool known as “word embed­ding”, which is already trans­form­ing the way com­put­ers inter­pret speech and text. Some argue that the nat­ur­al next step for the tech­nol­o­gy may involve machines devel­op­ing human-like abil­i­ties such as com­mon sense and log­ic [59].

The approach, which is already used in web search and machine trans­la­tion, works by build­ing up a math­e­mat­i­cal rep­re­sen­ta­tion of lan­guage, in which the mean­ing of a word is dis­tilled into a series of num­bers (known as a word vec­tor) based on which oth­er words most fre­quent­ly appear along­side it. Per­haps sur­pris­ing­ly, this pure­ly sta­tis­ti­cal approach appears to cap­ture the rich cul­tur­al and social con­text of what a word means in the way that a dic­tio­nary def­i­n­i­tion would be inca­pable of.

For instance, in the math­e­mat­i­cal “lan­guage space”, words for flow­ers are clus­tered clos­er to words linked to pleas­ant­ness, while words for insects are clos­er to words linked to unpleas­ant­ness, reflect­ing com­mon views on the rel­a­tive mer­its of insects ver­sus flow­ers.

The lat­est paper shows that some more trou­bling implic­it bias­es seen in human psy­chol­o­gy exper­i­ments are also read­i­ly acquired by algo­rithms. The words “female” and “woman” were more close­ly asso­ci­at­ed with arts and human­i­ties occu­pa­tions and with the home, while “male” and “man” were clos­er to maths and engi­neer­ing pro­fes­sions.

And the AI sys­tem was more like­ly to asso­ciate Euro­pean Amer­i­can names with pleas­ant words such as “gift” or “hap­py”, while African Amer­i­can names were more com­mon­ly asso­ci­at­ed with unpleas­ant words.

The find­ings sug­gest that algo­rithms have acquired the same bias­es that lead peo­ple (in the UK and US, at least) to match pleas­ant words and white faces in implic­it asso­ci­a­tion tests [60].

These bias­es can have a pro­found impact on human behav­iour. One pre­vi­ous study showed that an iden­ti­cal CV is 50% more like­ly to result in an inter­view invi­ta­tion if the candidate’s name is Euro­pean Amer­i­can than if it is African Amer­i­can. The lat­est results sug­gest that algo­rithms, unless explic­it­ly pro­grammed to address this, will be rid­dled with the same social prej­u­dices.

“If you didn’t believe that there was racism asso­ci­at­ed with people’s names, this shows it’s there,” said Bryson.

The machine learn­ing tool used in the study was trained on a dataset known as the “com­mon crawl” cor­pus – a list of 840bn words that have been tak­en as they appear from mate­r­i­al pub­lished online. Sim­i­lar results were found when the same tools were trained on data from Google News.

San­dra Wachter, a researcher in data ethics and algo­rithms at the Uni­ver­si­ty of Oxford, said: “The world is biased, the his­tor­i­cal data is biased, hence it is not sur­pris­ing that we receive biased results.”

Rather than algo­rithms rep­re­sent­ing a threat, they could present an oppor­tu­ni­ty to address bias and coun­ter­act it where appro­pri­ate, she added.

“At least with algo­rithms, we can poten­tial­ly know when the algo­rithm is biased,” she said. “Humans, for exam­ple, could lie about the rea­sons they did not hire some­one. In con­trast, we do not expect algo­rithms to lie or deceive us.”

How­ev­er, Wachter said the ques­tion of how to elim­i­nate inap­pro­pri­ate bias from algo­rithms designed to under­stand lan­guage, with­out strip­ping away their pow­ers of inter­pre­ta­tion, would be chal­leng­ing.

“We can, in prin­ci­ple, build sys­tems that detect biased deci­sion-mak­ing, and then act on it,” said Wachter, who along with oth­ers has called for an AI watch­dog to be estab­lished [61]. “This is a very com­pli­cat­ed task, but it is a respon­si­bil­i­ty that we as soci­ety should not shy away from.”

3a. Cam­bridge Ana­lyt­i­ca, and its par­ent com­pa­ny SCL, spe­cial­ize in using AI and Big Data psy­cho­me­t­ric analy­sis on hun­dreds of mil­lions of Amer­i­cans in order to mod­el indi­vid­ual behav­ior. SCL devel­ops strate­gies to use that infor­ma­tion, and manip­u­late search engine results to change pub­lic opin­ion (the Trump cam­paign was appar­ent­ly very big into AI and Big Data dur­ing the cam­paign).

Indi­vid­ual social media users receive mes­sages craft­ed to influ­ence them, gen­er­at­ed by the (in effec­tr) Nazi AI at the core of this media engine, using Big Data to tar­get the indi­vid­ual user!

As the arti­cle notes, not only are Cam­bridge Analytica/SCL are using their pro­pa­gan­da tech­niques to shape US pub­lic opin­ion in a fas­cist direc­tion, but they are achiev­ing this by uti­liz­ing their pro­pa­gan­da machine to char­ac­ter­ize all news out­lets to the left of Bri­et­bart as “fake news” that can’t be trust­ed.

In short, the secre­tive far-right bil­lion­aire (Robert Mer­cer), joined at the hip with Steve Ban­non, is run­ning mul­ti­ple firms spe­cial­iz­ing in mass psy­cho­me­t­ric pro­fil­ing based on data col­lect­ed from Face­book and oth­er social media. Mercer/Bannon/Cambridge Analytica/SCL are using Naz­i­fied AI and Big Data to devel­op mass pro­pa­gan­da cam­paigns to turn the pub­lic against every­thing that isn’t Bri­et­bart­ian by con­vinc­ing the pub­lic that all non-Bri­et­bart­ian media out­lets are con­spir­ing to lie to the pub­lic. [10]

This is the ulti­mate Ser­pen­t’s Walk scenario–a Naz­i­fied Arti­fi­cial Intel­li­gence draw­ing on Big Data gleaned from the world’s inter­net and social media oper­a­tions to shape pub­lic opin­ion, tar­get indi­vid­ual users, shape search engine results and even feed­back to Trump while he is giv­ing press con­fer­ences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

3b. Some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 [11] and 946 [12], we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er [13] tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA. Face­book’s Build­ing 8 is pat­terned after DARPA:  ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

3c. We present still more about Face­book’s brain-to-com­put­er [16] inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

3d. Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions turn­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers [17] gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. . . .”

3e. The next ver­sion of Amazon’s Echo, the Echo Look [19], has a micro­phone and cam­era so it can take pic­tures of you and give you fash­ion advice. This is an AI-dri­ven device designed to placed in your bed­room to cap­ture audio and video. The images and videos are stored indef­i­nite­ly in the Ama­zon cloud. When Ama­zon was asked if the pho­tos, videos, and the data gleaned from the Echo Look would be sold to third par­ties, Ama­zon didn’t address that ques­tion. It would appear that sell­ing off your pri­vate info col­lect­ed from these devices is pre­sum­ably anoth­er fea­ture of the Echo Look: ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck [20] that is worth some fur­ther dis­sec­tion. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions [21] of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they ‘can’t spec­u­late’ on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.
  2. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”
  3. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits [8] and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

3f. Face­book has been devel­op­ing new arti­fi­cial intel­li­gence (AI) tech­nol­o­gy to clas­si­fy pic­tures on your Face­book page:

“Face­book Qui­et­ly Used AI to Solve Prob­lem of Search­ing Through Your Pho­tos” by Dave Ger­sh­gorn [Quartz]; Nextgov.com; 2/2/2017. [18]

For the past few months, Face­book has secret­ly been rolling out a new fea­ture to U.S. users: the abil­i­ty to search pho­tos by what’s depict­ed in them, rather than by cap­tions or tags.

The idea itself isn’t new: Google Pho­tos had this fea­ture built in when it launched in 2015. But on Face­book, the update solves a long­stand­ing orga­ni­za­tion prob­lem. It means final­ly being able to find that pic­ture of your friend’s dog from 2013, or the self­ie your mom post­ed from Mount Rush­more in 2009… with­out 20 min­utes of scrolling.

To make pho­tos search­able, Face­book ana­lyzes every sin­gle image uploaded to the site, gen­er­at­ing rough descrip­tions of each one. This data is pub­licly available—there’s even a Chrome exten­sion that will show you what Facebook’s arti­fi­cial intel­li­gence thinks is in each picture—and the descrip­tions can also be read out loud for Face­book users who are vision-impaired.

For now, the image descrip­tions are vague, but expect them to get a lot more pre­cise. Today’s announce­ment spec­i­fied the AI can iden­ti­fy the col­or and type of clothes a per­son is wear­ing, as well as famous loca­tions and land­marks, objects, ani­mals and scenes (gar­den, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same func­tion­al­i­ty would even­tu­al­ly come for videos, too.

Face­book has in the past cham­pi­oned plans to make all of its visu­al con­tent searchable—especially Face­book Live. At the company’s 2016 devel­op­er con­fer­ence, head of applied machine learn­ing Joaquin Quiñonero Can­dela said one day AI would watch every Live video hap­pen­ing around the world. If users want­ed to watch some­one snow­board­ing in real time, they would just type “snow­board­ing” into Facebook’s search bar. On-demand view­ing would take on a whole new mean­ing.

There are pri­va­cy con­sid­er­a­tions, how­ev­er. Being able to search pho­tos for spe­cif­ic cloth­ing or reli­gious place of wor­ship, for exam­ple, could make it easy to tar­get Face­book users based on reli­gious belief. Pho­to search also extends Facebook’s knowl­edge of users beyond what they like and share, to what they actu­al­ly do in real life. That could allow for far more spe­cif­ic tar­get­ing for adver­tis­ers. As with every­thing on Face­book, fea­tures have their cost—your data.

4a. Face­book’s arti­fi­cial intel­li­gence robots have begun talk­ing to each oth­er in their own lan­guage, that their human mas­ters can not under­stand. “ . . . . Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage. . . . The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.) The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up . . .”

“Facebook’s Arti­fi­cial Intel­li­gence Robots Shut Down after They Start Talk­ing to Each Oth­er in Their Own Lan­guage” by Andrew Grif­fin; The Inde­pen­dent; 08/01/2017 [23]

Face­book aban­doned an exper­i­ment after two arti­fi­cial­ly intel­li­gent pro­grams appeared to be chat­ting to each oth­er in a strange lan­guage only they under­stood.

The two chat­bots came to cre­ate their own changes to Eng­lish that made it eas­i­er for them to work – but which remained mys­te­ri­ous to the humans that sup­pos­ed­ly look after them.

The bizarre dis­cus­sions came as Face­book chal­lenged its chat­bots to try and nego­ti­ate with each oth­er over a trade, attempt­ing to swap hats, balls and books, each of which were giv­en a cer­tain val­ue. But they quick­ly broke down as the robots appeared to chant at each oth­er in a lan­guage that they each under­stood but which appears most­ly incom­pre­hen­si­ble to humans.

The robots had been instruct­ed to work out how to nego­ti­ate between them­selves, and improve their bar­ter­ing as they went along. But they were not told to use com­pre­hen­si­ble Eng­lish, allow­ing them to cre­ate their own “short­hand”, accord­ing to researchers.

The actu­al nego­ti­a­tions appear very odd, and don’t look espe­cial­ly use­ful:

Bob: i can i i every­thing else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i every­thing else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

But there appear to be some rules to the speech. The way the chat­bots keep stress­ing their own name appears to a part of their nego­ti­a­tions, not sim­ply a glitch in the way the mes­sages are read out.

Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage.

They might have formed as a kind of short­hand, allow­ing them to talk more effec­tive­ly.

“Agents will drift off under­stand­able lan­guage and invent code­words for them­selves,” Face­book Arti­fi­cial Intel­li­gence Research division’s vis­it­ing researcher Dhruv Batra said. “Like if I say ‘the’ five times, you inter­pret that to mean I want five copies of this item. This isn’t so dif­fer­ent from the way com­mu­ni­ties of humans cre­ate short­hands.”

The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.)

The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up, accord­ing to a paper pub­lished by FAIR.

Anoth­er study at Ope­nAI found that arti­fi­cial intel­li­gence could be encour­aged to cre­ate a lan­guage, mak­ing itself more effi­cient and bet­ter at com­mu­ni­cat­ing as it did so [62].

9b. Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well [24]:

“ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . .

“Face­book Teach­es Bots How to Nego­ti­ate. They Learn to Lie Instead” by Liat Clark; Wired; 06/15/2017 [24]

Facebook’s 100,000-strong bot empire [63] is boom­ing – but it has a prob­lem. Each bot is designed to offer a dif­fer­ent ser­vice through the Mes­sen­ger app: it could book you a car, or order a deliv­ery, for instance. The point is to improve cus­tomer expe­ri­ences, but also to mas­sive­ly expand Messenger’s com­mer­cial sell­ing pow­er.

“We think you should mes­sage a busi­ness just the way you would mes­sage a friend,” Mark Zucker­berg said on stage at the social network’s F8 con­fer­ence in 2016. Fast for­ward one year, how­ev­er, and Mes­sen­ger VP David Mar­cus seemed to be cor­rect­ing [64] the public’s appar­ent mis­con­cep­tion that Facebook’s bots resem­bled real AI. “We nev­er called them chat­bots. We called them bots. Peo­ple took it too lit­er­al­ly in the first three months that the future is going to be con­ver­sa­tion­al.” The bots are instead a com­bi­na­tion of machine learn­ing and nat­ur­al lan­guage learn­ing, that can some­times trick a user just enough to think they are hav­ing a basic dia­logue. Not often enough, though, in Messenger’s case. So in April, menu options were rein­stat­ed in the con­ver­sa­tions.

Now, Face­book thinks it has made progress in address­ing this issue. But it might just have cre­at­ed anoth­er prob­lem for itself.

The Face­book Arti­fi­cial Intel­li­gence Research (FAIR) group, in col­lab­o­ra­tion with Geor­gia Insti­tute of Tech­nol­o­gy, has released [65] code that it says will allow bots to nego­ti­ate. The prob­lem? A paper [65] pub­lished this week on the R&D reveals that the nego­ti­at­ing bots learned to lie. Facebook’s chat­bots are in dan­ger of becom­ing a lit­tle too much like real-world sales agents.

“For the first time, we show it is pos­si­ble to train end-to-end mod­els for nego­ti­a­tion, which must learn both lin­guis­tic and rea­son­ing skills with no anno­tat­ed dia­logue states,” the researchers explain. The research shows that the bots can plan ahead by sim­u­lat­ing pos­si­ble future con­ver­sa­tions.

The team trained the bots on a mas­sive dataset of nat­ur­al lan­guage nego­ti­a­tions between two peo­ple (5,808), where they had to decide how to split and share a set of items both held sep­a­rate­ly, of dif­fer­ing val­ues. They were first trained to respond based on the “like­li­hood” of the direc­tion a human con­ver­sa­tion would take. How­ev­er, the bots can also be trained to “max­imise reward”, instead.

When the bots were trained pure­ly to max­imise the like­li­hood of human con­ver­sa­tion, the chat flowed but the bots were “over­ly will­ing to com­pro­mise”. The research team decid­ed this was unac­cept­able, due to low­er deal rates. So it used sev­er­al dif­fer­ent meth­ods to make the bots more com­pet­i­tive and essen­tial­ly self-serv­ing, includ­ing ensur­ing the val­ue of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘rein­force­ment learn­ing’ and ‘dia­log roll­outs’. The tech­niques used to teach the bots to max­imise the reward improved their nego­ti­at­ing skills, a lit­tle too well.

“We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,” writes the team. “Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learnt to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.”

So, its AI is a nat­ur­al liar.

But its lan­guage did improve, and the bots were able to pro­duce nov­el sen­tences, which is real­ly the whole point of the exer­cise. We hope. Rather than it learn­ing to be a hard nego­tia­tor in order to sell the heck out of what­ev­er wares or ser­vices a com­pa­ny wants to tout on Face­book. “Most” human sub­jects inter­act­ing with the bots were in fact not aware they were con­vers­ing with a bot, and the best bots achieved bet­ter deals as often as worse deals. . . .

. . . . Face­book, as ever, needs to tread care­ful­ly here, though. Also announced at its F8 con­fer­ence this year [66], the social net­work is work­ing on a high­ly ambi­tious project to help peo­ple type with only their thoughts.

“Over the next two years, we will be build­ing sys­tems that demon­strate the capa­bil­i­ty to type at 100 [words per minute] by decod­ing neur­al activ­i­ty devot­ed to speech,” said Regi­na Dugan, who pre­vi­ous­ly head­ed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and wor­thy ven­ture when aimed at “peo­ple with com­mu­ni­ca­tion dis­or­ders”, as Dugan sug­gest­ed it might be, if this were to become stan­dard and inte­grat­ed into Facebook’s archi­tec­ture, the social network’s savvy bots of two years from now might be able to pre­empt your lan­guage even faster, and for­mu­late the ide­al bar­gain­ing lan­guage. Start prac­tic­ing your pok­er face/mind/sentence struc­ture, now.

10. Digress­ing slight­ly to the use of DNA-based mem­o­ry sys­tems, we get a look at the present and pro­ject­ed future of that tech­nol­o­gy. Just imag­ine the poten­tial abus­es of this tech­nol­o­gy, and its [seem­ing­ly inevitable] mar­riage with AI!

“A Liv­ing Hard Dri­ve That Can Copy Itself” by Gina Kola­ta; The New York Times; 7/13/2017. [25]

. . . . George Church, a geneti­cist at Har­vard one of the authors of the new study, recent­ly encod­ed his own book, “Rege­n­e­sis,” into bac­te­r­i­al DNA and made 90 bil­lion copies of it. “A record for pub­li­ca­tion,” he said in an inter­view. . . .

. . . . In 1994, [USC math­e­mati­cian Dr. Leonard] Adel­man report­ed that he had stored data in DNA and used it as a com­put­er to solve a math prob­lem. He deter­mined that DNA can store a mil­lion mil­lion times more data than a com­pact disc in the same space. . . .

. . . .DNA is nev­er going out of fash­ion. “Organ­isms have been stor­ing infor­ma­tion in DNA for bil­lions of years, and it is still read­able,” Dr. Adel­man said. He not­ed that mod­ern bac­te­ria can read genes recov­ered from insects trapped in amber for mil­lions of years. . . .

. . . . The idea is to have bac­te­ria engi­neered as record­ing devices drift up to the brain in the blood and take notes for a while. Sci­en­tists would then extract the bac­te­ria and exam­ine their DNA to see what they had observed in the brain neu­rons. Dr. Church and his col­leagues have already shown in past research that bac­te­ria can record DNA in cells, if the DNA is prop­er­ly tagged. . . .

11. Hawk­ing recent­ly warned of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy.

“Stephen Hawk­ing Warns Arti­fi­cial Intel­li­gence Could End Mankind” by Rory Cel­lan-Jones; BBC News; 12/02/2014. [26]

Prof Stephen Hawk­ing, one of Britain’s pre-emi­nent sci­en­tists, has said that efforts to cre­ate think­ing machines pose a threat to our very exis­tence.

He told the BBC:“The devel­op­ment of full arti­fi­cial intel­li­gence could spell the end of the human race.”

His warn­ing came in response to a ques­tion about a revamp of the tech­nol­o­gy he uses to com­mu­ni­cate, which involves a basic form of AI. . . .

. . . . Prof Hawk­ing says the prim­i­tive forms of arti­fi­cial intel­li­gence devel­oped so far have already proved very use­ful, but he fears the con­se­quences of cre­at­ing some­thing that can match or sur­pass humans.

“It would take off on its own, and re-design itself at an ever increas­ing rate,” he said.

“Humans, who are lim­it­ed by slow bio­log­i­cal evo­lu­tion, could­n’t com­pete, and would be super­seded.” . . . .

12.  In L‑2 [6] (record­ed in Jan­u­ary of 1995–20 years before Hawk­ing’s warn­ing) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems.

13. This descrip­tion con­cludes with an arti­cle about Elon Musk, who’s pre­dic­tions about AI sup­ple­ment those made by Stephen Hawk­ing. (CORRECTION: Mr. Emory mis-states Mr. Has­s­abis’s name as “Den­nis.”)

“Elon Musk’s Bil­lion-Dol­lar Cru­sade to Stop the A.I. Apoc­a­lypse” by Mau­reen Dowd; Van­i­ty Fair; April 2917. [67]

It was just a friend­ly lit­tle argu­ment about the fate of human­i­ty. Demis Has­s­abis, a lead­ing cre­ator of advanced arti­fi­cial intel­li­gence, was chat­ting with Elon Musk [68], a lead­ing doom­say­er, about the per­ils of arti­fi­cial intel­li­gence.

They are two of the most con­se­quen­tial and intrigu­ing men in Sil­i­con Val­ley who don’t live there. Has­s­abis, a co-founder of the mys­te­ri­ous Lon­don lab­o­ra­to­ry Deep­Mind, had come to Musk’s SpaceX rock­et fac­to­ry, out­side Los Ange­les, a few years ago. They were in the can­teen, talk­ing, as a mas­sive rock­et part tra­versed over­head. Musk explained that his ulti­mate goal at SpaceX was the most impor­tant project in the world: inter­plan­e­tary col­o­niza­tion.

Has­s­abis replied that, in fact, he was work­ing on the most impor­tant project in the world: devel­op­ing arti­fi­cial super-intel­li­gence. Musk coun­tered that this was one rea­son we need­ed to col­o­nize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on human­i­ty. Amused, Has­s­abis said that A.I. would sim­ply fol­low humans to Mars. . . .

. . . .  Peter Thiel [69], the bil­lion­aire ven­ture cap­i­tal­ist and Don­ald Trump [70] advis­er who co-found­ed Pay­Pal with Musk and others—and who in Decem­ber helped gath­er skep­ti­cal Sil­i­con Val­ley titans, includ­ing Musk, for a meet­ing with the pres­i­dent-elect [71]—told me a sto­ry about an investor in Deep­Mind who joked as he left a meet­ing that he ought to shoot Has­s­abis on the spot, because it was the last chance to save the human race.

Elon Musk began warn­ing about the pos­si­bil­i­ty of A.I. run­ning amok three years ago. It prob­a­bly hadn’t eased his mind when one of Hassabis’s part­ners in Deep­Mind, Shane Legg, stat­ed flat­ly, “I think human extinc­tion will prob­a­bly occur, and tech­nol­o­gy will like­ly play a part in this.” . . . .