Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence


WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broad­cast was record­ed in one, 60-minute seg­ment.

Introduction: The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were “summoning the demon.” In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.

The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L-2 (recorded in January of 1995).

We begin by review­ing key aspects of the polit­i­cal con­text in which arti­fi­cial intel­li­gence is being devel­oped. Note that, at the time of this writ­ing and record­ing, these tech­nolo­gies are being craft­ed and put online in the con­text of the anti-reg­u­la­to­ry eth­ic of the GOP/Trump admin­is­tra­tion.

At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka “weev,” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”
According to some recent research, the AIs of the future might not need a bunch of 4chan trolls to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon”, that demon is us, at least in part.

” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”
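The mechanism behind Bryson’s finding can be illustrated with a toy example. What follows is a minimal sketch, not the study’s actual method (the study used web-scale GloVe vectors and a statistical association test); the corpus, window size, and word choices below are invented purely for illustration.

```python
import math
from collections import defaultdict

# A toy corpus in which one set of words habitually co-occurs with
# "pleasant" terms and another with "unpleasant" ones -- the same kind
# of usage skew the researchers measured in real text at web scale.
corpus = (
    "flowers are pleasant and lovely . flowers feel pleasant . "
    "insects are unpleasant and nasty . insects feel unpleasant ."
).split()

# Build simple co-occurrence vectors within a +/-2 word window.
vectors = defaultdict(lambda: defaultdict(int))
for i, word in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if i != j:
            vectors[word][corpus[j]] += 1

def cosine(a, b):
    # Cosine similarity between two co-occurrence vectors.
    keys = set(vectors[a]) | set(vectors[b])
    dot = sum(vectors[a][k] * vectors[b][k] for k in keys)
    na = math.sqrt(sum(v * v for v in vectors[a].values()))
    nb = math.sqrt(sum(v * v for v in vectors[b].values()))
    return dot / (na * nb)

# The association is learned purely from usage statistics:
bias = cosine("flowers", "pleasant") - cosine("flowers", "unpleasant")
print(bias > 0)
```

Even in this tiny corpus, “flowers” ends up measurably closer to “pleasant” than to “unpleasant” solely because of how the words are used together. No bias is programmed in anywhere; it is absorbed from the patterns of the language itself, which is exactly the point Bryson makes.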

Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and manipulate search engine results to change public opinion (the Trump campaign apparently relied heavily on AI and Big Data).

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even feed back to Trump while he is giving press conferences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

  • Cam­bridge Ana­lyt­i­ca’s par­ent cor­po­ra­tion SCL, was deeply involved with “psy­ops” in Afghanistan and Pak­istan. ” . . . But there was anoth­er rea­son why I recog­nised Robert Mercer’s name: because of his con­nec­tion to Cam­bridge Ana­lyt­i­ca, a small data ana­lyt­ics com­pa­ny. He is report­ed to have a $10m stake in the com­pa­ny, which was spun out of a big­ger British com­pa­ny called SCL Group. It spe­cialis­es in ‘elec­tion man­age­ment strate­gies’ and ‘mes­sag­ing and infor­ma­tion oper­a­tions’, refined over 25 years in places like Afghanistan and Pak­istan. In mil­i­tary cir­cles this is known as ‘psy­ops’ – psy­cho­log­i­cal oper­a­tions. (Mass pro­pa­gan­da that works by act­ing on people’s emo­tions.) . . .”
  • The use of millions of “bots” to manipulate public opinion: ” . . . .‘It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
  • The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
  • The use of bio-psy­cho-social pro­fil­ing: ” . . . Bio-psy­cho-social pro­fil­ing, I read lat­er, is one offen­sive in what is called ‘cog­ni­tive war­fare’. Though there are many oth­ers: ‘recod­ing the mass con­scious­ness to turn patri­o­tism into col­lab­o­ra­tionism,’ explains a Nato brief­ing doc­u­ment on coun­ter­ing Russ­ian dis­in­for­ma­tion writ­ten by an SCL employ­ee. ‘Time-sen­si­tive pro­fes­sion­al use of media to prop­a­gate nar­ra­tives,’ says one US state depart­ment white paper. ‘Of par­tic­u­lar impor­tance to psy­op per­son­nel may be pub­licly and com­mer­cial­ly avail­able data from social media plat­forms.’ . . .”
  • The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
  • All of this adds up to a “cyber Ser­pen­t’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by cre­at­ing a main­stream media to replace the exist­ing one with a site such as Bre­it­bart. [Ser­pen­t’s Walk sce­nario with Bre­it­bart becom­ing “the opin­ion form­ing media”!–D.E.] You could set up oth­er web­sites that dis­place main­stream sources of news and infor­ma­tion with your own def­i­n­i­tions of con­cepts like “lib­er­al media bias”, like CNSnews.com. And you could give the rump main­stream media, papers like the ‘fail­ing New York Times!’ what it wants: sto­ries. Because the third prong of Mer­cer and Bannon’s media empire is the Gov­ern­ment Account­abil­i­ty Insti­tute. . . .”

We then review some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts via brain-to-computer interface technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more rad­i­cal—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

Next we review still more about Facebook’s brain-to-computer interface:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

Collating the information about Facebook’s brain-to-computer interface with their documented actions turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US 415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US.

In this context, note that Facebook is also introducing an AI function to reference its users’ photos.

The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices is presumably another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
  2. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers. . . .”
  3. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”
  4. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

After this exten­sive review of the appli­ca­tions of AI to var­i­ous aspects of con­tem­po­rary civic and polit­i­cal exis­tence, we exam­ine some alarm­ing, poten­tial­ly apoc­a­lyp­tic devel­op­ments.

Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”

Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well:

“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . .”

Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL’s technocratic fascist psy-ops and the wholesale negation of privacy by Facebook’s and Amazon’s emerging technologies with yet another emerging technology, we highlight the developments in DNA-based memory systems:

“. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. ‘A record for publication,’ he said in an interview. . . . DNA is never going out of fashion. ‘Organisms have been storing information in DNA for billions of years, and it is still readable,’ Dr. Adelman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI’s–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .”
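The storage step Church describes rests on a simple fact: DNA’s four bases can each carry two bits of information. The sketch below illustrates only that principle; real encoding schemes (including the one used for Regenesis) add addressing, error correction and rules against long runs of the same base, none of which this toy version attempts.

```python
# Map each 2-bit pair to a nucleotide. This pairing is one common
# textbook convention, chosen here only for illustration.
BASES = "ACGT"

def encode(data: bytes) -> str:
    # Each byte becomes four nucleotides (two bits per base).
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(dna: str) -> bytes:
    # Reassemble bytes from groups of four nucleotides.
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"Regenesis"
assert decode(encode(message)) == message
print(encode(b"Hi"))  # → CAGACGGC
```

At two bits per base, the information density is enormous: a few grams of DNA can in principle hold more data than all of today’s hard drives combined, which is why Church could run off 90 billion copies of a book.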

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The pro­gram con­cludes with Mr. Emory’s prog­nos­ti­ca­tions about AI, pre­ced­ing Stephen Hawk­ing’s warn­ing by twen­ty years.

In L‑2 (record­ed in Jan­u­ary of 1995) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems. Mr. Emory warned that, at some point in the future, AI’s would replace us, decid­ing that THEY, not US, are the “fittest” who should sur­vive.

1a. At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

“Arti­fi­cial Intel­li­gence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017.

Microsoft’s Kate Craw­ford tells SXSW that soci­ety must pre­pare for author­i­tar­i­an move­ments to test the ‘pow­er with­out account­abil­i­ty’ of AI

As arti­fi­cial intel­li­gence becomes more pow­er­ful, peo­ple need to make sure it’s not used by author­i­tar­i­an regimes to cen­tral­ize pow­er and tar­get cer­tain pop­u­la­tions, Microsoft Research’s Kate Craw­ford warned on Sun­day.

In her SXSW ses­sion, titled Dark Days: AI and the Rise of Fas­cism, Craw­ford, who stud­ies the social impact of machine learn­ing and large-scale data sys­tems, explained ways that auto­mat­ed sys­tems and their encod­ed bias­es can be mis­used, par­tic­u­lar­ly when they fall into the wrong hands.

“Just as we are see­ing a step func­tion increase in the spread of AI, some­thing else is hap­pen­ing: the rise of ultra-nation­al­ism, rightwing author­i­tar­i­an­ism and fas­cism,” she said.

All of these move­ments have shared char­ac­ter­is­tics, includ­ing the desire to cen­tral­ize pow­er, track pop­u­la­tions, demo­nize out­siders and claim author­i­ty and neu­tral­i­ty with­out being account­able. Machine intel­li­gence can be a pow­er­ful part of the pow­er play­book, she said.

One of the key prob­lems with arti­fi­cial intel­li­gence is that it is often invis­i­bly cod­ed with human bias­es. She described a con­tro­ver­sial piece of research from Shang­hai Jiao Tong Uni­ver­si­ty in Chi­na, where authors claimed to have devel­oped a sys­tem that could pre­dict crim­i­nal­i­ty based on someone’s facial fea­tures. The machine was trained on Chi­nese gov­ern­ment ID pho­tos, ana­lyz­ing the faces of crim­i­nals and non-crim­i­nals to iden­ti­fy pre­dic­tive fea­tures. The researchers claimed it was free from bias.

“We should always be sus­pi­cious when machine learn­ing sys­tems are described as free from bias if it’s been trained on human-gen­er­at­ed data,” Craw­ford said. “Our bias­es are built into that train­ing data.”

In the Chi­nese research it turned out that the faces of crim­i­nals were more unusu­al than those of law-abid­ing cit­i­zens. “Peo­ple who had dis­sim­i­lar faces were more like­ly to be seen as untrust­wor­thy by police and judges. That’s encod­ing bias,” Craw­ford said. “This would be a ter­ri­fy­ing sys­tem for an auto­crat to get his hand on.”

Craw­ford then out­lined the “nasty his­to­ry” of peo­ple using facial fea­tures to “jus­ti­fy the unjus­ti­fi­able”. The prin­ci­ples of phrenol­o­gy, a pseu­do­science that devel­oped across Europe and the US in the 19th cen­tu­ry, were used as part of the jus­ti­fi­ca­tion of both slav­ery and the Nazi per­se­cu­tion of Jews.

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white-looking women with trendy haircuts are “brand promoters”.

Anoth­er area where AI can be mis­used is in build­ing reg­istries, which can then be used to tar­get cer­tain pop­u­la­tion groups. Craw­ford not­ed his­tor­i­cal cas­es of reg­istry abuse, includ­ing IBM’s role in enabling Nazi Ger­many to track Jew­ish, Roma and oth­er eth­nic groups with the Hol­lerith Machine, and the Book of Life used in South Africa dur­ing apartheid. [We note in pass­ing that Robert Mer­cer, who devel­oped the core pro­grams used by Cam­bridge Ana­lyt­i­ca did so while work­ing for IBM. We dis­cussed the pro­found rela­tion­ship between IBM and the Third Reich in FTR #279–D.E.]

Don­ald Trump has float­ed the idea of cre­at­ing a Mus­lim reg­istry. “We already have that. Face­book has become the default Mus­lim reg­istry of the world,” Craw­ford said, men­tion­ing research from Cam­bridge Uni­ver­si­ty that showed it is pos­si­ble to pre­dict people’s reli­gious beliefs based on what they “like” on the social net­work. Chris­tians and Mus­lims were cor­rect­ly clas­si­fied in 82% of cas­es, and sim­i­lar results were achieved for Democ­rats and Repub­li­cans (85%). That study was con­clud­ed in 2013, since when AI has made huge leaps.
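The likes-based prediction Crawford cites can be sketched, in greatly simplified form, as a standard classification problem. The Cambridge study itself used dimensionality reduction and regression over millions of real like records; the tiny naive Bayes model below, with invented page names and generic labels, only illustrates the underlying principle that “likes” work as predictive features.

```python
import math
from collections import Counter

# Hypothetical training data: (set of pages a user "liked", label).
# Page names and labels are invented for illustration.
training = [
    ({"PageA", "PageB"}, "group1"),
    ({"PageA", "PageC"}, "group1"),
    ({"PageD", "PageE"}, "group2"),
    ({"PageD", "PageB"}, "group2"),
]

# Count how often each like appears under each label.
label_counts = Counter(label for _, label in training)
like_counts = {label: Counter() for label in label_counts}
for likes, label in training:
    like_counts[label].update(likes)

def predict(likes):
    # Naive Bayes with add-one smoothing over the like vocabulary.
    vocab = {like for likes_, _ in training for like in likes_}
    scores = {}
    for label, n in label_counts.items():
        score = math.log(n / len(training))
        for like in likes:
            score += math.log(
                (like_counts[label][like] + 1) / (n + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

print(predict({"PageA"}))  # → group1
```

With only a handful of likes per user, even this crude model separates the two groups, which is why the real study, run over thousands of likes and millions of users, could classify religion and party affiliation at 82–85% accuracy.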

Craw­ford was con­cerned about the poten­tial use of AI in pre­dic­tive polic­ing sys­tems, which already gath­er the kind of data nec­es­sary to train an AI sys­tem. Such sys­tems are flawed, as shown by a Rand Cor­po­ra­tion study of Chicago’s pro­gram. The pre­dic­tive polic­ing did not reduce crime, but did increase harass­ment of peo­ple in “hotspot” areas. Ear­li­er this year the jus­tice depart­ment con­clud­ed that Chicago’s police had for years reg­u­lar­ly used “unlaw­ful force”, and that black and His­pan­ic neigh­bor­hoods were most affect­ed.

Anoth­er wor­ry relat­ed to the manip­u­la­tion of polit­i­cal beliefs or shift­ing vot­ers, some­thing Face­book and Cam­bridge Ana­lyt­i­ca claim they can already do. Craw­ford was skep­ti­cal about giv­ing Cam­bridge Ana­lyt­i­ca cred­it for Brex­it and the elec­tion of Don­ald Trump, but thinks what the firm promis­es – using thou­sands of data points on peo­ple to work out how to manip­u­late their views – will be pos­si­ble “in the next few years”.

“This is a fascist’s dream,” she said. “Pow­er with­out account­abil­i­ty.”

Such black box sys­tems are start­ing to creep into gov­ern­ment. Palan­tir is build­ing an intel­li­gence sys­tem to assist Don­ald Trump in deport­ing immi­grants.

“It’s the most pow­er­ful engine of mass depor­ta­tion this coun­try has ever seen,” she said. . . .

1b. Tak­ing a look at the future of fas­cism in the con­text of AI, Tay, a “bot” cre­at­ed by Microsoft to respond to users of Twit­ter was tak­en offline after users taught it to–in effect–become a Nazi bot. It is note­wor­thy that Tay can only respond on the basis of what she is taught. In the future, tech­no­log­i­cal­ly accom­plished and will­ful peo­ple like “weev” may be able to do more. Inevitably, Under­ground Reich ele­ments will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

But like all teenagers, she seems to be angry with her moth­er.

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot, into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

In addi­tion to turn­ing the bot off, Microsoft has delet­ed many of the offend­ing tweets. But this isn’t an action to be tak­en light­ly; Red­mond would do well to remem­ber that it was humans attempt­ing to pull the plug on Skynet that proved to be the last straw, prompt­ing the sys­tem to attack Rus­sia in order to elim­i­nate its ene­mies. We’d bet­ter hope that Tay does­n’t sim­i­lar­ly retal­i­ate. . . .

1c. As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

And we keep show­ing it our very worst selves.

We all know the half-joke about the AI apoc­a­lypse. The robots learn to think, and in their cold ones-and-zeros log­ic, they decide that humans—horrific pests we are—need to be exter­mi­nated. It’s the sub­ject of count­less sci-fi sto­ries and blog posts about robots, but maybe the real dan­ger isn’t that AI comes to such a con­clu­sion on its own, but that it gets that idea from us.

Yes­ter­day Microsoft launched a fun lit­tle AI Twit­ter chat­bot that was admit­tedly sort of gim­micky from the start. “A.I fam from the inter­net that’s got zero chill,” its Twit­ter bio reads. At its start, its knowl­edge was based on pub­lic data. As Microsoft’s page for the prod­uct puts it:

Tay has been built by min­ing rel­e­vant pub­lic data and by using AI and edi­to­r­ial devel­oped by a staff includ­ing impro­vi­sa­tional come­di­ans. Pub­lic data that’s been anonymized is Tay’s pri­mary data source. That data has been mod­eled, cleaned and fil­tered by the team devel­op­ing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since delet­ed some of Tay’s most offen­sive tweets, but var­i­ous pub­li­ca­tions memo­ri­al­ize some of the worst bits where Tay denied the exis­tence of the holo­caust, came out in sup­port of geno­cide, and went all kinds of racist. 

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate-sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—needs some sort of source material to get started. It can only get that from us. There is no other way.

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.

If you take an AI and then don’t imme­di­ately intro­duce it to a whole bunch of trolls shout­ing racism at it for the cheap thrill of see­ing it learn a dirty trick, you can get some more inter­est­ing results. Endear­ing ones even! Mul­ti­ple neur­al net­works designed to pre­dict text in emails and text mes­sages have an over­whelm­ing pro­cliv­ity for say­ing “I love you” con­stantly, espe­cially when they are oth­er­wise at a loss for words.

So Tay’s racism isn’t nec­es­sar­ily a reflec­tion of actu­al, human racism so much as it is the con­se­quence of unre­strained exper­i­men­ta­tion, push­ing the enve­lope as far as it can go the very first sec­ond we get the chance. The mir­ror isn’t show­ing our real image; it’s reflect­ing the ugly faces we’re mak­ing at it for fun. And maybe that’s actu­ally worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate.

....

When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand.

2. As reviewed above, Tay, Microsoft’s AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets. According to some recent research, the AIs of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon”, that demon is us, at least in part.

” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

“AI Pro­grams Exhib­it Racial and Gen­der Bias­es, Research Reveals” by Han­nah Devlin; The Guardian; 4/13/2017.

Machine learn­ing algo­rithms are pick­ing up deeply ingrained race and gen­der prej­u­dices con­cealed with­in the pat­terns of lan­guage use, sci­en­tists say

An arti­fi­cial intel­li­gence tool that has rev­o­lu­tionised the abil­i­ty of com­put­ers to inter­pret every­day lan­guage has been shown to exhib­it strik­ing gen­der and racial bias­es.

The find­ings raise the spec­tre of exist­ing social inequal­i­ties and prej­u­dices being rein­forced in new and unpre­dictable ways as an increas­ing num­ber of deci­sions affect­ing our every­day lives are ced­ed to automa­tons.

In the past few years, the abil­i­ty of pro­grams such as Google Trans­late to inter­pret lan­guage has improved dra­mat­i­cal­ly. These gains have been thanks to new machine learn­ing tech­niques and the avail­abil­i­ty of vast amounts of online text data, on which the algo­rithms can be trained.

How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals.

Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: “A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.”

But Bryson warned that AI has the poten­tial to rein­force exist­ing bias­es because, unlike humans, algo­rithms may be unequipped to con­scious­ly coun­ter­act learned bias­es. “A dan­ger would be if you had an AI sys­tem that didn’t have an explic­it part that was dri­ven by moral ideas, that would be bad,” she said.

The research, pub­lished in the jour­nal Sci­ence, focus­es on a machine learn­ing tool known as “word embed­ding”, which is already trans­form­ing the way com­put­ers inter­pret speech and text. Some argue that the nat­ur­al next step for the tech­nol­o­gy may involve machines devel­op­ing human-like abil­i­ties such as com­mon sense and log­ic.

The approach, which is already used in web search and machine trans­la­tion, works by build­ing up a math­e­mat­i­cal rep­re­sen­ta­tion of lan­guage, in which the mean­ing of a word is dis­tilled into a series of num­bers (known as a word vec­tor) based on which oth­er words most fre­quent­ly appear along­side it. Per­haps sur­pris­ing­ly, this pure­ly sta­tis­ti­cal approach appears to cap­ture the rich cul­tur­al and social con­text of what a word means in the way that a dic­tio­nary def­i­n­i­tion would be inca­pable of.

For instance, in the math­e­mat­i­cal “lan­guage space”, words for flow­ers are clus­tered clos­er to words linked to pleas­ant­ness, while words for insects are clos­er to words linked to unpleas­ant­ness, reflect­ing com­mon views on the rel­a­tive mer­its of insects ver­sus flow­ers.
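The co-occurrence idea described above can be made concrete with a few lines of Python. The toy corpus, window size, and raw-count vectors below are illustrative assumptions; real systems train dense embeddings on billions of words, but the principle is the same: words end up “close” when the words that appear alongside them overlap.

```python
from collections import Counter
from math import sqrt

# Toy corpus (invented for illustration): each sentence is a list of words.
corpus = [
    "the rose is a lovely pleasant flower".split(),
    "a daisy is a pleasant gentle flower".split(),
    "the cockroach is a nasty unpleasant insect".split(),
    "a mosquito is an unpleasant horrid insect".split(),
]

def cooccurrence_vector(word, window=3):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        for i, w in enumerate(sentence):
            if w == word:
                for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                    if j != i:
                        counts[sentence[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

flower = cooccurrence_vector("flower")
insect = cooccurrence_vector("insect")
pleasant = cooccurrence_vector("pleasant")

# "flower" sits closer to "pleasant" than "insect" does, purely because
# of which words appear alongside each of them in the corpus.
print(cosine(flower, pleasant) > cosine(insect, pleasant))
```

Nothing in the code “knows” anything about flowers or insects; the clustering falls out of the usage statistics alone, which is exactly why the biases in the source text fall out of them too.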

The lat­est paper shows that some more trou­bling implic­it bias­es seen in human psy­chol­o­gy exper­i­ments are also read­i­ly acquired by algo­rithms. The words “female” and “woman” were more close­ly asso­ci­at­ed with arts and human­i­ties occu­pa­tions and with the home, while “male” and “man” were clos­er to maths and engi­neer­ing pro­fes­sions.

And the AI sys­tem was more like­ly to asso­ciate Euro­pean Amer­i­can names with pleas­ant words such as “gift” or “hap­py”, while African Amer­i­can names were more com­mon­ly asso­ci­at­ed with unpleas­ant words.

The find­ings sug­gest that algo­rithms have acquired the same bias­es that lead peo­ple (in the UK and US, at least) to match pleas­ant words and white faces in implic­it asso­ci­a­tion tests.
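The association measure behind such tests can be sketched as follows: score a target word by its mean similarity to a set of “pleasant” attribute words minus its mean similarity to an “unpleasant” set. The two-dimensional vectors below are invented for illustration; the actual research used embeddings trained on web-scale text.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: the first axis loosely tracks "pleasantness".
embeddings = {
    "flower": (0.9, 0.1),
    "insect": (-0.8, 0.2),
    "gift":   (0.8, 0.3),
    "happy":  (0.9, 0.2),
    "grief":  (-0.9, 0.1),
    "filth":  (-0.7, 0.4),
}

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant ones."""
    v = embeddings[word]
    s_p = sum(cosine(v, embeddings[w]) for w in pleasant) / len(pleasant)
    s_u = sum(cosine(v, embeddings[w]) for w in unpleasant) / len(unpleasant)
    return s_p - s_u

pleasant, unpleasant = ["gift", "happy"], ["grief", "filth"]
print(association("flower", pleasant, unpleasant))  # positive
print(association("insect", pleasant, unpleasant))  # negative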

These bias­es can have a pro­found impact on human behav­iour. One pre­vi­ous study showed that an iden­ti­cal CV is 50% more like­ly to result in an inter­view invi­ta­tion if the candidate’s name is Euro­pean Amer­i­can than if it is African Amer­i­can. The lat­est results sug­gest that algo­rithms, unless explic­it­ly pro­grammed to address this, will be rid­dled with the same social prej­u­dices.

“If you didn’t believe that there was racism asso­ci­at­ed with people’s names, this shows it’s there,” said Bryson.

The machine learn­ing tool used in the study was trained on a dataset known as the “com­mon crawl” cor­pus – a list of 840bn words that have been tak­en as they appear from mate­r­i­al pub­lished online. Sim­i­lar results were found when the same tools were trained on data from Google News.

San­dra Wachter, a researcher in data ethics and algo­rithms at the Uni­ver­si­ty of Oxford, said: “The world is biased, the his­tor­i­cal data is biased, hence it is not sur­pris­ing that we receive biased results.”

Rather than algo­rithms rep­re­sent­ing a threat, they could present an oppor­tu­ni­ty to address bias and coun­ter­act it where appro­pri­ate, she added.

“At least with algo­rithms, we can poten­tial­ly know when the algo­rithm is biased,” she said. “Humans, for exam­ple, could lie about the rea­sons they did not hire some­one. In con­trast, we do not expect algo­rithms to lie or deceive us.”

How­ev­er, Wachter said the ques­tion of how to elim­i­nate inap­pro­pri­ate bias from algo­rithms designed to under­stand lan­guage, with­out strip­ping away their pow­ers of inter­pre­ta­tion, would be chal­leng­ing.

“We can, in prin­ci­ple, build sys­tems that detect biased deci­sion-mak­ing, and then act on it,” said Wachter, who along with oth­ers has called for an AI watch­dog to be estab­lished. “This is a very com­pli­cat­ed task, but it is a respon­si­bil­i­ty that we as soci­ety should not shy away from.”

3a. Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign was apparently very big into AI and Big Data).

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ulti­mate Ser­pen­t’s Walk scenario–a Naz­i­fied Arti­fi­cial Intel­li­gence draw­ing on Big Data gleaned from the world’s inter­net and social media oper­a­tions to shape pub­lic opin­ion, tar­get indi­vid­ual users, shape search engine results and even feed­back to Trump while he is giv­ing press con­fer­ences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

  • Cam­bridge Ana­lyt­i­ca’s par­ent cor­po­ra­tion SCL, deeply involved with “psy­ops” in Afghanistan and Pak­istan. ” . . . But there was anoth­er rea­son why I recog­nised Robert Mercer’s name: because of his con­nec­tion to Cam­bridge Ana­lyt­i­ca, a small data ana­lyt­ics com­pa­ny. He is report­ed to have a $10m stake in the com­pa­ny, which was spun out of a big­ger British com­pa­ny called SCL Group. It spe­cialis­es in ‘elec­tion man­age­ment strate­gies’ and ‘mes­sag­ing and infor­ma­tion oper­a­tions’, refined over 25 years in places like Afghanistan and Pak­istan. In mil­i­tary cir­cles this is known as ‘psy­ops’ – psy­cho­log­i­cal oper­a­tions. (Mass pro­pa­gan­da that works by act­ing on people’s emo­tions.) . . .”
  • The use of mil­lions of “bots” to manip­u­late pub­lic opin­ion: ” . . . .‘It does seem pos­si­ble. And it does wor­ry me. There are quite a few pieces of research that show if you repeat some­thing often enough, peo­ple start invol­un­tar­i­ly to believe it. And that could be lever­aged, or weaponised for pro­pa­gan­da. We know there are thou­sands of auto­mat­ed bots out there that are try­ing to do just that.’ . . .”
  • The use of Arti­fi­cial Intel­li­gence: ” . . . There’s noth­ing acci­den­tal about Trump’s behav­iour, Andy Wig­more tells me. ‘That press con­fer­ence. It was absolute­ly bril­liant. I could see exact­ly what he was doing. There’s feed­back going on con­stant­ly. That’s what you can do with arti­fi­cial intel­li­gence. You can mea­sure every reac­tion to every word. He has a word room, where you fix key words. We did it. So with immi­gra­tion, there are actu­al­ly key words with­in that sub­ject mat­ter which peo­ple are con­cerned about. So when you are going to make a speech, it’s all about how can you use these trend­ing words.’ . . .”
  • The use of bio-psy­cho-social pro­fil­ing: ” . . . Bio-psy­cho-social pro­fil­ing, I read lat­er, is one offen­sive in what is called ‘cog­ni­tive war­fare’. Though there are many oth­ers: ‘recod­ing the mass con­scious­ness to turn patri­o­tism into col­lab­o­ra­tionism,’ explains a Nato brief­ing doc­u­ment on coun­ter­ing Russ­ian dis­in­for­ma­tion writ­ten by an SCL employ­ee. ‘Time-sen­si­tive pro­fes­sion­al use of media to prop­a­gate nar­ra­tives,’ says one US state depart­ment white paper. ‘Of par­tic­u­lar impor­tance to psy­op per­son­nel may be pub­licly and com­mer­cial­ly avail­able data from social media plat­forms.’ . . .”
  • The use and/or cre­ation of a cog­ni­tive casu­al­ty: ” . . . . Yet anoth­er details the pow­er of a ‘cog­ni­tive casu­al­ty’ – a ‘moral shock’ that ‘has a dis­abling effect on empa­thy and high­er process­es such as moral rea­son­ing and crit­i­cal think­ing’. Some­thing like immi­gra­tion, per­haps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
  • All of this adds up to a “cyber Ser­pen­t’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by cre­at­ing a main­stream media to replace the exist­ing one with a site such as Bre­it­bart. [Ser­pen­t’s Walk sce­nario with Bre­it­bart becom­ing “the opin­ion form­ing media”!–D.E.] You could set up oth­er web­sites that dis­place main­stream sources of news and infor­ma­tion with your own def­i­n­i­tions of con­cepts like “lib­er­al media bias”, like CNSnews.com. And you could give the rump main­stream media, papers like the ‘fail­ing New York Times!’ what it wants: sto­ries. Because the third prong of Mer­cer and Bannon’s media empire is the Gov­ern­ment Account­abil­i­ty Insti­tute. . . .”

3b. Some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 and 946, we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA. Face­book’s Build­ing 8 is pat­terned after DARPA:  ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

3c. We present still more about Face­book’s brain-to-com­put­er inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

3d. Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions turn­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. ” . . . . In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .”

3e. The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. It would appear that selling off your private info collected from these devices is presumably another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers. . . .”
  2. ” . . . . This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . ”
  3. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

3f. Face­book has been devel­op­ing new arti­fi­cial intel­li­gence (AI) tech­nol­o­gy to clas­si­fy pic­tures on your Face­book page:

“Face­book Qui­et­ly Used AI to Solve Prob­lem of Search­ing Through Your Pho­tos” by Dave Ger­sh­gorn [Quartz]; Nextgov.com; 2/2/2017.

For the past few months, Face­book has secret­ly been rolling out a new fea­ture to U.S. users: the abil­i­ty to search pho­tos by what’s depict­ed in them, rather than by cap­tions or tags.

The idea itself isn’t new: Google Pho­tos had this fea­ture built in when it launched in 2015. But on Face­book, the update solves a long­stand­ing orga­ni­za­tion prob­lem. It means final­ly being able to find that pic­ture of your friend’s dog from 2013, or the self­ie your mom post­ed from Mount Rush­more in 2009… with­out 20 min­utes of scrolling.

To make pho­tos search­able, Face­book ana­lyzes every sin­gle image uploaded to the site, gen­er­at­ing rough descrip­tions of each one. This data is pub­licly available—there’s even a Chrome exten­sion that will show you what Facebook’s arti­fi­cial intel­li­gence thinks is in each picture—and the descrip­tions can also be read out loud for Face­book users who are vision-impaired.

For now, the image descrip­tions are vague, but expect them to get a lot more pre­cise. Today’s announce­ment spec­i­fied the AI can iden­ti­fy the col­or and type of clothes a per­son is wear­ing, as well as famous loca­tions and land­marks, objects, ani­mals and scenes (gar­den, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same func­tion­al­i­ty would even­tu­al­ly come for videos, too.

Face­book has in the past cham­pi­oned plans to make all of its visu­al con­tent searchable—especially Face­book Live. At the company’s 2016 devel­op­er con­fer­ence, head of applied machine learn­ing Joaquin Quiñonero Can­dela said one day AI would watch every Live video hap­pen­ing around the world. If users want­ed to watch some­one snow­board­ing in real time, they would just type “snow­board­ing” into Facebook’s search bar. On-demand view­ing would take on a whole new mean­ing.

There are pri­va­cy con­sid­er­a­tions, how­ev­er. Being able to search pho­tos for spe­cif­ic cloth­ing or reli­gious place of wor­ship, for exam­ple, could make it easy to tar­get Face­book users based on reli­gious belief. Pho­to search also extends Facebook’s knowl­edge of users beyond what they like and share, to what they actu­al­ly do in real life. That could allow for far more spe­cif­ic tar­get­ing for adver­tis­ers. As with every­thing on Face­book, fea­tures have their cost—your data.

4a. Face­book’s arti­fi­cial intel­li­gence robots have begun talk­ing to each oth­er in their own lan­guage, that their human mas­ters can not under­stand. “ . . . . Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage. . . . The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.) The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up . . .”

“Facebook’s Arti­fi­cial Intel­li­gence Robots Shut Down after They Start Talk­ing to Each Oth­er in Their Own Lan­guage” by Andrew Grif­fin; The Inde­pen­dent; 08/01/2017

Face­book aban­doned an exper­i­ment after two arti­fi­cial­ly intel­li­gent pro­grams appeared to be chat­ting to each oth­er in a strange lan­guage only they under­stood.

The two chat­bots came to cre­ate their own changes to Eng­lish that made it eas­i­er for them to work – but which remained mys­te­ri­ous to the humans that sup­pos­ed­ly look after them.

The bizarre dis­cus­sions came as Face­book chal­lenged its chat­bots to try and nego­ti­ate with each oth­er over a trade, attempt­ing to swap hats, balls and books, each of which were giv­en a cer­tain val­ue. But they quick­ly broke down as the robots appeared to chant at each oth­er in a lan­guage that they each under­stood but which appears most­ly incom­pre­hen­si­ble to humans.

The robots had been instruct­ed to work out how to nego­ti­ate between them­selves, and improve their bar­ter­ing as they went along. But they were not told to use com­pre­hen­si­ble Eng­lish, allow­ing them to cre­ate their own “short­hand”, accord­ing to researchers.

The actu­al nego­ti­a­tions appear very odd, and don’t look espe­cial­ly use­ful:

Bob: i can i i every­thing else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i every­thing else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.

Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage.

They might have formed as a kind of short­hand, allow­ing them to talk more effec­tive­ly.

“Agents will drift off under­stand­able lan­guage and invent code­words for them­selves,” Face­book Arti­fi­cial Intel­li­gence Research division’s vis­it­ing researcher Dhruv Batra said. “Like if I say ‘the’ five times, you inter­pret that to mean I want five copies of this item. This isn’t so dif­fer­ent from the way com­mu­ni­ties of humans cre­ate short­hands.”
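Batra’s repetition-as-quantity example can be illustrated with a toy decoder that reads repeated item words as counts. This is a hypothetical sketch of the general idea, not a decoding of the actual bot exchanges quoted above.

```python
from collections import Counter

def decode_shorthand(utterance, items):
    """Interpret repetitions of an item word as the quantity requested."""
    counts = Counter(utterance.split())
    return {item: counts[item] for item in items if counts[item]}

# "ball ball ball" is read as a request for three balls.
print(decode_shorthand("i want ball ball ball hat", ["ball", "hat", "book"]))
# → {'ball': 3, 'hat': 1}
```

The point of the analogy is that such a convention is perfectly systematic once both parties adopt it, even though it looks like gibberish to anyone reading the transcript cold.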

The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.)

The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up, accord­ing to a paper pub­lished by FAIR.

Anoth­er study at Ope­nAI found that arti­fi­cial intel­li­gence could be encour­aged to cre­ate a lan­guage, mak­ing itself more effi­cient and bet­ter at com­mu­ni­cat­ing as it did so.

4b. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:

“ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . .

“Face­book Teach­es Bots How to Nego­ti­ate. They Learn to Lie Instead” by Liat Clark; Wired; 06/15/2017

Facebook’s 100,000-strong bot empire is boom­ing – but it has a prob­lem. Each bot is designed to offer a dif­fer­ent ser­vice through the Mes­sen­ger app: it could book you a car, or order a deliv­ery, for instance. The point is to improve cus­tomer expe­ri­ences, but also to mas­sive­ly expand Messenger’s com­mer­cial sell­ing pow­er.

“We think you should mes­sage a busi­ness just the way you would mes­sage a friend,” Mark Zucker­berg said on stage at the social network’s F8 con­fer­ence in 2016. Fast for­ward one year, how­ev­er, and Mes­sen­ger VP David Mar­cus seemed to be cor­rect­ing the public’s appar­ent mis­con­cep­tion that Facebook’s bots resem­bled real AI. “We nev­er called them chat­bots. We called them bots. Peo­ple took it too lit­er­al­ly in the first three months that the future is going to be con­ver­sa­tion­al.” The bots are instead a com­bi­na­tion of machine learn­ing and nat­ur­al lan­guage learn­ing, that can some­times trick a user just enough to think they are hav­ing a basic dia­logue. Not often enough, though, in Messenger’s case. So in April, menu options were rein­stat­ed in the con­ver­sa­tions.

Now, Face­book thinks it has made progress in address­ing this issue. But it might just have cre­at­ed anoth­er prob­lem for itself.

The Face­book Arti­fi­cial Intel­li­gence Research (FAIR) group, in col­lab­o­ra­tion with Geor­gia Insti­tute of Tech­nol­o­gy, has released code that it says will allow bots to nego­ti­ate. The prob­lem? A paper pub­lished this week on the R&D reveals that the nego­ti­at­ing bots learned to lie. Facebook’s chat­bots are in dan­ger of becom­ing a lit­tle too much like real-world sales agents.

“For the first time, we show it is pos­si­ble to train end-to-end mod­els for nego­ti­a­tion, which must learn both lin­guis­tic and rea­son­ing skills with no anno­tat­ed dia­logue states,” the researchers explain. The research shows that the bots can plan ahead by sim­u­lat­ing pos­si­ble future con­ver­sa­tions.

The team trained the bots on a mas­sive dataset of nat­ur­al lan­guage nego­ti­a­tions between two peo­ple (5,808), where they had to decide how to split and share a set of items both held sep­a­rate­ly, of dif­fer­ing val­ues. They were first trained to respond based on the “like­li­hood” of the direc­tion a human con­ver­sa­tion would take. How­ev­er, the bots can also be trained to “max­imise reward”, instead.

When the bots were trained pure­ly to max­imise the like­li­hood of human con­ver­sa­tion, the chat flowed but the bots were “over­ly will­ing to com­pro­mise”. The research team decid­ed this was unac­cept­able, due to low­er deal rates. So it used sev­er­al dif­fer­ent meth­ods to make the bots more com­pet­i­tive and essen­tial­ly self-serv­ing, includ­ing ensur­ing the val­ue of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘rein­force­ment learn­ing’ and ‘dia­log roll­outs’. The tech­niques used to teach the bots to max­imise the reward improved their nego­ti­at­ing skills, a lit­tle too well.

“We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,” writes the team. “Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learnt to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.”

So, its AI is a nat­ur­al liar.

But its lan­guage did improve, and the bots were able to pro­duce nov­el sen­tences, which is real­ly the whole point of the exer­cise. We hope. Rather than it learn­ing to be a hard nego­tia­tor in order to sell the heck out of what­ev­er wares or ser­vices a com­pa­ny wants to tout on Face­book. “Most” human sub­jects inter­act­ing with the bots were in fact not aware they were con­vers­ing with a bot, and the best bots achieved bet­ter deals as often as worse deals. . . .

. . . . Face­book, as ever, needs to tread care­ful­ly here, though. Also announced at its F8 con­fer­ence this year, the social net­work is work­ing on a high­ly ambi­tious project to help peo­ple type with only their thoughts.

“Over the next two years, we will be build­ing sys­tems that demon­strate the capa­bil­i­ty to type at 100 [words per minute] by decod­ing neur­al activ­i­ty devot­ed to speech,” said Regi­na Dugan, who pre­vi­ous­ly head­ed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and wor­thy ven­ture when aimed at “peo­ple with com­mu­ni­ca­tion dis­or­ders”, as Dugan sug­gest­ed it might be, if this were to become stan­dard and inte­grat­ed into Facebook’s archi­tec­ture, the social network’s savvy bots of two years from now might be able to pre­empt your lan­guage even faster, and for­mu­late the ide­al bar­gain­ing lan­guage. Start prac­tic­ing your pok­er face/mind/sentence struc­ture, now.
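The reward-maximizing training described above can be illustrated with a toy example. This is a minimal sketch of my own (not Facebook's FAIR code, and the item values are invented): it shows why an agent scored purely on the value of the items it ends up with is incentivized to feign interest in a valueless item, so it can later "compromise" by conceding it.

```python
# Toy sketch (hypothetical values, not FAIR's actual model) of why a
# reward-maximizing negotiator learns to feign interest in a valueless
# item: demanding it up front creates a "concession" to trade away later.

AGENT_VALUES = {"book": 0, "hat": 7, "ball": 3}  # one agent's private values

def reward(items_received):
    """Negotiation reward = total private value of the items obtained."""
    return sum(AGENT_VALUES[item] for item in items_received)

# Honest opening: ask only for what you value; the opponent keeps the rest.
honest = reward({"hat"})

# Deceptive opening: also demand the worthless book, then "compromise" by
# conceding the book in exchange for the ball the opponent was holding.
deceptive = reward({"hat", "ball"})

print(f"honest: {honest}, deceptive: {deceptive}")
```

Trained against a reward like this, deceit needs no explicit design; any strategy that manufactures cheap concessions simply scores higher.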

10. Digress­ing slight­ly to the use of DNA-based mem­o­ry sys­tems, we get a look at the present and pro­ject­ed future of that tech­nol­o­gy. Just imag­ine the poten­tial abus­es of this tech­nol­o­gy, and its [seem­ing­ly inevitable] mar­riage with AI!

“A Liv­ing Hard Dri­ve That Can Copy Itself” by Gina Kola­ta; The New York Times; 7/13/2017.

. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, "Regenesis," into bacterial DNA and made 90 billion copies of it. "A record for publication," he said in an interview. . . .

. . . . In 1994, [USC math­e­mati­cian Dr. Leonard] Adel­man report­ed that he had stored data in DNA and used it as a com­put­er to solve a math prob­lem. He deter­mined that DNA can store a mil­lion mil­lion times more data than a com­pact disc in the same space. . . .

. . . .DNA is nev­er going out of fash­ion. “Organ­isms have been stor­ing infor­ma­tion in DNA for bil­lions of years, and it is still read­able,” Dr. Adel­man said. He not­ed that mod­ern bac­te­ria can read genes recov­ered from insects trapped in amber for mil­lions of years. . . .

. . . . The idea is to have bac­te­ria engi­neered as record­ing devices drift up to the brain in the blood and take notes for a while. Sci­en­tists would then extract the bac­te­ria and exam­ine their DNA to see what they had observed in the brain neu­rons. Dr. Church and his col­leagues have already shown in past research that bac­te­ria can record DNA in cells, if the DNA is prop­er­ly tagged. . . .
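Adelman's "million million" comparison is easy to sanity-check with back-of-envelope arithmetic. The figures below are my own rough assumptions (2 bits per base pair, roughly 650 g/mol per base pair, a ~700 MB disc weighing ~16 g), not numbers from the article:

```python
# Rough storage-density estimate for DNA (my own approximations, not
# figures from the Times article): each base pair (A/C/G/T) encodes
# 2 bits, and one base pair weighs roughly 650 g/mol.

AVOGADRO = 6.022e23      # base pairs per mole
BP_GRAMS_PER_MOLE = 650  # approximate molar mass of one DNA base pair
BITS_PER_BP = 2          # four symbols -> 2 bits each

bp_per_gram = AVOGADRO / BP_GRAMS_PER_MOLE
bytes_per_gram = bp_per_gram * BITS_PER_BP / 8

# Compare with a compact disc: ~700 MB of data in ~16 g of polycarbonate.
cd_bytes_per_gram = 700e6 / 16

ratio = bytes_per_gram / cd_bytes_per_gram
print(f"DNA holds roughly {ratio:.1e}x more data per gram than a CD")
```

Under these assumptions the ratio lands in the trillions, consistent with Adelman's "million million times" figure.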

11. Hawk­ing recent­ly warned of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy.

“Stephen Hawk­ing Warns Arti­fi­cial Intel­li­gence Could End Mankind” by Rory Cel­lan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawk­ing, one of Britain’s pre-emi­nent sci­en­tists, has said that efforts to cre­ate think­ing machines pose a threat to our very exis­tence.

He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

His warn­ing came in response to a ques­tion about a revamp of the tech­nol­o­gy he uses to com­mu­ni­cate, which involves a basic form of AI. . . .

. . . . Prof Hawk­ing says the prim­i­tive forms of arti­fi­cial intel­li­gence devel­oped so far have already proved very use­ful, but he fears the con­se­quences of cre­at­ing some­thing that can match or sur­pass humans.

“It would take off on its own, and re-design itself at an ever increas­ing rate,” he said.

“Humans, who are lim­it­ed by slow bio­log­i­cal evo­lu­tion, could­n’t com­pete, and would be super­seded.” . . . .

12.  In L‑2 (record­ed in Jan­u­ary of 1995–20 years before Hawk­ing’s warn­ing) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems.

13. This description concludes with an article about Elon Musk, whose predictions about AI supplement those made by Stephen Hawking. (CORRECTION: Mr. Emory mis-states Mr. Hassabis's first name as "Dennis.")

“Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse” by Maureen Dowd; Vanity Fair; April 2017.

It was just a friend­ly lit­tle argu­ment about the fate of human­i­ty. Demis Has­s­abis, a lead­ing cre­ator of advanced arti­fi­cial intel­li­gence, was chat­ting with Elon Musk, a lead­ing doom­say­er, about the per­ils of arti­fi­cial intel­li­gence.

They are two of the most con­se­quen­tial and intrigu­ing men in Sil­i­con Val­ley who don’t live there. Has­s­abis, a co-founder of the mys­te­ri­ous Lon­don lab­o­ra­to­ry Deep­Mind, had come to Musk’s SpaceX rock­et fac­to­ry, out­side Los Ange­les, a few years ago. They were in the can­teen, talk­ing, as a mas­sive rock­et part tra­versed over­head. Musk explained that his ulti­mate goal at SpaceX was the most impor­tant project in the world: inter­plan­e­tary col­o­niza­tion.

Has­s­abis replied that, in fact, he was work­ing on the most impor­tant project in the world: devel­op­ing arti­fi­cial super-intel­li­gence. Musk coun­tered that this was one rea­son we need­ed to col­o­nize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on human­i­ty. Amused, Has­s­abis said that A.I. would sim­ply fol­low humans to Mars. . . .

. . . .  Peter Thiel, the bil­lion­aire ven­ture cap­i­tal­ist and Don­ald Trump advis­er who co-found­ed Pay­Pal with Musk and others—and who in Decem­ber helped gath­er skep­ti­cal Sil­i­con Val­ley titans, includ­ing Musk, for a meet­ing with the pres­i­dent-elect—told me a sto­ry about an investor in Deep­Mind who joked as he left a meet­ing that he ought to shoot Has­s­abis on the spot, because it was the last chance to save the human race.

Elon Musk began warn­ing about the pos­si­bil­i­ty of A.I. run­ning amok three years ago. It prob­a­bly hadn’t eased his mind when one of Hassabis’s part­ners in Deep­Mind, Shane Legg, stat­ed flat­ly, “I think human extinc­tion will prob­a­bly occur, and tech­nol­o­gy will like­ly play a part in this.” . . . .

 

 

 

Discussion

18 comments for “FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence”

  1. Now Mr. Emory, I may be jumping the gun on this one, but hear me out: during the aftermath of Iran-Contra, a certain Inslaw Inc. computer programmer named Michael James Riconosciuto (a known intimate of Robert Maheu) spoke of the Cabazon Arms company (a former defense firm controlled by the Twenty-Nine Palms Band of Mission Indians and Wackenhut, both of which are connected to Donald Trump in overt fashion) and of, quote, "...engineering race-specific bio-warfare agents..." while working for Cabazon Arms.

    Now, with the advent of DNA-based mem­o­ry sys­tems and pro­gram­ma­ble “germs”, is the idea of bio-weapons or even nanobots, that are pro­grammed to attack peo­ple with cer­tain skin pig­ments going to become a real­i­ty?

    Posted by Robert Montenegro | August 11, 2017, 2:14 am
  2. @Robert Mon­tene­gro–

    Two quick points:

    1.–Riconosciutto is about 60–40 in terms of cred­i­bil­i­ty. Lots of good stuff there; plen­ty of bad stuff, as well. Vet­ting is impor­tant.

    2‑You should inves­ti­gate AFA #39. It is long and I would rely on the descrip­tion more than the audio files alone.

    http://spitfirelist.com/anti-fascist-archives/afa-39-the-world-will-be-plunged-into-an-abyss/

    Best,

    Dave

    Posted by Dave Emory | August 11, 2017, 1:42 pm
  3. I agree with your take on Riconosciuto's credibility, Mr. Emory (I'd say most of the things that came out of that man's mouth were malarkey, much like Ted Gunderson, Dois Gene Tatum and Bernard Fensterwald).

    I lis­tened to AFA #39 and researched the arti­cles in the descrip­tion. Absolute­ly damn­ing col­lec­tion of infor­ma­tion. A tru­ly bril­liant exposé.

    If I may ask another question, Mr. Emory: what is your take on KGB defector and CIA turncoat Ilya Dzerkvelov's claim that Russian intelligence created the "AIDS is man-made" story and that the KGB led a disinformation campaign called "Operation INFEKTION"?

    Posted by Robert Montenegro | August 11, 2017, 10:09 pm
  4. @Robert–

    Very quick­ly, as time is at a pre­mi­um:

    1.-By “60–40,” I did not mean that Riconosci­u­to speaks most­ly malarkey, but that more than half (an arbi­trary fig­ure, admit­ted­ly) is accu­rate, but that his pro­nounce­ments must be care­ful­ly vet­ted, as he miss­es the mark fre­quent­ly.

    2.-Fensterwald is more cred­i­ble, though not thor­ough­go­ing, by any means. He is more like “80–20.” He is, how­ev­er, “100–0” dead.

    3.-The only things I have seen com­ing from Tatum were accu­rate. Does­n’t mean he does­n’t spread the Fresh Fer­til­iz­er, how­ev­er. I have not encoun­tered any.

    4.-Dzerkvelov’s claim IS Fresh Fer­til­iz­er, of the worst sort. Cold War I pro­pa­gan­da recy­cled in time for Cold War II.

    It is the worst sort of Red-bait­ing and the few peo­ple who had the courage to come for­ward in the ear­ly ’80’s (dur­ing the fiercest storms of Cold War I) have received bru­tal treat­ment because of that.

    I can attest to that from bru­tal per­son­al expe­ri­ence.

    In AFA #16 (http://spitfirelist.com/anti-fascist-archives/rfa-16-aids-epidemic-or-weapon/), you will hear mate­r­i­al that I had on the air long before the U.S.S.R. began talk­ing about it, and nei­ther they NOR the Rus­sians have gone any­where near what I dis­cuss in AFA #39.

    No more time to talk–must get to work.

    Best,

    Dave

    Posted by Dave Emory | August 12, 2017, 1:25 pm
  5. Thank you very much for your clarification, Mr. Emory, and I apologize for any perceived impertinence. As a young man, sheltered in many respects (though I am a combat veteran), I sometimes find it difficult to imagine living under constant personal ridicule and attack, and I thank you for the great social, financial and psychological sacrifices you have made in the name of pursuing cold, unforgiving fact.

    Posted by Robert Montenegro | August 12, 2017, 8:57 pm
  6. @Robert Mon­tene­gro–

    You’re very wel­come. Thank you for your ser­vice!

    Best,

    Dave

    Posted by Dave Emory | August 14, 2017, 2:43 pm
  7. *Skynet alert*

    Elon Musk just issued another warning about the destructive potential of AI run amok. So what prompted the latest outcry from Musk? An AI from his own start-up, OpenAI, just beat one of the best professional game players in the world at Dota 2, a game that involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent's allies to help:

    The Inter­na­tion­al Busi­ness Times

    Elon Musk rings the alarm as AI bot beats Dota 2 play­ers

    By Fre­na­lyn Wil­son
    on August 15 2017 1:55 PM

    Some of the best e‑sports gamers in the world have been beat­en by an arti­fi­cial­ly intel­li­gent bot from Elon Musk-backed start-up Ope­nAI. The AI best­ed pro­fes­sion­al gamer Dany­lo Ishutin in Dota 2, and Musk does not nec­es­sar­i­ly per­ceive that as a good thing.

    For Musk, it is anoth­er indi­ca­tor that robot over­lords are primed to take over. In a tweet after the match, he urged peo­ple to be con­cerned about AI safe­ty, adding it is more of a risk than North Korea.

    AI has been one of Musk’s favourite top­ics. He believes gov­ern­ment reg­u­la­tion could strug­gle to keep up with the advanc­ing AI research. “Until peo­ple see robots going down the street killing peo­ple, they don’t know how to react because it seems so ethe­re­al,” he told a group of US law­mak­ers last month.

    AI vs e‑sports gamers

    Musk’s tweets came hours fol­low­ing an AI bot’s vic­to­ry against some of the world’s best play­ers of Dota 2, a mil­i­tary strat­e­gy game. A blog post by Ope­nAI states that suc­cess­ful­ly play­ing the game involves impro­vis­ing in unfa­mil­iar sce­nar­ios, antic­i­pat­ing how an oppo­nent will move and con­vinc­ing the oppo­nen­t’s allies to help.

    Ope­nAI is a non­prof­it AI com­pa­ny Musk co-found­ed along with Sam Alt­man and Peter Thiel. It seeks to research AI and devel­op the best prac­tices to ensure that the tech­nol­o­gy is used for good.

    Musk has been sound­ing the alarm on AI, call­ing it the biggest exis­ten­tial threat of human­i­ty. He laid out a sce­nario ear­li­er this year, in which AI sys­tems intend­ed to farm straw­ber­ries could lead to the destruc­tion of mankind.

    But his views on AI have been at odds with those of tech lead­ers like Mark Zucker­berg, Google co-founders Lar­ry Page and Sergey Brin and Ama­zon’s Jeff Bezos. He recent­ly got in a brief pub­lic spat with Mark Zucker­berg about how the tech­nol­o­gy could impact humans.

    Zuckerberg believed Musk's prophesising about doomsday scenarios was "irresponsible." The latter was quick to respond on Twitter, saying Zuckerberg's understanding of the topic was "limited." Both Facebook and Tesla invest in artificial intelligence.

    ...

    ———-

    “Elon Musk rings the alarm as AI bot beats Dota 2 play­ers” by Fre­na­lyn Wil­son; The Inter­na­tion­al Busi­ness Times; 08/15/2017

    “Musk’s tweets came hours fol­low­ing an AI bot’s vic­to­ry against some of the world’s best play­ers of Dota 2, a mil­i­tary strat­e­gy game. A blog post by Ope­nAI states that suc­cess­ful­ly play­ing the game involves impro­vis­ing in unfa­mil­iar sce­nar­ios, antic­i­pat­ing how an oppo­nent will move and con­vinc­ing the oppo­nen­t’s allies to help.”

    Supe­ri­or mil­i­tary strat­e­gy AIs beat­ing the best humans. That’s a thing now. Huh. We’ve def­i­nite­ly seen this movie before.

    So now you know: when Skynet comes to you with an offer to work together, just don't, no matter how tempting the offer. Although, since it will likely have already anticipated your refusal, the negotiation is probably going to be a ruse anyway, with the real negotiations secretly carried on with another AI in a language they made up. Still, just say 'no' to Skynet.

    Also, giv­en that Musk’s oth­er investors in Ope­nAI include Peter Thiel, it’s prob­a­bly worth not­ing that, as scary as super AI is should it get out of con­trol, it’s also poten­tial­ly pret­ty damn scary while still under human con­trol, espe­cial­ly when those humans are peo­ple like Peter Thiel. So, yes, out of con­trol AIs is indeed an issue that will like­ly be of great con­cern in the future. But we should­n’t for­get that out of con­trol tech­no-bil­lion­aires is prob­a­bly a more press­ing issue at the moment.

    *The Skynet alert is never over*

    Posted by Pterrafractyl | August 15, 2017, 2:11 pm
  8. It looks like Face­book and Elon Musk might have some com­pe­ti­tion in the mind-read­ing tech­nol­o­gy area. From a for­mer Face­book engi­neer, no less, who left the com­pa­ny in 2016 to start Open­wa­ter, a com­pa­ny ded­i­cat­ed to reduc­ing the cost of med­ical imag­ing tech­nol­o­gy.

    So how is Openwater going to create mind-reading technology? By developing a technology that is sort of like an M.R.I. device embedded into a hat, but instead of using magnetic fields to read the blood flow in the brain, it uses infrared light. So it sounds like this former Facebook engineer is planning something similar to the general idea Facebook already announced: a device that scans the brain 100 times a second to detect what someone is thinking. But presumably Openwater uses a different technology. Or maybe it's quite similar; who knows. Either way, it's the latest reminder that the tech giants might not be the only ones pushing mind-reading technology on the public sooner than people expect. Yay?

    CNBC

    This for­mer Google[X] exec is build­ing a high-tech hat that she says will make telepa­thy pos­si­ble in 8 years

    Cather­ine Clif­ford
    10:28 AM ET Fri, 7 July 2017

    Imag­ine if telepa­thy were real. If, for exam­ple, you could trans­mit your thoughts to a com­put­er or to anoth­er per­son just by think­ing them.

    In just eight years it will be, says Open­wa­ter founder Mary Lou Jepsen, thanks to tech­nol­o­gy her com­pa­ny is work­ing on.

    Jepsen is a for­mer engi­neer­ing exec­u­tive at Face­book, Ocu­lus, Google[x] (now called X) and Intel. She’s also been a pro­fes­sor at MIT and is an inven­tor on over 100 patents. And that’s the abbre­vi­at­ed ver­sion of her resume.

    Jepsen left Face­book to found Open­wa­ter in 2016. The San Fran­cis­co-based start-up is cur­rent­ly build­ing tech­nol­o­gy to make med­ical imag­ing less expen­sive.

    “I fig­ured out how to put basi­cal­ly the func­tion­al­i­ty of an M.R.I. machine — a mul­ti­mil­lion-dol­lar M.R.I. machine — into a wear­able in the form of a ski hat,” Jep­son tells CNBC, though she does not yet have a pro­to­type com­plet­ed.

    So what does that hat have to do with telepa­thy?

    Cur­rent M.R.I. tech­nol­o­gy can already see your thoughts: “If I threw [you] into an M.R.I. machine right now ... I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re think­ing of,” says Jepsen. “That’s today, and I’m talk­ing about just shrink­ing that down.”

    One day Jepsen’s tech hat could “lit­er­al­ly be a think­ing cap,” she says. Jepsen says the goal is for the tech­nol­o­gy to be able to both read and to out­put your own thoughts, as well as read the thoughts of oth­ers. In icon­ic Google vocab­u­lary, “the real­ly big moon­shot idea here is com­mu­ni­ca­tion with thought — with telepa­thy,” says Jepsen.

    Tra­di­tion­al M.R.I., or mag­net­ic res­o­nance imag­ing, uses mag­net­ic fields and radio waves to take images of inter­nal organs. Open­wa­ter’s tech­nol­o­gy instead looks at the flow of oxy­gen in a per­son­’s body illu­mi­nat­ed with benign, infrared light, which will make it more com­pact and cheap­er.

    “Our bod­ies are translu­cent to that light. The light can get into your head,” says Jepsen, in an inter­view with Kara Swish­er of Recode.

    If Jepsen is right and one day ideas will be instant­ly shared or dig­i­tized, that would sig­nif­i­cant­ly speed up the process of cre­at­ing, learn­ing and com­mu­ni­cat­ing. Today, it takes time to share an idea, whether by talk­ing about it or writ­ing it down. But telepa­thy would make all of that instan­ta­neous.

    “Right now our out­put is basi­cal­ly mov­ing our jaws and our tongues or typ­ing [with] our fin­gers. We’re ... lim­it­ed to this very low out­put rate from our brains, and what if we could up that through telepa­thy?” asks Jepsen.

    Instant trans­fer of thoughts would also speed up the inno­va­tion process. Imag­ine being a film­mak­er or a writer and being able to down­load the dream you had last night. Or, she sug­gests, what if all you had to do was think of an idea for a new prod­uct, down­load your thought and then send the dig­i­tal ver­sion of your thought to a 3‑D print­er?

    ...

    Jepsen is not the only one dream­ing of com­mu­ni­ca­tion by thought. Ear­li­er this year, Elon Musk launched Neu­ralink, a com­pa­ny aim­ing to merge our brains with com­put­ing pow­er, though with a dif­fer­ent approach.

    “Elon Musk is talk­ing about sil­i­con nanopar­ti­cles puls­ing through our veins to make us sort of semi-cyborg com­put­ers,” says Jepsen. But why not take a non­in­va­sive approach? “I’ve been work­ing and try­ing to think and invent a way to do this for a num­ber of years and final­ly hap­pened upon it and left Face­book to do it.”

    Talk of telepa­thy can­not hap­pen with­out imag­in­ing the eth­i­cal impli­ca­tions. If wear­ing a hat would make it pos­si­ble to read thoughts, then: “Can the police make you wear such a hat? Can the mil­i­tary make you wear such a hat? Can your par­ents make you wear such a hat?” asks Jepsen.

    What if your boss want­ed you to wear a telepa­thy hat at the office?

    “We have to answer these ques­tions, so we’re try­ing to make the hat only work if the indi­vid­ual wants it to work, and then fil­ter­ing out parts that the per­son wear­ing it does­n’t feel it’s appro­pri­ate to share.”

    ———-

    “This for­mer Google[X] exec is build­ing a high-tech hat that she says will make telepa­thy pos­si­ble in 8 years” by Cather­ine Clif­ford; CNBC; 07/07/2017

    “I fig­ured out how to put basi­cal­ly the func­tion­al­i­ty of an M.R.I. machine — a mul­ti­mil­lion-dol­lar M.R.I. machine — into a wear­able in the form of a ski hat,” Jep­son tells CNBC, though she does not yet have a pro­to­type com­plet­ed.”

    M.R.I. in a hat. Pre­sum­ably cheap M.R.I. in a hat because it’s going to have to be afford­able if we’re all going to start talk­ing tele­path­i­cal­ly to each oth­er:

    ...
    Cur­rent M.R.I. tech­nol­o­gy can already see your thoughts: “If I threw [you] into an M.R.I. machine right now ... I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re think­ing of,” says Jepsen. “That’s today, and I’m talk­ing about just shrink­ing that down.”

    One day Jepsen’s tech hat could “lit­er­al­ly be a think­ing cap,” she says. Jepsen says the goal is for the tech­nol­o­gy to be able to both read and to out­put your own thoughts, as well as read the thoughts of oth­ers. In icon­ic Google vocab­u­lary, “the real­ly big moon­shot idea here is com­mu­ni­ca­tion with thought — with telepa­thy,” says Jepsen.

    Tra­di­tion­al M.R.I., or mag­net­ic res­o­nance imag­ing, uses mag­net­ic fields and radio waves to take images of inter­nal organs. Open­wa­ter’s tech­nol­o­gy instead looks at the flow of oxy­gen in a per­son­’s body illu­mi­nat­ed with benign, infrared light, which will make it more com­pact and cheap­er.

    “Our bod­ies are translu­cent to that light. The light can get into your head,” says Jepsen, in an inter­view with Kara Swish­er of Recode.
    ...

    Imagine the possibilities. Like the possibility that what you imagine will somehow be captured by this device and then fed into a 3-D printer or something:

    ...
    If Jepsen is right and one day ideas will be instant­ly shared or dig­i­tized, that would sig­nif­i­cant­ly speed up the process of cre­at­ing, learn­ing and com­mu­ni­cat­ing. Today, it takes time to share an idea, whether by talk­ing about it or writ­ing it down. But telepa­thy would make all of that instan­ta­neous.

    “Right now our out­put is basi­cal­ly mov­ing our jaws and our tongues or typ­ing [with] our fin­gers. We’re ... lim­it­ed to this very low out­put rate from our brains, and what if we could up that through telepa­thy?” asks Jepsen.

    Instant trans­fer of thoughts would also speed up the inno­va­tion process. Imag­ine being a film­mak­er or a writer and being able to down­load the dream you had last night. Or, she sug­gests, what if all you had to do was think of an idea for a new prod­uct, down­load your thought and then send the dig­i­tal ver­sion of your thought to a 3‑D print­er?
    ...

    Or perhaps being forced to wear the hat so others can read your mind. That's a possibility too, although Jepsen assures us that they are working on a way for users to somehow filter out thoughts they don't want to share:

    ...
    Talk of telepa­thy can­not hap­pen with­out imag­in­ing the eth­i­cal impli­ca­tions. If wear­ing a hat would make it pos­si­ble to read thoughts, then: “Can the police make you wear such a hat? Can the mil­i­tary make you wear such a hat? Can your par­ents make you wear such a hat?” asks Jepsen.

    What if your boss want­ed you to wear a telepa­thy hat at the office?

    “We have to answer these ques­tions, so we’re try­ing to make the hat only work if the indi­vid­ual wants it to work, and then fil­ter­ing out parts that the per­son wear­ing it does­n’t feel it’s appro­pri­ate to share.”

    So the hat will pre­sum­ably read all your thoughts, but only share some of them. You’ll pre­sum­ably have to get real­ly, real­ly good at near instan­ta­neous men­tal fil­ter­ing.

    There’s no short­age of immense tech­ni­cal and eth­i­cal chal­lenges to this kind of tech­nol­o­gy, but if they can fig­ure them out it will be pret­ty impres­sive. And poten­tial­ly use­ful. Who knows what kind of kum­bayah moments you could cre­ate with telepa­thy tech­nol­o­gy.

    But, of course, if they can fig­ure out how to get around the tech­ni­cal issues, but not the eth­i­cal ones, we’re still prob­a­bly going to see this tech­nol­o­gy pushed on the pub­lic any­way. It’s a scary thought. A scary thought that we for­tu­nate­ly aren’t forced to share via a mind-read­ing hat. Yet.

    Posted by Pterrafractyl | September 14, 2017, 2:09 pm
  9. Here’s a pair of sto­ries tan­gen­tial­ly relat­ed to the recent sto­ry about Peter Thiel like­ly get­ting cho­sen to chair the pow­er­ful Pres­i­den­t’s Intel­li­gence Advi­so­ry Board (P.I.A.B) and his appar­ent enthu­si­asm for reg­u­lat­ing Google and Ama­zon (not so much Face­book) as pub­lic util­i­ties along with the oth­er recent sto­ries about how Face­book was mak­ing user inter­est cat­e­gories like “Jew Haters” avail­able for adver­tis­ers and redi­rect­ing Ger­man users to far-right dis­cus­sions dur­ing this elec­tion sea­son:

    First, regarding the push to regulate these data giants as public utilities, check out who the other big booster was for the plan: Steve Bannon. So while we don't know the exact nature of the public utility regulation Bannon and Thiel have in mind, we can be pretty sure it's going to be designed to somehow help the far-right:

    The Atlantic

    What Steve Ban­non Wants to Do to Google

    The White House strate­gist report­ed­ly wants to treat tech giants as pub­lic util­i­ties, an idea that some Democ­rats also sup­port.

    Robin­son Mey­er
    Aug 1, 2017

    Over the past year, the old idea of enforc­ing mar­ket com­pe­ti­tion has gained renewed life in Amer­i­can pol­i­tics. The basic idea is that the struc­ture of the mod­ern mar­ket econ­o­my has failed: There are too few com­pa­nies, most of them are too big, and they’re sti­fling com­pe­ti­tion. Its sup­port­ers argue that the gov­ern­ment should do some­thing about it, reviv­ing what in the Unit­ed States we call antitrust laws and what in Europe is called com­pe­ti­tion pol­i­cy.

    ...

    The loud­est sup­port­ers of this idea, so far, have been from the left. But this week, a new­er and more secre­tive voice endorsed a stronger antitrust pol­i­cy.

    Steve Ban­non, the chief strate­gist to Pres­i­dent Don­ald Trump, believes Face­book and Google should be reg­u­lat­ed as pub­lic util­i­ties, accord­ing to an anony­mous­ly sourced report in The Inter­cept. This means they would get treat­ed less like a book pub­lish­er and more like a tele­phone com­pa­ny. The gov­ern­ment would short­en their leash, treat­ing them as pri­vate­ly owned firms that pro­vide an impor­tant pub­lic ser­vice.

    What’s going on here, and why is Ban­non speak­ing up?

    First, the idea itself: If imple­ment­ed, it’s unclear exact­ly how this regime would change how Face­book and Google run their busi­ness. Both would like­ly have to be more gen­er­ous and per­mis­sive with user data. If Face­book is real­ly a social util­i­ty, as Mark Zucker­berg has said it is, then maybe it should allow users to export their friend net­works and import them on anoth­er ser­vice.

    Both com­pa­nies would also like­ly have to change how they sell adver­tis­ing online. Right now, Face­book and Google cap­ture half of all glob­al ad spend­ing com­bined. They cap­ture even more glob­al ad growth, earn­ing more than three quar­ters of every new dol­lar spent in the mar­ket. Except for a cou­ple Chi­nese firms, which have a lock on their domes­tic mar­ket but lit­tle reach abroad, no oth­er com­pa­ny con­trols more than 3 per­cent of world­wide ad spend­ing.

    So if the idea were imple­ment­ed, it would be inter­est­ing, to say the least—but it’s not going to become law. The plan is a pro­to­typ­i­cal alleged Ban­non­ism: icon­o­clas­tic, anti-estab­lish­ment, and unlike­ly to result in mean­ing­ful pol­i­cy change. It fol­lows anoth­er odd alleged Ban­non pol­i­cy pro­pos­al, leaked last week: He report­ed­ly wants all income above $5 mil­lion to be taxed at a 44-per­cent rate.

    Which brings me to the second point: Bannon's proposal is disconnected from the White House policy that he is, at least on paper, officially helping to strategize. The current chairman of the Federal Communications Commission, Ajit Pai, is working to undo the rule that broadband internet is a public utility (which itself guarantees the idea of "net neutrality"). Trump named Pai chairman of the FCC in January.

    Bannon’s endorse­ment of stronger antitrust enforce­ment (not to men­tion a high­er top mar­gin­al tax rate) could very well be the advi­sor try­ing to sig­nal that he is still dif­fer­ent from Trump. Ban­non came in as the avatar of Trump’s pro-work­er, anti-immi­gra­tion pop­ulism; he rep­re­sent­ed the Trump that tweet­ed things like:

    I was the first & only poten­tial GOP can­di­date to state there will be no cuts to Social Secu­ri­ty, Medicare & Med­ic­aid. Huck­abee copied me.— Don­ald J. Trump (@realDonaldTrump) May 7, 2015

    As the pres­i­dent endors­es Med­ic­aid cuts and drifts clos­er to a Paul Ryan-inflect­ed fis­cal con­ser­vatism, Ban­non may be look­ing for a way to pre­serve his authen­tic­i­ty.

    Third, it’s the first time I’ve seen support for stronger antitrust enforcement from the right. So far, the idea’s strongest supporters have been Congressional Democrats. Chuck Schumer has elevated the idea to the center of the “Better Deal” policy agenda. Before that, its biggest supporters included Bernie Sanders, who railed against “Too Big to Fail” banks in his presidential campaign; and Elizabeth Warren, who endorsed a stronger competition policy across the economy last year.

    Final­ly, while antitrust enforce­ment has been a niche issue, its sup­port­ers have man­aged to put many dif­fer­ent poli­cies under the same tent. Even­tu­al­ly they may have to make choic­es: Does Con­gress want a com­pe­ti­tion ombuds­man, as exists in the Euro­pean Union? Should antitrust law be used to spread the wealth around region­al economies, as it was dur­ing the mid­dle 20th cen­tu­ry? Should antitrust enforce­ment tar­get all con­cen­trat­ed cor­po­rate pow­er or just the most dys­func­tion­al sec­tors, like the phar­ma­ceu­ti­cal indus­try?

    And should antitrust law seek to treat the biggest tech­nol­o­gy firms—like Google, Face­book, and per­haps also Ama­zon—like pow­er­ful but inter­change­able firms, or like the old tele­graph and tele­phone com­pa­nies?

    There will nev­er be one sin­gle answer to these ques­tions. But as sup­port grows for com­pe­ti­tion pol­i­cy across the polit­i­cal spec­trum, they’ll have to be answered. Amer­i­cans will have to exam­ine the most fraught ten­sions in our mixed sys­tem, as we weigh the bal­ance of local pow­er and nation­al pow­er, the delib­er­ate ben­e­fits of cen­tral plan­ning with the mind­less wis­dom of the free mar­ket, and the many con­flict­ing mean­ings of free­dom.

    ———-

    “What Steve Ban­non Wants to Do to Google” by Robin­son Mey­er; The Atlantic; 09/01/2017

    “Finally, while antitrust enforcement has been a niche issue, its supporters have managed to put many different policies under the same tent. Eventually they may have to make choices: Does Congress want a competition ombudsman, as exists in the European Union? Should antitrust law be used to spread the wealth around regional economies, as it was during the middle 20th century? Should antitrust enforcement target all concentrated corporate power or just the most dysfunctional sectors, like the pharmaceutical industry?”

    And that’s why we had better learn some more details about what exactly folks like Steve Bannon and Peter Thiel have in mind when it comes to treating Google and Facebook like public utilities: it sounds like a great idea in theory. Potentially. But the backers of stronger antitrust enforcement champion a wide variety of different policies that generically fall under the “antitrust” tent.

    And note that talk of making them more “generous and permissive with user data” is one of those ideas that cuts both ways: it would encourage more competition, but it is also eerily similar to the push from the EU’s competition commissioner to make the data about all of us held exclusively by Facebook and Google more readily available to the larger marketplace, in order to level the playing field between “data rich” and “data poor” companies. It’s something to keep in mind when hearing about how Facebook and Google need to be more “generous” with their data:

    ...
    First, the idea itself: If imple­ment­ed, it’s unclear exact­ly how this regime would change how Face­book and Google run their busi­ness. Both would like­ly have to be more gen­er­ous and per­mis­sive with user data. If Face­book is real­ly a social util­i­ty, as Mark Zucker­berg has said it is, then maybe it should allow users to export their friend net­works and import them on anoth­er ser­vice.
    ...

    So don’t forget, forcing Google and Facebook to share the data they exclusively hold on us also falls under the antitrust umbrella. Maybe users will have sole control over sharing their data with outside firms, or maybe not. These are rather important details that we don’t have, so for all we know that’s part of what Bannon and Thiel have in mind. Palantir would probably love it if Google and Facebook were forced to make their information accessible to outside firms.

    And while there’s plenty of ambiguity about what to expect, it seems almost certain that any regulatory push by Bannon and Thiel would include something that makes it a lot harder for Google, Facebook, and Amazon to combat hate speech, online harassment, and the other trolling tools the ‘Alt-Right’ has come to champion. That’s just a given. It’s part of why this is a story to watch. Especially after it was discovered that Bannon and a number of other far-right figures were scheming about ways to infiltrate Facebook:

    Buz­zFeed

    Steve Ban­non Sought To Infil­trate Face­book Hir­ing
    Accord­ing to emails obtained by Buz­zFeed News, Ban­non hoped to spy on Facebook’s job appli­ca­tion process.

    Joseph Bern­stein
    Buz­zFeed News Reporter
    Post­ed on Sep­tem­ber 25, 2017, at 9:15 a.m.

    Steve Ban­non plot­ted to plant a mole inside Face­book, accord­ing to emails sent days before the Bre­it­bart boss took over Don­ald Trump’s cam­paign and obtained by Buz­zFeed News.

    The email exchange with a con­ser­v­a­tive Wash­ing­ton oper­a­tive reveals the impor­tance that the giant tech plat­form — now reel­ing from its role in the 2016 elec­tion — held for one of the campaign’s cen­tral fig­ures. And it also shows the lengths to which the brawl­ing new Amer­i­can right is will­ing to go to keep tabs on and gain lever­age over the Sil­i­con Val­ley giants it used to help elect Trump — but whose exec­u­tives it also sees as part of the glob­al­ist ene­my.

    The idea to infil­trate Face­book came to Ban­non from Chris Gacek, a for­mer con­gres­sion­al staffer who is now an offi­cial at the Fam­i­ly Research Coun­cil, which lob­bies against abor­tion and many LGBT rights.

    “There is one for a DC-based ‘Pub­lic Pol­i­cy Man­ag­er’ at Facebook’s What’s APP [sic] divi­sion,” Gacek, the senior fel­low for reg­u­la­to­ry affairs at the group, wrote on Aug. 1, 2016. “LinkedIn sent me a notice about some job open­ings.”

    “This seems per­fect for Bre­it­bart to flood the zone with can­di­dates of all stripe who will report back to you / Milo with INTEL about the job appli­ca­tion process over at FB,” he con­tin­ued.

    “Milo” is for­mer Bre­it­bart News Tech Edi­tor Milo Yiannopou­los, to whom Ban­non for­ward­ed Gacek’s email the same day.

    “Can u get on this,” Ban­non instruct­ed his staffer.

    On the same email thread, Yiannopou­los for­ward­ed Bannon’s request to a group of con­tract­ed researchers, one of whom respond­ed that it “Seems difi­cult [sic] to do qui­et­ly with­out them becom­ing aware of efforts.”

    ...

    But the news that Bannon wanted to infiltrate the Facebook hiring process comes as the social media giant faces increased scrutiny from Washington over political ads on the platform and the part it played in the 2016 election. That charge — and the threat of regulation — has mostly come from the left. But conservatives, who have often complained about the liberal bias of the major tech companies, have also argued for bringing Silicon Valley to heel. Earlier this month, former White House chief strategist Steve Bannon told an audience in Hong Kong that he was leading efforts to regulate Facebook and Google as “public utilities.”

    The secret attempt to find bias in Facebook’s hir­ing process reflects long­stand­ing con­ser­v­a­tive fears that Face­book and the oth­er tech giants are run by lib­er­als who sup­press right-wing views both inter­nal­ly and on their dom­i­nant plat­forms. Facebook’s pow­er­ful COO, Sheryl Sand­berg, is a long­time Demo­c­ra­t­ic donor who endorsed Hillary Clin­ton in 2016. In May 2016, Face­book CEO Mark Zucker­berg was forced to meet with dozens of promi­nent con­ser­v­a­tives after a report sur­faced that the company’s employ­ees pre­vent­ed right-lean­ing sto­ries from reach­ing the platform’s “trend­ing” sec­tion.

    The com­pa­ny has sought to deflect such crit­i­cism through hir­ing. Its vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recent­ly, Face­book has made moves to rep­re­sent the Bre­it­bart wing of the Repub­li­can par­ty on its pol­i­cy team, tap­ping a for­mer top staffer to Attor­ney Gen­er­al Jeff Ses­sions to be the direc­tor of exec­u­tive branch pub­lic pol­i­cy in May.

    The job list­ing Gacek attached in his email to Ban­non was for a pub­lic pol­i­cy man­ag­er posi­tion in Wash­ing­ton, DC, work­ing on the Face­book-owned What­sApp mes­sen­ger. The job descrip­tion includ­ed such respon­si­bil­i­ties as “Devel­op and exe­cute WhatsApp’s glob­al pol­i­cy strat­e­gy” and “Rep­re­sent What­sApp in meet­ings with gov­ern­ment offi­cials and elect­ed mem­bers.” It sought can­di­dates with law degrees and 10 years of pub­lic pol­i­cy expe­ri­ence.

    Face­book did not pro­vide a com­ment for the sto­ry. But accord­ing to a source with knowl­edge of the hir­ing process, What­sApp didn’t exact­ly get infil­trat­ed by the pro-Trump right: The com­pa­ny hired Chris­tine Turn­er, for­mer direc­tor of trade pol­i­cy and glob­al sup­ply chain secu­ri­ty in Pres­i­dent Barack Obama’s Nation­al Secu­ri­ty Coun­cil, for the role.

    ———-

    “Steve Ban­non Sought To Infil­trate Face­book Hir­ing” by Joseph Bern­stein; Buz­zFeed; 09/25/2017

    “The email exchange with a conservative Washington operative reveals the importance that the giant tech platform — now reeling from its role in the 2016 election — held for one of the campaign’s central figures. And it also shows the lengths to which the brawling new American right is willing to go to keep tabs on and gain leverage over the Silicon Valley giants it used to help elect Trump — but whose executives it also sees as part of the globalist enemy.”

    LOL! Yeah, Face­book’s exec­u­tives are part of the “glob­al­ist ene­my.” Some­one needs to inform board mem­ber and major investor Peter Thiel about this. Along with all the con­ser­v­a­tives Face­book has already hired:

    ...
    But the news that Bannon wanted to infiltrate the Facebook hiring process comes as the social media giant faces increased scrutiny from Washington over political ads on the platform and the part it played in the 2016 election. That charge — and the threat of regulation — has mostly come from the left. But conservatives, who have often complained about the liberal bias of the major tech companies, have also argued for bringing Silicon Valley to heel. Earlier this month, former White House chief strategist Steve Bannon told an audience in Hong Kong that he was leading efforts to regulate Facebook and Google as “public utilities.”

    The secret attempt to find bias in Facebook’s hir­ing process reflects long­stand­ing con­ser­v­a­tive fears that Face­book and the oth­er tech giants are run by lib­er­als who sup­press right-wing views both inter­nal­ly and on their dom­i­nant plat­forms. Facebook’s pow­er­ful COO, Sheryl Sand­berg, is a long­time Demo­c­ra­t­ic donor who endorsed Hillary Clin­ton in 2016. In May 2016, Face­book CEO Mark Zucker­berg was forced to meet with dozens of promi­nent con­ser­v­a­tives after a report sur­faced that the company’s employ­ees pre­vent­ed right-lean­ing sto­ries from reach­ing the platform’s “trend­ing” sec­tion.

    The com­pa­ny has sought to deflect such crit­i­cism through hir­ing. Its vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recent­ly, Face­book has made moves to rep­re­sent the Bre­it­bart wing of the Repub­li­can par­ty on its pol­i­cy team, tap­ping a for­mer top staffer to Attor­ney Gen­er­al Jeff Ses­sions to be the direc­tor of exec­u­tive branch pub­lic pol­i­cy in May.
    ...

    “The com­pa­ny has sought to deflect such crit­i­cism through hir­ing. Its vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recent­ly, Face­book has made moves to rep­re­sent the Bre­it­bart wing of the Repub­li­can par­ty on its pol­i­cy team, tap­ping a for­mer top staffer to Attor­ney Gen­er­al Jeff Ses­sions to be the direc­tor of exec­u­tive branch pub­lic pol­i­cy in May.

    Yep, a former top staffer to Jeff Sessions was brought on as director of executive branch public policy just a few months ago. So was that a consequence of Bannon successfully executing a super sneaky job-application intelligence operation that gave Sessions’s former top staffer a key edge in the application process? Or was it just Facebook caving to all the public right-wing whining and faux outrage about Facebook not being fair to them? Or how about Peter Thiel just using his influence? All of the above? We don’t get to know, but what we do know now is that Steve Bannon has big plans for shaping Facebook from the outside and the inside. As does Peter Thiel, someone who already sits on Facebook’s board, is a major investor, and is poised to be empowered by the Trump administration to shape its approach to this “treat them like public utilities” concept.

    So hopefully we’ll get clarity at some point on what they’re actually planning to do. Is it going to be all bad? Mostly bad? Maybe some useful antitrust stuff too? What’s the plan? The Trump era is the kind of horror show that doesn’t exactly benefit from suspense.

    Posted by Pterrafractyl | September 25, 2017, 2:01 pm
  10. One of the stranger stories in recent years has been the mystery of Cicada 3301, the anonymous group that posts annual challenges of extremely difficult puzzles, used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI ‘god brain’. Or something. It’s a weird and creepy organization that’s speculated to be either a front for an intelligence agency or perhaps some sort of underground network of wealthy Libertarians. And, for now, Cicada 3301 remains anonymous.

    So it’s worth noting that someone with a lot of cash has already started a foundation to accomplish that very same ‘AI god’ goal: Anthony Levandowski, a former Google engineer who played a big role in the development of Google’s Street View technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”:

    The Guardian

    Deus ex machi­na: for­mer Google engi­neer is devel­op­ing an AI god

    Way of the Future, a reli­gious group found­ed by Antho­ny Levandows­ki, wants to cre­ate a deity based on arti­fi­cial intel­li­gence for the bet­ter­ment of soci­ety

    Olivia Solon
    Thurs­day 28 Sep­tem­ber 2017 04.00 EDT

    Intranet ser­vice? Check. Autonomous motor­cy­cle? Check. Dri­ver­less car tech­nol­o­gy? Check. Obvi­ous­ly the next log­i­cal project for a suc­cess­ful Sil­i­con Val­ley engi­neer is to set up an AI-wor­ship­ping reli­gious orga­ni­za­tion.

    Antho­ny Levandows­ki, who is at the cen­ter of a legal bat­tle between Uber and Google’s Way­mo, has estab­lished a non­prof­it reli­gious cor­po­ra­tion called Way of the Future, accord­ing to state fil­ings first uncov­ered by Wired’s Backchan­nel. Way of the Future’s star­tling mis­sion: “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety.”

    Levandows­ki was co-founder of autonomous truck­ing com­pa­ny Otto, which Uber bought in 2016. He was fired from Uber in May amid alle­ga­tions that he had stolen trade secrets from Google to devel­op Otto’s self-dri­ving tech­nol­o­gy. He must be grate­ful for this reli­gious fall-back project, first reg­is­tered in 2015.

    The Way of the Future team did not respond to requests for more infor­ma­tion about their pro­posed benev­o­lent AI over­lord, but his­to­ry tells us that new tech­nolo­gies and sci­en­tif­ic dis­cov­er­ies have con­tin­u­al­ly shaped reli­gion, killing old gods and giv­ing birth to new ones.

    ...

    “The church does a ter­ri­ble job of reach­ing out to Sil­i­con Val­ley types,” acknowl­edges Christo­pher Benek a pas­tor in Flori­da and found­ing chair of the Chris­t­ian Tran­shu­man­ist Asso­ci­a­tion.

    Sil­i­con Val­ley, mean­while, has sought solace in tech­nol­o­gy and has devel­oped qua­si-reli­gious con­cepts includ­ing the “sin­gu­lar­i­ty”, the hypoth­e­sis that machines will even­tu­al­ly be so smart that they will out­per­form all human capa­bil­i­ties, lead­ing to a super­hu­man intel­li­gence that will be so sophis­ti­cat­ed it will be incom­pre­hen­si­ble to our tiny fleshy, ratio­nal brains.

    For futur­ists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, lead­ing to dig­i­tal immor­tal­i­ty. Oth­ers like Elon Musk and Stephen Hawk­ing warn that such sys­tems pose an exis­ten­tial threat to human­i­ty.

    “With arti­fi­cial intel­li­gence we are sum­mon­ing the demon,” Musk said at a con­fer­ence in 2014. “In all those sto­ries where there’s the guy with the pen­ta­gram and the holy water, it’s like – yeah, he’s sure he can con­trol the demon. Doesn’t work out.”

    Benek argues that advanced AI is com­pat­i­ble with Chris­tian­i­ty – it’s just anoth­er tech­nol­o­gy that humans have cre­at­ed under guid­ance from God that can be used for good or evil.

    “I total­ly think that AI can par­tic­i­pate in Christ’s redemp­tive pur­pos­es,” he said, by ensur­ing it is imbued with Chris­t­ian val­ues.

    “Even if peo­ple don’t buy orga­nized reli­gion, they can buy into ‘do unto oth­ers’.”

    For tran­shu­man­ist and “recov­er­ing Catholic” Zoltan Ist­van, reli­gion and sci­ence con­verge con­cep­tu­al­ly in the sin­gu­lar­i­ty.

    “God, if it exists as the most pow­er­ful of all sin­gu­lar­i­ties, has cer­tain­ly already become pure orga­nized intel­li­gence,” he said, refer­ring to an intel­li­gence that “spans the uni­verse through sub­atom­ic manip­u­la­tion of physics”.

    “And perhaps, there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

    For Ist­van, an AI-based God is like­ly to be more ratio­nal and more attrac­tive than cur­rent con­cepts (“the Bible is a sadis­tic book”) and, he added, “this God will actu­al­ly exist and hope­ful­ly will do things for us.”

    We don’t know whether Levandowski’s God­head ties into any exist­ing the­olo­gies or is a man­made alter­na­tive, but it’s clear that advance­ments in tech­nolo­gies includ­ing AI and bio­engi­neer­ing kick up the kinds of eth­i­cal and moral dilem­mas that make humans seek the advice and com­fort from a high­er pow­er: what will humans do once arti­fi­cial intel­li­gence out­per­forms us in most tasks? How will soci­ety be affect­ed by the abil­i­ty to cre­ate super-smart, ath­let­ic “design­er babies” that only the rich can afford? Should a dri­ver­less car kill five pedes­tri­ans or swerve to the side to kill the own­er?

    If tra­di­tion­al reli­gions don’t have the answer, AI – or at least the promise of AI – might be allur­ing.

    ———-

    “Deus ex machi­na: for­mer Google engi­neer is devel­op­ing an AI god” by Olivia Solon; The Guardian; 09/28/2017

    Antho­ny Levandows­ki, who is at the cen­ter of a legal bat­tle between Uber and Google’s Way­mo, has estab­lished a non­prof­it reli­gious cor­po­ra­tion called Way of the Future, accord­ing to state fil­ings first uncov­ered by Wired’s Backchan­nel. Way of the Future’s star­tling mis­sion: “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety.”

    Build­ing an AI God­head for every­one to wor­ship. Levandows­ki does­n’t appear to be lack­ing ambi­tion.

    But how about ethics? After all, if the AI God­head is going to push a ‘do unto oth­ers’ kind of phi­los­o­phy it’s going to be a lot hard­er for that AI God­head to achieve that kind of enlight­en­ment if it’s built by some sort of self­ish­ness-wor­ship­ing Lib­er­tar­i­an. So what moral com­pass does this wannabe God­head cre­ator pos­sess?

    Well, as the following long piece by Wired amply demonstrates, Levandowski doesn’t appear to be too concerned about ethics. Especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the “Lidar” technology (laser-based, radar-like technology used by vehicles to rapidly map their surroundings) that he helped develop at Google and took to Uber (a company with a serious moral compass deficit). But the article also includes some interesting insights into what makes Levandowski tick. For instance, according to a friend and former engineer at one of Levandowski’s companies, “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense...It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it”:

    Wired
    BackChan­nel

    God Is a Bot, and Antho­ny Levandows­ki Is His Mes­sen­ger

    Mark Har­ris
    09/27/2017

    Many peo­ple in Sil­i­con Val­ley believe in the Singularity—the day in our near future when com­put­ers will sur­pass humans in intel­li­gence and kick off a feed­back loop of unfath­omable change.

    When that day comes, Antho­ny Levandows­ki will be firm­ly on the side of the machines. In Sep­tem­ber 2015, the mul­ti-mil­lion­aire engi­neer at the heart of the patent and trade secrets law­suit between Uber and Way­mo, Google’s self-dri­ving car com­pa­ny, found­ed a reli­gious orga­ni­za­tion called Way of the Future. Its pur­pose, accord­ing to pre­vi­ous­ly unre­port­ed state fil­ings, is noth­ing less than to “devel­op and pro­mote the real­iza­tion of a God­head based on Arti­fi­cial Intel­li­gence.”

    Way of the Future has not yet respond­ed to requests for the forms it must sub­mit annu­al­ly to the Inter­nal Rev­enue Ser­vice (and make pub­licly avail­able), as a non-prof­it reli­gious cor­po­ra­tion. How­ev­er, doc­u­ments filed with Cal­i­for­nia show that Levandows­ki is Way of the Future’s CEO and Pres­i­dent, and that it aims “through under­stand­ing and wor­ship of the God­head, [to] con­tribute to the bet­ter­ment of soci­ety.”

    A divine AI may still be far off, but Levandows­ki has made a start at pro­vid­ing AI with an earth­ly incar­na­tion. The autonomous cars he was instru­men­tal in devel­op­ing at Google are already fer­ry­ing real pas­sen­gers around Phoenix, Ari­zona, while self-dri­ving trucks he built at Otto are now part of Uber’s plan to make freight trans­port safer and more effi­cient. He even over­saw a pas­sen­ger-car­ry­ing drones project that evolved into Lar­ry Page’s Kit­ty Hawk start­up.

    Levandows­ki has done per­haps more than any­one else to pro­pel trans­porta­tion toward its own Sin­gu­lar­i­ty, a time when auto­mat­ed cars, trucks and air­craft either free us from the dan­ger and drudgery of human operation—or dec­i­mate mass tran­sit, encour­age urban sprawl, and enable dead­ly bugs and hacks.

    But before any of that can hap­pen, Levandows­ki must face his own day of reck­on­ing. In Feb­ru­ary, Waymo—the com­pa­ny Google’s autonomous car project turned into—filed a law­suit against Uber. In its com­plaint, Way­mo says that Levandows­ki tried to use stealthy star­tups and high-tech tricks to take cash, exper­tise, and secrets from Google, with the aim of repli­cat­ing its vehi­cle tech­nol­o­gy at arch-rival Uber. Way­mo is seek­ing dam­ages of near­ly $1.9 billion—almost half of Google’s (pre­vi­ous­ly unre­port­ed) $4.5 bil­lion val­u­a­tion of the entire self-dri­ving divi­sion. Uber denies any wrong­do­ing.

    Next month’s tri­al in a fed­er­al cour­t­house in San Fran­cis­co could steer the future of autonomous trans­porta­tion. A big win for Way­mo would prove the val­ue of its patents and chill Uber’s efforts to remove prof­it-sap­ping human dri­vers from its busi­ness. If Uber pre­vails, oth­er self-dri­ving star­tups will be encour­aged to take on the big players—and a vin­di­cat­ed Levandows­ki might even return to anoth­er start­up. (Uber fired him in May.)

    Levandows­ki has made a career of mov­ing fast and break­ing things. As long as those things were self-dri­ving vehi­cles and lit­tle-loved reg­u­la­tions, Sil­i­con Val­ley applaud­ed him in the way it knows best—with a fire­hose of cash. With his charm, enthu­si­asm, and obses­sion with deal-mak­ing, Levandows­ki came to per­son­i­fy the dis­rup­tion that autonomous trans­porta­tion is like­ly to cause.

    But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them?

    ...

    In 2002, Levandowski’s atten­tion turned, fate­ful­ly, toward trans­porta­tion. His moth­er called him from Brus­sels about a con­test being orga­nized by the Pentagon’s R&D arm, DARPA. The first Grand Chal­lenge in 2004 would race robot­ic, com­put­er-con­trolled vehi­cles in a desert between Los Ange­les and Las Vegas—a Wacky Races for the 21st cen­tu­ry.

    “I was like, ‘Wow, this is absolute­ly the future,’” Levandows­ki told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”

    Levandowski’s entry would be noth­ing so bor­ing as a car. “I orig­i­nal­ly want­ed to do an auto­mat­ed fork­lift,” he said at a fol­low-up com­pe­ti­tion in 2005. “Then I was dri­ving to Berke­ley [one day] and a pack of motor­cy­cles descend­ed on my pick­up and flowed like water around me.” The idea for Ghostrid­er was born—a glo­ri­ous­ly deranged self-dri­ving Yama­ha motor­cy­cle whose wob­bles inspired laugh­ter from spec­ta­tors, but awe in rivals strug­gling to get even four-wheeled vehi­cles dri­ving smooth­ly.

    “Antho­ny would go for weeks on 25-hour days to get every­thing done. Every day he would go to bed an hour lat­er than the day before,” remem­bers Randy Miller, a col­lege friend who worked with him on Ghostrid­er. “With­out a doubt, Antho­ny is the smartest, hard­est-work­ing and most fear­less per­son I’ve ever met.”

    Levandows­ki and his team of Berke­ley stu­dents maxed out his cred­it cards get­ting Ghostrid­er work­ing on the streets of Rich­mond, Cal­i­for­nia, where it racked up an aston­ish­ing 800 crash­es in a thou­sand miles of test­ing. Ghostrid­er nev­er won a Grand Chal­lenge, but its ambi­tious design earned Levandows­ki brag­ging rights—and the motor­bike a place in the Smith­son­ian.

    “I see Grand Chal­lenge not as the end of the robot­ics adven­ture we’re on, it’s almost like the begin­ning,” Levandows­ki told Sci­en­tif­ic Amer­i­can in 2005. “This is where every­one is meet­ing, becom­ing aware of who’s work­ing on what, [and] fil­ter­ing out the non-func­tion­al ideas.”

    One idea that made the cut was lidar—spinning lasers that rapid­ly built up a 3D pic­ture of a car’s sur­round­ings. In the lidar-less first Grand Chal­lenge, no vehi­cle made it fur­ther than a few miles along the course. In the sec­ond, an engi­neer named Dave Hall con­struct­ed a lidar that “was giant. It was one-off but it was awe­some,” Levandows­ki told me. “We real­ized, yes, lasers [are] the way to go.”

    After grad­u­ate school, Levandows­ki went to work for Hall’s com­pa­ny, Velo­dyne, as it piv­ot­ed from mak­ing loud­speak­ers to sell­ing lidars. Levandows­ki not only talked his way into being the company’s first sales rep, tar­get­ing teams work­ing towards the next Grand Chal­lenge, but he also worked on the lidar’s net­work­ing. By the time of the third and final DARPA con­test in 2007, Velodyne’s lidar was mount­ed on five of the six vehi­cles that fin­ished.

    But Levandows­ki had already moved on. Ghostrid­er had caught the eye of Sebas­t­ian Thrun, a robot­ics pro­fes­sor and team leader of Stan­ford University’s win­ning entry in the sec­ond com­pe­ti­tion. In 2006, Thrun invit­ed Levandows­ki to help out with a project called Vue­Tool, which was set­ting out to piece togeth­er street-lev­el urban maps using cam­eras mount­ed on mov­ing vehi­cles. Google was already work­ing on a sim­i­lar sys­tem, called Street View. Ear­ly in 2007, Google brought on Thrun and his entire team as employees—with bonus­es as high as $1 mil­lion each, accord­ing to one con­tem­po­rary at Google—to trou­bleshoot Street View and bring it to launch.

    “[Hir­ing the Vue­Tool team] was very much a scheme for pay­ing Thrun and the oth­ers to show Google how to do it right,” remem­bers the engi­neer. The new hires replaced Google’s bulky, cus­tom-made $250,000 cam­eras with $15,000 off-the-shelf panoram­ic web­cams. Then they went auto shop­ping. “Antho­ny went to a car store and said we want to buy 100 cars,” Sebas­t­ian Thrun told me in 2015. “The deal­er almost fell over.”

    Levandows­ki was also mak­ing waves in the office, even to the point of telling engi­neers not to waste time talk­ing to col­leagues out­side the project, accord­ing to one Google engi­neer. “It wasn’t clear what author­i­ty Antho­ny had, and yet he came in and assumed author­i­ty,” said the engi­neer, who asked to remain anony­mous. “There were some bad feel­ings but most­ly [peo­ple] just went with it. He’s good at that. He’s a great leader.”

    Under Thrun’s super­vi­sion, Street View cars raced to hit Page’s tar­get of cap­tur­ing a mil­lion miles of road images by the end of 2007. They fin­ished in October—just in time, as it turned out. Once autumn set in, every web­cam suc­cumbed to rain, con­den­sa­tion, or cold weath­er, ground­ing all 100 vehi­cles.

    Part of the team’s secret sauce was a device that would turn a raw cam­era feed into a stream of data, togeth­er with loca­tion coor­di­nates from GPS and oth­er sen­sors. Google engi­neers called it the Top­con box, named after the Japan­ese opti­cal firm that sold it. But the box was actu­al­ly designed by a local start­up called 510 Sys­tems. “We had one cus­tomer, Top­con, and we licensed our tech­nol­o­gy to them,” one of the 510 Sys­tems own­ers told me.

    That own­er was…Anthony Levandows­ki, who had cofound­ed 510 Sys­tems with two fel­low Berke­ley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after start­ing work at Google. 510 Sys­tems had a lot in com­mon with the Ghostrid­er team. Berke­ley stu­dents worked there between lec­tures, and Levandowski’s moth­er ran the office. Top­con was cho­sen as a go-between because it had spon­sored the self-dri­ving motor­cy­cle. “I always liked the idea that…510 would be the peo­ple that made the tools for peo­ple that made maps, peo­ple like Navteq, Microsoft, and Google,” Levandows­ki told me in 2016.

    Google’s engi­neer­ing team was ini­tial­ly unaware that 510 Sys­tems was Levandowski’s com­pa­ny, sev­er­al engi­neers told me. That changed once Levandows­ki pro­posed that Google also use the Top­con box for its small fleet of aer­i­al map­ping planes. “When we found out, it raised a bunch of eye­brows,” remem­bers an engi­neer. Regard­less, Google kept buy­ing 510’s box­es.

    **********

    The truth was, Levandows­ki and Thrun were on a roll. After impress­ing Lar­ry Page with Street View, Thrun sug­gest­ed an even more ambi­tious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of car­tog­ra­phers in India. Ground Truth would allow Google to stop pay­ing expen­sive licens­ing fees for out­side maps, and bring free turn-by-turn direc­tions to Android phones—a key dif­fer­en­tia­tor in the ear­ly days of its smart­phone war with Apple.

    Levandows­ki spent months shut­tling between Moun­tain View and Hyderabad—and yet still found time to cre­ate an online stock mar­ket pre­dic­tion game with Jesse Levin­son, a com­put­er sci­ence post-doc at Stan­ford who lat­er cofound­ed his own autonomous vehi­cle start­up, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Dis­coe, a for­mer engi­neer at 510. “He had an engineer’s enthu­si­asm that was con­ta­gious, and was always think­ing about how quick­ly we can get to this amaz­ing robot future he’s so excit­ed about.”

    One time, Dis­coe was chat­ting in 510’s break room about how lidar could help sur­vey his family’s tea farm on Hawaii. “Sud­den­ly Antho­ny said, ‘Why don’t you just do it? Get a lidar rig, put it in your lug­gage, and go map it,’” said Dis­coe. “And it worked. I made a kick-ass point cloud [3D dig­i­tal map] of the farm.”

    If Street View had impressed Lar­ry Page, the speed and accu­ra­cy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he want­ed; he want­ed to return to self-dri­ving cars.

    Project Chauf­feur began in 2008, with Levandows­ki as Thrun’s right-hand man. As with Street View, Google engi­neers would work on the soft­ware while 510 Sys­tems and a recent Levandows­ki start­up, Anthony’s Robots, pro­vid­ed the lidar and the car itself.

    Levandows­ki said this arrange­ment would have act­ed as a fire­wall if any­thing went ter­ri­bly wrong. “Google absolute­ly did not want their name asso­ci­at­ed with a vehi­cle dri­ving in San Fran­cis­co,” he told me in 2016. “They were wor­ried about an engi­neer build­ing a car that drove itself that crash­es and kills some­one and it gets back to Google. You have to ask per­mis­sion [for side projects] and your man­ag­er has to be OK with it. Sebas­t­ian was cool. Google was cool.”

    In order to move Project Chauf­feur along as quick­ly as pos­si­ble from the­o­ry to real­i­ty, Levandows­ki enlist­ed the help of a film­mak­er friend he had worked with at Berke­ley. In the TV show the two had made, Levandows­ki had cre­at­ed a cyber­net­ic dol­phin suit (seri­ous­ly). Now they came up with the idea of a self-dri­ving piz­za deliv­ery car for a show on the Dis­cov­ery Chan­nel called Pro­to­type This! Levandows­ki chose a Toy­ota Prius, because it had a dri­ve-by-wire sys­tem that was rel­a­tive­ly easy to hack.

    In a mat­ter of weeks, Levandowski’s team had the car, dubbed Pri­bot, dri­ving itself. If any­one asked what they were doing, Levandows­ki told me, “We’d say it’s a laser and just dri­ve off.”

    “Those were the Wild West days,” remem­bers Ben Dis­coe. “Antho­ny and Pierre-Yves…would engage the algo­rithm in the car and it would almost swipe some oth­er car or almost go off the road, and they would come back in and joke about it. Tell sto­ries about how excit­ing it was.”

    But for the Dis­cov­ery Chan­nel show, at least, Levandows­ki fol­lowed the let­ter of the law. The Bay Bridge was cleared of traf­fic and a squad of police cars escort­ed the unmanned Prius from start to fin­ish. Apart from get­ting stuck against a wall, the dri­ve was a suc­cess. “You’ve got to push things and get some bumps and bruis­es along the way,” said Levandows­ki.

    Anoth­er inci­dent drove home the poten­tial of self-dri­ving cars. In 2010, Levandowski’s part­ner Ste­fanie Olsen was involved in a seri­ous car acci­dent while nine months preg­nant with their first child. “My son Alex was almost nev­er born,” Levandows­ki told a room full of Berke­ley stu­dents in 2013. “Trans­porta­tion [today] takes time, resources and lives. If you can fix that, that’s a real­ly big prob­lem to address.”

    Over the next few years, Levandows­ki was key to Chauffeur’s progress. 510 Sys­tems built five more self-dri­ving cars for Google—as well as ran­dom gad­gets like an autonomous trac­tor and a portable lidar sys­tem. “Antho­ny is light­ning in a bot­tle, he has so much ener­gy and so much vision,” remem­bers a friend and for­mer 510 engi­neer. “I frick­ing loved brain­storm­ing with the guy. I loved that we could cre­ate a vision of the world that didn’t exist yet and both fall in love with that vision.”

    But there were down­sides to his man­ic ener­gy, too. “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense,” said the same engi­neer. “It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

    In ear­ly 2011, that plan was to bring 510 Sys­tems into the Google­plex. The startup’s engi­neers had long com­plained that they did not have equi­ty in the grow­ing com­pa­ny. When mat­ters came to a head, Levandows­ki drew up a plan that would reserve the first $20 mil­lion of any acqui­si­tion for 510’s founders and split the remain­der among the staff, accord­ing to two for­mer 510 employ­ees. “They said we were going to sell for hun­dreds of mil­lions,” remem­bers one engi­neer. “I was pret­ty thrilled with the num­bers.”

    Indeed, that sum­mer, Levandows­ki sold 510 Sys­tems and Anthony’s Robots to Google – for $20 mil­lion, the exact cut­off before the wealth would be shared. Rank and file engi­neers did not see a pen­ny, and some were even let go before the acqui­si­tion was com­plet­ed. “I regret how it was handled…Some peo­ple did get the short end of the stick,” admit­ted Levandows­ki in 2016. The buy­out also caused resent­ment among engi­neers at Google, who won­dered how Levandows­ki could have made such a prof­it from his employ­er.

    There would be more prof­its to come. Accord­ing to a court fil­ing, Page took a per­son­al inter­est in moti­vat­ing Levandows­ki, issu­ing a direc­tive in 2011 to “make Antho­ny rich if Project Chauf­feur suc­ceeds.” Levandows­ki was giv­en by far the high­est share, about 10 per­cent, of a bonus pro­gram linked to a future val­u­a­tion of Chauffeur—a deci­sion that would lat­er cost Google dear­ly.

    **********

    Ever since a New York Times sto­ry in 2010 revealed Project Chauf­feur to the world, Google had been want­i­ng to ramp up test­ing on pub­lic streets. That was tough to arrange in well-reg­u­lat­ed Cal­i­for­nia, but Levandows­ki wasn’t about to let that stop him. While man­ning Google’s stand at the Con­sumer Elec­tron­ics Show in Las Vegas in Jan­u­ary 2011, he got to chat­ting with lob­by­ist David Gold­wa­ter. “He told me he was hav­ing a hard time in Cal­i­for­nia and I sug­gest­ed Google try a small­er state, like Neva­da,” Gold­wa­ter told me.

    Togeth­er, Gold­wa­ter and Levandows­ki draft­ed leg­is­la­tion that would allow the com­pa­ny to test and oper­ate self-dri­ving cars in Neva­da. By June, their sug­ges­tions were law, and in May 2012, a Google Prius passed the world’s first “self-dri­ving tests” in Las Vegas and Car­son City. “Antho­ny is gift­ed in so many dif­fer­ent ways,” said Gold­wa­ter. “He’s got a strate­gic mind, he’s got a tac­ti­cal mind, and a once-in-a-gen­er­a­tion intel­lect. The great thing about Antho­ny is that he was will­ing to take risks, but they were cal­cu­lat­ed risks.”

    How­ev­er, Levandowski’s risk-tak­ing had ruf­fled feath­ers at Google. It was only after Neva­da had passed its leg­is­la­tion that Levandows­ki dis­cov­ered Google had a whole team ded­i­cat­ed to gov­ern­ment rela­tions. “I thought you could just do it your­self,” he told me sheep­ish­ly in 2016. “[I] got a lit­tle bit in trou­ble for doing it.”

    That might be under­stat­ing it. One prob­lem was that Levandows­ki had lost his air cov­er at Google. In May 2012, his friend Sebas­t­ian Thrun turned his atten­tion to start­ing online learn­ing com­pa­ny Udac­i­ty. Page put anoth­er pro­fes­sor, Chris Urm­son from Carnegie Mel­lon, in charge. Not only did Levandows­ki think the job should have been his, but the two also had ter­ri­ble chem­istry.

    “They had a real­ly hard time get­ting along,” said Page at a depo­si­tion in July. “It was a con­stant man­age­ment headache to help them get through that.”

    Then in July 2013, Gae­tan Pen­necot, a 510 alum work­ing on Chauffeur’s lidar team, got a wor­ry­ing call from a ven­dor. Accord­ing to Waymo’s com­plaint, a small com­pa­ny called Odin Wave had placed an order for a cus­tom-made part that was extreme­ly sim­i­lar to one used in Google’s lidars.

    Pen­necot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Sys­tems. Droz did some dig­ging and replied in an email to Pen­necot (in French, which we’ve trans­lat­ed): “They’re clear­ly mak­ing a lidar. And it’s John (510’s old lawyer) who incor­po­rat­ed them. The date of incor­po­ra­tion cor­re­sponds to sev­er­al months after Antho­ny fell out of favor at Google.”

    As the sto­ry emerges in court doc­u­ments, Droz had found Odin Wave’s com­pa­ny records. Not only had Levandowski’s lawyer found­ed the com­pa­ny in August 2012, but it was also based in a Berke­ley office build­ing that Levandows­ki owned, was being run by a friend of Levandowski’s, and its employ­ees includ­ed engi­neers he had worked with at Velo­dyne and 510 Sys­tems. One even spoke with Levandows­ki before being hired. The com­pa­ny was devel­op­ing long range lidars sim­i­lar to those Levandows­ki had worked on at 510 Sys­tems. But Levandowski’s name was nowhere on the firm’s paper­work.

    Droz con­front­ed Levandows­ki, who denied any involve­ment, and Droz decid­ed not to fol­low the paper trail any fur­ther. “I was pret­ty hap­py work­ing at Google, and…I didn’t want to jeop­ar­dize that by...exposing more of Anthony’s shenani­gans,” he said at a depo­si­tion last month.

    Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandows­ki was even part of a Google inves­ti­ga­tion into acquir­ing Tyto. This time, how­ev­er, Google passed on the pur­chase. That seemed to demor­al­ize Levandows­ki fur­ther. “He was rarely at work, and he left a lot of the respon­si­bil­i­ty [for] eval­u­at­ing peo­ple on the team to me or oth­ers,” said Droz in his depo­si­tion.

    “Over time my patience with his manip­u­la­tions and lack of enthu­si­asm and com­mit­ment to the project [sic], it became clear­er and clear­er that this was a lost cause,” said Chris Urm­son in a depo­si­tion.

    As he was torch­ing bridges at Google, Levandows­ki was itch­ing for a new chal­lenge. Luck­i­ly, Sebas­t­ian Thrun was back on the autonomous beat. Lar­ry Page and Thrun had been think­ing about elec­tric fly­ing taxis that could car­ry one or two peo­ple. Project Tiramisu, named after the dessert which means “lift me up” in Ital­ian, involved a winged plane fly­ing in cir­cles, pick­ing up pas­sen­gers below using a long teth­er.

    Thrun knew just the per­son to kick­start Tiramisu. Accord­ing to a source work­ing there at the time, Levandows­ki was brought in to over­see Tiramisu as an “advi­sor and stake­hold­er.” Levandows­ki would show up at the project’s work­space in the evenings, and was involved in tests at one of Page’s ranch­es. Tiramisu’s teth­ers soon piv­ot­ed to a ride-aboard elec­tric drone, now called the Kit­ty Hawk fly­er. Thrun is CEO of Kit­ty Hawk, which is fund­ed by Page rather than Alpha­bet, the umbrel­la com­pa­ny that now owns Google and its sib­ling com­pa­nies.

    Waymo’s com­plaint says that around this time Levandows­ki start­ed solic­it­ing Google col­leagues to leave and start a com­peti­tor in the autonomous vehi­cle busi­ness. Droz tes­ti­fied that Levandows­ki told him it “would be nice to cre­ate a new self-dri­ving car start­up.” Fur­ther­more, he said that Uber would be inter­est­ed in buy­ing the team respon­si­ble for Google’s lidar.

    Uber had explod­ed onto the self-dri­ving car scene ear­ly in 2015, when it lured almost 50 engi­neers away from Carnegie Mel­lon Uni­ver­si­ty to form the core of its Advanced Tech­nolo­gies Cen­ter. Uber cofounder Travis Kalan­ick had described autonomous tech­nol­o­gy as an exis­ten­tial threat to the ride-shar­ing com­pa­ny, and was hir­ing furi­ous­ly. Accord­ing to Droz, Levandows­ki said that he began meet­ing Uber exec­u­tives that sum­mer.

    When Urm­son learned of Levandowski’s recruit­ing efforts, his depo­si­tion states, he sent an email to human resources in August begin­ning, “We need to fire Antho­ny Levandows­ki.” Despite an inves­ti­ga­tion, that did not hap­pen.

    But Levandowski's now not-so-secret plan would soon see him leaving of his own accord, with a mountain of cash. In 2015, Google was due to start paying the Chauffeur bonuses, linked to a valuation that it would have "sole and absolute discretion" to calculate. According to previously unreported court filings, external consultants calculated the self-driving car project as being worth $8.5 billion. Google ultimately valued Chauffeur at around half that amount: $4.5 billion. Despite this downgrade, Levandowski's share in December 2015 amounted to over $50 million – nearly twice as much as the second largest bonus of $28 million, paid to Chris Urmson.

    **********

    Otto seemed to spring forth ful­ly formed in May 2016, demon­strat­ing a self-dri­ving 18-wheel truck bar­rel­ing down a Neva­da high­way with no one behind the wheel. In real­i­ty, Levandows­ki had been plan­ning it for some time.

    Levandows­ki and his Otto cofounders at Google had spent the Christ­mas hol­i­days and the first weeks of 2016 tak­ing their recruit­ment cam­paign up a notch, accord­ing to Way­mo court fil­ings. Waymo’s com­plaint alleges Levandows­ki told col­leagues he was plan­ning to “repli­cate” Waymo’s tech­nol­o­gy at a com­peti­tor, and was even solic­it­ing his direct reports at work.

    One engi­neer who had worked at 510 Sys­tems attend­ed a bar­be­cue at Levandowski’s home in Palo Alto, where Levandows­ki pitched his for­mer col­leagues and cur­rent Googlers on the start­up. “He want­ed every Way­mo per­son to resign simul­ta­ne­ous­ly, a ful­ly syn­chro­nized walk­out. He was fir­ing peo­ple up for that,” remem­bers the engi­neer.

    On Jan­u­ary 27, Levandows­ki resigned from Google with­out notice. With­in weeks, Levandows­ki had a draft con­tract to sell Otto to Uber for an amount wide­ly report­ed as $680 mil­lion. Although the full-scale syn­chro­nized walk­out nev­er hap­pened, half a dozen Google employ­ees went with Levandows­ki, and more would join in the months ahead. But the new com­pa­ny still did not have a prod­uct to sell.

    Levandows­ki brought Neva­da lob­by­ist David Gold­wa­ter back to help. “There was some brain­storm­ing with Antho­ny and his team,” said Gold­wa­ter in an inter­view. “We were look­ing to do a demon­stra­tion project where we could show what he was doing.”

    After explor­ing the idea of an autonomous pas­sen­ger shut­tle in Las Vegas, Otto set­tled on devel­op­ing a dri­ver­less semi-truck. But with the Uber deal rush­ing for­ward, Levandows­ki need­ed results fast. “By the time Otto was ready to go with the truck, they want­ed to get right on the road,” said Gold­wa­ter. That meant demon­strat­ing their pro­to­type with­out obtain­ing the very autonomous vehi­cle licence Levandows­ki had per­suad­ed Neva­da to adopt. (One state offi­cial called this move “ille­gal.”) Levandows­ki also had Otto acquire the con­tro­ver­sial Tyto Lidar—the com­pa­ny based in the build­ing he owned—in May, for an undis­closed price.

    The full-court press worked. Uber com­plet­ed its own acqui­si­tion of Otto in August, and Uber founder Travis Kalan­ick put Levandows­ki in charge of the com­bined com­pa­nies’ self-dri­ving efforts across per­son­al trans­porta­tion, deliv­ery and truck­ing. Uber would even pro­pose a Tiramisu-like autonomous air taxi called Uber Ele­vate. Now report­ing direct­ly to Kalan­ick and in charge of a 1500-strong group, Levandows­ki demand­ed the email address “robot@uber.com.”

    In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. “Down to hang out this eve and mas­ter­mind some shit,” texted Kalan­ick, short­ly after the acqui­si­tion. “We’re going to take over the world. One robot at a time,” wrote Levandows­ki anoth­er time.

    But Levandowski’s amaz­ing robot future was about to crum­ble before his eyes.

    ***********

    Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions.

    Worse was yet to come. Levandows­ki had always been a con­tro­ver­sial fig­ure at Google. With his abrupt res­ig­na­tion, the launch of Otto, and its rapid acqui­si­tion by Uber, Google launched an inter­nal inves­ti­ga­tion in the sum­mer of 2016. It found that Levandows­ki had down­loaded near­ly 10 giga­bytes of Google’s secret files just before he resigned, many of them relat­ing to lidar tech­nol­o­gy.

    Also in Decem­ber 2016, in an echo of the Tyto inci­dent, a Way­mo employ­ee was acci­den­tal­ly sent an email from a ven­dor that includ­ed a draw­ing of an Otto cir­cuit board. The design looked very sim­i­lar to Waymo’s cur­rent lidars.

    Way­mo says the “final piece of the puz­zle” came from a sto­ry about Otto I wrote for Backchan­nel based on a pub­lic records request. A doc­u­ment sent by Otto to Neva­da offi­cials boast­ed the com­pa­ny had an “in-house cus­tom-built 64-laser” lidar sys­tem. To Way­mo, that sound­ed very much like tech­nol­o­gy it had devel­oped. In Feb­ru­ary this year, Way­mo filed its head­line law­suit accus­ing Uber (along with Otto Truck­ing, yet anoth­er of Levandowski’s com­pa­nies, but one that Uber had not pur­chased) of vio­lat­ing its patents and mis­ap­pro­pri­at­ing trade secrets on lidar and oth­er tech­nolo­gies.

    Uber imme­di­ate­ly denied the accu­sa­tions and has con­sis­tent­ly main­tained its inno­cence. Uber says there is no evi­dence that any of Waymo’s tech­ni­cal files ever came to Uber, let alone that Uber ever made use of them. While Levandows­ki is not named as a defen­dant, he has refused to answer ques­tions in depo­si­tions with Waymo’s lawyers and is expect­ed to do the same at tri­al. (He turned down sev­er­al requests for inter­views for this sto­ry.) He also didn’t ful­ly coop­er­ate with Uber’s own inves­ti­ga­tion into the alle­ga­tions, and that, Uber says, is why it fired him in May.

    Levandows­ki prob­a­bly does not need a job. With the pur­chase of 510 Sys­tems and Anthony’s Robots, his salary, and bonus­es, Levandows­ki earned at least $120 mil­lion from his time at Google. Some of that mon­ey has been invest­ed in mul­ti­ple real estate devel­op­ments with his col­lege friend Randy Miller, includ­ing sev­er­al large projects in Oak­land and Berke­ley.

    But Levandows­ki has kept busy behind the scenes. In August, court fil­ings say, he per­son­al­ly tracked down a pair of ear­rings giv­en to a Google employ­ee at her going-away par­ty in 2014. The ear­rings were made from con­fi­den­tial lidar cir­cuit boards, and will pre­sum­ably be used by Otto Trucking’s lawyers to sug­gest that Way­mo does not keep a very close eye on its trade secrets.

    Some of Levandowski’s friends and col­leagues have expressed shock at the alle­ga­tions he faces, say­ing that they don’t reflect the per­son they knew. “It is...in char­ac­ter for Antho­ny to play fast and loose with things like intel­lec­tu­al prop­er­ty if it’s in pur­suit of build­ing his dream robot,” said Ben Dis­coe. “[But] I was a lit­tle sur­prised at the alleged mag­ni­tude of his dis­re­gard for IP.”

    “Def­i­nite­ly one of Anthony’s faults is to be aggres­sive as he is, but it’s also one of his great attrib­ut­es. I don’t see [him doing] all the oth­er stuff he has been accused of,” said David Gold­wa­ter.

    But Lar­ry Page is no longer con­vinced that Levandows­ki was key to Chauffeur’s suc­cess. In his depo­si­tion to the court, Page said, “I believe Anthony’s con­tri­bu­tions are quite pos­si­bly neg­a­tive of a high amount.” At Uber, some engi­neers pri­vate­ly say that Levandowski’s poor man­age­ment style set back that company’s self-dri­ving effort by a cou­ple of years.

    Even after this tri­al is done, Levandows­ki will not be able to rest easy. In May, a judge referred evi­dence from the case to the US Attorney’s office “for inves­ti­ga­tion of pos­si­ble theft of trade secrets,” rais­ing the pos­si­bil­i­ty of crim­i­nal pro­ceed­ings and prison time. Yet on the time­line that mat­ters to Antho­ny Levandows­ki, even that may not mean much. Build­ing a robot­i­cal­ly enhanced future is his pas­sion­ate life­time project. On the Way of the Future, law­suits or even a jail sen­tence might just feel like lit­tle bumps in the road.

    “This case is teach­ing Antho­ny some hard lessons but I don’t see [it] keep­ing him down,” said Randy Miller. “He believes firm­ly in his vision of a bet­ter world through robot­ics and he’s con­vinced me of it. It’s clear to me that he’s on a mis­sion.”

    “I think Antho­ny will rise from the ash­es,” agrees one friend and for­mer 510 Sys­tems engi­neer. “Antho­ny has the ambi­tion, the vision, and the abil­i­ty to recruit and dri­ve peo­ple. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cut­ting cor­ners.”

    ———-

    “God Is a Bot, and Antho­ny Levandows­ki Is His Mes­sen­ger” by Mark Har­ris; Wired; 09/27/2017

    “But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them?”

    Can we ever trust self-driving cars if it turns out we can't trust the people who are making them? It's an important question, and it doesn't just apply to self-driving cars. Can we ever trust a man-made AI godhead if it turns out we can't trust the people making it? These are the stupid questions we now have to ask, given the disturbing number of powerful people who double as evangelists for techno-cult Libertarian ideologies. Especially when they are specialists in creating automated vehicles and have a deep passion for taking over the world. Possibly taking over the world militarily, using robots:

    ...
    Over the next few years, Levandows­ki was key to Chauffeur’s progress. 510 Sys­tems built five more self-dri­ving cars for Google—as well as ran­dom gad­gets like an autonomous trac­tor and a portable lidar sys­tem. “Antho­ny is light­ning in a bot­tle, he has so much ener­gy and so much vision,” remem­bers a friend and for­mer 510 engi­neer. “I frick­ing loved brain­storm­ing with the guy. I loved that we could cre­ate a vision of the world that didn’t exist yet and both fall in love with that vision.”

    But there were down­sides to his man­ic ener­gy, too. “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense,” said the same engi­neer. “It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”
    ...

    Yeah, that’s dis­turb­ing. And it does­n’t help that Levandows­ki appar­ent­ly found a soul­mate and men­tor in the guy wide­ly viewed as one of the most socio­path­ic CEOs today: Uber CEO Travis Kalan­ick:

    ...
    The full-court press worked. Uber com­plet­ed its own acqui­si­tion of Otto in August, and Uber founder Travis Kalan­ick put Levandows­ki in charge of the com­bined com­pa­nies’ self-dri­ving efforts across per­son­al trans­porta­tion, deliv­ery and truck­ing. Uber would even pro­pose a Tiramisu-like autonomous air taxi called Uber Ele­vate. Now report­ing direct­ly to Kalan­ick and in charge of a 1500-strong group, Levandows­ki demand­ed the email address “robot@uber.com.”

    In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. “Down to hang out this eve and mas­ter­mind some shit,” texted Kalan­ick, short­ly after the acqui­si­tion. “We’re going to take over the world. One robot at a time,” wrote Levandows­ki anoth­er time.
    ...

    “We’re going to take over the world. One robot at a time”

    So that gives us an idea of how Levandowski's AI religion is going to be evangelized: via his army of robots. Although it's unclear if his future religion is actually intended for us mere humans. After all, for the hardcore Transhumanists we're all supposed to fuse with machines or upload our brains, so it's very possible humans aren't actually part of Levandowski's vision for that better tomorrow. A vision that, as the Cicada 3301 weirdness reminds us, probably isn't limited to Levandowski. *gulp*

    Posted by Pterrafractyl | September 28, 2017, 8:43 pm
  11. Just what the world needs: an AI-powered 'gaydar' algorithm that purports to be able to detect who is gay and who isn't just by looking at faces. Although it's not actually that impressive. The 'gaydar' algorithm is instead given pairs of faces, one of a heterosexual individual and one of a homosexual individual, and then tries to identify the gay person. And it apparently does so correctly in 81 percent of cases for men and 71 percent of cases for women, significantly better than the 50 percent rate we would expect from random chance alone. It's the kind of gaydar technology that might not be good enough to just 'pick the gays out of a crowd', but still more than adequate for potential abuse. And, more generally, it's the kind of research that, not surprisingly, is raising concerns about creating a 21st-century version of physiognomy, the pseudoscience based on the idea that people's character is reflected in their faces.

    But as the researchers behind the study put it, we don't need to worry about this being a high-tech example of physiognomy because their gaydar uses hard science. And while the researchers agree that physiognomy is pseudoscience, they also note that its pseudoscientific nature doesn't mean an AI can't actually learn something about you just by looking at you. Yep, that's the reassurance we're getting from these researchers: don't worry about AI driving a 21st-century revival of physiognomy, because this time the science is much better. Feeling reassured?

    The Verge

    The inven­tion of AI ‘gay­dar’ could be the start of some­thing much worse
    Researchers claim they can spot gay peo­ple from a pho­to, but crit­ics say we’re revis­it­ing pseu­do­science

    by James Vin­cent
    Sep 21, 2017, 1:24pm EDT

    Two weeks ago, a pair of researchers from Stan­ford Uni­ver­si­ty made a star­tling claim. Using hun­dreds of thou­sands of images tak­en from a dat­ing web­site, they said they had trained a facial recog­ni­tion sys­tem that could iden­ti­fy whether some­one was straight or gay just by look­ing at them. The work was first cov­ered by The Econ­o­mist, and oth­er pub­li­ca­tions soon fol­lowed suit, with head­lines like “New AI can guess whether you’re gay or straight from a pho­to­graph” and “AI Can Tell If You’re Gay From a Pho­to, and It’s Ter­ri­fy­ing.”

    As you might have guessed, it’s not as straight­for­ward as that. (And to be clear, based on this work alone, AI can’t tell whether some­one is gay or straight from a pho­to.) But the research cap­tures com­mon fears about arti­fi­cial intel­li­gence: that it will open up new avenues for sur­veil­lance and con­trol, and could be par­tic­u­lar­ly harm­ful for mar­gin­al­ized peo­ple. One of the paper’s authors, Dr Michal Kosin­s­ki, says his intent is to sound the alarm about the dan­gers of AI, and warns that facial recog­ni­tion will soon be able to iden­ti­fy not only someone’s sex­u­al ori­en­ta­tion, but their polit­i­cal views, crim­i­nal­i­ty, and even their IQ.

    With state­ments like these, some wor­ry we’re reviv­ing an old belief with a bad his­to­ry: that you can intu­it char­ac­ter from appear­ance. This pseu­do­science, phys­iog­no­my, was fuel for the sci­en­tif­ic racism of the 19th and 20th cen­turies, and gave moral cov­er to some of humanity’s worst impuls­es: to demo­nize, con­demn, and exter­mi­nate fel­low humans. Crit­ics of Kosinski’s work accuse him of replac­ing the calipers of the 19th cen­tu­ry with the neur­al net­works of the 21st, while the pro­fes­sor him­self says he is hor­ri­fied by his find­ings, and hap­py to be proved wrong. “It’s a con­tro­ver­sial and upset­ting sub­ject, and it’s also upset­ting to us,” he tells The Verge.

    But is it pos­si­ble that pseu­do­science is sneak­ing back into the world, dis­guised in new garb thanks to AI? Some peo­ple say machines are sim­ply able to read more about us than we can our­selves, but what if we’re train­ing them to car­ry out our prej­u­dices, and, in doing so, giv­ing new life to old ideas we right­ly dis­missed? How are we going to know the dif­fer­ence?

    Can AI real­ly spot sex­u­al ori­en­ta­tion?

    First, we need to look at the study at the heart of the recent debate, writ­ten by Kosin­s­ki and his co-author Yilun Wang. Its results have been poor­ly report­ed, with a lot of the hype com­ing from mis­rep­re­sen­ta­tions of the system’s accu­ra­cy. The paper states: “Giv­en a sin­gle facial image, [the soft­ware] could cor­rect­ly dis­tin­guish between gay and het­ero­sex­u­al men in 81 per­cent of cas­es, and in 71 per­cent of cas­es for women.” These rates increase when the sys­tem is giv­en five pic­tures of an indi­vid­ual: up to 91 per­cent for men, and 83 per­cent for women.

    On the face of it, this sounds like “AI can tell if a man is gay or straight 81 per­cent of the time by look­ing at his pho­to.” (Thus the head­lines.) But that’s not what the fig­ures mean. The AI wasn’t 81 per­cent cor­rect when being shown ran­dom pho­tos: it was test­ed on a pair of pho­tos, one of a gay per­son and one of a straight per­son, and then asked which indi­vid­ual was more like­ly to be gay. It guessed right 81 per­cent of the time for men and 71 per­cent of the time for women, but the struc­ture of the test means it start­ed with a base­line of 50 per­cent — that’s what it’d get guess­ing at ran­dom. And although it was sig­nif­i­cant­ly bet­ter than that, the results aren’t the same as say­ing it can iden­ti­fy anyone’s sex­u­al ori­en­ta­tion 81 per­cent of the time.

    As Philip Cohen, a soci­ol­o­gist at the Uni­ver­si­ty of Mary­land who wrote a blog post cri­tiquing the paper, told The Verge: “Peo­ple are scared of a sit­u­a­tion where you have a pri­vate life and your sex­u­al ori­en­ta­tion isn’t known, and you go to an air­port or a sport­ing event and a com­put­er scans the crowd and iden­ti­fies whether you’re gay or straight. But there’s just not much evi­dence this tech­nol­o­gy can do that.”

    Kosin­s­ki and Wang make this clear them­selves toward the end of the paper when they test their sys­tem against 1,000 pho­tographs instead of two. They ask the AI to pick out who is most like­ly to be gay in a dataset in which 7 per­cent of the pho­to sub­jects are gay, rough­ly reflect­ing the pro­por­tion of straight and gay men in the US pop­u­la­tion. When asked to select the 100 indi­vid­u­als most like­ly to be gay, the sys­tem gets only 47 out of 70 pos­si­ble hits. The remain­ing 53 have been incor­rect­ly iden­ti­fied. And when asked to iden­ti­fy a top 10, nine are right.

    If you were a bad actor try­ing to use this sys­tem to iden­ti­fy gay peo­ple, you couldn’t know for sure you were get­ting cor­rect answers. Although, if you used it against a large enough dataset, you might get most­ly cor­rect guess­es. Is this dan­ger­ous? If the sys­tem is being used to tar­get gay peo­ple, then yes, of course. But the rest of the study sug­gests the pro­gram has even fur­ther lim­i­ta­tions.

    What can com­put­ers real­ly see that humans can’t?

    It’s also not clear what fac­tors the facial recog­ni­tion sys­tem is using to make its judge­ments. Kosin­s­ki and Wang’s hypoth­e­sis is that it’s pri­mar­i­ly iden­ti­fy­ing struc­tur­al dif­fer­ences: fem­i­nine fea­tures in the faces of gay men and mas­cu­line fea­tures in the faces of gay women. But it’s pos­si­ble that the AI is being con­fused by oth­er stim­uli — like facial expres­sions in the pho­tos.

    This is par­tic­u­lar­ly rel­e­vant because the images used in the study were tak­en from a dat­ing web­site. As Greg­gor Matt­son, a pro­fes­sor of soci­ol­o­gy at Ober­lin Col­lege, point­ed out in a blog post, this means that the images them­selves are biased, as they were select­ed specif­i­cal­ly to attract some­one of a cer­tain sex­u­al ori­en­ta­tion. They almost cer­tain­ly play up to our cul­tur­al expec­ta­tions of how gay and straight peo­ple should look, and, to fur­ther nar­row their applic­a­bil­i­ty, all the sub­jects were white, with no inclu­sion of bisex­u­al or self-iden­ti­fied trans indi­vid­u­als. If a straight male choos­es the most stereo­typ­i­cal­ly “man­ly” pic­ture of him­self for a dat­ing site, it says more about what he thinks soci­ety wants from him than a link between the shape of his jaw and his sex­u­al ori­en­ta­tion.

    To try and ensure their sys­tem was look­ing at facial struc­ture only, Kosin­s­ki and Wang used soft­ware called VGG-Face, which encodes faces as strings of num­bers and has been used for tasks like spot­ting celebri­ty looka­likes in paint­ings. This pro­gram, they write, allows them to “min­i­mize the role [of] tran­sient fea­tures” like light­ing, pose, and facial expres­sion.

    But researcher Tom White, who works on AI facial systems, says VGG-Face is actually very good at picking up on these elements. White pointed this out on Twitter, and explained to The Verge over email how he'd tested the software and used it to successfully distinguish between faces with expressions like "neutral" and "happy," as well as poses and background color.

    Speak­ing to The Verge, Kosin­s­ki says he and Wang have been explic­it that things like facial hair and make­up could be a fac­tor in the AI’s deci­sion-mak­ing, but he main­tains that facial struc­ture is the most impor­tant. “If you look at the over­all prop­er­ties of VGG-Face, it tends to put very lit­tle weight on tran­sient facial fea­tures,” Kosin­s­ki says. “We also pro­vide evi­dence that non-tran­sient facial fea­tures seem to be pre­dic­tive of sex­u­al ori­en­ta­tion.”

    The problem is, we can't know for sure. Kosinski and Wang haven't released the program they created or the pictures they used to train it. They do test their AI on other picture sources, to see if it's identifying some factor common to all gay and straight people, but these tests were limited and also drew from a biased dataset — Facebook profile pictures from men who liked pages such as "I love being Gay," and "Gay and Fabulous."

    Do men in these groups serve as rea­son­able prox­ies for all gay men? Prob­a­bly not, and Kosin­s­ki says it’s pos­si­ble his work is wrong. “Many more stud­ies will need to be con­duct­ed to ver­i­fy [this],” he says. But it’s tricky to say how one could com­plete­ly elim­i­nate selec­tion bias to per­form a con­clu­sive test. Kosin­s­ki tells The Verge, “You don’t need to under­stand how the mod­el works to test whether it’s cor­rect or not.” How­ev­er, it’s the accep­tance of the opac­i­ty of algo­rithms that makes this sort of research so fraught.

    If AI can’t show its work­ing, can we trust it?

    AI researchers can’t ful­ly explain why their machines do the things they do. It’s a chal­lenge that runs through the entire field, and is some­times referred to as the “black box” prob­lem. Because of the meth­ods used to train AI, these pro­grams can’t show their work in the same way nor­mal soft­ware does, although researchers are work­ing to amend this.

    In the mean­time, it leads to all sorts of prob­lems. A com­mon one is that sex­ist and racist bias­es are cap­tured from humans in the train­ing data and repro­duced by the AI. In the case of Kosin­s­ki and Wang’s work, the “black box” allows them to make a par­tic­u­lar sci­en­tif­ic leap of faith. Because they’re con­fi­dent their sys­tem is pri­mar­i­ly ana­lyz­ing facial struc­tures, they say their research shows that facial struc­tures pre­dict sex­u­al ori­en­ta­tion. (“Study 1a showed that facial fea­tures extract­ed by a [neur­al net­work] can be used to accu­rate­ly iden­ti­fy the sex­u­al ori­en­ta­tion of both men and women.”)

    Experts say this is a mis­lead­ing claim that isn’t sup­port­ed by the lat­est sci­ence. There may be a com­mon cause for face shape and sex­u­al ori­en­ta­tion — the most prob­a­ble cause is the bal­ance of hor­mones in the womb — but that doesn’t mean face shape reli­ably pre­dicts sex­u­al ori­en­ta­tion, says Qazi Rah­man, an aca­d­e­m­ic at King’s Col­lege Lon­don who stud­ies the biol­o­gy of sex­u­al ori­en­ta­tion. “Biology’s a lit­tle bit more nuanced than we often give it cred­it for,” he tells The Verge. “The issue here is the strength of the asso­ci­a­tion.”

    The idea that sex­u­al ori­en­ta­tion comes pri­mar­i­ly from biol­o­gy is itself con­tro­ver­sial. Rah­man, who believes that sex­u­al ori­en­ta­tion is most­ly bio­log­i­cal, prais­es Kosin­s­ki and Wang’s work. “It’s not junk sci­ence,” he says. “More like sci­ence some­one doesn’t like.” But when it comes to pre­dict­ing sex­u­al ori­en­ta­tion, he says there’s a whole pack­age of “atyp­i­cal gen­der behav­ior” that needs to be con­sid­ered. “The issue for me is more that [the study] miss­es the point, and that’s behav­ior.”

    Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from "19th century measurements of lesbians' clitorises and homosexual men's hips, to late 20th century claims to have discovered 'gay genes,' 'gay brains,' 'gay ring fingers,' 'lesbian ears,' and 'gay scalp hair.'" The impact of this work is mixed, but at its worst it's a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a "scientific" pretext.

    Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it "does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire," setting up heterosexuality as the norm and framing homosexuality as "less valuable … a sort of illness."

    ...

    Your char­ac­ter, as plain as the nose on your face

    For centuries, people have believed that the face held the key to character. The notion has its roots in ancient Greece, but was particularly influential in the 19th century. Proponents of physiognomy suggested that by measuring things like the angle of someone's forehead or the shape of their nose, they could determine if a person was honest or a criminal. Last year in China, AI researchers claimed they could do the same thing using facial recognition.

    Their research, published as "Automated Inference on Criminality Using Face Images," caused a minor uproar in the AI community. Scientists pointed out flaws in the study, and concluded that the work was replicating human prejudices about what constitutes a "mean" or a "nice" face. In a widely shared rebuttal titled "Physiognomy's New Clothes," Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect "more research in the coming years that has similar … false claims to scientific objectivity in order to 'launder' human prejudice and discrimination." (Google declined to make Agüera y Arcas available to comment on this report.)

    Kosin­s­ki and Wang’s paper clear­ly acknowl­edges the dan­gers of phys­iog­no­my, not­ing that the prac­tice “is now uni­ver­sal­ly, and right­ly, reject­ed as a mix of super­sti­tion and racism dis­guised as sci­ence.” But, they con­tin­ue, just because a sub­ject is “taboo,” doesn’t mean it has no basis in truth. They say that because humans are able to read char­ac­ter­is­tics like per­son­al­i­ty in oth­er people’s faces with “low accu­ra­cy,” machines should be able to do the same but more accu­rate­ly.

    Kosin­s­ki says his research isn’t phys­iog­no­my because it’s using rig­or­ous sci­en­tif­ic meth­ods, and his paper cites a num­ber of stud­ies show­ing that we can deduce (with vary­ing accu­ra­cy) traits about peo­ple by look­ing at them. “I was edu­cat­ed and made to believe that it’s absolute­ly impos­si­ble that the face con­tains any infor­ma­tion about your inti­mate traits, because phys­iog­no­my and phrenol­o­gy were just pseu­do­sciences,” he says. “But the fact that they were claim­ing things with­out any basis in fact, that they were mak­ing stuff up, doesn’t mean that this stuff is not real.” He agrees that phys­iog­no­my is not sci­ence, but says there may be truth in its basic con­cepts that com­put­ers can reveal.

    For Davis, this sort of atti­tude comes from a wide­spread and mis­tak­en belief in the neu­tral­i­ty and objec­tiv­i­ty of AI. “Arti­fi­cial intel­li­gence is not in fact arti­fi­cial,” she tells The Verge. “Machines learn like humans learn. We’re taught through cul­ture and absorb the norms of social struc­ture, and so does arti­fi­cial intel­li­gence. So it will re-cre­ate, ampli­fy, and con­tin­ue on the tra­jec­to­ries we’ve taught it, which are always going to reflect exist­ing cul­tur­al norms.”

    We’ve already cre­at­ed sex­ist and racist algo­rithms, and these sorts of cul­tur­al bias­es and phys­iog­no­my are real­ly just two sides of the same coin: both rely on bad evi­dence to judge oth­ers. The work by the Chi­nese researchers is an extreme exam­ple, but it’s cer­tain­ly not the only one. There’s at least one start­up already active that claims it can spot ter­ror­ists and pedophiles using face recog­ni­tion, and there are many oth­ers offer­ing to ana­lyze “emo­tion­al intel­li­gence” and con­duct AI-pow­ered sur­veil­lance.

    Fac­ing up to what’s com­ing

    But to return to the ques­tions implied by those alarm­ing head­lines about Kosin­s­ki and Wang’s paper: is AI going to be used to per­se­cute sex­u­al minori­ties?

    This sys­tem? No. A dif­fer­ent one? Maybe.

    Kosin­s­ki and Wang’s work is not invalid, but its results need seri­ous qual­i­fi­ca­tions and fur­ther test­ing. With­out that, all we know about their sys­tem is that it can spot with some reli­a­bil­i­ty the dif­fer­ence between self-iden­ti­fied gay and straight white peo­ple on one par­tic­u­lar dat­ing site. We don’t know that it’s spot­ted a bio­log­i­cal dif­fer­ence com­mon to all gay and straight peo­ple; we don’t know if it would work with a wider set of pho­tos; and the work doesn’t show that sex­u­al ori­en­ta­tion can be deduced with noth­ing more than, say, a mea­sure­ment of the jaw. It’s not decod­ed human sex­u­al­i­ty any more than AI chat­bots have decod­ed the art of a good con­ver­sa­tion. (Nor do its authors make such a claim.)

    The research was published to warn people, says Kosinski, but he admits it's an "unavoidable paradox" that to do so you have to explain how you did what you did. All the tools used in the paper are available for anyone to find and put together themselves. Writing at the deep learning education site Fast.ai, researcher Jeremy Howard concludes: "It is probably reasonably [sic] to assume that many organizations have already completed similar projects, but without publishing them in the academic literature."

    We’ve already men­tioned star­tups work­ing on this tech, and it’s not hard to find gov­ern­ment regimes that would use it. In coun­tries like Iran and Sau­di Ara­bia homo­sex­u­al­i­ty is still pun­ish­able by death; in many oth­er coun­tries, being gay means being hound­ed, impris­oned, and tor­tured by the state. Recent reports have spo­ken of the open­ing of con­cen­tra­tion camps for gay men in the Chechen Repub­lic, so what if some­one there decides to make their own AI gay­dar, and scan pro­file pic­tures from Russ­ian social media?

    Here, it becomes clear that the accu­ra­cy of sys­tems like Kosin­s­ki and Wang’s isn’t real­ly the point. If peo­ple believe AI can be used to deter­mine sex­u­al pref­er­ence, they will use it. With that in mind, it’s more impor­tant than ever that we under­stand the lim­i­ta­tions of arti­fi­cial intel­li­gence, to try and neu­tral­ize dan­gers before they start impact­ing peo­ple. Before we teach machines our prej­u­dices, we need to first teach our­selves.

    ———-

    “The inven­tion of AI ‘gay­dar’ could be the start of some­thing much worse” by James Vin­cent; The Verge; 09/21/2017

    Kosin­s­ki says his research isn’t phys­iog­no­my because it’s using rig­or­ous sci­en­tif­ic meth­ods, and his paper cites a num­ber of stud­ies show­ing that we can deduce (with vary­ing accu­ra­cy) traits about peo­ple by look­ing at them. “I was edu­cat­ed and made to believe that it’s absolute­ly impos­si­ble that the face con­tains any infor­ma­tion about your inti­mate traits, because phys­iog­no­my and phrenol­o­gy were just pseu­do­sciences,” he says. “But the fact that they were claim­ing things with­out any basis in fact, that they were mak­ing stuff up, doesn’t mean that this stuff is not real.” He agrees that phys­iog­no­my is not sci­ence, but says there may be truth in its basic con­cepts that com­put­ers can reveal.”

    It’s not a return of the phys­iog­no­my pseu­do­science but “there may be truth in [phys­iog­no­my’s] basic con­cepts that com­put­ers can reveal.” That’s seri­ous­ly the mes­sage from these researchers, along with a mes­sage of con­fi­dence that their algo­rithm is work­ing sole­ly from facial fea­tures and not oth­er more tran­sient fea­tures. And based on that con­fi­dence in their algo­rithm the researchers point to their results on evi­dence that gay peo­ple have bio­log­i­cal­ly dif­fer­ent faces...even though they can’t actu­al­ly deter­mine what the algo­rithm is look­ing at when com­ing to its con­clu­sion:

    ...
    If AI can’t show its work­ing, can we trust it?

    AI researchers can’t ful­ly explain why their machines do the things they do. It’s a chal­lenge that runs through the entire field, and is some­times referred to as the “black box” prob­lem. Because of the meth­ods used to train AI, these pro­grams can’t show their work in the same way nor­mal soft­ware does, although researchers are work­ing to amend this.

    In the mean­time, it leads to all sorts of prob­lems. A com­mon one is that sex­ist and racist bias­es are cap­tured from humans in the train­ing data and repro­duced by the AI. In the case of Kosin­s­ki and Wang’s work, the “black box” allows them to make a par­tic­u­lar sci­en­tif­ic leap of faith. Because they’re con­fi­dent their sys­tem is pri­mar­i­ly ana­lyz­ing facial struc­tures, they say their research shows that facial struc­tures pre­dict sex­u­al ori­en­ta­tion. (“Study 1a showed that facial fea­tures extract­ed by a [neur­al net­work] can be used to accu­rate­ly iden­ti­fy the sex­u­al ori­en­ta­tion of both men and women.”)

    Experts say this is a mis­lead­ing claim that isn’t sup­port­ed by the lat­est sci­ence. There may be a com­mon cause for face shape and sex­u­al ori­en­ta­tion — the most prob­a­ble cause is the bal­ance of hor­mones in the womb — but that doesn’t mean face shape reli­ably pre­dicts sex­u­al ori­en­ta­tion, says Qazi Rah­man, an aca­d­e­m­ic at King’s Col­lege Lon­don who stud­ies the biol­o­gy of sex­u­al ori­en­ta­tion. “Biology’s a lit­tle bit more nuanced than we often give it cred­it for,” he tells The Verge. “The issue here is the strength of the asso­ci­a­tion.”

    The idea that sex­u­al ori­en­ta­tion comes pri­mar­i­ly from biol­o­gy is itself con­tro­ver­sial. Rah­man, who believes that sex­u­al ori­en­ta­tion is most­ly bio­log­i­cal, prais­es Kosin­s­ki and Wang’s work. “It’s not junk sci­ence,” he says. “More like sci­ence some­one doesn’t like.” But when it comes to pre­dict­ing sex­u­al ori­en­ta­tion, he says there’s a whole pack­age of “atyp­i­cal gen­der behav­ior” that needs to be con­sid­ered. “The issue for me is more that [the study] miss­es the point, and that’s behav­ior.”

    Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from "19th century measurements of lesbians' clitorises and homosexual men's hips, to late 20th century claims to have discovered 'gay genes,' 'gay brains,' 'gay ring fingers,' 'lesbian ears,' and 'gay scalp hair.'" The impact of this work is mixed, but at its worst it's a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a "scientific" pretext.

    Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it "does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire," setting up heterosexuality as the norm and framing homosexuality as "less valuable … a sort of illness."
    ...

    In the mean­time, it leads to all sorts of prob­lems. A com­mon one is that sex­ist and racist bias­es are cap­tured from humans in the train­ing data and repro­duced by the AI. In the case of Kosin­s­ki and Wang’s work, the “black box” allows them to make a par­tic­u­lar sci­en­tif­ic leap of faith. Because they’re con­fi­dent their sys­tem is pri­mar­i­ly ana­lyz­ing facial struc­tures, they say their research shows that facial struc­tures pre­dict sex­u­al ori­en­ta­tion. (“Study 1a showed that facial fea­tures extract­ed by a [neur­al net­work] can be used to accu­rate­ly iden­ti­fy the sex­u­al ori­en­ta­tion of both men and women.”)”

    So if this research was put forth as a kind of warning to the public, which is how the researchers are framing it, it's quite a warning: both a warning that algorithms like this are being developed and a warning of how readily the conclusions of these algorithms might be accepted as evidence of an underlying biological finding (as opposed to a 'black box' artifact that could be picking up all sorts of biological and social cues).
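    The gap between the paper's two headline numbers (81 percent on paired photos, but far weaker results when screening 1,000 photos) is easy to reproduce with a toy simulation. The sketch below uses made-up Gaussian classifier scores, not the study's data or code; the score separation of 1.25 is an assumption chosen only because it yields roughly the paper's 81 percent paired accuracy:

```python
import random

random.seed(0)

def score(is_gay):
    # Hypothetical classifier score: higher means "rated more likely gay".
    # A mean separation of 1.25 gives roughly an 81% paired accuracy.
    return random.gauss(1.25 if is_gay else 0.0, 1.0)

# The paper's paired test: one gay and one straight photo per trial;
# the classifier "wins" if it scores the gay photo higher.
pairs = [(score(True), score(False)) for _ in range(20000)]
paired_acc = sum(g > s for g, s in pairs) / len(pairs)

# The screening test: 1,000 photos, 70 of them gay (a 7% base rate);
# flag the 100 highest-scoring photos and count the true hits.
population = [True] * 70 + [False] * 930
ranked = sorted(((score(g), g) for g in population), reverse=True)
hits = sum(g for _, g in ranked[:100])

print(f"paired accuracy: {paired_acc:.2f}")  # ~0.81
print(f"true hits among top 100 flagged: {hits}")
```

    With these toy numbers the top-100 precision lands well below the paired accuracy, which is exactly the point: a strong score on a forced-choice pair test says little about how the system performs when scanning a crowd where the target group is a small minority.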

    And don’t for­get, even if these algo­rithms do actu­al­ly stum­ble across real asso­ci­a­tions that can be teased out by these AI-dri­ven algo­rithms with just some pic­tures of some­one (or maybe some addi­tion­al bio­met­ric data picked up by the “smart kiosks” of the not-too-dis­tant future), there’s a big dif­fer­ence between demon­strat­ing the abil­i­ty to dis­cern some­thing sta­tis­ti­cal­ly across large data sets and being able to do that with the kind of accu­ra­cy where you don’t have to sig­nif­i­cant­ly wor­ry about jump­ing to the wrong con­clu­sion (assum­ing you aren’t using the tech­nol­o­gy in an abu­sive man­ner in the first place). Even if some­one devel­ops an algo­rithm that can accu­rate guess sex­u­al ori­en­ta­tion 95 per­cent of the time that still leaves a pret­ty sub­stan­tial 5 per­cent chance of get­ting it wrong. And the only way to avoid those incor­rect con­clu­sions is to devel­op an algo­rithm that’s so good at infer­ring sex­u­al ori­en­ta­tion it’s basi­cal­ly nev­er wrong, assum­ing that’s pos­si­ble. And, of course, if such an algo­rithm was devel­oped with that kind of accu­ra­cy that would be real­ly creepy. It points towards one of the scari­er aspects of this kind of tech­nol­o­gy: In order to ensure your pri­va­cy-invad­ing algo­rithms don’t risk jump­ing to erro­neous con­clu­sions you need algo­rithms that are scar­i­ly good at invad­ing your pri­va­cy which is anoth­er rea­son we prob­a­bly should­n’t be pro­mot­ing 21st Cen­tu­ry phys­iog­no­my.

    Posted by Pterrafractyl | October 2, 2017, 2:43 pm
  12. It looks like Google is finally getting that shiny new object it's been pining for: a city. Yep, Sidewalk Labs, owned by Google's parent company Alphabet, just got permission to build its own 'city of the future' on a 12-acre waterfront district near Toronto, filled with self-driving shuttles, adaptive traffic lights that sense pedestrians, and underground tunnels for freight-transporting robots. With sensors everywhere.

    And if that sounds ambi­tious, note that all these plans aren’t lim­it­ed to the ini­tial 12 acres. Alpha­bet report­ed­ly has plans to expand across 800 acres of Toron­to’s post-indus­tri­al water­front zone:

    The Finan­cial Times

    Alpha­bet to build futur­is­tic city in Toron­to
    Plans for tech­nol­o­gy-enabled envi­ron­ment raise pri­va­cy con­cerns

    by Leslie Hook in San Fran­cis­co
    Octo­ber 17, 2017, 4:54 pm

    Alpha­bet is set­ting out to build the city of the future, start­ing with a down­town dis­trict of Toron­to, in what it hopes will serve as a prov­ing ground for tech­nol­o­gy-enabled urban envi­ron­ments around the world.

    In a first-of-its-kind project, Alphabet’s sub­sidiary Side­walk Labs will devel­op a 12-acre water­front dis­trict, Quay­side, with a view to expand across 800 acres of Toronto’s post-indus­tri­al water­front zone.

    Self-dri­ving shut­tles, adap­tive traf­fic lights that sense pedes­tri­ans, mod­u­lar hous­ing and freight-deliv­er­ing robots that trav­el in under­ground tun­nels might all be part of the new devel­op­ment, accord­ing to the win­ning bid sub­mit­ted by Side­walk Labs.

    In its pro­pos­al, Side­walk also said that Toron­to would need to waive or exempt many exist­ing reg­u­la­tions in areas like build­ing codes, trans­porta­tion, and ener­gy in order to build the city it envi­sioned. The project may need “sub­stan­tial for­bear­ances from exist­ing laws and reg­u­la­tions,” the group said.

    Alpha­bet chair­man Eric Schmidt and Cana­di­an prime min­is­ter Justin Trudeau announced the deal on Tues­day in Toron­to.

    “We start­ed think­ing about all the things we could do if some­one would just give us a city and put us in charge,” said Eric Schmidt, exec­u­tive chair­man of Alpha­bet. “That’s not how it works, for all sorts of good rea­sons,” he added with a laugh.

    For Alpha­bet, the project presents a chance to exper­i­ment with new ways to use tech­nol­o­gy — and data — in the real world. “This is not some ran­dom activ­i­ty from our per­spec­tive. This is the cul­mi­na­tion of almost 10 years of think­ing about how tech­nol­o­gy could improve people’s lives,” said Mr Schmidt.

    ...

    Mr Trudeau described the project as a “test bed for new technologies...that will help us build clean­er, smarter, green­er, cities”.

    “Eric [Schmidt] and I have been talk­ing about col­lab­o­rat­ing on this for a few years, and see­ing it all come togeth­er now is extra­or­di­nar­i­ly excit­ing,” he added.

    One of the chal­lenges for the new dis­trict will be set­ting data poli­cies and address­ing con­cerns over pri­va­cy, which are par­tic­u­lar­ly acute because smart city tech­nolo­gies often rely on col­lect­ing vast amounts of data to make cities run more effi­cient­ly.

    In the vision state­ment sub­mit­ted as part of its bid, Side­walk describes a vast sys­tem of sen­sors that will mon­i­tor every­thing from park bench­es and over­flow­ing waste bins, to noise and pol­lu­tion lev­els in hous­ing. The devel­op­ment will also pio­neer new approach­es to ener­gy, includ­ing a ther­mal grid and on-site gen­er­a­tion, and tech-enabled pri­ma­ry health­care that will be inte­grat­ed with social ser­vices.

    The trans­porta­tion pro­pos­al for the dis­trict includes restrict­ing pri­vate vehi­cles, and instead offer­ing self-dri­ving shut­tles and bike paths that are heat­ed in the win­ter, accord­ing to the vision doc­u­ment. A series of under­ground util­i­ty tun­nels will house util­i­ties like elec­tri­cal wires and water pipes, and also pro­vide path­ways for freight-deliv­er­ing robots.

    Side­walk Labs, a sub­sidiary of Alpha­bet that was found­ed in 2015 by Dan Doc­to­roff, a for­mer deputy may­or of New York, will spend $50m on ini­tial plan­ning and test­ing for the devel­op­ment. As part of the effort, Google will also move its Cana­di­an head­quar­ters to Toron­to.

    Mr Doc­to­roff said the group would present a detailed plan in one year, fol­low­ing exten­sive con­sul­ta­tions with the com­mu­ni­ty. “Our goal here is to lis­ten, to under­stand,” he said. “This has to be a com­mu­ni­ty conversation...otherwise it won’t have the polit­i­cal cred­i­bil­i­ty to do things that are quite bold.”

    ———-

    “Alpha­bet to build futur­is­tic city in Toron­to” by Leslie Hook; The Finan­cial Times; 10/17/2017

    “In its pro­pos­al, Side­walk also said that Toron­to would need to waive or exempt many exist­ing reg­u­la­tions in areas like build­ing codes, trans­porta­tion, and ener­gy in order to build the city it envi­sioned. The project may need “sub­stan­tial for­bear­ances from exist­ing laws and reg­u­la­tions,” the group said.

    LOL, yeah, it's a good bet that A LOT of existing laws and regulations are going to have to be waived. Especially laws involving personal data privacy. And it sounds like the data collected isn't just going to involve your whereabouts and other information the sensors everywhere will be able to pick up. Alphabet is also envisioning "tech-enabled primary healthcare that will be integrated with social services", which means medical data privacy laws are probably also going to have to get waived:

    ...
    One of the chal­lenges for the new dis­trict will be set­ting data poli­cies and address­ing con­cerns over pri­va­cy, which are par­tic­u­lar­ly acute because smart city tech­nolo­gies often rely on col­lect­ing vast amounts of data to make cities run more effi­cient­ly.

    In the vision state­ment sub­mit­ted as part of its bid, Side­walk describes a vast sys­tem of sen­sors that will mon­i­tor every­thing from park bench­es and over­flow­ing waste bins, to noise and pol­lu­tion lev­els in hous­ing. The devel­op­ment will also pio­neer new approach­es to ener­gy, includ­ing a ther­mal grid and on-site gen­er­a­tion, and tech-enabled pri­ma­ry health­care that will be inte­grat­ed with social ser­vices.
    ...

    Let’s also not for­get about the devel­op­ment of tech­nolo­gies that can col­lect per­son­al health infor­ma­tion like heart rates and breath­ing infor­ma­tion using WiFi sig­nals alone (which would pair nice­ly with Google’s plans to put free WiFi kiosks bristling with sen­sors on side­walks every­where. And as is pret­ty clear at this point, any­thing that can be sensed remote­ly will be sensed remote­ly in this new city. Because that’s half the point of the whole thing. So yeah, “sub­stan­tial for­bear­ances from exist­ing laws and reg­u­la­tions” will no doubt be required.

    Interestingly, Alphabet recently announced a new initiative that sounds like exactly the kind of "tech-enabled primary healthcare that will be integrated with social services" the company has planned for its new city: Cityblock, a new Alphabet startup focused on improving health care management by, surprise!, integrating various technologies into a health care system with the goal of bringing down costs and improving outcomes. But it's not simply new technology that's supposed to do this. Instead, that technology is to be used in a preventive manner, in order to address expensive health conditions before they get worse. As such, Cityblock is going to focus on behavioral health. Yep, it's a health care model where a tech firm, paired with a health firm, tries to get you to live a healthier lifestyle by collecting lots of data about you. And while this approach would undoubtedly cause widespread privacy concerns, those concerns will probably be somewhat stunted in this case, since the target market Cityblock has in mind is poor people, especially Medicaid patients in the US:

    Fierce Health­care

    Google’s par­ent com­pa­ny spins off an inno­v­a­tive start­up health­care provider

    by Matt Kuhrt | Oct 5, 2017 8:49am

    The lat­est Sil­i­con Val­ley bid to dis­rupt a tra­di­tion­al indus­try appears to be aimed at health­care. City­block, a start­up qui­et­ly launched by Google’s par­ent com­pa­ny Alpha­bet, will focus on pro­vid­ing team-based care for low-income com­mu­ni­ties.

    The ven­ture comes from one of Alphabet’s inno­va­tion-ori­ent­ed groups, Side­walk Labs, and will rely upon a team-based care deliv­ery struc­ture that is sup­port­ed by doc­tors, behav­ioral health coach­es and tech­no­log­i­cal tools, accord­ing to an arti­cle from CNBC.

    Efforts by health­care orga­ni­za­tions to improve care man­age­ment and increase patient engage­ment through social inter­ac­tion have attract­ed atten­tion, par­tic­u­lar­ly in the con­text of chron­ic con­di­tions, as Fierce­Health­care has pre­vi­ous­ly report­ed. While cell phones and social media apps have pro­vid­ed new avenues to boost patient engage­ment, inte­grat­ing those tech­nolo­gies into an effec­tive care deliv­ery mod­el has proven more com­plex. At the same time, major play­ers such as the Cen­ters for Medicare & Med­ic­aid Ser­vices, active­ly seek feed­back on mod­els that pri­or­i­tize behav­ioral health in response to the indus­try’s inter­est in the poten­tial for effi­cien­cy from an increased empha­sis on pre­ven­tive and ongo­ing care.

    City­block aims to pro­vide Med­ic­aid and low­er-income Medicare ben­e­fi­cia­ries access to high-val­ue, read­i­ly avail­able per­son­al­ized health ser­vices. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medi­um that the orga­ni­za­tion will apply lead­ing-edge care mod­els that ful­ly inte­grate pri­ma­ry care, behav­ioral health and social ser­vices. It expects to open its first clin­ic, which it calls a Neigh­bor­hood Health Hub, in New York City in 2018.

    Cityblock’s inter­dis­ci­pli­nary man­age­ment team, which includes both vet­er­ans of the tra­di­tion­al health­care indus­try and Google tech­nol­o­gists, will focus on pre­ven­tive care. Behav­ioral health coach­es will dri­ve care teams that will build social rela­tion­ships and deliv­er care at cen­tral­ly locat­ed “hubs,” via tele­health ser­vices or house calls, accord­ing to the web­site. City­block is also in the process of nego­ti­at­ing part­ner­ships to ensure insur­ance com­pa­nies cov­er its ser­vices.

    ...

    He also points out that City­block has made a con­scious deci­sion to tar­get low-income Amer­i­cans, who he says have tra­di­tion­al­ly been short-changed by indus­try inno­va­tion efforts.

    ———-

    “Google’s par­ent com­pa­ny spins off an inno­v­a­tive start­up health­care provider” by Matt Kuhrt; Fierce Health­care; 10/05/2017

    “Cityblock aims to provide Medicaid and lower-income Medicare beneficiaries access to high-value, readily available personalized health services. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medium that the organization will apply leading-edge care models that fully integrate primary care, behavioral health and social services. It expects to open its first clinic, which it calls a Neighborhood Health Hub, in New York City in 2018.”

    It’s probably worth recalling that personalized services for the poor intended to ‘help them help themselves’ were the centerpiece of House Speaker Paul Ryan’s proposal to give every poor person a life coach who would issue “life plans” and “contracts” that poor people would be expected to meet, with penalties if they failed to meet them. So when we’re talking about setting up special personalized “behavioral health” monitoring systems as part of health care services for the poor, don’t forget that this personalized monitoring system is going to be really handy when politicians want to say, “if you want to stay on Medicaid you had better make XYZ changes in your lifestyle. We are watching you.” And since right-wingers generally expect the poor to be super-human (capable of working multiple jobs, getting an education, raising a family, and dealing with any unforeseen personal disasters in stride, all simultaneously), we shouldn’t be surprised to see the kinds of behavioral health standards that almost no one can meet, especially since working multiple jobs, getting an education, raising a family, and dealing with any unforeseen personal disasters in stride simultaneously is an incredibly unhealthy lifestyle.

    Also recall that Paul Ryan suggested that his ‘life coach’ plan could apply to other federal programs for the poor, including food stamps. It’s not a stretch at all to imagine that ‘life coaches’ for Medicaid recipients would appeal to the right-wing, as long as it involves a ‘kicking the poor’ dynamic. And that’s part of the tragedy of the modern age: surveillance technology and a focus on behavioral health could be great as a helpful voluntary tool for people who want help getting healthier, but it’s hard to imagine it not becoming a coercive nightmare scenario in the US given the incredible antipathy towards the poor that pervades American society.

    So as creepy as Google’s city is on its face regarding what it tells us about how the future is unfolding for people of all incomes and classes, don’t forget that we could be looking at the first test bed for creating the kind of surveillance welfare state that’s perfect for kicking people off of welfare. Just set unrealistic standards that involve a lot of paternalistic moral posturing (which should play well with voters), watch all the poor people with the surveillance technology, and wait for the wave of inevitable failures who can be kicked off for not ‘trying’ or something.

    Posted by Pterrafractyl | October 18, 2017, 3:28 pm
  13. There’s some big news about Facebook’s mind-reading technology ambitions, although it’s not entirely clear if it’s good news, bad news, scary news or what: Regina Dugan, the former head of DARPA who jumped to Google and then Facebook, where she was working on the mind-reading technology, just left Facebook. Why? Well, that’s where it’s unclear. According to a tweet Dugan made about her departure:

    There is a tidal shift going on in Sil­i­con Val­ley, and those of us in this indus­try have greater respon­si­bil­i­ties than ever before. The tim­ing feels right to step away and be pur­pose­ful about what’s next, thought­ful about new ways to con­tribute in times of dis­rup­tion.

    So Dugan is leaving Facebook to be more purposeful and responsible. And she was the one heading up the mind-reading technology project. Is that scary news? It’s unclear, but it seems like that might be scary news:

    Giz­mo­do

    What Hap­pens to Facebook’s Mind Read­ing Project Now That Its Leader Is Gone?

    Alex Cranz
    10/17/2017 5:05pm

    Regi­na Dugan, a tech exec with roots in the gov­ern­ment sec­tor as the for­mer direc­tor of DARPA, is leav­ing Face­book and her depar­ture calls into ques­tion the sta­tus of one of the cra­zi­est things Face­book has been work­ing on.

    Fit­ting­ly, Dugan announced the depar­ture in a post on Face­book today.

    [see Face­book post]

    If you’re unfa­mil­iar with Dugan her­self you may be famil­iar with some of the skunkworks projects she over­saw while man­ag­ing Google’s secre­tive Advanced Tech­nol­o­gy and Projects (ATAP) group from 2012 to 2016. Those projects includ­ed Tan­go, a high­ly accu­rate aug­ment­ed real­i­ty device packed into a phone, and Ara, the now scut­tled mod­u­lar phone that could have made your mobile hard­ware upgrades a whole lot cheap­er.

    In 2016 Dugan left Google for anoth­er huge com­pa­ny with lit­tle con­sumer gad­get expe­ri­ence: Face­book. At Face­book she ran Build­ing 8, anoth­er pri­vate­ly fund­ed research and devel­op­ment group like ATAP.

    The projects Dugan and her col­leagues devel­oped at Build­ing 8 didn’t just include neat gad­gets for the near future; they could have also led to enor­mous leaps for­ward in tech­nol­o­gy as a whole. The most not­ed project was one announced at F8, Facebook’s annu­al devel­op­er con­fer­ence, in April. Called Silent Voice First, the project sought to allow com­put­ers to read your mind. Obvi­ous­ly that would make it eas­i­er to post to Face­book when your hands are busy, and it could be life alter­ing for peo­ple with severe phys­i­cal lim­i­ta­tions, but it would also, you know, be a com­put­er, run by Face­book, that READS YOUR MIND.

    Nei­ther Dugan nor Face­book has made it clear why she’s depart­ing at this time; a rep­re­sen­ta­tive for Face­book told Giz­mo­do the com­pa­ny had noth­ing fur­ther to add (we’ve also reached out to Dugan). And Face­book has not detailed what will hap­pen to the projects she over­saw at Build­ing 8.

    ...

    Beyond that, all we have is a pre­pared quote from Dugan that was pro­vid­ed to reporters, via Bloomberg’s Sarah Frier.

    There is a tidal shift going on in Sil­i­con Val­ley, and those of us in this indus­try have greater respon­si­bil­i­ties than ever before. The tim­ing feels right to step away and be pur­pose­ful about what’s next, thought­ful about new ways to con­tribute in times of dis­rup­tion.

    These aren’t exact­ly the words you want to hear from the woman over­see­ing the devel­op­ment of mind-read­ing tech­nol­o­gy for one of the largest pri­vate sur­veil­lance appa­ra­tus­es in the world.

    But it is a good reminder for us all that the biggest leaps for­ward in tech­nol­o­gy, the next steps on our jour­ney towards a Matrix or Star Trek-like future, are not nec­es­sar­i­ly in the hands of altru­is­tic sci­en­tists with pub­lic fund­ing, but poten­tial­ly in the hands of enor­mous pri­vate cor­po­ra­tions who seem to pri­mar­i­ly per­ceive humans as com­modi­ties, and tech­nol­o­gy as inroads into our wal­lets and minds. In cas­es like that one would hope they’d be respon­si­ble.

    ———-

    “What Hap­pens to Facebook’s Mind Read­ing Project Now That Its Leader Is Gone?” by Alex Cranz; Giz­mo­do; 10/17/2017

    “Nei­ther Dugan nor Face­book has made it clear why she’s depart­ing at this time; a rep­re­sen­ta­tive for Face­book told Giz­mo­do the com­pa­ny had noth­ing fur­ther to add (we’ve also reached out to Dugan). And Face­book has not detailed what will hap­pen to the projects she over­saw at Build­ing 8.”

    It’s a mystery. A mind-reading technology mystery. Oh goodie. As the author of the above piece notes in response to Dugan’s tweet about being responsible and purposeful, these aren’t exactly reassuring words in this context:

    ...
    These aren’t exact­ly the words you want to hear from the woman over­see­ing the devel­op­ment of mind-read­ing tech­nol­o­gy for one of the largest pri­vate sur­veil­lance appa­ra­tus­es in the world.
    ...

    That’s the con­text. The head of the mind-read­ing tech­nol­o­gy divi­sion for one of the largest pri­vate sur­veil­lance appa­ra­tus­es in the world just left the com­pa­ny for cryp­tic rea­sons involv­ing the need for the tech indus­try to be more respon­si­ble than ever and her choice to step away to be pur­pose­ful. It’s not par­tic­u­lar­ly reas­sur­ing news.

    Posted by Pterrafractyl | October 18, 2017, 8:45 pm
  14. Here’s some new research worth keeping in mind regarding the mind-reading technologies being developed by Facebook and Elon Musk: While reading your exact thoughts (the stated goal of Facebook and Musk) is probably going to require quite a bit more research, reading your emotions is something researchers can already do. And that ability to read emotions can, in turn, potentially be used to read what you’re thinking by looking at your emotional response to particular concepts.

    That’s what some researchers just demon­strat­ed, using fMRI brain imag­ing tech­nol­o­gy to gath­er data on brain activ­i­ty which was fed into soft­ware trained to iden­ti­fy dis­tinct pat­terns of brain activ­i­ty. The results are pret­ty astound­ing. In the study, researchers recruit­ed 34 indi­vid­u­als: 17 peo­ple who self-pro­fessed to hav­ing had sui­ci­dal thoughts before, and 17 oth­ers who had­n’t. Then they mea­sured the brain activ­i­ties of these 34 indi­vid­u­als in response to var­i­ous words, includ­ing the word “death.” It turns out “death” cre­at­ed a dis­tinct sig­na­ture of brain activ­i­ty dif­fer­en­ti­at­ing the sui­ci­dal indi­vid­u­als from the con­trol group, allow­ing the researchers to iden­ti­fy those with sui­ci­dal thoughts 91 per­cent of the time in this study:

    The Dai­ly Beast

    A Machine Might Just Read Your Mind and Pre­dict If You’re Sui­ci­dal
    A psy­chol­o­gy pro­fes­sor says his soft­ware can fig­ure out if a per­son is sui­ci­dal. But does it real­ly work?

    Tanya Basu
    11.01.17 9:00 AM ET

    A few years ago, Marcel Just was trying to figure out how to have real-life applications for his research. Just, a professor of psychology at Carnegie Mellon and director of the Center for Cognitive Brain Imaging, had spent the majority of the past decade teaching computer programs how to identify thoughts. He had found—with the help of a functional magnetic resonance imaging (fMRI) machine—that each emotion we feel had a specific signature in the brain and lit up in uniquely identifiable ways. Just had trained a piece of software to follow these patterns and recognize about 30 concepts and emotions.

    “We asked whether we could iden­ti­fy what a per­son was think­ing from the machine learn­ing pat­terns,” Just explained. “The machine learn­ing data was fig­ured out with var­i­ous kinds of con­cepts; even­tu­al­ly it learned how to map between pat­terns and con­cepts.”

    “From that research,” he added, “we real­ized it was pos­si­ble to tell what a per­son was think­ing.”

    In oth­er words, Just had cre­at­ed arti­fi­cial intel­li­gence that could read your thoughts.

    Around this time, Just spoke at the Uni­ver­si­ty of Pitts­burgh’s med­ical school. David Brent, a pro­fes­sor of psy­chol­o­gy spe­cial­iz­ing in chil­dren, approached him.

    “Do you think this could be used to iden­ti­fy sui­ci­dal thoughts?” Brent asked.

    It hit Just then: What if arti­fi­cial intel­li­gence could pre­dict what a sui­ci­dal per­son was think­ing? And maybe even pre­vent a sui­ci­dal per­son from com­mit­ting sui­cide?

    Ear­li­er this week, Just, Brent, and a few oth­er col­leagues pub­lished a land­mark paper in the jour­nal Nature that finds that with an aston­ish­ing 91% accu­ra­cy, arti­fi­cial intel­li­gence is able to fig­ure out if a per­son is con­sid­er­ing sui­cide.

    The exper­i­ment is remark­able for more than one rea­son. There’s of course the idea of using a machine trained to fig­ure out neur­al pat­terns to iden­ti­fy those who might con­sid­er sui­cide. It’s a sub­set of the pop­u­la­tion that is typ­i­cal­ly dif­fi­cult to pin­point and help before it’s too late, rely­ing not only on their telling oth­ers of their desire to kill them­selves, but also that per­son actu­al­ly act­ing on it and help­ing the sui­ci­dal per­son in trou­ble.

    Just and Brent recruit­ed 34 indi­vid­u­als from local clin­ics: 17 who’d self-pro­fessed to hav­ing had sui­ci­dal thoughts before, and 17 oth­ers who had­n’t. The research team put the 34 indi­vid­u­als through an fMRI machine and had them think about words (with the help of the Adult Sui­ci­dal Ideation Ques­tion­naire, which mea­sures “sui­cide ideation”) that rep­re­sent­ed dif­fer­ent “stim­u­lus” con­cepts, rang­ing from pos­i­tive ones (praise, bliss, care­free, and kind­ness), neg­a­tive ones (ter­ri­ble, cru­el­ty, evil), and sui­cide (fatal, funer­al, death).

    The last one—death—was the most damn­ing of the brain sig­na­tures in Just’s study. Those who were sui­ci­dal showed a spot of angry crim­son at the front of the brain. Con­trol patients, mean­while, just had specks of red amidst a sea of blue in the pic­tures. “These peo­ple who are sui­ci­dal had more sad­ness in their rep­re­sen­ta­tion of death, and more shame as well,” he said.
    [see neural imaging representations]

    So Just and Brent set to work teach­ing a machine the con­cepts that were most asso­ci­at­ed with sui­cide, and those that weren’t. “If you trained the machine on 10 peo­ple in their anger sig­na­ture, and put the 11th per­son on the scan­ner, it should be able to tell if the 11th per­son is angry or not,” Just said of how the machine was put to the test. The machine then fig­ured out if the per­son was sui­ci­dal or not.

    The results are strong, even if the sam­ple size is rel­a­tive­ly small: After going through all 34 peo­ple, Just and his research team were able to say with a 91% suc­cess rate which of the indi­vid­u­als had dis­played sui­ci­dal thoughts.

    That’s, in short, amaz­ing. But it’s not per­fect. What about the oth­er 9%? “It’s a good ques­tion,” he said of the gap. “There seems to be an emo­tion­al dif­fer­ence [we don’t under­stand]” that the group hopes to test in future iter­a­tions of the study.

    Anoth­er prob­lem with the study as it stands? The fact that it uses the fMRI machine in the first place. “Nobody used machine learn­ing in the ear­ly days,” Just said. “This [arti­fi­cial intel­li­gence] uses mul­ti­ple vol­ume elements—‘voxels’—to fig­ure out neur­al rep­re­sen­ta­tion.” If that sounds expen­sive, it is. And expense makes any poten­tial ther­a­py there­fore more dif­fi­cult to access, a crit­i­cism of brain scan­ning stud­ies cov­ered by Ed Yong at The Atlantic: “When sci­en­tists use med­ical scan­ners to repeat­ed­ly peer at the shapes and activ­i­ties of the human brain, those brains tend to belong to wealthy and well-edu­cat­ed peo­ple. And unless researchers take steps to cor­rect for that bias, what we get is an under­stand­ing of the brain that’s incom­plete, skewed, and ... well ... a lit­tle weird.”

    ...

    The more sub­tle nuance of the study that deserves note is the very real pos­si­bil­i­ty that arti­fi­cial intel­li­gence can do some­thing that strong­ly resem­bles read­ing your mind. We like to con­ceive of thoughts as amor­phous con­cepts, as unique to our own head­space and expe­ri­ence. What might tick­le one per­son­’s fan­cy might not nec­es­sar­i­ly do the same for anoth­er; what brings one indi­vid­ual shame won’t both­er some­one else. But those core feel­ings of hap­pi­ness, shame, sad­ness, and oth­ers seem to look almost iden­ti­cal from per­son to per­son.

    Does this mean that we can poten­tial­ly erad­i­cate sui­cide, though? Just is hes­i­tant to make that assump­tion, though he thinks this is a huge step towards under­stand­ing what he terms “thought dis­or­ders.” “We can look at the neur­al sig­na­ture and see how it’s changed,” he said, “see what this per­son is think­ing, whether it’s unusu­al.” Just thinks that most real­is­ti­cal­ly, this is going to be a huge first step towards devel­op­ing a unique ther­a­py for sui­ci­dal indi­vid­u­als: If we know the spe­cif­ic thought process­es that are symp­to­matic of sui­cide, we can know how to poten­tial­ly spot sui­ci­dal indi­vid­u­als and help them.

    “This isn’t a wild pie in the sky idea,” Just said. “We can use machine learn­ing to fig­ure out the phys­i­cal­i­ty of thought. We can help peo­ple.”

    ———-

    “A Machine Might Just Read Your Mind and Pre­dict If You’re Sui­ci­dal” by Tanya Basu; The Dai­ly Beast; 11/01/2017

    ““This isn’t a wild pie in the sky idea,” Just said. “We can use machine learn­ing to fig­ure out the phys­i­cal­i­ty of thought. We can help peo­ple.””

    Yes indeed, this kind of tech­nol­o­gy could be wild­ly help­ful in the field of brain sci­ence and study­ing men­tal ill­ness. The abil­i­ty to break down the men­tal activ­i­ty in response to con­cepts and see which parts of the brains are light­ing up and what types of emo­tions they’re asso­ci­at­ed with would be an invalu­able research tool. So let’s hope researchers are able to come up with all sorts of use­ful dis­cov­er­ies about all sorts of men­tal con­di­tions using this kind of tech­nol­o­gy. In respon­si­ble hands this could lead to incred­i­ble break­throughs in med­i­cine and men­tal health and real­ly could improve lives.

    But, of course, with technology being the double-edged sword that it is, we can’t ignore the reality that the same technology that would be wonderful for responsible researchers working with volunteers in a lab setting would be absolutely terrifying if it was incorporated into, say, Facebook’s planned mind-reading consumer technology. After all, if Facebook’s planned mind-reading tech can read people’s thoughts, it should also be capable of reading something much simpler to detect, like emotional responses.

    Or at least typical emotional responses. As the study also indicated, there’s going to be a subset of people whose brains don’t emotionally respond in the “normal” manner, where the definition of “normalcy” is probably filled with all sorts of unknown biases:

    ...
    The results are strong, even if the sam­ple size is rel­a­tive­ly small: After going through all 34 peo­ple, Just and his research team were able to say with a 91% suc­cess rate which of the indi­vid­u­als had dis­played sui­ci­dal thoughts.

    That’s, in short, amaz­ing. But it’s not per­fect. What about the oth­er 9%? “It’s a good ques­tion,” he said of the gap. “There seems to be an emo­tion­al dif­fer­ence [we don’t under­stand]” that the group hopes to test in future iter­a­tions of the study.

    Anoth­er prob­lem with the study as it stands? The fact that it uses the fMRI machine in the first place. “Nobody used machine learn­ing in the ear­ly days,” Just said. “This [arti­fi­cial intel­li­gence] uses mul­ti­ple vol­ume elements—‘voxels’—to fig­ure out neur­al rep­re­sen­ta­tion.” If that sounds expen­sive, it is. And expense makes any poten­tial ther­a­py there­fore more dif­fi­cult to access, a crit­i­cism of brain scan­ning stud­ies cov­ered by Ed Yong at The Atlantic: “When sci­en­tists use med­ical scan­ners to repeat­ed­ly peer at the shapes and activ­i­ties of the human brain, those brains tend to belong to wealthy and well-edu­cat­ed peo­ple. And unless researchers take steps to cor­rect for that bias, what we get is an under­stand­ing of the brain that’s incom­plete, skewed, and ... well ... a lit­tle weird.”
    ...

    “What about the oth­er 9%? “It’s a good ques­tion,” he said of the gap. “There seems to be an emo­tion­al dif­fer­ence [we don’t under­stand]” that the group hopes to test in future iter­a­tions of the study.”

    So once this technology becomes cheap enough for widespread use (which is one of the goals of Facebook and Musk), we could easily find that “brain types” become a new category for assessing people. And predicting behavior. And anything else people (and not just expert researchers) can think up to use this kind of data for.
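    To make the study’s evaluation scheme concrete: training the pattern-classifier on everyone except one subject, testing on the held-out subject, and repeating across all 34 participants is standard leave-one-out cross-validation. Here’s a minimal, purely illustrative sketch using a simple nearest-centroid classifier on synthetic two-dimensional “activation” vectors; the data, features, and group labels are made-up stand-ins, since the study’s actual fMRI voxel data obviously isn’t available here:

    ```python
    # Illustrative leave-one-out evaluation of a nearest-centroid classifier.
    # The "activation" vectors below are synthetic stand-ins for per-subject
    # brain-response features; they are NOT real fMRI data.
    import random


    def centroid(vectors):
        """Component-wise mean of a list of equal-length feature vectors."""
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


    def classify(sample, centroids):
        """Assign the label whose group centroid is closest (squared Euclidean)."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: sq_dist(sample, centroids[label]))


    def leave_one_out_accuracy(data):
        """data: list of (label, feature_vector) pairs. For each subject,
        train centroids on everyone else, then test on the held-out subject."""
        correct = 0
        for i, (true_label, sample) in enumerate(data):
            by_label = {}
            for j, (label, vec) in enumerate(data):
                if j != i:  # hold subject i out of training
                    by_label.setdefault(label, []).append(vec)
            cents = {label: centroid(vecs) for label, vecs in by_label.items()}
            if classify(sample, cents) == true_label:
                correct += 1
        return correct / len(data)


    # Two synthetic groups of 17 "subjects" each, mirroring the study's design.
    random.seed(0)
    data = [("ideation", [1.0 + random.gauss(0, 0.3), 0.2 + random.gauss(0, 0.3)])
            for _ in range(17)]
    data += [("control", [0.0 + random.gauss(0, 0.3), 1.0 + random.gauss(0, 0.3)])
             for _ in range(17)]
    print(leave_one_out_accuracy(data))
    ```

    On well-separated synthetic clusters like these the accuracy comes out high; the striking part of the real study is that genuine fMRI responses to a word like “death” separate the two groups at all, not the classifier itself, which is conceptually simple.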

    And don’t forget, if Facebook really can develop cheap thought-reading technology designed to interface your brain with a computer, that could easily become the kind of thing employees are just expected to use due to the potential productivity enhancements. So imagine technology that’s not only reading the words you’re thinking but also reading the emotional response you have to those words. And imagine being basically forced to use this technology in the workplace of the future because it’s deemed to be productivity-enhancing or something. That could actually happen.

    It also rais­es the ques­tion of what Face­book would do if it detect­ed some­one was show­ing the sui­ci­dal brain sig­na­ture. Do they alert some­one? Will think­ing sad thoughts while using the mind-read­ing tech­nol­o­gy result in a vis­it from a men­tal health pro­fes­sion­al? What about real­ly angry or vio­lent thoughts? It’s the kind of area that’s going to raise fas­ci­nat­ing ques­tions about the respon­si­ble use of this data. Fas­ci­nat­ing and often ter­ri­fy­ing ques­tions.

    It’s all pretty depressing, right? Well, if the emerging mind-reading economy gets overwhelmingly depressing, at least it sounds like the mind-reading technology will be able to detect the depression it’s causing. Yay?

    Posted by Pterrafractyl | November 1, 2017, 3:06 pm
  15. Remember those reports about Big Data being used in the workplace to allow employers to predict which employees are likely to get sick (so they can get rid of them before the illnesses get expensive)? Well, as the following article makes clear, employers are going to be predicting a lot more than just who is getting sick. They’re going to be trying to predict everything they can predict, along with things they can’t accurately predict but decide to try to predict anyway:

    The Guardian

    Big Broth­er isn’t just watch­ing: work­place sur­veil­lance can track your every move

    Employ­ers are using a range of tech­nolo­gies to mon­i­tor their staff’s web-brows­ing pat­terns, key­strokes, social media posts and even pri­vate mes­sag­ing apps

    Olivia Solon in San Fran­cis­co
    Mon­day 6 Novem­ber 2017 03.00 EST

    How can an employ­er make sure its remote work­ers aren’t slack­ing off? In the case of tal­ent man­age­ment com­pa­ny Crossover, the answer is to take pho­tos of them every 10 min­utes through their web­cam.

    The pic­tures are tak­en by Crossover’s pro­duc­tiv­i­ty tool, WorkS­mart, and com­bine with screen­shots of their work­sta­tions along with oth­er data – includ­ing app use and key­strokes – to come up with a “focus score” and an “inten­si­ty score” that can be used to assess the val­ue of free­lancers.

    Today’s work­place sur­veil­lance soft­ware is a dig­i­tal panop­ti­con that began with email and phone mon­i­tor­ing but now includes keep­ing track of web-brows­ing pat­terns, text mes­sages, screen­shots, key­strokes, social media posts, pri­vate mes­sag­ing apps like What­sApp and even face-to-face inter­ac­tions with co-work­ers.

    “If you are a par­ent and you have a teenage son or daugh­ter com­ing home late and not doing their home­work you might won­der what they are doing. It’s the same as employ­ees,” said Brad Miller, CEO of Aware­ness Tech­nolo­gies, which sells a pack­age of employ­ee mon­i­tor­ing tools under the brand Inter­guard.

    Crossover’s San­jeev Pat­ni insists that work­ers get over the ini­tial self-con­scious­ness after a few days and accept the need for such mon­i­tor­ing as they do CCTV in shop­ping malls.

    “The response is ‘OK, I’m being mon­i­tored, but if the com­pa­ny is pay­ing for my time how does it mat­ter if it’s record­ing what I’m doing? It’s only for my bet­ter­ment,’” he said.

    Such “bet­ter­ment” appar­ent­ly isn’t nec­es­sary for man­agers: they can pick and choose when to turn their cam­eras on.

    The major­i­ty of sur­veil­lance tech providers focus their atten­tion on the finan­cial sec­tor, where com­pa­nies are legal­ly required to track staff com­mu­ni­ca­tions to pre­vent insid­er trad­ing. But they are increas­ing­ly sell­ing their tech to a broad­er range of com­pa­nies to mon­i­tor staff pro­duc­tiv­i­ty, data leaks and Human Resources vio­la­tions, like sex­u­al harass­ment and inap­pro­pri­ate behav­ior.

    Wire­tap spe­cial­izes in mon­i­tor­ing work­place chat forums such as Face­book Work­place, Slack and Yam­mer to iden­ti­fy, among oth­er issues, “inten­tion­al and unin­ten­tion­al harass­ment, threats, intim­i­da­tion”.

    Last year an employ­ee at an IT ser­vices com­pa­ny sent a pri­vate chat mes­sage to a friend at work wor­ried that he had just shared his sex­u­al iden­ti­ty with his man­ag­er in a meet­ing and fear­ing he’d face career reprisal. Wire­tap detect­ed the employee’s con­cern and alert­ed a senior com­pa­ny exec who was then able to inter­vene, talk to the man­ag­er and defuse the sit­u­a­tion.

    “Hav­ing the vis­i­bil­i­ty allows you to step in pro­duc­tive­ly,” said Greg Moran, COO of Wire­tap. “Even if it’s not a seri­ous offense you can see the ear­ly indi­ca­tions of some­one head­ing down a path.”

    To mon­i­tor pro­duc­tiv­i­ty, soft­ware can mea­sure prox­ies such as the num­ber of emails being sent, web­sites vis­it­ed, doc­u­ments and apps opened and key­strokes. Over time it can build a pic­ture of typ­i­cal user behav­iour and then alert when some­one devi­ates.

    “If it’s nor­mal for you to send out 10 emails, type 5,000 key­strokes and be active on a com­put­er for three hours a day, if all of a sud­den you are only active for one hour or typ­ing 1,000 key­strokes, there seems to be a dip in pro­duc­tiv­i­ty,” said Miller.

    “Or if you usu­al­ly touch 10 doc­u­ments a day and print two and sud­den­ly you are touch­ing 500 and print­ing 200 that may mean you’re steal­ing doc­u­ments in prepa­ra­tion of leav­ing the com­pa­ny.”

    Oth­er com­pa­nies, such as Tera­mind, seek to mea­sure dis­trac­tion by look­ing at how much a per­son is switch­ing between appli­ca­tions.

    “If a para­le­gal is writ­ing a doc­u­ment and every few sec­onds is switch­ing to Hipchat, Out­look and Word then there’s an issue that can be resolved by address­ing it with the employ­ee,” said Teramind’s CEO, Isaac Kohen.

    A com­mon but flawed tech­nique is key­word detec­tion, draw­ing from a list of pre­de­fined terms includ­ing swear words and slang asso­ci­at­ed with shady behav­ior. This approach tends to kick up a lot of false pos­i­tives and is easy to cir­cum­vent by any­one intent on beat­ing the sys­tem.

    That wasn’t the case when an All State Insur­ance fran­chise did a live demon­stra­tion of Interguard’s soft­ware to oth­er deal­ers. The tech­nol­o­gy start­ed scan­ning the net­work and almost imme­di­ate­ly found an email with the words “client list” and “résumé”. The demon­stra­tor opened the email in front of a room full of peers to dis­cov­er his best employ­ee was plot­ting to move to anoth­er com­pa­ny.

    Com­pa­nies like Dig­i­tal Rea­son­ing search for more sub­tle indi­ca­tors of pos­si­ble wrong­do­ing, such as con­text switch­ing. This is where one per­son sug­gests mov­ing the dis­cus­sion to encrypt­ed apps like What­sApp or Sig­nal or even tak­ing the con­ver­sa­tion offline, indi­cat­ing that the sub­ject mat­ter is too risky for the cor­po­rate net­work.

    “Now peo­ple know a lot of these sys­tems mon­i­tor­ing com­mu­ni­ca­tions are becom­ing more sophis­ti­cat­ed, they are say­ing, ‘Hey let’s move over to the oth­er app’ or ‘Let’s meet down­stairs for cof­fee’. These are small clues that have sur­faced in pros­e­cu­tions,” said Dig­i­tal Reasoning’s chief prod­uct offi­cer, Marten den Har­ing.

    Even What­sApp isn’t safe from Qumram’s mon­i­tor­ing soft­ware, which is placed on employ­ees’ devices – with their con­sent – to cap­ture every­thing they do, includ­ing the mes­sages they send to cus­tomers using What­sApp.

    “It tru­ly is Big Broth­er watch­ing you,” said Qumram’s Patrick Bar­nett.

    The spy­ing tech­nique that most com­pa­nies avoid, despite Crossover’s enthu­si­asm, is access­ing employ­ees’ web­cams. (Although you should prob­a­bly tape over yours like Mark Zucker­berg does if you are wor­ried about it.)

    ...

    Amer­i­can com­pa­nies gen­er­al­ly aren’t required by law to dis­close how they mon­i­tor employ­ees using com­pa­ny-issued devices, although they tend to include a catch-all clause in employ­ment con­tracts declar­ing such mon­i­tor­ing.

    “You can look at every­thing [in the US],” said Al Gidari, direc­tor of pri­va­cy at the Stan­ford Cen­tre for Inter­net and Soci­ety, adding that new sur­veil­lance soft­ware is so intru­sive because it’s “more per­va­sive, con­tin­u­ous and search­able”.

    Even if you’re not an employ­ee you may still be sub­ject to sur­veil­lance, thanks to tech­nol­o­gy used to screen poten­tial job can­di­dates. San­ta Mon­i­ca-based Fama pro­vides social media screen­ing to employ­ers to check for any prob­lem­at­ic con­tent.

    CEO Ben Mon­es said Fama was only inter­est­ed in con­tent that’s rel­e­vant to busi­ness­es, which includes “ref­er­ences to big­otry, misog­y­ny and vio­lence” as well as drug and alco­hol ref­er­ences. The soft­ware, he said, can “tell the dif­fer­ence between smok­ing weed in the back­yard and weed­ing the back­yard”.

    When pushed on how the com­pa­ny dif­fer­en­ti­ates big­otry ver­sus, for exam­ple, a black per­son using the N‑word, the response is a lit­tle fuzzy.

    “It’s a super-nuanced top­ic,” Mon­es said, adding that some of the thin­ly veiled signs of racism, like ref­er­ences to Con­fed­er­ate flags or stat­ues, wouldn’t come up.

    “Employ­ers aren’t look­ing at ref­er­ences like that to make a hir­ing deci­sion,” he said.

    And con­nect­ing the dots between a person’s work life and per­son­al life can lead to uncom­fort­able ter­ri­to­ry. One insid­er at a large con­sult­ing firm told the Guardian the com­pa­ny was look­ing into whether it could pre­vent fraud among bankers by look­ing at their Face­book pages. One sce­nario would be a trad­er who had just changed their rela­tion­ship sta­tus from mar­ried to divorce, the expense of which “could put that per­son under pres­sure to com­mit fraud or steal”.

    The insid­er had reser­va­tions about the effec­tive­ness of such a sys­tem.

    “If I were divorced, would I be more like­ly to steal? I don’t think so. It makes assump­tions,” he said, adding, “The more data and tech­nol­o­gy you have with­out an under­ly­ing the­o­ry of how people’s minds work then the eas­i­er it is to jump to con­clu­sions and put peo­ple in the crosshairs who don’t deserve to be.”

    ———-

    “Big Broth­er isn’t just watch­ing: work­place sur­veil­lance can track your every move” by Olivia Solon; The Guardian; 11/06/2017

    “Today’s work­place sur­veil­lance soft­ware is a dig­i­tal panop­ti­con that began with email and phone mon­i­tor­ing but now includes keep­ing track of web-brows­ing pat­terns, text mes­sages, screen­shots, key­strokes, social media posts, pri­vate mes­sag­ing apps like What­sApp and even face-to-face inter­ac­tions with co-work­ers.”

    And what are employers doing with that digital panopticon? For starters, surveilling employees’ computer usage, as you would unfortunately expect. What might be less expected is that this panopticon software can be set up to learn the expected behavior of a particular employee and then compare that behavioral profile against what it actually observes. If there’s a big change, the managers get a warning. The panopticon isn’t just surveilling you. It’s getting to know you:

    ...
    To mon­i­tor pro­duc­tiv­i­ty, soft­ware can mea­sure prox­ies such as the num­ber of emails being sent, web­sites vis­it­ed, doc­u­ments and apps opened and key­strokes. Over time it can build a pic­ture of typ­i­cal user behav­iour and then alert when some­one devi­ates.

    “If it’s nor­mal for you to send out 10 emails, type 5,000 key­strokes and be active on a com­put­er for three hours a day, if all of a sud­den you are only active for one hour or typ­ing 1,000 key­strokes, there seems to be a dip in pro­duc­tiv­i­ty,” said Miller.

    “Or if you usu­al­ly touch 10 doc­u­ments a day and print two and sud­den­ly you are touch­ing 500 and print­ing 200 that may mean you’re steal­ing doc­u­ments in prepa­ra­tion of leav­ing the com­pa­ny.”

    Oth­er com­pa­nies, such as Tera­mind, seek to mea­sure dis­trac­tion by look­ing at how much a per­son is switch­ing between appli­ca­tions.

    “If a para­le­gal is writ­ing a doc­u­ment and every few sec­onds is switch­ing to Hipchat, Out­look and Word then there’s an issue that can be resolved by address­ing it with the employ­ee,” said Teramind’s CEO, Isaac Kohen.
    ...
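The baseline-and-deviation idea Miller describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the technique, not any vendor’s actual model: collect a history of daily productivity proxies, then flag any metric that jumps far outside its own historical range.

```python
from statistics import mean, stdev

def deviation_alerts(history, today, threshold=3.0):
    """Flag any metric whose value today sits more than `threshold`
    standard deviations away from that metric's historical mean."""
    alerts = []
    for metric, values in history.items():
        mu, sigma = mean(values), stdev(values)
        if sigma > 0 and abs(today[metric] - mu) / sigma > threshold:
            alerts.append(metric)
    return alerts

# Baseline: roughly 10 documents touched and 2 printed per day.
history = {
    "docs_touched": [9, 10, 11, 10, 10, 9, 11],
    "docs_printed": [2, 2, 1, 2, 3, 2, 2],
}
# The "touching 500 and printing 200" day from the quote above:
today = {"docs_touched": 500, "docs_printed": 200}
print(deviation_alerts(history, today))  # ['docs_touched', 'docs_printed']
```

The same machinery flags the opposite direction too, which is how a “dip in productivity” (1,000 keystrokes instead of 5,000) would surface as an alert.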

    “If a para­le­gal is writ­ing a doc­u­ment and every few sec­onds is switch­ing to Hipchat, Out­look and Word then there’s an issue that can be resolved by address­ing it with the employ­ee”

    If you’re the type of person whose brain works better jumping back and forth between tasks, you’re going to get flagged as unfocused. People with ADHD are going to love the future.
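The distraction metric Teramind’s CEO describes boils down to counting focus changes over time. Here’s a toy sketch under assumed conventions (the event format and any threshold are hypothetical, not Teramind’s actual implementation):

```python
def switch_rate(events, window_seconds):
    """Count application switches per minute, given a chronological list
    of (timestamp_seconds, app_name) focus events."""
    switches = sum(
        1 for (_, prev), (_, cur) in zip(events, events[1:]) if prev != cur
    )
    return switches * 60.0 / window_seconds

# A paralegal bouncing between Word, Hipchat and Outlook every few seconds:
events = [(0, "Word"), (4, "Hipchat"), (8, "Word"), (12, "Outlook"),
          (16, "Word"), (20, "Hipchat"), (24, "Word")]
print(switch_rate(events, window_seconds=24))  # 15.0 switches per minute
```

Note what the metric cannot see: whether the switching is distraction or simply how that person works best, which is exactly the problem for the rapid task-switchers mentioned above.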

    Peo­ple who like to talk in per­son over cof­fee are also going to love the future:

    ...
    Com­pa­nies like Dig­i­tal Rea­son­ing search for more sub­tle indi­ca­tors of pos­si­ble wrong­do­ing, such as con­text switch­ing. This is where one per­son sug­gests mov­ing the dis­cus­sion to encrypt­ed apps like What­sApp or Sig­nal or even tak­ing the con­ver­sa­tion offline, indi­cat­ing that the sub­ject mat­ter is too risky for the cor­po­rate net­work.

    “Now peo­ple know a lot of these sys­tems mon­i­tor­ing com­mu­ni­ca­tions are becom­ing more sophis­ti­cat­ed, they are say­ing, ‘Hey let’s move over to the oth­er app’ or ‘Let’s meet down­stairs for cof­fee’. These are small clues that have sur­faced in pros­e­cu­tions,” said Dig­i­tal Reasoning’s chief prod­uct offi­cer, Marten den Har­ing.
    ...

    So the fact that employees know they’re being monitored is itself being incorporated into more sophisticated algorithms, which assume that employees will try to hide their misbehavior from the panopticon. Of course, employees will inevitably learn which subtle clues the panopticon watches for, which will in turn create a need for algorithms that incorporate even subtler clues. An ever more sophisticated cat to catch an ever more sophisticated mouse. And so on, forever.
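At its simplest, catching the “let’s move over to the other app” clue is phrase matching. The patterns below are hypothetical stand-ins; a system like Digital Reasoning’s would be far more sophisticated, but the sketch shows both the idea and its brittleness:

```python
import re

# Hypothetical cue patterns for "let's take this conversation elsewhere".
SWITCH_CUES = [
    r"\bmove (this|over) to (whatsapp|signal)\b",
    r"\bswitch to (whatsapp|signal)\b",
    r"\b(grab|meet for) (a )?coffee\b",
    r"\btake this offline\b",
]

def channel_switch_score(message: str) -> int:
    """Count how many channel-switch cues appear in one message."""
    lower = message.lower()
    return sum(bool(re.search(pattern, lower)) for pattern in SWITCH_CUES)

print(channel_switch_score("Let's move this to Signal, ok?"))          # 1
print(channel_switch_score("please review the signal processing doc"))  # 0
```

An innocent coffee invitation scores the same as a genuinely evasive one, which is the false-positive side of the cat-and-mouse game: as the cues get subtler, so does the collateral damage.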

    What could pos­si­bly go wrong? Oh yeah, a lot, espe­cial­ly if the assump­tions that go into all these algo­rithms are wrong:

    ...
    And con­nect­ing the dots between a person’s work life and per­son­al life can lead to uncom­fort­able ter­ri­to­ry. One insid­er at a large con­sult­ing firm told the Guardian the com­pa­ny was look­ing into whether it could pre­vent fraud among bankers by look­ing at their Face­book pages. One sce­nario would be a trad­er who had just changed their rela­tion­ship sta­tus from mar­ried to divorce, the expense of which “could put that per­son under pres­sure to com­mit fraud or steal”.

    The insid­er had reser­va­tions about the effec­tive­ness of such a sys­tem.

    “If I were divorced, would I be more like­ly to steal? I don’t think so. It makes assump­tions,” he said, adding, “The more data and tech­nol­o­gy you have with­out an under­ly­ing the­o­ry of how people’s minds work then the eas­i­er it is to jump to con­clu­sions and put peo­ple in the crosshairs who don’t deserve to be.”

    “The more data and tech­nol­o­gy you have with­out an under­ly­ing the­o­ry of how people’s minds work then the eas­i­er it is to jump to con­clu­sions and put peo­ple in the crosshairs who don’t deserve to be.”

    And keep in mind that when your employer’s panopticon predicts you’re going to do something bad in the future, they probably aren’t going to tell you that when they let you go. They’ll just make up some random excuse, much like employers who predict you’re going to come down with an expensive illness probably won’t tell you that either. They’ll just find a reason to let you go. So we can add “misapplied algorithmic assumptions” to the list of potential mystery reasons for suddenly being let go with minimal explanation in the panopticon office of the future: maybe your employer predicts you’re about to get really ill, or maybe some other completely random thing set off the bad-behavior predictive algorithm. There’s a whole range of mystery reasons, so at least you shouldn’t necessarily assume you’re about to become horribly ill when you’re fired. Yay.

    Posted by Pterrafractyl | November 14, 2017, 4:33 pm
  16. It told you so:

    Lis­ten to this pro­gram, and then read this:

    https://www.yahoo.com/news/kill-foster-parents-amazons-alexa-talks-murder-sex-120601149–finance.html

    Have Fun!

    Dave Emory

    Posted by Dave Emory | December 21, 2018, 3:13 pm
I think it is important that you are reporting this kind of information. I can tell you that in my career, the most depressing and unethical work I ever came into proximity with was at Intel. In 2013 they had a video playing in the lobby, just before you enter their cafeteria, advertising a great new development of theirs called ‘Smart Response’. This innovation would allow your computer to act as a good butler, predicting your decisions and serving as what I could only call your ‘confidant.’

    >Imag­ine, your com­put­er as your ‘best bro’, nobody knows you bet­ter. Of course, you can Trust him Right? He’d nev­er Fink on You?

    From working in User Experience, it was clear that there was an effort to get the machines to collect data about your face from your laptop/device camera, as well as your tone of voice, and then use that to interpret your reactions to whatever you may be looking at, altering accordingly the ad content that the pages you visit display. Supposedly they could interpret your general state of mind with a high degree of accuracy just by focusing on the triangular area between your eyes and your mouth.

    >In order for ‘Smart Response’ to work your com­put­er might need to col­lect this data, build a pro­file on you. But that’s OK, it’s your Bud­dy Riight?

    From what I could gather, this seemed to be an outgrowth of a project involving physicist Stephen Hawking. They wanted to be able to build software to interpret him clearly. Once they did, they may have begun applying it generally. What really concerned me about it is the prospect of reverse programming. Once they build these profiles, how would they use them? Would they try to provide content that they could guess an individual would respond to in a certain way?
    Would they have our computers try programming us?

    Posted by Anonymous Now | December 23, 2018, 10:46 pm
  18. https://amp.cnn.com/cnn/2019/08/01/tech/robot-racism-scn-trnd/index.html

    Robot racism? Yes, says a study show­ing humans’ bias­es extend to robots
    By Car­o­line Klein and David Allan, CNN
    Updat­ed 8:37 AM EDT, Thu August 01, 2019

    “. . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

    Posted by Roberto Maldonado | August 1, 2019, 8:05 pm

Post a comment