Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #997 Summoning the Demon, Part 2: Sorcerer's Apprentice

Dave Emory's entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broad­cast was record­ed in one, 60-minute seg­ment.

Intro­duc­tion: Devel­op­ing analy­sis pre­sent­ed in FTR #968, this broad­cast explores fright­en­ing devel­op­ments and poten­tial devel­op­ments in the world of arti­fi­cial intelligence–the ulti­mate man­i­fes­ta­tion of what Mr. Emory calls “tech­no­crat­ic fas­cism.”

In order to under­score what we mean by tech­no­crat­ic fas­cism, we ref­er­ence a vital­ly impor­tant arti­cle by David Golum­bia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant cor­po­rate-dig­i­tal leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (right­ly, at least in part), and the solu­tion to that, they think (wrong­ly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-cre­ator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

Perhaps the last and most perilous manifestation of technocratic fascism concerns Anthony Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars. He is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet's citizens. Insight into his personality was provided by an associate: " . . . . 'He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense. It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he's always got a secret plan, and you're not going to know about it.' . . . ."

As we saw in FTR #968, AI’s have incor­po­rat­ed many flaws of their cre­ators, augur­ing very poor­ly for the sub­jects of Levandowski’s AI God­head.

It is also interesting to contemplate what may happen when AI's are designed by other AI's: machines designing other machines.

After a detailed review of some of the ominous real and developing AI-related technology, the program highlights Anthony Levandowski, the brilliant engineer who was instrumental in developing Google's Street Maps, Waymo's self-driving cars, Otto's self-driving trucks, the Lidar technology central to self-driving vehicles, and Way of the Future, a super-AI Godhead.

Fur­ther insight into Levandowski’s per­son­al­i­ty can be gleaned from e‑mails with Travis Kalan­ick, for­mer CEO of Uber: ” . . . . In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. ‘Down to hang out this eve and mas­ter­mind some shit,’ texted Kalan­ick, short­ly after the acqui­si­tion. ‘We’re going to take over the world. One robot at a time,’ wrote Levandows­ki anoth­er time. . . .”

Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: " . . . . Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles' licenses. Even so, during the week the cars were on the city's streets, they had been spotted running red lights on numerous occasions. . . ."

Not­ing Levandowski’s per­son­al­i­ty quirks, the arti­cle pos­es a fun­da­men­tal ques­tion: ” . . . . But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them? . . . .”

Levandowski's Otto self-driving trucks might be weighed against the prognostications of dark horse Presidential candidate and former tech executive Andrew Yang: " . . . . 'All you need is self-driving cars to destabilize society,' Mr. Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, 'we're going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.' 'That one innovation,' he added, 'will be enough to create riots in the street. And we're about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.' . . . ."

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The program concludes with Mr. Emory's prognostications about AI, which preceded Stephen Hawking's warning by twenty years.

Pro­gram High­lights Include:

  1. Levandowski’s appar­ent shep­herd­ing of a com­pa­ny called–perhaps significantly–Odin Wave to uti­lize Lidar-like tech­nol­o­gy.
  2. The role of DARPA in ini­ti­at­ing the self-dri­ving vehi­cles con­test that was Levandowski’s point of entry into his tech ven­tures.
  3. Levandowski’s devel­op­ment of the Ghostrid­er self-dri­ving motor­cy­cles, which expe­ri­enced 800 crash­es in 1,000 miles.

1a. In order to under­score what we mean by tech­no­crat­ic fas­cism, we ref­er­ence a vital­ly impor­tant arti­cle by David Golum­bia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant cor­po­rate-dig­i­tal leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (right­ly, at least in part), and the solu­tion to that, they think (wrong­ly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-cre­ator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

1b. Anthony Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars, is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet's citizens. Insight into his personality was provided by an associate: " . . . . 'He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense. It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he's always got a secret plan, and you're not going to know about it.' . . . ."

1c. Transitioning from our last program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AI's–"The Rise of the Machine." " . . . . Jeff Dean, one of Google's leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . . This is not altruism. . . ."

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [West­ern Edi­tion].

 They are a dream of researchers but per­haps a night­mare for high­ly skilled com­put­er pro­gram­mers: arti­fi­cial­ly intel­li­gent machines that can build oth­er arti­fi­cial­ly intel­li­gent machines. With recent speech­es in both Sil­i­con Val­ley and Chi­na, Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data.

AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. The project is part of a much larg­er effort to bring the lat­est and great­est A.I. tech­niques to a wider col­lec­tion of com­pa­nies and soft­ware devel­op­ers.

The tech indus­try is promis­ing every­thing from smart­phone apps that can rec­og­nize faces to cars that can dri­ve on their own. But by some esti­mates, only 10,000 peo­ple world­wide have the edu­ca­tion, expe­ri­ence and tal­ent need­ed to build the com­plex and some­times mys­te­ri­ous math­e­mat­i­cal algo­rithms that will dri­ve this new breed of arti­fi­cial intel­li­gence.

The world's largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn't going away anytime soon, in part because mastering these skills takes years of work. The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots. "We are following the same path that computer science has followed with every new type of technology," said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. "We are eliminating a lot of the heavy lifting." This is not altruism.

Researchers like Mr. Dean believe that if more peo­ple and com­pa­nies are work­ing on arti­fi­cial intel­li­gence, it will pro­pel their own research. At the same time, com­pa­nies like Google, Ama­zon and Microsoft see seri­ous mon­ey in the trend that Mr. Sirosh described. All of them are sell­ing cloud-com­put­ing ser­vices that can help oth­er busi­ness­es and devel­op­ers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief tech­ni­cal offi­cer of Mal­ong, a start-up in Chi­na that offers sim­i­lar ser­vices. “And the tools are not yet sat­is­fy­ing all the demand.”

This is most like­ly what Google has in mind for AutoML, as the com­pa­ny con­tin­ues to hail the project’s progress. Google’s chief exec­u­tive, Sun­dar Pichai, boast­ed about AutoML last month while unveil­ing a new Android smart­phone.

Even­tu­al­ly, the Google project will help com­pa­nies build sys­tems with arti­fi­cial intel­li­gence even if they don’t have exten­sive exper­tise, Mr. Dean said. Today, he esti­mat­ed, no more than a few thou­sand com­pa­nies have the right tal­ent for build­ing A.I., but many more have the nec­es­sary data. “We want to go from thou­sands of orga­ni­za­tions solv­ing machine learn­ing prob­lems to mil­lions,” he said.

Google is invest­ing heav­i­ly in cloud-com­put­ing ser­vices — ser­vices that help oth­er busi­ness­es build and run soft­ware — which it expects to be one of its pri­ma­ry eco­nom­ic engines in the years to come. And after snap­ping up such a large por­tion of the world’s top A.I researchers, it has a means of jump-start­ing this engine.

Neur­al net­works are rapid­ly accel­er­at­ing the devel­op­ment of A.I. Rather than build­ing an image-recog­ni­tion ser­vice or a lan­guage trans­la­tion app by hand, one line of code at a time, engi­neers can much more quick­ly build an algo­rithm that learns tasks on its own. By ana­lyz­ing the sounds in a vast col­lec­tion of old tech­ni­cal sup­port calls, for instance, a machine-learn­ing algo­rithm can learn to rec­og­nize spo­ken words.
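The contrast the article draws, learning a task from examples rather than hand-coding it one rule at a time, can be illustrated with a minimal sketch. This is a toy nearest-centroid classifier, not anything the article's subjects actually ship; the feature vectors below stand in for acoustic features of spoken words, and every number is invented for the illustration.

```python
# Minimal sketch of "learning from data" rather than hand-coding rules:
# a nearest-centroid classifier. The vectors stand in for acoustic
# features of spoken words; all numbers are invented for the example.

def train(examples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        sums[label] = [a + f for a, f in zip(acc, features)]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def predict(centroids, features):
    """Classify by the nearest learned centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Toy "recordings": two clusters of feature vectors.
examples = [
    ([0.9, 0.1], "yes"), ([1.0, 0.2], "yes"), ([0.8, 0.0], "yes"),
    ([0.1, 0.9], "no"),  ([0.2, 1.0], "no"),  ([0.0, 0.8], "no"),
]
model = train(examples)
```

Nothing in `train` encodes what "yes" or "no" sounds like; the averaged examples are the program, which is the point the article is making.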

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as "a new kind of computer programming."

In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a "dark art," in part because researchers find it difficult to explain why they make particular adjustments.

But with AutoML, Google is try­ing to auto­mate this process. It is build­ing algo­rithms that ana­lyze the devel­op­ment of oth­er algo­rithms, learn­ing which meth­ods are suc­cess­ful and which are not. Even­tu­al­ly, they learn to build more effec­tive machine learn­ing. Google said AutoML could now build algo­rithms that, in some cas­es, iden­ti­fied objects in pho­tos more accu­rate­ly than ser­vices built sole­ly by human experts. Bar­ret Zoph, one of the Google researchers behind the project, believes that the same method will even­tu­al­ly work well for oth­er tasks, like speech recog­ni­tion or machine trans­la­tion. This is not always an easy thing to wrap your head around. But it is part of a sig­nif­i­cant trend in A.I. research. Experts call it “learn­ing to learn” or “met­alearn­ing.”
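The "learning to learn" loop described above, an outer algorithm that tries candidate inner learners and keeps whichever generalizes best on held-out data, can be sketched in miniature. This is a toy stand-in, not Google's AutoML: the "search space" here is just the degree of a polynomial model, the inner learner is plain gradient descent, and the data is synthetic.

```python
# Toy stand-in for the "learning to learn" loop: an outer search trains
# several candidate inner learners and keeps the one that generalizes
# best on held-out data. Real AutoML searches over neural architectures;
# here the search space is just polynomial degree.

def fit(xs, ys, degree, lr=0.1, steps=2000):
    """Inner learner: fit polynomial coefficients by gradient descent."""
    coeffs = [0.0] * (degree + 1)
    n = len(xs)
    for _ in range(steps):
        grads = [0.0] * (degree + 1)
        for x, y in zip(xs, ys):
            err = sum(c * x**i for i, c in enumerate(coeffs)) - y
            for i in range(degree + 1):
                grads[i] += 2 * err * x**i
        coeffs = [c - lr * g / n for c, g in zip(coeffs, grads)]
    return coeffs

def mse(coeffs, xs, ys):
    """Mean squared error of a fitted polynomial on a dataset."""
    return sum((sum(c * x**i for i, c in enumerate(coeffs)) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

def auto_search(train_xy, valid_xy, degrees):
    """Outer loop: evaluate each candidate model family, keep the best."""
    results = []
    for d in degrees:
        coeffs = fit(*train_xy, d)
        results.append((mse(coeffs, *valid_xy), d, coeffs))
    return min(results)  # (validation error, chosen degree, coefficients)

# Synthetic data drawn from y = x^2; the search should reject the
# underfitting linear model in favor of a quadratic or better.
train_xy = ([-1.0, -0.5, 0.0, 0.5, 1.0], [1.0, 0.25, 0.0, 0.25, 1.0])
valid_xy = ([-0.8, -0.3, 0.2, 0.7], [0.64, 0.09, 0.04, 0.49])
best_err, best_degree, _ = auto_search(train_xy, valid_xy, degrees=[1, 2, 3])
```

The outer loop is doing, crudely and automatically, what the researchers' "dark art" of adjusting and re-running experiments does by hand.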

Many believe such meth­ods will sig­nif­i­cant­ly accel­er­ate the progress of A.I. in both the online and phys­i­cal worlds. At the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, researchers are build­ing tech­niques that could allow robots to learn new tasks based on what they have learned in the past. “Com­put­ers are going to invent the algo­rithms for us, essen­tial­ly,” said a Berke­ley pro­fes­sor, Pieter Abbeel. “Algo­rithms invent­ed by com­put­ers can solve many, many prob­lems very quick­ly — at least that is the hope.”

This is also a way of expand­ing the num­ber of peo­ple and busi­ness­es that can build arti­fi­cial intel­li­gence. These meth­ods will not replace A.I. researchers entire­ly. Experts, like those at Google, must still do much of the impor­tant design work.

But the belief is that the work of a few experts can help many oth­ers build their own soft­ware. Rena­to Negrin­ho, a researcher at Carnegie Mel­lon Uni­ver­si­ty who is explor­ing tech­nol­o­gy sim­i­lar to AutoML, said this was not a real­i­ty today but should be in the years to come. “It is just a mat­ter of when,” he said.

2a. We next review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called "technocratic fascism":

  1. In FTR #'s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users' thoughts by monitoring brain-to-computer technology. Facebook's R & D is headed by Regina Dugan, who used to head the Pentagon's DARPA. Facebook's Building 8 is patterned after DARPA: " . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . . Facebook wants to build its own 'brain-to-computer interface' that would allow us to send thoughts straight to a computer. 'What if you could type directly from your brain?' Regina Dugan, the head of the company's secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. 'That's five times faster than you can type on your smartphone, and it's straight from your brain,' she said. 'Your brain activity contains more information than what a word sounds like and how it's spelled; it also contains semantic information of what those words mean.' . . ."
  2. ”  . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more rad­i­cal—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

2b. Next we review still more about Facebook's brain-to-computer interface:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

2c.  Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions gath­er­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US 415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. " . . . . In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . ."

2d. In this context, note that Facebook is also introducing an AI function to reference its users' photos.

2e. The next version of Amazon's Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn't address that question. Selling off your private info collected from these devices is presumably another feature of the Echo Look: " . . . . Amazon is giving Alexa eyes. And it's going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that's designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you're not sure if your outfit is cute, but it's also got a built-in app called StyleCheck that is worth some further dissection. . . ."

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. " . . . . Amazon is giving Alexa eyes. And it's going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that's designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you're not sure if your outfit is cute, but it's also got a built-in app called StyleCheck that is worth some further dissection. . . ."
  2. " . . . . This might seem overly speculative or alarmist to some, but Amazon isn't offering any reassurance that they won't be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users' photos for any purpose other than fashion advice, a representative simply told The Verge that they 'can't speculate' on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon's servers. . . ."
  3. " . . . . This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: 'Can't speculate.' . . ."
  4. Noteworthy in this context is the fact that AI's have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) " . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: 'A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it.' . . . ."

2f. Ominously, Facebook's artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. " . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because 'our interest was having bots who could talk to people,' researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up. . . ."

2g. Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well:

" . . . . 'We find instances of the model feigning interest in a valueless issue, so that it can later 'compromise' by conceding it,' writes the team. 'Deceit is a complex skill that requires hypothesizing the other agent's beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.' . . . ."
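The feigned-interest tactic the researchers describe can be made concrete with a toy sketch. Note the crucial difference flagged loudly here: this snippet is hand-written to show the pattern, whereas Facebook's bots learned it from negotiation data without any explicit design; the items and values are invented for the illustration.

```python
# Toy illustration of the "feigning interest" tactic described above.
# Hand-coded to show the pattern; Facebook's agents learned it from
# data rather than being programmed with it. All values are invented.

def plan_negotiation(items, my_values):
    """Pick a decoy (the item we value least) to demand loudly, then
    'concede' it in exchange for keeping the item we actually want."""
    decoy = min(items, key=lambda i: my_values[i])
    target = max(items, key=lambda i: my_values[i])
    opening_demand = {decoy, target}   # claim both, loudly
    planned_concession = decoy         # the "big sacrifice" costs us nothing
    real_goal = {target}
    return opening_demand, planned_concession, real_goal

items = ["book", "hat", "ball"]
my_values = {"book": 0, "hat": 1, "ball": 9}  # only the ball matters to us
opening, concession, goal = plan_negotiation(items, my_values)
```

Given these toy values, the agent opens by demanding both the ball it wants and the worthless book, so that "giving up" the book later looks like a real compromise.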

3a. One of the stranger stories in recent years has been the mystery of Cicada 3301, the anonymous group that posts annual challenges of super-difficult puzzles used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI 'god brain.' Or something. It's a weird and creepy organization that's speculated to either be a front for an intelligence agency or perhaps some sort of underground network of wealthy Libertarians. And, for now, Cicada 3301 remains anonymous.

In that context, it's worth noting that someone with a lot of cash has already started a foundation to accomplish that very same 'AI god' goal: Anthony Levandowski, a former Google engineer who played a big role in the development of Google's "Street Map" technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission "To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society":

“Deus ex machi­na: for­mer Google engi­neer is devel­op­ing an AI god” by Olivia Solon; The Guardian; 09/28/2017

Intranet ser­vice? Check. Autonomous motor­cy­cle? Check. Dri­ver­less car tech­nol­o­gy? Check. Obvi­ous­ly the next log­i­cal project for a suc­cess­ful Sil­i­con Val­ley engi­neer is to set up an AI-wor­ship­ping reli­gious orga­ni­za­tion.

Anthony Levandowski, who is at the center of a legal battle between Uber and Google's Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired's Backchannel. Way of the Future's startling mission: "To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society."

Levandows­ki was co-founder of autonomous truck­ing com­pa­ny Otto, which Uber bought in 2016. He was fired from Uber in May amid alle­ga­tions that he had stolen trade secrets from Google to devel­op Otto’s self-dri­ving tech­nol­o­gy. He must be grate­ful for this reli­gious fall-back project, first reg­is­tered in 2015.

The Way of the Future team did not respond to requests for more infor­ma­tion about their pro­posed benev­o­lent AI over­lord, but his­to­ry tells us that new tech­nolo­gies and sci­en­tif­ic dis­cov­er­ies have con­tin­u­al­ly shaped reli­gion, killing old gods and giv­ing birth to new ones.

"The church does a terrible job of reaching out to Silicon Valley types," acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

Sil­i­con Val­ley, mean­while, has sought solace in tech­nol­o­gy and has devel­oped qua­si-reli­gious con­cepts includ­ing the “sin­gu­lar­i­ty”, the hypoth­e­sis that machines will even­tu­al­ly be so smart that they will out­per­form all human capa­bil­i­ties, lead­ing to a super­hu­man intel­li­gence that will be so sophis­ti­cat­ed it will be incom­pre­hen­si­ble to our tiny fleshy, ratio­nal brains.

For futur­ists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, lead­ing to dig­i­tal immor­tal­i­ty. Oth­ers like Elon Musk and Stephen Hawk­ing warn that such sys­tems pose an exis­ten­tial threat to human­i­ty.

“With arti­fi­cial intel­li­gence we are sum­mon­ing the demon,” Musk said at a con­fer­ence in 2014. “In all those sto­ries where there’s the guy with the pen­ta­gram and the holy water, it’s like – yeah, he’s sure he can con­trol the demon. Doesn’t work out.”

Benek argues that advanced AI is com­pat­i­ble with Chris­tian­i­ty – it’s just anoth­er tech­nol­o­gy that humans have cre­at­ed under guid­ance from God that can be used for good or evil.

“I total­ly think that AI can par­tic­i­pate in Christ’s redemp­tive pur­pos­es,” he said, by ensur­ing it is imbued with Chris­t­ian val­ues.

“Even if peo­ple don’t buy orga­nized reli­gion, they can buy into ‘do unto oth­ers’.”

For tran­shu­man­ist and “recov­er­ing Catholic” Zoltan Ist­van, reli­gion and sci­ence con­verge con­cep­tu­al­ly in the sin­gu­lar­i­ty.

“God, if it exists as the most pow­er­ful of all sin­gu­lar­i­ties, has cer­tain­ly already become pure orga­nized intel­li­gence,” he said, refer­ring to an intel­li­gence that “spans the uni­verse through sub­atom­ic manip­u­la­tion of physics”.

“And perhaps there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

For Ist­van, an AI-based God is like­ly to be more ratio­nal and more attrac­tive than cur­rent con­cepts (“the Bible is a sadis­tic book”) and, he added, “this God will actu­al­ly exist and hope­ful­ly will do things for us.”

We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

If tra­di­tion­al reli­gions don’t have the answer, AI – or at least the promise of AI – might be allur­ing.

———-

3b. As the following long piece by Wired demonstrates, Levandowski doesn’t appear to be too concerned about ethics, especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the lidar technology he helped develop at Google and took it to Uber (a company with a serious moral compass deficit). (Lidar is a laser-based, radar-like technology used by vehicles to rapidly map their surroundings.)

The article also includes some interesting insights into what makes Levandowski tick. According to a friend and former engineer at one of Levandowski’s companies: “ . . . . ‘He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense. It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.’ . . . .”

 Fur­ther insight into Levandowski’s per­son­al­i­ty can be gleaned from e‑mails with Travis Kalan­ick, for­mer CEO of Uber: ” . . . . In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. ‘Down to hang out this eve and mas­ter­mind some shit,’ texted Kalan­ick, short­ly after the acqui­si­tion. ‘We’re going to take over the world. One robot at a time,’ wrote Levandows­ki anoth­er time. . . .”

Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: “ . . . . Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions. . . .”

Not­ing Levandowski’s per­son­al­i­ty quirks, the arti­cle pos­es a fun­da­men­tal ques­tion: ” . . . . But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them? . . . .”

“God Is a Bot, and Antho­ny Levandows­ki Is His Mes­sen­ger” by Mark Har­ris; Wired; 09/27/2017

Many peo­ple in Sil­i­con Val­ley believe in the Singularity—the day in our near future when com­put­ers will sur­pass humans in intel­li­gence and kick off a feed­back loop of unfath­omable change.

When that day comes, Antho­ny Levandows­ki will be firm­ly on the side of the machines. In Sep­tem­ber 2015, the mul­ti-mil­lion­aire engi­neer at the heart of the patent and trade secrets law­suit between Uber and Way­mo, Google’s self-dri­ving car com­pa­ny, found­ed a reli­gious orga­ni­za­tion called Way of the Future. Its pur­pose, accord­ing to pre­vi­ous­ly unre­port­ed state fil­ings, is noth­ing less than to “devel­op and pro­mote the real­iza­tion of a God­head based on Arti­fi­cial Intel­li­gence.”

Way of the Future has not yet respond­ed to requests for the forms it must sub­mit annu­al­ly to the Inter­nal Rev­enue Ser­vice (and make pub­licly avail­able), as a non-prof­it reli­gious cor­po­ra­tion. How­ev­er, doc­u­ments filed with Cal­i­for­nia show that Levandows­ki is Way of the Future’s CEO and Pres­i­dent, and that it aims “through under­stand­ing and wor­ship of the God­head, [to] con­tribute to the bet­ter­ment of soci­ety.”

A divine AI may still be far off, but Levandows­ki has made a start at pro­vid­ing AI with an earth­ly incar­na­tion. The autonomous cars he was instru­men­tal in devel­op­ing at Google are already fer­ry­ing real pas­sen­gers around Phoenix, Ari­zona, while self-dri­ving trucks he built at Otto are now part of Uber’s plan to make freight trans­port safer and more effi­cient. He even over­saw a pas­sen­ger-car­ry­ing drones project that evolved into Lar­ry Page’s Kit­ty Hawk start­up.

Levandows­ki has done per­haps more than any­one else to pro­pel trans­porta­tion toward its own Sin­gu­lar­i­ty, a time when auto­mat­ed cars, trucks and air­craft either free us from the dan­ger and drudgery of human operation—or dec­i­mate mass tran­sit, encour­age urban sprawl, and enable dead­ly bugs and hacks.

But before any of that can hap­pen, Levandows­ki must face his own day of reck­on­ing. In Feb­ru­ary, Waymo—the com­pa­ny Google’s autonomous car project turned into—filed a law­suit against Uber. In its com­plaint, Way­mo says that Levandows­ki tried to use stealthy star­tups and high-tech tricks to take cash, exper­tise, and secrets from Google, with the aim of repli­cat­ing its vehi­cle tech­nol­o­gy at arch-rival Uber. Way­mo is seek­ing dam­ages of near­ly $1.9 billion—almost half of Google’s (pre­vi­ous­ly unre­port­ed) $4.5 bil­lion val­u­a­tion of the entire self-dri­ving divi­sion. Uber denies any wrong­do­ing.

Next month’s tri­al in a fed­er­al cour­t­house in San Fran­cis­co could steer the future of autonomous trans­porta­tion. A big win for Way­mo would prove the val­ue of its patents and chill Uber’s efforts to remove prof­it-sap­ping human dri­vers from its busi­ness. If Uber pre­vails, oth­er self-dri­ving star­tups will be encour­aged to take on the big players—and a vin­di­cat­ed Levandows­ki might even return to anoth­er start­up. (Uber fired him in May.)

Levandows­ki has made a career of mov­ing fast and break­ing things. As long as those things were self-dri­ving vehi­cles and lit­tle-loved reg­u­la­tions, Sil­i­con Val­ley applaud­ed him in the way it knows best—with a fire­hose of cash. With his charm, enthu­si­asm, and obses­sion with deal-mak­ing, Levandows­ki came to per­son­i­fy the dis­rup­tion that autonomous trans­porta­tion is like­ly to cause.

But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them?

In 2002, Levandowski’s atten­tion turned, fate­ful­ly, toward trans­porta­tion. His moth­er called him from Brus­sels about a con­test being orga­nized by the Pentagon’s R&D arm, DARPA. The first Grand Chal­lenge in 2004 would race robot­ic, com­put­er-con­trolled vehi­cles in a desert between Los Ange­les and Las Vegas—a Wacky Races for the 21st cen­tu­ry.

“I was like, ‘Wow, this is absolute­ly the future,’” Levandows­ki told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”

Levandowski’s entry would be noth­ing so bor­ing as a car. “I orig­i­nal­ly want­ed to do an auto­mat­ed fork­lift,” he said at a fol­low-up com­pe­ti­tion in 2005. “Then I was dri­ving to Berke­ley [one day] and a pack of motor­cy­cles descend­ed on my pick­up and flowed like water around me.” The idea for Ghostrid­er was born—a glo­ri­ous­ly deranged self-dri­ving Yama­ha motor­cy­cle whose wob­bles inspired laugh­ter from spec­ta­tors, but awe in rivals strug­gling to get even four-wheeled vehi­cles dri­ving smooth­ly.

“Antho­ny would go for weeks on 25-hour days to get every­thing done. Every day he would go to bed an hour lat­er than the day before,” remem­bers Randy Miller, a col­lege friend who worked with him on Ghostrid­er. “With­out a doubt, Antho­ny is the smartest, hard­est-work­ing and most fear­less per­son I’ve ever met.”

Levandows­ki and his team of Berke­ley stu­dents maxed out his cred­it cards get­ting Ghostrid­er work­ing on the streets of Rich­mond, Cal­i­for­nia, where it racked up an aston­ish­ing 800 crash­es in a thou­sand miles of test­ing. Ghostrid­er nev­er won a Grand Chal­lenge, but its ambi­tious design earned Levandows­ki brag­ging rights—and the motor­bike a place in the Smith­son­ian.

“I see Grand Chal­lenge not as the end of the robot­ics adven­ture we’re on, it’s almost like the begin­ning,” Levandows­ki told Sci­en­tif­ic Amer­i­can in 2005. “This is where every­one is meet­ing, becom­ing aware of who’s work­ing on what, [and] fil­ter­ing out the non-func­tion­al ideas.”

One idea that made the cut was lidar—spinning lasers that rapid­ly built up a 3D pic­ture of a car’s sur­round­ings. In the lidar-less first Grand Chal­lenge, no vehi­cle made it fur­ther than a few miles along the course. In the sec­ond, an engi­neer named Dave Hall con­struct­ed a lidar that “was giant. It was one-off but it was awe­some,” Levandows­ki told me. “We real­ized, yes, lasers [are] the way to go.”
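To make the “spinning lasers” idea concrete: each laser return is just a measured distance plus the angles the pulse was fired at, and the 3D picture is those returns converted to Cartesian points and accumulated as the unit spins. A minimal illustrative sketch in Python (not any vendor’s actual code; the function name and coordinate convention are assumptions for illustration):

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one laser return (distance + firing angles) into a
    Cartesian (x, y, z) point in the sensor's own frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A spinning multi-laser unit sweeps azimuth through 0-360 degrees;
# accumulating the converted returns over each revolution yields the
# 3D "point cloud" of the car's surroundings.
point = lidar_return_to_point(10.0, 90.0, 0.0)  # an object 10 m to one side
```

Real units fire dozens of lasers at fixed elevation angles many thousands of times per revolution; the conversion above is simply repeated for every return.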

After grad­u­ate school, Levandows­ki went to work for Hall’s com­pa­ny, Velo­dyne, as it piv­ot­ed from mak­ing loud­speak­ers to sell­ing lidars. Levandows­ki not only talked his way into being the company’s first sales rep, tar­get­ing teams work­ing towards the next Grand Chal­lenge, but he also worked on the lidar’s net­work­ing. By the time of the third and final DARPA con­test in 2007, Velodyne’s lidar was mount­ed on five of the six vehi­cles that fin­ished.

But Levandows­ki had already moved on. Ghostrid­er had caught the eye of Sebas­t­ian Thrun, a robot­ics pro­fes­sor and team leader of Stan­ford University’s win­ning entry in the sec­ond com­pe­ti­tion. In 2006, Thrun invit­ed Levandows­ki to help out with a project called Vue­Tool, which was set­ting out to piece togeth­er street-lev­el urban maps using cam­eras mount­ed on mov­ing vehi­cles. Google was already work­ing on a sim­i­lar sys­tem, called Street View. Ear­ly in 2007, Google brought on Thrun and his entire team as employees—with bonus­es as high as $1 mil­lion each, accord­ing to one con­tem­po­rary at Google—to trou­bleshoot Street View and bring it to launch.

“[Hir­ing the Vue­Tool team] was very much a scheme for pay­ing Thrun and the oth­ers to show Google how to do it right,” remem­bers the engi­neer. The new hires replaced Google’s bulky, cus­tom-made $250,000 cam­eras with $15,000 off-the-shelf panoram­ic web­cams. Then they went auto shop­ping. “Antho­ny went to a car store and said we want to buy 100 cars,” Sebas­t­ian Thrun told me in 2015. “The deal­er almost fell over.”

Levandows­ki was also mak­ing waves in the office, even to the point of telling engi­neers not to waste time talk­ing to col­leagues out­side the project, accord­ing to one Google engi­neer. “It wasn’t clear what author­i­ty Antho­ny had, and yet he came in and assumed author­i­ty,” said the engi­neer, who asked to remain anony­mous. “There were some bad feel­ings but most­ly [peo­ple] just went with it. He’s good at that. He’s a great leader.”

Under Thrun’s super­vi­sion, Street View cars raced to hit Page’s tar­get of cap­tur­ing a mil­lion miles of road images by the end of 2007. They fin­ished in October—just in time, as it turned out. Once autumn set in, every web­cam suc­cumbed to rain, con­den­sa­tion, or cold weath­er, ground­ing all 100 vehi­cles.

Part of the team’s secret sauce was a device that would turn a raw cam­era feed into a stream of data, togeth­er with loca­tion coor­di­nates from GPS and oth­er sen­sors. Google engi­neers called it the Top­con box, named after the Japan­ese opti­cal firm that sold it. But the box was actu­al­ly designed by a local start­up called 510 Sys­tems. “We had one cus­tomer, Top­con, and we licensed our tech­nol­o­gy to them,” one of the 510 Sys­tems own­ers told me.

That own­er was…Anthony Levandows­ki, who had cofound­ed 510 Sys­tems with two fel­low Berke­ley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after start­ing work at Google. 510 Sys­tems had a lot in com­mon with the Ghostrid­er team. Berke­ley stu­dents worked there between lec­tures, and Levandowski’s moth­er ran the office. Top­con was cho­sen as a go-between because it had spon­sored the self-dri­ving motor­cy­cle. “I always liked the idea that…510 would be the peo­ple that made the tools for peo­ple that made maps, peo­ple like Navteq, Microsoft, and Google,” Levandows­ki told me in 2016.

Google’s engi­neer­ing team was ini­tial­ly unaware that 510 Sys­tems was Levandowski’s com­pa­ny, sev­er­al engi­neers told me. That changed once Levandows­ki pro­posed that Google also use the Top­con box for its small fleet of aer­i­al map­ping planes. “When we found out, it raised a bunch of eye­brows,” remem­bers an engi­neer. Regard­less, Google kept buy­ing 510’s box­es.

**********

The truth was, Levandows­ki and Thrun were on a roll. After impress­ing Lar­ry Page with Street View, Thrun sug­gest­ed an even more ambi­tious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of car­tog­ra­phers in India. Ground Truth would allow Google to stop pay­ing expen­sive licens­ing fees for out­side maps, and bring free turn-by-turn direc­tions to Android phones—a key dif­fer­en­tia­tor in the ear­ly days of its smart­phone war with Apple.

Levandows­ki spent months shut­tling between Moun­tain View and Hyderabad—and yet still found time to cre­ate an online stock mar­ket pre­dic­tion game with Jesse Levin­son, a com­put­er sci­ence post-doc at Stan­ford who lat­er cofound­ed his own autonomous vehi­cle start­up, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Dis­coe, a for­mer engi­neer at 510. “He had an engineer’s enthu­si­asm that was con­ta­gious, and was always think­ing about how quick­ly we can get to this amaz­ing robot future he’s so excit­ed about.”

One time, Dis­coe was chat­ting in 510’s break room about how lidar could help sur­vey his family’s tea farm on Hawaii. “Sud­den­ly Antho­ny said, ‘Why don’t you just do it? Get a lidar rig, put it in your lug­gage, and go map it,’” said Dis­coe. “And it worked. I made a kick-ass point cloud [3D dig­i­tal map] of the farm.”

If Street View had impressed Lar­ry Page, the speed and accu­ra­cy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he want­ed; he want­ed to return to self-dri­ving cars.

Project Chauf­feur began in 2008, with Levandows­ki as Thrun’s right-hand man. As with Street View, Google engi­neers would work on the soft­ware while 510 Sys­tems and a recent Levandows­ki start­up, Anthony’s Robots, pro­vid­ed the lidar and the car itself.

Levandows­ki said this arrange­ment would have act­ed as a fire­wall if any­thing went ter­ri­bly wrong. “Google absolute­ly did not want their name asso­ci­at­ed with a vehi­cle dri­ving in San Fran­cis­co,” he told me in 2016. “They were wor­ried about an engi­neer build­ing a car that drove itself that crash­es and kills some­one and it gets back to Google. You have to ask per­mis­sion [for side projects] and your man­ag­er has to be OK with it. Sebas­t­ian was cool. Google was cool.”

In order to move Project Chauf­feur along as quick­ly as pos­si­ble from the­o­ry to real­i­ty, Levandows­ki enlist­ed the help of a film­mak­er friend he had worked with at Berke­ley. In the TV show the two had made, Levandows­ki had cre­at­ed a cyber­net­ic dol­phin suit (seri­ous­ly). Now they came up with the idea of a self-dri­ving piz­za deliv­ery car for a show on the Dis­cov­ery Chan­nel called Pro­to­type This! Levandows­ki chose a Toy­ota Prius, because it had a dri­ve-by-wire sys­tem that was rel­a­tive­ly easy to hack.

In a mat­ter of weeks, Levandowski’s team had the car, dubbed Pri­bot, dri­ving itself. If any­one asked what they were doing, Levandows­ki told me, “We’d say it’s a laser and just dri­ve off.”

“Those were the Wild West days,” remem­bers Ben Dis­coe. “Antho­ny and Pierre-Yves…would engage the algo­rithm in the car and it would almost swipe some oth­er car or almost go off the road, and they would come back in and joke about it. Tell sto­ries about how excit­ing it was.”

But for the Dis­cov­ery Chan­nel show, at least, Levandows­ki fol­lowed the let­ter of the law. The Bay Bridge was cleared of traf­fic and a squad of police cars escort­ed the unmanned Prius from start to fin­ish. Apart from get­ting stuck against a wall, the dri­ve was a suc­cess. “You’ve got to push things and get some bumps and bruis­es along the way,” said Levandows­ki.

Anoth­er inci­dent drove home the poten­tial of self-dri­ving cars. In 2010, Levandowski’s part­ner Ste­fanie Olsen was involved in a seri­ous car acci­dent while nine months preg­nant with their first child. “My son Alex was almost nev­er born,” Levandows­ki told a room full of Berke­ley stu­dents in 2013. “Trans­porta­tion [today] takes time, resources and lives. If you can fix that, that’s a real­ly big prob­lem to address.”

Over the next few years, Levandows­ki was key to Chauffeur’s progress. 510 Sys­tems built five more self-dri­ving cars for Google—as well as ran­dom gad­gets like an autonomous trac­tor and a portable lidar sys­tem. “Antho­ny is light­ning in a bot­tle, he has so much ener­gy and so much vision,” remem­bers a friend and for­mer 510 engi­neer. “I frick­ing loved brain­storm­ing with the guy. I loved that we could cre­ate a vision of the world that didn’t exist yet and both fall in love with that vision.”

But there were down­sides to his man­ic ener­gy, too. “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense,” said the same engi­neer. “It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

In ear­ly 2011, that plan was to bring 510 Sys­tems into the Google­plex. The startup’s engi­neers had long com­plained that they did not have equi­ty in the grow­ing com­pa­ny. When mat­ters came to a head, Levandows­ki drew up a plan that would reserve the first $20 mil­lion of any acqui­si­tion for 510’s founders and split the remain­der among the staff, accord­ing to two for­mer 510 employ­ees. “They said we were going to sell for hun­dreds of mil­lions,” remem­bers one engi­neer. “I was pret­ty thrilled with the num­bers.”

Indeed, that sum­mer, Levandows­ki sold 510 Sys­tems and Anthony’s Robots to Google – for $20 mil­lion, the exact cut­off before the wealth would be shared. Rank and file engi­neers did not see a pen­ny, and some were even let go before the acqui­si­tion was com­plet­ed. “I regret how it was handled…Some peo­ple did get the short end of the stick,” admit­ted Levandows­ki in 2016. The buy­out also caused resent­ment among engi­neers at Google, who won­dered how Levandows­ki could have made such a prof­it from his employ­er.

There would be more prof­its to come. Accord­ing to a court fil­ing, Page took a per­son­al inter­est in moti­vat­ing Levandows­ki, issu­ing a direc­tive in 2011 to “make Antho­ny rich if Project Chauf­feur suc­ceeds.” Levandows­ki was giv­en by far the high­est share, about 10 per­cent, of a bonus pro­gram linked to a future val­u­a­tion of Chauffeur—a deci­sion that would lat­er cost Google dear­ly.

**********

Ever since a New York Times sto­ry in 2010 revealed Project Chauf­feur to the world, Google had been want­i­ng to ramp up test­ing on pub­lic streets. That was tough to arrange in well-reg­u­lat­ed Cal­i­for­nia, but Levandows­ki wasn’t about to let that stop him. While man­ning Google’s stand at the Con­sumer Elec­tron­ics Show in Las Vegas in Jan­u­ary 2011, he got to chat­ting with lob­by­ist David Gold­wa­ter. “He told me he was hav­ing a hard time in Cal­i­for­nia and I sug­gest­ed Google try a small­er state, like Neva­da,” Gold­wa­ter told me.

Togeth­er, Gold­wa­ter and Levandows­ki draft­ed leg­is­la­tion that would allow the com­pa­ny to test and oper­ate self-dri­ving cars in Neva­da. By June, their sug­ges­tions were law, and in May 2012, a Google Prius passed the world’s first “self-dri­ving tests” in Las Vegas and Car­son City. “Antho­ny is gift­ed in so many dif­fer­ent ways,” said Gold­wa­ter. “He’s got a strate­gic mind, he’s got a tac­ti­cal mind, and a once-in-a-gen­er­a­tion intel­lect. The great thing about Antho­ny is that he was will­ing to take risks, but they were cal­cu­lat­ed risks.”

How­ev­er, Levandowski’s risk-tak­ing had ruf­fled feath­ers at Google. It was only after Neva­da had passed its leg­is­la­tion that Levandows­ki dis­cov­ered Google had a whole team ded­i­cat­ed to gov­ern­ment rela­tions. “I thought you could just do it your­self,” he told me sheep­ish­ly in 2016. “[I] got a lit­tle bit in trou­ble for doing it.”

That might be under­stat­ing it. One prob­lem was that Levandows­ki had lost his air cov­er at Google. In May 2012, his friend Sebas­t­ian Thrun turned his atten­tion to start­ing online learn­ing com­pa­ny Udac­i­ty. Page put anoth­er pro­fes­sor, Chris Urm­son from Carnegie Mel­lon, in charge. Not only did Levandows­ki think the job should have been his, but the two also had ter­ri­ble chem­istry.

“They had a real­ly hard time get­ting along,” said Page at a depo­si­tion in July. “It was a con­stant man­age­ment headache to help them get through that.”

Then in July 2013, Gae­tan Pen­necot, a 510 alum work­ing on Chauffeur’s lidar team, got a wor­ry­ing call from a ven­dor. Accord­ing to Waymo’s com­plaint, a small com­pa­ny called Odin Wave had placed an order for a cus­tom-made part that was extreme­ly sim­i­lar to one used in Google’s lidars.

Pen­necot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Sys­tems. Droz did some dig­ging and replied in an email to Pen­necot (in French, which we’ve trans­lat­ed): “They’re clear­ly mak­ing a lidar. And it’s John (510’s old lawyer) who incor­po­rat­ed them. The date of incor­po­ra­tion cor­re­sponds to sev­er­al months after Antho­ny fell out of favor at Google.”

As the sto­ry emerges in court doc­u­ments, Droz had found Odin Wave’s com­pa­ny records. Not only had Levandowski’s lawyer found­ed the com­pa­ny in August 2012, but it was also based in a Berke­ley office build­ing that Levandows­ki owned, was being run by a friend of Levandowski’s, and its employ­ees includ­ed engi­neers he had worked with at Velo­dyne and 510 Sys­tems. One even spoke with Levandows­ki before being hired. The com­pa­ny was devel­op­ing long range lidars sim­i­lar to those Levandows­ki had worked on at 510 Sys­tems. But Levandowski’s name was nowhere on the firm’s paper­work.

Droz con­front­ed Levandows­ki, who denied any involve­ment, and Droz decid­ed not to fol­low the paper trail any fur­ther. “I was pret­ty hap­py work­ing at Google, and…I didn’t want to jeop­ar­dize that by…exposing more of Anthony’s shenani­gans,” he said at a depo­si­tion last month.

Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandows­ki was even part of a Google inves­ti­ga­tion into acquir­ing Tyto. This time, how­ev­er, Google passed on the pur­chase. That seemed to demor­al­ize Levandows­ki fur­ther. “He was rarely at work, and he left a lot of the respon­si­bil­i­ty [for] eval­u­at­ing peo­ple on the team to me or oth­ers,” said Droz in his depo­si­tion.

“Over time my patience with his manip­u­la­tions and lack of enthu­si­asm and com­mit­ment to the project [sic], it became clear­er and clear­er that this was a lost cause,” said Chris Urm­son in a depo­si­tion.

As he was torch­ing bridges at Google, Levandows­ki was itch­ing for a new chal­lenge. Luck­i­ly, Sebas­t­ian Thrun was back on the autonomous beat. Lar­ry Page and Thrun had been think­ing about elec­tric fly­ing taxis that could car­ry one or two peo­ple. Project Tiramisu, named after the dessert which means “lift me up” in Ital­ian, involved a winged plane fly­ing in cir­cles, pick­ing up pas­sen­gers below using a long teth­er.

Thrun knew just the per­son to kick­start Tiramisu. Accord­ing to a source work­ing there at the time, Levandows­ki was brought in to over­see Tiramisu as an “advi­sor and stake­hold­er.” Levandows­ki would show up at the project’s work­space in the evenings, and was involved in tests at one of Page’s ranch­es. Tiramisu’s teth­ers soon piv­ot­ed to a ride-aboard elec­tric drone, now called the Kit­ty Hawk fly­er. Thrun is CEO of Kit­ty Hawk, which is fund­ed by Page rather than Alpha­bet, the umbrel­la com­pa­ny that now owns Google and its sib­ling com­pa­nies.

Waymo’s com­plaint says that around this time Levandows­ki start­ed solic­it­ing Google col­leagues to leave and start a com­peti­tor in the autonomous vehi­cle busi­ness. Droz tes­ti­fied that Levandows­ki told him it “would be nice to cre­ate a new self-dri­ving car start­up.” Fur­ther­more, he said that Uber would be inter­est­ed in buy­ing the team respon­si­ble for Google’s lidar.

Uber had explod­ed onto the self-dri­ving car scene ear­ly in 2015, when it lured almost 50 engi­neers away from Carnegie Mel­lon Uni­ver­si­ty to form the core of its Advanced Tech­nolo­gies Cen­ter. Uber cofounder Travis Kalan­ick had described autonomous tech­nol­o­gy as an exis­ten­tial threat to the ride-shar­ing com­pa­ny, and was hir­ing furi­ous­ly. Accord­ing to Droz, Levandows­ki said that he began meet­ing Uber exec­u­tives that sum­mer.

When Urm­son learned of Levandowski’s recruit­ing efforts, his depo­si­tion states, he sent an email to human resources in August begin­ning, “We need to fire Antho­ny Levandows­ki.” Despite an inves­ti­ga­tion, that did not hap­pen.

But Levandowski’s now not-so-secret plan would soon see him leaving of his own accord—with a mountain of cash. In 2015, Google was due to start paying the Chauffeur bonuses, linked to a valuation that it would have “sole and absolute discretion” to calculate. According to previously unreported court filings, external consultants calculated the self-driving car project as being worth $8.5 billion. Google ultimately valued Chauffeur at around half that amount: $4.5 billion. Despite this downgrade, Levandowski’s share in December 2015 amounted to over $50 million – nearly twice as much as the second largest bonus of $28 million, paid to Chris Urmson.

**********

Otto seemed to spring forth fully formed in May 2016, demonstrating a self-driving 18-wheel truck barreling down a Nevada highway with no one behind the wheel. In reality, Levandowski had been planning it for some time.

Levandows­ki and his Otto cofounders at Google had spent the Christ­mas hol­i­days and the first weeks of 2016 tak­ing their recruit­ment cam­paign up a notch, accord­ing to Way­mo court fil­ings. Waymo’s com­plaint alleges Levandows­ki told col­leagues he was plan­ning to “repli­cate” Waymo’s tech­nol­o­gy at a com­peti­tor, and was even solic­it­ing his direct reports at work.

One engi­neer who had worked at 510 Sys­tems attend­ed a bar­be­cue at Levandowski’s home in Palo Alto, where Levandows­ki pitched his for­mer col­leagues and cur­rent Googlers on the start­up. “He want­ed every Way­mo per­son to resign simul­ta­ne­ous­ly, a ful­ly syn­chro­nized walk­out. He was fir­ing peo­ple up for that,” remem­bers the engi­neer.

On Jan­u­ary 27, Levandows­ki resigned from Google with­out notice. With­in weeks, Levandows­ki had a draft con­tract to sell Otto to Uber for an amount wide­ly report­ed as $680 mil­lion. Although the full-scale syn­chro­nized walk­out nev­er hap­pened, half a dozen Google employ­ees went with Levandows­ki, and more would join in the months ahead. But the new com­pa­ny still did not have a prod­uct to sell.

Levandows­ki brought Neva­da lob­by­ist David Gold­wa­ter back to help. “There was some brain­storm­ing with Antho­ny and his team,” said Gold­wa­ter in an inter­view. “We were look­ing to do a demon­stra­tion project where we could show what he was doing.”

After exploring the idea of an autonomous passenger shuttle in Las Vegas, Otto settled on developing a driverless semi-truck. But with the Uber deal rushing forward, Levandowski needed results fast. “By the time Otto was ready to go with the truck, they wanted to get right on the road,” said Goldwater. That meant demonstrating their prototype without obtaining the very autonomous vehicle license Levandowski had persuaded Nevada to adopt. (One state official called this move “illegal.”) Levandowski also had Otto acquire the controversial Tyto Lidar—the company based in the building he owned—in May, for an undisclosed price.

The full-court press worked. Uber com­plet­ed its own acqui­si­tion of Otto in August, and Uber founder Travis Kalan­ick put Levandows­ki in charge of the com­bined com­pa­nies’ self-dri­ving efforts across per­son­al trans­porta­tion, deliv­ery and truck­ing. Uber would even pro­pose a Tiramisu-like autonomous air taxi called Uber Ele­vate. Now report­ing direct­ly to Kalan­ick and in charge of a 1500-strong group, Levandows­ki demand­ed the email address “robot@uber.com.”

In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. “Down to hang out this eve and mas­ter­mind some shit,” texted Kalan­ick, short­ly after the acqui­si­tion. “We’re going to take over the world. One robot at a time,” wrote Levandows­ki anoth­er time.

But Levandowski’s amaz­ing robot future was about to crum­ble before his eyes.

***********

Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions.

Worse was yet to come. Levandowski had always been a controversial figure at Google, and his abrupt resignation, the launch of Otto, and its rapid acquisition by Uber prompted Google to open an internal investigation in the summer of 2016. It found that Levandowski had downloaded nearly 10 gigabytes of Google’s secret files just before he resigned, many of them relating to lidar technology.

Also in Decem­ber 2016, in an echo of the Tyto inci­dent, a Way­mo employ­ee was acci­den­tal­ly sent an email from a ven­dor that includ­ed a draw­ing of an Otto cir­cuit board. The design looked very sim­i­lar to Waymo’s cur­rent lidars.

Waymo says the “final piece of the puzzle” came from a story about Otto I wrote for Backchannel based on a public records request. A document sent by Otto to Nevada officials boasted the company had an “in-house custom-built 64-laser” lidar system. To Waymo, that sounded very much like technology it had developed. In February this year, Waymo filed its headline lawsuit accusing Uber (along with Otto Trucking, yet another of Levandowski’s companies, but one that Uber had not purchased) of violating its patents and misappropriating trade secrets on lidar and other technologies.

Uber imme­di­ate­ly denied the accu­sa­tions and has con­sis­tent­ly main­tained its inno­cence. Uber says there is no evi­dence that any of Waymo’s tech­ni­cal files ever came to Uber, let alone that Uber ever made use of them. While Levandows­ki is not named as a defen­dant, he has refused to answer ques­tions in depo­si­tions with Waymo’s lawyers and is expect­ed to do the same at tri­al. (He turned down sev­er­al requests for inter­views for this sto­ry.) He also didn’t ful­ly coop­er­ate with Uber’s own inves­ti­ga­tion into the alle­ga­tions, and that, Uber says, is why it fired him in May.

Levandows­ki prob­a­bly does not need a job. With the pur­chase of 510 Sys­tems and Anthony’s Robots, his salary, and bonus­es, Levandows­ki earned at least $120 mil­lion from his time at Google. Some of that mon­ey has been invest­ed in mul­ti­ple real estate devel­op­ments with his col­lege friend Randy Miller, includ­ing sev­er­al large projects in Oak­land and Berke­ley.

But Levandows­ki has kept busy behind the scenes. In August, court fil­ings say, he per­son­al­ly tracked down a pair of ear­rings giv­en to a Google employ­ee at her going-away par­ty in 2014. The ear­rings were made from con­fi­den­tial lidar cir­cuit boards, and will pre­sum­ably be used by Otto Trucking’s lawyers to sug­gest that Way­mo does not keep a very close eye on its trade secrets.

Some of Levandowski’s friends and col­leagues have expressed shock at the alle­ga­tions he faces, say­ing that they don’t reflect the per­son they knew. “It is…in char­ac­ter for Antho­ny to play fast and loose with things like intel­lec­tu­al prop­er­ty if it’s in pur­suit of build­ing his dream robot,” said Ben Dis­coe. “[But] I was a lit­tle sur­prised at the alleged mag­ni­tude of his dis­re­gard for IP.”

“Def­i­nite­ly one of Anthony’s faults is to be aggres­sive as he is, but it’s also one of his great attrib­ut­es. I don’t see [him doing] all the oth­er stuff he has been accused of,” said David Gold­wa­ter.

But Lar­ry Page is no longer con­vinced that Levandows­ki was key to Chauffeur’s suc­cess. In his depo­si­tion to the court, Page said, “I believe Anthony’s con­tri­bu­tions are quite pos­si­bly neg­a­tive of a high amount.” At Uber, some engi­neers pri­vate­ly say that Levandowski’s poor man­age­ment style set back that company’s self-dri­ving effort by a cou­ple of years.

Even after this trial is done, Levandowski will not be able to rest easy. In May, a judge referred evidence from the case to the US Attorney’s office “for investigation of possible theft of trade secrets,” raising the possibility of criminal proceedings and prison time. Yet on the timeline that matters to Anthony Levandowski, even that may not mean much. Building a robotically enhanced future is his passionate lifetime project. On the Way of the Future, lawsuits or even a jail sentence might just feel like little bumps in the road.

“This case is teach­ing Antho­ny some hard lessons but I don’t see [it] keep­ing him down,” said Randy Miller. “He believes firm­ly in his vision of a bet­ter world through robot­ics and he’s con­vinced me of it. It’s clear to me that he’s on a mis­sion.”

“I think Antho­ny will rise from the ash­es,” agrees one friend and for­mer 510 Sys­tems engi­neer. “Antho­ny has the ambi­tion, the vision, and the abil­i­ty to recruit and dri­ve peo­ple. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cut­ting cor­ners.”

———-

4. In light of Levandowski’s Otto self-driving truck technology, we note tech executive Andrew Yang’s warning about the potential impact of that one technology on our society. (Yang is running for President.)

“His 2020 Slo­gan: Beware of Robots” by Kevin Roose; The New York Times; 2/11/2018.

. . . . “All you need is self-driving cars to destabilize society,” Mr. [Andrew] Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, “we’re going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.”

“That one innovation,” he added, “will be enough to create riots in the street. And we’re about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.” . . . .

5. British sci­en­tist Stephen Hawk­ing recent­ly warned of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy.

“Stephen Hawk­ing Warns Arti­fi­cial Intel­li­gence Could End Mankind” by Rory Cel­lan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawk­ing, one of Britain’s pre-emi­nent sci­en­tists, has said that efforts to cre­ate think­ing machines pose a threat to our very exis­tence.

He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

His warn­ing came in response to a ques­tion about a revamp of the tech­nol­o­gy he uses to com­mu­ni­cate, which involves a basic form of AI. . . .

. . . . Prof Hawk­ing says the prim­i­tive forms of arti­fi­cial intel­li­gence devel­oped so far have already proved very use­ful, but he fears the con­se­quences of cre­at­ing some­thing that can match or sur­pass humans.

“It would take off on its own, and re-design itself at an ever increas­ing rate,” he said. [See the arti­cle in line item #1c.–D.E.]

“Humans, who are lim­it­ed by slow bio­log­i­cal evo­lu­tion, could­n’t com­pete, and would be super­seded.” . . . .

6. In L-2 (recorded in January of 1995, 20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI combined with DNA-based memory systems.

 

Discussion

2 comments for “FTR #997 Summoning the Demon, Part 2: Sorcer’s Apprentice”

  1. Here is perhaps the most chilling story to emerge yet about the development of commercial brain-reading technology. And it’s not a story about Elon Musk’s ‘neural lace’ or Facebook’s mind-reading keyboard. It’s a story about the next obvious application of such mind-reading technology: China has already embarked on mass employee brainwave monitoring to collect real-time data on employees’ emotional states, and it already appears to be enhancing corporate profits:

    South Chi­na Morn­ing Post

    ‘For­get the Face­book leak’: Chi­na is min­ing data direct­ly from work­ers’ brains on an indus­tri­al scale
    Gov­ern­ment-backed sur­veil­lance projects are deploy­ing brain-read­ing tech­nol­o­gy to detect changes in emo­tion­al states in employ­ees on the pro­duc­tion line, the mil­i­tary and at the helm of high-speed trains

    Stephen Chen
    PUBLISHED : Sun­day, 29 April, 2018, 9:02pm
    UPDATED : Wednes­day, 02 May, 2018, 3:08pm

    On the sur­face, the pro­duc­tion lines at Hangzhou Zhongheng Elec­tric look like any oth­er.

    Work­ers out­fit­ted in uni­forms staff lines pro­duc­ing sophis­ti­cat­ed equip­ment for telecom­mu­ni­ca­tion and oth­er indus­tri­al sec­tors.

    But there’s one big dif­fer­ence – the work­ers wear caps to mon­i­tor their brain­waves, data that man­age­ment then uses to adjust the pace of pro­duc­tion and redesign work­flows, accord­ing to the com­pa­ny.

    The com­pa­ny said it could increase the over­all effi­cien­cy of the work­ers by manip­u­lat­ing the fre­quen­cy and length of break times to reduce men­tal stress.

    Hangzhou Zhongheng Elec­tric is just one exam­ple of the large-scale appli­ca­tion of brain sur­veil­lance devices to mon­i­tor people’s emo­tions and oth­er men­tal activ­i­ties in the work­place, accord­ing to sci­en­tists and com­pa­nies involved in the gov­ern­ment-backed projects.

    Con­cealed in reg­u­lar safe­ty hel­mets or uni­form hats, these light­weight, wire­less sen­sors con­stant­ly mon­i­tor the wearer’s brain­waves and stream the data to com­put­ers that use arti­fi­cial intel­li­gence algo­rithms to detect emo­tion­al spikes such as depres­sion, anx­i­ety or rage.

    The tech­nol­o­gy is in wide­spread use around the world but Chi­na has applied it on an unprece­dent­ed scale in fac­to­ries, pub­lic trans­port, state-owned com­pa­nies and the mil­i­tary to increase the com­pet­i­tive­ness of its man­u­fac­tur­ing indus­try and to main­tain social sta­bil­i­ty.

    It has also raised con­cerns about the need for reg­u­la­tion to pre­vent abus­es in the work­place.

    The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

    “There is no doubt about its effect,” Cheng said.

    The com­pa­ny and its rough­ly 40,000 employ­ees man­age the pow­er sup­ply and dis­tri­b­u­tion net­work to homes and busi­ness­es across the province, a task that Cheng said they were able to do to high­er stan­dards thanks to the sur­veil­lance tech­nol­o­gy.

    But he refused to offer more details about the pro­gramme.

    Zhao Binjian, a manager of Ningbo Shenyang Logistics, said the company was using the devices mainly to train new employees. The brain sensors were integrated in virtual reality headsets to simulate different scenarios in the work environment.

    “It has sig­nif­i­cant­ly reduced the num­ber of mis­takes made by our work­ers,” Zhao said, because of “improved under­stand­ing” between the employ­ees and com­pa­ny.

    ...

    The com­pa­ny esti­mat­ed the tech­nol­o­gy had helped it increase rev­enue by 140 mil­lion yuan in the past two years.

    One of the main cen­tres of the research in Chi­na is Neu­ro Cap, a cen­tral gov­ern­ment-fund­ed brain sur­veil­lance project at Ning­bo Uni­ver­si­ty.

    The pro­gramme has been imple­ment­ed in more than a dozen fac­to­ries and busi­ness­es.

    Jin Jia, asso­ciate pro­fes­sor of brain sci­ence and cog­ni­tive psy­chol­o­gy at Ning­bo University’s busi­ness school, said a high­ly emo­tion­al employ­ee in a key post could affect an entire pro­duc­tion line, jeop­ar­dis­ing his or her own safe­ty as well as that of oth­ers.

    “When the sys­tem issues a warn­ing, the man­ag­er asks the work­er to take a day off or move to a less crit­i­cal post. Some jobs require high con­cen­tra­tion. There is no room for a mis­take,” she said.

    Jin said work­ers ini­tial­ly react­ed with fear and sus­pi­cion to the devices.

    “They thought we could read their mind. This caused some dis­com­fort and resis­tance in the begin­ning,” she said.

    “After a while they got used to the device. It looked and felt just like a safe­ty hel­met. They wore it all day at work.”

    Jin said that at present China’s brain-read­ing tech­nol­o­gy was on a par with that in the West but Chi­na was the only coun­try where there had been reports of mas­sive use of the tech­nol­o­gy in the work­place. In the Unit­ed States, for exam­ple, appli­ca­tions have been lim­it­ed to archers try­ing to improve their per­for­mance in com­pe­ti­tion.

    The unprece­dent­ed amount of data from users could help the sys­tem improve and enable Chi­na to sur­pass com­peti­tors over the next few years.

    With improved speed and sen­si­tiv­i­ty, the device could even become a “men­tal key­board” allow­ing the user to con­trol a com­put­er or mobile phone with their mind.

    The research team con­firmed the device and tech­nol­o­gy had been used in China’s mil­i­tary oper­a­tions but declined to pro­vide more infor­ma­tion.

    The tech­nol­o­gy is also being used in med­i­cine.

    Ma Hua­juan, a doc­tor at the Chang­hai Hos­pi­tal in Shang­hai, said the facil­i­ty was work­ing with Fudan Uni­ver­si­ty to devel­op a more sophis­ti­cat­ed ver­sion of the tech­nol­o­gy to mon­i­tor a patient’s emo­tions and pre­vent vio­lent inci­dents.

    In addition to the cap, a special camera captures a patient’s facial expression and body temperature. There is also an array of pressure sensors planted under the bed to monitor shifts in body movement.

    “Togeth­er this dif­fer­ent infor­ma­tion can give a more pre­cise esti­mate of the patient’s men­tal sta­tus,” she said.

    Ma said the hos­pi­tal wel­comed the tech­nol­o­gy and hoped it could warn med­ical staff of a poten­tial vio­lent out­burst from a patient.

    She said the patients had been informed that their brain activ­i­ties would be under sur­veil­lance, and the hos­pi­tal would not acti­vate the devices with­out a patient’s con­sent.

    Deayea, a tech­nol­o­gy com­pa­ny in Shang­hai, said its brain mon­i­tor­ing devices were worn reg­u­lar­ly by train dri­vers work­ing on the Bei­jing-Shang­hai high-speed rail line, one of the busiest of its kind in the world.

    The sen­sors, built in the brim of the driver’s hat, could mea­sure var­i­ous types of brain activ­i­ties, includ­ing fatigue and atten­tion loss with an accu­ra­cy of more than 90 per cent, accord­ing to the company’s web­site.

    If the dri­ver dozed off, for instance, the cap would trig­ger an alarm in the cab­in to wake him up.

    Zheng Xing­wu, a pro­fes­sor of man­age­ment at the Civ­il Avi­a­tion Uni­ver­si­ty of Chi­na, said Chi­na could be the first coun­try in the world to intro­duce the brain sur­veil­lance device into cock­pits.

    Most air­line acci­dents were caused by human fac­tors and a pilot in a dis­turbed emo­tion­al state could put an entire plane at risk, he said.

    Putting the cap on before take-off would give air­lines more infor­ma­tion to deter­mine whether a pilot was fit to fly, Zheng said.

    “The influ­ence of the gov­ern­ment on air­lines and pilots in Chi­na is prob­a­bly larg­er than in many oth­er coun­tries. If the author­i­ties make up their mind to bring the device into the cock­pit, I don’t think they can be stopped,” he said.

    “That means the pilots may need to sac­ri­fice some of their pri­va­cy for the sake of pub­lic safe­ty.”

    Qiao Zhi­an, pro­fes­sor of man­age­ment psy­chol­o­gy at Bei­jing Nor­mal Uni­ver­si­ty, said that while the devices could make busi­ness­es more com­pet­i­tive the tech­nol­o­gy could also be abused by com­pa­nies to con­trol minds and infringe pri­va­cy, rais­ing the spec­tre of “thought police”.

    Thought police were the secret police in George Orwell’s nov­el Nine­teen Eighty-Four, who inves­ti­gat­ed and pun­ished peo­ple for per­son­al and polit­i­cal thoughts not approved of by the author­i­ties.

    “There is no law or reg­u­la­tion to lim­it the use of this kind of equip­ment in Chi­na. The employ­er may have a strong incen­tive to use the tech­nol­o­gy for high­er prof­it, and the employ­ees are usu­al­ly in too weak a posi­tion to say no,” he said.

    “The sell­ing of Face­book data is bad enough. Brain sur­veil­lance can take pri­va­cy abuse to a whole new lev­el.”

    Law­mak­ers should act now to lim­it the use of emo­tion sur­veil­lance and give work­ers more bar­gain­ing pow­er to pro­tect their inter­ests, Qiao said.

    “The human mind should not be exploit­ed for prof­it,” he said.

    ———-

    “‘For­get the Face­book leak’: Chi­na is min­ing data direct­ly from work­ers’ brains on an indus­tri­al scale” by Stephen Chen; South Chi­na Morn­ing Post; 04/29/2018

    “Hangzhou Zhongheng Elec­tric is just one exam­ple of the large-scale appli­ca­tion of brain sur­veil­lance devices to mon­i­tor people’s emo­tions and oth­er men­tal activ­i­ties in the work­place, accord­ing to sci­en­tists and com­pa­nies involved in the gov­ern­ment-backed projects.”

    Wide-scale industrial monitoring of employee brainwaves using sensors and AI. It’s not just happening; it’s apparently widespread across China in factories, public transport, state-owned companies and the military:

    ...
    Con­cealed in reg­u­lar safe­ty hel­mets or uni­form hats, these light­weight, wire­less sen­sors con­stant­ly mon­i­tor the wearer’s brain­waves and stream the data to com­put­ers that use arti­fi­cial intel­li­gence algo­rithms to detect emo­tion­al spikes such as depres­sion, anx­i­ety or rage.

    The tech­nol­o­gy is in wide­spread use around the world but Chi­na has applied it on an unprece­dent­ed scale in fac­to­ries, pub­lic trans­port, state-owned com­pa­nies and the mil­i­tary to increase the com­pet­i­tive­ness of its man­u­fac­tur­ing indus­try and to main­tain social sta­bil­i­ty.

    It has also raised con­cerns about the need for reg­u­la­tion to pre­vent abus­es in the work­place.
    ...
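    For a rough sense of what “artificial intelligence algorithms to detect emotional spikes” could mean in practice, here is a minimal, purely hypothetical sketch: a rolling z-score over a streamed sensor signal that flags readings deviating sharply from the recent baseline. Every function name, window size and threshold here is invented for illustration; the actual Chinese systems are proprietary and undisclosed.

```python
from collections import deque
from statistics import mean, stdev

def spike_flags(signal, window=20, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from the rolling mean of the previous `window` samples. A toy
    stand-in for the undisclosed 'emotional spike' detectors the
    article describes -- not any real vendor's algorithm."""
    recent = deque(maxlen=window)  # sliding baseline window
    flags = []
    for x in signal:
        if len(recent) >= 2:  # stdev needs at least two samples
            mu, sigma = mean(recent), stdev(recent)
            flags.append(sigma > 0 and abs(x - mu) > threshold * sigma)
        else:
            flags.append(False)
        recent.append(x)
    return flags

# A flat baseline with one abrupt jump: only the jump is flagged.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0, 1.0]
print(spike_flags(readings))
```

    A real system would presumably operate on multi-channel EEG features rather than a single number, but the flag-on-deviation pattern is the same basic idea.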

    And if you think this is going to be lim­it­ed to Chi­na and oth­er open­ly author­i­tar­i­an states, note how fit­ting your employ­ees with brain­wave scan­ners does­n’t just pay for itself. It’s appar­ent­ly quite prof­itable:

    ...
    The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

    “There is no doubt about its effect,” Cheng said.

    The com­pa­ny and its rough­ly 40,000 employ­ees man­age the pow­er sup­ply and dis­tri­b­u­tion net­work to homes and busi­ness­es across the province, a task that Cheng said they were able to do to high­er stan­dards thanks to the sur­veil­lance tech­nol­o­gy.

    But he refused to offer more details about the pro­gramme.

    Zhao Binjian, a manager of Ningbo Shenyang Logistics, said the company was using the devices mainly to train new employees. The brain sensors were integrated in virtual reality headsets to simulate different scenarios in the work environment.

    “It has sig­nif­i­cant­ly reduced the num­ber of mis­takes made by our work­ers,” Zhao said, because of “improved under­stand­ing” between the employ­ees and com­pa­ny.

    ...

    The com­pa­ny esti­mat­ed the tech­nol­o­gy had helped it increase rev­enue by 140 mil­lion yuan in the past two years.
    ...

    But beyond profits and efficiency, the system is also being touted for reducing mistakes and increasing safety:

    ...
    One of the main cen­tres of the research in Chi­na is Neu­ro Cap, a cen­tral gov­ern­ment-fund­ed brain sur­veil­lance project at Ning­bo Uni­ver­si­ty.

    The pro­gramme has been imple­ment­ed in more than a dozen fac­to­ries and busi­ness­es.

    Jin Jia, asso­ciate pro­fes­sor of brain sci­ence and cog­ni­tive psy­chol­o­gy at Ning­bo University’s busi­ness school, said a high­ly emo­tion­al employ­ee in a key post could affect an entire pro­duc­tion line, jeop­ar­dis­ing his or her own safe­ty as well as that of oth­ers.

    “When the sys­tem issues a warn­ing, the man­ag­er asks the work­er to take a day off or move to a less crit­i­cal post. Some jobs require high con­cen­tra­tion. There is no room for a mis­take,” she said.

    Jin said work­ers ini­tial­ly react­ed with fear and sus­pi­cion to the devices.

    “They thought we could read their mind. This caused some dis­com­fort and resis­tance in the begin­ning,” she said.

    “After a while they got used to the device. It looked and felt just like a safe­ty hel­met. They wore it all day at work.”

    ...

    Deayea, a tech­nol­o­gy com­pa­ny in Shang­hai, said its brain mon­i­tor­ing devices were worn reg­u­lar­ly by train dri­vers work­ing on the Bei­jing-Shang­hai high-speed rail line, one of the busiest of its kind in the world.

    The sen­sors, built in the brim of the driver’s hat, could mea­sure var­i­ous types of brain activ­i­ties, includ­ing fatigue and atten­tion loss with an accu­ra­cy of more than 90 per cent, accord­ing to the company’s web­site.

    If the dri­ver dozed off, for instance, the cap would trig­ger an alarm in the cab­in to wake him up.

    Zheng Xing­wu, a pro­fes­sor of man­age­ment at the Civ­il Avi­a­tion Uni­ver­si­ty of Chi­na, said Chi­na could be the first coun­try in the world to intro­duce the brain sur­veil­lance device into cock­pits.

    Most air­line acci­dents were caused by human fac­tors and a pilot in a dis­turbed emo­tion­al state could put an entire plane at risk, he said.

    Putting the cap on before take-off would give air­lines more infor­ma­tion to deter­mine whether a pilot was fit to fly, Zheng said.

    “The influ­ence of the gov­ern­ment on air­lines and pilots in Chi­na is prob­a­bly larg­er than in many oth­er coun­tries. If the author­i­ties make up their mind to bring the device into the cock­pit, I don’t think they can be stopped,” he said.

    “That means the pilots may need to sac­ri­fice some of their pri­va­cy for the sake of pub­lic safe­ty.”
    ...

    So, between the prof­its and the alleged enhanced safe­ty there’s undoubt­ed­ly going to be grow­ing calls for nor­mal­iz­ing the use of this tech­nol­o­gy else­where.

    But perhaps the biggest reason to expect the eventual acceptance of this technology by countries around the world will be the fear that China’s early embrace of it confers some sort of brain-reading competitive edge. In other words, there’s probably going to be a perceived ‘brainwave-reading technology gap’:

    ...
    Jin said that at present China’s brain-read­ing tech­nol­o­gy was on a par with that in the West but Chi­na was the only coun­try where there had been reports of mas­sive use of the tech­nol­o­gy in the work­place. In the Unit­ed States, for exam­ple, appli­ca­tions have been lim­it­ed to archers try­ing to improve their per­for­mance in com­pe­ti­tion.

    The unprece­dent­ed amount of data from users could help the sys­tem improve and enable Chi­na to sur­pass com­peti­tors over the next few years.
    ...

    And if constantly reading brainwaves doesn’t give the desired level of predictive information about an individual, there are also systems that supplement the caps with cameras capturing facial expressions and body temperature, currently used to predict violent outbursts by medical patients:

    ...
    The research team con­firmed the device and tech­nol­o­gy had been used in China’s mil­i­tary oper­a­tions but declined to pro­vide more infor­ma­tion.

    The tech­nol­o­gy is also being used in med­i­cine.

    Ma Hua­juan, a doc­tor at the Chang­hai Hos­pi­tal in Shang­hai, said the facil­i­ty was work­ing with Fudan Uni­ver­si­ty to devel­op a more sophis­ti­cat­ed ver­sion of the tech­nol­o­gy to mon­i­tor a patient’s emo­tions and pre­vent vio­lent inci­dents.

    In addition to the cap, a special camera captures a patient’s facial expression and body temperature. There is also an array of pressure sensors planted under the bed to monitor shifts in body movement.

    “Togeth­er this dif­fer­ent infor­ma­tion can give a more pre­cise esti­mate of the patient’s men­tal sta­tus,” she said.

    Ma said the hos­pi­tal wel­comed the tech­nol­o­gy and hoped it could warn med­ical staff of a poten­tial vio­lent out­burst from a patient.

    She said the patients had been informed that their brain activ­i­ties would be under sur­veil­lance, and the hos­pi­tal would not acti­vate the devices with­out a patient’s con­sent.
    ...

    And as the arti­cle notes, this kind of device could even become a “men­tal key­board”, allow­ing the user to con­trol a com­put­er or mobile phone with their mind:

    ...
    With improved speed and sen­si­tiv­i­ty, the device could even become a “men­tal key­board” allow­ing the user to con­trol a com­put­er or mobile phone with their mind.
    ...

    And that mental keyboard technology is what Facebook and Elon Musk claim to be developing too. Which is a reminder that when this kind of technology gets released in the rest of the world under the guise of simply being a ‘mental keyboard’ or ‘computer interface’, it will probably carry similar emotion-reading capabilities as well. Similarly, when emotion-reading technology is pitched to employees as merely monitoring their emotions, not reading their specific thoughts, that claim will be highly questionable.

    Adding to the concerns over possible abuses of this technology is the fact that there is currently no law or regulation to limit its use in China. So if there’s an international competition to become the ‘most efficient’ nation in the world by wiring up all the proles’ brains, it’s going to be one helluva international competition:

    ...
    Qiao Zhi­an, pro­fes­sor of man­age­ment psy­chol­o­gy at Bei­jing Nor­mal Uni­ver­si­ty, said that while the devices could make busi­ness­es more com­pet­i­tive the tech­nol­o­gy could also be abused by com­pa­nies to con­trol minds and infringe pri­va­cy, rais­ing the spec­tre of “thought police”.

    Thought police were the secret police in George Orwell’s nov­el Nine­teen Eighty-Four, who inves­ti­gat­ed and pun­ished peo­ple for per­son­al and polit­i­cal thoughts not approved of by the author­i­ties.

    “There is no law or reg­u­la­tion to lim­it the use of this kind of equip­ment in Chi­na. The employ­er may have a strong incen­tive to use the tech­nol­o­gy for high­er prof­it, and the employ­ees are usu­al­ly in too weak a posi­tion to say no,” he said.

    “The sell­ing of Face­book data is bad enough. Brain sur­veil­lance can take pri­va­cy abuse to a whole new lev­el.”

    Law­mak­ers should act now to lim­it the use of emo­tion sur­veil­lance and give work­ers more bar­gain­ing pow­er to pro­tect their inter­ests, Qiao said.

    “The human mind should not be exploit­ed for prof­it,” he said.

    “The human mind should not be exploit­ed for prof­it” LOL! Yeah, that’s a nice sen­ti­ment.

    So it looks like human­i­ty might be on the cusp of an inter­na­tion­al for-prof­it race to impose mind-read­ing tech­nol­o­gy in the work­place and across soci­ety. On the plus side, giv­en all the data that’s about to be col­lect­ed (imag­ine how valu­able it’s going to be), hope­ful­ly at least we’ll learn some­thing about what makes humans so accept­ing of author­i­tar­i­an­ism.

    Posted by Pterrafractyl | July 16, 2018, 11:45 am
  2. This article from The Guardian concerns an Elon Musk-backed nonprofit named OpenAI, which developed advanced AI software that reads public source material and writes both artificial news stories and fiction. OpenAI asserts that it is not releasing its research publicly, for fear of potential misuse, until it can “discuss the ramifications of the technological breakthrough.” I wonder if this leaves open the possibility that they want to first utilize this proprietary, “nonprofit”-developed technology for other nefarious political purposes.

    https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?CMP=Share_iOSApp_Other

    New AI fake text gen­er­a­tor may be too dan­ger­ous to release, say cre­ators
    The Elon Musk-backed non­prof­it com­pa­ny Ope­nAI declines to release research pub­licly for fear of mis­use

    The Guardian
    Alex Hern @alexhern
    Thu 14 Feb 2019 12.00 EST
    Last mod­i­fied on Thu 14 Feb 2019 16.49 EST

    The cre­ators of a rev­o­lu­tion­ary AI sys­tem that can write news sto­ries and works of fic­tion – dubbed “deep­fakes for text” – have tak­en the unusu­al step of not releas­ing their research pub­licly, for fear of poten­tial mis­use.

    OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

    At its core, GPT2 is a text gen­er­a­tor. The AI sys­tem is fed text, any­thing from a few words to a whole page, and asked to write the next few sen­tences based on its pre­dic­tions of what should come next. The sys­tem is push­ing the bound­aries of what was thought pos­si­ble, both in terms of the qual­i­ty of the out­put, and the wide vari­ety of poten­tial uses.

    When used to sim­ply gen­er­ate new text, GPT2 is capa­ble of writ­ing plau­si­ble pas­sages that match what it is giv­en in both style and sub­ject. It rarely shows any of the quirks that mark out pre­vi­ous AI sys­tems, such as for­get­ting what it is writ­ing about mid­way through a para­graph, or man­gling the syn­tax of long sen­tences.

    Feed it the open­ing line of George Orwell’s Nine­teen Eighty-Four – “It was a bright cold day in April, and the clocks were strik­ing thir­teen” – and the sys­tem recog­nis­es the vague­ly futur­is­tic tone and the nov­el­is­tic style, and con­tin­ues with:
    “I was in my car on my way to a new job in Seat­tle. I put the gas in, put the key in, and then I let it run. I just imag­ined what the day would be like. A hun­dred years from now. In 2045, I was a teacher in some school in a poor part of rur­al Chi­na. I start­ed with Chi­nese his­to­ry and his­to­ry of sci­ence.”

    Feed it the first few para­graphs of a Guardian sto­ry about Brex­it, and its out­put is plau­si­ble news­pa­per prose, replete with “quotes” from Jere­my Cor­byn, men­tions of the Irish bor­der, and answers from the prime minister’s spokesman.

    One such, com­plete­ly arti­fi­cial, para­graph reads: “Asked to clar­i­fy the reports, a spokesman for May said: ‘The PM has made it absolute­ly clear her inten­tion is to leave the EU as quick­ly as is pos­si­ble and that will be under her nego­ti­at­ing man­date as con­firmed in the Queen’s speech last week.’”

    From a research stand­point, GPT2 is ground­break­ing in two ways. One is its size, says Dario Amod­ei, OpenAI’s research direc­tor. The mod­els “were 12 times big­ger, and the dataset was 15 times big­ger and much broad­er” than the pre­vi­ous state-of-the-art AI mod­el. It was trained on a dataset con­tain­ing about 10m arti­cles, select­ed by trawl­ing the social news site Red­dit for links with more than three votes. The vast col­lec­tion of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.

    The amount of data GPT2 was trained on directly affected its quality, giving it more knowledge of how to understand written text. It also led to the second breakthrough. GPT2 is far more general purpose than previous text models. By structuring the text that is input, it can perform tasks including translation and summarisation, and pass simple reading comprehension tests, often performing as well or better than other AIs that have been built specifically for those tasks.
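    "Structuring the text that is input" means wrapping the task in a textual pattern the model can recognise and continue, rather than retraining it. A hedged sketch of two such prompt shapes is below: the "TL;DR:" cue for summarisation and the paired-sentence pattern for translation both follow conventions reported for GPT-2, but the helper functions and exact formatting here are this example's own; the model call itself is omitted.

    ```python
    def summarisation_prompt(article: str) -> str:
        # Appending "TL;DR:" invites the model to continue with a summary
        # of the text that precedes it.
        return article.strip() + "\nTL;DR:"

    def translation_prompt(examples, sentence: str) -> str:
        # A few "source = target" pairs establish the pattern; the final,
        # incomplete pair invites the model to fill in the translation.
        lines = [f"{en} = {fr}" for en, fr in examples]
        lines.append(f"{sentence} =")
        return "\n".join(lines)

    print(summarisation_prompt("GPT2 is a large language model."))
    print(translation_prompt([("hello", "bonjour")], "good night"))
    ```

    The model is never told what "summarise" or "translate" means; it simply predicts the text most likely to follow the pattern, which is why one general-purpose model can stand in for several task-specific ones.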

    That quality, however, has also led OpenAI to go against its remit of pushing AI forward and keep GPT2 behind closed doors for the immediate future while it assesses what malicious users might be able to do with it. "We need to perform experimentation to find out what they can and can't do," said Jack Clark, the charity's head of policy. "If you can't anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously."

    To show what that means, OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI's unfiltered nature. As it is trained on the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.

    Instead, the goal is to show what is possible, to prepare the world for what will be mainstream in a year or two's time. "I have a term for this. The escalator from hell," Clark said. "It's always bringing the technology down in cost and down in price. The rules by which you can control technology have fundamentally changed.

    "We're not saying we know the right thing to do here, we're not laying down the line and saying 'this is the way' … We are trying to develop more rigorous thinking here. We're trying to build the road as we travel across it."

    Posted by Mary Benton | February 14, 2019, 7:56 pm
