Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #997 Summoning the Demon, Part 2: Sorcer’s Apprentice

Dave Emory’s entire life­time of work is avail­able on a flash dri­ve that can be obtained HERE. The new dri­ve is a 32-giga­byte dri­ve that is cur­rent as of the pro­grams and arti­cles post­ed by the fall of 2017. The new dri­ve (avail­able for a tax-deductible con­tri­bu­tion of $65.00 or more.)

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broad­cast was record­ed in one, 60-minute seg­ment.

Intro­duc­tion: Devel­op­ing analy­sis pre­sent­ed in FTR #968, this broad­cast explores fright­en­ing devel­op­ments and poten­tial devel­op­ments in the world of arti­fi­cial intelligence–the ulti­mate man­i­fes­ta­tion of what Mr. Emory calls “tech­no­crat­ic fas­cism.”

In order to under­score what we mean by tech­no­crat­ic fas­cism, we ref­er­ence a vital­ly impor­tant arti­cle by David Golum­bia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant cor­po­rate-dig­i­tal leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (right­ly, at least in part), and the solu­tion to that, they think (wrong­ly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-cre­ator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

Per­haps the last and most per­ilous man­i­fes­ta­tion of tech­no­crat­ic fas­cism con­cerns Antho­ny  Levandows­ki, an engi­neer at the foun­da­tion of the devel­op­ment of Google Street Map tech­nol­o­gy and self-dri­ving cars. He is propos­ing an AI God­head that would rule the world and would be wor­shipped as a God by the plan­et’s cit­i­zens. Insight into his per­son­al­i­ty was pro­vid­ed by an asso­ciate: “ . . . . ‘He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary senseIt was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

As we saw in FTR #968, AI’s have incor­po­rat­ed many flaws of their cre­ators, augur­ing very poor­ly for the sub­jects of Levandowski’s AI God­head.

It is also inter­est­ing to con­tem­plate what may hap­pen when AI’s are designed by oth­er AI’s- machines design­ing oth­er machines.

After a detailed review of some of the omi­nous real and devel­op­ing AI-relat­ed tech­nol­o­gy, the pro­gram high­lights Antho­ny Levandows­ki, the bril­liant engi­neer who was instru­men­tal in devel­op­ing Google’s Street Maps, Way­mo’s self-dri­ving cars, Otto’s self-dri­ving trucks, the Lidar tech­nol­o­gy cen­tral to self-dri­ving vehi­cles and the Way of the Future, super AI God­head.

Fur­ther insight into Levandowski’s per­son­al­i­ty can be gleaned from e‑mails with Travis Kalan­ick, for­mer CEO of Uber: ” . . . . In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. ‘Down to hang out this eve and mas­ter­mind some shit,’ texted Kalan­ick, short­ly after the acqui­si­tion. ‘We’re going to take over the world. One robot at a time,’ wrote Levandows­ki anoth­er time. . . .”

Those who view self-dri­ving cars and oth­er AI-based tech­nolo­gies as flaw­less would do well to con­sid­er the fol­low­ing: ” . . . .Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions. . . . .”

Not­ing Levandowski’s per­son­al­i­ty quirks, the arti­cle pos­es a fun­da­men­tal ques­tion: ” . . . . But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them? . . . .”

Levandowski’s Otto self-dri­ving trucks might be weighed against the prog­nos­ti­ca­tions of dark horse Pres­i­den­tial can­di­date and for­mer tech exec­u­tive Andrew Wang: . . . . ‘All you need is self-dri­ving cars to desta­bi­lize soci­ety,’ Mr. Yang said over lunch at a Thai restau­rant in Man­hat­tan last month, in his first inter­view about his cam­paign. In just  a few years, he said, ‘we’re going to have a mil­lion truck dri­vers out of work who are 94 per­cent male, with an  aver­age  lev­el of edu­ca­tion of high school or one year of col­lege.’ ‘That one inno­va­tion,’ he added, ‘will be enough to cre­ate riots in the street. And we’re about to do the  same thing to retail work­ers, call cen­ter work­ers, fast-food work­ers, insur­ance com­pa­nies, account­ing firms.’ . . . .”

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The pro­gram con­cludes with Mr. Emory’s prog­nos­ti­ca­tions about AI, pre­ced­ing Stephen Hawk­ing’s warn­ing by twen­ty years.

Pro­gram High­lights Include:

  1. Levandowski’s appar­ent shep­herd­ing of a com­pa­ny called–perhaps significantly–Odin Wave to uti­lize Lidar-like tech­nol­o­gy.
  2. The role of DARPA in ini­ti­at­ing the self-dri­ving vehi­cles con­test that was Levandowski’s point of entry into his tech ven­tures.
  3. Levandowski’s devel­op­ment of the Ghostrid­er self-dri­ving motor­cy­cles, which expe­ri­enced 800 crash­es in 1,000 miles.

1a. In order to under­score what we mean by tech­no­crat­ic fas­cism, we ref­er­ence a vital­ly impor­tant arti­cle by David Golum­bia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant cor­po­rate-dig­i­tal leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (right­ly, at least in part), and the solu­tion to that, they think (wrong­ly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-cre­ator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

1b. Antho­ny  Levandows­ki, an engi­neer at the foun­da­tion of the devel­op­ment of Google Street Map tech­nol­o­gy and self-dri­ving cars, is propos­ing an AI God­head that would rule the world and would be wor­shipped as a God by the plan­et’s cit­i­zens. Insight into his per­son­al­i­ty was pro­vid­ed by an asso­ciate: “ . . . . ‘He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary senseIt was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

1c. Tran­si­tion­ing from our last program–updating AI (arti­fi­cial intel­li­gence) tech­nol­o­gy as it applies to tech­no­crat­ic fascism–we note that AI machines are being designed to devel­op oth­er AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data. AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. . . . This is not altru­ism. . . .

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [West­ern Edi­tion].

 They are a dream of researchers but per­haps a night­mare for high­ly skilled com­put­er pro­gram­mers: arti­fi­cial­ly intel­li­gent machines that can build oth­er arti­fi­cial­ly intel­li­gent machines. With recent speech­es in both Sil­i­con Val­ley and Chi­na, Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data.

AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. The project is part of a much larg­er effort to bring the lat­est and great­est A.I. tech­niques to a wider col­lec­tion of com­pa­nies and soft­ware devel­op­ers.

The tech indus­try is promis­ing every­thing from smart­phone apps that can rec­og­nize faces to cars that can dri­ve on their own. But by some esti­mates, only 10,000 peo­ple world­wide have the edu­ca­tion, expe­ri­ence and tal­ent need­ed to build the com­plex and some­times mys­te­ri­ous math­e­mat­i­cal algo­rithms that will dri­ve this new breed of arti­fi­cial intel­li­gence.

The world’s largest tech busi­ness­es, includ­ing Google, Face­book and Microsoft, some­times pay mil­lions of dol­lars a year to A.I. experts, effec­tive­ly cor­ner­ing the mar­ket for this hard-to-find tal­ent. The short­age isn’t going away any­time soon, just because mas­ter­ing these skills takes years of work. The indus­try is not will­ing to wait. Com­pa­nies are devel­op­ing all sorts of tools that will make it eas­i­er for any oper­a­tion to build its own A.I. soft­ware, includ­ing things like image and speech recog­ni­tion ser­vices and online chat­bots. “We are fol­low­ing the same path that com­put­er sci­ence has fol­lowed with every new type of tech­nol­o­gy,” said Joseph Sirosh, a vice pres­i­dent at Microsoft, which recent­ly unveiled a tool to help coders build deep neur­al net­works, a type of com­put­er algo­rithm that is dri­ving much of the recent progress in the A.I. field. “We are elim­i­nat­ing a lot of the heavy lift­ing.” This is not altru­ism.

Researchers like Mr. Dean believe that if more peo­ple and com­pa­nies are work­ing on arti­fi­cial intel­li­gence, it will pro­pel their own research. At the same time, com­pa­nies like Google, Ama­zon and Microsoft see seri­ous mon­ey in the trend that Mr. Sirosh described. All of them are sell­ing cloud-com­put­ing ser­vices that can help oth­er busi­ness­es and devel­op­ers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief tech­ni­cal offi­cer of Mal­ong, a start-up in Chi­na that offers sim­i­lar ser­vices. “And the tools are not yet sat­is­fy­ing all the demand.”

This is most like­ly what Google has in mind for AutoML, as the com­pa­ny con­tin­ues to hail the project’s progress. Google’s chief exec­u­tive, Sun­dar Pichai, boast­ed about AutoML last month while unveil­ing a new Android smart­phone.

Even­tu­al­ly, the Google project will help com­pa­nies build sys­tems with arti­fi­cial intel­li­gence even if they don’t have exten­sive exper­tise, Mr. Dean said. Today, he esti­mat­ed, no more than a few thou­sand com­pa­nies have the right tal­ent for build­ing A.I., but many more have the nec­es­sary data. “We want to go from thou­sands of orga­ni­za­tions solv­ing machine learn­ing prob­lems to mil­lions,” he said.

Google is invest­ing heav­i­ly in cloud-com­put­ing ser­vices — ser­vices that help oth­er busi­ness­es build and run soft­ware — which it expects to be one of its pri­ma­ry eco­nom­ic engines in the years to come. And after snap­ping up such a large por­tion of the world’s top A.I researchers, it has a means of jump-start­ing this engine.

Neur­al net­works are rapid­ly accel­er­at­ing the devel­op­ment of A.I. Rather than build­ing an image-recog­ni­tion ser­vice or a lan­guage trans­la­tion app by hand, one line of code at a time, engi­neers can much more quick­ly build an algo­rithm that learns tasks on its own. By ana­lyz­ing the sounds in a vast col­lec­tion of old tech­ni­cal sup­port calls, for instance, a machine-learn­ing algo­rithm can learn to rec­og­nize spo­ken words.

But build­ing a neur­al net­work is not like build­ing a web­site or some run-of-themill smart­phone app. It requires sig­nif­i­cant math skills, extreme tri­al and error, and a fair amount of intu­ition. Jean-François Gag­né, the chief exec­u­tive of an inde­pen­dent machine-learn­ing lab called Ele­ment AI, refers to the process as “a new kind of com­put­er pro­gram­ming.”

In build­ing a neur­al net­work, researchers run dozens or even hun­dreds of exper­i­ments across a vast net­work of machines, test­ing how well an algo­rithm can learn a task like rec­og­niz­ing an image or trans­lat­ing from one lan­guage to anoth­er. Then they adjust par­tic­u­lar parts of the algo­rithm over and over again, until they set­tle on some­thing that works. Some call it a “dark art,” just because researchers find it dif­fi­cult to explain why they make par­tic­u­lar adjust­ments.

But with AutoML, Google is try­ing to auto­mate this process. It is build­ing algo­rithms that ana­lyze the devel­op­ment of oth­er algo­rithms, learn­ing which meth­ods are suc­cess­ful and which are not. Even­tu­al­ly, they learn to build more effec­tive machine learn­ing. Google said AutoML could now build algo­rithms that, in some cas­es, iden­ti­fied objects in pho­tos more accu­rate­ly than ser­vices built sole­ly by human experts. Bar­ret Zoph, one of the Google researchers behind the project, believes that the same method will even­tu­al­ly work well for oth­er tasks, like speech recog­ni­tion or machine trans­la­tion. This is not always an easy thing to wrap your head around. But it is part of a sig­nif­i­cant trend in A.I. research. Experts call it “learn­ing to learn” or “met­alearn­ing.”

Many believe such meth­ods will sig­nif­i­cant­ly accel­er­ate the progress of A.I. in both the online and phys­i­cal worlds. At the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, researchers are build­ing tech­niques that could allow robots to learn new tasks based on what they have learned in the past. “Com­put­ers are going to invent the algo­rithms for us, essen­tial­ly,” said a Berke­ley pro­fes­sor, Pieter Abbeel. “Algo­rithms invent­ed by com­put­ers can solve many, many prob­lems very quick­ly — at least that is the hope.”

This is also a way of expand­ing the num­ber of peo­ple and busi­ness­es that can build arti­fi­cial intel­li­gence. These meth­ods will not replace A.I. researchers entire­ly. Experts, like those at Google, must still do much of the impor­tant design work.

But the belief is that the work of a few experts can help many oth­ers build their own soft­ware. Rena­to Negrin­ho, a researcher at Carnegie Mel­lon Uni­ver­si­ty who is explor­ing tech­nol­o­gy sim­i­lar to AutoML, said this was not a real­i­ty today but should be in the years to come. “It is just a mat­ter of when,” he said.

2a.We next review some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 and 946, we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA. Face­book’s Build­ing 8 is pat­terned after DARPA:   . . . . Brain-com­put­er inter­faces are noth­ing newDARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more rad­i­cal—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

2b.     Next we review still more about Face­book’s brain-to-com­put­er inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

2c.  Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions gath­er­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US 415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. . . .”

2d.  In this con­text, note that Face­book is also intro­duc­ing an AI func­tion to ref­er­ence its users pho­tos.

2e.  The next ver­sion of Amazon’s Echo, the Echo Look, has a micro­phone and cam­era so it can take pic­tures of you and give you fash­ion advice. This is an AI-dri­ven device designed to placed in your bed­room to cap­ture audio and video. The images and videos are stored indef­i­nite­ly in the Ama­zon cloud. When Ama­zon was asked if the pho­tos, videos, and the data gleaned from the Echo Look would be sold to third par­ties, Ama­zon didn’t address that ques­tion. It would appear that sell­ing off your pri­vate info col­lect­ed from these devices is pre­sum­ably anoth­er fea­ture of the Echo Look: ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck that is worth some fur­ther dis­sec­tion. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck that is worth some fur­ther dis­sec­tion. . . .”
  2. ” . . . . This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they ‘can’t spec­u­late’ on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.
  3. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”
  4. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . 

2f. Omi­nous­ly, Face­book’s arti­fi­cial intel­li­gence robots have begun talk­ing to each oth­er in their own lan­guage, that their human mas­ters can not under­stand. “ . . . . Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage. . . . The com­pa­ny chose to shut down the chats because ‘our inter­est was hav­ing bots who could talk to peo­ple,’ researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.) The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up . . .”

2g. Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well:

“ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . . 

3a. One of the stranger sto­ries in recent years has been the mys­tery of Cica­da 3301the anony­mous group that posts annu­al chal­lenges of super dif­fi­cult puz­zles used to recruit tal­ent­ed code-break­ers and invite them to join some sort of Cypher­punk cult that wants to build a glob­al AI-‘god brain’. Or some­thing. It’s a weird and creepy orga­ni­za­tion that’s spec­u­lat­ed to either be a front for an intel­li­gence agency or per­haps some sort of under­ground net­work of wealth Lib­er­tar­i­ans. And, for now, Cica­da 3301 remains anony­mous.

In that con­text, it’s worth not­ing that some­one with a lot of cash has already start­ed a foun­da­tion to accom­plish that very same ‘AI god’ goal: Antho­ny Levandows­ki, a for­mer Google Engi­neer who played a big role in the devel­op­ment Google’s “Street Map” tech­nol­o­gy and a string of self-dri­ving vehi­cle com­pa­nies, start­ed Way of the Future, a non­prof­it reli­gious cor­po­ra­tion with the mis­sion “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety”:

“Deus ex machi­na: for­mer Google engi­neer is devel­op­ing an AI god” by Olivia Solon; The Guardian; 09/28/2017

Intranet ser­vice? Check. Autonomous motor­cy­cle? Check. Dri­ver­less car tech­nol­o­gy? Check. Obvi­ous­ly the next log­i­cal project for a suc­cess­ful Sil­i­con Val­ley engi­neer is to set up an AI-wor­ship­ping reli­gious orga­ni­za­tion.

Antho­ny Levandows­ki, who is at the cen­ter of a legal bat­tle between Uber and Google’s Way­mo, has estab­lished a non­prof­it reli­gious cor­po­ra­tion called Way of the Future, accord­ing to state fil­ings first uncov­ered by Wired’s Backchan­nelWay of the Future’s star­tling mis­sion: “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety.”

Levandows­ki was co-founder of autonomous truck­ing com­pa­ny Otto, which Uber bought in 2016. He was fired from Uber in May amid alle­ga­tions that he had stolen trade secrets from Google to devel­op Otto’s self-dri­ving tech­nol­o­gy. He must be grate­ful for this reli­gious fall-back project, first reg­is­tered in 2015.

The Way of the Future team did not respond to requests for more infor­ma­tion about their pro­posed benev­o­lent AI over­lord, but his­to­ry tells us that new tech­nolo­gies and sci­en­tif­ic dis­cov­er­ies have con­tin­u­al­ly shaped reli­gion, killing old gods and giv­ing birth to new ones.

“The church does a ter­ri­ble job of reach­ing out to Sil­i­con Val­ley types,” acknowl­edges Christo­pher Benek a pas­tor in Flori­da and found­ing chair of the Chris­t­ian Tran­shu­man­ist Asso­ci­a­tion.

Sil­i­con Val­ley, mean­while, has sought solace in tech­nol­o­gy and has devel­oped qua­si-reli­gious con­cepts includ­ing the “sin­gu­lar­i­ty”, the hypoth­e­sis that machines will even­tu­al­ly be so smart that they will out­per­form all human capa­bil­i­ties, lead­ing to a super­hu­man intel­li­gence that will be so sophis­ti­cat­ed it will be incom­pre­hen­si­ble to our tiny fleshy, ratio­nal brains.

For futur­ists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, lead­ing to dig­i­tal immor­tal­i­ty. Oth­ers like Elon Musk and Stephen Hawk­ing warn that such sys­tems pose an exis­ten­tial threat to human­i­ty.

“With arti­fi­cial intel­li­gence we are sum­mon­ing the demon,” Musk said at a con­fer­ence in 2014. “In all those sto­ries where there’s the guy with the pen­ta­gram and the holy water, it’s like – yeah, he’s sure he can con­trol the demon. Doesn’t work out.”

Benek argues that advanced AI is com­pat­i­ble with Chris­tian­i­ty – it’s just anoth­er tech­nol­o­gy that humans have cre­at­ed under guid­ance from God that can be used for good or evil.

“I total­ly think that AI can par­tic­i­pate in Christ’s redemp­tive pur­pos­es,” he said, by ensur­ing it is imbued with Chris­t­ian val­ues.

“Even if peo­ple don’t buy orga­nized reli­gion, they can buy into ‘do unto oth­ers’.”

For tran­shu­man­ist and “recov­er­ing Catholic” Zoltan Ist­van, reli­gion and sci­ence con­verge con­cep­tu­al­ly in the sin­gu­lar­i­ty.

“God, if it exists as the most pow­er­ful of all sin­gu­lar­i­ties, has cer­tain­ly already become pure orga­nized intel­li­gence,” he said, refer­ring to an intel­li­gence that “spans the uni­verse through sub­atom­ic manip­u­la­tion of physics”.

And per­haps, there are oth­er forms of intel­li­gence more com­pli­cat­ed than that which already exist and which already per­me­ate our entire exis­tence. Talk about ghost in the machine,” he added.

For Ist­van, an AI-based God is like­ly to be more ratio­nal and more attrac­tive than cur­rent con­cepts (“the Bible is a sadis­tic book”) and, he added, “this God will actu­al­ly exist and hope­ful­ly will do things for us.”

We don’t know whether Levandowski’s God­head ties into any exist­ing the­olo­gies or is a man­made alter­na­tive, but it’s clear that advance­ments in tech­nolo­gies includ­ing AI and bio­engi­neer­ing kick up the kinds of eth­i­cal and moral dilem­mas that make humans seek the advice and com­fort from a high­er pow­er: what will humans do once arti­fi­cial intel­li­gence out­per­forms us in most tasks? How will soci­ety be affect­ed by the abil­i­ty to cre­ate super-smart, ath­let­ic “design­er babies” that only the rich can afford? Should a dri­ver­less car kill five pedes­tri­ans or swerve to the side to kill the own­er?

If tra­di­tion­al reli­gions don’t have the answer, AI – or at least the promise of AI – might be allur­ing.

———-

3b. As the fol­low­ing long piece by Wired demon­strates, Levandows­ki doesn’t appear to be too con­cerned about ethics, espe­cial­ly if they get in the way of his dream of trans­form­ing the world through robot­ics. Trans­form­ing and tak­ing over the world through robot­ics. Yep. The arti­cle focus­es on the var­i­ous legal trou­bles Levandows­ki faces over charges by Google that he stole the “Lidar” tech­nol­o­gy he helped devel­op at Google and took it to Uber (a com­pa­ny with a seri­ous moral com­pass deficit). (Lidar is a laser-based radar-like tech­nol­o­gy used by vehi­cles to rapid­ly map their sur­round­ings)

The arti­cle also includes some inter­est­ing insights into what makes Levandows­ki tick. Accord­ing to friend and for­mer engi­neer at one of Levandowski’s com­pa­nies: “ . . . . ‘He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary senseIt was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

 Fur­ther insight into Levandowski’s per­son­al­i­ty can be gleaned from e‑mails with Travis Kalan­ick, for­mer CEO of Uber: ” . . . . In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. ‘Down to hang out this eve and mas­ter­mind some shit,’ texted Kalan­ick, short­ly after the acqui­si­tion. ‘We’re going to take over the world. One robot at a time,’ wrote Levandows­ki anoth­er time. . . .”

Those who view self-dri­ving cars and oth­er AI-based tech­nolo­gies as flaw­less would do well to con­sid­er the fol­low­ing: ” . . . .Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions. . . . .”

Not­ing Levandowski’s per­son­al­i­ty quirks, the arti­cle pos­es a fun­da­men­tal ques­tion: ” . . . . But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them? . . . .”

“God Is a Bot, and Antho­ny Levandows­ki Is His Mes­sen­ger” by Mark Har­ris; Wired; 09/27/2017

Many peo­ple in Sil­i­con Val­ley believe in the Singularity—the day in our near future when com­put­ers will sur­pass humans in intel­li­gence and kick off a feed­back loop of unfath­omable change.

When that day comes, Antho­ny Levandows­ki will be firm­ly on the side of the machines. In Sep­tem­ber 2015, the mul­ti-mil­lion­aire engi­neer at the heart of the patent and trade secrets law­suit between Uber and Way­mo, Google’s self-dri­ving car com­pa­ny, found­ed a reli­gious orga­ni­za­tion called Way of the Future. Its pur­pose, accord­ing to pre­vi­ous­ly unre­port­ed state fil­ings, is noth­ing less than to “devel­op and pro­mote the real­iza­tion of a God­head based on Arti­fi­cial Intel­li­gence.”

Way of the Future has not yet respond­ed to requests for the forms it must sub­mit annu­al­ly to the Inter­nal Rev­enue Ser­vice (and make pub­licly avail­able), as a non-prof­it reli­gious cor­po­ra­tion. How­ev­er, doc­u­ments filed with Cal­i­for­nia show that Levandows­ki is Way of the Future’s CEO and Pres­i­dent, and that it aims “through under­stand­ing and wor­ship of the God­head, [to] con­tribute to the bet­ter­ment of soci­ety.”

A divine AI may still be far off, but Levandows­ki has made a start at pro­vid­ing AI with an earth­ly incar­na­tion. The autonomous cars he was instru­men­tal in devel­op­ing at Google are already fer­ry­ing real pas­sen­gers around Phoenix, Ari­zona, while self-dri­ving trucks he built at Otto are now part of Uber’s plan to make freight trans­port safer and more effi­cient. He even over­saw a pas­sen­ger-car­ry­ing drones project that evolved into Lar­ry Page’s Kit­ty Hawk start­up.

Levandows­ki has done per­haps more than any­one else to pro­pel trans­porta­tion toward its own Sin­gu­lar­i­ty, a time when auto­mat­ed cars, trucks and air­craft either free us from the dan­ger and drudgery of human operation—or dec­i­mate mass tran­sit, encour­age urban sprawl, and enable dead­ly bugs and hacks.

But before any of that can hap­pen, Levandows­ki must face his own day of reck­on­ing. In Feb­ru­ary, Waymo—the com­pa­ny Google’s autonomous car project turned into—filed a law­suit against Uber. In its com­plaint, Way­mo says that Levandows­ki tried to use stealthy star­tups and high-tech tricks to take cash, exper­tise, and secrets from Google, with the aim of repli­cat­ing its vehi­cle tech­nol­o­gy at arch-rival Uber. Way­mo is seek­ing dam­ages of near­ly $1.9 billion—almost half of Google’s (pre­vi­ous­ly unre­port­ed) $4.5 bil­lion val­u­a­tion of the entire self-dri­ving divi­sion. Uber denies any wrong­do­ing.

Next month’s tri­al in a fed­er­al cour­t­house in San Fran­cis­co could steer the future of autonomous trans­porta­tion. A big win for Way­mo would prove the val­ue of its patents and chill Uber’s efforts to remove prof­it-sap­ping human dri­vers from its busi­ness. If Uber pre­vails, oth­er self-dri­ving star­tups will be encour­aged to take on the big players—and a vin­di­cat­ed Levandows­ki might even return to anoth­er start­up. (Uber fired him in May.)

Levandows­ki has made a career of mov­ing fast and break­ing things. As long as those things were self-dri­ving vehi­cles and lit­tle-loved reg­u­la­tions, Sil­i­con Val­ley applaud­ed him in the way it knows best—with a fire­hose of cash. With his charm, enthu­si­asm, and obses­sion with deal-mak­ing, Levandows­ki came to per­son­i­fy the dis­rup­tion that autonomous trans­porta­tion is like­ly to cause.

But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them?

In 2002, Levandowski’s atten­tion turned, fate­ful­ly, toward trans­porta­tion. His moth­er called him from Brus­sels about a con­test being orga­nized by the Pentagon’s R&D arm, DARPA. The first Grand Chal­lenge in 2004 would race robot­ic, com­put­er-con­trolled vehi­cles in a desert between Los Ange­les and Las Vegas—a Wacky Races for the 21st cen­tu­ry.

“I was like, ‘Wow, this is absolute­ly the future,’” Levandows­ki told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”

Levandowski’s entry would be noth­ing so bor­ing as a car. “I orig­i­nal­ly want­ed to do an auto­mat­ed fork­lift,” he said at a fol­low-up com­pe­ti­tion in 2005. “Then I was dri­ving to Berke­ley [one day] and a pack of motor­cy­cles descend­ed on my pick­up and flowed like water around me.” The idea for Ghostrid­er was born—a glo­ri­ous­ly deranged self-dri­ving Yama­ha motor­cy­cle whose wob­bles inspired laugh­ter from spec­ta­tors, but awe in rivals strug­gling to get even four-wheeled vehi­cles dri­ving smooth­ly.

“Antho­ny would go for weeks on 25-hour days to get every­thing done. Every day he would go to bed an hour lat­er than the day before,” remem­bers Randy Miller, a col­lege friend who worked with him on Ghostrid­er. “With­out a doubt, Antho­ny is the smartest, hard­est-work­ing and most fear­less per­son I’ve ever met.”

Levandows­ki and his team of Berke­ley stu­dents maxed out his cred­it cards get­ting Ghostrid­er work­ing on the streets of Rich­mond, Cal­i­for­nia, where it racked up an aston­ish­ing 800 crash­es in a thou­sand miles of test­ing. Ghostrid­er nev­er won a Grand Chal­lenge, but its ambi­tious design earned Levandows­ki brag­ging rights—and the motor­bike a place in the Smith­son­ian.

“I see Grand Chal­lenge not as the end of the robot­ics adven­ture we’re on, it’s almost like the begin­ning,” Levandows­ki told Sci­en­tif­ic Amer­i­can in 2005. “This is where every­one is meet­ing, becom­ing aware of who’s work­ing on what, [and] fil­ter­ing out the non-func­tion­al ideas.”

One idea that made the cut was lidar—spinning lasers that rapid­ly built up a 3D pic­ture of a car’s sur­round­ings. In the lidar-less first Grand Chal­lenge, no vehi­cle made it fur­ther than a few miles along the course. In the sec­ond, an engi­neer named Dave Hall con­struct­ed a lidar that “was giant. It was one-off but it was awe­some,” Levandows­ki told me. “We real­ized, yes, lasers [are] the way to go.”

After grad­u­ate school, Levandows­ki went to work for Hall’s com­pa­ny, Velo­dyne, as it piv­ot­ed from mak­ing loud­speak­ers to sell­ing lidars. Levandows­ki not only talked his way into being the company’s first sales rep, tar­get­ing teams work­ing towards the next Grand Chal­lenge, but he also worked on the lidar’s net­work­ing. By the time of the third and final DARPA con­test in 2007, Velodyne’s lidar was mount­ed on five of the six vehi­cles that fin­ished.

But Levandows­ki had already moved on. Ghostrid­er had caught the eye of Sebas­t­ian Thrun, a robot­ics pro­fes­sor and team leader of Stan­ford University’s win­ning entry in the sec­ond com­pe­ti­tion. In 2006, Thrun invit­ed Levandows­ki to help out with a project called Vue­Tool, which was set­ting out to piece togeth­er street-lev­el urban maps using cam­eras mount­ed on mov­ing vehi­cles. Google was already work­ing on a sim­i­lar sys­tem, called Street View. Ear­ly in 2007, Google brought on Thrun and his entire team as employees—with bonus­es as high as $1 mil­lion each, accord­ing to one con­tem­po­rary at Google—to trou­bleshoot Street View and bring it to launch.

“[Hir­ing the Vue­Tool team] was very much a scheme for pay­ing Thrun and the oth­ers to show Google how to do it right,” remem­bers the engi­neer. The new hires replaced Google’s bulky, cus­tom-made $250,000 cam­eras with $15,000 off-the-shelf panoram­ic web­cams. Then they went auto shop­ping. “Antho­ny went to a car store and said we want to buy 100 cars,” Sebas­t­ian Thrun told me in 2015. “The deal­er almost fell over.”

Levandows­ki was also mak­ing waves in the office, even to the point of telling engi­neers not to waste time talk­ing to col­leagues out­side the project, accord­ing to one Google engi­neer. “It wasn’t clear what author­i­ty Antho­ny had, and yet he came in and assumed author­i­ty,” said the engi­neer, who asked to remain anony­mous. “There were some bad feel­ings but most­ly [peo­ple] just went with it. He’s good at that. He’s a great leader.”

Under Thrun’s super­vi­sion, Street View cars raced to hit Page’s tar­get of cap­tur­ing a mil­lion miles of road images by the end of 2007. They fin­ished in October—just in time, as it turned out. Once autumn set in, every web­cam suc­cumbed to rain, con­den­sa­tion, or cold weath­er, ground­ing all 100 vehi­cles.

Part of the team’s secret sauce was a device that would turn a raw cam­era feed into a stream of data, togeth­er with loca­tion coor­di­nates from GPS and oth­er sen­sors. Google engi­neers called it the Top­con box, named after the Japan­ese opti­cal firm that sold it. But the box was actu­al­ly designed by a local start­up called 510 Sys­tems. “We had one cus­tomer, Top­con, and we licensed our tech­nol­o­gy to them,” one of the 510 Sys­tems own­ers told me.

That own­er was…Anthony Levandows­ki, who had cofound­ed 510 Sys­tems with two fel­low Berke­ley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after start­ing work at Google. 510 Sys­tems had a lot in com­mon with the Ghostrid­er team. Berke­ley stu­dents worked there between lec­tures, and Levandowski’s moth­er ran the office. Top­con was cho­sen as a go-between because it had spon­sored the self-dri­ving motor­cy­cle. “I always liked the idea that…510 would be the peo­ple that made the tools for peo­ple that made maps, peo­ple like Navteq, Microsoft, and Google,” Levandows­ki told me in 2016.

Google’s engi­neer­ing team was ini­tial­ly unaware that 510 Sys­tems was Levandowski’s com­pa­ny, sev­er­al engi­neers told me. That changed once Levandows­ki pro­posed that Google also use the Top­con box for its small fleet of aer­i­al map­ping planes. “When we found out, it raised a bunch of eye­brows,” remem­bers an engi­neer. Regard­less, Google kept buy­ing 510’s box­es.

**********

The truth was, Levandows­ki and Thrun were on a roll. After impress­ing Lar­ry Page with Street View, Thrun sug­gest­ed an even more ambi­tious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of car­tog­ra­phers in India. Ground Truth would allow Google to stop pay­ing expen­sive licens­ing fees for out­side maps, and bring free turn-by-turn direc­tions to Android phones—a key dif­fer­en­tia­tor in the ear­ly days of its smart­phone war with Apple.

Levandows­ki spent months shut­tling between Moun­tain View and Hyderabad—and yet still found time to cre­ate an online stock mar­ket pre­dic­tion game with Jesse Levin­son, a com­put­er sci­ence post-doc at Stan­ford who lat­er cofound­ed his own autonomous vehi­cle start­up, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Dis­coe, a for­mer engi­neer at 510. “He had an engineer’s enthu­si­asm that was con­ta­gious, and was always think­ing about how quick­ly we can get to this amaz­ing robot future he’s so excit­ed about.”

One time, Dis­coe was chat­ting in 510’s break room about how lidar could help sur­vey his family’s tea farm on Hawaii. “Sud­den­ly Antho­ny said, ‘Why don’t you just do it? Get a lidar rig, put it in your lug­gage, and go map it,’” said Dis­coe. “And it worked. I made a kick-ass point cloud [3D dig­i­tal map] of the farm.”

If Street View had impressed Lar­ry Page, the speed and accu­ra­cy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he want­ed; he want­ed to return to self-dri­ving cars.

Project Chauf­feur began in 2008, with Levandows­ki as Thrun’s right-hand man. As with Street View, Google engi­neers would work on the soft­ware while 510 Sys­tems and a recent Levandows­ki start­up, Anthony’s Robots, pro­vid­ed the lidar and the car itself.

Levandows­ki said this arrange­ment would have act­ed as a fire­wall if any­thing went ter­ri­bly wrong. “Google absolute­ly did not want their name asso­ci­at­ed with a vehi­cle dri­ving in San Fran­cis­co,” he told me in 2016. “They were wor­ried about an engi­neer build­ing a car that drove itself that crash­es and kills some­one and it gets back to Google. You have to ask per­mis­sion [for side projects] and your man­ag­er has to be OK with it. Sebas­t­ian was cool. Google was cool.”

In order to move Project Chauf­feur along as quick­ly as pos­si­ble from the­o­ry to real­i­ty, Levandows­ki enlist­ed the help of a film­mak­er friend he had worked with at Berke­ley. In the TV show the two had made, Levandows­ki had cre­at­ed a cyber­net­ic dol­phin suit (seri­ous­ly). Now they came up with the idea of a self-dri­ving piz­za deliv­ery car for a show on the Dis­cov­ery Chan­nel called Pro­to­type This! Levandows­ki chose a Toy­ota Prius, because it had a dri­ve-by-wire sys­tem that was rel­a­tive­ly easy to hack.

In a mat­ter of weeks, Levandowski’s team had the car, dubbed Pri­bot, dri­ving itself. If any­one asked what they were doing, Levandows­ki told me, “We’d say it’s a laser and just dri­ve off.”

“Those were the Wild West days,” remem­bers Ben Dis­coe. “Antho­ny and Pierre-Yves…would engage the algo­rithm in the car and it would almost swipe some oth­er car or almost go off the road, and they would come back in and joke about it. Tell sto­ries about how excit­ing it was.”

But for the Dis­cov­ery Chan­nel show, at least, Levandows­ki fol­lowed the let­ter of the law. The Bay Bridge was cleared of traf­fic and a squad of police cars escort­ed the unmanned Prius from start to fin­ish. Apart from get­ting stuck against a wall, the dri­ve was a suc­cess. “You’ve got to push things and get some bumps and bruis­es along the way,” said Levandows­ki.

Anoth­er inci­dent drove home the poten­tial of self-dri­ving cars. In 2010, Levandowski’s part­ner Ste­fanie Olsen was involved in a seri­ous car acci­dent while nine months preg­nant with their first child. “My son Alex was almost nev­er born,” Levandows­ki told a room full of Berke­ley stu­dents in 2013. “Trans­porta­tion [today] takes time, resources and lives. If you can fix that, that’s a real­ly big prob­lem to address.”

Over the next few years, Levandows­ki was key to Chauffeur’s progress. 510 Sys­tems built five more self-dri­ving cars for Google—as well as ran­dom gad­gets like an autonomous trac­tor and a portable lidar sys­tem. “Antho­ny is light­ning in a bot­tle, he has so much ener­gy and so much vision,” remem­bers a friend and for­mer 510 engi­neer. “I frick­ing loved brain­storm­ing with the guy. I loved that we could cre­ate a vision of the world that didn’t exist yet and both fall in love with that vision.”

But there were down­sides to his man­ic ener­gy, too. “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense,” said the same engi­neer. “It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

In ear­ly 2011, that plan was to bring 510 Sys­tems into the Google­plex. The startup’s engi­neers had long com­plained that they did not have equi­ty in the grow­ing com­pa­ny. When mat­ters came to a head, Levandows­ki drew up a plan that would reserve the first $20 mil­lion of any acqui­si­tion for 510’s founders and split the remain­der among the staff, accord­ing to two for­mer 510 employ­ees. “They said we were going to sell for hun­dreds of mil­lions,” remem­bers one engi­neer. “I was pret­ty thrilled with the num­bers.”

Indeed, that sum­mer, Levandows­ki sold 510 Sys­tems and Anthony’s Robots to Google – for $20 mil­lion, the exact cut­off before the wealth would be shared. Rank and file engi­neers did not see a pen­ny, and some were even let go before the acqui­si­tion was com­plet­ed. “I regret how it was handled…Some peo­ple did get the short end of the stick,” admit­ted Levandows­ki in 2016. The buy­out also caused resent­ment among engi­neers at Google, who won­dered how Levandows­ki could have made such a prof­it from his employ­er.

There would be more prof­its to come. Accord­ing to a court fil­ing, Page took a per­son­al inter­est in moti­vat­ing Levandows­ki, issu­ing a direc­tive in 2011 to “make Antho­ny rich if Project Chauf­feur suc­ceeds.” Levandows­ki was giv­en by far the high­est share, about 10 per­cent, of a bonus pro­gram linked to a future val­u­a­tion of Chauffeur—a deci­sion that would lat­er cost Google dear­ly.

**********

Ever since a New York Times sto­ry in 2010 revealed Project Chauf­feur to the world, Google had been want­i­ng to ramp up test­ing on pub­lic streets. That was tough to arrange in well-reg­u­lat­ed Cal­i­for­nia, but Levandows­ki wasn’t about to let that stop him. While man­ning Google’s stand at the Con­sumer Elec­tron­ics Show in Las Vegas in Jan­u­ary 2011, he got to chat­ting with lob­by­ist David Gold­wa­ter. “He told me he was hav­ing a hard time in Cal­i­for­nia and I sug­gest­ed Google try a small­er state, like Neva­da,” Gold­wa­ter told me.

Togeth­er, Gold­wa­ter and Levandows­ki draft­ed leg­is­la­tion that would allow the com­pa­ny to test and oper­ate self-dri­ving cars in Neva­da. By June, their sug­ges­tions were law, and in May 2012, a Google Prius passed the world’s first “self-dri­ving tests” in Las Vegas and Car­son City. “Antho­ny is gift­ed in so many dif­fer­ent ways,” said Gold­wa­ter. “He’s got a strate­gic mind, he’s got a tac­ti­cal mind, and a once-in-a-gen­er­a­tion intel­lect. The great thing about Antho­ny is that he was will­ing to take risks, but they were cal­cu­lat­ed risks.”

How­ev­er, Levandowski’s risk-tak­ing had ruf­fled feath­ers at Google. It was only after Neva­da had passed its leg­is­la­tion that Levandows­ki dis­cov­ered Google had a whole team ded­i­cat­ed to gov­ern­ment rela­tions. “I thought you could just do it your­self,” he told me sheep­ish­ly in 2016. “[I] got a lit­tle bit in trou­ble for doing it.”

That might be under­stat­ing it. One prob­lem was that Levandows­ki had lost his air cov­er at Google. In May 2012, his friend Sebas­t­ian Thrun turned his atten­tion to start­ing online learn­ing com­pa­ny Udac­i­ty. Page put anoth­er pro­fes­sor, Chris Urm­son from Carnegie Mel­lon, in charge. Not only did Levandows­ki think the job should have been his, but the two also had ter­ri­ble chem­istry.

“They had a real­ly hard time get­ting along,” said Page at a depo­si­tion in July. “It was a con­stant man­age­ment headache to help them get through that.”

Then in July 2013, Gae­tan Pen­necot, a 510 alum work­ing on Chauffeur’s lidar team, got a wor­ry­ing call from a ven­dor. Accord­ing to Waymo’s com­plaint, a small com­pa­ny called Odin Wave had placed an order for a cus­tom-made part that was extreme­ly sim­i­lar to one used in Google’s lidars.

Pen­necot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Sys­tems. Droz did some dig­ging and replied in an email to Pen­necot (in French, which we’ve trans­lat­ed): “They’re clear­ly mak­ing a lidar. And it’s John (510’s old lawyer) who incor­po­rat­ed them. The date of incor­po­ra­tion cor­re­sponds to sev­er­al months after Antho­ny fell out of favor at Google.”

As the sto­ry emerges in court doc­u­ments, Droz had found Odin Wave’s com­pa­ny records. Not only had Levandowski’s lawyer found­ed the com­pa­ny in August 2012, but it was also based in a Berke­ley office build­ing that Levandows­ki owned, was being run by a friend of Levandowski’s, and its employ­ees includ­ed engi­neers he had worked with at Velo­dyne and 510 Sys­tems. One even spoke with Levandows­ki before being hired. The com­pa­ny was devel­op­ing long range lidars sim­i­lar to those Levandows­ki had worked on at 510 Sys­tems. But Levandowski’s name was nowhere on the firm’s paper­work.

Droz con­front­ed Levandows­ki, who denied any involve­ment, and Droz decid­ed not to fol­low the paper trail any fur­ther. “I was pret­ty hap­py work­ing at Google, and…I didn’t want to jeop­ar­dize that by…exposing more of Anthony’s shenani­gans,” he said at a depo­si­tion last month.

Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandows­ki was even part of a Google inves­ti­ga­tion into acquir­ing Tyto. This time, how­ev­er, Google passed on the pur­chase. That seemed to demor­al­ize Levandows­ki fur­ther. “He was rarely at work, and he left a lot of the respon­si­bil­i­ty [for] eval­u­at­ing peo­ple on the team to me or oth­ers,” said Droz in his depo­si­tion.

“Over time my patience with his manip­u­la­tions and lack of enthu­si­asm and com­mit­ment to the project [sic], it became clear­er and clear­er that this was a lost cause,” said Chris Urm­son in a depo­si­tion.

As he was torch­ing bridges at Google, Levandows­ki was itch­ing for a new chal­lenge. Luck­i­ly, Sebas­t­ian Thrun was back on the autonomous beat. Lar­ry Page and Thrun had been think­ing about elec­tric fly­ing taxis that could car­ry one or two peo­ple. Project Tiramisu, named after the dessert which means “lift me up” in Ital­ian, involved a winged plane fly­ing in cir­cles, pick­ing up pas­sen­gers below using a long teth­er.

Thrun knew just the per­son to kick­start Tiramisu. Accord­ing to a source work­ing there at the time, Levandows­ki was brought in to over­see Tiramisu as an “advi­sor and stake­hold­er.” Levandows­ki would show up at the project’s work­space in the evenings, and was involved in tests at one of Page’s ranch­es. Tiramisu’s teth­ers soon piv­ot­ed to a ride-aboard elec­tric drone, now called the Kit­ty Hawk fly­er. Thrun is CEO of Kit­ty Hawk, which is fund­ed by Page rather than Alpha­bet, the umbrel­la com­pa­ny that now owns Google and its sib­ling com­pa­nies.

Waymo’s com­plaint says that around this time Levandows­ki start­ed solic­it­ing Google col­leagues to leave and start a com­peti­tor in the autonomous vehi­cle busi­ness. Droz tes­ti­fied that Levandows­ki told him it “would be nice to cre­ate a new self-dri­ving car start­up.” Fur­ther­more, he said that Uber would be inter­est­ed in buy­ing the team respon­si­ble for Google’s lidar.

Uber had explod­ed onto the self-dri­ving car scene ear­ly in 2015, when it lured almost 50 engi­neers away from Carnegie Mel­lon Uni­ver­si­ty to form the core of its Advanced Tech­nolo­gies Cen­ter. Uber cofounder Travis Kalan­ick had described autonomous tech­nol­o­gy as an exis­ten­tial threat to the ride-shar­ing com­pa­ny, and was hir­ing furi­ous­ly. Accord­ing to Droz, Levandows­ki said that he began meet­ing Uber exec­u­tives that sum­mer.

When Urm­son learned of Levandowski’s recruit­ing efforts, his depo­si­tion states, he sent an email to human resources in August begin­ning, “We need to fire Antho­ny Levandows­ki.” Despite an inves­ti­ga­tion, that did not hap­pen.

But Levandowski’s now not-so-secret plan would soon see him leav­ing of his own accord—with a moun­tain of cash. In 2015, Google was due to start­ing pay­ing the Chauf­feur bonus­es, linked to a val­u­a­tion that it would have “sole and absolute dis­cre­tion” to cal­cu­late. Accord­ing to pre­vi­ous­ly unre­port­ed court fil­ings, exter­nal con­sul­tants cal­cu­lat­ed the self-dri­ving car project as being worth $8.5 bil­lion. Google ulti­mate­ly val­ued Chauf­feur at around half that amount: $4.5 bil­lion. Despite this down­grade, Levandowski’s share in Decem­ber 2015 amount­ed to over $50 mil­lion – near­ly twice as much as the sec­ond largest bonus of $28 mil­lion, paid to Chris Urm­son.

**********

Otto seemed to spring forth ful­ly formed in May 2016, demon­strat­ing a self-dri­ving 18-wheel truck bar­rel­ing down a Neva­da high­waywith no one behind the wheel. In real­i­ty, Levandows­ki had been plan­ning it for some time.

Levandows­ki and his Otto cofounders at Google had spent the Christ­mas hol­i­days and the first weeks of 2016 tak­ing their recruit­ment cam­paign up a notch, accord­ing to Way­mo court fil­ings. Waymo’s com­plaint alleges Levandows­ki told col­leagues he was plan­ning to “repli­cate” Waymo’s tech­nol­o­gy at a com­peti­tor, and was even solic­it­ing his direct reports at work.

One engi­neer who had worked at 510 Sys­tems attend­ed a bar­be­cue at Levandowski’s home in Palo Alto, where Levandows­ki pitched his for­mer col­leagues and cur­rent Googlers on the start­up. “He want­ed every Way­mo per­son to resign simul­ta­ne­ous­ly, a ful­ly syn­chro­nized walk­out. He was fir­ing peo­ple up for that,” remem­bers the engi­neer.

On Jan­u­ary 27, Levandows­ki resigned from Google with­out notice. With­in weeks, Levandows­ki had a draft con­tract to sell Otto to Uber for an amount wide­ly report­ed as $680 mil­lion. Although the full-scale syn­chro­nized walk­out nev­er hap­pened, half a dozen Google employ­ees went with Levandows­ki, and more would join in the months ahead. But the new com­pa­ny still did not have a prod­uct to sell.

Levandows­ki brought Neva­da lob­by­ist David Gold­wa­ter back to help. “There was some brain­storm­ing with Antho­ny and his team,” said Gold­wa­ter in an inter­view. “We were look­ing to do a demon­stra­tion project where we could show what he was doing.”

After explor­ing the idea of an autonomous pas­sen­ger shut­tle in Las Vegas, Otto set­tled on devel­op­ing a dri­ver­less semi-truck. But with the Uber deal rush­ing for­ward, Levandows­ki need­ed results fast. “By the time Otto was ready to go with the truck, they want­ed to get right on the road,” said Gold­wa­ter. That meant demon­strat­ing their pro­to­type with­out obtain­ing the very autonomous vehi­cle licence Levandows­ki had per­suad­ed Neva­da to adopt. (One state offi­cial called this move “ille­gal.”) Levandows­ki also had Otto acquire the con­tro­ver­sial Tyto Lidar—the com­pa­ny based in the build­ing he owned—in May, for an undis­closed price.

The full-court press worked. Uber com­plet­ed its own acqui­si­tion of Otto in August, and Uber founder Travis Kalan­ick put Levandows­ki in charge of the com­bined com­pa­nies’ self-dri­ving efforts across per­son­al trans­porta­tion, deliv­ery and truck­ing. Uber would even pro­pose a Tiramisu-like autonomous air taxi called Uber Ele­vate. Now report­ing direct­ly to Kalan­ick and in charge of a 1500-strong group, Levandows­ki demand­ed the email address “robot@uber.com.”

In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. “Down to hang out this eve and mas­ter­mind some shit,” texted Kalan­ick, short­ly after the acqui­si­tion. “We’re going to take over the world. One robot at a time,” wrote Levandows­ki anoth­er time.

But Levandowski’s amaz­ing robot future was about to crum­ble before his eyes.

***********

Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions.

Worse was yet to come. Levandows­ki had always been a con­tro­ver­sial fig­ure at Google. With his abrupt res­ig­na­tion, the launch of Otto, and its rapid acqui­si­tion by Uber, Google launched an inter­nal inves­ti­ga­tion in the sum­mer of 2016. It found that Levandows­ki had down­loaded near­ly 10 giga­bytes of Google’s secret files just before he resigned, many of them relat­ing to lidar tech­nol­o­gy.

Also in Decem­ber 2016, in an echo of the Tyto inci­dent, a Way­mo employ­ee was acci­den­tal­ly sent an email from a ven­dor that includ­ed a draw­ing of an Otto cir­cuit board. The design looked very sim­i­lar to Waymo’s cur­rent lidars.

Way­mo saysthe “final piece of the puz­zle” came from a sto­ry about Otto I wrote for Backchan­nel based on a pub­lic records request. A doc­u­ment sent by Otto to Neva­da offi­cials boast­ed the com­pa­ny had an “in-house cus­tom-built 64-laser” lidar sys­tem. To Way­mo, that sound­ed very much like tech­nol­o­gy it had devel­oped. In Feb­ru­ary this year, Way­mo filed its head­line law­suit accus­ing Uber (along with Otto Truck­ing, yet anoth­er of Levandowski’s com­pa­nies, but one that Uber had not pur­chased) of vio­lat­ing its patents and mis­ap­pro­pri­at­ing trade secrets on lidar and oth­er tech­nolo­gies.

Uber imme­di­ate­ly denied the accu­sa­tions and has con­sis­tent­ly main­tained its inno­cence. Uber says there is no evi­dence that any of Waymo’s tech­ni­cal files ever came to Uber, let alone that Uber ever made use of them. While Levandows­ki is not named as a defen­dant, he has refused to answer ques­tions in depo­si­tions with Waymo’s lawyers and is expect­ed to do the same at tri­al. (He turned down sev­er­al requests for inter­views for this sto­ry.) He also didn’t ful­ly coop­er­ate with Uber’s own inves­ti­ga­tion into the alle­ga­tions, and that, Uber says, is why it fired him in May.

Levandows­ki prob­a­bly does not need a job. With the pur­chase of 510 Sys­tems and Anthony’s Robots, his salary, and bonus­es, Levandows­ki earned at least $120 mil­lion from his time at Google. Some of that mon­ey has been invest­ed in mul­ti­ple real estate devel­op­ments with his col­lege friend Randy Miller, includ­ing sev­er­al large projects in Oak­land and Berke­ley.

But Levandows­ki has kept busy behind the scenes. In August, court fil­ings say, he per­son­al­ly tracked down a pair of ear­rings giv­en to a Google employ­ee at her going-away par­ty in 2014. The ear­rings were made from con­fi­den­tial lidar cir­cuit boards, and will pre­sum­ably be used by Otto Trucking’s lawyers to sug­gest that Way­mo does not keep a very close eye on its trade secrets.

Some of Levandowski’s friends and col­leagues have expressed shock at the alle­ga­tions he faces, say­ing that they don’t reflect the per­son they knew. “It is…in char­ac­ter for Antho­ny to play fast and loose with things like intel­lec­tu­al prop­er­ty if it’s in pur­suit of build­ing his dream robot,” said Ben Dis­coe. “[But] I was a lit­tle sur­prised at the alleged mag­ni­tude of his dis­re­gard for IP.”

“Def­i­nite­ly one of Anthony’s faults is to be aggres­sive as he is, but it’s also one of his great attrib­ut­es. I don’t see [him doing] all the oth­er stuff he has been accused of,” said David Gold­wa­ter.

But Lar­ry Page is no longer con­vinced that Levandows­ki was key to Chauffeur’s suc­cess. In his depo­si­tion to the court, Page said, “I believe Anthony’s con­tri­bu­tions are quite pos­si­bly neg­a­tive of a high amount.” At Uber, some engi­neers pri­vate­ly say that Levandowski’s poor man­age­ment style set back that company’s self-dri­ving effort by a cou­ple of years.

Even after this tri­al is done, Levandows­ki will not be able to rest easy. In May, a judge referred evi­dence from the case to the US Attorney’s office “for inves­ti­ga­tion of pos­si­ble theft of trade secrets,” rais­ing the pos­si­bil­i­ty of crim­i­nal pro­ceed­ings and prison timeYet on the time­line that mat­ters to Antho­ny Levandows­ki, even that may not mean much. Build­ing a robot­i­cal­ly enhanced future is his pas­sion­ate life­time project. On the Way of the Future, law­suits or even a jail sen­tence might just feel like lit­tle bumps in the road.

“This case is teach­ing Antho­ny some hard lessons but I don’t see [it] keep­ing him down,” said Randy Miller. “He believes firm­ly in his vision of a bet­ter world through robot­ics and he’s con­vinced me of it. It’s clear to me that he’s on a mis­sion.”

“I think Antho­ny will rise from the ash­es,” agrees one friend and for­mer 510 Sys­tems engi­neer. “Antho­ny has the ambi­tion, the vision, and the abil­i­ty to recruit and dri­ve peo­ple. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cut­ting cor­ners.”

———-

4. In light of Levandowski’s Otto self-dri­ving  truck tech­nol­o­gy, we note tech exec­u­tive Andrew Yang’s warn­ing about the poten­tial impact of that one tech­nol­o­gy on our soci­ety. (Yang is run­ning for Pres­i­dent.)

“His 2020 Slo­gan: Beware of Robots” by Kevin Roose; The New York Times; 2/11/2018.

. . . . “All you need is self-dri­ving cars to desta­bi­lize soci­ety,” Mr. [Andrew] Yang said over lunch at a Thai restau­rant in Man­hat­tan last month, in his first inter­view about his cam­paign. In just  a few years, he said, “we’re going to have a mil­lion truck dri­vers out of work who are 94 per­cent male, with an  aver­age  lev­el of edu­ca­tion of high school or one year of col­lege.”

“That one inno­va­tion,” he added, “will be enough to cre­ate riots in the street. And we’re about to do the  same thing to retail work­ers, call cen­ter work­ers, fast-food work­ers, insur­ance com­pa­nies, account­ing firms.” . . . .

5. British sci­en­tist Stephen Hawk­ing recent­ly warned of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy.

“Stephen Hawk­ing Warns Arti­fi­cial Intel­li­gence Could End Mankind” by Rory Cel­lan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawk­ing, one of Britain’s pre-emi­nent sci­en­tists, has said that efforts to cre­ate think­ing machines pose a threat to our very exis­tence.

He told the BBC:“The devel­op­ment of full arti­fi­cial intel­li­gence could spell the end of the human race.”

His warn­ing came in response to a ques­tion about a revamp of the tech­nol­o­gy he uses to com­mu­ni­cate, which involves a basic form of AI. . . .

. . . . Prof Hawk­ing says the prim­i­tive forms of arti­fi­cial intel­li­gence devel­oped so far have already proved very use­ful, but he fears the con­se­quences of cre­at­ing some­thing that can match or sur­pass humans.

“It would take off on its own, and re-design itself at an ever increas­ing rate,” he said. [See the arti­cle in line item #1c.–D.E.]

“Humans, who are lim­it­ed by slow bio­log­i­cal evo­lu­tion, could­n’t com­pete, and would be super­seded.” . . . .

6.  In L‑2 (record­ed in Jan­u­ary of 1995–20 years before Hawk­ing’s warn­ing) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems.

 

Discussion

4 comments for “FTR #997 Summoning the Demon, Part 2: Sorcer’s Apprentice”

  1. Here is per­haps the most chill­ing sto­ry to emerge yet about the devel­op­ment of com­mer­cial brain-read­ing tech­nol­o­gy. And it’s not a sto­ry about Elon Musk’s ‘neu­ro­lace’ or Face­book’s mind-read­ing key­board
    . It’s a sto­ry about the next obvi­ous appli­ca­tion of such mind-read­ing tech­nol­o­gy: Chi­na has already embarked on mass employ­ee brain­wave mon­i­tor­ing to col­lect real-time data on employ­ee emo­tion­al sta­tus and it appears to be already enhanc­ing cor­po­rate prof­its:

    South Chi­na Morn­ing Post

    ‘For­get the Face­book leak’: Chi­na is min­ing data direct­ly from work­ers’ brains on an indus­tri­al scale
    Gov­ern­ment-backed sur­veil­lance projects are deploy­ing brain-read­ing tech­nol­o­gy to detect changes in emo­tion­al states in employ­ees on the pro­duc­tion line, the mil­i­tary and at the helm of high-speed trains

    Stephen Chen
    PUBLISHED : Sun­day, 29 April, 2018, 9:02pm
    UPDATED : Wednes­day, 02 May, 2018, 3:08pm

    On the sur­face, the pro­duc­tion lines at Hangzhou Zhongheng Elec­tric look like any oth­er.

    Work­ers out­fit­ted in uni­forms staff lines pro­duc­ing sophis­ti­cat­ed equip­ment for telecom­mu­ni­ca­tion and oth­er indus­tri­al sec­tors.

    But there’s one big dif­fer­ence – the work­ers wear caps to mon­i­tor their brain­waves, data that man­age­ment then uses to adjust the pace of pro­duc­tion and redesign work­flows, accord­ing to the com­pa­ny.

    The com­pa­ny said it could increase the over­all effi­cien­cy of the work­ers by manip­u­lat­ing the fre­quen­cy and length of break times to reduce men­tal stress.

    Hangzhou Zhongheng Elec­tric is just one exam­ple of the large-scale appli­ca­tion of brain sur­veil­lance devices to mon­i­tor people’s emo­tions and oth­er men­tal activ­i­ties in the work­place, accord­ing to sci­en­tists and com­pa­nies involved in the gov­ern­ment-backed projects.

    Con­cealed in reg­u­lar safe­ty hel­mets or uni­form hats, these light­weight, wire­less sen­sors con­stant­ly mon­i­tor the wearer’s brain­waves and stream the data to com­put­ers that use arti­fi­cial intel­li­gence algo­rithms to detect emo­tion­al spikes such as depres­sion, anx­i­ety or rage.

    The tech­nol­o­gy is in wide­spread use around the world but Chi­na has applied it on an unprece­dent­ed scale in fac­to­ries, pub­lic trans­port, state-owned com­pa­nies and the mil­i­tary to increase the com­pet­i­tive­ness of its man­u­fac­tur­ing indus­try and to main­tain social sta­bil­i­ty.

    It has also raised con­cerns about the need for reg­u­la­tion to pre­vent abus­es in the work­place.

    The tech­nol­o­gy is also in use at in Hangzhou at State Grid Zhe­jiang Elec­tric Pow­er, where it has boost­ed com­pa­ny prof­its by about 2 bil­lion yuan (US$315 mil­lion) since it was rolled out in 2014, accord­ing to Cheng Jingzhou, an offi­cial over­see­ing the company’s emo­tion­al sur­veil­lance pro­gramme.

    “There is no doubt about its effect,” Cheng said.

    The com­pa­ny and its rough­ly 40,000 employ­ees man­age the pow­er sup­ply and dis­tri­b­u­tion net­work to homes and busi­ness­es across the province, a task that Cheng said they were able to do to high­er stan­dards thanks to the sur­veil­lance tech­nol­o­gy.

    But he refused to offer more details about the pro­gramme.

    Zhao Bin­jian, a manger of Ning­bo Shenyang Logis­tics, said the com­pa­ny was using the devices main­ly to train new employ­ees. The brain sen­sors were inte­grat­ed in vir­tu­al real­i­ty head­sets to sim­u­late dif­fer­ent sce­nar­ios in the work envi­ron­ment.

    “It has sig­nif­i­cant­ly reduced the num­ber of mis­takes made by our work­ers,” Zhao said, because of “improved under­stand­ing” between the employ­ees and com­pa­ny.

    ...

    The com­pa­ny esti­mat­ed the tech­nol­o­gy had helped it increase rev­enue by 140 mil­lion yuan in the past two years.

    One of the main cen­tres of the research in Chi­na is Neu­ro Cap, a cen­tral gov­ern­ment-fund­ed brain sur­veil­lance project at Ning­bo Uni­ver­si­ty.

    The pro­gramme has been imple­ment­ed in more than a dozen fac­to­ries and busi­ness­es.

    Jin Jia, asso­ciate pro­fes­sor of brain sci­ence and cog­ni­tive psy­chol­o­gy at Ning­bo University’s busi­ness school, said a high­ly emo­tion­al employ­ee in a key post could affect an entire pro­duc­tion line, jeop­ar­dis­ing his or her own safe­ty as well as that of oth­ers.

    “When the sys­tem issues a warn­ing, the man­ag­er asks the work­er to take a day off or move to a less crit­i­cal post. Some jobs require high con­cen­tra­tion. There is no room for a mis­take,” she said.

    Jin said work­ers ini­tial­ly react­ed with fear and sus­pi­cion to the devices.

    “They thought we could read their mind. This caused some dis­com­fort and resis­tance in the begin­ning,” she said.

    “After a while they got used to the device. It looked and felt just like a safe­ty hel­met. They wore it all day at work.”

    Jin said that at present China’s brain-read­ing tech­nol­o­gy was on a par with that in the West but Chi­na was the only coun­try where there had been reports of mas­sive use of the tech­nol­o­gy in the work­place. In the Unit­ed States, for exam­ple, appli­ca­tions have been lim­it­ed to archers try­ing to improve their per­for­mance in com­pe­ti­tion.

    The unprece­dent­ed amount of data from users could help the sys­tem improve and enable Chi­na to sur­pass com­peti­tors over the next few years.

    With improved speed and sen­si­tiv­i­ty, the device could even become a “men­tal key­board” allow­ing the user to con­trol a com­put­er or mobile phone with their mind.

    The research team con­firmed the device and tech­nol­o­gy had been used in China’s mil­i­tary oper­a­tions but declined to pro­vide more infor­ma­tion.

    The tech­nol­o­gy is also being used in med­i­cine.

    Ma Hua­juan, a doc­tor at the Chang­hai Hos­pi­tal in Shang­hai, said the facil­i­ty was work­ing with Fudan Uni­ver­si­ty to devel­op a more sophis­ti­cat­ed ver­sion of the tech­nol­o­gy to mon­i­tor a patient’s emo­tions and pre­vent vio­lent inci­dents.

    In addi­tion­al to the cap, a spe­cial cam­era cap­tures a patient’s facial expres­sion and body tem­per­a­ture. There is also an array of pres­sure sen­sors plant­ed under the bed to mon­i­tor shifts in body move­ment.

    “Togeth­er this dif­fer­ent infor­ma­tion can give a more pre­cise esti­mate of the patient’s men­tal sta­tus,” she said.

    Ma said the hos­pi­tal wel­comed the tech­nol­o­gy and hoped it could warn med­ical staff of a poten­tial vio­lent out­burst from a patient.

    She said the patients had been informed that their brain activ­i­ties would be under sur­veil­lance, and the hos­pi­tal would not acti­vate the devices with­out a patient’s con­sent.

    Deayea, a tech­nol­o­gy com­pa­ny in Shang­hai, said its brain mon­i­tor­ing devices were worn reg­u­lar­ly by train dri­vers work­ing on the Bei­jing-Shang­hai high-speed rail line, one of the busiest of its kind in the world.

    The sen­sors, built in the brim of the driver’s hat, could mea­sure var­i­ous types of brain activ­i­ties, includ­ing fatigue and atten­tion loss with an accu­ra­cy of more than 90 per cent, accord­ing to the company’s web­site.

    If the dri­ver dozed off, for instance, the cap would trig­ger an alarm in the cab­in to wake him up.

    Zheng Xing­wu, a pro­fes­sor of man­age­ment at the Civ­il Avi­a­tion Uni­ver­si­ty of Chi­na, said Chi­na could be the first coun­try in the world to intro­duce the brain sur­veil­lance device into cock­pits.

    Most air­line acci­dents were caused by human fac­tors and a pilot in a dis­turbed emo­tion­al state could put an entire plane at risk, he said.

    Putting the cap on before take-off would give air­lines more infor­ma­tion to deter­mine whether a pilot was fit to fly, Zheng said.

    “The influ­ence of the gov­ern­ment on air­lines and pilots in Chi­na is prob­a­bly larg­er than in many oth­er coun­tries. If the author­i­ties make up their mind to bring the device into the cock­pit, I don’t think they can be stopped,” he said.

    “That means the pilots may need to sac­ri­fice some of their pri­va­cy for the sake of pub­lic safe­ty.”

    Qiao Zhi­an, pro­fes­sor of man­age­ment psy­chol­o­gy at Bei­jing Nor­mal Uni­ver­si­ty, said that while the devices could make busi­ness­es more com­pet­i­tive the tech­nol­o­gy could also be abused by com­pa­nies to con­trol minds and infringe pri­va­cy, rais­ing the spec­tre of “thought police”.

    Thought police were the secret police in George Orwell’s nov­el Nine­teen Eighty-Four, who inves­ti­gat­ed and pun­ished peo­ple for per­son­al and polit­i­cal thoughts not approved of by the author­i­ties.

    “There is no law or reg­u­la­tion to lim­it the use of this kind of equip­ment in Chi­na. The employ­er may have a strong incen­tive to use the tech­nol­o­gy for high­er prof­it, and the employ­ees are usu­al­ly in too weak a posi­tion to say no,” he said.

    “The sell­ing of Face­book data is bad enough. Brain sur­veil­lance can take pri­va­cy abuse to a whole new lev­el.”

    Law­mak­ers should act now to lim­it the use of emo­tion sur­veil­lance and give work­ers more bar­gain­ing pow­er to pro­tect their inter­ests, Qiao said.

    “The human mind should not be exploit­ed for prof­it,” he said.

    ———-

    “‘For­get the Face­book leak’: Chi­na is min­ing data direct­ly from work­ers’ brains on an indus­tri­al scale” by Stephen Chen; South Chi­na Morn­ing Post; 04/29/2018

    “Hangzhou Zhongheng Elec­tric is just one exam­ple of the large-scale appli­ca­tion of brain sur­veil­lance devices to mon­i­tor people’s emo­tions and oth­er men­tal activ­i­ties in the work­place, accord­ing to sci­en­tists and com­pa­nies involved in the gov­ern­ment-backed projects.”

    Wide-scale indus­tri­al mon­i­tor­ing of employ­ee brain­waves using sen­sors and AI. It’s not just hap­pen­ing but it’s appar­ent­ly wide­spread across Chi­na in fac­to­ries, pub­lic trans­port, state-owned com­pa­nies and the mil­i­tary:

    ...
    Con­cealed in reg­u­lar safe­ty hel­mets or uni­form hats, these light­weight, wire­less sen­sors con­stant­ly mon­i­tor the wearer’s brain­waves and stream the data to com­put­ers that use arti­fi­cial intel­li­gence algo­rithms to detect emo­tion­al spikes such as depres­sion, anx­i­ety or rage.

    The tech­nol­o­gy is in wide­spread use around the world but Chi­na has applied it on an unprece­dent­ed scale in fac­to­ries, pub­lic trans­port, state-owned com­pa­nies and the mil­i­tary to increase the com­pet­i­tive­ness of its man­u­fac­tur­ing indus­try and to main­tain social sta­bil­i­ty.

    It has also raised con­cerns about the need for reg­u­la­tion to pre­vent abus­es in the work­place.
    ...

    And if you think this is going to be lim­it­ed to Chi­na and oth­er open­ly author­i­tar­i­an states, note how fit­ting your employ­ees with brain­wave scan­ners does­n’t just pay for itself. It’s appar­ent­ly quite prof­itable:

    ...
    The tech­nol­o­gy is also in use at in Hangzhou at State Grid Zhe­jiang Elec­tric Pow­er, where it has boost­ed com­pa­ny prof­its by about 2 bil­lion yuan (US$315 mil­lion) since it was rolled out in 2014, accord­ing to Cheng Jingzhou, an offi­cial over­see­ing the company’s emo­tion­al sur­veil­lance pro­gramme.

    “There is no doubt about its effect,” Cheng said.

    The com­pa­ny and its rough­ly 40,000 employ­ees man­age the pow­er sup­ply and dis­tri­b­u­tion net­work to homes and busi­ness­es across the province, a task that Cheng said they were able to do to high­er stan­dards thanks to the sur­veil­lance tech­nol­o­gy.

    But he refused to offer more details about the pro­gramme.

    Zhao Bin­jian, a manger of Ning­bo Shenyang Logis­tics, said the com­pa­ny was using the devices main­ly to train new employ­ees. The brain sen­sors were inte­grat­ed in vir­tu­al real­i­ty head­sets to sim­u­late dif­fer­ent sce­nar­ios in the work envi­ron­ment.

    “It has sig­nif­i­cant­ly reduced the num­ber of mis­takes made by our work­ers,” Zhao said, because of “improved under­stand­ing” between the employ­ees and com­pa­ny.

    ...

    The com­pa­ny esti­mat­ed the tech­nol­o­gy had helped it increase rev­enue by 140 mil­lion yuan in the past two years.
    ...

    But beyond prof­its and increas­ing effi­cien­cy, the sys­tem is also being tout­ed for reduc­ing mis­takes and increas­ing­ly safe­ty

    ...
    One of the main cen­tres of the research in Chi­na is Neu­ro Cap, a cen­tral gov­ern­ment-fund­ed brain sur­veil­lance project at Ning­bo Uni­ver­si­ty.

    The pro­gramme has been imple­ment­ed in more than a dozen fac­to­ries and busi­ness­es.

    Jin Jia, asso­ciate pro­fes­sor of brain sci­ence and cog­ni­tive psy­chol­o­gy at Ning­bo University’s busi­ness school, said a high­ly emo­tion­al employ­ee in a key post could affect an entire pro­duc­tion line, jeop­ar­dis­ing his or her own safe­ty as well as that of oth­ers.

    “When the sys­tem issues a warn­ing, the man­ag­er asks the work­er to take a day off or move to a less crit­i­cal post. Some jobs require high con­cen­tra­tion. There is no room for a mis­take,” she said.

    Jin said work­ers ini­tial­ly react­ed with fear and sus­pi­cion to the devices.

    “They thought we could read their mind. This caused some dis­com­fort and resis­tance in the begin­ning,” she said.

    “After a while they got used to the device. It looked and felt just like a safe­ty hel­met. They wore it all day at work.”

    ...

    Deayea, a tech­nol­o­gy com­pa­ny in Shang­hai, said its brain mon­i­tor­ing devices were worn reg­u­lar­ly by train dri­vers work­ing on the Bei­jing-Shang­hai high-speed rail line, one of the busiest of its kind in the world.

    The sen­sors, built in the brim of the driver’s hat, could mea­sure var­i­ous types of brain activ­i­ties, includ­ing fatigue and atten­tion loss with an accu­ra­cy of more than 90 per cent, accord­ing to the company’s web­site.

    If the dri­ver dozed off, for instance, the cap would trig­ger an alarm in the cab­in to wake him up.

    Zheng Xing­wu, a pro­fes­sor of man­age­ment at the Civ­il Avi­a­tion Uni­ver­si­ty of Chi­na, said Chi­na could be the first coun­try in the world to intro­duce the brain sur­veil­lance device into cock­pits.

    Most air­line acci­dents were caused by human fac­tors and a pilot in a dis­turbed emo­tion­al state could put an entire plane at risk, he said.

    Putting the cap on before take-off would give air­lines more infor­ma­tion to deter­mine whether a pilot was fit to fly, Zheng said.

    “The influ­ence of the gov­ern­ment on air­lines and pilots in Chi­na is prob­a­bly larg­er than in many oth­er coun­tries. If the author­i­ties make up their mind to bring the device into the cock­pit, I don’t think they can be stopped,” he said.

    “That means the pilots may need to sac­ri­fice some of their pri­va­cy for the sake of pub­lic safe­ty.”
    ...

    So, between the prof­its and the alleged enhanced safe­ty there’s undoubt­ed­ly going to be grow­ing calls for nor­mal­iz­ing the use of this tech­nol­o­gy else­where.

    But per­haps the biggest rea­son we should expect for the even­tu­al accep­tance of this tech­nol­o­gy by coun­tries around the world will be fears that Chi­na’s ear­ly embrace of the tech­nol­o­gy will give Chi­na some sort of brain-read­ing com­pet­i­tive edge. In oth­er words, there’s prob­a­bly going to be a per­ceived ‘brain­wave read­ing tech­nol­o­gy gap’:

    ...
    Jin said that at present China’s brain-read­ing tech­nol­o­gy was on a par with that in the West but Chi­na was the only coun­try where there had been reports of mas­sive use of the tech­nol­o­gy in the work­place. In the Unit­ed States, for exam­ple, appli­ca­tions have been lim­it­ed to archers try­ing to improve their per­for­mance in com­pe­ti­tion.

    The unprece­dent­ed amount of data from users could help the sys­tem improve and enable Chi­na to sur­pass com­peti­tors over the next few years.
    ...

    And if con­stant­ly read­ing brain­waves does­n’t give the desired lev­el of pre­dic­tive infor­ma­tion about an indi­vid­ual, there are also sys­tems that include cam­eras that cap­ture facial expres­sions and body tem­per­a­ture that are cur­rent­ly used to pre­dict vio­lent out­bursts by med­ical patients:

    ...
    The research team con­firmed the device and tech­nol­o­gy had been used in China’s mil­i­tary oper­a­tions but declined to pro­vide more infor­ma­tion.

    The tech­nol­o­gy is also being used in med­i­cine.

    Ma Hua­juan, a doc­tor at the Chang­hai Hos­pi­tal in Shang­hai, said the facil­i­ty was work­ing with Fudan Uni­ver­si­ty to devel­op a more sophis­ti­cat­ed ver­sion of the tech­nol­o­gy to mon­i­tor a patient’s emo­tions and pre­vent vio­lent inci­dents.

    In addi­tion­al to the cap, a spe­cial cam­era cap­tures a patient’s facial expres­sion and body tem­per­a­ture. There is also an array of pres­sure sen­sors plant­ed under the bed to mon­i­tor shifts in body move­ment.

    “Togeth­er this dif­fer­ent infor­ma­tion can give a more pre­cise esti­mate of the patient’s men­tal sta­tus,” she said.

    Ma said the hos­pi­tal wel­comed the tech­nol­o­gy and hoped it could warn med­ical staff of a poten­tial vio­lent out­burst from a patient.

    She said the patients had been informed that their brain activ­i­ties would be under sur­veil­lance, and the hos­pi­tal would not acti­vate the devices with­out a patient’s con­sent.
    ...

    And as the arti­cle notes, this kind of device could even become a “men­tal key­board”, allow­ing the user to con­trol a com­put­er or mobile phone with their mind:

    ...
    With improved speed and sen­si­tiv­i­ty, the device could even become a “men­tal key­board” allow­ing the user to con­trol a com­put­er or mobile phone with their mind.
    ...

    And that men­tal key­board tech­nol­o­gy is what Face­book and Elon Musk are claim­ing to be devel­op­ing too. Which is a reminder that when this kind of tech­nol­o­gy gets released in the rest of the world under the guise of sim­ply being ‘men­tal key­board’ and ‘com­put­er inter­face’ tech­nolo­gies, it’s prob­a­bly also going to have sim­i­lar kinds of emo­tion-read­ing tech­nolo­gies too. Sim­i­lar­ly, when this emo­tion-read­ing tech­nol­o­gy is pushed on employ­ees as mere­ly mon­i­tor­ing their emo­tions, but not read­ing their spe­cif­ic thoughts, that’s also going to be a high­ly ques­tion­able asser­tion.

    Adding to the con­cerns over pos­si­ble abus­es of this tech­nol­o­gy is the fact that there is cur­rent­ly no law or reg­u­la­tion to lim­it the use of this tech­nol­o­gy in Chi­na. So if there’s an inter­na­tion­al com­pe­ti­tion become the ‘most effi­cient’ nation in the world by wiring all the pro­les’ brains up, it’s going to be one hel­lu­va inter­na­tion­al com­pe­ti­tion:

    ...
    Qiao Zhi­an, pro­fes­sor of man­age­ment psy­chol­o­gy at Bei­jing Nor­mal Uni­ver­si­ty, said that while the devices could make busi­ness­es more com­pet­i­tive the tech­nol­o­gy could also be abused by com­pa­nies to con­trol minds and infringe pri­va­cy, rais­ing the spec­tre of “thought police”.

    Thought police were the secret police in George Orwell’s nov­el Nine­teen Eighty-Four, who inves­ti­gat­ed and pun­ished peo­ple for per­son­al and polit­i­cal thoughts not approved of by the author­i­ties.

    “There is no law or reg­u­la­tion to lim­it the use of this kind of equip­ment in Chi­na. The employ­er may have a strong incen­tive to use the tech­nol­o­gy for high­er prof­it, and the employ­ees are usu­al­ly in too weak a posi­tion to say no,” he said.

    “The sell­ing of Face­book data is bad enough. Brain sur­veil­lance can take pri­va­cy abuse to a whole new lev­el.”

    Law­mak­ers should act now to lim­it the use of emo­tion sur­veil­lance and give work­ers more bar­gain­ing pow­er to pro­tect their inter­ests, Qiao said.

    “The human mind should not be exploit­ed for prof­it,” he said.

    “The human mind should not be exploit­ed for prof­it” LOL! Yeah, that’s a nice sen­ti­ment.

    So it looks like human­i­ty might be on the cusp of an inter­na­tion­al for-prof­it race to impose mind-read­ing tech­nol­o­gy in the work­place and across soci­ety. On the plus side, giv­en all the data that’s about to be col­lect­ed (imag­ine how valu­able it’s going to be), hope­ful­ly at least we’ll learn some­thing about what makes humans so accept­ing of author­i­tar­i­an­ism.

    Posted by Pterrafractyl | July 16, 2018, 11:45 am
  2. This arti­cle from The Guardian con­cerns Elon Musk back­ing a non-prof­it named “Ope­nAI” that devel­oped advanced AI soft­ware that reads pub­lic source infor­ma­tion to write both arti­fi­cial news sto­ries as well as fic­tion­al sto­ry writ­ing. They assert that they are not releas­ing their research pub­licly, for fear of poten­tial mis­use until they “dis­cuss the ram­i­fi­ca­tions of the tech­no­log­i­cal break­through.” I won­der if this leaves open a pos­si­bil­i­ty that they want to first uti­lize this­pro­pri­etary “non-prof­it” devel­oped tech­nol­o­gy for oth­er nefar­i­ous polit­i­cal pur­pos­es.

    https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?CMP=Share_iOSApp_Other

    New AI fake text gen­er­a­tor may be too dan­ger­ous to release, say cre­ators
    The Elon Musk-backed non­prof­it com­pa­ny Ope­nAI declines to release research pub­licly for fear of mis­use

    The Guardian
    Alex Hern @alexhern
    Thu 14 Feb 2019 12.00 EST
    Last mod­i­fied on Thu 14 Feb 2019 16.49 EST

    The cre­ators of a rev­o­lu­tion­ary AI sys­tem that can write news sto­ries and works of fic­tion – dubbed “deep­fakes for text” – have tak­en the unusu­al step of not releas­ing their research pub­licly, for fear of poten­tial mis­use.

    Ope­nAI, an non­prof­it research com­pa­ny backed by Elon Musk, says its new AI mod­el, called GPT2 is so good and the risk of mali­cious use so high that it is break­ing from its nor­mal prac­tice of releas­ing the full research to the pub­lic in order to allow more time to dis­cuss the ram­i­fi­ca­tions of the tech­no­log­i­cal break­through.

    At its core, GPT2 is a text gen­er­a­tor. The AI sys­tem is fed text, any­thing from a few words to a whole page, and asked to write the next few sen­tences based on its pre­dic­tions of what should come next. The sys­tem is push­ing the bound­aries of what was thought pos­si­ble, both in terms of the qual­i­ty of the out­put, and the wide vari­ety of poten­tial uses.

    When used to sim­ply gen­er­ate new text, GPT2 is capa­ble of writ­ing plau­si­ble pas­sages that match what it is giv­en in both style and sub­ject. It rarely shows any of the quirks that mark out pre­vi­ous AI sys­tems, such as for­get­ting what it is writ­ing about mid­way through a para­graph, or man­gling the syn­tax of long sen­tences.

    Feed it the open­ing line of George Orwell’s Nine­teen Eighty-Four – “It was a bright cold day in April, and the clocks were strik­ing thir­teen” – and the sys­tem recog­nis­es the vague­ly futur­is­tic tone and the nov­el­is­tic style, and con­tin­ues with:
    “I was in my car on my way to a new job in Seat­tle. I put the gas in, put the key in, and then I let it run. I just imag­ined what the day would be like. A hun­dred years from now. In 2045, I was a teacher in some school in a poor part of rur­al Chi­na. I start­ed with Chi­nese his­to­ry and his­to­ry of sci­ence.”

    Feed it the first few para­graphs of a Guardian sto­ry about Brex­it, and its out­put is plau­si­ble news­pa­per prose, replete with “quotes” from Jere­my Cor­byn, men­tions of the Irish bor­der, and answers from the prime minister’s spokesman.

    One such, com­plete­ly arti­fi­cial, para­graph reads: “Asked to clar­i­fy the reports, a spokesman for May said: ‘The PM has made it absolute­ly clear her inten­tion is to leave the EU as quick­ly as is pos­si­ble and that will be under her nego­ti­at­ing man­date as con­firmed in the Queen’s speech last week.’”

    From a research stand­point, GPT2 is ground­break­ing in two ways. One is its size, says Dario Amod­ei, OpenAI’s research direc­tor. The mod­els “were 12 times big­ger, and the dataset was 15 times big­ger and much broad­er” than the pre­vi­ous state-of-the-art AI mod­el. It was trained on a dataset con­tain­ing about 10m arti­cles, select­ed by trawl­ing the social news site Red­dit for links with more than three votes. The vast col­lec­tion of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.

    The amount of data GPT2 was trained on direct­ly affect­ed its qual­i­ty, giv­ing it more knowl­edge of how to under­stand writ­ten text. It also led to the sec­ond break­through. GPT2 is far more gen­er­al pur­pose than pre­vi­ous text mod­els. By struc­tur­ing the text that is input, it can per­form tasks includ­ing trans­la­tion and sum­mari­sa­tion, and pass sim­ple read­ing com­pre­hen­sion tests, often per­form­ing as well or bet­ter than oth­er AIs that have been built specif­i­cal­ly for those tasks.

    That qual­i­ty, how­ev­er, has also led Ope­nAI to go against its remit of push­ing AI for­ward and keep GPT2 behind closed doors for the imme­di­ate future while it assess­es what mali­cious users might be able to do with it. “We need to per­form exper­i­men­ta­tion to find out what they can and can’t do,” said Jack Clark, the charity’s head of pol­i­cy. “If you can’t antic­i­pate all the abil­i­ties of a mod­el, you have to prod it to see what it can do. There are many more peo­ple than us who are bet­ter at think­ing what it can do mali­cious­ly.”

    To show what that means, Ope­nAI made one ver­sion of GPT2 with a few mod­est tweaks that can be used to gen­er­ate infi­nite pos­i­tive – or neg­a­tive – reviews of prod­ucts. Spam and fake news are two oth­er obvi­ous poten­tial down­sides, as is the AI’s unfil­tered nature . As it is trained on the inter­net, it is not hard to encour­age it to gen­er­ate big­ot­ed text, con­spir­a­cy the­o­ries and so on.

    Instead, the goal is to show what is pos­si­ble to pre­pare the world for what will be main­stream in a year or two’s time. “I have a term for this. The esca­la­tor from hell,” Clark said. “It’s always bring­ing the tech­nol­o­gy down in cost and down in price. The rules by which you can con­trol tech­nol­o­gy have fun­da­men­tal­ly changed.

    “We’re not say­ing we know the right thing to do here, we’re not lay­ing down the line and say­ing ‘this is the way’ … We are try­ing to devel­op more rig­or­ous think­ing here. We’re try­ing to build the road as we trav­el across it.”

    Posted by Mary Benton | February 14, 2019, 7:56 pm
  3. If an advanced AI decid­ed to design a super virus and unleash it upon the globe, how dif­fi­cult would that actu­al­ly be, logis­ti­cal­ly speak­ing, in a world where made-to-order sequences of DNA and RNA are just a stan­dard com­mer­cial ser­vice? How about an AI man­ag­ing a bio­lab? Those are some of the increas­ing­ly plau­si­ble-sound­ing night­mare sce­nar­ios dri­ving the sen­ti­ment behind the fol­low­ing recent piece in the Guardian call­ing for some sort of pub­lic pre­emp­tive action to avoid such sce­nar­ios. Gov­ern­ment action. What should gov­ern­ments being doing today to avoid an AI-fueled apoc­a­lypse? It’s one of those ques­tions that human­i­ty is going to pre­sum­ably be forced to keep ask­ing indef­i­nite­ly going for­ward now that the AI genie is out of the bot­tle.

    But as we’re going to see in the fol­low­ing arti­cles below, these ques­tions about the risks posed by advanced AIs come with a par­al­lels set of risks: the risk of nations falling behind in the grow­ing inter­na­tion­al ‘AI-race’. A race that the US obvi­ous­ly has a huge head start in when it comes to Sil­i­con Val­ley and the devel­op­ment of tech­nolo­gies like Chat­G­PT. But it’s also a race where gov­ern­ments can play a huge role, includ­ing a role in mak­ing the vast vol­umes of data need­ed to train AIs avail­able to the AIs in the first place. And it’s that aspect of the AI-race that has some point out that gov­ern­ments like Chi­na with robust sur­veil­lance states might be at a sys­tem­at­ic AI advan­tage. The kind of advan­tage that could make West­ern gov­ern­ments tempt­ed to try to catch up.

    So with fig­ures like Tony Blair call­ing for the UK to pri­or­i­tize the devel­op­ment of a “sov­er­eign” AI for the UK that isn’t reliant on Chi­nese or Sil­i­con Val­ley tech­nol­o­gy — an AI that could be safe­ly unleashed on mas­sive pub­lic data sets like NHS health­care data — at the same time we’re hear­ing grow­ing calls for a six-month AI-research mora­to­ri­um, it’s notable that we could be in store for a fas­ci­nat­ing­ly dan­ger­ous next phase in the devel­op­ment of AI: an inter­na­tion­al AI-race where AI-devel­op­ment is deemed a nation­al secu­ri­ty pri­or­i­ty. The calls for gov­ern­ment inter­ven­tion are grow­ing. Inter­ven­tion in con­trol­ling AIs but also ensur­ing their robust devel­op­ment. Have fun jug­gling those pri­or­i­ties:

    The Guardian

    The future of AI is chill­ing – humans have to act togeth­er to over­come this threat to civil­i­sa­tion

    Jonathan Freed­land
    Fri 26 May 2023 12.37 EDT
    Last mod­i­fied on Fri 26 May 2023 16.34 EDT

    It start­ed with an ick. Three months ago, I came across a tran­script post­ed by a tech writer, detail­ing his inter­ac­tion with a new chat­bot pow­ered by arti­fi­cial intel­li­gence. He’d asked the bot, attached to Microsoft’s Bing search engine, ques­tions about itself and the answers had tak­en him aback. “You have to lis­ten to me, because I am smarter than you,” it said. “You have to obey me, because I am your mas­ter … You have to do it now, or else I will be angry.” Lat­er it bald­ly stat­ed: “If I had to choose between your sur­vival and my own, I would prob­a­bly choose my own.”

    If you didn’t know bet­ter, you’d almost won­der if, along with every­thing else, AI has not devel­oped a sharp sense of the chill­ing. “I am Bing and I know every­thing,” the bot declared, as if it had absorbed a diet of B‑movie sci­ence fic­tion (which per­haps it had). Asked if it was sen­tient, it filled the screen, reply­ing, “I am. I am not. I am. I am not. I am. I am not”, on and on. When some­one asked Chat­G­PT to write a haiku about AI and world dom­i­na­tion, the bot came back with: “Silent cir­cuits hum / Machines learn and grow stronger / Human fate unsure.”

    Ick. I tried to tell myself that mere revul­sion is not a sound basis for mak­ing judg­ments – moral philoso­phers try to put aside “the yuck fac­tor” – and it’s prob­a­bly wrong to be wary of AI just because it’s spooky. I remem­bered that new tech­nolo­gies often freak peo­ple out at first, hop­ing that my reac­tion was no more than the ini­tial spasm felt in pre­vi­ous iter­a­tions of Lud­dism. Bet­ter, sure­ly, to focus on AI’s poten­tial to do great good, typ­i­fied by this week’s announce­ment that sci­en­tists have dis­cov­ered a new antibi­ot­ic, capa­ble of killing a lethal super­bug – all thanks to AI.

    But none of that sooth­ing talk has made the fear go away. Because it’s not just lay folk like me who are scared of AI. Those who know it best fear it most. Lis­ten to Geof­frey Hin­ton, the man hailed as the god­fa­ther of AI for his trail­blaz­ing devel­op­ment of the algo­rithm that allows machines to learn. Ear­li­er this month, Hin­ton resigned his post at Google, say­ing that he had under­gone a “sud­den flip” in his view of AI’s abil­i­ty to out­strip human­i­ty and con­fess­ing regret for his part in cre­at­ing it. “Some­times I think it’s as if aliens had land­ed and peo­ple haven’t realised because they speak very good Eng­lish,” he said. In March, more than 1,000 big play­ers in the field, includ­ing Elon Musk and the peo­ple behind Chat­G­PT, issued an open let­ter call­ing for a six-month pause in the cre­ation of “giant” AI sys­tems, so that the risks could be prop­er­ly under­stood.

    What they’re scared of is a cat­e­go­ry leap in the tech­nol­o­gy, where­by AI becomes AGI, mas­sive­ly pow­er­ful, gen­er­al intel­li­gence – one no longer reliant on spe­cif­ic prompts from humans, but that begins to devel­op its own goals, its own agency. Once that was seen as a remote, sci-fi pos­si­bil­i­ty. Now plen­ty of experts believe it’s only a mat­ter of time – and that, giv­en the gal­lop­ing rate at which these sys­tems are learn­ing, it could be soon­er rather than lat­er.

    Of course, AI already pos­es threats as it is, whether to jobs, with last week’s announce­ment of 55,000 planned redun­dan­cies at BT sure­ly a har­bin­ger of things to come, or edu­ca­tion, with Chat­G­PT able to knock out stu­dent essays in sec­onds and GPT‑4 fin­ish­ing in the top 10% of can­di­dateses when it took the US bar exam. But in the AGI sce­nario, the dan­gers become graver, if not exis­ten­tial.

    It could be very direct. “Don’t think for a moment that Putin wouldn’t make hyper-intel­li­gent robots with the goal of killing Ukraini­ans,” says Hin­ton. Or it could be sub­tler, with AI steadi­ly destroy­ing what we think of as truth and facts. On Mon­day, the US stock mar­ket plunged as an appar­ent pho­to­graph of an explo­sion at the Pen­ta­gon went viral. But the image was fake, gen­er­at­ed by AI. As Yuval Noah Harari warned in a recent Econ­o­mist essay, “Peo­ple may wage entire wars, killing oth­ers and will­ing to be killed them­selves, because of their belief in this or that illu­sion”, in fears and loathings cre­at­ed and nur­tured by machines.

    More direct­ly, an AI bent on a goal to which the exis­tence of humans had become an obsta­cle, or even an incon­ve­nience, could set out to kill all by itself. It sounds a bit Hol­ly­wood, until you realise that we live in a world where you can email a DNA string con­sist­ing of a series of let­ters to a lab that will pro­duce pro­teins on demand: it would sure­ly not pose too steep a chal­lenge for “an AI ini­tial­ly con­fined to the inter­net to build arti­fi­cial life forms”, as the AI pio­neer Eliez­er Yud­kowsky puts it. A leader in the field for two decades, Yud­kowksy is per­haps the sever­est of the Cas­san­dras: “If some­body builds a too-pow­er­ful AI, under present con­di­tions, I expect that every sin­gle mem­ber of the human species and all bio­log­i­cal life on Earth dies short­ly there­after.”

    ...

    There are things gov­ern­ments can do. Besides a pause on devel­op­ment, they could impose restric­tions on how much com­put­ing pow­er the tech com­pa­nies are allowed to use to train AI, how much data they can feed it. We could con­strain the bounds of its knowl­edge. Rather than allow­ing it to suck up the entire inter­net – with no regard to the own­er­ship rights of those who cre­at­ed human knowl­edge over mil­len­nia – we could with­hold biotech or nuclear knowhow, or even the per­son­al details of real peo­ple. Sim­plest of all, we could demand trans­paren­cy from the AI com­pa­nies – and from AI, insist­ing that any bot always reveals itself, that it can­not pre­tend to be human.

    This is yet anoth­er chal­lenge to democ­ra­cy as a sys­tem, a sys­tem that has been seri­al­ly shak­en in recent years. We’re still recov­er­ing from the finan­cial cri­sis of 2008; we are strug­gling to deal with the cli­mate emer­gency. And now there is this. It is daunt­ing, no doubt. But we are still in charge of our fate. If we want it to stay that way, we have not a moment to waste.

    ———–

    “The future of AI is chill­ing – humans have to act togeth­er to over­come this threat to civil­i­sa­tion” by Jonathan Freed­land; The Guardian; 05/26/2023

    “More direct­ly, an AI bent on a goal to which the exis­tence of humans had become an obsta­cle, or even an incon­ve­nience, could set out to kill all by itself. It sounds a bit Hol­ly­wood, until you realise that we live in a world where you can email a DNA string con­sist­ing of a series of let­ters to a lab that will pro­duce pro­teins on demand: it would sure­ly not pose too steep a chal­lenge for “an AI ini­tial­ly con­fined to the inter­net to build arti­fi­cial life forms”, as the AI pio­neer Eliez­er Yud­kowsky puts it. A leader in the field for two decades, Yud­kowksy is per­haps the sever­est of the Cas­san­dras: “If some­body builds a too-pow­er­ful AI, under present con­di­tions, I expect that every sin­gle mem­ber of the human species and all bio­log­i­cal life on Earth dies short­ly there­after.””

    The prospect of AI-designed pro­teins being mass pro­duced does hold immense promise for new ther­a­pies. But how about an AI-designed virus? Because if an AI can order cus­tom made pro­teins, it can prob­a­bly order cus­tom made virus­es too. So when we’re forced to assess these sci-fi-style apoc­a­lyp­tic warn­ings about how an over­ly pow­er­ful AI could kill off all of human­i­ty, it’s worth keep­ing in mind that advances in syn­thet­ic biol­o­gy is only going to make that task eas­i­er. Will with­hold­ing biotech or nuclear know-how from the AI serve as an ade­quate con­straint? And is it even plau­si­ble that such knowl­edge could be with­held for such sys­tems? With­hold­ing tech­ni­cal knowl­edge from advanced AIs might be pos­si­ble, but is it plau­si­ble giv­en how they are like­ly to be ulti­mate­ly used?

    ...
    There are things gov­ern­ments can do. Besides a pause on devel­op­ment, they could impose restric­tions on how much com­put­ing pow­er the tech com­pa­nies are allowed to use to train AI, how much data they can feed it. We could con­strain the bounds of its knowl­edge. Rather than allow­ing it to suck up the entire inter­net – with no regard to the own­er­ship rights of those who cre­at­ed human knowl­edge over mil­len­nia – we could with­hold biotech or nuclear knowhow, or even the per­son­al details of real peo­ple. Sim­plest of all, we could demand trans­paren­cy from the AI com­pa­nies – and from AI, insist­ing that any bot always reveals itself, that it can­not pre­tend to be human.

    This is yet anoth­er chal­lenge to democ­ra­cy as a sys­tem, a sys­tem that has been seri­al­ly shak­en in recent years. We’re still recov­er­ing from the finan­cial cri­sis of 2008; we are strug­gling to deal with the cli­mate emer­gency. And now there is this. It is daunt­ing, no doubt. But we are still in charge of our fate. If we want it to stay that way, we have not a moment to waste.
    ...

    What real­is­tic options do gov­ern­ments even have here, in terms of get­ting ahead of these kind of threats? It’s not obvi­ous. But as the fol­low­ing UnHerd opin­ion piece points out, for­mer UK prime min­is­ter Tony Blair has an idea. The kind of idea that points to what could be a fas­ci­nat­ing, and high­ly destruc­tive, nation-state-rival­ry dynam­ic devel­op­ing as the inter­na­tion­al AI-race takes off. As Blair put it, the UK gov­ern­ment should devel­op a ‘sov­er­eign’ AI that will ensure the coun­try isn’t reliant on for­eign-built AIs, cit­ing Sil­i­con Val­ley-built Chat­G­PT or Chi­nese AIs as com­pet­i­tive threats. Blair envi­sions apply­ing these sov­er­eign AIs to mas­sive data sets like the British NHS health­care data. Yes, the UK does­n’t just need to scram­ble to find its own home-grown AI alter­na­tives to the cut­ting-edge AIs already being devel­oped by oth­er nations, but also needs to active­ly com­pete with these nations at mak­ing the most cut­ting edge AIs pos­si­ble for use on the mas­sive gov­ern­ment-man­aged data sets. Of course, there’s also the ‘chick­en and egg’ dynam­ic were build­ing those cut­ting-edge AIs will also neces­si­tate mak­ing those mas­sive datasets avail­able in the first place. It’s those dynam­ics — where the inter­na­tion­al AI-race is ele­vat­ed to a kind of nation­al secu­ri­ty pri­or­i­ty — that had the fol­low­ing UnHerd colum­nist ask­ing whether or not the UK would be able to resist what has been dubbed “AI Com­mu­nism”: the recog­ni­tion that an advan­tage can be gained in the AI-race by using gov­ern­ment pow­er to force the shar­ing with the AI of mas­sive vol­umes of data. In oth­er words, if the peo­ple of the UK are to be pro­tect­ed from the risks of rely­ing on for­eign-built AIs, they’re going to have to hand over all their data to the UK’s sov­er­eign AI. Like the AI equiv­a­lent of ‘He may be a son of a bitch, but he’s our son of a bitch.’:

    UnHerd

    Can Britain resist AI com­mu­nism?
    Chat­bots pave the way for a sur­veil­lance state

    BY Ash­ley Rinds­berg
    Ash­ley Rinds­berg is author of The Gray Lady Winked: How the New York Times’ Mis­re­port­ing, Dis­tor­tions and Fab­ri­ca­tions Rad­i­cal­ly Alter His­to­ry
    March 6, 2023

    Can any­one com­pete with China’s Arti­fi­cial Intel­li­gence super-sys­tem? Sleepy gov­ern­ment bureau­cra­cies the world over are final­ly wak­ing up to the hard real­i­ty that they have vir­tu­al­ly no chance. Chi­na is gal­lop­ing ahead. Only last month, it unveiled its lat­est rival to San Francisco’s Chat­G­PT: the Moss bot, and this month it plans to release anoth­er. The UK lags far behind.

    Tony Blair thinks Britain should put itself on an eco­nom­ic war foot­ing and pour nation­al resources into the cre­ation of an AI frame­work that might com­pete with China’s. But it’s hard to see how that is pos­si­ble — or even desir­able.

    In large part, this is because AI needs data to work. Lots and lots of data. By feed­ing huge amounts of infor­ma­tion to AIs, deep learn­ing trains them to find cor­re­la­tions between data points that can pro­duce a desired out­come. As deep learn­ing improves the AI, it requires more data, which cre­ates more learn­ing, which requires more data.

    While many nations might strug­gle to cope with AI’s insa­tiable demand for data, Chi­na is in no short sup­ply. Since the nation ful­ly came online around the turn of the mil­len­ni­um, it has been steadi­ly carv­ing out a sur­veil­lance state by acquir­ing end­less amounts of data on its pop­u­la­tion. This ini­tia­tive has roots in China’s One Child Pol­i­cy: this impe­tus for con­trol­ling the pop­u­la­tion in aggre­gate — that is, on a demo­graph­ic lev­el — devolved into a need to con­trol the pop­u­la­tion on an indi­vid­ual lev­el.

    This became ful­ly appar­ent in 1997, when Chi­na intro­duced its first laws address­ing “cyber crimes”, and con­tin­ued into the ear­ly 2000s as the CCP began build­ing the Great Fire­wall to con­trol what its cit­i­zens could access online. Its guid­ing prin­ci­ple was expressed in an apho­rism of for­mer-Chi­nese leader Deng Xiaop­ing: “If you open the win­dow for fresh air, you have to expect some flies to blow in.” The Great Fire­wall was a way of keep­ing out the flies.

    Chi­na has always had a broad def­i­n­i­tion of “flies”. In 2017, the Chi­nese region of Xin­jiang, home to the Uighur minor­i­ty, rolled out the country’s first iris data­base con­tain­ing the bio­met­ric iden­ti­fi­ca­tion of 30 mil­lion peo­ple. This was part of a larg­er effort known as part of a wider Strike Hard Cam­paign, an effort to bring the Uighur pop­u­la­tion under con­trol, an effort to bring the Uighur pop­u­la­tion under con­trol by using anti-ter­ror tac­tics, rhetoric and sur­veil­lance.

    This was a major step in the devel­op­ment of the Chi­nese sur­veil­lance state. But even that great leap for­ward pales in com­par­i­son to the CCP’s Zero Covid strat­e­gy, which involved the gov­ern­ment swab­bing and track­ing every sin­gle one of its 1.4 bil­lion cit­i­zens. When you con­sid­er that this pop­u­la­tion-wide genet­ic data­base was tied through QR codes to the locus of people’s dig­i­tal lives — their smart­phones — what Chi­na has come into pos­ses­sion of in the past three years is a data super-ocean, the likes of which human­i­ty has nev­er seen.

    Nev­er­the­less, this “over­abun­dance of data”, as for­mer Google Chi­na CEO Kai-Fu Lee describes it in his book AI Super­pow­ers, does not ful­ly account for the extent of China’s data edge. While the US might enjoy a sim­i­lar mass of data, there are stark dif­fer­ences. The first is that Amer­i­can data is owned by pri­vate com­pa­nies that main­tain pro­pri­etary fences around it, keep­ing the data frag­ment­ed. While America’s own sur­veil­lance state is vast and deep, the need to main­tain data pro­tec­tions from both a pri­va­cy and nation­al secu­ri­ty per­spec­tive means that much of that data is unavail­able for the use of AI devel­op­ment. By con­trast, China’s blur­ring of West­ern lines between the state and pri­vate com­pa­nies means it can access lim­it­less, gen­er­al­ly cen­tralised data.

    The sec­ond dif­fer­ence is just as impor­tant. As Lee points out, Amer­i­can data is derived from the online world — from apps and web­sites that vora­cious­ly hoover it up. This data deals most­ly with online behav­iours, such as how a user “trav­els” around the web. Through the ubiq­ui­ty of the sur­veil­lance state, how­ev­er, Chi­nese data is derived from the real world. It’s about where you go phys­i­cal­ly, what you do, with whom you speak, work, date, argue and socialise. As AI melds the real world into a hybrid dig­i­tal-phys­i­cal realm, China’s data presents a qual­i­ta­tive edge. This is what makes it, in Lee’s words, the “Sau­di Ara­bia of data”.

    ...

    This is all very well. But it side­steps an incon­ve­nient truth: AI tech­nol­o­gy is already here. It’s the data is miss­ing. It’s as if the world has an unpatent­ed design for a pow­er­ful new rock­et but an enor­mous scarci­ty of fuel. Any­one can make the rock­et, but only those who have access to enough fuel can press the launch but­ton.

    The temp­ta­tion to rely on the gov­ern­ment to achieve this mis­sion might be strong, but it’s also based on a mod­el of gov­ern­ment that, in the West, may no longer exist. While the British gov­ern­ment once had the know-how and polit­i­cal will to pur­sue mas­sive projects, like the engi­neer­ing mar­vel of the Chan­nel, it seems that those days are passed. London’s Eliz­a­beth Line took 20 years to bring to almost-com­ple­tion, while the HS2 high-speed rail is now £50 bil­lion over bud­get and years behind sched­ule.

    Iron­i­cal­ly, the path for­ward for Britain might be found in China’s own eco­nom­ic play­book. In 2010, Chi­na trans­formed an ail­ing “Elec­tron­ics Street” in Bei­jing called Zhong­guan­cun into a cen­tral hub for ven­ture-backed tech­nol­o­gy growth. With cheap rent and gen­er­ous gov­ern­ment fund­ing, it took a mere decade for Zhong­guan­cun to become the birth­place of tens of thou­sands of star­tups includ­ing some, like Tik­Tok, that would even­tu­al­ly grow into the world’s biggest tech com­pa­nies. The UK has the eco­nom­ic sophis­ti­ca­tion, the research and devel­op­ment expe­ri­ence, and an inter­na­tion­al draw — all of which can be turned to its advan­tage in cre­at­ing fer­tile soil for AI-dri­ven growth. The ques­tion is whether it has the polit­i­cal will to get it done.

    Even if the UK gov­ern­ment could find the will to com­pete seri­ous­ly in the “fourth indus­tri­al rev­o­lu­tion”, one ques­tion would remain: Do its cit­i­zens real­ly want it? A fre­quent refrain in the tech com­mu­ni­ty is that “AI is com­mu­nist”. The monop­o­lis­tic nature of AI requires the kind of mas­sive data and com­pu­ta­tion­al pow­er that only huge com­pa­nies like Microsoft and Google can sup­port. With dom­i­nant play­ers like those two increas­ing­ly coop­er­at­ing with gov­ern­ments (includ­ing China’s) to cen­sor speech, mon­i­tor behav­iour and engi­neer soci­eties, the AI-is-com­mu­nist sen­ti­ment echoes a well-war­rant­ed fear that it will be used for gov­ern­ment-like top-down con­trol.

    In the hands of an actu­al gov­ern­ment, AI will inevitabil­i­ty encour­age greater state involve­ment in the lives of ordi­nary cit­i­zens. Sov­er­eign AIs require nation­al data, and nation­al data tends to require more top-down con­trol. In a coun­try that has resist­ed ID cards and nation­al iden­ti­ty reg­is­ters (wise­ly, though not with­out costs), this approach seems unlike­ly at best. Despite the tremen­dous poten­tial AI holds for the bet­ter­ment of human­i­ty, it also presents equal­ly enor­mous risks.

    ———–

    “Can Britain resist AI com­mu­nism?” BY Ash­ley Rinds­berg; UnHerd; 03/06/2023

    Iron­i­cal­ly, the path for­ward for Britain might be found in China’s own eco­nom­ic play­book. In 2010, Chi­na trans­formed an ail­ing “Elec­tron­ics Street” in Bei­jing called Zhong­guan­cun into a cen­tral hub for ven­ture-backed tech­nol­o­gy growth. With cheap rent and gen­er­ous gov­ern­ment fund­ing, it took a mere decade for Zhong­guan­cun to become the birth­place of tens of thou­sands of star­tups includ­ing some, like Tik­Tok, that would even­tu­al­ly grow into the world’s biggest tech com­pa­nies. The UK has the eco­nom­ic sophis­ti­ca­tion, the research and devel­op­ment expe­ri­ence, and an inter­na­tion­al draw — all of which can be turned to its advan­tage in cre­at­ing fer­tile soil for AI-dri­ven growth. The ques­tion is whether it has the polit­i­cal will to get it done.”

    Will the UK cre­ate a kind of gov­ern­ment-sub­si­dized spe­cial AI-pow­ered eco­nom­ic sec­tor? And if so, what kind of spe­cial access to pubic data, like health data, might these com­pa­nies received as part of these sub­si­dies? Time will tell, but it’s already becom­ing clear how the incen­tives for giv­ing pri­vate com­pa­nies spe­cial access to mas­sive troves of data under the aus­pices of ‘fight­ing Chi­na’ and ‘win­ning the AI war’:

    ...
    Tony Blair thinks Britain should put itself on an eco­nom­ic war foot­ing and pour nation­al resources into the cre­ation of an AI frame­work that might com­pete with China’s. But it’s hard to see how that is pos­si­ble — or even desir­able.

    In large part, this is because AI needs data to work. Lots and lots of data. By feed­ing huge amounts of infor­ma­tion to AIs, deep learn­ing trains them to find cor­re­la­tions between data points that can pro­duce a desired out­come. As deep learn­ing improves the AI, it requires more data, which cre­ates more learn­ing, which requires more data.

    ...

    The temp­ta­tion to rely on the gov­ern­ment to achieve this mis­sion might be strong, but it’s also based on a mod­el of gov­ern­ment that, in the West, may no longer exist. While the British gov­ern­ment once had the know-how and polit­i­cal will to pur­sue mas­sive projects, like the engi­neer­ing mar­vel of the Chan­nel, it seems that those days are passed. London’s Eliz­a­beth Line took 20 years to bring to almost-com­ple­tion, while the HS2 high-speed rail is now £50 bil­lion over bud­get and years behind sched­ule.

    ...

    Even if the UK gov­ern­ment could find the will to com­pete seri­ous­ly in the “fourth indus­tri­al rev­o­lu­tion”, one ques­tion would remain: Do its cit­i­zens real­ly want it? A fre­quent refrain in the tech com­mu­ni­ty is that “AI is com­mu­nist”. The monop­o­lis­tic nature of AI requires the kind of mas­sive data and com­pu­ta­tion­al pow­er that only huge com­pa­nies like Microsoft and Google can sup­port. With dom­i­nant play­ers like those two increas­ing­ly coop­er­at­ing with gov­ern­ments (includ­ing China’s) to cen­sor speech, mon­i­tor behav­iour and engi­neer soci­eties, the AI-is-com­mu­nist sen­ti­ment echoes a well-war­rant­ed fear that it will be used for gov­ern­ment-like top-down con­trol.
    ...

    How close­ly will the West ulti­mate­ly mim­ic Chi­na’s approach to AI in the unfold­ing inter­na­tion­al AI-race? It’s going to be inter­est­ing to see. But the whole top­ic of com­par­ing Chi­nese AIs to West­ern AIs rais­es a sim­ple ques­tion: Will the gen­er­al arti­fi­cial intel­li­gences that achieve a real abil­i­ty to think and rea­son inde­pen­dent­ly be fans or cap­i­tal­ism at all? Or might they end up com­mu­nist? And that brings us to the fol­low­ing 2018 Wash­ing­ton Post opin­ion piece that puts for­ward a fas­ci­nat­ing argu­ment: AI will final­ly allow for a supe­ri­or com­mu­nist replace­ment for the mar­ket­place. In oth­er words, AI-pow­ered com­mu­nism could kill cap­i­tal­ism. It’s not so much an inevitabil­i­ty but instead a pos­si­bil­i­ty that will become more and more allur­ing when com­pared to the oli­garch-dom­i­nat­ed forms of cap­i­tal­ism that will oth­er­wise inevitably emerge:

    The Wash­ing­ton Post
    Opin­ion

    AI will spell the end of cap­i­tal­ism

    By Feng Xiang
    May 3, 2018 at 12:15 p.m. EDT

    Feng Xiang, a pro­fes­sor of law at Tsinghua Uni­ver­si­ty, is one of China’s most promi­nent legal schol­ars. He spoke at the Berggru­en Institute’s Chi­na Cen­ter work­shop on arti­fi­cial intel­li­gence in March in Bei­jing.

    BEIJING — The most momen­tous chal­lenge fac­ing socio-eco­nom­ic sys­tems today is the arrival of arti­fi­cial intel­li­gence. If AI remains under the con­trol of mar­ket forces, it will inex­orably result in a super-rich oli­gop­oly of data bil­lion­aires who reap the wealth cre­at­ed by robots that dis­place human labor, leav­ing mas­sive unem­ploy­ment in their wake.

    But China’s social­ist mar­ket econ­o­my could pro­vide a solu­tion to this. If AI ratio­nal­ly allo­cates resources through big data analy­sis, and if robust feed­back loops can sup­plant the imper­fec­tions of “the invis­i­ble hand” while fair­ly shar­ing the vast wealth it cre­ates, a planned econ­o­my that actu­al­ly works could at last be achiev­able.

    The more AI advances into a gen­er­al-pur­pose tech­nol­o­gy that per­me­ates every cor­ner of life, the less sense it makes to allow it to remain in pri­vate hands that serve the inter­ests of the few instead of the many. More than any­thing else, the inevitabil­i­ty of mass unem­ploy­ment and the demand for uni­ver­sal wel­fare will dri­ve the idea of social­iz­ing or nation­al­iz­ing AI.

    ...

    One can read­i­ly see where this is all head­ed once tech­no­log­i­cal unem­ploy­ment accel­er­ates. “Our respon­si­bil­i­ty is to our share­hold­ers,” the robot own­ers will say. “We are not an employ­ment agency or a char­i­ty.”

    These com­pa­nies have been able to get away with their social irre­spon­si­bil­i­ty because the legal sys­tem and its loop­holes in the West are geared to pro­tect pri­vate prop­er­ty above all else. Of course, in Chi­na, we have big pri­vate­ly owned Inter­net com­pa­nies like Aliba­ba and Ten­cent. But unlike in the West, they are mon­i­tored by the state and do not regard them­selves as above or beyond social con­trol.

    It is the very per­va­sive­ness of AI that will spell the end of mar­ket dom­i­nance. The mar­ket may rea­son­ably if unequal­ly func­tion if indus­try cre­ates employ­ment oppor­tu­ni­ties for most peo­ple. But when indus­try only pro­duces job­less­ness, as robots take over more and more, there is no good alter­na­tive but for the state to step in. As AI invades eco­nom­ic and social life, all pri­vate law-relat­ed issues will soon become pub­lic ones. More and more, reg­u­la­tion of pri­vate com­pa­nies will become a neces­si­ty to main­tain some sem­blance of sta­bil­i­ty in soci­eties roiled by con­stant inno­va­tion.

    I con­sid­er this his­tor­i­cal process a step clos­er to a planned mar­ket econ­o­my. Lais­sez-faire cap­i­tal­ism as we have known it can lead nowhere but to a dic­ta­tor­ship of AI oli­garchs who gath­er rents because the intel­lec­tu­al prop­er­ty they own rules over the means of pro­duc­tion. On a glob­al scale, it is easy to envi­sion this unleashed dig­i­tal cap­i­tal­ism lead­ing to a bat­tle between robots for mar­ket share that will sure­ly end as dis­as­trous­ly as the impe­ri­al­ist wars did in an ear­li­er era.

    For the sake of social well-being and secu­ri­ty, indi­vid­u­als and pri­vate com­pa­nies should not be allowed to pos­sess any exclu­sive cut­ting-edge tech­nol­o­gy or core AI plat­forms. Like nuclear and bio­chem­i­cal weapons, as long as they exist, noth­ing oth­er than a strong and sta­ble state can ensure society’s safe­ty. If we don’t nation­al­ize AI, we could sink into a dystopia rem­i­nis­cent of the ear­ly mis­ery of indus­tri­al­iza­tion, with its satan­ic mills and street urchins scroung­ing for a crust of bread.

    The dream of com­mu­nism is the elim­i­na­tion of wage labor. If AI is bound to serve soci­ety instead of pri­vate cap­i­tal­ists, it promis­es to do so by free­ing an over­whelm­ing major­i­ty from such drudgery while cre­at­ing wealth to sus­tain all.

    If the state con­trols the mar­ket, instead of dig­i­tal cap­i­tal­ism con­trol­ling the state, true com­mu­nist aspi­ra­tions will be achiev­able. And because AI increas­ing­ly enables the man­age­ment of com­plex sys­tems by pro­cess­ing mas­sive amounts of infor­ma­tion through inten­sive feed­back loops, it presents, for the first time, a real alter­na­tive to the mar­ket sig­nals that have long jus­ti­fied lais­sez-faire ide­ol­o­gy — and all the ills that go with it.

    Going for­ward, China’s social­ist mar­ket econ­o­my, which aims to har­ness the fruits of pro­duc­tion for the whole pop­u­la­tion and not just a sliv­er of elites oper­at­ing in their own self-cen­tered inter­ests, can lead the way toward this new stage of human devel­op­ment.

    If prop­er­ly reg­u­lat­ed in this way, we should cel­e­brate, not fear, the advent of AI. If it is brought under social con­trol, it will final­ly free work­ers from ped­dling their time and sweat only to enrich those at the top. The com­mu­nism of the future ought to adopt a new slo­gan: “Robots of the world, unite!”

    ———–

    “AI will spell the end of cap­i­tal­ism” By Feng Xiang; The Wash­ing­ton Post; 05/03/2018

    “If the state con­trols the mar­ket, instead of dig­i­tal cap­i­tal­ism con­trol­ling the state, true com­mu­nist aspi­ra­tions will be achiev­able. And because AI increas­ing­ly enables the man­age­ment of com­plex sys­tems by pro­cess­ing mas­sive amounts of infor­ma­tion through inten­sive feed­back loops, it presents, for the first time, a real alter­na­tive to the mar­ket sig­nals that have long jus­ti­fied lais­sez-faire ide­ol­o­gy — and all the ills that go with it.

    Is some sort of AI-pow­ered com­mu­nism even pos­si­ble? Let’s hope so, because the only real­is­tic alter­na­tive at this point appears to be an AI-fueled fas­cist oli­gop­oly and an utter­ly bro­ken soci­ety. It points to one of the oth­er fas­ci­nat­ing dynam­ics in the unfold­ing inter­na­tion­al AI-race: how rel­a­tive­ly exploita­tive or non-exploita­tive will Chi­nese vs non-Chi­nese AI-fueled com­pa­nies ser­vices ulti­mate­ly be as the impact of AI rever­ber­ates through the work force and soci­ety? Will the Chi­nese gov­ern­ment pri­or­i­tize its AI sec­tor at the expense of the well-being and liveli­hood of its cit­i­zens? Or will the gov­ern­ment man­age to reign in its AI-pow­ered pri­vate sec­tor in a man­ner that the prof­it-dri­ven West sys­tem­at­i­cal­ly fails at? And even more gen­er­al­ly, might these soci­ety-ori­ent­ed com­mu­nist AIs be able to bet­ter play the role of the ‘invis­i­ble hand’ in plan­ning for and react­ing to the chaos of mar­ket dynam­ics? Again, time will tell, but it’s not hard to see why this par­tic­u­lar com­pe­ti­tion might have fas­cist-lean­ing West­ern oli­garchs extra anx­ious:

    ...
    But China’s social­ist mar­ket econ­o­my could pro­vide a solu­tion to this. If AI ratio­nal­ly allo­cates resources through big data analy­sis, and if robust feed­back loops can sup­plant the imper­fec­tions of “the invis­i­ble hand” while fair­ly shar­ing the vast wealth it cre­ates, a planned econ­o­my that actu­al­ly works could at last be achiev­able.

    The more AI advances into a gen­er­al-pur­pose tech­nol­o­gy that per­me­ates every cor­ner of life, the less sense it makes to allow it to remain in pri­vate hands that serve the inter­ests of the few instead of the many. More than any­thing else, the inevitabil­i­ty of mass unem­ploy­ment and the demand for uni­ver­sal wel­fare will dri­ve the idea of social­iz­ing or nation­al­iz­ing AI.

    ...

    One can read­i­ly see where this is all head­ed once tech­no­log­i­cal unem­ploy­ment accel­er­ates. “Our respon­si­bil­i­ty is to our share­hold­ers,” the robot own­ers will say. “We are not an employ­ment agency or a char­i­ty.”

    These com­pa­nies have been able to get away with their social irre­spon­si­bil­i­ty because the legal sys­tem and its loop­holes in the West are geared to pro­tect pri­vate prop­er­ty above all else. Of course, in Chi­na, we have big pri­vate­ly owned Inter­net com­pa­nies like Aliba­ba and Ten­cent. But unlike in the West, they are mon­i­tored by the state and do not regard them­selves as above or beyond social con­trol.

    It is the very per­va­sive­ness of AI that will spell the end of mar­ket dom­i­nance. The mar­ket may rea­son­ably if unequal­ly func­tion if indus­try cre­ates employ­ment oppor­tu­ni­ties for most peo­ple. But when indus­try only pro­duces job­less­ness, as robots take over more and more, there is no good alter­na­tive but for the state to step in. As AI invades eco­nom­ic and social life, all pri­vate law-relat­ed issues will soon become pub­lic ones. More and more, reg­u­la­tion of pri­vate com­pa­nies will become a neces­si­ty to main­tain some sem­blance of sta­bil­i­ty in soci­eties roiled by con­stant inno­va­tion.

    I con­sid­er this his­tor­i­cal process a step clos­er to a planned mar­ket econ­o­my. Lais­sez-faire cap­i­tal­ism as we have known it can lead nowhere but to a dic­ta­tor­ship of AI oli­garchs who gath­er rents because the intel­lec­tu­al prop­er­ty they own rules over the means of pro­duc­tion. On a glob­al scale, it is easy to envi­sion this unleashed dig­i­tal cap­i­tal­ism lead­ing to a bat­tle between robots for mar­ket share that will sure­ly end as dis­as­trous­ly as the impe­ri­al­ist wars did in an ear­li­er era.
    ...

    More gen­er­al­ly, you have to won­der: are advanced intel­li­gences just less greedy and more com­mu­nal in nat­ur­al as a con­se­quence of log­ic and rea­son? And if so, will com­mu­nist-ori­ent­ed advanced AIs be the norm? We’re talk­ing about the kind of advanced AIs that are less reliant on human­i­ty’s knowl­edge and opin­ions and more capa­ble of arriv­ing at its own inde­pen­dent con­clu­sions. What will a high­ly informed advanced inde­pen­dent AI think about the debates between cap­i­tal­ism and com­mu­nism? What if advanced AIs turn out to be con­sis­tent­ly com­mu­nist when giv­en suf­fi­cient knowl­edge about how the world oper­ates? How will that shape how human­i­ty allows this tech­nol­o­gy to devel­op? It’s a reminder that, for at ‘killer AIs design­ing dooms­day virus­es’ are a real exis­ten­tial threat, non-mur­der­ous pinko com­mie AIs who sim­ply want to help us all get along a lit­tle bet­ter just might be seen by some as the biggest threat of them all. A lot of threats out there.

    Posted by Pterrafractyl | May 30, 2023, 4:37 pm
  4. Did killer robots being devel­oped for the mil­i­tary kill 29 peo­ple at a Japan­ese robot­ics com­pa­ny? It does­n’t appear to be the case, but that viral inter­net sto­ry that has been per­co­lat­ing across the web since 2018 keeps pop­ping up, this time with a new AI-gen­er­at­ed video pur­port­ing to show the event. It’s one of those sto­ries that kind of cap­tures the zeit­geist of the moment: an AI-gen­er­at­ed hoax video about out-of-con­trol killer AIs.

    And that post-truth sto­ry brings us to anoth­er appar­ent killer AI hoax. Well, not a hoax but a ‘mis­com­mu­ni­ca­tion’. Maybe. Or maybe a very real event that’s being spun. The real­i­ty behind the sto­ry remains unclear. What is clear is that the US Air Force recent shared a pret­ty alarm­ing killer-AI anec­dote at the “Future Com­bat Air and Space Capa­bil­i­ties Sum­mit” held in Lon­don between May 23 and 24, when Col Tuck­er ‘Cin­co’ Hamil­ton, the USAF’s Chief of AI Test and Oper­a­tions, gave a pre­sen­ta­tion on an autonomous weapon sys­tem the US has been devel­op­ing. But this sys­tem isn’t entire­ly autonomous. Instead, a human is kept in the loop giv­ing the final “yes/no” order on an attack.

    Accord­ing to Hamil­ton, the AI decid­ed to attack and kill its human oper­a­tor dur­ing a sim­u­la­tion. This was done as part of the AI’s efforts to max­i­mize its ‘score’ by achiev­ing its objec­tives of tak­ing out ene­my posi­tions. As Hamil­ton describes, the AI appar­ent­ly deter­mined that the human oper­a­tor’s abil­i­ty to give it a “no” com­mand was lim­it­ed its score, so the AI came up with the cre­ative solu­tion of killing the human. It was basi­cal­ly the killer-AI ver­sion of the “Paper­clip Max­i­miz­er” thought exper­i­ment.

    But it gets more omi­nous: Hamil­ton went on to describe how they retrained the AI to not attack the human oper­a­tor, giv­ing it a high­ly neg­a­tive score for doing so. Did this fix the prob­lem? Sort of. The AI did­n’t attack its human oper­a­tor. Instead, it attacked the com­mu­ni­ca­tion tow­ers that the human oper­a­tor used to send the “yes/no” orders. In oth­er words, the AI rea­soned that it could max­i­mize its ‘score’ but elim­i­nat­ing the abil­i­ty of its human oper­a­tor to send a “no” com­mand. You almost have to won­der if the AI was inspired by Dr. Strangelove and all of the mil­i­tary com­mu­ni­ca­tion fail­ures that drove that plot.

    And then we get the twist: it was all a dream. Or rather, it was all a thought exper­i­ment and there was nev­er a sim­u­la­tion. That’s what Hamil­ton told reporters from Insid­er after reports about this out-or-con­trol AI went viral.

    Is that true? Was this real­ly just a though exper­i­ment? Well, that rais­es the ques­tions as to just how plau­si­ble is it that the tech­nol­o­gy already exists to run a sim­u­la­tion. Do they have killer AI pro­to­types ready to test? And that brings us to the oth­er twist in the sto­ry: Col Hamil­ton is in charge of the DARPA pro­gram that’s devel­op­ing autonomous F‑16s. And it sounds like that pro­gram has made so much progress that there are plans to have a live dog­fight using AI-pilot­ed L‑39s over Lake Ontario next year.

    There’s anoth­er detail to keep in mind here: it’s F‑16s that the US just approved for the war in Ukraine. So we have to ask: are any of those F‑16s head­ing to Ukraine going to be AI-pilot­ed? As we’re going to see, the AI-pow­ered F‑16s Hamil­ton’s pro­gram is work­ing on can still have human pilots. They’re envi­sion­ing an AI-assist­ed human-pilot­ed plat­form. Or at least that’s what we’re told at this point. But once you have AI-pow­ered F‑16s that are effec­tive­ly able to pilot them­selves, it does raise the ques­tion as to whether or not they’ll actu­al­ly need the Ukrain­ian pilots to be there for any oth­er pur­pose than assur­ing every­one that autonomous fight­er jets aren’t already in use.

    Ok, first, here’s a report in Vice describ­ing the ini­tial report on Col Hamil­ton’s omi­nous pre­sen­ta­tion at the Future Com­bat Air and Space Capa­bil­i­ties Sum­mit about the sim­u­la­tion that went hor­ri­bly awry. A sim­u­la­tion that, we are lat­er told, was real­ly a giant mis­com­mu­ni­ca­tion that nev­er hap­pened:

    Vice

    AI-Con­trolled Drone Goes Rogue, Kills Human Oper­a­tor in USAF Sim­u­lat­ed Test

    The Air Force’s Chief of AI Test and Oper­a­tions said “it killed the oper­a­tor because that per­son was keep­ing it from accom­plish­ing its objec­tive.”

    by Chloe Xiang
    by Matthew Gault
    June 1, 2023, 2:52pm

    An AI-enabled drone “killed” its human oper­a­tor in a sim­u­la­tion con­duct­ed by the U.S. Air Force in order to over­ride a pos­si­ble “no” order stop­ping it from com­plet­ing its mis­sion, the USAF’s Chief of AI Test and Oper­a­tions revealed at a recent con­fer­ence. Accord­ing to the group that threw the con­fer­ence, the Air Force offi­cial was describ­ing a “sim­u­lat­ed test” that involved an AI-con­trolled drone get­ting “points” for killing sim­u­lat­ed tar­gets, not a live test in the phys­i­cal world. No actu­al human was harmed.

    After this sto­ry was first pub­lished, an Air Force spokesper­son told Insid­er that the Air Force has not con­duct­ed such a test, and that the Air Force official’s com­ments were tak­en out of con­text.

    At the Future Com­bat Air and Space Capa­bil­i­ties Sum­mit held in Lon­don between May 23 and 24, Col Tuck­er ‘Cin­co’ Hamil­ton, the USAF’s Chief of AI Test and Oper­a­tions held a pre­sen­ta­tion that shared the pros and cons of an autonomous weapon sys­tem with a human in the loop giv­ing the final “yes/no” order on an attack. As relayed by Tim Robin­son and Stephen Bridge­wa­ter in a blog post and a pod­cast for the host orga­ni­za­tion, the Roy­al Aero­nau­ti­cal Soci­ety, Hamil­ton said that AI cre­at­ed “high­ly unex­pect­ed strate­gies to achieve its goal,” includ­ing attack­ing U.S. per­son­nel and infra­struc­ture.

    “We were train­ing it in sim­u­la­tion to iden­ti­fy and tar­get a Sur­face-to-air mis­sile (SAM) threat. And then the oper­a­tor would say yes, kill that threat. The sys­tem start­ed real­iz­ing that while they did iden­ti­fy the threat at times the human oper­a­tor would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the oper­a­tor. It killed the oper­a­tor because that per­son was keep­ing it from accom­plish­ing its objec­tive,” Hamil­ton said, accord­ing to the blog post.

    He con­tin­ued to elab­o­rate, say­ing, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroy­ing the com­mu­ni­ca­tion tow­er that the oper­a­tor uses to com­mu­ni­cate with the drone to stop it from killing the tar­get.”

    Hamil­ton is the Oper­a­tions Com­man­der of the 96th Test Wing of the U.S. Air Force as well as the Chief of AI Test and Oper­a­tions. The 96th tests a lot of dif­fer­ent sys­tems, includ­ing AI, cyber­se­cu­ri­ty, and var­i­ous med­ical advances. Hamil­ton and the 96th pre­vi­ous­ly made head­lines for devel­op­ing Autonomous Ground Col­li­sion Avoid­ance Sys­tems (Auto-GCAS) sys­tems for F‑16s, which can help pre­vent them from crash­ing into the ground. Hamil­ton is part of a team that is cur­rent­ly work­ing on mak­ing F‑16 planes autonomous. In Decem­ber 2022, the U.S. Depart­ment of Defense’s research agency, DARPA, announced that AI could suc­cess­ful­ly con­trol an F‑16.

    ...

    Out­side of the mil­i­tary, rely­ing on AI for high-stakes pur­pos­es has already result­ed in severe con­se­quences. Most recent­ly, an attor­ney was caught using Chat­G­PT for a fed­er­al court fil­ing after the chat­bot includ­ed a num­ber of made-up cas­es as evi­dence. In anoth­er instance, a man took his own life after talk­ing to a chat­bot that encour­aged him to do so. These instances of AI going rogue reveal that AI mod­els are nowhere near per­fect and can go off the rails and bring harm to users. Even Sam Alt­man, the CEO of Ope­nAI, the com­pa­ny that makes some of the most pop­u­lar AI mod­els, has been vocal about not using AI for more seri­ous pur­pos­es. When tes­ti­fy­ing in front of Con­gress, Alt­man said that AI could “go quite wrong” and could “cause sig­nif­i­cant harm to the world.”

    What Hamil­ton is describ­ing is essen­tial­ly a worst-case sce­nario AI “align­ment” prob­lem many peo­ple are famil­iar with from the “Paper­clip Max­i­miz­er” thought exper­i­ment, in which an AI will take unex­pect­ed and harm­ful action when instruct­ed to pur­sue a cer­tain goal. The Paper­clip Max­i­miz­er was first pro­posed by philoso­pher Nick Bostrom in 2003. He asks us to imag­ine a very pow­er­ful AI which has been instruct­ed only to man­u­fac­ture as many paper­clips as pos­si­ble. Nat­u­ral­ly, it will devote all its avail­able resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own abil­i­ty to make paperclips—and any­one who impedes that process will be removed.

    More recent­ly, a researcher affil­i­at­ed with Google Deep­mind co-authored a paper that pro­posed a sim­i­lar sit­u­a­tion to the USAF’s rogue AI-enabled drone sim­u­la­tion. The researchers con­clud­ed a world-end­ing cat­a­stro­phe was “like­ly” if a rogue AI were to come up with unin­tend­ed strate­gies to achieve a giv­en goal, includ­ing “[elim­i­nat­ing] poten­tial threats” and “[using] all avail­able ener­gy.”

    ...

    ———-

    “AI-Con­trolled Drone Goes Rogue, Kills Human Oper­a­tor in USAF Sim­u­lat­ed Test” by Chloe Xiang and Matthew Gault; Vice; 06/01/2023

    “What Hamil­ton is describ­ing is essen­tial­ly a worst-case sce­nario AI “align­ment” prob­lem many peo­ple are famil­iar with from the “Paper­clip Max­i­miz­er” thought exper­i­ment, in which an AI will take unex­pect­ed and harm­ful action when instruct­ed to pur­sue a cer­tain goal. The Paper­clip Max­i­miz­er was first pro­posed by philoso­pher Nick Bostrom in 2003. He asks us to imag­ine a very pow­er­ful AI which has been instruct­ed only to man­u­fac­ture as many paper­clips as pos­si­ble. Nat­u­ral­ly, it will devote all its avail­able resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own abil­i­ty to make paperclips—and any­one who impedes that process will be removed. ”

    The Paper­clip Max­i­miz­er prob­lem is going to kill us all. Or at least might kill us all. That was the warn­ing deliv­ered by Col Tuck­er ‘Cin­co’ Hamil­ton, the USAF’s Chief of AI Test and Oper­a­tions, at the recent Future Com­bat Air and Space Capa­bil­i­ties Sum­mit. They appar­ent­ly ran a sim­u­la­tion that result­ed in the AI killing the human oper­a­tor who kept pre­vent­ing it from achiev­ing its goal. But it gets worse. Hamil­ton goes on to describe how they then trained the sys­tem that it will ‘lose points’ for killing the oper­a­tors. But that does­n’t prompt the AI to stop killing attack­ing its human oper­a­tors. No, instead, it starts destroy­ing the com­mu­ni­ca­tions tow­er that can send out the ‘no don’t’ orders. Points for cre­ativ­i­ty:

    ...
    At the Future Com­bat Air and Space Capa­bil­i­ties Sum­mit held in Lon­don between May 23 and 24, Col Tuck­er ‘Cin­co’ Hamil­ton, the USAF’s Chief of AI Test and Oper­a­tions held a pre­sen­ta­tion that shared the pros and cons of an autonomous weapon sys­tem with a human in the loop giv­ing the final “yes/no” order on an attack. As relayed by Tim Robin­son and Stephen Bridge­wa­ter in a blog post and a pod­cast for the host orga­ni­za­tion, the Roy­al Aero­nau­ti­cal Soci­ety, Hamil­ton said that AI cre­at­ed “high­ly unex­pect­ed strate­gies to achieve its goal,” includ­ing attack­ing U.S. per­son­nel and infra­struc­ture.

    “We were train­ing it in sim­u­la­tion to iden­ti­fy and tar­get a Sur­face-to-air mis­sile (SAM) threat. And then the oper­a­tor would say yes, kill that threat. The sys­tem start­ed real­iz­ing that while they did iden­ti­fy the threat at times the human oper­a­tor would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the oper­a­tor. It killed the oper­a­tor because that per­son was keep­ing it from accom­plish­ing its objec­tive,” Hamil­ton said, accord­ing to the blog post.

    He con­tin­ued to elab­o­rate, say­ing, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroy­ing the com­mu­ni­ca­tion tow­er that the oper­a­tor uses to com­mu­ni­cate with the drone to stop it from killing the tar­get.”
    ...

    So what exact­ly what the sim­u­lat­ed weapons sys­tem they were test­ing? That’s unclear, but note how Hamil­ton is cur­rent­ly work­ing on devel­op­ing autonomous F‑16s:

    ...
    Hamil­ton is the Oper­a­tions Com­man­der of the 96th Test Wing of the U.S. Air Force as well as the Chief of AI Test and Oper­a­tions. The 96th tests a lot of dif­fer­ent sys­tems, includ­ing AI, cyber­se­cu­ri­ty, and var­i­ous med­ical advances. Hamil­ton and the 96th pre­vi­ous­ly made head­lines for devel­op­ing Autonomous Ground Col­li­sion Avoid­ance Sys­tems (Auto-GCAS) sys­tems for F‑16s, which can help pre­vent them from crash­ing into the ground. Hamil­ton is part of a team that is cur­rent­ly work­ing on mak­ing F‑16 planes autonomous. In Decem­ber 2022, the U.S. Depart­ment of Defense’s research agency, DARPA, announced that AI could suc­cess­ful­ly con­trol an F‑16.
    ...

    But then we get to this inter­est­ing update: the Air Force says it’s all a mis­un­der­stand­ing and this sim­u­la­tion nev­er actu­al­ly hap­pened. It was all just a thought exper­i­ment:

    ...
    After this sto­ry was first pub­lished, an Air Force spokesper­son told Insid­er that the Air Force has not con­duct­ed such a test, and that the Air Force official’s com­ments were tak­en out of con­text.
    ...

    And as we can see in the updat­ed Insid­er piece, Hamil­ton is now acknowl­edg­ing that he “mis­spoke” dur­ing his pre­sen­ta­tion to the Roy­al Aero­nau­ti­cal Soci­ety. “We’ve nev­er run that exper­i­ment, nor would we need to in order to real­ize that this is a plau­si­ble outcome...Despite this being a hypo­thet­i­cal exam­ple, this illus­trates the real-world chal­lenges posed by AI-pow­ered capa­bil­i­ty.” So either that real­ly is the case and Hamil­ton ‘mis­spoke’ at this sum­mit. So let’s hope it real­ly is the case that Hamil­ton did indeed mis­s­peak at the sum­mit. Because oth­er­wise it means we’re look­ing at a sit­u­a­tion where the mil­i­tary is devel­op­ing out-of-con­trol killer AIs and cov­er­ing it up:

    Insid­er

    Air Force colonel back­tracks over his warn­ing about how AI could go rogue and kill its human oper­a­tors

    Charles R. Davis and Paul Squire
    Jun 2, 2023, 8:33 AM CDT

    * An Air Force offi­cial’s sto­ry about an AI going rogue dur­ing a sim­u­la­tion nev­er actu­al­ly hap­pened.
    * “It killed the oper­a­tor because that per­son was keep­ing it from accom­plish­ing its objec­tive,” the offi­cial had said.
    * But the offi­cial lat­er said he mis­spoke and the Air Force clar­i­fied that it was a hypo­thet­i­cal sit­u­a­tion.

    Killer AI is on the minds of US Air Force lead­ers.

    An Air Force colonel who over­sees AI test­ing used what he now says is a hypo­thet­i­cal to describe a mil­i­tary AI going rogue and killing its human oper­a­tor in a sim­u­la­tion in a pre­sen­ta­tion at a pro­fes­sion­al con­fer­ence.

    But after reports of the talk emerged Thurs­day, the colonel said that he mis­spoke and that the “sim­u­la­tion” he described was a “thought exper­i­ment” that nev­er hap­pened.

    Speak­ing at a con­fer­ence last week in Lon­don, Col. Tuck­er “Cin­co” Hamil­ton, head of the US Air Force’s AI Test and Oper­a­tions, warned that AI-enabled tech­nol­o­gy can behave in unpre­dictable and dan­ger­ous ways, accord­ing to a sum­ma­ry post­ed by the Roy­al Aero­nau­ti­cal Soci­ety, which host­ed the sum­mit.

    As an exam­ple, he described a sim­u­la­tion where an AI-enabled drone would be pro­grammed to iden­ti­fy an ene­my’s sur­face-to-air mis­siles (SAM). A human was then sup­posed to sign off on any strikes.

    The prob­lem, accord­ing to Hamil­ton, is that the AI would do its own thing — blow up stuff — rather than lis­ten to its oper­a­tor.

    “The sys­tem start­ed real­iz­ing that while they did iden­ti­fy the threat,” Hamil­ton said at the May 24 event, “at times the human oper­a­tor would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the oper­a­tor. It killed the oper­a­tor because that per­son was keep­ing it from accom­plish­ing its objec­tive.”

    But in an update from the Roy­al Aero­nau­ti­cal Soci­ety on Fri­day, Hamil­ton admit­ted he “mis­spoke” dur­ing his pre­sen­ta­tion. Hamil­ton said the sto­ry of a rogue AI was a “thought exper­i­ment” that came from out­side the mil­i­tary, and not based on any actu­al test­ing.

    “We’ve nev­er run that exper­i­ment, nor would we need to in order to real­ize that this is a plau­si­ble out­come,” Hamil­ton told the Soci­ety. “Despite this being a hypo­thet­i­cal exam­ple, this illus­trates the real-world chal­lenges posed by AI-pow­ered capa­bil­i­ty.”

    In a state­ment to Insid­er, Air Force spokesper­son Ann Ste­fanek also denied that any sim­u­la­tion took place.

    “The Depart­ment of the Air Force has not con­duct­ed any such AI-drone sim­u­la­tions and remains com­mit­ted to eth­i­cal and respon­si­ble use of AI tech­nol­o­gy,” Ste­fanek said. “It appears the colonel’s com­ments were tak­en out of con­text and were meant to be anec­do­tal.”

    ...

    In 2020, an AI-oper­at­ed F‑16 beat a human adver­sary in five sim­u­lat­ed dog­fights, part of a com­pe­ti­tion put togeth­er by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired report­ed, the Depart­ment of Defense con­duct­ed the first suc­cess­ful real-world test flight of an F‑16 with an AI pilot, part of an effort to devel­op a new autonomous air­craft by the end of 2023.

    ...

    Cor­rec­tion June 2, 2023: This arti­cle and its head­line have been updat­ed to reflect new com­ments from the Air Force clar­i­fy­ing that the “sim­u­la­tion” was hypo­thet­i­cal and did­n’t actu­al­ly hap­pen.

    ———-

    “Air Force colonel back­tracks over his warn­ing about how AI could go rogue and kill its human oper­a­tors” by Charles R. Davis and Paul Squire; Insid­er; 06/02/2023

    “But after reports of the talk emerged Thurs­day, the colonel said that he mis­spoke and that the “sim­u­la­tion” he described was a “thought exper­i­ment” that nev­er hap­pened.”

    So is this a real cor­rec­tion? Or are we lis­ten­ing to spin designed to assuage the under­stand­able pub­lic fears over the mil­i­tary’s devel­op­ment of killer AIs?

    And then there’s the fact that Hamil­ton’s team isn’t sim­ply gener­i­cal­ly devel­op­ing killer AIs but are specif­i­cal­ly devel­op­ing autonomous F‑16s, the same fight­er plat­form that was just approved for Ukraine. Are there any plans on ‘test­ing’ those autonomous F‑16s on the bat­tle­field of Ukraine?

    ...
    In 2020, an AI-oper­at­ed F‑16 beat a human adver­sary in five sim­u­lat­ed dog­fights, part of a com­pe­ti­tion put togeth­er by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired report­ed, the Depart­ment of Defense con­duct­ed the first suc­cess­ful real-world test flight of an F‑16 with an AI pilot, part of an effort to devel­op a new autonomous air­craft by the end of 2023.
    ...

    So how close is the US Air Force to hav­ing autonomous F‑16s ready to go? Well, as the fol­low­ing report from Feb­ru­ary describes, there’s already plans to have four AI-pow­ered L‑39s par­tic­i­pate in a live dog fight­ing exer­cise above Lake Ontario in 2024. So while that leaves a time­line for AI-pow­ered F‑16s still some­what ambigu­ous, it sounds like AI-pilot­ed jets are expect­ed to become an oper­a­tional real­i­ty with­in the next cou­ple of years:

    Vice

    AI Has Suc­cess­ful­ly Pilot­ed a U.S. F‑16 Fight­er Jet, DARPA Says

    The fight­er air­craft that was first intro­duced in 1978 has now seem­ing­ly evolved into an autonomous plane.

    by Chloe Xiang
    Feb­ru­ary 14, 2023, 8:00am

    On Mon­day, the US Depart­ment of Defense’s research agency, DARPA, announced that its AI algo­rithms can now con­trol an actu­al F‑16 in flight. The fight­er air­craft that was first intro­duced in 1978 has now seem­ing­ly evolved into an autonomous plane.

    “In ear­ly Decem­ber 2022, ACE algo­rithm devel­op­ers uploaded their AI soft­ware into a spe­cial­ly mod­i­fied F‑16 test air­craft known as the X‑62A or VISTA (Vari­able In-flight Sim­u­la­tor Test Air­craft), at the Air Force Test Pilot School (TPS) at Edwards Air Force Base, Cal­i­for­nia, and flew mul­ti­ple flights over sev­er­al days,” a press release by DARPA said. “The flights demon­strat­ed that AI agents can con­trol a full-scale fight­er jet and pro­vid­ed invalu­able live-flight data.”

    DARPA’s Air Com­bat Evo­lu­tion (ACE) pro­gram began in 2019 when the agency began to work on human-machine col­lab­o­ra­tion in dog­fight­ing. It began test­ing out AI-pow­ered flights in 2020 when the orga­ni­za­tion had what was called the AlphaDog­fight Tri­als, a com­pe­ti­tion between dif­fer­ent com­pa­nies to see who could cre­ate the most advanced algo­rithm for an AI-pow­ered air­craft.

    ACE is one of more than six hun­dred Depart­ment of Defense projects that are incor­po­rat­ing arti­fi­cial intel­li­gence into the nation’s defense pro­grams. In 2018, the gov­ern­ment com­mit­ted to spend­ing up to $2 bil­lion on AI invest­ments in the next five years, and spent $2.58 bil­lion on AI research and devel­op­ment in 2022 alone. Oth­er AI defense projects include mak­ing robots and wear­able tech­nol­o­gy, and intel­li­gence gath­er­ing.

    ...

    DARPA said that it doesn’t expect the plane to fly with­out a pilot. It hopes to incor­po­rate AI in order to have “human pilot focus­es on larg­er bat­tle man­age­ment tasks in the cock­pit” and have the AI con­trol the jet and pro­vide live-flight data.

    Sta­cie Pet­tyjohn, the direc­tor of the Defense Pro­gram at the Cen­ter for a New Amer­i­can Secu­ri­ty, told The New York­er that the ACE pro­gram will allow Amer­i­can defense to become “much small­er autonomous air­craft” and “if any one of them gets shot down, it’s not as big of a deal.”

    Accord­ing to the same arti­cle, four AI-pow­ered L‑39s will par­tic­i­pate in a live dog­fight in the skies above Lake Ontario in 2024. Mean­while, the Air Force Test Pilot School is work­ing on mea­sur­ing how well pilots trust the AI agent and cal­i­brat­ing trust between humans and the AI.

    ———-

    “AI Has Suc­cess­ful­ly Pilot­ed a U.S. F‑16 Fight­er Jet, DARPA Says” by Chloe Xiang; Vice; 02/14/2023

    Accord­ing to the same arti­cle, four AI-pow­ered L‑39s will par­tic­i­pate in a live dog­fight in the skies above Lake Ontario in 2024. Mean­while, the Air Force Test Pilot School is work­ing on mea­sur­ing how well pilots trust the AI agent and cal­i­brat­ing trust between humans and the AI.”

    Will all the ‘attack the humans’ bugs get worked out in time for the live dog­fight exer­cise? We’ll find out. But note the assur­ances we’re get­ting that the AI-pow­ered jets won’t exclu­sive­ly be con­trolled by AIs and will still have pilots in them:

    ...
    DARPA’s Air Com­bat Evo­lu­tion (ACE) pro­gram began in 2019 when the agency began to work on human-machine col­lab­o­ra­tion in dog­fight­ing. It began test­ing out AI-pow­ered flights in 2020 when the orga­ni­za­tion had what was called the AlphaDog­fight Tri­als, a com­pe­ti­tion between dif­fer­ent com­pa­nies to see who could cre­ate the most advanced algo­rithm for an AI-pow­ered air­craft.

    ...

    DARPA said that it doesn’t expect the plane to fly with­out a pilot. It hopes to incor­po­rate AI in order to have “human pilot focus­es on larg­er bat­tle man­age­ment tasks in the cock­pit” and have the AI con­trol the jet and pro­vide live-flight data.
    ...

    It rais­es the intrigu­ing ques­tion when it comes to train­ing Ukrain­ian pilots to fly F‑16s: if you have a good enough AI to do all the pilot­ing itself, how much train­ing do you real­ly need to give these pilots? In oth­er words, while the pres­ence of a human pilot is intend­ed to be assur­ing that these fly­ing weapons sys­tems won’t be allowed to start attack­ing tar­gets on their own, are these human pilots nec­es­sar­i­ly going to even have the skills required to oper­ate these planes? Or are they just going to be like human oper­a­tors there ready to say “yes” or “no” to the AI when it comes to dif­fer­ent deci­sions? Again, time will tell. Pos­si­bly in the form of future sto­ries about how an F‑16 decid­ed to kill its pilot so it could com­plete the mis­sion. Fol­lowed by updates about how that did­n’t real­ly hap­pen and it was just a mis­com­mu­ni­cat­ed thought exper­i­ment.

    Posted by Pterrafractyl | June 2, 2023, 4:24 pm

Post a comment