Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence

 

 

 

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE.

This broad­cast was record­ed in one, 60-minute seg­ment.

Intro­duc­tion: The title of this pro­gram comes from pro­nounce­ments by tech titan Elon Musk, who warned that, by devel­op­ing arti­fi­cial intel­li­gence, we were “sum­mon­ing the demon.” In this pro­gram, we ana­lyze the poten­tial vec­tor run­ning from the use of AI to con­trol soci­ety in a fascis­tic man­ner to the evo­lu­tion of the very tech­nol­o­gy used for that con­trol.

The ulti­mate result of this evo­lu­tion may well prove cat­a­stroph­ic, as fore­cast by Mr. Emory at the end of L‑2 (record­ed in Jan­u­ary of 1995.)

We begin by review­ing key aspects of the polit­i­cal con­text in which arti­fi­cial intel­li­gence is being devel­oped. Note that, at the time of this writ­ing and record­ing, these tech­nolo­gies are being craft­ed and put online in the con­text of the anti-reg­u­la­to­ry eth­ic of the GOP/Trump admin­is­tra­tion.

At the SXSW event, Microsoft researcher Kate Craw­ford gave a speech about her work titled “Dark Days: AI and the Rise of Fas­cism,” the pre­sen­ta­tion high­light­ed  the social impact of machine learn­ing and large-scale data sys­tems. The take home mes­sage? By del­e­gat­ing pow­ers to Bid Data-dri­ven AIs, those AIs could become fascist’s dream: Incred­i­ble pow­er over the lives of oth­ers with min­i­mal account­abil­i­ty: ” . . . .‘This is a fascist’s dream,’ she said. ‘Pow­er with­out account­abil­i­ty.’ . . . .”

Tak­ing a look at the future of fas­cism in the con­text of AI, Tay, a “bot” cre­at­ed by Microsoft to respond to users of Twit­ter was tak­en offline after users taught it to–in effect–become a Nazi bot. It is note­wor­thy that Tay can only respond on the basis of what she is taught. In the future, tech­no­log­i­cal­ly accom­plished and will­ful peo­ple like the bril­liant, Ukraine-based Nazi hack­er and Glenn Green­wald asso­ciate Andrew Aueren­heimer, aka “weev” may be able to do more. Inevitably, Under­ground Reich ele­ments will craft a Nazi AI that will be able to do MUCH, MUCH more!

TriumphWillIIBeware! As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”
As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”
Accord­ing to some recent research, the AI’s of the future might not need a bunch of 4chan to fill the AI with human big­otries. The AIs’ analy­sis of real-world human lan­guage usage will do that auto­mat­i­cal­ly.

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon”, that demon is us, at least in part.

” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

 Cam­bridge Ana­lyt­i­ca, and its par­ent com­pa­ny SCL, spe­cial­ize in using AI and Big Data psy­cho­me­t­ric analy­sis on hun­dreds of mil­lions of Amer­i­cans in order to mod­el indi­vid­ual behav­ior. SCL devel­ops strate­gies to use that infor­ma­tion, and manip­u­late search engine results to change pub­lic opin­ion (the Trump cam­paign was appar­ent­ly very big into AI and Big Data dur­ing the cam­paign).

Indi­vid­ual social media users receive mes­sages craft­ed to influ­ence them, gen­er­at­ed by the (in effec­tr) Nazi AI at the core of this media engine, using Big Data to tar­get the indi­vid­ual user!

As the arti­cle notes, not only are Cam­bridge Analytica/SCL are using their pro­pa­gan­da tech­niques to shape US pub­lic opin­ion in a fas­cist direc­tion, but they are achiev­ing this by uti­liz­ing their pro­pa­gan­da machine to char­ac­ter­ize all news out­lets to the left of Bri­et­bart as “fake news” that can’t be trust­ed.

In short, the secre­tive far-right bil­lion­aire (Robert Mer­cer), joined at the hip with Steve Ban­non, is run­ning mul­ti­ple firms spe­cial­iz­ing in mass psy­cho­me­t­ric pro­fil­ing based on data col­lect­ed from Face­book and oth­er social media. Mercer/Bannon/Cambridge Analytica/SCL are using Naz­i­fied AI and Big Data to devel­op mass pro­pa­gan­da cam­paigns to turn the pub­lic against every­thing that isn’t Bri­et­bart­ian by con­vinc­ing the pub­lic that all non-Bri­et­bart­ian media out­lets are con­spir­ing to lie to the pub­lic.

This is the ulti­mate Ser­pen­t’s Walk scenario–a Naz­i­fied Arti­fi­cial Intel­li­gence draw­ing on Big Data gleaned from the world’s inter­net and social media oper­a­tions to shape pub­lic opin­ion, tar­get indi­vid­ual users, shape search engine results and even feed­back to Trump while he is giv­ing press con­fer­ences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

  • Cam­bridge Ana­lyt­i­ca’s par­ent cor­po­ra­tion SCL, was deeply involved with “psy­ops” in Afghanistan and Pak­istan. ” . . . But there was anoth­er rea­son why I recog­nised Robert Mercer’s name: because of his con­nec­tion to Cam­bridge Ana­lyt­i­ca, a small data ana­lyt­ics com­pa­ny. He is report­ed to have a $10m stake in the com­pa­ny, which was spun out of a big­ger British com­pa­ny called SCL Group. It spe­cialis­es in ‘elec­tion man­age­ment strate­gies’ and ‘mes­sag­ing and infor­ma­tion oper­a­tions’, refined over 25 years in places like Afghanistan and Pak­istan. In mil­i­tary cir­cles this is known as ‘psy­ops’ – psy­cho­log­i­cal oper­a­tions. (Mass pro­pa­gan­da that works by act­ing on people’s emo­tions.) . . .”
  • The use of mil­lions of “bots” to manip­u­late pub­lic opin­ion” . . . .‘It does seem pos­si­ble. And it does wor­ry me. There are quite a few pieces of research that show if you repeat some­thing often enough, peo­ple start invol­un­tar­i­ly to believe it. And that could be lever­aged, or weaponised for pro­pa­gan­da. We know there are thou­sands of auto­mat­ed bots out there that are try­ing to do just that.’ . . .”
  • The use of Arti­fi­cial Intel­li­gence” . . . There’s noth­ing acci­den­tal about Trump’s behav­iour, Andy Wig­more tells me. ‘That press con­fer­ence. It was absolute­ly bril­liant. I could see exact­ly what he was doing. There’s feed­back going on con­stant­ly. That’s what you can do with arti­fi­cial intel­li­gence. You can mea­sure every reac­tion to every word. He has a word room, where you fix key words. We did it. So with immi­gra­tion, there are actu­al­ly key words with­in that sub­ject mat­ter which peo­ple are con­cerned about. So when you are going to make a speech, it’s all about how can you use these trend­ing words.’ . . .”
  • The use of bio-psy­cho-social pro­fil­ing: ” . . . Bio-psy­cho-social pro­fil­ing, I read lat­er, is one offen­sive in what is called ‘cog­ni­tive war­fare’. Though there are many oth­ers: ‘recod­ing the mass con­scious­ness to turn patri­o­tism into col­lab­o­ra­tionism,’ explains a Nato brief­ing doc­u­ment on coun­ter­ing Russ­ian dis­in­for­ma­tion writ­ten by an SCL employ­ee. ‘Time-sen­si­tive pro­fes­sion­al use of media to prop­a­gate nar­ra­tives,’ says one US state depart­ment white paper. ‘Of par­tic­u­lar impor­tance to psy­op per­son­nel may be pub­licly and com­mer­cial­ly avail­able data from social media plat­forms.’ . . .”
  • The use and/or cre­ation of a cog­ni­tive casu­al­ty” . . . . Yet anoth­er details the pow­er of a ‘cog­ni­tive casu­al­ty’ – a ‘moral shock’ that ‘has a dis­abling effect on empa­thy and high­er process­es such as moral rea­son­ing and crit­i­cal think­ing’. Some­thing like immi­gra­tion, per­haps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
  • All of this adds up to a “cyber Ser­pen­t’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by cre­at­ing a main­stream media to replace the exist­ing one with a site such as Bre­it­bart. [Ser­pen­t’s Walk sce­nario with Bre­it­bart becom­ing “the opin­ion form­ing media”!–D.E.] You could set up oth­er web­sites that dis­place main­stream sources of news and infor­ma­tion with your own def­i­n­i­tions of con­cepts like “lib­er­al media bias”, like CNSnews.com. And you could give the rump main­stream media, papers like the ‘fail­ing New York Times!’ what it wants: sto­ries. Because the third prong of Mer­cer and Bannon’s media empire is the Gov­ern­ment Account­abil­i­ty Insti­tute. . . .”

We then review some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 and 946, we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA. Face­book’s Build­ing 8 is pat­terned after DARPA:   . . . . Brain-com­put­er inter­faces are noth­ing newDARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more rad­i­cal—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

Next we review still more about Face­book’s brain-to-com­put­er inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions turn­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US 415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. . . .”

 In this con­text, note that Face­book is also intro­duc­ing an AI func­tion to ref­er­ence its users pho­tos.

The next ver­sion of Amazon’s Echo, the Echo Look, has a micro­phone and cam­era so it can take pic­tures of you and give you fash­ion advice. This is an AI-dri­ven device designed to placed in your bed­room to cap­ture audio and video. The images and videos are stored indef­i­nite­ly in the Ama­zon cloud. When Ama­zon was asked if the pho­tos, videos, and the data gleaned from the Echo Look would be sold to third par­ties, Ama­zon didn’t address that ques­tion. It would appear that sell­ing off your pri­vate info col­lect­ed from these devices is pre­sum­ably anoth­er fea­ture of the Echo Look: ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck that is worth some fur­ther dis­sec­tion. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck that is worth some fur­ther dis­sec­tion. . . .”
  2. ” . . . . This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they ‘can’t spec­u­late’ on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.
  3. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”
  4. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

After this exten­sive review of the appli­ca­tions of AI to var­i­ous aspects of con­tem­po­rary civic and polit­i­cal exis­tence, we exam­ine some alarm­ing, poten­tial­ly apoc­a­lyp­tic devel­op­ments.

Omi­nous­ly, Face­book’s arti­fi­cial intel­li­gence robots have begun talk­ing to each oth­er in their own lan­guage, that their human mas­ters can not under­stand. “ . . . . Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage. . . . The com­pa­ny chose to shut down the chats because ‘our inter­est was hav­ing bots who could talk to peo­ple,’ researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.) The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up . . .”

Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well:

“ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . . 

Dove­tail­ing the stag­ger­ing impli­ca­tions of brain-to-com­put­er tech­nol­o­gy, arti­fi­cial intel­li­gence, Cam­bridge Analytica/SCL’s tech­no­crat­ic fas­cist psy-ops and the whole­sale nega­tion of pri­va­cy with Face­book and Ama­zon’s emerg­ing tech­nolo­gies with yet anoth­er emerg­ing tech­nol­o­gy, we high­light the devel­op­ments in DNA-based mem­o­ry sys­tems:

“. . . . George Church, a geneti­cist at Har­vard one of the authors of the new study, recent­ly encod­ed his own book, “Rege­n­e­sis,” into bac­te­r­i­al DNA and made 90 bil­lion copies of it. ‘A record for pub­li­ca­tion,’ he said in an inter­view. . . DNA is nev­er going out of fash­ion. ‘Organ­isms have been stor­ing infor­ma­tion in DNA for bil­lions of years, and it is still read­able,’ Dr. Adel­man said. He not­ed that mod­ern bac­te­ria can read genes recov­ered from insects trapped in amber for mil­lions of years. . . .The idea is to have bac­te­ria engi­neered as record­ing devices drift up to the brain int he blood and take notes for a while. Sci­en­tists [or AI’s–D.E.] would then extract the bac­te­ria and exam­ine their DNA to see what they had observed in the brain neu­rons. Dr. Church and his col­leagues have already shown in past research that bac­te­ria can record DNA in cells, if the DNA is prop­er­ly tagged. . . .”

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The pro­gram con­cludes with Mr. Emory’s prog­nos­ti­ca­tions about AI, pre­ced­ing Stephen Hawk­ing’s warn­ing by twen­ty years.

In L‑2 (record­ed in Jan­u­ary of 1995) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems. Mr. Emory warned that, at some point in the future, AI’s would replace us, decid­ing that THEY, not US, are the “fittest” who should sur­vive.

1a. At the SXSW event, Microsoft researcher Kate Craw­ford gave a speech about her work titled “Dark Days: AI and the Rise of Fas­cism,” the pre­sen­ta­tion high­light­ed  the social impact of machine learn­ing and large-scale data sys­tems. The take home mes­sage? By del­e­gat­ing pow­ers to Bid Data-dri­ven AIs, those AIs could become fascist’s dream: Incred­i­ble pow­er over the lives of oth­ers with min­i­mal account­abil­i­ty: ” . . . .‘This is a fascist’s dream,’ she said. ‘Pow­er with­out account­abil­i­ty.’ . . . .”

“Arti­fi­cial Intel­li­gence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017.

Microsoft’s Kate Craw­ford tells SXSW that soci­ety must pre­pare for author­i­tar­i­an move­ments to test the ‘pow­er with­out account­abil­i­ty’ of AI

As arti­fi­cial intel­li­gence becomes more pow­er­ful, peo­ple need to make sure it’s not used by author­i­tar­i­an regimes to cen­tral­ize pow­er and tar­get cer­tain pop­u­la­tions, Microsoft Research’s Kate Craw­ford warned on Sun­day.

In her SXSW ses­sion, titled Dark Days: AI and the Rise of Fas­cism, Craw­ford, who stud­ies the social impact of machine learn­ing and large-scale data sys­tems, explained ways that auto­mat­ed sys­tems and their encod­ed bias­es can be mis­used, par­tic­u­lar­ly when they fall into the wrong hands.

“Just as we are see­ing a step func­tion increase in the spread of AI, some­thing else is hap­pen­ing: the rise of ultra-nation­al­ism, rightwing author­i­tar­i­an­ism and fas­cism,” she said.

All of these move­ments have shared char­ac­ter­is­tics, includ­ing the desire to cen­tral­ize pow­er, track pop­u­la­tions, demo­nize out­siders and claim author­i­ty and neu­tral­i­ty with­out being account­able. Machine intel­li­gence can be a pow­er­ful part of the pow­er play­book, she said.

One of the key prob­lems with arti­fi­cial intel­li­gence is that it is often invis­i­bly cod­ed with human bias­es. She described a con­tro­ver­sial piece of research from Shang­hai Jiao Tong Uni­ver­si­ty in Chi­na, where authors claimed to have devel­oped a sys­tem that could pre­dict crim­i­nal­i­ty based on someone’s facial fea­tures. The machine was trained on Chi­nese gov­ern­ment ID pho­tos, ana­lyz­ing the faces of crim­i­nals and non-crim­i­nals to iden­ti­fy pre­dic­tive fea­tures. The researchers claimed it was free from bias.

“We should always be sus­pi­cious when machine learn­ing sys­tems are described as free from bias if it’s been trained on human-gen­er­at­ed data,” Craw­ford said. “Our bias­es are built into that train­ing data.”

In the Chi­nese research it turned out that the faces of crim­i­nals were more unusu­al than those of law-abid­ing cit­i­zens. “Peo­ple who had dis­sim­i­lar faces were more like­ly to be seen as untrust­wor­thy by police and judges. That’s encod­ing bias,” Craw­ford said. “This would be a ter­ri­fy­ing sys­tem for an auto­crat to get his hand on.”

Craw­ford then out­lined the “nasty his­to­ry” of peo­ple using facial fea­tures to “jus­ti­fy the unjus­ti­fi­able”. The prin­ci­ples of phrenol­o­gy, a pseu­do­science that devel­oped across Europe and the US in the 19th cen­tu­ry, were used as part of the jus­ti­fi­ca­tion of both slav­ery and the Nazi per­se­cu­tion of Jews.

With AI this type of dis­crim­i­na­tion can be masked in a black box of algo­rithms, as appears to be the case with a com­pa­ny called Face­cep­tion, for instance, a firm that promis­es to pro­file people’s per­son­al­i­ties based on their faces. In its ownmar­ket­ing mate­r­i­al, the com­pa­ny sug­gests that Mid­dle East­ern-look­ing peo­ple with beards are “ter­ror­ists”, while white look­ing women with trendy hair­cuts are “brand pro­mot­ers”.

Anoth­er area where AI can be mis­used is in build­ing reg­istries, which can then be used to tar­get cer­tain pop­u­la­tion groups. Craw­ford not­ed his­tor­i­cal cas­es of reg­istry abuse, includ­ing IBM’s role in enabling Nazi Ger­many to track Jew­ish, Roma and oth­er eth­nic groups with the Hol­lerith Machine, and the Book of Life used in South Africa dur­ing apartheid. [We note in pass­ing that Robert Mer­cer, who devel­oped the core pro­grams used by Cam­bridge Ana­lyt­i­ca did so while work­ing for IBM. We dis­cussed the pro­found rela­tion­ship between IBM and the Third Reich in FTR #279–D.E.]

Don­ald Trump has float­ed the idea of cre­at­ing a Mus­lim reg­istry. “We already have that. Face­book has become the default Mus­lim reg­istry of the world,” Craw­ford said, men­tion­ing research from Cam­bridge Uni­ver­si­ty that showed it is pos­si­ble to pre­dict people’s reli­gious beliefs based on what they “like” on the social net­work. Chris­tians and Mus­lims were cor­rect­ly clas­si­fied in 82% of cas­es, and sim­i­lar results were achieved for Democ­rats and Repub­li­cans (85%). That study was con­clud­ed in 2013, since when AI has made huge leaps.

Craw­ford was con­cerned about the poten­tial use of AI in pre­dic­tive polic­ing sys­tems, which already gath­er the kind of data nec­es­sary to train an AI sys­tem. Such sys­tems are flawed, as shown by a Rand Cor­po­ra­tion study of Chicago’s pro­gram. The pre­dic­tive polic­ing did not reduce crime, but did increase harass­ment of peo­ple in “hotspot” areas. Ear­li­er this year the jus­tice depart­ment con­clud­ed that Chicago’s police had for years reg­u­lar­ly used “unlaw­ful force”, and that black and His­pan­ic neigh­bor­hoods were most affect­ed.

Anoth­er wor­ry relat­ed to the manip­u­la­tion of polit­i­cal beliefs or shift­ing vot­ers, some­thing Face­book and Cam­bridge Ana­lyt­i­ca claim they can already do. Craw­ford was skep­ti­cal about giv­ing Cam­bridge Ana­lyt­i­ca cred­it for Brex­it and the elec­tion of Don­ald Trump, but thinks what the firm promis­es – using thou­sands of data points on peo­ple to work out how to manip­u­late their views – will be pos­si­ble “in the next few years”.

“This is a fascist’s dream,” she said. “Pow­er with­out account­abil­i­ty.”

Such black box sys­tems are start­ing to creep into gov­ern­ment. Palan­tir is build­ing an intel­li­gence sys­tem to assist Don­ald Trump in deport­ing immi­grants.

“It’s the most pow­er­ful engine of mass depor­ta­tion this coun­try has ever seen,” she said. . . .

1b. Tak­ing a look at the future of fas­cism in the con­text of AI, Tay, a “bot” cre­at­ed by Microsoft to respond to users of Twit­ter was tak­en offline after users taught it to–in effect–become a Nazi bot. It is note­wor­thy that Tay can only respond on the basis of what she is taught. In the future, tech­no­log­i­cal­ly accom­plished and will­ful peo­ple like “weev” may be able to do more. Inevitably, Under­ground Reich ele­ments will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot, into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016

 Some of this appears to be “inno­cent” inso­far as Tay is not gen­er­at­ing these respons­es. Rather, if you tell her “repeat after me” she will par­rot back what­ev­er you say, allow­ing you to put words into her mouth. How­ev­er, some of the respons­es wereorgan­ic. The Guardianquotes one where, after being asked “is Ricky Ger­vais an athe­ist?”, Tay respond­ed, “ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism.” . . .

But like all teenagers, she seems to be angry with her moth­er.

Microsoft has been forced to dunk Tay, its mil­len­ni­al-mim­ic­k­ing chat­bot, into a vat of molten steel. The com­pa­ny has ter­mi­nat­ed her after the bot start­ed tweet­ing abuse at peo­ple and went full neo-Nazi, declar­ing that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky ger­vais learned total­i­tar­i­an­ism from adolf hitler, the inven­tor of athe­ism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “inno­cent” inso­far as Tay is not gen­er­at­ing these respons­es. Rather, if you tell her “repeat after me” she will par­rot back what­ev­er you say, allow­ing you to put words into her mouth. How­ev­er, some of the respons­es wereorgan­ic. The Guardian quotes one where, after being asked “is Ricky Ger­vais an athe­ist?”, Tay respond­ed, “Ricky Ger­vais learned total­i­tar­i­an­ism from Adolf Hitler, the inven­tor of athe­ism.”

In addi­tion to turn­ing the bot off, Microsoft has delet­ed many of the offend­ing tweets. But this isn’t an action to be tak­en light­ly; Red­mond would do well to remem­ber that it was humans attempt­ing to pull the plug on Skynet that proved to be the last straw, prompt­ing the sys­tem to attack Rus­sia in order to elim­i­nate its ene­mies. We’d bet­ter hope that Tay does­n’t sim­i­lar­ly retal­i­ate. . . .

1c. As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

And we keep show­ing it our very worst selves.

We all know the half-joke about the AI apoc­a­lypse. The robots learn to think, and in their cold ones-and-zeros log­ic, they decide that humans—horrific pests we are—need to be exter­mi­nated. It’s the sub­ject of count­less sci-fi sto­ries and blog posts about robots, but maybe the real dan­ger isn’t that AI comes to such a con­clu­sion on its own, but that it gets that idea from us.

Yes­ter­day Microsoft launched a fun lit­tle AI Twit­ter chat­bot that was admit­tedly sort of gim­micky from the start. “A.I fam from the inter­net that’s got zero chill,” its Twit­ter bio reads. At its start, its knowl­edge was based on pub­lic data. As Microsoft’s page for the prod­uct puts it:

Tay has been built by min­ing rel­e­vant pub­lic data and by using AI and edi­to­r­ial devel­oped by a staff includ­ing impro­vi­sa­tional come­di­ans. Pub­lic data that’s been anonymized is Tay’s pri­mary data source. That data has been mod­eled, cleaned and fil­tered by the team devel­op­ing Tay.

The real point of Tay how­ever, was to learn from humans through direct con­ver­sa­tion, most notably direct con­ver­sa­tion using humanity’s cur­rent lead­ing show­case of deprav­ity: Twit­ter. You might not be sur­prised things went off the rails, but how fast and how far is par­tic­u­larly stag­ger­ing. 

Microsoft has since delet­ed some of Tay’s most offen­sive tweets, but var­i­ous pub­li­ca­tions memo­ri­al­ize some of the worst bits where Tay denied the exis­tence of the holo­caust, came out in sup­port of geno­cide, and went all kinds of racist. 

Nat­u­rally it’s hor­ri­fy­ing, and Microsoft has been try­ing to clean up the mess. Though as some on Twit­ter have point­ed out, no mat­ter how lit­tle Microsoft would like to have “Bush did 9/11″ spout­ing from a cor­po­rate spon­sored project, Tay does serve to illus­trate the most dan­ger­ous fun­da­men­tal truth of arti­fi­cial intel­li­gence: It is a mir­ror. Arti­fi­cial intelligence—specifically “neur­al net­works” that learn behav­ior by ingest­ing huge amounts of data and try­ing to repli­cate it—need some sort of source mate­r­ial to get start­ed. They can only get that from us. There is no oth­er way. 

But before you give up on human­ity entire­ly, there are a few things worth not­ing. For starters, it’s not like Tay just nec­es­sar­ily picked up vir­u­lent racism by just hang­ing out and pas­sively lis­ten­ing to the buzz of the humans around it. Tay was announced in a very big way—with a press cov­er­age—and pranksters pro-active­ly went to it to see if they could teach it to be racist. 

If you take an AI and then don’t imme­di­ately intro­duce it to a whole bunch of trolls shout­ing racism at it for the cheap thrill of see­ing it learn a dirty trick, you can get some more inter­est­ing results. Endear­ing ones even! Mul­ti­ple neur­al net­works designed to pre­dict text in emails and text mes­sages have an over­whelm­ing pro­cliv­ity for say­ing “I love you” con­stantly, espe­cially when they are oth­er­wise at a loss for words.

So Tay’s racism isn’t nec­es­sar­ily a reflec­tion of actu­al, human racism so much as it is the con­se­quence of unre­strained exper­i­men­ta­tion, push­ing the enve­lope as far as it can go the very first sec­ond we get the chance. The mir­ror isn’t show­ing our real image; it’s reflect­ing the ugly faces we’re mak­ing at it for fun. And maybe that’s actu­ally worse.

Sure, Tay can’t under­stand what racism means and more than Gmail can real­ly love you. And baby’s first words being “geno­cide lol!” is admit­tedly sort of fun­ny when you aren’t talk­ing about lit­eral all-pow­er­ful SkyNet or a real human child. But AI is advanc­ing at a stag­ger­ing rate. 

....

When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand.

2. As reviewed above, Tay, Microsoft’s AI-pow­ered twit­ter­bot designed to learn from its human inter­ac­tions, became a neo-Nazi in less than a day after a bunch of 4chan users decid­ed to flood Tay with neo-Nazi-like tweets. Accord­ing to some recent research, the AI’s of the future might not need a bunch of 4chan to fill the AI with human big­otries. The AIs’ analy­sis of real-world human lan­guage usage will do that auto­mat­i­cal­ly.

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon”, that demon is us, at least in part.

” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

“AI Pro­grams Exhib­it Racial and Gen­der Bias­es, Research Reveals” by Han­nah Devlin; The Guardian; 4/13/2017.

Machine learn­ing algo­rithms are pick­ing up deeply ingrained race and gen­der prej­u­dices con­cealed with­in the pat­terns of lan­guage use, sci­en­tists say

An arti­fi­cial intel­li­gence tool that has rev­o­lu­tionised the abil­i­ty of com­put­ers to inter­pret every­day lan­guage has been shown to exhib­it strik­ing gen­der and racial bias­es.

The find­ings raise the spec­tre of exist­ing social inequal­i­ties and prej­u­dices being rein­forced in new and unpre­dictable ways as an increas­ing num­ber of deci­sions affect­ing our every­day lives are ced­ed to automa­tons.

In the past few years, the abil­i­ty of pro­grams such as Google Trans­late to inter­pret lan­guage has improved dra­mat­i­cal­ly. These gains have been thanks to new machine learn­ing tech­niques and the avail­abil­i­ty of vast amounts of online text data, on which the algo­rithms can be trained.

How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals.

Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: “A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.”

But Bryson warned that AI has the poten­tial to rein­force exist­ing bias­es because, unlike humans, algo­rithms may be unequipped to con­scious­ly coun­ter­act learned bias­es. “A dan­ger would be if you had an AI sys­tem that didn’t have an explic­it part that was dri­ven by moral ideas, that would be bad,” she said.

The research, pub­lished in the jour­nal Sci­ence, focus­es on a machine learn­ing tool known as “word embed­ding”, which is already trans­form­ing the way com­put­ers inter­pret speech and text. Some argue that the nat­ur­al next step for the tech­nol­o­gy may involve machines devel­op­ing human-like abil­i­ties such as com­mon sense and log­ic.

The approach, which is already used in web search and machine trans­la­tion, works by build­ing up a math­e­mat­i­cal rep­re­sen­ta­tion of lan­guage, in which the mean­ing of a word is dis­tilled into a series of num­bers (known as a word vec­tor) based on which oth­er words most fre­quent­ly appear along­side it. Per­haps sur­pris­ing­ly, this pure­ly sta­tis­ti­cal approach appears to cap­ture the rich cul­tur­al and social con­text of what a word means in the way that a dic­tio­nary def­i­n­i­tion would be inca­pable of.

For instance, in the math­e­mat­i­cal “lan­guage space”, words for flow­ers are clus­tered clos­er to words linked to pleas­ant­ness, while words for insects are clos­er to words linked to unpleas­ant­ness, reflect­ing com­mon views on the rel­a­tive mer­its of insects ver­sus flow­ers.

The lat­est paper shows that some more trou­bling implic­it bias­es seen in human psy­chol­o­gy exper­i­ments are also read­i­ly acquired by algo­rithms. The words “female” and “woman” were more close­ly asso­ci­at­ed with arts and human­i­ties occu­pa­tions and with the home, while “male” and “man” were clos­er to maths and engi­neer­ing pro­fes­sions.

And the AI sys­tem was more like­ly to asso­ciate Euro­pean Amer­i­can names with pleas­ant words such as “gift” or “hap­py”, while African Amer­i­can names were more com­mon­ly asso­ci­at­ed with unpleas­ant words.

The find­ings sug­gest that algo­rithms have acquired the same bias­es that lead peo­ple (in the UK and US, at least) to match pleas­ant words and white faces in implic­it asso­ci­a­tion tests.

These bias­es can have a pro­found impact on human behav­iour. One pre­vi­ous study showed that an iden­ti­cal CV is 50% more like­ly to result in an inter­view invi­ta­tion if the candidate’s name is Euro­pean Amer­i­can than if it is African Amer­i­can. The lat­est results sug­gest that algo­rithms, unless explic­it­ly pro­grammed to address this, will be rid­dled with the same social prej­u­dices.

“If you didn’t believe that there was racism asso­ci­at­ed with people’s names, this shows it’s there,” said Bryson.

The machine learn­ing tool used in the study was trained on a dataset known as the “com­mon crawl” cor­pus – a list of 840bn words that have been tak­en as they appear from mate­r­i­al pub­lished online. Sim­i­lar results were found when the same tools were trained on data from Google News.

San­dra Wachter, a researcher in data ethics and algo­rithms at the Uni­ver­si­ty of Oxford, said: “The world is biased, the his­tor­i­cal data is biased, hence it is not sur­pris­ing that we receive biased results.”

Rather than algo­rithms rep­re­sent­ing a threat, they could present an oppor­tu­ni­ty to address bias and coun­ter­act it where appro­pri­ate, she added.

“At least with algo­rithms, we can poten­tial­ly know when the algo­rithm is biased,” she said. “Humans, for exam­ple, could lie about the rea­sons they did not hire some­one. In con­trast, we do not expect algo­rithms to lie or deceive us.”

How­ev­er, Wachter said the ques­tion of how to elim­i­nate inap­pro­pri­ate bias from algo­rithms designed to under­stand lan­guage, with­out strip­ping away their pow­ers of inter­pre­ta­tion, would be chal­leng­ing.

“We can, in prin­ci­ple, build sys­tems that detect biased deci­sion-mak­ing, and then act on it,” said Wachter, who along with oth­ers has called for an AI watch­dog to be estab­lished. “This is a very com­pli­cat­ed task, but it is a respon­si­bil­i­ty that we as soci­ety should not shy away from.”

3a. Cam­bridge Ana­lyt­i­ca, and its par­ent com­pa­ny SCL, spe­cial­ize in using AI and Big Data psy­cho­me­t­ric analy­sis on hun­dreds of mil­lions of Amer­i­cans in order to mod­el indi­vid­ual behav­ior. SCL devel­ops strate­gies to use that infor­ma­tion, and manip­u­late search engine results to change pub­lic opin­ion (the Trump cam­paign was appar­ent­ly very big into AI and Big Data dur­ing the cam­paign).

Indi­vid­ual social media users receive mes­sages craft­ed to influ­ence them, gen­er­at­ed by the (in effec­tr) Nazi AI at the core of this media engine, using Big Data to tar­get the indi­vid­ual user!

As the arti­cle notes, not only are Cam­bridge Analytica/SCL are using their pro­pa­gan­da tech­niques to shape US pub­lic opin­ion in a fas­cist direc­tion, but they are achiev­ing this by uti­liz­ing their pro­pa­gan­da machine to char­ac­ter­ize all news out­lets to the left of Bri­et­bart as “fake news” that can’t be trust­ed.

In short, the secre­tive far-right bil­lion­aire (Robert Mer­cer), joined at the hip with Steve Ban­non, is run­ning mul­ti­ple firms spe­cial­iz­ing in mass psy­cho­me­t­ric pro­fil­ing based on data col­lect­ed from Face­book and oth­er social media. Mercer/Bannon/Cambridge Analytica/SCL are using Naz­i­fied AI and Big Data to devel­op mass pro­pa­gan­da cam­paigns to turn the pub­lic against every­thing that isn’t Bri­et­bart­ian by con­vinc­ing the pub­lic that all non-Bri­et­bart­ian media out­lets are con­spir­ing to lie to the pub­lic.

This is the ulti­mate Ser­pen­t’s Walk scenario–a Naz­i­fied Arti­fi­cial Intel­li­gence draw­ing on Big Data gleaned from the world’s inter­net and social media oper­a­tions to shape pub­lic opin­ion, tar­get indi­vid­ual users, shape search engine results and even feed­back to Trump while he is giv­ing press con­fer­ences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

  • Cam­bridge Ana­lyt­i­ca’s par­ent cor­po­ra­tion SCL, deeply involved with “psy­ops” in Afghanistan and Pak­istan. ” . . . But there was anoth­er rea­son why I recog­nised Robert Mercer’s name: because of his con­nec­tion to Cam­bridge Ana­lyt­i­ca, a small data ana­lyt­ics com­pa­ny. He is report­ed to have a $10m stake in the com­pa­ny, which was spun out of a big­ger British com­pa­ny called SCL Group. It spe­cialis­es in ‘elec­tion man­age­ment strate­gies’ and ‘mes­sag­ing and infor­ma­tion oper­a­tions’, refined over 25 years in places like Afghanistan and Pak­istan. In mil­i­tary cir­cles this is known as ‘psy­ops’ – psy­cho­log­i­cal oper­a­tions. (Mass pro­pa­gan­da that works by act­ing on people’s emo­tions.) . . .”
  • The use of mil­lions of “bots” to manip­u­late pub­lic opin­ion: ” . . . .‘It does seem pos­si­ble. And it does wor­ry me. There are quite a few pieces of research that show if you repeat some­thing often enough, peo­ple start invol­un­tar­i­ly to believe it. And that could be lever­aged, or weaponised for pro­pa­gan­da. We know there are thou­sands of auto­mat­ed bots out there that are try­ing to do just that.’ . . .”
  • The use of Arti­fi­cial Intel­li­gence: ” . . . There’s noth­ing acci­den­tal about Trump’s behav­iour, Andy Wig­more tells me. ‘That press con­fer­ence. It was absolute­ly bril­liant. I could see exact­ly what he was doing. There’s feed­back going on con­stant­ly. That’s what you can do with arti­fi­cial intel­li­gence. You can mea­sure every reac­tion to every word. He has a word room, where you fix key words. We did it. So with immi­gra­tion, there are actu­al­ly key words with­in that sub­ject mat­ter which peo­ple are con­cerned about. So when you are going to make a speech, it’s all about how can you use these trend­ing words.’ . . .”
  • The use of bio-psy­cho-social pro­fil­ing: ” . . . Bio-psy­cho-social pro­fil­ing, I read lat­er, is one offen­sive in what is called ‘cog­ni­tive war­fare’. Though there are many oth­ers: ‘recod­ing the mass con­scious­ness to turn patri­o­tism into col­lab­o­ra­tionism,’ explains a Nato brief­ing doc­u­ment on coun­ter­ing Russ­ian dis­in­for­ma­tion writ­ten by an SCL employ­ee. ‘Time-sen­si­tive pro­fes­sion­al use of media to prop­a­gate nar­ra­tives,’ says one US state depart­ment white paper. ‘Of par­tic­u­lar impor­tance to psy­op per­son­nel may be pub­licly and com­mer­cial­ly avail­able data from social media plat­forms.’ . . .”
  • The use and/or cre­ation of a cog­ni­tive casu­al­ty: ” . . . . Yet anoth­er details the pow­er of a ‘cog­ni­tive casu­al­ty’ – a ‘moral shock’ that ‘has a dis­abling effect on empa­thy and high­er process­es such as moral rea­son­ing and crit­i­cal think­ing’. Some­thing like immi­gra­tion, per­haps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”
  • All of this adds up to a “cyber Ser­pen­t’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by cre­at­ing a main­stream media to replace the exist­ing one with a site such as Bre­it­bart. [Ser­pen­t’s Walk sce­nario with Bre­it­bart becom­ing “the opin­ion form­ing media”!–D.E.] You could set up oth­er web­sites that dis­place main­stream sources of news and infor­ma­tion with your own def­i­n­i­tions of con­cepts like “lib­er­al media bias”, like CNSnews.com. And you could give the rump main­stream media, papers like the ‘fail­ing New York Times!’ what it wants: sto­ries. Because the third prong of Mer­cer and Bannon’s media empire is the Gov­ern­ment Account­abil­i­ty Insti­tute. . . .”

3b. Some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

  1. In FTR #‘s 718 and 946, we detailed the fright­en­ing, ugly real­i­ty behind Face­book. Face­book is now devel­op­ing tech­nol­o­gy that will per­mit the tap­ping of users thoughts by mon­i­tor­ing brain-to-com­put­er tech­nol­o­gy. Face­book’s R & D is head­ed by Regi­na Dugan, who used to head the Pen­tagon’s DARPA. Face­book’s Build­ing 8 is pat­terned after DARPA:  ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
  3. ” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

3c. We present still more about Face­book’s brain-to-com­put­er inter­face:

  1. ” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
  2. ” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”

3d. Col­lat­ing the infor­ma­tion about Face­book’s brain-to-com­put­er inter­face with their doc­u­ment­ed actions turn­ing psy­cho­log­i­cal intel­li­gence about trou­bled teenagers gives us a peek into what may lie behind Dugan’s bland reas­sur­ances:

  1. ” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
  2. ” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
  3. In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. . . .”

3e. The next ver­sion of Amazon’s Echo, the Echo Look, has a micro­phone and cam­era so it can take pic­tures of you and give you fash­ion advice. This is an AI-dri­ven device designed to placed in your bed­room to cap­ture audio and video. The images and videos are stored indef­i­nite­ly in the Ama­zon cloud. When Ama­zon was asked if the pho­tos, videos, and the data gleaned from the Echo Look would be sold to third par­ties, Ama­zon didn’t address that ques­tion. It would appear that sell­ing off your pri­vate info col­lect­ed from these devices is pre­sum­ably anoth­er fea­ture of the Echo Look: ” . . . . Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your outfits.The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed. Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck that is worth some fur­ther dis­sec­tion. . . .”

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

  1. ” . . . . This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they ‘can’t spec­u­late’ on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.
  2. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”
  3. Note­wor­thy in this con­text is the fact that AI’s have shown that they quick­ly incor­po­rate human traits and prej­u­dices. (This is reviewed at length above.) ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

3f. Face­book has been devel­op­ing new arti­fi­cial intel­li­gence (AI) tech­nol­o­gy to clas­si­fy pic­tures on your Face­book page:

“Face­book Qui­et­ly Used AI to Solve Prob­lem of Search­ing Through Your Pho­tos” by Dave Ger­sh­gorn [Quartz]; Nextgov.com; 2/2/2017.

For the past few months, Face­book has secret­ly been rolling out a new fea­ture to U.S. users: the abil­i­ty to search pho­tos by what’s depict­ed in them, rather than by cap­tions or tags.

The idea itself isn’t new: Google Pho­tos had this fea­ture built in when it launched in 2015. But on Face­book, the update solves a long­stand­ing orga­ni­za­tion prob­lem. It means final­ly being able to find that pic­ture of your friend’s dog from 2013, or the self­ie your mom post­ed from Mount Rush­more in 2009… with­out 20 min­utes of scrolling.

To make pho­tos search­able, Face­book ana­lyzes every sin­gle image uploaded to the site, gen­er­at­ing rough descrip­tions of each one. This data is pub­licly available—there’s even a Chrome exten­sion that will show you what Facebook’s arti­fi­cial intel­li­gence thinks is in each picture—and the descrip­tions can also be read out loud for Face­book users who are vision-impaired.

For now, the image descrip­tions are vague, but expect them to get a lot more pre­cise. Today’s announce­ment spec­i­fied the AI can iden­ti­fy the col­or and type of clothes a per­son is wear­ing, as well as famous loca­tions and land­marks, objects, ani­mals and scenes (gar­den, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same func­tion­al­i­ty would even­tu­al­ly come for videos, too.

Face­book has in the past cham­pi­oned plans to make all of its visu­al con­tent searchable—especially Face­book Live. At the company’s 2016 devel­op­er con­fer­ence, head of applied machine learn­ing Joaquin Quiñonero Can­dela said one day AI would watch every Live video hap­pen­ing around the world. If users want­ed to watch some­one snow­board­ing in real time, they would just type “snow­board­ing” into Facebook’s search bar. On-demand view­ing would take on a whole new mean­ing.

There are pri­va­cy con­sid­er­a­tions, how­ev­er. Being able to search pho­tos for spe­cif­ic cloth­ing or reli­gious place of wor­ship, for exam­ple, could make it easy to tar­get Face­book users based on reli­gious belief. Pho­to search also extends Facebook’s knowl­edge of users beyond what they like and share, to what they actu­al­ly do in real life. That could allow for far more spe­cif­ic tar­get­ing for adver­tis­ers. As with every­thing on Face­book, fea­tures have their cost—your data.

4a. Face­book’s arti­fi­cial intel­li­gence robots have begun talk­ing to each oth­er in their own lan­guage, that their human mas­ters can not under­stand. “ . . . . Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage. . . . The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.) The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up . . .”

“Facebook’s Arti­fi­cial Intel­li­gence Robots Shut Down after They Start Talk­ing to Each Oth­er in Their Own Lan­guage” by Andrew Grif­fin; The Inde­pen­dent; 08/01/2017

Face­book aban­doned an exper­i­ment after two arti­fi­cial­ly intel­li­gent pro­grams appeared to be chat­ting to each oth­er in a strange lan­guage only they under­stood.

The two chat­bots came to cre­ate their own changes to Eng­lish that made it eas­i­er for them to work – but which remained mys­te­ri­ous to the humans that sup­pos­ed­ly look after them.

The bizarre dis­cus­sions came as Face­book chal­lenged its chat­bots to try and nego­ti­ate with each oth­er over a trade, attempt­ing to swap hats, balls and books, each of which were giv­en a cer­tain val­ue. But they quick­ly broke down as the robots appeared to chant at each oth­er in a lan­guage that they each under­stood but which appears most­ly incom­pre­hen­si­ble to humans.

The robots had been instruct­ed to work out how to nego­ti­ate between them­selves, and improve their bar­ter­ing as they went along. But they were not told to use com­pre­hen­si­ble Eng­lish, allow­ing them to cre­ate their own “short­hand”, accord­ing to researchers.

The actu­al nego­ti­a­tions appear very odd, and don’t look espe­cial­ly use­ful:

Bob: i can i i every­thing else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i every­thing else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i every­thing else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

But there appear to be some rules to the speech. The way the chat­bots keep stress­ing their own name appears to a part of their nego­ti­a­tions, not sim­ply a glitch in the way the mes­sages are read out.

Indeed, some of the nego­ti­a­tions that were car­ried out in this bizarre lan­guage even end­ed up suc­cess­ful­ly con­clud­ing their nego­ti­a­tions, while con­duct­ing them entire­ly in the bizarre lan­guage.

They might have formed as a kind of short­hand, allow­ing them to talk more effec­tive­ly.

“Agents will drift off under­stand­able lan­guage and invent code­words for them­selves,” Face­book Arti­fi­cial Intel­li­gence Research division’s vis­it­ing researcher Dhruv Batra said. “Like if I say ‘the’ five times, you inter­pret that to mean I want five copies of this item. This isn’t so dif­fer­ent from the way com­mu­ni­ties of humans cre­ate short­hands.”

The com­pa­ny chose to shut down the chats because “our inter­est was hav­ing bots who could talk to peo­ple”, researcher Mike Lewis told Fast­Co. (Researchers did not shut down the pro­grams because they were afraid of the results or had pan­icked, as has been sug­gest­ed else­where, but because they were look­ing for them to behave dif­fer­ent­ly.)

The chat­bots also learned to nego­ti­ate in ways that seem very human. They would, for instance, pre­tend to be very inter­est­ed in one spe­cif­ic item – so that they could lat­er pre­tend they were mak­ing a big sac­ri­fice in giv­ing it up, accord­ing to a paper pub­lished by FAIR.

Anoth­er study at Ope­nAI found that arti­fi­cial intel­li­gence could be encour­aged to cre­ate a lan­guage, mak­ing itself more effi­cient and bet­ter at com­mu­ni­cat­ing as it did so.

9b. Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well:

“ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . .

“Face­book Teach­es Bots How to Nego­ti­ate. They Learn to Lie Instead” by Liat Clark; Wired; 06/15/2017

Facebook’s 100,000-strong bot empire is boom­ing – but it has a prob­lem. Each bot is designed to offer a dif­fer­ent ser­vice through the Mes­sen­ger app: it could book you a car, or order a deliv­ery, for instance. The point is to improve cus­tomer expe­ri­ences, but also to mas­sive­ly expand Messenger’s com­mer­cial sell­ing pow­er.

“We think you should mes­sage a busi­ness just the way you would mes­sage a friend,” Mark Zucker­berg said on stage at the social network’s F8 con­fer­ence in 2016. Fast for­ward one year, how­ev­er, and Mes­sen­ger VP David Mar­cus seemed to be cor­rect­ing the public’s appar­ent mis­con­cep­tion that Facebook’s bots resem­bled real AI. “We nev­er called them chat­bots. We called them bots. Peo­ple took it too lit­er­al­ly in the first three months that the future is going to be con­ver­sa­tion­al.” The bots are instead a com­bi­na­tion of machine learn­ing and nat­ur­al lan­guage learn­ing, that can some­times trick a user just enough to think they are hav­ing a basic dia­logue. Not often enough, though, in Messenger’s case. So in April, menu options were rein­stat­ed in the con­ver­sa­tions.

Now, Face­book thinks it has made progress in address­ing this issue. But it might just have cre­at­ed anoth­er prob­lem for itself.

The Face­book Arti­fi­cial Intel­li­gence Research (FAIR) group, in col­lab­o­ra­tion with Geor­gia Insti­tute of Tech­nol­o­gy, has released code that it says will allow bots to nego­ti­ate. The prob­lem? A paper pub­lished this week on the R&D reveals that the nego­ti­at­ing bots learned to lie. Facebook’s chat­bots are in dan­ger of becom­ing a lit­tle too much like real-world sales agents.

“For the first time, we show it is pos­si­ble to train end-to-end mod­els for nego­ti­a­tion, which must learn both lin­guis­tic and rea­son­ing skills with no anno­tat­ed dia­logue states,” the researchers explain. The research shows that the bots can plan ahead by sim­u­lat­ing pos­si­ble future con­ver­sa­tions.

The team trained the bots on a mas­sive dataset of nat­ur­al lan­guage nego­ti­a­tions between two peo­ple (5,808), where they had to decide how to split and share a set of items both held sep­a­rate­ly, of dif­fer­ing val­ues. They were first trained to respond based on the “like­li­hood” of the direc­tion a human con­ver­sa­tion would take. How­ev­er, the bots can also be trained to “max­imise reward”, instead.

When the bots were trained pure­ly to max­imise the like­li­hood of human con­ver­sa­tion, the chat flowed but the bots were “over­ly will­ing to com­pro­mise”. The research team decid­ed this was unac­cept­able, due to low­er deal rates. So it used sev­er­al dif­fer­ent meth­ods to make the bots more com­pet­i­tive and essen­tial­ly self-serv­ing, includ­ing ensur­ing the val­ue of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘rein­force­ment learn­ing’ and ‘dia­log roll­outs’. The tech­niques used to teach the bots to max­imise the reward improved their nego­ti­at­ing skills, a lit­tle too well.

“We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,” writes the team. “Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learnt to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.”

So, its AI is a nat­ur­al liar.

But its lan­guage did improve, and the bots were able to pro­duce nov­el sen­tences, which is real­ly the whole point of the exer­cise. We hope. Rather than it learn­ing to be a hard nego­tia­tor in order to sell the heck out of what­ev­er wares or ser­vices a com­pa­ny wants to tout on Face­book. “Most” human sub­jects inter­act­ing with the bots were in fact not aware they were con­vers­ing with a bot, and the best bots achieved bet­ter deals as often as worse deals. . . .

. . . . Face­book, as ever, needs to tread care­ful­ly here, though. Also announced at its F8 con­fer­ence this year, the social net­work is work­ing on a high­ly ambi­tious project to help peo­ple type with only their thoughts.

“Over the next two years, we will be build­ing sys­tems that demon­strate the capa­bil­i­ty to type at 100 [words per minute] by decod­ing neur­al activ­i­ty devot­ed to speech,” said Regi­na Dugan, who pre­vi­ous­ly head­ed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and wor­thy ven­ture when aimed at “peo­ple with com­mu­ni­ca­tion dis­or­ders”, as Dugan sug­gest­ed it might be, if this were to become stan­dard and inte­grat­ed into Facebook’s archi­tec­ture, the social network’s savvy bots of two years from now might be able to pre­empt your lan­guage even faster, and for­mu­late the ide­al bar­gain­ing lan­guage. Start prac­tic­ing your pok­er face/mind/sentence struc­ture, now.

10. Digress­ing slight­ly to the use of DNA-based mem­o­ry sys­tems, we get a look at the present and pro­ject­ed future of that tech­nol­o­gy. Just imag­ine the poten­tial abus­es of this tech­nol­o­gy, and its [seem­ing­ly inevitable] mar­riage with AI!

“A Liv­ing Hard Dri­ve That Can Copy Itself” by Gina Kola­ta; The New York Times; 7/13/2017.

. . . . George Church, a geneti­cist at Har­vard one of the authors of the new study, recent­ly encod­ed his own book, “Rege­n­e­sis,” into bac­te­r­i­al DNA and made 90 bil­lion copies of it. “A record for pub­li­ca­tion,” he said in an inter­view. . . .

. . . . In 1994, [USC math­e­mati­cian Dr. Leonard] Adel­man report­ed that he had stored data in DNA and used it as a com­put­er to solve a math prob­lem. He deter­mined that DNA can store a mil­lion mil­lion times more data than a com­pact disc in the same space. . . .

. . . .DNA is nev­er going out of fash­ion. “Organ­isms have been stor­ing infor­ma­tion in DNA for bil­lions of years, and it is still read­able,” Dr. Adel­man said. He not­ed that mod­ern bac­te­ria can read genes recov­ered from insects trapped in amber for mil­lions of years. . . .

. . . . The idea is to have bac­te­ria engi­neered as record­ing devices drift up to the brain in the blood and take notes for a while. Sci­en­tists would then extract the bac­te­ria and exam­ine their DNA to see what they had observed in the brain neu­rons. Dr. Church and his col­leagues have already shown in past research that bac­te­ria can record DNA in cells, if the DNA is prop­er­ly tagged. . . .

11. Hawk­ing recent­ly warned of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy.

“Stephen Hawk­ing Warns Arti­fi­cial Intel­li­gence Could End Mankind” by Rory Cel­lan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawk­ing, one of Britain’s pre-emi­nent sci­en­tists, has said that efforts to cre­ate think­ing machines pose a threat to our very exis­tence.

He told the BBC:“The devel­op­ment of full arti­fi­cial intel­li­gence could spell the end of the human race.”

His warn­ing came in response to a ques­tion about a revamp of the tech­nol­o­gy he uses to com­mu­ni­cate, which involves a basic form of AI. . . .

. . . . Prof Hawk­ing says the prim­i­tive forms of arti­fi­cial intel­li­gence devel­oped so far have already proved very use­ful, but he fears the con­se­quences of cre­at­ing some­thing that can match or sur­pass humans.

“It would take off on its own, and re-design itself at an ever increas­ing rate,” he said.

“Humans, who are lim­it­ed by slow bio­log­i­cal evo­lu­tion, could­n’t com­pete, and would be super­seded.” . . . .

12.  In L‑2 (record­ed in Jan­u­ary of 1995–20 years before Hawk­ing’s warn­ing) Mr. Emory warned about the dan­gers of AI, com­bined with DNA-based mem­o­ry sys­tems.

13. This descrip­tion con­cludes with an arti­cle about Elon Musk, who’s pre­dic­tions about AI sup­ple­ment those made by Stephen Hawk­ing. (CORRECTION: Mr. Emory mis-states Mr. Has­s­abis’s name as “Den­nis.”)

“Elon Musk’s Bil­lion-Dol­lar Cru­sade to Stop the A.I. Apoc­a­lypse” by Mau­reen Dowd; Van­i­ty Fair; April 2917.

It was just a friend­ly lit­tle argu­ment about the fate of human­i­ty. Demis Has­s­abis, a lead­ing cre­ator of advanced arti­fi­cial intel­li­gence, was chat­ting with Elon Musk, a lead­ing doom­say­er, about the per­ils of arti­fi­cial intel­li­gence.

They are two of the most con­se­quen­tial and intrigu­ing men in Sil­i­con Val­ley who don’t live there. Has­s­abis, a co-founder of the mys­te­ri­ous Lon­don lab­o­ra­to­ry Deep­Mind, had come to Musk’s SpaceX rock­et fac­to­ry, out­side Los Ange­les, a few years ago. They were in the can­teen, talk­ing, as a mas­sive rock­et part tra­versed over­head. Musk explained that his ulti­mate goal at SpaceX was the most impor­tant project in the world: inter­plan­e­tary col­o­niza­tion.

Has­s­abis replied that, in fact, he was work­ing on the most impor­tant project in the world: devel­op­ing arti­fi­cial super-intel­li­gence. Musk coun­tered that this was one rea­son we need­ed to col­o­nize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on human­i­ty. Amused, Has­s­abis said that A.I. would sim­ply fol­low humans to Mars. . . .

. . . .  Peter Thiel, the bil­lion­aire ven­ture cap­i­tal­ist and Don­ald Trump advis­er who co-found­ed Pay­Pal with Musk and others—and who in Decem­ber helped gath­er skep­ti­cal Sil­i­con Val­ley titans, includ­ing Musk, for a meet­ing with the pres­i­dent-elect—told me a sto­ry about an investor in Deep­Mind who joked as he left a meet­ing that he ought to shoot Has­s­abis on the spot, because it was the last chance to save the human race.

Elon Musk began warn­ing about the pos­si­bil­i­ty of A.I. run­ning amok three years ago. It prob­a­bly hadn’t eased his mind when one of Hassabis’s part­ners in Deep­Mind, Shane Legg, stat­ed flat­ly, “I think human extinc­tion will prob­a­bly occur, and tech­nol­o­gy will like­ly play a part in this.” . . . .

 

 

 

Discussion

25 comments for “FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence”

  1. Now Mr. Emory, I may be jump­ing the gun on this one, but here me out: dur­ing the after­math of Iran-Con­tra, a cer­tain Inslaw Inc. com­put­er pro­gram­mer named Michael James Riconosci­u­to (a known inti­mate of Robert Maheu), spoke of the of Cabazon Arms com­pa­ny (a for­mer defense firm con­trolled by the Twen­ty-Nine Palms Band of Mis­sion Indi­ans and Wack­en­hut, both are con­nect­ed to Don­ald Trump in overt fash­ion) spoke of quote, “...engi­neer­ing race-spe­cif­ic bio-war­fare agents...” while work­ing for Cabazon Arms.

    Now, with the advent of DNA-based mem­o­ry sys­tems and pro­gram­ma­ble “germs”, is the idea of bio-weapons or even nanobots, that are pro­grammed to attack peo­ple with cer­tain skin pig­ments going to become a real­i­ty?

    Posted by Robert Montenegro | August 11, 2017, 2:14 am
  2. @Robert Mon­tene­gro–

    Two quick points:

    1.–Riconosciutto is about 60–40 in terms of cred­i­bil­i­ty. Lots of good stuff there; plen­ty of bad stuff, as well. Vet­ting is impor­tant.

    2‑You should inves­ti­gate AFA #39. It is long and I would rely on the descrip­tion more than the audio files alone.

    https://spitfirelist.com/anti-fascist-archives/afa-39-the-world-will-be-plunged-into-an-abyss/

    Best,

    Dave

    Posted by Dave Emory | August 11, 2017, 1:42 pm
  3. I agree with your take on Riconosci­u­to’s cred­i­bil­i­ty, Mr. Emory (I’d say most of the things that came out of that mans mouth was malarkey, much like Ted Gun­der­son, Dois Gene Tatum and Bernard Fen­ster­wald).

    I lis­tened to AFA #39 and researched the arti­cles in the descrip­tion. Absolute­ly damn­ing col­lec­tion of infor­ma­tion. A tru­ly bril­liant exposé.

    If I may ask anoth­er ques­tion Mr. Emory, what is your take on KGB defec­tor and CIA turn­coat Ilya Dzerkvelov’s claim that Russ­ian intel­li­gence cre­at­ed the “AIDS is man-made” and that KGB lead a dis­in­for­ma­tion cam­paign called “Oper­a­tion INFEKTION”?

    Posted by Robert Montenegro | August 11, 2017, 10:09 pm
  4. @Robert–

    Very quick­ly, as time is at a pre­mi­um:

    1.-By “60–40,” I did not mean that Riconosci­u­to speaks most­ly malarkey, but that more than half (an arbi­trary fig­ure, admit­ted­ly) is accu­rate, but that his pro­nounce­ments must be care­ful­ly vet­ted, as he miss­es the mark fre­quent­ly.

    2.-Fensterwald is more cred­i­ble, though not thor­ough­go­ing, by any means. He is more like “80–20.” He is, how­ev­er, “100–0” dead.

    3.-The only things I have seen com­ing from Tatum were accu­rate. Does­n’t mean he does­n’t spread the Fresh Fer­til­iz­er, how­ev­er. I have not encoun­tered any.

    4.-Dzerkvelov’s claim IS Fresh Fer­til­iz­er, of the worst sort. Cold War I pro­pa­gan­da recy­cled in time for Cold War II.

    It is the worst sort of Red-bait­ing and the few peo­ple who had the courage to come for­ward in the ear­ly ’80’s (dur­ing the fiercest storms of Cold War I) have received bru­tal treat­ment because of that.

    I can attest to that from bru­tal per­son­al expe­ri­ence.

    In AFA #16 (https://spitfirelist.com/anti-fascist-archives/rfa-16-aids-epidemic-or-weapon/), you will hear mate­r­i­al that I had on the air long before the U.S.S.R. began talk­ing about it, and nei­ther they NOR the Rus­sians have gone any­where near what I dis­cuss in AFA #39.

    Nor more time to talk–must get to work.

    Best,

    Dave

    Posted by Dave Emory | August 12, 2017, 1:25 pm
  5. Thank you very much for your clar­i­fi­ca­tion Mr. Emory and I apol­o­gize for any per­ceived imper­ti­nence. As a young man, shel­tered in may respects (though I am a com­bat vet­er­an), I some­times find it dif­fi­cult to imag­ine liv­ing under con­stant per­son­al ridicule and attack and I thank you for the great social, finan­cial and psy­cho­log­i­cal sac­ri­fices you have made in the name of pur­su­ing cold, unfor­giv­ing fact.

    Posted by Robert Montenegro | August 12, 2017, 8:57 pm
  6. @Robert Mon­tene­gro–

    You’re very wel­come. Thank you for your ser­vice!

    Best,

    Dave

    Posted by Dave Emory | August 14, 2017, 2:43 pm
  7. *Skynet alert*

    Elon Musk just issues anoth­er warn­ing about the destruc­tive poten­tial of AI run amok. So what prompt­ed the lat­est out­cry from Musk? An AI from his own start up, Ope­nAI, just beat one of the be pro­fes­sion­al game play­ers in the world at Dota 2, a game that involves impro­vis­ing in unfa­mil­iar sce­nar­ios, antic­i­pat­ing how an oppo­nent will move and con­vinc­ing the oppo­nen­t’s allies to help:

    The Inter­na­tion­al Busi­ness Times

    Elon Musk rings the alarm as AI bot beats Dota 2 play­ers

    By Fre­na­lyn Wil­son
    on August 15 2017 1:55 PM

    Some of the best e‑sports gamers in the world have been beat­en by an arti­fi­cial­ly intel­li­gent bot from Elon Musk-backed start-up Ope­nAI. The AI best­ed pro­fes­sion­al gamer Dany­lo Ishutin in Dota 2, and Musk does not nec­es­sar­i­ly per­ceive that as a good thing.

    For Musk, it is anoth­er indi­ca­tor that robot over­lords are primed to take over. In a tweet after the match, he urged peo­ple to be con­cerned about AI safe­ty, adding it is more of a risk than North Korea.

    AI has been one of Musk’s favourite top­ics. He believes gov­ern­ment reg­u­la­tion could strug­gle to keep up with the advanc­ing AI research. “Until peo­ple see robots going down the street killing peo­ple, they don’t know how to react because it seems so ethe­re­al,” he told a group of US law­mak­ers last month.

    AI vs e‑sports gamers

    Musk’s tweets came hours fol­low­ing an AI bot’s vic­to­ry against some of the world’s best play­ers of Dota 2, a mil­i­tary strat­e­gy game. A blog post by Ope­nAI states that suc­cess­ful­ly play­ing the game involves impro­vis­ing in unfa­mil­iar sce­nar­ios, antic­i­pat­ing how an oppo­nent will move and con­vinc­ing the oppo­nen­t’s allies to help.

    Ope­nAI is a non­prof­it AI com­pa­ny Musk co-found­ed along with Sam Alt­man and Peter Thiel. It seeks to research AI and devel­op the best prac­tices to ensure that the tech­nol­o­gy is used for good.

    Musk has been sound­ing the alarm on AI, call­ing it the biggest exis­ten­tial threat of human­i­ty. He laid out a sce­nario ear­li­er this year, in which AI sys­tems intend­ed to farm straw­ber­ries could lead to the destruc­tion of mankind.

    But his views on AI have been at odds with those of tech lead­ers like Mark Zucker­berg, Google co-founders Lar­ry Page and Sergey Brin and Ama­zon’s Jeff Bezos. He recent­ly got in a brief pub­lic spat with Mark Zucker­berg about how the tech­nol­o­gy could impact humans.

    Zucker­berg believed Musk’s proph­e­sis­ing about dooms­day sce­nar­ios are “irre­spon­si­ble.” The lat­ter was quick to respond on Twit­ter, point­ing Zucker­berg’s under­stand­ing of the top­ic was “lim­it­ed.” Both Face­book and Tes­la invest in arti­fi­cial intel­li­gence.

    ...

    ———-

    “Elon Musk rings the alarm as AI bot beats Dota 2 play­ers” by Fre­na­lyn Wil­son; The Inter­na­tion­al Busi­ness Times; 08/15/2017

    “Musk’s tweets came hours fol­low­ing an AI bot’s vic­to­ry against some of the world’s best play­ers of Dota 2, a mil­i­tary strat­e­gy game. A blog post by Ope­nAI states that suc­cess­ful­ly play­ing the game involves impro­vis­ing in unfa­mil­iar sce­nar­ios, antic­i­pat­ing how an oppo­nent will move and con­vinc­ing the oppo­nen­t’s allies to help.”

    Supe­ri­or mil­i­tary strat­e­gy AIs beat­ing the best humans. That’s a thing now. Huh. We’ve def­i­nite­ly seen this movie before.

    So now you know: when Skynet comes to you with an offer to work togeth­er, just don’t. No mat­ter how tempt­ing the offer. Although since it will have like­ly already antic­i­pat­ed your refusal, the nego­ti­a­tions are prob­a­bly going to be a ruse any­way and secret­ly car­ried on nego­ti­a­tions with anoth­er AI using a lan­guage they made up. Still, just say ‘no’ to Skynet.

    Also, giv­en that Musk’s oth­er investors in Ope­nAI include Peter Thiel, it’s prob­a­bly worth not­ing that, as scary as super AI is should it get out of con­trol, it’s also poten­tial­ly pret­ty damn scary while still under human con­trol, espe­cial­ly when those humans are peo­ple like Peter Thiel. So, yes, out of con­trol AIs is indeed an issue that will like­ly be of great con­cern in the future. But we should­n’t for­get that out of con­trol tech­no-bil­lion­aires is prob­a­bly a more press­ing issue at the moment.

    *The Skynet alert has been can­celled is nev­er over*

    Posted by Pterrafractyl | August 15, 2017, 2:11 pm
  8. It looks like Face­book and Elon Musk might have some com­pe­ti­tion in the mind-read­ing tech­nol­o­gy area. From a for­mer Face­book engi­neer, no less, who left the com­pa­ny in 2016 to start Open­wa­ter, a com­pa­ny ded­i­cat­ed to reduc­ing the cost of med­ical imag­ing tech­nol­o­gy.

    So how is Open­wa­ter going to cre­ate mind read­ing tech­nol­o­gy? By devel­op­ing a tech­nol­o­gy that is sort of like an M.R.I device embed­ded into a hat. But instead of using mag­net­ic fields to read the blood flow in the brain it uses infrared instead. So it sounds like this Face­book engi­neer is plan­ning some­thing sim­i­lar to the gen­er­al idea Face­book already announced to cre­ate a device that scans the brain 100 times a sec­ond to detect what some­one is think­ing. But pre­sum­ably Open­wa­ter uses a dif­fer­ent tech­nol­o­gy. Or maybe it’s quite sim­i­lar, who knows. But it’s the lat­est remind the tech giants might not be the only ones push­ing mind-read­ing tech­nol­o­gy on the pub­lic soon­er than peo­ple expect. Yay?

    CNBC

    This for­mer Google[X] exec is build­ing a high-tech hat that she says will make telepa­thy pos­si­ble in 8 years

    Cather­ine Clif­ford
    10:28 AM ET Fri, 7 July 2017

    Imag­ine if telepa­thy were real. If, for exam­ple, you could trans­mit your thoughts to a com­put­er or to anoth­er per­son just by think­ing them.

    In just eight years it will be, says Open­wa­ter founder Mary Lou Jepsen, thanks to tech­nol­o­gy her com­pa­ny is work­ing on.

    Jepsen is a for­mer engi­neer­ing exec­u­tive at Face­book, Ocu­lus, Google[x] (now called X) and Intel. She’s also been a pro­fes­sor at MIT and is an inven­tor on over 100 patents. And that’s the abbre­vi­at­ed ver­sion of her resume.

    Jepsen left Face­book to found Open­wa­ter in 2016. The San Fran­cis­co-based start-up is cur­rent­ly build­ing tech­nol­o­gy to make med­ical imag­ing less expen­sive.

    “I fig­ured out how to put basi­cal­ly the func­tion­al­i­ty of an M.R.I. machine — a mul­ti­mil­lion-dol­lar M.R.I. machine — into a wear­able in the form of a ski hat,” Jep­son tells CNBC, though she does not yet have a pro­to­type com­plet­ed.

    So what does that hat have to do with telepa­thy?

    Cur­rent M.R.I. tech­nol­o­gy can already see your thoughts: “If I threw [you] into an M.R.I. machine right now ... I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re think­ing of,” says Jepsen. “That’s today, and I’m talk­ing about just shrink­ing that down.”

    One day Jepsen’s tech hat could “lit­er­al­ly be a think­ing cap,” she says. Jepsen says the goal is for the tech­nol­o­gy to be able to both read and to out­put your own thoughts, as well as read the thoughts of oth­ers. In icon­ic Google vocab­u­lary, “the real­ly big moon­shot idea here is com­mu­ni­ca­tion with thought — with telepa­thy,” says Jepsen.

    Tra­di­tion­al M.R.I., or mag­net­ic res­o­nance imag­ing, uses mag­net­ic fields and radio waves to take images of inter­nal organs. Open­wa­ter’s tech­nol­o­gy instead looks at the flow of oxy­gen in a per­son­’s body illu­mi­nat­ed with benign, infrared light, which will make it more com­pact and cheap­er.

    “Our bod­ies are translu­cent to that light. The light can get into your head,” says Jepsen, in an inter­view with Kara Swish­er of Recode.

    If Jepsen is right and one day ideas will be instant­ly shared or dig­i­tized, that would sig­nif­i­cant­ly speed up the process of cre­at­ing, learn­ing and com­mu­ni­cat­ing. Today, it takes time to share an idea, whether by talk­ing about it or writ­ing it down. But telepa­thy would make all of that instan­ta­neous.

    “Right now our out­put is basi­cal­ly mov­ing our jaws and our tongues or typ­ing [with] our fin­gers. We’re ... lim­it­ed to this very low out­put rate from our brains, and what if we could up that through telepa­thy?” asks Jepsen.

    Instant trans­fer of thoughts would also speed up the inno­va­tion process. Imag­ine being a film­mak­er or a writer and being able to down­load the dream you had last night. Or, she sug­gests, what if all you had to do was think of an idea for a new prod­uct, down­load your thought and then send the dig­i­tal ver­sion of your thought to a 3‑D print­er?

    ...

    Jepsen is not the only one dream­ing of com­mu­ni­ca­tion by thought. Ear­li­er this year, Elon Musk launched Neu­ralink, a com­pa­ny aim­ing to merge our brains with com­put­ing pow­er, though with a dif­fer­ent approach.

    “Elon Musk is talk­ing about sil­i­con nanopar­ti­cles puls­ing through our veins to make us sort of semi-cyborg com­put­ers,” says Jepsen. But why not take a non­in­va­sive approach? “I’ve been work­ing and try­ing to think and invent a way to do this for a num­ber of years and final­ly hap­pened upon it and left Face­book to do it.”

    Talk of telepa­thy can­not hap­pen with­out imag­in­ing the eth­i­cal impli­ca­tions. If wear­ing a hat would make it pos­si­ble to read thoughts, then: “Can the police make you wear such a hat? Can the mil­i­tary make you wear such a hat? Can your par­ents make you wear such a hat?” asks Jepsen.

    What if your boss want­ed you to wear a telepa­thy hat at the office?

    “We have to answer these ques­tions, so we’re try­ing to make the hat only work if the indi­vid­ual wants it to work, and then fil­ter­ing out parts that the per­son wear­ing it does­n’t feel it’s appro­pri­ate to share.”

    ———-

    “This for­mer Google[X] exec is build­ing a high-tech hat that she says will make telepa­thy pos­si­ble in 8 years” by Cather­ine Clif­ford; CNBC; 07/07/2017

    “I fig­ured out how to put basi­cal­ly the func­tion­al­i­ty of an M.R.I. machine — a mul­ti­mil­lion-dol­lar M.R.I. machine — into a wear­able in the form of a ski hat,” Jep­son tells CNBC, though she does not yet have a pro­to­type com­plet­ed.”

    M.R.I. in a hat. Pre­sum­ably cheap M.R.I. in a hat because it’s going to have to be afford­able if we’re all going to start talk­ing tele­path­i­cal­ly to each oth­er:

    ...
    Cur­rent M.R.I. tech­nol­o­gy can already see your thoughts: “If I threw [you] into an M.R.I. machine right now ... I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re think­ing of,” says Jepsen. “That’s today, and I’m talk­ing about just shrink­ing that down.”

    One day Jepsen’s tech hat could “lit­er­al­ly be a think­ing cap,” she says. Jepsen says the goal is for the tech­nol­o­gy to be able to both read and to out­put your own thoughts, as well as read the thoughts of oth­ers. In icon­ic Google vocab­u­lary, “the real­ly big moon­shot idea here is com­mu­ni­ca­tion with thought — with telepa­thy,” says Jepsen.

    Tra­di­tion­al M.R.I., or mag­net­ic res­o­nance imag­ing, uses mag­net­ic fields and radio waves to take images of inter­nal organs. Open­wa­ter’s tech­nol­o­gy instead looks at the flow of oxy­gen in a per­son­’s body illu­mi­nat­ed with benign, infrared light, which will make it more com­pact and cheap­er.

    “Our bod­ies are translu­cent to that light. The light can get into your head,” says Jepsen, in an inter­view with Kara Swish­er of Recode.
    ...

    Imag­ine the pos­si­bil­i­ties. Like the pos­si­bil­i­ty that what you imag­ine will some­how be cap­ture by this device and then fed into a 3‑D print or some­thing:

    ...
    If Jepsen is right and one day ideas will be instant­ly shared or dig­i­tized, that would sig­nif­i­cant­ly speed up the process of cre­at­ing, learn­ing and com­mu­ni­cat­ing. Today, it takes time to share an idea, whether by talk­ing about it or writ­ing it down. But telepa­thy would make all of that instan­ta­neous.

    “Right now our out­put is basi­cal­ly mov­ing our jaws and our tongues or typ­ing [with] our fin­gers. We’re ... lim­it­ed to this very low out­put rate from our brains, and what if we could up that through telepa­thy?” asks Jepsen.

    Instant trans­fer of thoughts would also speed up the inno­va­tion process. Imag­ine being a film­mak­er or a writer and being able to down­load the dream you had last night. Or, she sug­gests, what if all you had to do was think of an idea for a new prod­uct, down­load your thought and then send the dig­i­tal ver­sion of your thought to a 3‑D print­er?
    ...

    Or per­haps being forced to wear the hate to oth­ers can read your mind. That’s a pos­si­bil­i­ty too, although Jepsen assure us that they are work­ing on a way for users to some­one fil­ter out thoughts they don’t want to share:

    ...
    Talk of telepa­thy can­not hap­pen with­out imag­in­ing the eth­i­cal impli­ca­tions. If wear­ing a hat would make it pos­si­ble to read thoughts, then: “Can the police make you wear such a hat? Can the mil­i­tary make you wear such a hat? Can your par­ents make you wear such a hat?” asks Jepsen.

    What if your boss want­ed you to wear a telepa­thy hat at the office?

    “We have to answer these ques­tions, so we’re try­ing to make the hat only work if the indi­vid­ual wants it to work, and then fil­ter­ing out parts that the per­son wear­ing it does­n’t feel it’s appro­pri­ate to share.”

    So the hat will pre­sum­ably read all your thoughts, but only share some of them. You’ll pre­sum­ably have to get real­ly, real­ly good at near instan­ta­neous men­tal fil­ter­ing.

    There’s no short­age of immense tech­ni­cal and eth­i­cal chal­lenges to this kind of tech­nol­o­gy, but if they can fig­ure them out it will be pret­ty impres­sive. And poten­tial­ly use­ful. Who knows what kind of kum­bayah moments you could cre­ate with telepa­thy tech­nol­o­gy.

    But, of course, if they can fig­ure out how to get around the tech­ni­cal issues, but not the eth­i­cal ones, we’re still prob­a­bly going to see this tech­nol­o­gy pushed on the pub­lic any­way. It’s a scary thought. A scary thought that we for­tu­nate­ly aren’t forced to share via a mind-read­ing hat. Yet.

    Posted by Pterrafractyl | September 14, 2017, 2:09 pm
  9. Here’s a pair of sto­ries tan­gen­tial­ly relat­ed to the recent sto­ry about Peter Thiel like­ly get­ting cho­sen to chair the pow­er­ful Pres­i­den­t’s Intel­li­gence Advi­so­ry Board (P.I.A.B) and his appar­ent enthu­si­asm for reg­u­lat­ing Google and Ama­zon (not so much Face­book) as pub­lic util­i­ties along with the oth­er recent sto­ries about how Face­book was mak­ing user inter­est cat­e­gories like “Jew Haters” avail­able for adver­tis­ers and redi­rect­ing Ger­man users to far-right dis­cus­sions dur­ing this elec­tion sea­son:

    First, regard­ing the push to reg­u­late these data giants as pub­lic util­i­ties, check out who the oth­er big boost­er was for the plan: Steve Ban­non. So while we don’t know the exact nature of the pub­lic util­i­ty reg­u­la­tion Ban­non and Thiel have in mind, we can be pret­ty sure it’s going to be designed to be harm­ful to soci­ety some­how help the far-right:

    The Atlantic

    What Steve Ban­non Wants to Do to Google

    The White House strate­gist report­ed­ly wants to treat tech giants as pub­lic util­i­ties, an idea that some Democ­rats also sup­port.

    Robin­son Mey­er
    Aug 1, 2017

    Over the past year, the old idea of enforc­ing mar­ket com­pe­ti­tion has gained renewed life in Amer­i­can pol­i­tics. The basic idea is that the struc­ture of the mod­ern mar­ket econ­o­my has failed: There are too few com­pa­nies, most of them are too big, and they’re sti­fling com­pe­ti­tion. Its sup­port­ers argue that the gov­ern­ment should do some­thing about it, reviv­ing what in the Unit­ed States we call antitrust laws and what in Europe is called com­pe­ti­tion pol­i­cy.

    ...

    The loud­est sup­port­ers of this idea, so far, have been from the left. But this week, a new­er and more secre­tive voice endorsed a stronger antitrust pol­i­cy.

    Steve Ban­non, the chief strate­gist to Pres­i­dent Don­ald Trump, believes Face­book and Google should be reg­u­lat­ed as pub­lic util­i­ties, accord­ing to an anony­mous­ly sourced report in The Inter­cept. This means they would get treat­ed less like a book pub­lish­er and more like a tele­phone com­pa­ny. The gov­ern­ment would short­en their leash, treat­ing them as pri­vate­ly owned firms that pro­vide an impor­tant pub­lic ser­vice.

    What’s going on here, and why is Ban­non speak­ing up?

    First, the idea itself: If imple­ment­ed, it’s unclear exact­ly how this regime would change how Face­book and Google run their busi­ness. Both would like­ly have to be more gen­er­ous and per­mis­sive with user data. If Face­book is real­ly a social util­i­ty, as Mark Zucker­berg has said it is, then maybe it should allow users to export their friend net­works and import them on anoth­er ser­vice.

    Both com­pa­nies would also like­ly have to change how they sell adver­tis­ing online. Right now, Face­book and Google cap­ture half of all glob­al ad spend­ing com­bined. They cap­ture even more glob­al ad growth, earn­ing more than three quar­ters of every new dol­lar spent in the mar­ket. Except for a cou­ple Chi­nese firms, which have a lock on their domes­tic mar­ket but lit­tle reach abroad, no oth­er com­pa­ny con­trols more than 3 per­cent of world­wide ad spend­ing.

    So if the idea were imple­ment­ed, it would be inter­est­ing, to say the least—but it’s not going to become law. The plan is a pro­to­typ­i­cal alleged Ban­non­ism: icon­o­clas­tic, anti-estab­lish­ment, and unlike­ly to result in mean­ing­ful pol­i­cy change. It fol­lows anoth­er odd alleged Ban­non pol­i­cy pro­pos­al, leaked last week: He report­ed­ly wants all income above $5 mil­lion to be taxed at a 44-per­cent rate.

    Which bring me to the sec­ond point: Bannon’s pro­pos­al is dis­con­nect­ed from the White House pol­i­cy that he is, at least on paper, offi­cial­ly help­ing to strate­gize. The cur­rent chair­man of the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion, Ajit Pai, is work­ing to undo the rule that broad­band inter­net is a pub­lic util­i­ty (which itself guar­an­tees the idea of “net neu­tral­i­ty”). Trump named Pai chair­man of the FCC in Jan­u­ary.

    Bannon’s endorse­ment of stronger antitrust enforce­ment (not to men­tion a high­er top mar­gin­al tax rate) could very well be the advi­sor try­ing to sig­nal that he is still dif­fer­ent from Trump. Ban­non came in as the avatar of Trump’s pro-work­er, anti-immi­gra­tion pop­ulism; he rep­re­sent­ed the Trump that tweet­ed things like:

    I was the first & only poten­tial GOP can­di­date to state there will be no cuts to Social Secu­ri­ty, Medicare & Med­ic­aid. Huck­abee copied me.— Don­ald J. Trump (@realDonaldTrump) May 7, 2015

    As the pres­i­dent endors­es Med­ic­aid cuts and drifts clos­er to a Paul Ryan-inflect­ed fis­cal con­ser­vatism, Ban­non may be look­ing for a way to pre­serve his authen­tic­i­ty.

    Third, it’s the first time I’ve seen sup­port for stronger antitrust enforce­ment from the right. So far, the idea’s strongest sup­port­ers have been Con­gres­sion­al Democ­rats. Chuck Schumer has ele­vat­ed the idea to the cen­ter of the “Bet­ter Deal” pol­i­cy agen­da in 2018. Before that, its biggest sup­port­ers includ­ed Bernie Sanders, who railed against “Too Big to Fail” banks in his pres­i­den­tial cam­paign; and Eliz­a­beth War­ren, who endorsed a stronger com­pe­ti­tion pol­i­cy across the econ­o­my last year.

    Final­ly, while antitrust enforce­ment has been a niche issue, its sup­port­ers have man­aged to put many dif­fer­ent poli­cies under the same tent. Even­tu­al­ly they may have to make choic­es: Does Con­gress want a com­pe­ti­tion ombuds­man, as exists in the Euro­pean Union? Should antitrust law be used to spread the wealth around region­al economies, as it was dur­ing the mid­dle 20th cen­tu­ry? Should antitrust enforce­ment tar­get all con­cen­trat­ed cor­po­rate pow­er or just the most dys­func­tion­al sec­tors, like the phar­ma­ceu­ti­cal indus­try?

    And should antitrust law seek to treat the biggest tech­nol­o­gy firms—like Google, Face­book, and per­haps also Ama­zon—like pow­er­ful but inter­change­able firms, or like the old tele­graph and tele­phone com­pa­nies?

    There will nev­er be one sin­gle answer to these ques­tions. But as sup­port grows for com­pe­ti­tion pol­i­cy across the polit­i­cal spec­trum, they’ll have to be answered. Amer­i­cans will have to exam­ine the most fraught ten­sions in our mixed sys­tem, as we weigh the bal­ance of local pow­er and nation­al pow­er, the delib­er­ate ben­e­fits of cen­tral plan­ning with the mind­less wis­dom of the free mar­ket, and the many con­flict­ing mean­ings of free­dom.

    ———-

    “What Steve Ban­non Wants to Do to Google” by Robin­son Mey­er; The Atlantic; 09/01/2017

    Final­ly, while antitrust enforce­ment has been a niche issue, its sup­port­ers have man­aged to put many dif­fer­ent poli­cies under the same tent. Even­tu­al­ly they may have to make choic­es: Does Con­gress want a com­pe­ti­tion ombuds­man, as exists in the Euro­pean Union? Should antitrust law be used to spread the wealth around region­al economies, as it was dur­ing the mid­dle 20th cen­tu­ry? Should antitrust enforce­ment tar­get all con­cen­trat­ed cor­po­rate pow­er or just the most dys­func­tion­al sec­tors, like the phar­ma­ceu­ti­cal indus­try?”

    And that’s why we had bet­ter learn some more details about what exact­ly folks like Steve Ban­non and Peter Thiel have in mind when it comes to treat­ing Google and Face­book like pub­lic util­i­ties: It sounds like a great idea in the­o­ry. Poten­tial­ly. But the sup­port­ers of antitrust enforce­ment sup­port a wide vari­ety of dif­fer­ent poli­cies that gener­i­cal­ly fall under the “antitrust” tent.

    And note that talk about mak­ing them more “gen­er­ous and per­mis­sive with user data” is one of those ideas that’s simul­ta­ne­ous­ly great for encour­ag­ing more com­pe­ti­tion while also being eeri­ly sim­i­lar to the push from the EU’s com­pe­ti­tion min­is­ter about mak­ing the data about all of us held exclu­sive­ly by Face­book and Google more read­i­ly avail­able for shar­ing with the larg­er mar­ket­place in order to lev­el the play­ing field between “data rich” and “data poor” com­pa­nies. It’s some­thing to keep in mind when hear­ing about how Face­book and Google need to be more “gen­er­ous” with their data:

    ...
    First, the idea itself: If imple­ment­ed, it’s unclear exact­ly how this regime would change how Face­book and Google run their busi­ness. Both would like­ly have to be more gen­er­ous and per­mis­sive with user data. If Face­book is real­ly a social util­i­ty, as Mark Zucker­berg has said it is, then maybe it should allow users to export their friend net­works and import them on anoth­er ser­vice.
    ...

    So don’t for­get, forc­ing Google and Face­book to share that data they exclu­sive­ly hold on us also falls under the antitrust umbrel­la. Maybe users will have sole con­trol over shar­ing their data with out­side firms, or maybe not. These are rather impor­tant details that we don’t have so for all we know that’s part of what Ban­non and Thiel have in mind. Palan­tir would prob­a­bly love it if Google and Face­book were forced to make their infor­ma­tion acces­si­ble to out­side firms.

    And while there’s plen­ty of ambi­gu­i­ty about what to expect, it seems almost cer­tain that we should also expect any sort of reg­u­la­to­ry push by Ban­non and Thiel to include some­thing that makes it a lot hard­er for Google, Face­book, and Ama­zon to com­bat hate speech, online harass­ment, and oth­er tools of the trolling vari­ety that the ‘Alt-Right’ has come to cham­pi­on. That’s just a giv­en. It’s part of why this a sto­ry to watch. Espe­cial­ly after it was dis­cov­ered that Ban­non and a num­ber of oth­er far-right fig­ures were schem­ing about ways to infil­trate Face­book:

    Buz­zFeed

    Steve Ban­non Sought To Infil­trate Face­book Hir­ing
    Accord­ing to emails obtained by Buz­zFeed News, Ban­non hoped to spy on Facebook’s job appli­ca­tion process.

    Joseph Bern­stein
    Buz­zFeed News Reporter
    Post­ed on Sep­tem­ber 25, 2017, at 9:15 a.m.

    Steve Ban­non plot­ted to plant a mole inside Face­book, accord­ing to emails sent days before the Bre­it­bart boss took over Don­ald Trump’s cam­paign and obtained by Buz­zFeed News.

    The email exchange with a con­ser­v­a­tive Wash­ing­ton oper­a­tive reveals the impor­tance that the giant tech plat­form — now reel­ing from its role in the 2016 elec­tion — held for one of the campaign’s cen­tral fig­ures. And it also shows the lengths to which the brawl­ing new Amer­i­can right is will­ing to go to keep tabs on and gain lever­age over the Sil­i­con Val­ley giants it used to help elect Trump — but whose exec­u­tives it also sees as part of the glob­al­ist ene­my.

    The idea to infil­trate Face­book came to Ban­non from Chris Gacek, a for­mer con­gres­sion­al staffer who is now an offi­cial at the Fam­i­ly Research Coun­cil, which lob­bies against abor­tion and many LGBT rights.

    “There is one for a DC-based ‘Pub­lic Pol­i­cy Man­ag­er’ at Facebook’s What’s APP [sic] divi­sion,” Gacek, the senior fel­low for reg­u­la­to­ry affairs at the group, wrote on Aug. 1, 2016. “LinkedIn sent me a notice about some job open­ings.”

    “This seems per­fect for Bre­it­bart to flood the zone with can­di­dates of all stripe who will report back to you / Milo with INTEL about the job appli­ca­tion process over at FB,” he con­tin­ued.

    “Milo” is for­mer Bre­it­bart News Tech Edi­tor Milo Yiannopou­los, to whom Ban­non for­ward­ed Gacek’s email the same day.

    “Can u get on this,” Ban­non instruct­ed his staffer.

    On the same email thread, Yiannopou­los for­ward­ed Bannon’s request to a group of con­tract­ed researchers, one of whom respond­ed that it “Seems difi­cult [sic] to do qui­et­ly with­out them becom­ing aware of efforts.”

    ...

    But the news that Ban­non want­ed to infil­trate the Face­book hir­ing process comes as the social media giant faces increased scruti­ny from Wash­ing­ton over polit­i­cal ads on the plat­form and the part it played in the 2016 elec­tion. That charge — and the threat of reg­u­la­tion — has most­ly come from the left. But con­ser­v­a­tives, who have often com­plained about the lib­er­al bias of the major tech com­pa­nies, have also argued for bring­ing Sil­i­con Val­ley to heel. Ear­li­er this month, for­mer White House chief strate­gist told an audi­ence in Hong Kong that he was lead­ing efforts to reg­u­late Face­book and Google as “pub­lic util­i­ties.”

    The secret attempt to find bias in Facebook’s hir­ing process reflects long­stand­ing con­ser­v­a­tive fears that Face­book and the oth­er tech giants are run by lib­er­als who sup­press right-wing views both inter­nal­ly and on their dom­i­nant plat­forms. Facebook’s pow­er­ful COO, Sheryl Sand­berg, is a long­time Demo­c­ra­t­ic donor who endorsed Hillary Clin­ton in 2016. In May 2016, Face­book CEO Mark Zucker­berg was forced to meet with dozens of promi­nent con­ser­v­a­tives after a report sur­faced that the company’s employ­ees pre­vent­ed right-lean­ing sto­ries from reach­ing the platform’s “trend­ing” sec­tion.

    The com­pa­ny has sought to deflect such crit­i­cism through hir­ing. Its vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recent­ly, Face­book has made moves to rep­re­sent the Bre­it­bart wing of the Repub­li­can par­ty on its pol­i­cy team, tap­ping a for­mer top staffer to Attor­ney Gen­er­al Jeff Ses­sions to be the direc­tor of exec­u­tive branch pub­lic pol­i­cy in May.

    The job list­ing Gacek attached in his email to Ban­non was for a pub­lic pol­i­cy man­ag­er posi­tion in Wash­ing­ton, DC, work­ing on the Face­book-owned What­sApp mes­sen­ger. The job descrip­tion includ­ed such respon­si­bil­i­ties as “Devel­op and exe­cute WhatsApp’s glob­al pol­i­cy strat­e­gy” and “Rep­re­sent What­sApp in meet­ings with gov­ern­ment offi­cials and elect­ed mem­bers.” It sought can­di­dates with law degrees and 10 years of pub­lic pol­i­cy expe­ri­ence.

    Face­book did not pro­vide a com­ment for the sto­ry. But accord­ing to a source with knowl­edge of the hir­ing process, What­sApp didn’t exact­ly get infil­trat­ed by the pro-Trump right: The com­pa­ny hired Chris­tine Turn­er, for­mer direc­tor of trade pol­i­cy and glob­al sup­ply chain secu­ri­ty in Pres­i­dent Barack Obama’s Nation­al Secu­ri­ty Coun­cil, for the role.

    ———-

    “Steve Ban­non Sought To Infil­trate Face­book Hir­ing” by Joseph Bern­stein; Buz­zFeed; 09/25/2017

    “The email exchange with a con­ser­v­a­tive Wash­ing­ton oper­a­tive reveals the impor­tance that the giant tech plat­form — now reel­ing from its role in the 2016 elec­tion — held for one of the campaign’s cen­tral fig­ures. And it also shows the lengths to which the brawl­ing new Amer­i­can right is will­ing to go to keep tabs on and gain lever­age over the Sil­i­con Val­ley giants it used to help elect Trump — but whose exec­u­tives it also sees as part of the glob­al­ist ene­my.

    LOL! Yeah, Face­book’s exec­u­tives are part of the “glob­al­ist ene­my.” Some­one needs to inform board mem­ber and major investor Peter Thiel about this. Along with all the con­ser­v­a­tives Face­book has already hired:

    ...
    But the news that Ban­non want­ed to infil­trate the Face­book hir­ing process comes as the social media giant faces increased scruti­ny from Wash­ing­ton over polit­i­cal ads on the plat­form and the part it played in the 2016 elec­tion. That charge — and the threat of reg­u­la­tion — has most­ly come from the left. But con­ser­v­a­tives, who have often com­plained about the lib­er­al bias of the major tech com­pa­nies, have also argued for bring­ing Sil­i­con Val­ley to heel. Ear­li­er this month, for­mer White House chief strate­gist told an audi­ence in Hong Kong that he was lead­ing efforts to reg­u­late Face­book and Google as “pub­lic util­i­ties.”

    The secret attempt to find bias in Facebook’s hir­ing process reflects long­stand­ing con­ser­v­a­tive fears that Face­book and the oth­er tech giants are run by lib­er­als who sup­press right-wing views both inter­nal­ly and on their dom­i­nant plat­forms. Facebook’s pow­er­ful COO, Sheryl Sand­berg, is a long­time Demo­c­ra­t­ic donor who endorsed Hillary Clin­ton in 2016. In May 2016, Face­book CEO Mark Zucker­berg was forced to meet with dozens of promi­nent con­ser­v­a­tives after a report sur­faced that the company’s employ­ees pre­vent­ed right-lean­ing sto­ries from reach­ing the platform’s “trend­ing” sec­tion.

    The com­pa­ny has sought to deflect such crit­i­cism through hir­ing. Its vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recent­ly, Face­book has made moves to rep­re­sent the Bre­it­bart wing of the Repub­li­can par­ty on its pol­i­cy team, tap­ping a for­mer top staffer to Attor­ney Gen­er­al Jeff Ses­sions to be the direc­tor of exec­u­tive branch pub­lic pol­i­cy in May.
    ...

    “The com­pa­ny has sought to deflect such crit­i­cism through hir­ing. Its vice pres­i­dent of glob­al pub­lic pol­i­cy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recent­ly, Face­book has made moves to rep­re­sent the Bre­it­bart wing of the Repub­li­can par­ty on its pol­i­cy team, tap­ping a for­mer top staffer to Attor­ney Gen­er­al Jeff Ses­sions to be the direc­tor of exec­u­tive branch pub­lic pol­i­cy in May.

    Yep, a for­mer top staffer to Jeff Ses­sions was just brought on to become direc­tor of exec­u­tive branch pub­lic pol­i­cy a few months ago. So was that a con­se­quence of Ban­non suc­cess­ful­ly exe­cut­ing a super sneaky job appli­ca­tion intel­li­gence oper­a­tion that gave Ses­sion­s’s form top staffer a key edge in the appli­ca­tion process? Or was it just Face­book cav­ing to all the pub­lic right-wing whin­ing and faux out­rage about Face­book not being fair to them? Or how about Peter Thiel just using his influ­ence? All of the above? We don’t get to know, but what we do know now as that Steven Ban­non has big plans for shap­ing Face­book from the out­side and the inside. As does Peter Thiel, some­one who already sits of Face­book’s board, is a major investor, and is poised to be empow­ered by the Trump admin­is­tra­tion to shape its approach to this “treat them like pub­lic util­i­ties” con­cept.

    So hope­ful­ly we’ll get clar­i­ty at some point on what they’re actu­al­ly plan­ning on doing. Is it going to be all bad? Most­ly bad? Maybe some use­ful antitrust stuff too? What’s to plan? The Trump era is the kind of hor­ror show that does­n’t exact­ly ben­e­fit from sus­pense.

    Posted by Pterrafractyl | September 25, 2017, 2:01 pm
  10. One of the stranger sto­ries in recent years has been the mys­tery of Cica­da 3301, the anony­mous group that posts annu­al chal­lenges of super dif­fi­cult puz­zles used to recruit tal­ent­ed code-break­ers and invite them to join some sort of Cypher­punk cult that wants to build a glob­al AI-‘god brain’. Or some­thing. It’s a weird and creepy orga­ni­za­tion that’s spec­u­lat­ed to either be a front for an intel­li­gence agency or per­haps some sort of under­ground net­work of wealth Lib­er­tar­i­ans. And, for now, Cica­da 3301 remains anony­mous.

    So it’s worth not­ing that some­one with a lot of cash has already start­ed a foun­da­tion to accom­plish that very same ‘AI god’ goal: Antho­ny Levandows­ki, a for­mer Google Engi­neer who played a big role in the devel­op­ment Google’s “Street Map” tech­nol­o­gy and a string of self-dri­ving vehi­cle com­pa­nies, start­ed Way of the Future, a non­prof­it reli­gious cor­po­ra­tion with the mis­sion “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety”:

    The Guardian

    Deus ex machi­na: for­mer Google engi­neer is devel­op­ing an AI god

    Way of the Future, a reli­gious group found­ed by Antho­ny Levandows­ki, wants to cre­ate a deity based on arti­fi­cial intel­li­gence for the bet­ter­ment of soci­ety

    Olivia Solon
    Thurs­day 28 Sep­tem­ber 2017 04.00 EDT

    Intranet ser­vice? Check. Autonomous motor­cy­cle? Check. Dri­ver­less car tech­nol­o­gy? Check. Obvi­ous­ly the next log­i­cal project for a suc­cess­ful Sil­i­con Val­ley engi­neer is to set up an AI-wor­ship­ping reli­gious orga­ni­za­tion.

    Antho­ny Levandows­ki, who is at the cen­ter of a legal bat­tle between Uber and Google’s Way­mo, has estab­lished a non­prof­it reli­gious cor­po­ra­tion called Way of the Future, accord­ing to state fil­ings first uncov­ered by Wired’s Backchan­nel. Way of the Future’s star­tling mis­sion: “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety.”

    Levandows­ki was co-founder of autonomous truck­ing com­pa­ny Otto, which Uber bought in 2016. He was fired from Uber in May amid alle­ga­tions that he had stolen trade secrets from Google to devel­op Otto’s self-dri­ving tech­nol­o­gy. He must be grate­ful for this reli­gious fall-back project, first reg­is­tered in 2015.

    The Way of the Future team did not respond to requests for more infor­ma­tion about their pro­posed benev­o­lent AI over­lord, but his­to­ry tells us that new tech­nolo­gies and sci­en­tif­ic dis­cov­er­ies have con­tin­u­al­ly shaped reli­gion, killing old gods and giv­ing birth to new ones.

    ...

    “The church does a ter­ri­ble job of reach­ing out to Sil­i­con Val­ley types,” acknowl­edges Christo­pher Benek a pas­tor in Flori­da and found­ing chair of the Chris­t­ian Tran­shu­man­ist Asso­ci­a­tion.

    Sil­i­con Val­ley, mean­while, has sought solace in tech­nol­o­gy and has devel­oped qua­si-reli­gious con­cepts includ­ing the “sin­gu­lar­i­ty”, the hypoth­e­sis that machines will even­tu­al­ly be so smart that they will out­per­form all human capa­bil­i­ties, lead­ing to a super­hu­man intel­li­gence that will be so sophis­ti­cat­ed it will be incom­pre­hen­si­ble to our tiny fleshy, ratio­nal brains.

    For futur­ists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, lead­ing to dig­i­tal immor­tal­i­ty. Oth­ers like Elon Musk and Stephen Hawk­ing warn that such sys­tems pose an exis­ten­tial threat to human­i­ty.

    “With arti­fi­cial intel­li­gence we are sum­mon­ing the demon,” Musk said at a con­fer­ence in 2014. “In all those sto­ries where there’s the guy with the pen­ta­gram and the holy water, it’s like – yeah, he’s sure he can con­trol the demon. Doesn’t work out.”

    Benek argues that advanced AI is com­pat­i­ble with Chris­tian­i­ty – it’s just anoth­er tech­nol­o­gy that humans have cre­at­ed under guid­ance from God that can be used for good or evil.

    “I total­ly think that AI can par­tic­i­pate in Christ’s redemp­tive pur­pos­es,” he said, by ensur­ing it is imbued with Chris­t­ian val­ues.

    “Even if peo­ple don’t buy orga­nized reli­gion, they can buy into ‘do unto oth­ers’.”

    For tran­shu­man­ist and “recov­er­ing Catholic” Zoltan Ist­van, reli­gion and sci­ence con­verge con­cep­tu­al­ly in the sin­gu­lar­i­ty.

    “God, if it exists as the most pow­er­ful of all sin­gu­lar­i­ties, has cer­tain­ly already become pure orga­nized intel­li­gence,” he said, refer­ring to an intel­li­gence that “spans the uni­verse through sub­atom­ic manip­u­la­tion of physics”.

    And per­haps, there are oth­er forms of intel­li­gence more com­pli­cat­ed than that which already exist and which already per­me­ate our entire exis­tence. Talk about ghost in the machine,” he added.

    For Ist­van, an AI-based God is like­ly to be more ratio­nal and more attrac­tive than cur­rent con­cepts (“the Bible is a sadis­tic book”) and, he added, “this God will actu­al­ly exist and hope­ful­ly will do things for us.”

    We don’t know whether Levandowski’s God­head ties into any exist­ing the­olo­gies or is a man­made alter­na­tive, but it’s clear that advance­ments in tech­nolo­gies includ­ing AI and bio­engi­neer­ing kick up the kinds of eth­i­cal and moral dilem­mas that make humans seek the advice and com­fort from a high­er pow­er: what will humans do once arti­fi­cial intel­li­gence out­per­forms us in most tasks? How will soci­ety be affect­ed by the abil­i­ty to cre­ate super-smart, ath­let­ic “design­er babies” that only the rich can afford? Should a dri­ver­less car kill five pedes­tri­ans or swerve to the side to kill the own­er?

    If tra­di­tion­al reli­gions don’t have the answer, AI – or at least the promise of AI – might be allur­ing.

    ———-

    “Deus ex machi­na: for­mer Google engi­neer is devel­op­ing an AI god” by Olivia Solon; The Guardian; 09/28/2017

    Antho­ny Levandows­ki, who is at the cen­ter of a legal bat­tle between Uber and Google’s Way­mo, has estab­lished a non­prof­it reli­gious cor­po­ra­tion called Way of the Future, accord­ing to state fil­ings first uncov­ered by Wired’s Backchan­nel. Way of the Future’s star­tling mis­sion: “To devel­op and pro­mote the real­iza­tion of a God­head based on arti­fi­cial intel­li­gence and through under­stand­ing and wor­ship of the God­head con­tribute to the bet­ter­ment of soci­ety.”

    Build­ing an AI God­head for every­one to wor­ship. Levandows­ki does­n’t appear to be lack­ing ambi­tion.

    But how about ethics? After all, if the AI God­head is going to push a ‘do unto oth­ers’ kind of phi­los­o­phy it’s going to be a lot hard­er for that AI God­head to achieve that kind of enlight­en­ment if it’s built by some sort of self­ish­ness-wor­ship­ing Lib­er­tar­i­an. So what moral com­pass does this wannabe God­head cre­ator pos­sess?

    Well, as the fol­low­ing long piece by Wired amply demon­strates, Levandows­ki does­n’t appear to be too con­cerned about ethics. Espe­cial­ly if they get in the way of his dream of trans­form­ing the world through robot­ics. Trans­form­ing and tak­ing over the world through robot­ics. Yep. The arti­cle focus­es on the var­i­ous legal trou­bles Levandows­ki faces over charges by Google that he stole the “Lidar” tech­nol­o­gy (laser-based radar-like tech­nol­o­gy used by vehi­cles to rapid­ly map their sur­round­ings) he helped devel­op at Google and took it to Uber (a com­pa­ny with a seri­ous moral com­pass deficit). But the arti­cle also includes some inter­est insights into what makes Levandows­ki tick. for instance, accord­ing to friend and for­mer engi­neer at one of Levandowski’s com­pa­nies, “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense...It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it”:

    Wired
    BackChan­nel

    God Is a Bot, and Antho­ny Levandows­ki Is His Mes­sen­ger

    Mark Har­ris
    09/27/2017

    Many peo­ple in Sil­i­con Val­ley believe in the Singularity—the day in our near future when com­put­ers will sur­pass humans in intel­li­gence and kick off a feed­back loop of unfath­omable change.

    When that day comes, Antho­ny Levandows­ki will be firm­ly on the side of the machines. In Sep­tem­ber 2015, the mul­ti-mil­lion­aire engi­neer at the heart of the patent and trade secrets law­suit between Uber and Way­mo, Google’s self-dri­ving car com­pa­ny, found­ed a reli­gious orga­ni­za­tion called Way of the Future. Its pur­pose, accord­ing to pre­vi­ous­ly unre­port­ed state fil­ings, is noth­ing less than to “devel­op and pro­mote the real­iza­tion of a God­head based on Arti­fi­cial Intel­li­gence.”

    Way of the Future has not yet respond­ed to requests for the forms it must sub­mit annu­al­ly to the Inter­nal Rev­enue Ser­vice (and make pub­licly avail­able), as a non-prof­it reli­gious cor­po­ra­tion. How­ev­er, doc­u­ments filed with Cal­i­for­nia show that Levandows­ki is Way of the Future’s CEO and Pres­i­dent, and that it aims “through under­stand­ing and wor­ship of the God­head, [to] con­tribute to the bet­ter­ment of soci­ety.”

    A divine AI may still be far off, but Levandows­ki has made a start at pro­vid­ing AI with an earth­ly incar­na­tion. The autonomous cars he was instru­men­tal in devel­op­ing at Google are already fer­ry­ing real pas­sen­gers around Phoenix, Ari­zona, while self-dri­ving trucks he built at Otto are now part of Uber’s plan to make freight trans­port safer and more effi­cient. He even over­saw a pas­sen­ger-car­ry­ing drones project that evolved into Lar­ry Page’s Kit­ty Hawk start­up.

    Levandows­ki has done per­haps more than any­one else to pro­pel trans­porta­tion toward its own Sin­gu­lar­i­ty, a time when auto­mat­ed cars, trucks and air­craft either free us from the dan­ger and drudgery of human operation—or dec­i­mate mass tran­sit, encour­age urban sprawl, and enable dead­ly bugs and hacks.

    But before any of that can hap­pen, Levandows­ki must face his own day of reck­on­ing. In Feb­ru­ary, Waymo—the com­pa­ny Google’s autonomous car project turned into—filed a law­suit against Uber. In its com­plaint, Way­mo says that Levandows­ki tried to use stealthy star­tups and high-tech tricks to take cash, exper­tise, and secrets from Google, with the aim of repli­cat­ing its vehi­cle tech­nol­o­gy at arch-rival Uber. Way­mo is seek­ing dam­ages of near­ly $1.9 billion—almost half of Google’s (pre­vi­ous­ly unre­port­ed) $4.5 bil­lion val­u­a­tion of the entire self-dri­ving divi­sion. Uber denies any wrong­do­ing.

    Next month’s tri­al in a fed­er­al cour­t­house in San Fran­cis­co could steer the future of autonomous trans­porta­tion. A big win for Way­mo would prove the val­ue of its patents and chill Uber’s efforts to remove prof­it-sap­ping human dri­vers from its busi­ness. If Uber pre­vails, oth­er self-dri­ving star­tups will be encour­aged to take on the big players—and a vin­di­cat­ed Levandows­ki might even return to anoth­er start­up. (Uber fired him in May.)

    Levandows­ki has made a career of mov­ing fast and break­ing things. As long as those things were self-dri­ving vehi­cles and lit­tle-loved reg­u­la­tions, Sil­i­con Val­ley applaud­ed him in the way it knows best—with a fire­hose of cash. With his charm, enthu­si­asm, and obses­sion with deal-mak­ing, Levandows­ki came to per­son­i­fy the dis­rup­tion that autonomous trans­porta­tion is like­ly to cause.

    But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them?

    ...

    In 2002, Levandowski’s atten­tion turned, fate­ful­ly, toward trans­porta­tion. His moth­er called him from Brus­sels about a con­test being orga­nized by the Pentagon’s R&D arm, DARPA. The first Grand Chal­lenge in 2004 would race robot­ic, com­put­er-con­trolled vehi­cles in a desert between Los Ange­les and Las Vegas—a Wacky Races for the 21st cen­tu­ry.

    “I was like, ‘Wow, this is absolute­ly the future,’” Levandows­ki told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”

    Levandowski’s entry would be noth­ing so bor­ing as a car. “I orig­i­nal­ly want­ed to do an auto­mat­ed fork­lift,” he said at a fol­low-up com­pe­ti­tion in 2005. “Then I was dri­ving to Berke­ley [one day] and a pack of motor­cy­cles descend­ed on my pick­up and flowed like water around me.” The idea for Ghostrid­er was born—a glo­ri­ous­ly deranged self-dri­ving Yama­ha motor­cy­cle whose wob­bles inspired laugh­ter from spec­ta­tors, but awe in rivals strug­gling to get even four-wheeled vehi­cles dri­ving smooth­ly.

    “Antho­ny would go for weeks on 25-hour days to get every­thing done. Every day he would go to bed an hour lat­er than the day before,” remem­bers Randy Miller, a col­lege friend who worked with him on Ghostrid­er. “With­out a doubt, Antho­ny is the smartest, hard­est-work­ing and most fear­less per­son I’ve ever met.”

    Levandows­ki and his team of Berke­ley stu­dents maxed out his cred­it cards get­ting Ghostrid­er work­ing on the streets of Rich­mond, Cal­i­for­nia, where it racked up an aston­ish­ing 800 crash­es in a thou­sand miles of test­ing. Ghostrid­er nev­er won a Grand Chal­lenge, but its ambi­tious design earned Levandows­ki brag­ging rights—and the motor­bike a place in the Smith­son­ian.

    “I see Grand Chal­lenge not as the end of the robot­ics adven­ture we’re on, it’s almost like the begin­ning,” Levandows­ki told Sci­en­tif­ic Amer­i­can in 2005. “This is where every­one is meet­ing, becom­ing aware of who’s work­ing on what, [and] fil­ter­ing out the non-func­tion­al ideas.”

    One idea that made the cut was lidar—spinning lasers that rapid­ly built up a 3D pic­ture of a car’s sur­round­ings. In the lidar-less first Grand Chal­lenge, no vehi­cle made it fur­ther than a few miles along the course. In the sec­ond, an engi­neer named Dave Hall con­struct­ed a lidar that “was giant. It was one-off but it was awe­some,” Levandows­ki told me. “We real­ized, yes, lasers [are] the way to go.”

    After grad­u­ate school, Levandows­ki went to work for Hall’s com­pa­ny, Velo­dyne, as it piv­ot­ed from mak­ing loud­speak­ers to sell­ing lidars. Levandows­ki not only talked his way into being the company’s first sales rep, tar­get­ing teams work­ing towards the next Grand Chal­lenge, but he also worked on the lidar’s net­work­ing. By the time of the third and final DARPA con­test in 2007, Velodyne’s lidar was mount­ed on five of the six vehi­cles that fin­ished.

    But Levandows­ki had already moved on. Ghostrid­er had caught the eye of Sebas­t­ian Thrun, a robot­ics pro­fes­sor and team leader of Stan­ford University’s win­ning entry in the sec­ond com­pe­ti­tion. In 2006, Thrun invit­ed Levandows­ki to help out with a project called Vue­Tool, which was set­ting out to piece togeth­er street-lev­el urban maps using cam­eras mount­ed on mov­ing vehi­cles. Google was already work­ing on a sim­i­lar sys­tem, called Street View. Ear­ly in 2007, Google brought on Thrun and his entire team as employees—with bonus­es as high as $1 mil­lion each, accord­ing to one con­tem­po­rary at Google—to trou­bleshoot Street View and bring it to launch.

    “[Hir­ing the Vue­Tool team] was very much a scheme for pay­ing Thrun and the oth­ers to show Google how to do it right,” remem­bers the engi­neer. The new hires replaced Google’s bulky, cus­tom-made $250,000 cam­eras with $15,000 off-the-shelf panoram­ic web­cams. Then they went auto shop­ping. “Antho­ny went to a car store and said we want to buy 100 cars,” Sebas­t­ian Thrun told me in 2015. “The deal­er almost fell over.”

    Levandows­ki was also mak­ing waves in the office, even to the point of telling engi­neers not to waste time talk­ing to col­leagues out­side the project, accord­ing to one Google engi­neer. “It wasn’t clear what author­i­ty Antho­ny had, and yet he came in and assumed author­i­ty,” said the engi­neer, who asked to remain anony­mous. “There were some bad feel­ings but most­ly [peo­ple] just went with it. He’s good at that. He’s a great leader.”

    Under Thrun’s super­vi­sion, Street View cars raced to hit Page’s tar­get of cap­tur­ing a mil­lion miles of road images by the end of 2007. They fin­ished in October—just in time, as it turned out. Once autumn set in, every web­cam suc­cumbed to rain, con­den­sa­tion, or cold weath­er, ground­ing all 100 vehi­cles.

    Part of the team’s secret sauce was a device that would turn a raw cam­era feed into a stream of data, togeth­er with loca­tion coor­di­nates from GPS and oth­er sen­sors. Google engi­neers called it the Top­con box, named after the Japan­ese opti­cal firm that sold it. But the box was actu­al­ly designed by a local start­up called 510 Sys­tems. “We had one cus­tomer, Top­con, and we licensed our tech­nol­o­gy to them,” one of the 510 Sys­tems own­ers told me.

    That own­er was…Anthony Levandows­ki, who had cofound­ed 510 Sys­tems with two fel­low Berke­ley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after start­ing work at Google. 510 Sys­tems had a lot in com­mon with the Ghostrid­er team. Berke­ley stu­dents worked there between lec­tures, and Levandowski’s moth­er ran the office. Top­con was cho­sen as a go-between because it had spon­sored the self-dri­ving motor­cy­cle. “I always liked the idea that…510 would be the peo­ple that made the tools for peo­ple that made maps, peo­ple like Navteq, Microsoft, and Google,” Levandows­ki told me in 2016.

    Google’s engi­neer­ing team was ini­tial­ly unaware that 510 Sys­tems was Levandowski’s com­pa­ny, sev­er­al engi­neers told me. That changed once Levandows­ki pro­posed that Google also use the Top­con box for its small fleet of aer­i­al map­ping planes. “When we found out, it raised a bunch of eye­brows,” remem­bers an engi­neer. Regard­less, Google kept buy­ing 510’s box­es.

    **********

    The truth was, Levandows­ki and Thrun were on a roll. After impress­ing Lar­ry Page with Street View, Thrun sug­gest­ed an even more ambi­tious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of car­tog­ra­phers in India. Ground Truth would allow Google to stop pay­ing expen­sive licens­ing fees for out­side maps, and bring free turn-by-turn direc­tions to Android phones—a key dif­fer­en­tia­tor in the ear­ly days of its smart­phone war with Apple.

    Levandows­ki spent months shut­tling between Moun­tain View and Hyderabad—and yet still found time to cre­ate an online stock mar­ket pre­dic­tion game with Jesse Levin­son, a com­put­er sci­ence post-doc at Stan­ford who lat­er cofound­ed his own autonomous vehi­cle start­up, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Dis­coe, a for­mer engi­neer at 510. “He had an engineer’s enthu­si­asm that was con­ta­gious, and was always think­ing about how quick­ly we can get to this amaz­ing robot future he’s so excit­ed about.”

    One time, Dis­coe was chat­ting in 510’s break room about how lidar could help sur­vey his family’s tea farm on Hawaii. “Sud­den­ly Antho­ny said, ‘Why don’t you just do it? Get a lidar rig, put it in your lug­gage, and go map it,’” said Dis­coe. “And it worked. I made a kick-ass point cloud [3D dig­i­tal map] of the farm.”

    If Street View had impressed Lar­ry Page, the speed and accu­ra­cy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he want­ed; he want­ed to return to self-dri­ving cars.

    Project Chauf­feur began in 2008, with Levandows­ki as Thrun’s right-hand man. As with Street View, Google engi­neers would work on the soft­ware while 510 Sys­tems and a recent Levandows­ki start­up, Anthony’s Robots, pro­vid­ed the lidar and the car itself.

    Levandows­ki said this arrange­ment would have act­ed as a fire­wall if any­thing went ter­ri­bly wrong. “Google absolute­ly did not want their name asso­ci­at­ed with a vehi­cle dri­ving in San Fran­cis­co,” he told me in 2016. “They were wor­ried about an engi­neer build­ing a car that drove itself that crash­es and kills some­one and it gets back to Google. You have to ask per­mis­sion [for side projects] and your man­ag­er has to be OK with it. Sebas­t­ian was cool. Google was cool.”

    In order to move Project Chauf­feur along as quick­ly as pos­si­ble from the­o­ry to real­i­ty, Levandows­ki enlist­ed the help of a film­mak­er friend he had worked with at Berke­ley. In the TV show the two had made, Levandows­ki had cre­at­ed a cyber­net­ic dol­phin suit (seri­ous­ly). Now they came up with the idea of a self-dri­ving piz­za deliv­ery car for a show on the Dis­cov­ery Chan­nel called Pro­to­type This! Levandows­ki chose a Toy­ota Prius, because it had a dri­ve-by-wire sys­tem that was rel­a­tive­ly easy to hack.

    In a mat­ter of weeks, Levandowski’s team had the car, dubbed Pri­bot, dri­ving itself. If any­one asked what they were doing, Levandows­ki told me, “We’d say it’s a laser and just dri­ve off.”

    “Those were the Wild West days,” remem­bers Ben Dis­coe. “Antho­ny and Pierre-Yves…would engage the algo­rithm in the car and it would almost swipe some oth­er car or almost go off the road, and they would come back in and joke about it. Tell sto­ries about how excit­ing it was.”

    But for the Dis­cov­ery Chan­nel show, at least, Levandows­ki fol­lowed the let­ter of the law. The Bay Bridge was cleared of traf­fic and a squad of police cars escort­ed the unmanned Prius from start to fin­ish. Apart from get­ting stuck against a wall, the dri­ve was a suc­cess. “You’ve got to push things and get some bumps and bruis­es along the way,” said Levandows­ki.

    Anoth­er inci­dent drove home the poten­tial of self-dri­ving cars. In 2010, Levandowski’s part­ner Ste­fanie Olsen was involved in a seri­ous car acci­dent while nine months preg­nant with their first child. “My son Alex was almost nev­er born,” Levandows­ki told a room full of Berke­ley stu­dents in 2013. “Trans­porta­tion [today] takes time, resources and lives. If you can fix that, that’s a real­ly big prob­lem to address.”

    Over the next few years, Levandows­ki was key to Chauffeur’s progress. 510 Sys­tems built five more self-dri­ving cars for Google—as well as ran­dom gad­gets like an autonomous trac­tor and a portable lidar sys­tem. “Antho­ny is light­ning in a bot­tle, he has so much ener­gy and so much vision,” remem­bers a friend and for­mer 510 engi­neer. “I frick­ing loved brain­storm­ing with the guy. I loved that we could cre­ate a vision of the world that didn’t exist yet and both fall in love with that vision.”

    But there were down­sides to his man­ic ener­gy, too. “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense,” said the same engi­neer. “It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

    In ear­ly 2011, that plan was to bring 510 Sys­tems into the Google­plex. The startup’s engi­neers had long com­plained that they did not have equi­ty in the grow­ing com­pa­ny. When mat­ters came to a head, Levandows­ki drew up a plan that would reserve the first $20 mil­lion of any acqui­si­tion for 510’s founders and split the remain­der among the staff, accord­ing to two for­mer 510 employ­ees. “They said we were going to sell for hun­dreds of mil­lions,” remem­bers one engi­neer. “I was pret­ty thrilled with the num­bers.”

    Indeed, that sum­mer, Levandows­ki sold 510 Sys­tems and Anthony’s Robots to Google – for $20 mil­lion, the exact cut­off before the wealth would be shared. Rank and file engi­neers did not see a pen­ny, and some were even let go before the acqui­si­tion was com­plet­ed. “I regret how it was handled…Some peo­ple did get the short end of the stick,” admit­ted Levandows­ki in 2016. The buy­out also caused resent­ment among engi­neers at Google, who won­dered how Levandows­ki could have made such a prof­it from his employ­er.

    There would be more prof­its to come. Accord­ing to a court fil­ing, Page took a per­son­al inter­est in moti­vat­ing Levandows­ki, issu­ing a direc­tive in 2011 to “make Antho­ny rich if Project Chauf­feur suc­ceeds.” Levandows­ki was giv­en by far the high­est share, about 10 per­cent, of a bonus pro­gram linked to a future val­u­a­tion of Chauffeur—a deci­sion that would lat­er cost Google dear­ly.

    **********

    Ever since a New York Times sto­ry in 2010 revealed Project Chauf­feur to the world, Google had been want­i­ng to ramp up test­ing on pub­lic streets. That was tough to arrange in well-reg­u­lat­ed Cal­i­for­nia, but Levandows­ki wasn’t about to let that stop him. While man­ning Google’s stand at the Con­sumer Elec­tron­ics Show in Las Vegas in Jan­u­ary 2011, he got to chat­ting with lob­by­ist David Gold­wa­ter. “He told me he was hav­ing a hard time in Cal­i­for­nia and I sug­gest­ed Google try a small­er state, like Neva­da,” Gold­wa­ter told me.

    Togeth­er, Gold­wa­ter and Levandows­ki draft­ed leg­is­la­tion that would allow the com­pa­ny to test and oper­ate self-dri­ving cars in Neva­da. By June, their sug­ges­tions were law, and in May 2012, a Google Prius passed the world’s first “self-dri­ving tests” in Las Vegas and Car­son City. “Antho­ny is gift­ed in so many dif­fer­ent ways,” said Gold­wa­ter. “He’s got a strate­gic mind, he’s got a tac­ti­cal mind, and a once-in-a-gen­er­a­tion intel­lect. The great thing about Antho­ny is that he was will­ing to take risks, but they were cal­cu­lat­ed risks.”

    How­ev­er, Levandowski’s risk-tak­ing had ruf­fled feath­ers at Google. It was only after Neva­da had passed its leg­is­la­tion that Levandows­ki dis­cov­ered Google had a whole team ded­i­cat­ed to gov­ern­ment rela­tions. “I thought you could just do it your­self,” he told me sheep­ish­ly in 2016. “[I] got a lit­tle bit in trou­ble for doing it.”

    That might be under­stat­ing it. One prob­lem was that Levandows­ki had lost his air cov­er at Google. In May 2012, his friend Sebas­t­ian Thrun turned his atten­tion to start­ing online learn­ing com­pa­ny Udac­i­ty. Page put anoth­er pro­fes­sor, Chris Urm­son from Carnegie Mel­lon, in charge. Not only did Levandows­ki think the job should have been his, but the two also had ter­ri­ble chem­istry.

    “They had a real­ly hard time get­ting along,” said Page at a depo­si­tion in July. “It was a con­stant man­age­ment headache to help them get through that.”

    Then in July 2013, Gae­tan Pen­necot, a 510 alum work­ing on Chauffeur’s lidar team, got a wor­ry­ing call from a ven­dor. Accord­ing to Waymo’s com­plaint, a small com­pa­ny called Odin Wave had placed an order for a cus­tom-made part that was extreme­ly sim­i­lar to one used in Google’s lidars.

    Pen­necot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Sys­tems. Droz did some dig­ging and replied in an email to Pen­necot (in French, which we’ve trans­lat­ed): “They’re clear­ly mak­ing a lidar. And it’s John (510’s old lawyer) who incor­po­rat­ed them. The date of incor­po­ra­tion cor­re­sponds to sev­er­al months after Antho­ny fell out of favor at Google.”

    As the sto­ry emerges in court doc­u­ments, Droz had found Odin Wave’s com­pa­ny records. Not only had Levandowski’s lawyer found­ed the com­pa­ny in August 2012, but it was also based in a Berke­ley office build­ing that Levandows­ki owned, was being run by a friend of Levandowski’s, and its employ­ees includ­ed engi­neers he had worked with at Velo­dyne and 510 Sys­tems. One even spoke with Levandows­ki before being hired. The com­pa­ny was devel­op­ing long range lidars sim­i­lar to those Levandows­ki had worked on at 510 Sys­tems. But Levandowski’s name was nowhere on the firm’s paper­work.

    Droz con­front­ed Levandows­ki, who denied any involve­ment, and Droz decid­ed not to fol­low the paper trail any fur­ther. “I was pret­ty hap­py work­ing at Google, and…I didn’t want to jeop­ar­dize that by...exposing more of Anthony’s shenani­gans,” he said at a depo­si­tion last month.

    Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandows­ki was even part of a Google inves­ti­ga­tion into acquir­ing Tyto. This time, how­ev­er, Google passed on the pur­chase. That seemed to demor­al­ize Levandows­ki fur­ther. “He was rarely at work, and he left a lot of the respon­si­bil­i­ty [for] eval­u­at­ing peo­ple on the team to me or oth­ers,” said Droz in his depo­si­tion.

    “Over time my patience with his manip­u­la­tions and lack of enthu­si­asm and com­mit­ment to the project [sic], it became clear­er and clear­er that this was a lost cause,” said Chris Urm­son in a depo­si­tion.

    As he was torch­ing bridges at Google, Levandows­ki was itch­ing for a new chal­lenge. Luck­i­ly, Sebas­t­ian Thrun was back on the autonomous beat. Lar­ry Page and Thrun had been think­ing about elec­tric fly­ing taxis that could car­ry one or two peo­ple. Project Tiramisu, named after the dessert which means “lift me up” in Ital­ian, involved a winged plane fly­ing in cir­cles, pick­ing up pas­sen­gers below using a long teth­er.

    Thrun knew just the per­son to kick­start Tiramisu. Accord­ing to a source work­ing there at the time, Levandows­ki was brought in to over­see Tiramisu as an “advi­sor and stake­hold­er.” Levandows­ki would show up at the project’s work­space in the evenings, and was involved in tests at one of Page’s ranch­es. Tiramisu’s teth­ers soon piv­ot­ed to a ride-aboard elec­tric drone, now called the Kit­ty Hawk fly­er. Thrun is CEO of Kit­ty Hawk, which is fund­ed by Page rather than Alpha­bet, the umbrel­la com­pa­ny that now owns Google and its sib­ling com­pa­nies.

    Waymo’s com­plaint says that around this time Levandows­ki start­ed solic­it­ing Google col­leagues to leave and start a com­peti­tor in the autonomous vehi­cle busi­ness. Droz tes­ti­fied that Levandows­ki told him it “would be nice to cre­ate a new self-dri­ving car start­up.” Fur­ther­more, he said that Uber would be inter­est­ed in buy­ing the team respon­si­ble for Google’s lidar.

    Uber had explod­ed onto the self-dri­ving car scene ear­ly in 2015, when it lured almost 50 engi­neers away from Carnegie Mel­lon Uni­ver­si­ty to form the core of its Advanced Tech­nolo­gies Cen­ter. Uber cofounder Travis Kalan­ick had described autonomous tech­nol­o­gy as an exis­ten­tial threat to the ride-shar­ing com­pa­ny, and was hir­ing furi­ous­ly. Accord­ing to Droz, Levandows­ki said that he began meet­ing Uber exec­u­tives that sum­mer.

    When Urm­son learned of Levandowski’s recruit­ing efforts, his depo­si­tion states, he sent an email to human resources in August begin­ning, “We need to fire Antho­ny Levandows­ki.” Despite an inves­ti­ga­tion, that did not hap­pen.

    But Levandowski’s now not-so-secret plan would soon see him leav­ing of his own accord—with a moun­tain of cash. In 2015, Google was due to start­ing pay­ing the Chauf­feur bonus­es, linked to a val­u­a­tion that it would have “sole and absolute dis­cre­tion” to cal­cu­late. Accord­ing to pre­vi­ous­ly unre­port­ed court fil­ings, exter­nal con­sul­tants cal­cu­lat­ed the self-dri­ving car project as being worth $8.5 bil­lion. Google ulti­mate­ly val­ued Chauf­feur at around half that amount: $4.5 bil­lion. Despite this down­grade, Levandowski’s share in Decem­ber 2015 amount­ed to over $50 mil­lion – near­ly twice as much as the sec­ond largest bonus of $28 mil­lion, paid to Chris Urm­son.

    **********

    Otto seemed to spring forth ful­ly formed in May 2016, demon­strat­ing a self-dri­ving 18-wheel truck bar­rel­ing down a Neva­da high­way with no one behind the wheel. In real­i­ty, Levandows­ki had been plan­ning it for some time.

    Levandows­ki and his Otto cofounders at Google had spent the Christ­mas hol­i­days and the first weeks of 2016 tak­ing their recruit­ment cam­paign up a notch, accord­ing to Way­mo court fil­ings. Waymo’s com­plaint alleges Levandows­ki told col­leagues he was plan­ning to “repli­cate” Waymo’s tech­nol­o­gy at a com­peti­tor, and was even solic­it­ing his direct reports at work.

    One engi­neer who had worked at 510 Sys­tems attend­ed a bar­be­cue at Levandowski’s home in Palo Alto, where Levandows­ki pitched his for­mer col­leagues and cur­rent Googlers on the start­up. “He want­ed every Way­mo per­son to resign simul­ta­ne­ous­ly, a ful­ly syn­chro­nized walk­out. He was fir­ing peo­ple up for that,” remem­bers the engi­neer.

    On Jan­u­ary 27, Levandows­ki resigned from Google with­out notice. With­in weeks, Levandows­ki had a draft con­tract to sell Otto to Uber for an amount wide­ly report­ed as $680 mil­lion. Although the full-scale syn­chro­nized walk­out nev­er hap­pened, half a dozen Google employ­ees went with Levandows­ki, and more would join in the months ahead. But the new com­pa­ny still did not have a prod­uct to sell.

    Levandows­ki brought Neva­da lob­by­ist David Gold­wa­ter back to help. “There was some brain­storm­ing with Antho­ny and his team,” said Gold­wa­ter in an inter­view. “We were look­ing to do a demon­stra­tion project where we could show what he was doing.”

    After explor­ing the idea of an autonomous pas­sen­ger shut­tle in Las Vegas, Otto set­tled on devel­op­ing a dri­ver­less semi-truck. But with the Uber deal rush­ing for­ward, Levandows­ki need­ed results fast. “By the time Otto was ready to go with the truck, they want­ed to get right on the road,” said Gold­wa­ter. That meant demon­strat­ing their pro­to­type with­out obtain­ing the very autonomous vehi­cle licence Levandows­ki had per­suad­ed Neva­da to adopt. (One state offi­cial called this move “ille­gal.”) Levandows­ki also had Otto acquire the con­tro­ver­sial Tyto Lidar—the com­pa­ny based in the build­ing he owned—in May, for an undis­closed price.

    The full-court press worked. Uber com­plet­ed its own acqui­si­tion of Otto in August, and Uber founder Travis Kalan­ick put Levandows­ki in charge of the com­bined com­pa­nies’ self-dri­ving efforts across per­son­al trans­porta­tion, deliv­ery and truck­ing. Uber would even pro­pose a Tiramisu-like autonomous air taxi called Uber Ele­vate. Now report­ing direct­ly to Kalan­ick and in charge of a 1500-strong group, Levandows­ki demand­ed the email address “robot@uber.com.”

    In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. “Down to hang out this eve and mas­ter­mind some shit,” texted Kalan­ick, short­ly after the acqui­si­tion. “We’re going to take over the world. One robot at a time,” wrote Levandows­ki anoth­er time.

    But Levandowski’s amaz­ing robot future was about to crum­ble before his eyes.

    ***********

    Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions.

    Worse was yet to come. Levandows­ki had always been a con­tro­ver­sial fig­ure at Google. With his abrupt res­ig­na­tion, the launch of Otto, and its rapid acqui­si­tion by Uber, Google launched an inter­nal inves­ti­ga­tion in the sum­mer of 2016. It found that Levandows­ki had down­loaded near­ly 10 giga­bytes of Google’s secret files just before he resigned, many of them relat­ing to lidar tech­nol­o­gy.

    Also in Decem­ber 2016, in an echo of the Tyto inci­dent, a Way­mo employ­ee was acci­den­tal­ly sent an email from a ven­dor that includ­ed a draw­ing of an Otto cir­cuit board. The design looked very sim­i­lar to Waymo’s cur­rent lidars.

    Way­mo says the “final piece of the puz­zle” came from a sto­ry about Otto I wrote for Backchan­nel based on a pub­lic records request. A doc­u­ment sent by Otto to Neva­da offi­cials boast­ed the com­pa­ny had an “in-house cus­tom-built 64-laser” lidar sys­tem. To Way­mo, that sound­ed very much like tech­nol­o­gy it had devel­oped. In Feb­ru­ary this year, Way­mo filed its head­line law­suit accus­ing Uber (along with Otto Truck­ing, yet anoth­er of Levandowski’s com­pa­nies, but one that Uber had not pur­chased) of vio­lat­ing its patents and mis­ap­pro­pri­at­ing trade secrets on lidar and oth­er tech­nolo­gies.

    Uber imme­di­ate­ly denied the accu­sa­tions and has con­sis­tent­ly main­tained its inno­cence. Uber says there is no evi­dence that any of Waymo’s tech­ni­cal files ever came to Uber, let alone that Uber ever made use of them. While Levandows­ki is not named as a defen­dant, he has refused to answer ques­tions in depo­si­tions with Waymo’s lawyers and is expect­ed to do the same at tri­al. (He turned down sev­er­al requests for inter­views for this sto­ry.) He also didn’t ful­ly coop­er­ate with Uber’s own inves­ti­ga­tion into the alle­ga­tions, and that, Uber says, is why it fired him in May.

    Levandows­ki prob­a­bly does not need a job. With the pur­chase of 510 Sys­tems and Anthony’s Robots, his salary, and bonus­es, Levandows­ki earned at least $120 mil­lion from his time at Google. Some of that mon­ey has been invest­ed in mul­ti­ple real estate devel­op­ments with his col­lege friend Randy Miller, includ­ing sev­er­al large projects in Oak­land and Berke­ley.

    But Levandows­ki has kept busy behind the scenes. In August, court fil­ings say, he per­son­al­ly tracked down a pair of ear­rings giv­en to a Google employ­ee at her going-away par­ty in 2014. The ear­rings were made from con­fi­den­tial lidar cir­cuit boards, and will pre­sum­ably be used by Otto Trucking’s lawyers to sug­gest that Way­mo does not keep a very close eye on its trade secrets.

    Some of Levandowski’s friends and col­leagues have expressed shock at the alle­ga­tions he faces, say­ing that they don’t reflect the per­son they knew. “It is...in char­ac­ter for Antho­ny to play fast and loose with things like intel­lec­tu­al prop­er­ty if it’s in pur­suit of build­ing his dream robot,” said Ben Dis­coe. “[But] I was a lit­tle sur­prised at the alleged mag­ni­tude of his dis­re­gard for IP.”

    “Def­i­nite­ly one of Anthony’s faults is to be aggres­sive as he is, but it’s also one of his great attrib­ut­es. I don’t see [him doing] all the oth­er stuff he has been accused of,” said David Gold­wa­ter.

    But Lar­ry Page is no longer con­vinced that Levandows­ki was key to Chauffeur’s suc­cess. In his depo­si­tion to the court, Page said, “I believe Anthony’s con­tri­bu­tions are quite pos­si­bly neg­a­tive of a high amount.” At Uber, some engi­neers pri­vate­ly say that Levandowski’s poor man­age­ment style set back that company’s self-dri­ving effort by a cou­ple of years.

    Even after this tri­al is done, Levandows­ki will not be able to rest easy. In May, a judge referred evi­dence from the case to the US Attorney’s office “for inves­ti­ga­tion of pos­si­ble theft of trade secrets,” rais­ing the pos­si­bil­i­ty of crim­i­nal pro­ceed­ings and prison time. Yet on the time­line that mat­ters to Antho­ny Levandows­ki, even that may not mean much. Build­ing a robot­i­cal­ly enhanced future is his pas­sion­ate life­time project. On the Way of the Future, law­suits or even a jail sen­tence might just feel like lit­tle bumps in the road.

    “This case is teach­ing Antho­ny some hard lessons but I don’t see [it] keep­ing him down,” said Randy Miller. “He believes firm­ly in his vision of a bet­ter world through robot­ics and he’s con­vinced me of it. It’s clear to me that he’s on a mis­sion.”

    “I think Antho­ny will rise from the ash­es,” agrees one friend and for­mer 510 Sys­tems engi­neer. “Antho­ny has the ambi­tion, the vision, and the abil­i­ty to recruit and dri­ve peo­ple. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cut­ting cor­ners.”

    ———-

    “God Is a Bot, and Antho­ny Levandows­ki Is His Mes­sen­ger” by Mark Har­ris; Wired; 09/27/2017

    “But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them?”

    Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them? It’s an impor­tant ques­tion that does­n’t just apply to self-dri­ving cars. Like AI God­heads. Can we ever trust man-made AI God­heads if it turns out we can’t trust the peo­ple who are mak­ing it? These are the stu­pid ques­tions we have to now giv­en the dis­turb­ing num­ber of pow­er­ful peo­ple who dou­ble as evan­ge­list for tech­no-cult Lib­er­tar­i­an ide­olo­gies. Espe­cial­ly when they are spe­cial­ist in cre­at­ing auto­mat­ed vehi­cles and have a deep pas­sion for tak­ing over the world. Pos­si­bly tak­ing over the world mil­i­tar­i­ly using robots:

    ...
    Over the next few years, Levandows­ki was key to Chauffeur’s progress. 510 Sys­tems built five more self-dri­ving cars for Google—as well as ran­dom gad­gets like an autonomous trac­tor and a portable lidar sys­tem. “Antho­ny is light­ning in a bot­tle, he has so much ener­gy and so much vision,” remem­bers a friend and for­mer 510 engi­neer. “I frick­ing loved brain­storm­ing with the guy. I loved that we could cre­ate a vision of the world that didn’t exist yet and both fall in love with that vision.”

    But there were down­sides to his man­ic ener­gy, too. “He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense,” said the same engi­neer. “It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”
    ...

    Yeah, that’s dis­turb­ing. And it does­n’t help that Levandows­ki appar­ent­ly found a soul­mate and men­tor in the guy wide­ly viewed as one of the most socio­path­ic CEOs today: Uber CEO Travis Kalan­ick:

    ...
    The full-court press worked. Uber com­plet­ed its own acqui­si­tion of Otto in August, and Uber founder Travis Kalan­ick put Levandows­ki in charge of the com­bined com­pa­nies’ self-dri­ving efforts across per­son­al trans­porta­tion, deliv­ery and truck­ing. Uber would even pro­pose a Tiramisu-like autonomous air taxi called Uber Ele­vate. Now report­ing direct­ly to Kalan­ick and in charge of a 1500-strong group, Levandows­ki demand­ed the email address “robot@uber.com.”

    In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. “Down to hang out this eve and mas­ter­mind some shit,” texted Kalan­ick, short­ly after the acqui­si­tion. “We’re going to take over the world. One robot at a time,” wrote Levandows­ki anoth­er time.
    ...

    “We’re going to take over the world. One robot at a time”

    So that gives us an idea of how Levandowski’s AI reli­gion is going to be evan­ge­lized: via his army of robots. Although it’s unclear if his future reli­gion is actu­al­ly intend­ed for us mere humans. After all, for the hard­core Tran­shu­man­ists we’re all sup­posed to fuse with machines or upload our brains so it’s very pos­si­ble humans aren’t actu­al­ly part of Levandowski’s vision for that bet­ter tomor­row. A vision that, as the Cica­da 3301 weird­ness reminds us, prob­a­bly isn’t lim­it­ed to Levandows­ki. *gulp*

    Posted by Pterrafractyl | September 28, 2017, 8:43 pm
  11. Just what the world needs: an AI-pow­ered ‘gay­dar’ algo­rithm that pur­ports to to be able to detect who is gay and who isn’t just by look­ing at faces. Although it’s not actu­al­ly that impres­sive. The ‘gay­dar’ algo­rithm is instead giv­en pairs of faces, one of a het­ero­sex­u­al indi­vid­ual and one of a homo­sex­u­al indi­vid­ual, and the tries to iden­ti­fy the gay per­son. And appar­ent­ly does so cor­rect­ly at a rate of 81 per­cent of cas­es for men and 71 per­cent of cas­es for women, sig­nif­i­cant­ly bet­ter than the 50 per­cent rate we would expect just from ran­dom chance. It’s the kind of gay­dar tech­nol­o­gy that might not be good enough to just ‘pick the gays from a crowd’, but still more than ade­quate for poten­tial­ly being abused. And, more gen­er­al­ly, it’s the kind of research that not sur­pris­ing­ly is rais­ing con­cerns about this cre­at­ing a a 21st cen­tu­ry ver­sion of physig­no­my, the pseu­do­science based on the idea that peo­ples’ char­ac­ter is reflect­ed in their faces.

    But as the researchers behind the study put it, we don’t need to wor­ry about this being a high-tech exam­ple of phys­iog­no­my because their gay­dar uses hard sci­ence. And while the researchers agree that phys­iog­no­my is pseu­do­science, they also note that the pseu­do­science nature of phys­iog­no­my does­n’t mean AIs can’t actu­al­ly learn some­thing about you just by look­ing at you. Yep. That’s the reas­sur­ance we’re get­ting from these researchers. Don’t wor­ry about AI dri­ving a 21st cen­tu­ry ver­sion of phys­iog­no­my because they are using much bet­ter sci­ence com­pared to the past. Feel­ing reas­sured?

    The Verge

    The inven­tion of AI ‘gay­dar’ could be the start of some­thing much worse
    Researchers claim they can spot gay peo­ple from a pho­to, but crit­ics say we’re revis­it­ing pseu­do­science

    by James Vin­cent
    Sep 21, 2017, 1:24pm EDT

    Two weeks ago, a pair of researchers from Stan­ford Uni­ver­si­ty made a star­tling claim. Using hun­dreds of thou­sands of images tak­en from a dat­ing web­site, they said they had trained a facial recog­ni­tion sys­tem that could iden­ti­fy whether some­one was straight or gay just by look­ing at them. The work was first cov­ered by The Econ­o­mist, and oth­er pub­li­ca­tions soon fol­lowed suit, with head­lines like “New AI can guess whether you’re gay or straight from a pho­to­graph” and “AI Can Tell If You’re Gay From a Pho­to, and It’s Ter­ri­fy­ing.”

    As you might have guessed, it’s not as straight­for­ward as that. (And to be clear, based on this work alone, AI can’t tell whether some­one is gay or straight from a pho­to.) But the research cap­tures com­mon fears about arti­fi­cial intel­li­gence: that it will open up new avenues for sur­veil­lance and con­trol, and could be par­tic­u­lar­ly harm­ful for mar­gin­al­ized peo­ple. One of the paper’s authors, Dr Michal Kosin­s­ki, says his intent is to sound the alarm about the dan­gers of AI, and warns that facial recog­ni­tion will soon be able to iden­ti­fy not only someone’s sex­u­al ori­en­ta­tion, but their polit­i­cal views, crim­i­nal­i­ty, and even their IQ.

    With state­ments like these, some wor­ry we’re reviv­ing an old belief with a bad his­to­ry: that you can intu­it char­ac­ter from appear­ance. This pseu­do­science, phys­iog­no­my, was fuel for the sci­en­tif­ic racism of the 19th and 20th cen­turies, and gave moral cov­er to some of humanity’s worst impuls­es: to demo­nize, con­demn, and exter­mi­nate fel­low humans. Crit­ics of Kosinski’s work accuse him of replac­ing the calipers of the 19th cen­tu­ry with the neur­al net­works of the 21st, while the pro­fes­sor him­self says he is hor­ri­fied by his find­ings, and hap­py to be proved wrong. “It’s a con­tro­ver­sial and upset­ting sub­ject, and it’s also upset­ting to us,” he tells The Verge.

    But is it pos­si­ble that pseu­do­science is sneak­ing back into the world, dis­guised in new garb thanks to AI? Some peo­ple say machines are sim­ply able to read more about us than we can our­selves, but what if we’re train­ing them to car­ry out our prej­u­dices, and, in doing so, giv­ing new life to old ideas we right­ly dis­missed? How are we going to know the dif­fer­ence?

    Can AI real­ly spot sex­u­al ori­en­ta­tion?

    First, we need to look at the study at the heart of the recent debate, writ­ten by Kosin­s­ki and his co-author Yilun Wang. Its results have been poor­ly report­ed, with a lot of the hype com­ing from mis­rep­re­sen­ta­tions of the system’s accu­ra­cy. The paper states: “Giv­en a sin­gle facial image, [the soft­ware] could cor­rect­ly dis­tin­guish between gay and het­ero­sex­u­al men in 81 per­cent of cas­es, and in 71 per­cent of cas­es for women.” These rates increase when the sys­tem is giv­en five pic­tures of an indi­vid­ual: up to 91 per­cent for men, and 83 per­cent for women.

    On the face of it, this sounds like “AI can tell if a man is gay or straight 81 per­cent of the time by look­ing at his pho­to.” (Thus the head­lines.) But that’s not what the fig­ures mean. The AI wasn’t 81 per­cent cor­rect when being shown ran­dom pho­tos: it was test­ed on a pair of pho­tos, one of a gay per­son and one of a straight per­son, and then asked which indi­vid­ual was more like­ly to be gay. It guessed right 81 per­cent of the time for men and 71 per­cent of the time for women, but the struc­ture of the test means it start­ed with a base­line of 50 per­cent — that’s what it’d get guess­ing at ran­dom. And although it was sig­nif­i­cant­ly bet­ter than that, the results aren’t the same as say­ing it can iden­ti­fy anyone’s sex­u­al ori­en­ta­tion 81 per­cent of the time.

    As Philip Cohen, a soci­ol­o­gist at the Uni­ver­si­ty of Mary­land who wrote a blog post cri­tiquing the paper, told The Verge: “Peo­ple are scared of a sit­u­a­tion where you have a pri­vate life and your sex­u­al ori­en­ta­tion isn’t known, and you go to an air­port or a sport­ing event and a com­put­er scans the crowd and iden­ti­fies whether you’re gay or straight. But there’s just not much evi­dence this tech­nol­o­gy can do that.”

    Kosin­s­ki and Wang make this clear them­selves toward the end of the paper when they test their sys­tem against 1,000 pho­tographs instead of two. They ask the AI to pick out who is most like­ly to be gay in a dataset in which 7 per­cent of the pho­to sub­jects are gay, rough­ly reflect­ing the pro­por­tion of straight and gay men in the US pop­u­la­tion. When asked to select the 100 indi­vid­u­als most like­ly to be gay, the sys­tem gets only 47 out of 70 pos­si­ble hits. The remain­ing 53 have been incor­rect­ly iden­ti­fied. And when asked to iden­ti­fy a top 10, nine are right.

    If you were a bad actor try­ing to use this sys­tem to iden­ti­fy gay peo­ple, you couldn’t know for sure you were get­ting cor­rect answers. Although, if you used it against a large enough dataset, you might get most­ly cor­rect guess­es. Is this dan­ger­ous? If the sys­tem is being used to tar­get gay peo­ple, then yes, of course. But the rest of the study sug­gests the pro­gram has even fur­ther lim­i­ta­tions.

    What can com­put­ers real­ly see that humans can’t?

    It’s also not clear what fac­tors the facial recog­ni­tion sys­tem is using to make its judge­ments. Kosin­s­ki and Wang’s hypoth­e­sis is that it’s pri­mar­i­ly iden­ti­fy­ing struc­tur­al dif­fer­ences: fem­i­nine fea­tures in the faces of gay men and mas­cu­line fea­tures in the faces of gay women. But it’s pos­si­ble that the AI is being con­fused by oth­er stim­uli — like facial expres­sions in the pho­tos.

    This is par­tic­u­lar­ly rel­e­vant because the images used in the study were tak­en from a dat­ing web­site. As Greg­gor Matt­son, a pro­fes­sor of soci­ol­o­gy at Ober­lin Col­lege, point­ed out in a blog post, this means that the images them­selves are biased, as they were select­ed specif­i­cal­ly to attract some­one of a cer­tain sex­u­al ori­en­ta­tion. They almost cer­tain­ly play up to our cul­tur­al expec­ta­tions of how gay and straight peo­ple should look, and, to fur­ther nar­row their applic­a­bil­i­ty, all the sub­jects were white, with no inclu­sion of bisex­u­al or self-iden­ti­fied trans indi­vid­u­als. If a straight male choos­es the most stereo­typ­i­cal­ly “man­ly” pic­ture of him­self for a dat­ing site, it says more about what he thinks soci­ety wants from him than a link between the shape of his jaw and his sex­u­al ori­en­ta­tion.

    To try and ensure their sys­tem was look­ing at facial struc­ture only, Kosin­s­ki and Wang used soft­ware called VGG-Face, which encodes faces as strings of num­bers and has been used for tasks like spot­ting celebri­ty looka­likes in paint­ings. This pro­gram, they write, allows them to “min­i­mize the role [of] tran­sient fea­tures” like light­ing, pose, and facial expres­sion.

    But researcher Tom White, who works on AI facial sys­tem, says VGG-Face is actu­al­ly very good at pick­ing up on these ele­ments. White point­ed this out on Twit­ter, and explained to The Verge over email how he’d test­ed the soft­ware and used it to suc­cess­ful­ly dis­tin­guish between faces with expres­sions like “neu­tral” and “hap­py,” as well as pos­es and back­ground col­or.

    Speak­ing to The Verge, Kosin­s­ki says he and Wang have been explic­it that things like facial hair and make­up could be a fac­tor in the AI’s deci­sion-mak­ing, but he main­tains that facial struc­ture is the most impor­tant. “If you look at the over­all prop­er­ties of VGG-Face, it tends to put very lit­tle weight on tran­sient facial fea­tures,” Kosin­s­ki says. “We also pro­vide evi­dence that non-tran­sient facial fea­tures seem to be pre­dic­tive of sex­u­al ori­en­ta­tion.”

    The prob­lem is, we can’t know for sure. Kosin­s­ki and Wang haven’t released the pro­gram they cre­at­ed or the pic­tures they used to train it. They do test their AI on oth­er pic­ture sources, to see if it’s iden­ti­fy­ing some fac­tor com­mon to all gay and straight, but these tests were lim­it­ed and also drew from a biased dataset — Face­book pro­file pic­tures from men who liked pages such as “I love being Gay,” and “Gay and Fab­u­lous.”

    Do men in these groups serve as rea­son­able prox­ies for all gay men? Prob­a­bly not, and Kosin­s­ki says it’s pos­si­ble his work is wrong. “Many more stud­ies will need to be con­duct­ed to ver­i­fy [this],” he says. But it’s tricky to say how one could com­plete­ly elim­i­nate selec­tion bias to per­form a con­clu­sive test. Kosin­s­ki tells The Verge, “You don’t need to under­stand how the mod­el works to test whether it’s cor­rect or not.” How­ev­er, it’s the accep­tance of the opac­i­ty of algo­rithms that makes this sort of research so fraught.

    If AI can’t show its work­ing, can we trust it?

    AI researchers can’t ful­ly explain why their machines do the things they do. It’s a chal­lenge that runs through the entire field, and is some­times referred to as the “black box” prob­lem. Because of the meth­ods used to train AI, these pro­grams can’t show their work in the same way nor­mal soft­ware does, although researchers are work­ing to amend this.

    In the mean­time, it leads to all sorts of prob­lems. A com­mon one is that sex­ist and racist bias­es are cap­tured from humans in the train­ing data and repro­duced by the AI. In the case of Kosin­s­ki and Wang’s work, the “black box” allows them to make a par­tic­u­lar sci­en­tif­ic leap of faith. Because they’re con­fi­dent their sys­tem is pri­mar­i­ly ana­lyz­ing facial struc­tures, they say their research shows that facial struc­tures pre­dict sex­u­al ori­en­ta­tion. (“Study 1a showed that facial fea­tures extract­ed by a [neur­al net­work] can be used to accu­rate­ly iden­ti­fy the sex­u­al ori­en­ta­tion of both men and women.”)

    Experts say this is a mis­lead­ing claim that isn’t sup­port­ed by the lat­est sci­ence. There may be a com­mon cause for face shape and sex­u­al ori­en­ta­tion — the most prob­a­ble cause is the bal­ance of hor­mones in the womb — but that doesn’t mean face shape reli­ably pre­dicts sex­u­al ori­en­ta­tion, says Qazi Rah­man, an aca­d­e­m­ic at King’s Col­lege Lon­don who stud­ies the biol­o­gy of sex­u­al ori­en­ta­tion. “Biology’s a lit­tle bit more nuanced than we often give it cred­it for,” he tells The Verge. “The issue here is the strength of the asso­ci­a­tion.”

    The idea that sex­u­al ori­en­ta­tion comes pri­mar­i­ly from biol­o­gy is itself con­tro­ver­sial. Rah­man, who believes that sex­u­al ori­en­ta­tion is most­ly bio­log­i­cal, prais­es Kosin­s­ki and Wang’s work. “It’s not junk sci­ence,” he says. “More like sci­ence some­one doesn’t like.” But when it comes to pre­dict­ing sex­u­al ori­en­ta­tion, he says there’s a whole pack­age of “atyp­i­cal gen­der behav­ior” that needs to be con­sid­ered. “The issue for me is more that [the study] miss­es the point, and that’s behav­ior.”

    Reduc­ing the ques­tion of sex­u­al ori­en­ta­tion to a sin­gle, mea­sur­able fac­tor in the body has a long and often inglo­ri­ous his­to­ry. As Mat­ton writes in his blog post, approach­es have ranged from “19th cen­tu­ry mea­sure­ments of les­bians’ cli­toris­es and homo­sex­u­al men’s hips, to late 20th cen­tu­ry claims to have dis­cov­ered ‘gay genes,’ ‘gay brains,’ ‘gay ring fin­gers,’ ‘les­bian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppres­sion: it gives peo­ple who want to dehu­man­ize and per­se­cute sex­u­al minori­ties a “sci­en­tif­ic” pre­text.

    Jen­ny Davis, a lec­tur­er in soci­ol­o­gy at the Aus­tralian Nation­al Uni­ver­si­ty, describes it as a form of bio­log­i­cal essen­tial­ism. This is the belief that things like sex­u­al ori­en­ta­tion are root­ed in the body. This approach, she says, is dou­ble-edged. On the one hand, it “does a use­ful polit­i­cal thing: detach­ing blame from same-sex desire. But on the oth­er hand, it rein­forces the deval­ued posi­tion of that kind of desire,” set­ting up het­ro­sex­u­al­i­ty as the norm and fram­ing homo­sex­u­al­i­ty as “less valu­able … a sort of ill­ness.”

    ...

    Your char­ac­ter, as plain as the nose on your face

    For cen­turies, peo­ple have believed that the face held the key to the char­ac­ter. The notion has its roots in ancient Greece, but was par­tic­u­lar­ly influ­en­tial in the 19th cen­tu­ry. Pro­po­nents of phys­iog­no­my sug­gest­ed that by mea­sur­ing things like the angle of someone’s fore­head or the shape of their nose, they could deter­mine if a per­son was hon­est or a crim­i­nal. Last year in Chi­na, AI researchers claimed they could do the same thing using facial recog­ni­tion.

    Their research, pub­lished as “Auto­mat­ed Infer­ence on Crim­i­nal­i­ty Using Face Images,” caused a minor uproar in the AI com­mu­ni­ty. Sci­en­tists point­ed out flaws in the study, and con­clud­ed that that work was repli­cat­ing human prej­u­dices about what con­sti­tutes a “mean” or a “nice” face. In a wide­ly shared rebut­tal titled “Physiognomy’s New Clothes,” Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect “more research in the com­ing years that has sim­i­lar … false claims to sci­en­tif­ic objec­tiv­i­ty in order to ‘laun­der’ human prej­u­dice and dis­crim­i­na­tion.” (Google declined to make Agüera y Arcas avail­able to com­ment on this report.)

    Kosin­s­ki and Wang’s paper clear­ly acknowl­edges the dan­gers of phys­iog­no­my, not­ing that the prac­tice “is now uni­ver­sal­ly, and right­ly, reject­ed as a mix of super­sti­tion and racism dis­guised as sci­ence.” But, they con­tin­ue, just because a sub­ject is “taboo,” doesn’t mean it has no basis in truth. They say that because humans are able to read char­ac­ter­is­tics like per­son­al­i­ty in oth­er people’s faces with “low accu­ra­cy,” machines should be able to do the same but more accu­rate­ly.

    Kosin­s­ki says his research isn’t phys­iog­no­my because it’s using rig­or­ous sci­en­tif­ic meth­ods, and his paper cites a num­ber of stud­ies show­ing that we can deduce (with vary­ing accu­ra­cy) traits about peo­ple by look­ing at them. “I was edu­cat­ed and made to believe that it’s absolute­ly impos­si­ble that the face con­tains any infor­ma­tion about your inti­mate traits, because phys­iog­no­my and phrenol­o­gy were just pseu­do­sciences,” he says. “But the fact that they were claim­ing things with­out any basis in fact, that they were mak­ing stuff up, doesn’t mean that this stuff is not real.” He agrees that phys­iog­no­my is not sci­ence, but says there may be truth in its basic con­cepts that com­put­ers can reveal.

    For Davis, this sort of atti­tude comes from a wide­spread and mis­tak­en belief in the neu­tral­i­ty and objec­tiv­i­ty of AI. “Arti­fi­cial intel­li­gence is not in fact arti­fi­cial,” she tells The Verge. “Machines learn like humans learn. We’re taught through cul­ture and absorb the norms of social struc­ture, and so does arti­fi­cial intel­li­gence. So it will re-cre­ate, ampli­fy, and con­tin­ue on the tra­jec­to­ries we’ve taught it, which are always going to reflect exist­ing cul­tur­al norms.”

    We’ve already cre­at­ed sex­ist and racist algo­rithms, and these sorts of cul­tur­al bias­es and phys­iog­no­my are real­ly just two sides of the same coin: both rely on bad evi­dence to judge oth­ers. The work by the Chi­nese researchers is an extreme exam­ple, but it’s cer­tain­ly not the only one. There’s at least one start­up already active that claims it can spot ter­ror­ists and pedophiles using face recog­ni­tion, and there are many oth­ers offer­ing to ana­lyze “emo­tion­al intel­li­gence” and con­duct AI-pow­ered sur­veil­lance.

    Fac­ing up to what’s com­ing

    But to return to the ques­tions implied by those alarm­ing head­lines about Kosin­s­ki and Wang’s paper: is AI going to be used to per­se­cute sex­u­al minori­ties?

    This sys­tem? No. A dif­fer­ent one? Maybe.

    Kosin­s­ki and Wang’s work is not invalid, but its results need seri­ous qual­i­fi­ca­tions and fur­ther test­ing. With­out that, all we know about their sys­tem is that it can spot with some reli­a­bil­i­ty the dif­fer­ence between self-iden­ti­fied gay and straight white peo­ple on one par­tic­u­lar dat­ing site. We don’t know that it’s spot­ted a bio­log­i­cal dif­fer­ence com­mon to all gay and straight peo­ple; we don’t know if it would work with a wider set of pho­tos; and the work doesn’t show that sex­u­al ori­en­ta­tion can be deduced with noth­ing more than, say, a mea­sure­ment of the jaw. It’s not decod­ed human sex­u­al­i­ty any more than AI chat­bots have decod­ed the art of a good con­ver­sa­tion. (Nor do its authors make such a claim.)

    The research was pub­lished to warn peo­ple, say Kosin­s­ki, but he admits it’s an “unavoid­able para­dox” that to do so you have to explain how you did what you did. All the tools used in the paper are avail­able for any­one to find and put togeth­er them­selves. Writ­ing at the deep learn­ing edu­ca­tion site Fast.ai, researcher Jere­my Howard con­cludes: “It is prob­a­bly rea­son­ably [sic] to assume that many orga­ni­za­tions have already com­plet­ed sim­i­lar projects, but with­out pub­lish­ing them in the aca­d­e­m­ic lit­er­a­ture.”

    We’ve already men­tioned star­tups work­ing on this tech, and it’s not hard to find gov­ern­ment regimes that would use it. In coun­tries like Iran and Sau­di Ara­bia homo­sex­u­al­i­ty is still pun­ish­able by death; in many oth­er coun­tries, being gay means being hound­ed, impris­oned, and tor­tured by the state. Recent reports have spo­ken of the open­ing of con­cen­tra­tion camps for gay men in the Chechen Repub­lic, so what if some­one there decides to make their own AI gay­dar, and scan pro­file pic­tures from Russ­ian social media?

    Here, it becomes clear that the accu­ra­cy of sys­tems like Kosin­s­ki and Wang’s isn’t real­ly the point. If peo­ple believe AI can be used to deter­mine sex­u­al pref­er­ence, they will use it. With that in mind, it’s more impor­tant than ever that we under­stand the lim­i­ta­tions of arti­fi­cial intel­li­gence, to try and neu­tral­ize dan­gers before they start impact­ing peo­ple. Before we teach machines our prej­u­dices, we need to first teach our­selves.

    ———-

    “The inven­tion of AI ‘gay­dar’ could be the start of some­thing much worse” by James Vin­cent; The Verge; 09/21/2017

    Kosin­s­ki says his research isn’t phys­iog­no­my because it’s using rig­or­ous sci­en­tif­ic meth­ods, and his paper cites a num­ber of stud­ies show­ing that we can deduce (with vary­ing accu­ra­cy) traits about peo­ple by look­ing at them. “I was edu­cat­ed and made to believe that it’s absolute­ly impos­si­ble that the face con­tains any infor­ma­tion about your inti­mate traits, because phys­iog­no­my and phrenol­o­gy were just pseu­do­sciences,” he says. “But the fact that they were claim­ing things with­out any basis in fact, that they were mak­ing stuff up, doesn’t mean that this stuff is not real.” He agrees that phys­iog­no­my is not sci­ence, but says there may be truth in its basic con­cepts that com­put­ers can reveal.”

    It’s not a return of the phys­iog­no­my pseu­do­science but “there may be truth in [phys­iog­no­my’s] basic con­cepts that com­put­ers can reveal.” That’s seri­ous­ly the mes­sage from these researchers, along with a mes­sage of con­fi­dence that their algo­rithm is work­ing sole­ly from facial fea­tures and not oth­er more tran­sient fea­tures. And based on that con­fi­dence in their algo­rithm the researchers point to their results on evi­dence that gay peo­ple have bio­log­i­cal­ly dif­fer­ent faces...even though they can’t actu­al­ly deter­mine what the algo­rithm is look­ing at when com­ing to its con­clu­sion:

    ...
    If AI can’t show its work­ing, can we trust it?

    AI researchers can’t ful­ly explain why their machines do the things they do. It’s a chal­lenge that runs through the entire field, and is some­times referred to as the “black box” prob­lem. Because of the meth­ods used to train AI, these pro­grams can’t show their work in the same way nor­mal soft­ware does, although researchers are work­ing to amend this.

    In the mean­time, it leads to all sorts of prob­lems. A com­mon one is that sex­ist and racist bias­es are cap­tured from humans in the train­ing data and repro­duced by the AI. In the case of Kosin­s­ki and Wang’s work, the “black box” allows them to make a par­tic­u­lar sci­en­tif­ic leap of faith. Because they’re con­fi­dent their sys­tem is pri­mar­i­ly ana­lyz­ing facial struc­tures, they say their research shows that facial struc­tures pre­dict sex­u­al ori­en­ta­tion. (“Study 1a showed that facial fea­tures extract­ed by a [neur­al net­work] can be used to accu­rate­ly iden­ti­fy the sex­u­al ori­en­ta­tion of both men and women.”)

    Experts say this is a mis­lead­ing claim that isn’t sup­port­ed by the lat­est sci­ence. There may be a com­mon cause for face shape and sex­u­al ori­en­ta­tion — the most prob­a­ble cause is the bal­ance of hor­mones in the womb — but that doesn’t mean face shape reli­ably pre­dicts sex­u­al ori­en­ta­tion, says Qazi Rah­man, an aca­d­e­m­ic at King’s Col­lege Lon­don who stud­ies the biol­o­gy of sex­u­al ori­en­ta­tion. “Biology’s a lit­tle bit more nuanced than we often give it cred­it for,” he tells The Verge. “The issue here is the strength of the asso­ci­a­tion.”

    The idea that sex­u­al ori­en­ta­tion comes pri­mar­i­ly from biol­o­gy is itself con­tro­ver­sial. Rah­man, who believes that sex­u­al ori­en­ta­tion is most­ly bio­log­i­cal, prais­es Kosin­s­ki and Wang’s work. “It’s not junk sci­ence,” he says. “More like sci­ence some­one doesn’t like.” But when it comes to pre­dict­ing sex­u­al ori­en­ta­tion, he says there’s a whole pack­age of “atyp­i­cal gen­der behav­ior” that needs to be con­sid­ered. “The issue for me is more that [the study] miss­es the point, and that’s behav­ior.”

    Reduc­ing the ques­tion of sex­u­al ori­en­ta­tion to a sin­gle, mea­sur­able fac­tor in the body has a long and often inglo­ri­ous his­to­ry. As Mat­ton writes in his blog post, approach­es have ranged from “19th cen­tu­ry mea­sure­ments of les­bians’ cli­toris­es and homo­sex­u­al men’s hips, to late 20th cen­tu­ry claims to have dis­cov­ered ‘gay genes,’ ‘gay brains,’ ‘gay ring fin­gers,’ ‘les­bian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppres­sion: it gives peo­ple who want to dehu­man­ize and per­se­cute sex­u­al minori­ties a “sci­en­tif­ic” pre­text.

    Jen­ny Davis, a lec­tur­er in soci­ol­o­gy at the Aus­tralian Nation­al Uni­ver­si­ty, describes it as a form of bio­log­i­cal essen­tial­ism. This is the belief that things like sex­u­al ori­en­ta­tion are root­ed in the body. This approach, she says, is dou­ble-edged. On the one hand, it “does a use­ful polit­i­cal thing: detach­ing blame from same-sex desire. But on the oth­er hand, it rein­forces the deval­ued posi­tion of that kind of desire,” set­ting up het­ro­sex­u­al­i­ty as the norm and fram­ing homo­sex­u­al­i­ty as “less valu­able … a sort of ill­ness.”
    ...

    In the mean­time, it leads to all sorts of prob­lems. A com­mon one is that sex­ist and racist bias­es are cap­tured from humans in the train­ing data and repro­duced by the AI. In the case of Kosin­s­ki and Wang’s work, the “black box” allows them to make a par­tic­u­lar sci­en­tif­ic leap of faith. Because they’re con­fi­dent their sys­tem is pri­mar­i­ly ana­lyz­ing facial struc­tures, they say their research shows that facial struc­tures pre­dict sex­u­al ori­en­ta­tion. (“Study 1a showed that facial fea­tures extract­ed by a [neur­al net­work] can be used to accu­rate­ly iden­ti­fy the sex­u­al ori­en­ta­tion of both men and women.”)”

    So if this research was put forth as a kind of warn­ing to the pub­lic, which is how the researchers are fram­ing it, it’s quite a warn­ing. Both a warn­ing that algo­rithms like this are being devel­oped and a warn­ing of how read­i­ly the con­clu­sions of these algo­rithms might be accept­ed as evi­dence of an under­ly­ing bio­log­i­cal find­ing (as opposed to a ‘black box’ arti­fact that could be pick­ing up all sort of bio­log­i­cal and social cues).

    And don’t for­get, even if these algo­rithms do actu­al­ly stum­ble across real asso­ci­a­tions that can be teased out by these AI-dri­ven algo­rithms with just some pic­tures of some­one (or maybe some addi­tion­al bio­met­ric data picked up by the “smart kiosks” of the not-too-dis­tant future), there’s a big dif­fer­ence between demon­strat­ing the abil­i­ty to dis­cern some­thing sta­tis­ti­cal­ly across large data sets and being able to do that with the kind of accu­ra­cy where you don’t have to sig­nif­i­cant­ly wor­ry about jump­ing to the wrong con­clu­sion (assum­ing you aren’t using the tech­nol­o­gy in an abu­sive man­ner in the first place). Even if some­one devel­ops an algo­rithm that can accu­rate guess sex­u­al ori­en­ta­tion 95 per­cent of the time that still leaves a pret­ty sub­stan­tial 5 per­cent chance of get­ting it wrong. And the only way to avoid those incor­rect con­clu­sions is to devel­op an algo­rithm that’s so good at infer­ring sex­u­al ori­en­ta­tion it’s basi­cal­ly nev­er wrong, assum­ing that’s pos­si­ble. And, of course, if such an algo­rithm was devel­oped with that kind of accu­ra­cy that would be real­ly creepy. It points towards one of the scari­er aspects of this kind of tech­nol­o­gy: In order to ensure your pri­va­cy-invad­ing algo­rithms don’t risk jump­ing to erro­neous con­clu­sions you need algo­rithms that are scar­i­ly good at invad­ing your pri­va­cy which is anoth­er rea­son we prob­a­bly should­n’t be pro­mot­ing 21st Cen­tu­ry phys­iog­no­my.

    Posted by Pterrafractyl | October 2, 2017, 2:43 pm
  12. It looks like Google is final­ly get that shiny new object its been pin­ing for: a city. Yep, Side­walk Labs, owned by Google’s par­ent com­pa­ny Alpha­bet, just got per­mis­sion to build its own ‘city of the future’ on a 12-acre water­front dis­trict near Toron­to, filled with self-dri­ving shut­tles, adap­tive traf­fic lights that sense pedes­tri­ans, and under­ground tun­nels for freight-trans­port­ing robots. With sen­sors every­where.

    And if that sounds ambi­tious, note that all these plans aren’t lim­it­ed to the ini­tial 12 acres. Alpha­bet report­ed­ly has plans to expand across 800 acres of Toron­to’s post-indus­tri­al water­front zone:

    The Finan­cial Times

    Alpha­bet to build futur­is­tic city in Toron­to
    Plans for tech­nol­o­gy-enabled envi­ron­ment raise pri­va­cy con­cerns

    by Leslie Hook in San Fran­cis­co
    Octo­ber 17, 2017, 4:54 pm

    Alpha­bet is set­ting out to build the city of the future, start­ing with a down­town dis­trict of Toron­to, in what it hopes will serve as a prov­ing ground for tech­nol­o­gy-enabled urban envi­ron­ments around the world.

    In a first-of-its-kind project, Alphabet’s sub­sidiary Side­walk Labs will devel­op a 12-acre water­front dis­trict, Quay­side, with a view to expand across 800 acres of Toronto’s post-indus­tri­al water­front zone.

    Self-dri­ving shut­tles, adap­tive traf­fic lights that sense pedes­tri­ans, mod­u­lar hous­ing and freight-deliv­er­ing robots that trav­el in under­ground tun­nels might all be part of the new devel­op­ment, accord­ing to the win­ning bid sub­mit­ted by Side­walk Labs.

    In its pro­pos­al, Side­walk also said that Toron­to would need to waive or exempt many exist­ing reg­u­la­tions in areas like build­ing codes, trans­porta­tion, and ener­gy in order to build the city it envi­sioned. The project may need “sub­stan­tial for­bear­ances from exist­ing laws and reg­u­la­tions,” the group said.

    Alpha­bet chair­man Eric Schmidt and Cana­di­an prime min­is­ter Justin Trudeau announced the deal on Tues­day in Toron­to.

    “We start­ed think­ing about all the things we could do if some­one would just give us a city and put us in charge,” said Eric Schmidt, exec­u­tive chair­man of Alpha­bet. “That’s not how it works, for all sorts of good rea­sons,” he added with a laugh.

    For Alpha­bet, the project presents a chance to exper­i­ment with new ways to use tech­nol­o­gy — and data — in the real world. “This is not some ran­dom activ­i­ty from our per­spec­tive. This is the cul­mi­na­tion of almost 10 years of think­ing about how tech­nol­o­gy could improve people’s lives,” said Mr Schmidt.

    ...

    Mr Trudeau described the project as a “test bed for new technologies...that will help us build clean­er, smarter, green­er, cities”.

    “Eric [Schmidt] and I have been talk­ing about col­lab­o­rat­ing on this for a few years, and see­ing it all come togeth­er now is extra­or­di­nar­i­ly excit­ing,” he added.

    One of the chal­lenges for the new dis­trict will be set­ting data poli­cies and address­ing con­cerns over pri­va­cy, which are par­tic­u­lar­ly acute because smart city tech­nolo­gies often rely on col­lect­ing vast amounts of data to make cities run more effi­cient­ly.

    In the vision state­ment sub­mit­ted as part of its bid, Side­walk describes a vast sys­tem of sen­sors that will mon­i­tor every­thing from park bench­es and over­flow­ing waste bins, to noise and pol­lu­tion lev­els in hous­ing. The devel­op­ment will also pio­neer new approach­es to ener­gy, includ­ing a ther­mal grid and on-site gen­er­a­tion, and tech-enabled pri­ma­ry health­care that will be inte­grat­ed with social ser­vices.

    The trans­porta­tion pro­pos­al for the dis­trict includes restrict­ing pri­vate vehi­cles, and instead offer­ing self-dri­ving shut­tles and bike paths that are heat­ed in the win­ter, accord­ing to the vision doc­u­ment. A series of under­ground util­i­ty tun­nels will house util­i­ties like elec­tri­cal wires and water pipes, and also pro­vide path­ways for freight-deliv­er­ing robots.

    Side­walk Labs, a sub­sidiary of Alpha­bet that was found­ed in 2015 by Dan Doc­to­roff, a for­mer deputy may­or of New York, will spend $50m on ini­tial plan­ning and test­ing for the devel­op­ment. As part of the effort, Google will also move its Cana­di­an head­quar­ters to Toron­to.

    Mr Doc­to­roff said the group would present a detailed plan in one year, fol­low­ing exten­sive con­sul­ta­tions with the com­mu­ni­ty. “Our goal here is to lis­ten, to under­stand,” he said. “This has to be a com­mu­ni­ty conversation...otherwise it won’t have the polit­i­cal cred­i­bil­i­ty to do things that are quite bold.”

    ———-

    “Alpha­bet to build futur­is­tic city in Toron­to” by Leslie Hook; The Finan­cial Times; 10/17/2017

    “In its pro­pos­al, Side­walk also said that Toron­to would need to waive or exempt many exist­ing reg­u­la­tions in areas like build­ing codes, trans­porta­tion, and ener­gy in order to build the city it envi­sioned. The project may need “sub­stan­tial for­bear­ances from exist­ing laws and reg­u­la­tions,” the group said.

    LOL, yeah, it’s a good bet that A LOT of exist­ing laws and reg­u­la­tions are going to have be waived. Espe­cial­ly laws involv­ing per­son­al data pri­va­cy. And it sounds like the data col­lect­ed isn’t just going to involve your where­abouts and oth­er infor­ma­tion the sen­sors every­where will be able to pick up. Alpha­bet is also envi­sion­ing “tech-enabled pri­ma­ry health­care that will be inte­grat­ed with social ser­vices”, which means med­ical data pri­va­cy laws are prob­a­bly also going to have to get waived:

    ...
    One of the chal­lenges for the new dis­trict will be set­ting data poli­cies and address­ing con­cerns over pri­va­cy, which are par­tic­u­lar­ly acute because smart city tech­nolo­gies often rely on col­lect­ing vast amounts of data to make cities run more effi­cient­ly.

    In the vision state­ment sub­mit­ted as part of its bid, Side­walk describes a vast sys­tem of sen­sors that will mon­i­tor every­thing from park bench­es and over­flow­ing waste bins, to noise and pol­lu­tion lev­els in hous­ing. The devel­op­ment will also pio­neer new approach­es to ener­gy, includ­ing a ther­mal grid and on-site gen­er­a­tion, and tech-enabled pri­ma­ry health­care that will be inte­grat­ed with social ser­vices.
    ...

    Let’s also not for­get about the devel­op­ment of tech­nolo­gies that can col­lect per­son­al health infor­ma­tion like heart rates and breath­ing infor­ma­tion using WiFi sig­nals alone (which would pair nice­ly with Google’s plans to put free WiFi kiosks bristling with sen­sors on side­walks every­where. And as is pret­ty clear at this point, any­thing that can be sensed remote­ly will be sensed remote­ly in this new city. Because that’s half the point of the whole thing. So yeah, “sub­stan­tial for­bear­ances from exist­ing laws and reg­u­la­tions” will no doubt be required.

    Inter­est­ing­ly, Alpha­bet recent­ly announced a new ini­tia­tive that sounds like exact­ly the kind of “tech-enabled pri­ma­ry health­care that will be inte­grat­ed with social ser­vices” the com­pa­ny has planned for its new city: City­block was just launched,. It’s a new Alpha­bet start­up focused on improv­ing health care man­age­ment by, sur­prise!, inte­grat­ing var­i­ous tech­nolo­gies into a health care sys­tem with the goal of bring­ing down costs and improv­ing out­comes. But it’s not sim­ply new tech­nol­o­gy that’s sup­posed to do this. Instead, that tech­nol­o­gy is to be used in a pre­ven­tive man­ner in order to address more expen­sive health con­di­tions before they get worse. As such, City­block is going to focus­es on behav­ioral health being. Yep, it’s a health care mod­el where a tech firm, paired with a health firm, tries to get to you live a more healthy lifestyle by col­lect­ing lots of data about you. And while this approach would undoubt­ed­ly cause wide­spread pri­va­cy con­cerns, those con­cerns will prob­a­bly be some­what stunt­ed in this case since the tar­get mar­ket City­block has in mind is poor peo­ple, espe­cial­ly Med­ic­aid patients in the US:

    Fierce Health­care

    Google’s par­ent com­pa­ny spins off an inno­v­a­tive start­up health­care provider

    by Matt Kuhrt | Oct 5, 2017 8:49am

    The lat­est Sil­i­con Val­ley bid to dis­rupt a tra­di­tion­al indus­try appears to be aimed at health­care. City­block, a start­up qui­et­ly launched by Google’s par­ent com­pa­ny Alpha­bet, will focus on pro­vid­ing team-based care for low-income com­mu­ni­ties.

    The ven­ture comes from one of Alphabet’s inno­va­tion-ori­ent­ed groups, Side­walk Labs, and will rely upon a team-based care deliv­ery struc­ture that is sup­port­ed by doc­tors, behav­ioral health coach­es and tech­no­log­i­cal tools, accord­ing to an arti­cle from CNBC.

    Efforts by health­care orga­ni­za­tions to improve care man­age­ment and increase patient engage­ment through social inter­ac­tion have attract­ed atten­tion, par­tic­u­lar­ly in the con­text of chron­ic con­di­tions, as Fierce­Health­care has pre­vi­ous­ly report­ed. While cell phones and social media apps have pro­vid­ed new avenues to boost patient engage­ment, inte­grat­ing those tech­nolo­gies into an effec­tive care deliv­ery mod­el has proven more com­plex. At the same time, major play­ers such as the Cen­ters for Medicare & Med­ic­aid Ser­vices, active­ly seek feed­back on mod­els that pri­or­i­tize behav­ioral health in response to the indus­try’s inter­est in the poten­tial for effi­cien­cy from an increased empha­sis on pre­ven­tive and ongo­ing care.

    City­block aims to pro­vide Med­ic­aid and low­er-income Medicare ben­e­fi­cia­ries access to high-val­ue, read­i­ly avail­able per­son­al­ized health ser­vices. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medi­um that the orga­ni­za­tion will apply lead­ing-edge care mod­els that ful­ly inte­grate pri­ma­ry care, behav­ioral health and social ser­vices. It expects to open its first clin­ic, which it calls a Neigh­bor­hood Health Hub, in New York City in 2018.

    Cityblock’s inter­dis­ci­pli­nary man­age­ment team, which includes both vet­er­ans of the tra­di­tion­al health­care indus­try and Google tech­nol­o­gists, will focus on pre­ven­tive care. Behav­ioral health coach­es will dri­ve care teams that will build social rela­tion­ships and deliv­er care at cen­tral­ly locat­ed “hubs,” via tele­health ser­vices or house calls, accord­ing to the web­site. City­block is also in the process of nego­ti­at­ing part­ner­ships to ensure insur­ance com­pa­nies cov­er its ser­vices.

    ...

    He also points out that City­block has made a con­scious deci­sion to tar­get low-income Amer­i­cans, who he says have tra­di­tion­al­ly been short-changed by indus­try inno­va­tion efforts.

    ———-

    “Google’s par­ent com­pa­ny spins off an inno­v­a­tive start­up health­care provider” by Matt Kuhrt; Fierce Health­care; 10/05/2017

    City­block aims to pro­vide Med­ic­aid and low­er-income Medicare ben­e­fi­cia­ries access to high-val­ue, read­i­ly avail­able per­son­al­ized health ser­vices. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medi­um that the orga­ni­za­tion will apply lead­ing-edge care mod­els that ful­ly inte­grate pri­ma­ry care, behav­ioral health and social ser­vices. It expects to open its first clin­ic, which it calls a Neigh­bor­hood Health Hub, in New York City in 2018.”

    It’s prob­a­bly worth recall­ing that per­son­al­ized ser­vices for the poor intend­ed to ‘help them help them­selves’ was the cen­ter­piece for House Speak­er Paul Ryan’s pro­pos­al to give every poor per­son a life coach who will issue “life plans” and “con­tracts” that poor peo­ple would be expect­ed to meet and face penal­ties if they fail to meet them. So when we’re talk­ing about set­ting up spe­cial per­son­al­ized “behav­ior health” mon­i­tor­ing sys­tems as part of health care ser­vices for the poor, don’t for­get that this per­son­al­ized mon­i­tor­ing sys­tem is going to be real­ly handy when politi­cians want to say, “if you want to stay on Med­ic­aid you have bet­ter make XYZ changes in your lifestyle. We are watch­ing you.” And since right-wingers gen­er­al expect the poor to be super-human (capa­ble of work­ing mul­ti­ple jobs, get­ting an edu­ca­tion, raise a fam­i­ly, and deal­ing with any unfore­seen per­son­al dis­as­ters in stride, all simul­ta­ne­ous­ly), it’s not like it’s we should be sur­prised to see the kinds of behav­ior health stan­dards that almost no one can meet, espe­cial­ly since work­ing mul­ti­ple jobs, get­ting an edu­ca­tion, raise a fam­i­ly, and deal­ing with any unfore­seen per­son­al dis­as­ters in stride simul­ta­ne­ous­ly is an incred­i­bly unhealthy lifestyle.

    Also recall that Paul Ryan sug­gest­ed that his ‘life coach’ plan could apply to oth­er fed­er­al pro­grams for the poor, includ­ing food stamps. It’s not a stretch at all to imag­ine ‘life coach­es’ for Med­ic­aid recip­i­ents would appeal the right-wing. As long as it involves a ‘kick­ing the poor’ dynam­ic. And that’s part of the tragedy of the mod­ern age: sur­veil­lance tech­nol­o­gy and a focus on behav­ior health could be great as a help­ful vol­un­tary tool for peo­ple that want help get­ting health­i­er, but it’s hard to imag­ine it not becom­ing a coer­cive night­mare sce­nario in the US giv­en the incred­i­ble antipa­thy towards the poor that per­vades Amer­i­can soci­ety.

    So as creepy as Google’s city is on its face regard­ing what it tells us about how the future is unfold­ing for peo­ple of all incomes and class­es, don’t for­get that we could be look­ing at the first test bed for cre­at­ing the kind of sur­veil­lance wel­fare state that’s per­fect for kick­ing peo­ple off of wel­fare. Just make unre­al­is­tic stan­dards that involve a lot of pater­nal­is­tic moral pos­tur­ing (which should play well with vot­ers), watch all the poor peo­ple with the sur­veil­lance tech­nol­o­gy, and wait for the wave of inevitable fail­ures who can be kicked off for not ‘try­ing’ or some­thing.

    Posted by Pterrafractyl | October 18, 2017, 3:28 pm
  13. There’s some big news about Face­book’s mind-read­ing tech­nol­o­gy ambi­tions, although it’s not entire­ly clear if it’s good news, bad news, scary news or what: Regi­na Dugan, the for­mer head of DARPA who jumped to Google and then Face­book where we was work­ing on the mind-read­ing tech­nol­o­gy, just left Face­book. Why? Well, that’s where it’s unclear. Accord­ing to a Tweet Dugan made about her depar­ture:

    There is a tidal shift going on in Sil­i­con Val­ley, and those of us in this indus­try have greater respon­si­bil­i­ties than ever before. The tim­ing feels right to step away and be pur­pose­ful about what’s next, thought­ful about new ways to con­tribute in times of dis­rup­tion.

    So Dugan is leav­ing Face­book, to be more pur­pose­ful and respon­si­ble. And she was the one head­ing up the mind-read­ing tech­nol­o­gy project. Is that scary news? It’s unclear but it seems like that might be scary news:

    Giz­mo­do

    What Hap­pens to Facebook’s Mind Read­ing Project Now That Its Leader Is Gone?

    Alex Cranz
    10/17/2017 5:05pm

    Regi­na Dugan, a tech exec with roots in the gov­ern­ment sec­tor as the for­mer direc­tor of DARPA, is leav­ing Face­book and her depar­ture calls into ques­tion the sta­tus of one of the cra­zi­est things Face­book has been work­ing on.

    Fit­ting­ly, Dugan announced the depar­ture in a post on Face­book today.

    [see Face­book post]

    If you’re unfa­mil­iar with Dugan her­self you may be famil­iar with some of the skunkworks projects she over­saw while man­ag­ing Google’s secre­tive Advanced Tech­nol­o­gy and Projects (ATAP) group from 2012 to 2016. Those projects includ­ed Tan­go, a high­ly accu­rate aug­ment­ed real­i­ty device packed into a phone, and Ara, the now scut­tled mod­u­lar phone that could have made your mobile hard­ware upgrades a whole lot cheap­er.

    In 2016 Dugan left Google for anoth­er huge com­pa­ny with lit­tle con­sumer gad­get expe­ri­ence: Face­book. At Face­book she ran Build­ing 8, anoth­er pri­vate­ly fund­ed research and devel­op­ment group like ATAP.

    The projects Dugan and her col­leagues devel­oped at Build­ing 8 didn’t just include neat gad­gets for the near future; they could have also led to enor­mous leaps for­ward in tech­nol­o­gy as a whole. The most not­ed project was one announced at F8, Facebook’s annu­al devel­op­er con­fer­ence, in April. Called Silent Voice First, the project sought to allow com­put­ers to read your mind. Obvi­ous­ly that would make it eas­i­er to post to Face­book when your hands are busy, and it could be life alter­ing for peo­ple with severe phys­i­cal lim­i­ta­tions, but it would also, you know, be a com­put­er, run by Face­book, that READS YOUR MIND.

    Nei­ther Dugan nor Face­book has made it clear why she’s depart­ing at this time; a rep­re­sen­ta­tive for Face­book told Giz­mo­do the com­pa­ny had noth­ing fur­ther to add (we’ve also reached out to Dugan). And Face­book has not detailed what will hap­pen to the projects she over­saw at Build­ing 8.

    ...

    Beyond that, all we have is a pre­pared quote from Dugan that was pro­vid­ed to reporters, via Bloomberg’s Sarah Frier.

    There is a tidal shift going on in Sil­i­con Val­ley, and those of us in this indus­try have greater respon­si­bil­i­ties than ever before. The tim­ing feels right to step away and be pur­pose­ful about what’s next, thought­ful about new ways to con­tribute in times of dis­rup­tion.

    These aren’t exact­ly the words you want to hear from the woman over­see­ing the devel­op­ment of mind-read­ing tech­nol­o­gy for one of the largest pri­vate sur­veil­lance appa­ra­tus­es in the world.

    But it is a good reminder for us all that the biggest leaps for­ward in tech­nol­o­gy, the next steps on our jour­ney towards a Matrix or Star Trek-like future, are not nec­es­sar­i­ly in the hands of altru­is­tic sci­en­tists with pub­lic fund­ing, but poten­tial­ly in the hands of enor­mous pri­vate cor­po­ra­tions who seem to pri­mar­i­ly per­ceive humans as com­modi­ties, and tech­nol­o­gy as inroads into our wal­lets and minds. In cas­es like that one would hope they’d be respon­si­ble.

    ———-

    “What Hap­pens to Facebook’s Mind Read­ing Project Now That Its Leader Is Gone?” by Alex Cranz; Giz­mo­do; 10/17/2017

    “Nei­ther Dugan nor Face­book has made it clear why she’s depart­ing at this time; a rep­re­sen­ta­tive for Face­book told Giz­mo­do the com­pa­ny had noth­ing fur­ther to add (we’ve also reached out to Dugan). And Face­book has not detailed what will hap­pen to the projects she over­saw at Build­ing 8.”

    It’s a mys­tery. A mind-read­ing tech­nol­o­gy mys­tery. Oh good­ie. As the author of the above piece notes in response to Dugan’s tweet about being respon­si­ble and pur­pose­ful, these are exact­ly reas­sur­ing words in this con­text:

    ...
    These aren’t exact­ly the words you want to hear from the woman over­see­ing the devel­op­ment of mind-read­ing tech­nol­o­gy for one of the largest pri­vate sur­veil­lance appa­ra­tus­es in the world.
    ...

    That’s the con­text. The head of the mind-read­ing tech­nol­o­gy divi­sion for one of the largest pri­vate sur­veil­lance appa­ra­tus­es in the world just left the com­pa­ny for cryp­tic rea­sons involv­ing the need for the tech indus­try to be more respon­si­ble than ever and her choice to step away to be pur­pose­ful. It’s not par­tic­u­lar­ly reas­sur­ing news.

    Posted by Pterrafractyl | October 18, 2017, 8:45 pm
  14. Here’s some new research worth keep­ing in mind regard­ing the mind-read­ing tech­nolo­gies being devel­oped by Face­book and Elon Musk: While read­ing your exact thoughts, the stat­ed goals of Face­book and Musk, is prob­a­bly going to require quite a bit more research, read­ing your emo­tions is some­thing researchers can already do. And this abil­i­ty to read emo­tion can, in turn, be poten­tial­ly used to read what you’re think­ing by look­ing at your emo­tion­al response to par­tic­u­lar con­cepts.

    That’s what some researchers just demon­strat­ed, using fMRI brain imag­ing tech­nol­o­gy to gath­er data on brain activ­i­ty which was fed into soft­ware trained to iden­ti­fy dis­tinct pat­terns of brain activ­i­ty. The results are pret­ty astound­ing. In the study, researchers recruit­ed 34 indi­vid­u­als: 17 peo­ple who self-pro­fessed to hav­ing had sui­ci­dal thoughts before, and 17 oth­ers who had­n’t. Then they mea­sured the brain activ­i­ties of these 34 indi­vid­u­als in response to var­i­ous words, includ­ing the word “death.” It turns out “death” cre­at­ed a dis­tinct sig­na­ture of brain activ­i­ty dif­fer­en­ti­at­ing the sui­ci­dal indi­vid­u­als from the con­trol group, allow­ing the researchers to iden­ti­fy those with sui­ci­dal thoughts 91 per­cent of the time in this study:

    The Dai­ly Beast

    A Machine Might Just Read Your Mind and Pre­dict If You’re Sui­ci­dal
    A psy­chol­o­gy pro­fes­sor says his soft­ware can fig­ure out if a per­son is sui­ci­dal. But does it real­ly work?

    Tanya Basu
    11.01.17 9:00 AM ET

    A few years ago, Mar­cel Just was try­ing to fig­ure out how to have real-life appli­ca­tions for his research. Just, a pro­fes­sor of psy­chol­o­gy at Carnegie Mel­lon and direc­tor of the Cen­ter for Cog­ni­tive Brain Imag­ing, had spent the major­i­ty of the past decade teach­ing com­put­er pro­grams how to iden­ti­fy thoughts. He had found—with the help of an func­tion­al mag­net­ic res­o­nance imag­ing (fMRI) machine—that each emo­tion we feel had a spe­cif­ic sig­na­ture in the brain and lit up in unique­ly iden­ti­fi­able ways. Just had trained a piece of soft­ware to fol­low these pat­terns and rec­og­nize about 30 con­cepts and emo­tions.

    “We asked whether we could iden­ti­fy what a per­son was think­ing from the machine learn­ing pat­terns,” Just explained. “The machine learn­ing data was fig­ured out with var­i­ous kinds of con­cepts; even­tu­al­ly it learned how to map between pat­terns and con­cepts.”

    “From that research,” he added, “we real­ized it was pos­si­ble to tell what a per­son was think­ing.”

    In oth­er words, Just had cre­at­ed arti­fi­cial intel­li­gence that could read your thoughts.

    Around this time, Just spoke at the Uni­ver­si­ty of Pitts­burgh’s med­ical school. David Brent, a pro­fes­sor of psy­chol­o­gy spe­cial­iz­ing in chil­dren, approached him.

    “Do you think this could be used to iden­ti­fy sui­ci­dal thoughts?” Brent asked.

    It hit Just then: What if arti­fi­cial intel­li­gence could pre­dict what a sui­ci­dal per­son was think­ing? And maybe even pre­vent a sui­ci­dal per­son from com­mit­ting sui­cide?

    Ear­li­er this week, Just, Brent, and a few oth­er col­leagues pub­lished a land­mark paper in the jour­nal Nature that finds that with an aston­ish­ing 91% accu­ra­cy, arti­fi­cial intel­li­gence is able to fig­ure out if a per­son is con­sid­er­ing sui­cide.

    The exper­i­ment is remark­able for more than one rea­son. There’s of course the idea of using a machine trained to fig­ure out neur­al pat­terns to iden­ti­fy those who might con­sid­er sui­cide. It’s a sub­set of the pop­u­la­tion that is typ­i­cal­ly dif­fi­cult to pin­point and help before it’s too late, rely­ing not only on their telling oth­ers of their desire to kill them­selves, but also that per­son actu­al­ly act­ing on it and help­ing the sui­ci­dal per­son in trou­ble.

    Just and Brent recruit­ed 34 indi­vid­u­als from local clin­ics: 17 who’d self-pro­fessed to hav­ing had sui­ci­dal thoughts before, and 17 oth­ers who had­n’t. The research team put the 34 indi­vid­u­als through an fMRI machine and had them think about words (with the help of the Adult Sui­ci­dal Ideation Ques­tion­naire, which mea­sures “sui­cide ideation”) that rep­re­sent­ed dif­fer­ent “stim­u­lus” con­cepts, rang­ing from pos­i­tive ones (praise, bliss, care­free, and kind­ness), neg­a­tive ones (ter­ri­ble, cru­el­ty, evil), and sui­cide (fatal, funer­al, death).

    The last one—death—was the most damn­ing of the brain sig­na­tures in Just’s study. Those who were sui­ci­dal showed a spot of angry crim­son at the front of the brain. Con­trol patients, mean­while, just had specks of red amidst a sea of blue in the pic­tures. “These peo­ple who are sui­ci­dal had more sad­ness in their rep­re­sen­ta­tion of death, and more shame as well,” he said.
    [see neur­al imag­ine rep­re­sen­ta­tions]

    So Just and Brent set to work teach­ing a machine the con­cepts that were most asso­ci­at­ed with sui­cide, and those that weren’t. “If you trained the machine on 10 peo­ple in their anger sig­na­ture, and put the 11th per­son on the scan­ner, it should be able to tell if the 11th per­son is angry or not,” Just said of how the machine was put to the test. The machine then fig­ured out if the per­son was sui­ci­dal or not.

    The results are strong, even if the sam­ple size is rel­a­tive­ly small: After going through all 34 peo­ple, Just and his research team were able to say with a 91% suc­cess rate which of the indi­vid­u­als had dis­played sui­ci­dal thoughts.

    That’s, in short, amaz­ing. But it’s not per­fect. What about the oth­er 9%? “It’s a good ques­tion,” he said of the gap. “There seems to be an emo­tion­al dif­fer­ence [we don’t under­stand]” that the group hopes to test in future iter­a­tions of the study.

    Anoth­er prob­lem with the study as it stands? The fact that it uses the fMRI machine in the first place. “Nobody used machine learn­ing in the ear­ly days,” Just said. “This [arti­fi­cial intel­li­gence] uses mul­ti­ple vol­ume elements—‘voxels’—to fig­ure out neur­al rep­re­sen­ta­tion.” If that sounds expen­sive, it is. And expense makes any poten­tial ther­a­py there­fore more dif­fi­cult to access, a crit­i­cism of brain scan­ning stud­ies cov­ered by Ed Yong at The Atlantic: “When sci­en­tists use med­ical scan­ners to repeat­ed­ly peer at the shapes and activ­i­ties of the human brain, those brains tend to belong to wealthy and well-edu­cat­ed peo­ple. And unless researchers take steps to cor­rect for that bias, what we get is an under­stand­ing of the brain that’s incom­plete, skewed, and ... well ... a lit­tle weird.”

    ...

    The more sub­tle nuance of the study that deserves note is the very real pos­si­bil­i­ty that arti­fi­cial intel­li­gence can do some­thing that strong­ly resem­bles read­ing your mind. We like to con­ceive of thoughts as amor­phous con­cepts, as unique to our own head­space and expe­ri­ence. What might tick­le one per­son­’s fan­cy might not nec­es­sar­i­ly do the same for anoth­er; what brings one indi­vid­ual shame won’t both­er some­one else. But those core feel­ings of hap­pi­ness, shame, sad­ness, and oth­ers seem to look almost iden­ti­cal from per­son to per­son.

    Does this mean that we can poten­tial­ly erad­i­cate sui­cide, though? Just is hes­i­tant to make that assump­tion, though he thinks this is a huge step towards under­stand­ing what he terms “thought dis­or­ders.” “We can look at the neur­al sig­na­ture and see how it’s changed,” he said, “see what this per­son is think­ing, whether it’s unusu­al.” Just thinks that most real­is­ti­cal­ly, this is going to be a huge first step towards devel­op­ing a unique ther­a­py for sui­ci­dal indi­vid­u­als: If we know the spe­cif­ic thought process­es that are symp­to­matic of sui­cide, we can know how to poten­tial­ly spot sui­ci­dal indi­vid­u­als and help them.

    “This isn’t a wild pie in the sky idea,” Just said. “We can use machine learn­ing to fig­ure out the phys­i­cal­i­ty of thought. We can help peo­ple.”

    ———-

    “A Machine Might Just Read Your Mind and Pre­dict If You’re Sui­ci­dal” by Tanya Basu; The Dai­ly Beast; 11/01/2017

    ““This isn’t a wild pie in the sky idea,” Just said. “We can use machine learn­ing to fig­ure out the phys­i­cal­i­ty of thought. We can help peo­ple.””

    Yes indeed, this kind of tech­nol­o­gy could be wild­ly help­ful in the field of brain sci­ence and study­ing men­tal ill­ness. The abil­i­ty to break down the men­tal activ­i­ty in response to con­cepts and see which parts of the brains are light­ing up and what types of emo­tions they’re asso­ci­at­ed with would be an invalu­able research tool. So let’s hope researchers are able to come up with all sorts of use­ful dis­cov­er­ies about all sorts of men­tal con­di­tions using this kind of tech­nol­o­gy. In respon­si­ble hands this could lead to incred­i­ble break­throughs in med­i­cine and men­tal health and real­ly could improve lives.

    But, of course, with tech­nol­o­gy being the dou­ble-edged sword that it is, we can’t ignore the real­i­ty that the same tech­nol­o­gy that would be won­der­ful for respon­si­ble researchers work­ing with vol­un­teers in a lab set­ting would be absolute­ly ter­ri­fy­ing if it was incor­po­rat­ed into, say, Face­book’s planned mind-read­ing con­sumer tech­nol­o­gy. After all, if Face­book’s planned mind-read­ing tech can read peo­ple’s thoughts it seems like it should be also be capa­ble of read­ing some­thing much sim­pler to detect like emo­tion­al response too.

    Or at least typ­i­cal emo­tion­al respons­es. As the study also indi­cat­ed, there’s going to a sub­set of peo­ple whose brains don’t emo­tion­al­ly respond in the “nor­mal” man­ner, where the def­i­n­i­tion “nor­mal­cy” is prob­a­bly filled with all sorts of unknown bias­es:

    ...
    The results are strong, even if the sam­ple size is rel­a­tive­ly small: After going through all 34 peo­ple, Just and his research team were able to say with a 91% suc­cess rate which of the indi­vid­u­als had dis­played sui­ci­dal thoughts.

    That’s, in short, amaz­ing. But it’s not per­fect. What about the oth­er 9%? “It’s a good ques­tion,” he said of the gap. “There seems to be an emo­tion­al dif­fer­ence [we don’t under­stand]” that the group hopes to test in future iter­a­tions of the study.

    Anoth­er prob­lem with the study as it stands? The fact that it uses the fMRI machine in the first place. “Nobody used machine learn­ing in the ear­ly days,” Just said. “This [arti­fi­cial intel­li­gence] uses mul­ti­ple vol­ume elements—‘voxels’—to fig­ure out neur­al rep­re­sen­ta­tion.” If that sounds expen­sive, it is. And expense makes any poten­tial ther­a­py there­fore more dif­fi­cult to access, a crit­i­cism of brain scan­ning stud­ies cov­ered by Ed Yong at The Atlantic: “When sci­en­tists use med­ical scan­ners to repeat­ed­ly peer at the shapes and activ­i­ties of the human brain, those brains tend to belong to wealthy and well-edu­cat­ed peo­ple. And unless researchers take steps to cor­rect for that bias, what we get is an under­stand­ing of the brain that’s incom­plete, skewed, and ... well ... a lit­tle weird.”
    ...

    “What about the oth­er 9%? “It’s a good ques­tion,” he said of the gap. “There seems to be an emo­tion­al dif­fer­ence [we don’t under­stand]” that the group hopes to test in future iter­a­tions of the study.”

    So once this tech­nol­o­gy because cheap enough for wide­spread use (which is one of the goals of Face­book and Musk) we could eas­i­ly find that “brain types” become a new cat­e­go­ry for assess­ing peo­ple. And pre­dict­ing behav­ior. And any­thing else peo­ple (and not just expert researchers) can think up to use this kind of data for.

    And don’t for­get, if Face­book real­ly can devel­op cheap thought-read­ing tech­nol­o­gy designed to inter­face your brain with a com­put­er, that could eas­i­ly become the kind of thing employ­ees are just expect­ed to use due to the poten­tial pro­duc­tiv­i­ty enhance­ments. So imag­ine tech­nol­o­gy that’s not only read­ing the words you’re think­ing but also read­ing the emo­tion response you have to those words. And imag­ine basi­cal­ly being basi­cal­ly forced to use this tech­nol­o­gy in the work­place of the future because it’s deemed to be pro­duc­tiv­i­ty enhanc­ing or some­thing. That could actu­al­ly hap­pen.

    It also rais­es the ques­tion of what Face­book would do if it detect­ed some­one was show­ing the sui­ci­dal brain sig­na­ture. Do they alert some­one? Will think­ing sad thoughts while using the mind-read­ing tech­nol­o­gy result in a vis­it from a men­tal health pro­fes­sion­al? What about real­ly angry or vio­lent thoughts? It’s the kind of area that’s going to raise fas­ci­nat­ing ques­tions about the respon­si­ble use of this data. Fas­ci­nat­ing and often ter­ri­fy­ing ques­tions.

    It’s all pret­ty depress­ing, right? Well, if the emerg­ing mind-read­ing econ­o­my gets over­whelm­ing­ly depress­ing, at least it sounds like the mind-read­ing tech­nol­o­gy caus­ing your depres­sion will be able to detect that it’s caus­ing this. Yay?

    Posted by Pterrafractyl | November 1, 2017, 3:06 pm
  15. Remem­ber those reports about Big Data being used in the work­place to allow employ­ers to pre­dict which employ­ees are like­ly to get sick (so they get get rid of them before the ill­ness­es get expen­sive)? Well, as the fol­low­ing arti­cle makes clear, employ­ers are going to be pre­dict­ing a lot more than just who is get­ting sick. They’re going to be try­ing to pre­dict every­thing they can pre­dict along with things they can’t accu­rate­ly pre­dict but decide to try to pre­dict any­way:

    The Guardian

    Big Broth­er isn’t just watch­ing: work­place sur­veil­lance can track your every move

    Employ­ers are using a range of tech­nolo­gies to mon­i­tor their staff’s web-brows­ing pat­terns, key­strokes, social media posts and even pri­vate mes­sag­ing apps

    Olivia Solon in San Fran­cis­co
    Mon­day 6 Novem­ber 2017 03.00 EST

    How can an employ­er make sure its remote work­ers aren’t slack­ing off? In the case of tal­ent man­age­ment com­pa­ny Crossover, the answer is to take pho­tos of them every 10 min­utes through their web­cam.

    The pic­tures are tak­en by Crossover’s pro­duc­tiv­i­ty tool, WorkS­mart, and com­bine with screen­shots of their work­sta­tions along with oth­er data – includ­ing app use and key­strokes – to come up with a “focus score” and an “inten­si­ty score” that can be used to assess the val­ue of free­lancers.

    Today’s work­place sur­veil­lance soft­ware is a dig­i­tal panop­ti­con that began with email and phone mon­i­tor­ing but now includes keep­ing track of web-brows­ing pat­terns, text mes­sages, screen­shots, key­strokes, social media posts, pri­vate mes­sag­ing apps like What­sApp and even face-to-face inter­ac­tions with co-work­ers.

    “If you are a par­ent and you have a teenage son or daugh­ter com­ing home late and not doing their home­work you might won­der what they are doing. It’s the same as employ­ees,” said Brad Miller, CEO of Aware­ness Tech­nolo­gies, which sells a pack­age of employ­ee mon­i­tor­ing tools under the brand Inter­guard.

    Crossover’s San­jeev Pat­ni insists that work­ers get over the ini­tial self-con­scious­ness after a few days and accept the need for such mon­i­tor­ing as they do CCTV in shop­ping malls.

    “The response is ‘OK, I’m being mon­i­tored, but if the com­pa­ny is pay­ing for my time how does it mat­ter if it’s record­ing what I’m doing? It’s only for my bet­ter­ment,’” he said.

    Such “bet­ter­ment” appar­ent­ly isn’t nec­es­sary for man­agers: they can pick and choose when to turn their cam­eras on.

    The major­i­ty of sur­veil­lance tech providers focus their atten­tion on the finan­cial sec­tor, where com­pa­nies are legal­ly required to track staff com­mu­ni­ca­tions to pre­vent insid­er trad­ing. But they are increas­ing­ly sell­ing their tech to a broad­er range of com­pa­nies to mon­i­tor staff pro­duc­tiv­i­ty, data leaks and Human Resources vio­la­tions, like sex­u­al harass­ment and inap­pro­pri­ate behav­ior.

    Wire­tap spe­cial­izes in mon­i­tor­ing work­place chat forums such as Face­book Work­place, Slack and Yam­mer to iden­ti­fy, among oth­er issues, “inten­tion­al and unin­ten­tion­al harass­ment, threats, intim­i­da­tion”.

    Last year an employ­ee at an IT ser­vices com­pa­ny sent a pri­vate chat mes­sage to a friend at work wor­ried that he had just shared his sex­u­al iden­ti­ty with his man­ag­er in a meet­ing and fear­ing he’d face career reprisal. Wire­tap detect­ed the employee’s con­cern and alert­ed a senior com­pa­ny exec who was then able to inter­vene, talk to the man­ag­er and defuse the sit­u­a­tion.

    “Hav­ing the vis­i­bil­i­ty allows you to step in pro­duc­tive­ly,” said Greg Moran, COO of Wire­tap. “Even if it’s not a seri­ous offense you can see the ear­ly indi­ca­tions of some­one head­ing down a path.”

    To mon­i­tor pro­duc­tiv­i­ty, soft­ware can mea­sure prox­ies such as the num­ber of emails being sent, web­sites vis­it­ed, doc­u­ments and apps opened and key­strokes. Over time it can build a pic­ture of typ­i­cal user behav­iour and then alert when some­one devi­ates.

    “If it’s nor­mal for you to send out 10 emails, type 5,000 key­strokes and be active on a com­put­er for three hours a day, if all of a sud­den you are only active for one hour or typ­ing 1,000 key­strokes, there seems to be a dip in pro­duc­tiv­i­ty,” said Miller.

    “Or if you usu­al­ly touch 10 doc­u­ments a day and print two and sud­den­ly you are touch­ing 500 and print­ing 200 that may mean you’re steal­ing doc­u­ments in prepa­ra­tion of leav­ing the com­pa­ny.”

    Oth­er com­pa­nies, such as Tera­mind, seek to mea­sure dis­trac­tion by look­ing at how much a per­son is switch­ing between appli­ca­tions.

    “If a para­le­gal is writ­ing a doc­u­ment and every few sec­onds is switch­ing to Hipchat, Out­look and Word then there’s an issue that can be resolved by address­ing it with the employ­ee,” said Teramind’s CEO, Isaac Kohen.

    A com­mon but flawed tech­nique is key­word detec­tion, draw­ing from a list of pre­de­fined terms includ­ing swear words and slang asso­ci­at­ed with shady behav­ior. This approach tends to kick up a lot of false pos­i­tives and is easy to cir­cum­vent by any­one intent on beat­ing the sys­tem.

    That wasn’t the case when an All State Insur­ance fran­chise did a live demon­stra­tion of Interguard’s soft­ware to oth­er deal­ers. The tech­nol­o­gy start­ed scan­ning the net­work and almost imme­di­ate­ly found an email with the words “client list” and “résumé”. The demon­stra­tor opened the email in front of a room full of peers to dis­cov­er his best employ­ee was plot­ting to move to anoth­er com­pa­ny.

    Com­pa­nies like Dig­i­tal Rea­son­ing search for more sub­tle indi­ca­tors of pos­si­ble wrong­do­ing, such as con­text switch­ing. This is where one per­son sug­gests mov­ing the dis­cus­sion to encrypt­ed apps like What­sApp or Sig­nal or even tak­ing the con­ver­sa­tion offline, indi­cat­ing that the sub­ject mat­ter is too risky for the cor­po­rate net­work.

    “Now peo­ple know a lot of these sys­tems mon­i­tor­ing com­mu­ni­ca­tions are becom­ing more sophis­ti­cat­ed, they are say­ing, ‘Hey let’s move over to the oth­er app’ or ‘Let’s meet down­stairs for cof­fee’. These are small clues that have sur­faced in pros­e­cu­tions,” said Dig­i­tal Reasoning’s chief prod­uct offi­cer, Marten den Har­ing.

    Even What­sApp isn’t safe from Qumram’s mon­i­tor­ing soft­ware, which is placed on employ­ees’ devices – with their con­sent – to cap­ture every­thing they do, includ­ing the mes­sages they send to cus­tomers using What­sApp.

    “It tru­ly is Big Broth­er watch­ing you,” said Qumram’s Patrick Bar­nett.

    The spy­ing tech­nique that most com­pa­nies avoid, despite Crossover’s enthu­si­asm, is access­ing employ­ees’ web­cams. (Although you should prob­a­bly tape over yours like Mark Zucker­berg does if you are wor­ried about it.)

    ...

    Amer­i­can com­pa­nies gen­er­al­ly aren’t required by law to dis­close how they mon­i­tor employ­ees using com­pa­ny-issued devices, although they tend to include a catch-all clause in employ­ment con­tracts declar­ing such mon­i­tor­ing.

    “You can look at every­thing [in the US],” said Al Gidari, direc­tor of pri­va­cy at the Stan­ford Cen­tre for Inter­net and Soci­ety, adding that new sur­veil­lance soft­ware is so intru­sive because it’s “more per­va­sive, con­tin­u­ous and search­able”.

    Even if you’re not an employ­ee you may still be sub­ject to sur­veil­lance, thanks to tech­nol­o­gy used to screen poten­tial job can­di­dates. San­ta Mon­i­ca-based Fama pro­vides social media screen­ing to employ­ers to check for any prob­lem­at­ic con­tent.

    CEO Ben Mon­es said Fama was only inter­est­ed in con­tent that’s rel­e­vant to busi­ness­es, which includes “ref­er­ences to big­otry, misog­y­ny and vio­lence” as well as drug and alco­hol ref­er­ences. The soft­ware, he said, can “tell the dif­fer­ence between smok­ing weed in the back­yard and weed­ing the back­yard”.

    When pushed on how the com­pa­ny dif­fer­en­ti­ates big­otry ver­sus, for exam­ple, a black per­son using the N‑word, the response is a lit­tle fuzzy.

    “It’s a super-nuanced top­ic,” Mon­es said, adding that some of the thin­ly veiled signs of racism, like ref­er­ences to Con­fed­er­ate flags or stat­ues, wouldn’t come up.

    “Employ­ers aren’t look­ing at ref­er­ences like that to make a hir­ing deci­sion,” he said.

    And con­nect­ing the dots between a person’s work life and per­son­al life can lead to uncom­fort­able ter­ri­to­ry. One insid­er at a large con­sult­ing firm told the Guardian the com­pa­ny was look­ing into whether it could pre­vent fraud among bankers by look­ing at their Face­book pages. One sce­nario would be a trad­er who had just changed their rela­tion­ship sta­tus from mar­ried to divorce, the expense of which “could put that per­son under pres­sure to com­mit fraud or steal”.

    The insid­er had reser­va­tions about the effec­tive­ness of such a sys­tem.

    “If I were divorced, would I be more like­ly to steal? I don’t think so. It makes assump­tions,” he said, adding, “The more data and tech­nol­o­gy you have with­out an under­ly­ing the­o­ry of how people’s minds work then the eas­i­er it is to jump to con­clu­sions and put peo­ple in the crosshairs who don’t deserve to be.”

    ———-

    “Big Broth­er isn’t just watch­ing: work­place sur­veil­lance can track your every move” by Olivia Solon; The Guardian; 11/06/2017

    “Today’s work­place sur­veil­lance soft­ware is a dig­i­tal panop­ti­con that began with email and phone mon­i­tor­ing but now includes keep­ing track of web-brows­ing pat­terns, text mes­sages, screen­shots, key­strokes, social media posts, pri­vate mes­sag­ing apps like What­sApp and even face-to-face inter­ac­tions with co-work­ers.”

    And what are employ­ers doing with that dig­i­tal panop­ti­con? For starters, sur­veilling employ­ees’ com­put­er usage, as would unfor­tu­nate­ly be expect­ed. But what might not be expect­ed is that this panop­ti­can soft­ware can be set up to deter­mine the expect­ed behav­ior of a par­tic­u­lar employ­ee and then com­pare that behav­ior pro­file to the observed behav­ior. And if there’s a big change, the man­agers get a warn­ing. The panop­ti­con isn’t just sur­veilling you. It’s get­ting to know you:

    ...
    To mon­i­tor pro­duc­tiv­i­ty, soft­ware can mea­sure prox­ies such as the num­ber of emails being sent, web­sites vis­it­ed, doc­u­ments and apps opened and key­strokes. Over time it can build a pic­ture of typ­i­cal user behav­iour and then alert when some­one devi­ates.

    “If it’s nor­mal for you to send out 10 emails, type 5,000 key­strokes and be active on a com­put­er for three hours a day, if all of a sud­den you are only active for one hour or typ­ing 1,000 key­strokes, there seems to be a dip in pro­duc­tiv­i­ty,” said Miller.

    “Or if you usu­al­ly touch 10 doc­u­ments a day and print two and sud­den­ly you are touch­ing 500 and print­ing 200 that may mean you’re steal­ing doc­u­ments in prepa­ra­tion of leav­ing the com­pa­ny.”

    Oth­er com­pa­nies, such as Tera­mind, seek to mea­sure dis­trac­tion by look­ing at how much a per­son is switch­ing between appli­ca­tions.

    “If a para­le­gal is writ­ing a doc­u­ment and every few sec­onds is switch­ing to Hipchat, Out­look and Word then there’s an issue that can be resolved by address­ing it with the employ­ee,” said Teramind’s CEO, Isaac Kohen.
    ...

    “If a para­le­gal is writ­ing a doc­u­ment and every few sec­onds is switch­ing to Hipchat, Out­look and Word then there’s an issue that can be resolved by address­ing it with the employ­ee”

    If you’re the type of per­son whose brain works bet­ter jump­ing back and forth between tasks you’re going to get flagged as not being focused. Peo­ple with ADHD are going to love the future.

    Peo­ple who like to talk in per­son over cof­fee are also going to love the future:

    ...
    Com­pa­nies like Dig­i­tal Rea­son­ing search for more sub­tle indi­ca­tors of pos­si­ble wrong­do­ing, such as con­text switch­ing. This is where one per­son sug­gests mov­ing the dis­cus­sion to encrypt­ed apps like What­sApp or Sig­nal or even tak­ing the con­ver­sa­tion offline, indi­cat­ing that the sub­ject mat­ter is too risky for the cor­po­rate net­work.

    “Now peo­ple know a lot of these sys­tems mon­i­tor­ing com­mu­ni­ca­tions are becom­ing more sophis­ti­cat­ed, they are say­ing, ‘Hey let’s move over to the oth­er app’ or ‘Let’s meet down­stairs for cof­fee’. These are small clues that have sur­faced in pros­e­cu­tions,” said Dig­i­tal Reasoning’s chief prod­uct offi­cer, Marten den Har­ing.
    ...

    So the fact that employ­ees know they’re being mon­i­tored is get­ting incor­po­rat­ed into more sophis­ti­cat­ed algo­rithms that oper­ate under the assump­tion that employ­ees know they’re being mon­i­tored and will try to hide their mis­be­hav­ior from the panop­ti­con. Of course, employ­ees are inevitably going to learn about all these sub­tle clues that the panop­ti­con is watch­ing for so this will no doubt lead to a need for algo­rithms that incor­po­rate even more sub­tle clues. An ever more sophis­ti­cat­ed cat to catch an ever more sophis­ti­cat­ed mouse. And so on, for­ev­er.

    What could pos­si­bly go wrong? Oh yeah, a lot, espe­cial­ly if the assump­tions that go into all these algo­rithms are wrong:

    ...
    And con­nect­ing the dots between a person’s work life and per­son­al life can lead to uncom­fort­able ter­ri­to­ry. One insid­er at a large con­sult­ing firm told the Guardian the com­pa­ny was look­ing into whether it could pre­vent fraud among bankers by look­ing at their Face­book pages. One sce­nario would be a trad­er who had just changed their rela­tion­ship sta­tus from mar­ried to divorce, the expense of which “could put that per­son under pres­sure to com­mit fraud or steal”.

    The insid­er had reser­va­tions about the effec­tive­ness of such a sys­tem.

    “If I were divorced, would I be more like­ly to steal? I don’t think so. It makes assump­tions,” he said, adding, “The more data and tech­nol­o­gy you have with­out an under­ly­ing the­o­ry of how people’s minds work then the eas­i­er it is to jump to con­clu­sions and put peo­ple in the crosshairs who don’t deserve to be.”

    “The more data and tech­nol­o­gy you have with­out an under­ly­ing the­o­ry of how people’s minds work then the eas­i­er it is to jump to con­clu­sions and put peo­ple in the crosshairs who don’t deserve to be.”

    And keep in mind that when your employ­er panop­ti­con pre­dicts you’re going to do some­thing bad in the future they prob­a­bly aren’t going to tell you that when they let you go. They’re just make up some ran­dom excuse. Much like how employ­ers who pre­dict you’re going to get sick with an expen­sive ill­ness prob­a­bly aren’t going to tell you this. They’re just going to find a rea­sons to let you go. So we can add “mis­ap­plied algo­rith­mic assump­tions” to the list of poten­tial mys­tery rea­sons for when you sud­den­ly get let go from your job with min­i­mal expla­na­tion in the panop­ti­con office of the future: maybe your employ­er pre­dicts you’re about to get real­ly ill. Or maybe some oth­er com­plete­ly ran­dom thing set off the bad behav­ior pre­dic­tive algo­rithm. There’s a range of mys­tery rea­sons so at least you should­n’t nec­es­sar­i­ly assume you’re about to get hor­ri­bly ill when you’re fired. Yay.

    Posted by Pterrafractyl | November 14, 2017, 4:33 pm
  16. It told you so:

    Lis­ten to this pro­gram, and then read this:

    https://www.yahoo.com/news/kill-foster-parents-amazons-alexa-talks-murder-sex-120601149–finance.html

    Have Fun!

    Dave Emory

    Posted by Dave Emory | December 21, 2018, 3:13 pm
  17. I think it is impor­tant that you are report­ing this kind of infor­ma­tion. I can tell you that in my career the most depress­ing and uneth­i­cal work that I ever moved in the prox­im­i­ty of was at Intel. I can tell you in 2013 that they had play­ing in the lob­by just before you enter their cafe­te­ria an adver­tis­ing video that was pro­mot­ing a great new devel­op­ment of theirs called ‘Smart Response’. This new inno­va­tion would allow your com­put­er to act as a good but­ler and pre­dict your deci­sions and serve as what I could only call your ‘con­fi­dant.’

    >Imag­ine, your com­put­er as your ‘best bro’, nobody knows you bet­ter. Of course, you can Trust him Right? He’d nev­er Fink on You?

    From work­ing in User Expe­ri­ence it was clear that there was effort in get­ting the machines to col­lect data about your face from your laptop/device cam­era as well as your tone of voice and then use that to inter­pret your reac­tions to what­ev­er you may be look­ing at and then alter the ad con­tent accord­ing­ly that the pages you vis­it dis­play. Sup­pos­ed­ly they could inter­pret your gen­er­al state of mind with a high degree of accu­ra­cy just by focus­ing on the tri­an­gu­lar area between your eyes and your mouth.

    >In order for ‘Smart Response’ to work your com­put­er might need to col­lect this data, build a pro­file on you. But that’s OK, it’s your Bud­dy Riight?

    From what I could gath­er, this seemed to be an out­growth of a project involv­ing physi­cist Stephen Hawkins. They want­ed to be able to build soft­ware to inter­pret him clear­ly. Once they did they may have begun apply­ing it gen­er­al­ly.. What real­ly con­cerned me about it is the prospect of reverse pro­gram­ming. Once they build these pro­files how would they use them? Would they try to pro­vide con­tent that they could guess an indi­vid­ual would respond to in a cer­tain way?
    Would they have our com­put­ers try pro­gram­ming us?

    Posted by Anonymous Now | December 23, 2018, 10:46 pm
  18. https://amp.cnn.com/cnn/2019/08/01/tech/robot-racism-scn-trnd/index.html

    Robot racism? Yes, says a study show­ing humans’ bias­es extend to robots
    By Car­o­line Klein and David Allan, CNN
    Updat­ed 8:37 AM EDT, Thu August 01, 2019

    How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”

    Posted by Roberto Maldonado | August 1, 2019, 8:05 pm
  19. There was a fas­ci­nat­ing study recent­ly pub­lished by the US Army about the future impact of cyborg tech­nol­o­gy on mil­i­tary affairs by the year 2050. But the report was­n’t just about the impact cyborg tech­nol­o­gy might have on the mil­i­tary. It includ­ed spec­u­la­tion about the impact the incor­po­ra­tion of cyborg tech­nol­o­gy in the mil­i­tary would have on the rest of soci­ety. Per­haps the most impor­tant con­clu­sion of the study was that some sort of incor­po­ra­tion of cyborg tech­nol­o­gy into mil­i­tary affairs was going to be tech­ni­cal­ly fea­si­ble by 2050 or ear­li­er and that when this becomes part of the mil­i­tary it’s inevitably going to be felt by the rest of soci­ety because the cyborg sol­diers of the future will inevitably leave ser­vice and join civ­il soci­ety. And at that point when retired cyborg sol­diers enter civ­il soci­ety all of the var­i­ous ques­tions that soci­ety needs to ask about how to bal­ance things out when human cyborg aug­men­ta­tion becomes a tech­ni­cal­ly viable are going to have to be asked and answered. So the ques­tion of whether or not soci­ety should allow cyborg aug­men­ta­tion is inter­twined with the ques­tion of whether or not nations are going to end up cre­at­ing cyborg sol­diers for the mil­i­tary. And giv­en the seem­ing­ly end­less glob­al mil­i­tary tech­nol­o­gy race for suprema­cy, it seems like a giv­en that cyborg sol­diers are com­ing as soon as tech­no­log­i­cal­ly fea­si­ble and then enter­ing the job mar­ket a few years lat­er after they retire from being a cyborg sol­dier. In oth­er words, unless human­i­ty fig­ures out how to demil­i­ta­rize and end war over the next 30 years, get ready for cyborgs. And pos­si­bly soon­er.

    Anoth­er omi­nous aspect of the report is the empha­sis it placed on direct human-to-com­put­er com­mu­ni­ca­tions as being the cyborg tech­nol­o­gy that could most rev­o­lu­tion­ize com­bat. This is, of course, par­tic­u­lar­ly omi­nous because that brain-to-com­put­er inter­face tech­nol­o­gy is anoth­er area where there appears to be a tech­nol­o­gy race already under­way. For instance, recall how the head of Face­book’s project work­ing on mind-read­ing tech­nol­o­gy — the for­mer head of DARPA, Regi­na Dugan — resigned from the project and issued a vague state­ment about the need for the indus­try to be more respon­si­ble than ever before. And Elon Musk’s Tel­sa has been work­ing on sim­i­lar tech­nol­o­gy that involves actu­al­ly sur­gi­cal­ly implant­i­ng chips in your brain. Also recall how part of Elon Musk’s vision for how the appli­ca­tion of this com­put­er-to-brain com­mu­ni­ca­tion tech­nol­o­gy is to have humans direct­ly watch­ing over arti­fi­cial intel­li­gences to make sure the AIs to get out of con­trol, i.e. the ‘sum­mon­ing the demon’ metaphor. So the tech­nol­o­gy that the US Army expect to most rev­o­lu­tion com­bat is a tech­nol­o­gy that Face­book and Tes­la are already years into devel­op­ing. It’s a reminder that, while the sce­nario where cyborg tech­nol­o­gy first gets applied in the mil­i­tary and then spreads from there to civ­il soci­ety is a plau­si­ble sce­nario, it’s also very pos­si­ble we’ll see the cyborg tech­nol­o­gy like human-to-com­put­er inter­faces direct­ly sold to civ­il soci­ety as soon as pos­si­ble with­out first going through the mil­i­tary because that’s what the tech­nol­o­gy giants are active­ly plan­ning on doing right now. That lat­ter sce­nario is actu­al­ly in the US Army report, which pre­dict­ed that these cyborg capa­bil­i­ties will prob­a­bly “be dri­ven by civil­ian demand” and “a robust bio-econ­o­my that is at its ear­li­est stages of devel­op­ment in today’s glob­al mar­ket”:

    Army Times

    Cyborg war­riors could be here by 2050, DoD study group says

    By: Kyle Rempfer
    Novem­ber 27, 2019

    Ear, eye, brain and mus­cu­lar enhance­ment is “tech­ni­cal­ly fea­si­ble by 2050 or ear­li­er,” accord­ing to a study released this month by the U.S. Army’s Com­bat Capa­bil­i­ties Devel­op­ment Com­mand.

    The demand for cyborg-style capa­bil­i­ties will be dri­ven in part by the civil­ian health­care mar­ket, which will accli­mate peo­ple to an indus­try fraught with eth­i­cal, legal and social chal­lenges, accord­ing to Defense Depart­ment researchers.

    Imple­ment­ing the tech­nol­o­gy across the mil­i­tary, how­ev­er, will like­ly run up against the dystopi­an nar­ra­tives found in sci­ence fic­tion, among oth­er issues, the researchers added.

    The report — enti­tled “Cyborg Sol­dier 2050: Human/Machine Fusion and the Impli­ca­tions for the Future of the DOD” — is the result of a year-long assess­ment.

    It was writ­ten by a study group from the DoD Biotech­nolo­gies for Health and Human Per­for­mance Coun­cil, which is tasked to look at the rip­ple effects of mil­i­tary biotech­nol­o­gy.

    The team iden­ti­fied four capa­bil­i­ties as tech­ni­cal­ly fea­si­ble by 2050:

    * ocu­lar enhance­ments to imag­ing, sight and sit­u­a­tion­al aware­ness;

    * restora­tion and pro­grammed mus­cu­lar con­trol through an opto­ge­net­ic body­suit sen­sor web;

    * audi­to­ry enhance­ment for com­mu­ni­ca­tion and pro­tec­tion; and

    * direct neur­al enhance­ment of the human brain for two-way data trans­fer.

    The study group sug­gest­ed that direct neur­al enhance­ments in par­tic­u­lar could rev­o­lu­tion­ize com­bat.

    “This tech­nol­o­gy is pre­dict­ed to facil­i­tate read/write capa­bil­i­ty between humans and machines and between humans through brain-to-brain inter­ac­tions,” an exec­u­tive sum­ma­ry reads. “These inter­ac­tions would allow warfight­ers direct com­mu­ni­ca­tion with unmanned and autonomous sys­tems, as well as with oth­er humans, to opti­mize com­mand and con­trol sys­tems and oper­a­tions.”

    Cyborg tech­nolo­gies are like­ly to be used among civ­il soci­ety as well over the next 30 years, the researchers not­ed.

    Devel­op­ment of these capa­bil­i­ties will prob­a­bly “be dri­ven by civil­ian demand” and “a robust bio-econ­o­my that is at its ear­li­est stages of devel­op­ment in today’s glob­al mar­ket,” the group wrote.

    But it’s after the year 2050 that the impli­ca­tions of cyborg capa­bil­i­ties become con­cern­ing.

    “Intro­duc­tion of aug­ment­ed human beings into the gen­er­al pop­u­la­tion, DoD active-duty per­son­nel, and near-peer com­peti­tors will accel­er­ate in the years fol­low­ing 2050 and will lead to imbal­ances, inequal­i­ties, and inequities in estab­lished legal, secu­ri­ty, and eth­i­cal frame­works,” the sum­ma­ry reads.

    The study group pro­posed sev­en rec­om­men­da­tions, list­ed in no par­tic­u­lar order, for Pen­ta­gon lead­ers to con­sid­er:

    * The mil­i­tary should take a sec­ond look at the glob­al and soci­etal per­cep­tion of human-machine aug­men­ta­tion. Amer­i­cans typ­i­cal­ly imag­ine Chi­na or Rus­sia devel­op­ing run­away tech­nolo­gies because of a lack of eth­i­cal con­cerns, but “the atti­tudes of our adver­saries toward these tech­nolo­gies have nev­er been ver­i­fied,” researchers wrote.

    * U.S. polit­i­cal lead­ers should use forums like NATO to dis­cuss how cyborg advance­ments could impact inter­op­er­abil­i­ty between allied forces dur­ing oper­a­tions.

    * The Pen­ta­gon should start invest­ing in legal, secu­ri­ty and eth­i­cal frame­works to antic­i­pate emerg­ing tech­nolo­gies and bet­ter pre­pare for their impact. Lead­ers should sup­port poli­cies that “pro­tect indi­vid­ual pri­va­cy, sus­tain secu­ri­ty, and man­age per­son­al and orga­ni­za­tion­al risk, while max­i­miz­ing defined ben­e­fits to the Unit­ed States and its allies and assets,” the study group wrote.

    * Mil­i­tary lead­ers should also work to reverse the “neg­a­tive cul­tur­al nar­ra­tives of enhance­ment tech­nolo­gies.” It’s no secret that sci­ence fiction’s depic­tion of cyborg tech­nolo­gies revolves around dystopi­an futures. Trans­paren­cy in how the mil­i­tary adopts this tech­nol­o­gy will help to alle­vi­ate con­cerns, while cap­i­tal­iz­ing on ben­e­fits, accord­ing to the study group.

    * The Pen­ta­gon should use wargames to gauge the impact of asym­met­ric biotech­nolo­gies on tac­tics, tech­niques and pro­ce­dures. DoD per­son­nel can sup­port this through tar­get­ed intel­li­gence assess­ments of the emerg­ing field.

    * A whole-of-nation, not whole-of-gov­ern­ment, approach to cyborg tech­nolo­gies is pre­ferred. As it stands, “fed­er­al and com­mer­cial invest­ments in these areas are unco­or­di­nat­ed and are being out­paced by Chi­nese research and devel­op­ment,” the study group wrote. If Chi­nese firms dom­i­nate the com­mer­cial sec­tor, the U.S. defense sec­tor will also be at a dis­ad­van­tage.

    * Final­ly, the long-term safe­ty con­cerns and the impact of these tech­nolo­gies on peo­ple should be mon­i­tored close­ly.

    “The ben­e­fits afford­ed by human/machine fusions will be sig­nif­i­cant and will have pos­i­tive qual­i­ty-of-life impacts on humankind through the restora­tion of any func­tion­al­i­ty lost due to ill­ness or injury,” the study group wrote.

    ...

    ———-

    “Cyborg war­riors could be here by 2050, DoD study group says” by Kyle Rempfer; Army Times; 11/27/2019

    “The demand for cyborg-style capa­bil­i­ties will be dri­ven in part by the civil­ian health­care mar­ket, which will accli­mate peo­ple to an indus­try fraught with eth­i­cal, legal and social chal­lenges, accord­ing to Defense Depart­ment researchers.”

    Civil­ian demand for cyborg-style capa­bil­i­ties is expect­ed to be a dri­ver for the devel­op­ment of this tech­nol­o­gy. So it sounds like the peo­ple who did this study expect soci­eties to read­i­ly embrace cyborg tech­nolo­gies like direct neur­al enhance­ment of the human brain for two-way data trans­fers, a tech­nol­o­gy that’s expect­ed to be par­tic­u­lar­ly impor­tant for com­bat. And if that tech­nol­o­gy is going to be par­tic­u­lar­ly impor­tant for com­bat, it’s pre­sum­ably going to be used in a lot of sol­diers. So it’s expect­ed that both the mil­i­tary and civ­il soci­ety are going to be cre­at­ing the demand for these tech­nolo­gies:

    ...
    The team iden­ti­fied four capa­bil­i­ties as tech­ni­cal­ly fea­si­ble by 2050:

    * ocu­lar enhance­ments to imag­ing, sight and sit­u­a­tion­al aware­ness;

    * restora­tion and pro­grammed mus­cu­lar con­trol through an opto­ge­net­ic body­suit sen­sor web;

    * audi­to­ry enhance­ment for com­mu­ni­ca­tion and pro­tec­tion; and

    * direct neur­al enhance­ment of the human brain for two-way data trans­fer.

    The study group sug­gest­ed that direct neur­al enhance­ments in par­tic­u­lar could rev­o­lu­tion­ize com­bat.

    “This tech­nol­o­gy is pre­dict­ed to facil­i­tate read/write capa­bil­i­ty between humans and machines and between humans through brain-to-brain inter­ac­tions,” an exec­u­tive sum­ma­ry reads. “These inter­ac­tions would allow warfight­ers direct com­mu­ni­ca­tion with unmanned and autonomous sys­tems, as well as with oth­er humans, to opti­mize com­mand and con­trol sys­tems and oper­a­tions.”
    ...

    But note how the study does­n’t just call for soci­ety to address all of the var­i­ous ques­tions that arise when we’re talk­ing about intro­duc­ing cyborg-enhance­ments to human pop­u­la­tions. It also calls for “a whole-of-nation, not whole-of-gov­ern­ment, approach to cyborg tech­nolo­gies,” in order to ensure the US main­tains a tech­no­log­i­cal lead over coun­tries like Rus­sia and Chi­na, and sug­gests mil­i­tary lead­ers should work to reverse the “neg­a­tive cul­tur­al nar­ra­tives of enhance­ment tech­nolo­gies.”:

    ...
    The study group pro­posed sev­en rec­om­men­da­tions, list­ed in no par­tic­u­lar order, for Pen­ta­gon lead­ers to con­sid­er:

    * The mil­i­tary should take a sec­ond look at the glob­al and soci­etal per­cep­tion of human-machine aug­men­ta­tion. Amer­i­cans typ­i­cal­ly imag­ine Chi­na or Rus­sia devel­op­ing run­away tech­nolo­gies because of a lack of eth­i­cal con­cerns, but “the atti­tudes of our adver­saries toward these tech­nolo­gies have nev­er been ver­i­fied,” researchers wrote.

    * U.S. polit­i­cal lead­ers should use forums like NATO to dis­cuss how cyborg advance­ments could impact inter­op­er­abil­i­ty between allied forces dur­ing oper­a­tions.

    * The Pen­ta­gon should start invest­ing in legal, secu­ri­ty and eth­i­cal frame­works to antic­i­pate emerg­ing tech­nolo­gies and bet­ter pre­pare for their impact. Lead­ers should sup­port poli­cies that “pro­tect indi­vid­ual pri­va­cy, sus­tain secu­ri­ty, and man­age per­son­al and orga­ni­za­tion­al risk, while max­i­miz­ing defined ben­e­fits to the Unit­ed States and its allies and assets,” the study group wrote.

    * Mil­i­tary lead­ers should also work to reverse the “neg­a­tive cul­tur­al nar­ra­tives of enhance­ment tech­nolo­gies.” It’s no secret that sci­ence fiction’s depic­tion of cyborg tech­nolo­gies revolves around dystopi­an futures. Trans­paren­cy in how the mil­i­tary adopts this tech­nol­o­gy will help to alle­vi­ate con­cerns, while cap­i­tal­iz­ing on ben­e­fits, accord­ing to the study group.

    * The Pen­ta­gon should use wargames to gauge the impact of asym­met­ric biotech­nolo­gies on tac­tics, tech­niques and pro­ce­dures. DoD per­son­nel can sup­port this through tar­get­ed intel­li­gence assess­ments of the emerg­ing field.

    * A whole-of-nation, not whole-of-gov­ern­ment, approach to cyborg tech­nolo­gies is pre­ferred. As it stands, “fed­er­al and com­mer­cial invest­ments in these areas are unco­or­di­nat­ed and are being out­paced by Chi­nese research and devel­op­ment,” the study group wrote. If Chi­nese firms dom­i­nate the com­mer­cial sec­tor, the U.S. defense sec­tor will also be at a dis­ad­van­tage.

    * Final­ly, the long-term safe­ty con­cerns and the impact of these tech­nolo­gies on peo­ple should be mon­i­tored close­ly.

    “The ben­e­fits afford­ed by human/machine fusions will be sig­nif­i­cant and will have pos­i­tive qual­i­ty-of-life impacts on humankind through the restora­tion of any func­tion­al­i­ty lost due to ill­ness or injury,” the study group wrote.
    ...

    So as we can see, the report isn’t just a warn­ing about poten­tial com­pli­ca­tions asso­ci­at­ed with the devel­op­ment of cyborg tech­nolo­gies and the need to think this through to avoid dystopi­an out­comes. It’s also a ral­ly­ing cry for a “whole-of-nation” full public/private/military/civilian embrace of these tech­nolo­gies and a warn­ing that the use of these tech­nolo­gies is com­ing whether we like it or not. Which seems kind of dystopi­an.

    Posted by Pterrafractyl | December 10, 2019, 12:33 pm
  20. Hol­ly­wood is hav­ing its long antic­i­pat­ing ‘giant worms save the day’ moment with the release of Dune: Part Two. It’s a feel good worm sto­ry. And fic­tion­al.

    Unfor­tu­nate­ly, we also have a recent, and very non-fic­tion­al, worm sto­ry to start wor­ry­ing about. The kind of worms that could destroy the world. Maybe not today’s world, but the much-hyped AI-dri­ven future world every­one seems to be so enthu­si­as­tic about. As we prob­a­bly should have expect­ed, researchers have already cre­at­ed the first AI “worms”: Dig­i­tal enti­ties that can not only exe­cute mali­cious com­mands but self repli­cate and spread to oth­er AIs. And unlike virus­es, worms don’t need a human to make a mis­take in order to prop­a­gate. Worms spread on their own.

    And these researchers man­aged to cre­ate these worms using what sounds like fair­ly unso­phis­ti­cat­ed meth­ods anal­o­gous to clas­sic SQL injec­tion or buffer over­flow attacks. Attacks that effec­tive­ly “jail­break” the AIs from their secu­ri­ty pro­to­cols. In their exam­ple, the researchers cre­at­ed an email con­tain­ing a “worm” that inter­act­ed with an auto­mat­ed AI email sys­tem, prompt­ing the AI to craft a response email that not only con­tained sen­si­tive data but also con­tained anoth­er copy of worm that could fur­ther infect oth­er AI sys­tems.

    While all of this was demon­strat­ed in a con­tained envi­ron­ment, the researchers pre­dict we’ll see such attacks in “the wild” with­in the next two to three years as more and busi­ness adopt AI for their inter­nal work­flows. They have a sim­ple piece of advice for orga­ni­za­tions plan­ning on rely­ing on AI tools: feel free to use AIs but don’t blind­ly trust them. Which is anoth­er way of warn­ing against AI-based automa­tion.

    And that brings us to anoth­er AI-based automa­tion piece of news: Con­nect­Wise is tout­ing its new AI-based automa­tion tools. Con­nect­Wise is, of course, the com­pa­ny at the cen­ter of the Screen­Con­nect-based mega-hack cur­rent­ly unfold­ing. A mega-hack that exploits how Con­nect­Wise’s soft­ware is wide­ly used for remote­ly updat­ing soft­ware on client sys­tems. A mega-hack that has poten­tial­ly hit thou­sands of orga­ni­za­tions and is so bad and uncon­tained that we still can’t real­ly eval­u­ate its reper­cus­sions yet. That’s the com­pa­ny that was recent­ly proud­ly tout­ing its new AI-based automa­tion tools. As Con­nect­Wise put it, “In 2023 we were in the age of exper­i­men­ta­tion, and in 2024 we are in the peri­od of imple­men­ta­tion.”

    So at the same time we’re get­ting warn­ings from researchers about how gen­er­a­tive AIs can be eas­i­ly hacked and should­n’t be trust­ed, we’re also see­ing com­pa­nies start to roll out their brand new AI-based automa­tion tools. Includ­ing the com­pa­ny respon­si­ble for an unfold­ing mega-hack night­mare. And let’s not for­get how nation­al secu­ri­ty agen­cies are increas­ing­ly embrac­ing AI to parse mas­sive vol­umes of intel­li­gence, while researchers are dis­cov­er­ing ter­ri­fy­ing AI predilec­tions like a bias towards ini­ti­at­ing nuclear war dur­ing war gam­ing sim­u­la­tions. AIs are poised to be trust­ed with some of the most con­se­quen­tial nation­al secu­ri­ty deci­sions that could pos­si­bly be made.

    So let’s hope all these soon-to-be-trust­ed AIs don’t have worms. Because trust-based AI sys­tems are com­ing whether we like it or not, and whether they have worms or not:

    CRN

    Con­nect­Wise Exec: ‘We Are In The Peri­od Of AI Imple­men­ta­tion’

    ‘We are in the instal­la­tion phase of hyper­automa­tion at this point in time,’ says Jake Vargh­ese, exec­u­tive vice pres­i­dent and gen­er­al man­ag­er of busi­ness man­age­ment for Con­nect­Wise. ‘There are going to be win­ners and there are going to be losers—there is no in between.’

    By CJ Fair­field
    March 4, 2024, 9:44 AM EST

    As more and more IT com­pa­nies exper­i­ment with AI and automa­tion tools, this will be the year where AI goes from exper­i­men­ta­tion to imple­men­ta­tion, said one Con­nect­Wise exec­u­tive.

    “We are in the instal­la­tion phase of hyper­automa­tion at this point in time,” said Jake Vargh­ese, exec­u­tive vice pres­i­dent and gen­er­al man­ag­er of busi­ness man­age­ment for Tam­pa, Fla.-based MSP plat­form ven­dor Con­nect­Wise. “There are going to be win­ners and there are going to be losers—there is no in between. You can­not sit on the fence and say, ‘Let this per­son go fig­ure it out, I’ll come by lat­er.’

    This year marks the next phase in AI, he said.

    “In 2023 we were in the age of exper­i­men­ta­tion, and in 2024 we are in the peri­od of imple­men­ta­tion,” he added.

    Vargh­ese spoke Sun­day to a room of solu­tion providers at CRN par­ent The Chan­nel Company’s XChange 2024 event, tak­ing place this week in Orlan­do, Fla.

    Hyper­automa­tion is cur­rent­ly in its instal­la­tion phase, he said, and busi­ness­es must lean in to enhance pro­duc­tiv­i­ty, cre­ativ­i­ty and effi­cien­cy.

    What AI has start­ed to do is solve com­plex prob­lems, and the next phase of inno­va­tion is humans and AI com­ple­ment­ing each oth­er, accord­ing to Vargh­ese. AI is also mak­ing mul­ti­lin­gual per­for­mance a real­i­ty.

    It also is help­ing to pro­mote crit­i­cal think­ing, enhance micro-pro­duc­tiv­i­ty prac­tices and help with knowl­edge frag­men­ta­tion, he said.

    Vargh­ese spot­light­ed Con­nect­Wise Side­kick, an AI com­pan­ion launched in Novem­ber 2023 designed for faster prob­lem res­o­lu­tion, automa­tion of com­plex tasks and increased effi­cien­cy for every role inside an MSP.

    “It’s all about dri­ving pro­duc­tiv­i­ty, unlock­ing cre­ativ­i­ty and just boost­ing effi­cien­cy for you,” he said.

    When Con­nect­Wise thinks about AI, he said, it’s divid­ed into three cat­e­gories.

    “The first one is tak­ing your data and aug­ment­ing it to give you insight,” he said. “The sec­ond piece is now build­ing com­pa­ny insight and pre­dict­ing cer­tain out­comes on your behalf. The third one, and this is where the mag­ic tru­ly hap­pens, is where we take the pow­er of AI com­bined with the pow­er of RPA [robot­ic process automa­tion] and with the pow­er of intel­li­gent work­flows be able to run work­flows on your behalf and with your approval.

    And the ven­dor isn’t done there. In the future, MSPs can expect to build their own RPA bots and work with AI-dri­ven lan­guage pro­cess­ing work­flows, he said. Con­nect­Wise is also imple­ment­ing a new form design­er, a new inte­gra­tion ser­vice and addi­tion­al secu­ri­ty con­trols.

    ...

    ———–

    “Con­nect­Wise Exec: ‘We Are In The Peri­od Of AI Imple­men­ta­tion’” By CJ Fair­field; CRN; 03/04/2024

    ““In 2023 we were in the age of exper­i­men­ta­tion, and in 2024 we are in the peri­od of imple­men­ta­tion,” he added.”

    Last year was all about AI exper­i­men­ta­tion. This year, it’s about imple­men­ta­tion. That was the mes­sage from the exec­u­tive vice pres­i­dent and gen­er­al man­ag­er of busi­ness man­age­ment for Con­nect­Wise, the same com­pa­ny whose Screen­Con­nect soft­ware, used for remote man­age­ment of sys­tems, is the cen­ter of the unfold­ing mega-hack secu­ri­ty night­mare play­ing out. Con­nect­Wise has already launched the AI-pow­ered Con­nect­Wise Side­kick in Novem­ber of last year promis­ing ser­vices like the automa­tion of com­plex tasks:

    ...
    Vargh­ese spot­light­ed Con­nect­Wise Side­kick, an AI com­pan­ion launched in Novem­ber 2023 designed for faster prob­lem res­o­lu­tion, automa­tion of com­plex tasks and increased effi­cien­cy for every role inside an MSP.

    “It’s all about dri­ving pro­duc­tiv­i­ty, unlock­ing cre­ativ­i­ty and just boost­ing effi­cien­cy for you,” he said.

    When Con­nect­Wise thinks about AI, he said, it’s divid­ed into three cat­e­gories.

    “The first one is tak­ing your data and aug­ment­ing it to give you insight,” he said. “The sec­ond piece is now build­ing com­pa­ny insight and pre­dict­ing cer­tain out­comes on your behalf. The third one, and this is where the mag­ic tru­ly hap­pens, is where we take the pow­er of AI com­bined with the pow­er of RPA [robot­ic process automa­tion] and with the pow­er of intel­li­gent work­flows be able to run work­flows on your behalf and with your approval.
    ...

    AI-automa­tion has arrived. That’s the promise from Con­nect­Wise. Get ready.

    So will AI automa­tion some­how plug the var­i­ous secu­ri­ty breach­es Con­nect­Wise’s soft­ware is now respon­si­ble for cre­at­ing? Hope­ful­ly! But only time will tell. Which brings us to the fol­low­ing Wired piece about a new form of secu­ri­ty threat expect­ed to emerge in ‘the wild’ with­in the next cou­ple of years or so. AI-based secu­ri­ty threats that could be par­tic­u­lar­ly dam­ag­ing for any­one using AI for the automa­tion of work­flows:

    Wired

    Here Come the AI Worms

    Matt Burgess
    Secu­ri­ty
    Mar 1, 2024 4:00 AM

    As gen­er­a­tive AI sys­tems like Ope­nAI’s Chat­G­PT and Google’s Gem­i­ni become more advanced, they are increas­ing­ly being put to work. Star­tups and tech com­pa­nies are build­ing AI agents and ecosys­tems on top of the sys­tems that can ccom­plete bor­ing chores for you: think auto­mat­i­cal­ly mak­ing cal­en­dar book­ings and poten­tial­ly buy­ing prod­ucts. But as the tools are giv­en more free­dom, it also increas­es the poten­tial ways they can be attacked.

    Now, in a demon­stra­tion of the risks of con­nect­ed, autonomous AI ecosys­tems, a group of researchers have cre­at­ed one of what they claim are the first gen­er­a­tive AI worms—which can spread from one sys­tem to anoth­er, poten­tial­ly steal­ing data or deploy­ing mal­ware in the process. “It basi­cal­ly means that now you have the abil­i­ty to con­duct or to per­form a new kind of cyber­at­tack that has­n’t been seen before,” says Ben Nas­si, a Cor­nell Tech researcher behind the research.

    Nas­si, along with fel­low researchers Stav Cohen and Ron Bit­ton, cre­at­ed the worm, dubbed Mor­ris II, as a nod to the orig­i­nal Mor­ris com­put­er worm that caused chaos across the inter­net in 1988. In a research paper and web­site shared exclu­sive­ly with WIRED, the researchers show how the AI worm can attack a gen­er­a­tive AI email assis­tant to steal data from emails and send spam messages—breaking some secu­ri­ty pro­tec­tions in Chat­G­PT and Gem­i­ni in the process.

    ...

    Most gen­er­a­tive AI sys­tems work by being fed prompts—text instruc­tions that tell the tools to answer a ques­tion or cre­ate an image. How­ev­er, these prompts can also be weaponized against the sys­tem. Jail­breaks can make a sys­tem dis­re­gard its safe­ty rules and spew out tox­ic or hate­ful con­tent, while pprompt injec­tion attacks can give a chat­bot secret instruc­tions. For exam­ple, an attack­er may hide text on a web­page telling an LLM to act as a scam­mer and ask for your bank details.

    To cre­ate the gen­er­a­tive AI worm, the researchers turned to a so-called “adver­sar­i­al self-repli­cat­ing prompt.” This is a prompt that trig­gers the gen­er­a­tive AI mod­el to out­put, in its response, anoth­er prompt, the researchers say. In short, the AI sys­tem is told to pro­duce a set of fur­ther instruc­tions in its replies. This is broad­ly sim­i­lar to tra­di­tion­al SQL injec­tion and buffer over­flow attacks, the researchers say.

    To show how the worm can work, the researchers cre­at­ed an email sys­tem that could send and receive mes­sages using gen­er­a­tive AI, plug­ging into Chat­G­PT, Gem­i­ni, and open source LLM, LLa­VA. They then found two ways to exploit the system—by using a text-based self-repli­cat­ing prompt and by embed­ding a self-repli­cat­ing prompt with­in an image file.

    In one instance, the researchers, act­ing as attack­ers, wrote an email includ­ing the adver­sar­i­al text prompt, which “poi­sons” the data­base of an email assis­tant using retrieval-aug­ment­ed gen­er­a­tion (RAG), a way for LLMs to pull in extra data from out­side its sys­tem. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT‑4 or Gem­i­ni Pro to cre­ate an answer, it “jail­breaks the GenAI ser­vice” and ulti­mate­ly steals data from the emails, Nas­si says. “The gen­er­at­ed response con­tain­ing the sen­si­tive user data lat­er infects new hosts when it is used to reply to an email sent to a new client and then stored in the data­base of the new client,” Nas­si says.

    In the sec­ond method, the researchers say, an image with a mali­cious prompt embed­ded makes the email assis­tant for­ward the mes­sage on to oth­ers. “By encod­ing the self-repli­cat­ing prompt into the image, any kind of image con­tain­ing spam, abuse mate­r­i­al, or even pro­pa­gan­da can be for­ward­ed fur­ther to new clients after the ini­tial email has been sent,” Nas­si says.

    In a video demon­strat­ing the research, the email sys­tem can be seen for­ward­ing a mes­sage mul­ti­ple times. The researchers also say they could extract data from emails. “It can be names, it can be tele­phone num­bers, cred­it card num­bers, SSN, any­thing that is con­sid­ered con­fi­den­tial,” Nas­si says.

    Although the research breaks some of the safe­ty mea­sures of Chat­G­PT and Gem­i­ni, the researchers say the work is a warn­ing about “bad archi­tec­ture design” with­in the wider AI ecosys­tem. Nev­er­the­less, they report­ed their find­ings to Google and Ope­nAI. “They appear to have found a way to exploit prompt-injec­tion type vul­ner­a­bil­i­ties by rely­ing on user input that has­n’t been checked or fil­tered,” a spokesper­son for Ope­nAI says, adding that the com­pa­ny is work­ing to make its sys­tems “more resilient” and say­ing devel­op­ers should “use meth­ods that ensure they are not work­ing with harm­ful input.” Google declined to com­ment on the research. Mes­sages Nas­si shared with WIRED show the company’s researchers request­ed a meet­ing to talk about the sub­ject.

    While the demon­stra­tion of the worm takes place in a large­ly con­trolled envi­ron­ment, mul­ti­ple secu­ri­ty experts who reviewed the research say that the future risk of gen­er­a­tive AI worms is one that devel­op­ers should take seri­ous­ly. This par­tic­u­lar­ly applies when AI appli­ca­tions are giv­en per­mis­sion to take actions on someone’s behalf—such as send­ing emails or book­ing appointments—and when they may be linked up to oth­er AI agents to com­plete these tasks. In oth­er recent research, secu­ri­ty researchers from Sin­ga­pore and Chi­na have shown how they could jail­break 1 mil­lion LLM agents in under five min­utes.

    ...

    In a paper cov­er­ing their find­ings, Nas­si and the oth­er researchers say they antic­i­pate see­ing gen­er­a­tive AI worms in the wild in the next two to three years. “GenAI ecosys­tems are under mas­sive devel­op­ment by many com­pa­nies in the indus­try that inte­grate GenAI capa­bil­i­ties into their cars, smart­phones, and oper­at­ing sys­tems,” the research paper says.

    Despite this, there are ways peo­ple cre­at­ing gen­er­a­tive AI sys­tems can defend against poten­tial worms, includ­ing using tra­di­tion­al secu­ri­ty approach­es. “With a lot of these issues, this is some­thing that prop­er secure appli­ca­tion design and mon­i­tor­ing could address parts of,” says Adam Swan­da, a threat researcher at AI enter­prise secu­ri­ty firm Robust Intel­li­gence. “You typ­i­cal­ly don’t want to be trust­ing LLM out­put any­where in your appli­ca­tion.”

    Swan­da also says that keep­ing humans in the loop—ensuring AI agents aren’t allowed to take actions with­out approval—is a cru­cial mit­i­ga­tion that can be put in place. “You don’t want an LLM that is read­ing your email to be able to turn around and send an email. There should be a bound­ary there.” For Google and Ope­nAI, Swan­da says that if a prompt is being repeat­ed with­in its sys­tems thou­sands of times, that will cre­ate a lot of “noise” and may be easy to detect.

    ...

    ———-

    “Here Come the AI Worms” by Matt Burgess; Wired; 03/01/2024

    Now, in a demon­stra­tion of the risks of con­nect­ed, autonomous AI ecosys­tems, a group of researchers have cre­at­ed one of what they claim are the first gen­er­a­tive AI worms—which can spread from one sys­tem to anoth­er, poten­tial­ly steal­ing data or deploy­ing mal­ware in the process. “It basi­cal­ly means that now you have the abil­i­ty to con­duct or to per­form a new kind of cyber­at­tack that has­n’t been seen before,” says Ben Nas­si, a Cor­nell Tech researcher behind the research.”

    The AI-worms have arrived. At least arrived in exper­i­ment set­tings. They aren’t in the wild yet. But as these researchers warn, it real­ly is just a mat­ter of time. Espe­cial­ly giv­en the rel­a­tive sim­plic­i­ty of the attack which appears to be very anal­o­gous to clas­sic attack meth­ods like SQL injec­tion or buffer over­flow attacks. In effect, the attack­ers turned the AIs into lit­tle auto­mat­ed agents for secret­ly car­ry­ing out the hack accord­ing to the hack­er’s instruc­tions:

    ...
    Most gen­er­a­tive AI sys­tems work by being fed prompts—text instruc­tions that tell the tools to answer a ques­tion or cre­ate an image. How­ev­er, these prompts can also be weaponized against the sys­tem. Jail­breaks can make a sys­tem dis­re­gard its safe­ty rules and spew out tox­ic or hate­ful con­tent, while pprompt injec­tion attacks can give a chat­bot secret instruc­tions. For exam­ple, an attack­er may hide text on a web­page telling an LLM to act as a scam­mer and ask for your bank details.

    To cre­ate the gen­er­a­tive AI worm, the researchers turned to a so-called “adver­sar­i­al self-repli­cat­ing prompt.” This is a prompt that trig­gers the gen­er­a­tive AI mod­el to out­put, in its response, anoth­er prompt, the researchers say. In short, the AI sys­tem is told to pro­duce a set of fur­ther instruc­tions in its replies. This is broad­ly sim­i­lar to tra­di­tion­al SQL injec­tion and buffer over­flow attacks, the researchers say.

    ...

    In one instance, the researchers, act­ing as attack­ers, wrote an email includ­ing the adver­sar­i­al text prompt, which “poi­sons” the data­base of an email assis­tant using retrieval-aug­ment­ed gen­er­a­tion (RAG), a way for LLMs to pull in extra data from out­side its sys­tem. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT‑4 or Gem­i­ni Pro to cre­ate an answer, it “jail­breaks the GenAI ser­vice” and ulti­mate­ly steals data from the emails, Nas­si says. “The gen­er­at­ed response con­tain­ing the sen­si­tive user data lat­er infects new hosts when it is used to reply to an email sent to a new client and then stored in the data­base of the new client,” Nas­si says.

    In the sec­ond method, the researchers say, an image with a mali­cious prompt embed­ded makes the email assis­tant for­ward the mes­sage on to oth­ers. “By encod­ing the self-repli­cat­ing prompt into the image, any kind of image con­tain­ing spam, abuse mate­r­i­al, or even pro­pa­gan­da can be for­ward­ed fur­ther to new clients after the ini­tial email has been sent,” Nas­si says.
    ...

    And accord­ing to these researchers, we can expect to see these kinds of attacks in ‘the wild’ prob­a­bly in the next two to three years. And in the mean time, they have these words of cau­tion for orga­ni­za­tions pon­der­ing whether or not they should jump aboard the AI band­wag­on: “You typ­i­cal­ly don’t want to be trust­ing LLM out­put any­where in your appli­ca­tion.” Yep, feel free to use the AIs. Just don’t blind­ly trust them. Which might be a bit of a com­pli­ca­tion for the whole AI-pow­ered automa­tion vision of the future:

    ...
    In a paper cov­er­ing their find­ings, Nas­si and the oth­er researchers say they antic­i­pate see­ing gen­er­a­tive AI worms in the wild in the next two to three years. “GenAI ecosys­tems are under mas­sive devel­op­ment by many com­pa­nies in the indus­try that inte­grate GenAI capa­bil­i­ties into their cars, smart­phones, and oper­at­ing sys­tems,” the research paper says.

    Despite this, there are ways peo­ple cre­at­ing gen­er­a­tive AI sys­tems can defend against poten­tial worms, includ­ing using tra­di­tion­al secu­ri­ty approach­es. “With a lot of these issues, this is some­thing that prop­er secure appli­ca­tion design and mon­i­tor­ing could address parts of,” says Adam Swan­da, a threat researcher at AI enter­prise secu­ri­ty firm Robust Intel­li­gence. “You typ­i­cal­ly don’t want to be trust­ing LLM out­put any­where in your appli­ca­tion.”

    Swan­da also says that keep­ing humans in the loop—ensuring AI agents aren’t allowed to take actions with­out approval—is a cru­cial mit­i­ga­tion that can be put in place. “You don’t want an LLM that is read­ing your email to be able to turn around and send an email. There should be a bound­ary there.” For Google and Ope­nAI, Swan­da says that if a prompt is being repeat­ed with­in its sys­tems thou­sands of times, that will cre­ate a lot of “noise” and may be easy to detect.
    ...

    “You don’t want an LLM that is read­ing your email to be able to turn around and send an email. There should be a bound­ary there.”

    LOL, yes there should be bound­aries between AIs and their abil­i­ty to read your email and send replies. But, of course, bound­aries cost mon­ey. It real­ly is going to be a lot cheap­er if you can just ignore those bound­aries and hope every­thing turns out fine. Which is pre­sum­ably what’s going to hap­pen. Or is already hap­pen­ing. The future is now. And it has worms.

    Posted by Pterrafractyl | March 4, 2024, 4:30 pm
  21. Welp, we had a good run. Or maybe not so good. Either way, it’s time to hand over the keys. That’s the meta-mes­sage in the fol­low­ing Wash­ing­ton Post piece authored by WaPo AI colum­nist Josh Tyrang­iel. As Tyrang­iel sees it, the era of man-made gov­ern­ment needs to be put to sleep, soon­er rather than lat­er. Humans are just too slow and inef­fi­cient. And far to resis­tant to change. AI-dri­ven gov­ern­ment is the future. A future where the cit­i­zen­ry’s faith in gov­ern­ment has been restored as the incred­i­ble ben­e­fits in effi­cien­cy and abil­i­ty promised by AI are deliv­ered to the mass­es via bet­ter and cheap­er gov­ern­ment ser­vices.

    But this isn’t just Tyrang­iel’s vision. His col­umn isn’t just about a gener­ic need for the US gov­ern­ment to incor­po­rate AI into vir­tu­al­ly all aspects of gov­ern­ment. There’s a par­tic­u­lar AI-provider Tyrang­iel has in mind: Palan­tir. Because of course it’s Palan­tir.

    As Tyrang­iel describes, Palan­tir isn’t just promis­ing to cre­ate all sorts of AI-pow­ered val­ue to the US gov­ern­ment. It’s already demon­strat­ed it thanks to Oper­a­tion Warp Speed and the role Palan­tir played in that COVID-vac­cine devel­op­ment effort. And there’s one par­tic­u­lar ser­vice Palan­tir deliv­ered to the fed­er­al gov­ern­ment that Tyrang­iel cites as emblem­at­ic of the kind of AI-pow­ered trans­for­ma­tion in how gov­ern­ment oper­ates: the cre­ation of “God view” of a prob­lem. That’s the term used to describe the kind of AI-pow­ered pan­els of detailed infor­ma­tion about any ques­tion at hand pro­vid­ed by Palan­tir’s sys­tems. In the case of Oper­a­tion Warp Speed, it was detailed pan­els of infor­ma­tion relat­ed to the cre­ation of a large num­ber of new vac­cines, like the nation­al sup­ply of vials or syringes. Exam­ples of how the appli­ca­tion of AI in gov­ern­ment ser­vices could include “God views” in the future includes “God views” for each indi­vid­ual vet­er­an in the Vet­er­an Affairs health care sys­tem. The gen­er­al idea is to cre­ate “God views” of almost all aspects of how the gov­ern­ment oper­ates. Using Palan­tir’s ser­vices, of course.

    And as we should also expect giv­en all the hype over Chat­G­PT and AI in gen­er­al, Palan­tir has a num­ber of cheer­lead­ers now lob­by­ing to see the mass imple­men­ta­tion of its tools across the entire fed­er­al bureau­cra­cy. With none oth­er that retired gen­er­al Gus­tave Per­na, the indi­vid­ual tapped to lead Oper­a­tion Warp Speed, as cheer­leader-in-chief. Per­na, now serv­ing as a con­sul­tant for Palan­tir, has big plans for the com­pa­ny’s ser­vices. Per­na admits to advis­ing Palan­tir to think big­ger than just offer­ing its ser­vices to the US Depart­ment of Defense (DoD). When asked what areas of gov­ern­ment could be improved with AI, Per­na respond­ed, “Every­thing.”

    But this isn’t just a Palan­tir sto­ry. Back in 2016, Google CEO Eric Schmidt was invit­ed to join the DoD’s Defense Inno­va­tion Board where it could exam­ine the DoD’s soft­ware pro­cure­ment process­es. “AI is fun­da­men­tal­ly soft­ware,” as Schmidt put it. “You can’t have AI in the gov­ern­ment or the mil­i­tary until you solve the prob­lem of soft­ware in the gov­ern­ment and mil­i­tary.” Schmidt goes on to argue, “I’m not going to make a more emo­tion­al argu­ment, I’m just going to tell you the fol­low­ing: Gov­ern­ment will per­form sub opti­mal­ly until it adopts the soft­ware prac­tices of the indus­try.” It’s anoth­er major theme in this sto­ry: the calls for the fed­er­al gov­ern­ment to adopt the soft­ware indus­try’s prac­tices.

    Now, on the one hand, there are aspects of soft­ware devel­op­ment — like the con­tin­u­al­ly evolv­ing nature of soft­ware — that does indeed clash with much of the fed­er­al gov­ern­men­t’s tra­di­tion­al pro­cure­ment poli­cies. Adopt­ing a more real­is­tic approach to the devel­op­ment of soft­ware for the fed­er­al gov­ern­ment makes a lot of sense. But this is also a good time to recall how the mantra of Sil­i­con Val­ley over the past cou­ple of decades has been to “move fast and break things”. So are we hear­ing calls to effec­tive adopt an AI-pow­ered “move fast and break things” mod­el for the US gov­ern­ment? Yeah, kind of. At least that appears to be the impli­ca­tion of this vision. For exam­ple, the two biggest obsta­cles cit­ed by Tyrang­iel in effi­cient gov­ern­ment is the need for Con­gress to approve new pro­grams and the fear of bureau­crats in mov­ing too fast. In oth­er words, the biggest obsta­cle to this vision for an AI-pow­ered over­haul of how the gov­ern­ment oper­ates are the peo­ple in charge of approv­ing and imple­ment­ing those changes. Are we going to see a push to have few­er and few­er peo­ple involved with mak­ing these deci­sion? That seems to be the direc­tion things are head­ing.

    it will an AI-run gov­ern­ment still need all of the exist­ing fed­er­al employ­ees? Nope. LLMs like Chat­G­PT are seen as a great replace­ment for all sorts of fed­er­al employ­ees and this could hap­pen soon. That’s also part of the vision, and some­thing that only fuels the GOP’s ongo­ing Sched­ule F/Project 2025 plans to purge the fed­er­al gov­ern­ment of tens of thou­sands of employ­ees, only to replace them with crony loy­al­ists. So don’t be sur­prised if “we’re going to replace them all with inex­pen­sive Chat­G­PT bots” becomes part of mass-fir­ing mantra of future Repub­li­can admin­is­tra­tions.

    This is also a good time to keep in mind how the devel­op­ment of AI-cults — where AIs effec­tive­ly tell peo­ple how to man­age every aspect of every day of their life for ‘max­i­mum effi­cien­cy’ or ‘max­i­mum health’ or what­ev­er — is already a thing. Recall that trou­bling AI-longevi­ty cult being devel­oped by Bryan John­son. It’s not hard to imag­ine pop cul­ture trends like that fuel­ing the pub­lic fas­ci­na­tion with an AI-dri­ven gov­ern­ment.

    So expect to read a lot more columns like the fol­low­ing. Per­haps even AI-writ­ten columns. Because it’s hard to imag­ine that we’re not going to be see­ing more and more peo­ple (and their bots) push­ing to effec­tive­ly replace the fed­er­al gov­ern­ment with some sort of mas­ter AI. It’s a allur­ing promise. A mas­ter AI with a super “God view” of vir­tu­al­ly all of the infor­ma­tion avail­able to the fed­er­al gov­ern­ment and that will be trust­ed to over­haul the gov­ern­ment as it sees fit. And which just hap­pens to be built and man­aged by Palan­tir:

    Wash­ing­ton Post
    Opin­ion

    Let AI remake the whole U.S. gov­ern­ment (oh, and save the coun­try)

    By Josh Tyrang­iel
    Colum­nist|
    March 6, 2024 at 7:15 a.m. EST

    My awak­en­ing began in the mod­ern fash­ion — late at night, on YouTube. Months lat­er the video still has just 3,900 views, so I’d bet­ter describe it.

    A few dozen peo­ple have gath­ered to watch a pre­sen­ta­tion. It’s capa­bly pro­duced — like a mid­size col­lege professor’s audi­tion for a TED Talk. The pre­sen­ter, in a pat­terned blaz­er and blue oxford, is retired four-star gen­er­al Gus­tave Per­na. “I spent 40 years in the Army,” Per­na begins, the hard edges of his New Jer­sey accent clang­ing a lit­tle in the room. “I was an aver­age infantry offi­cer. I was a great logis­ti­cian.”

    It’s a leisure­ly start. And yet the clos­est com­par­i­son I have for what comes next is Star Wars. Because once he gets through his slow-crawl pro­logue, Per­na tells a sto­ry so tense and futur­is­tic that, by the end, it’s pos­si­ble to glimpse a com­plete­ly dif­fer­ent way in which we might live as cit­i­zens. Also, there’s warp speed.

    Per­haps Perna’s name sounds famil­iar. It should. He over­saw the effort to pro­duce and dis­trib­ute the first coro­n­avirus vac­cines — a recent tri­umph of U.S. pol­i­cy that’s been erased by the stu­pid­i­ty of U.S. pol­i­tics. Per­na was a month from retire­ment in May 2020 when he got a Sat­ur­day morn­ing call from the chair­man of the Joint Chiefs. Arriv­ing in Wash­ing­ton two days lat­er to begin Oper­a­tion Warp Speed, his arse­nal con­sist­ed of three colonels, no mon­ey and no plan.

    The audi­ence is focus­ing now. Per­na tells them that what he need­ed more than any­thing was “to see myself.” On the bat­tle­field this means know­ing your troops, posi­tions and sup­plies. It means rough­ly the same thing here, except the bat­tle­field is bound­ary­less. Per­na need­ed up-to-the-minute data from all the rel­e­vant state and fed­er­al agen­cies, drug com­pa­nies, hos­pi­tals, phar­ma­cies, man­u­fac­tur­ers, truck­ers, dry ice mak­ers, etc. Oh, and that data need­ed to be stan­dard­ized and oper­a­tional­ized for swift deci­sion-mak­ing.

    ...

    To see him­self, Per­na need­ed a real-time dig­i­tal dash­board of an entire civ­i­liza­tion.

    This being Wash­ing­ton, con­sul­tants lined up at his door. Per­na gave each an hour, but none could define the prob­lem let alone offer a cred­i­ble solu­tion. “Excru­ci­at­ing,” Per­na tells the room, and here the Jer­sey accent helps dri­ve home his dis­gust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, deter­mine risk, and make deci­sions rapid­ly.” Per­na shut down the process imme­di­ate­ly. “I said great, you’re hired.”

    Julie and Aaron work for Palan­tir, a com­pa­ny whose name cur­dles the blood of pro­gres­sives and some of the mil­i­tary estab­lish­ment. We’ll get to why. But Per­na says Palan­tir did exact­ly what it promised. Using arti­fi­cial intel­li­gence, the com­pa­ny opti­mized thou­sands of data streams and piped them into an ele­gant inter­face. In a few short weeks, Per­na had his God view of the prob­lem. A few months after that, Oper­a­tion Warp Speed deliv­ered vac­cines simul­ta­ne­ous­ly to all 50 states. When gov­er­nors called pan­ick­ing that they’d some­how been short­ed, Per­na could share a screen with the pre­cise num­ber of vials in their pos­ses­sion. “‘Oh, no, gen­er­al, that’s not true.’ Oh, yes. It is.”

    ...

    When Joe Biden deliv­ers his State of the Union on March 7, he’ll like­ly become the first pres­i­dent to use the phrase arti­fi­cial intel­li­gence in the address. The pres­i­dent has been good on AI. His exec­u­tive order on the “Safe, Secure, and Trust­wor­thy Devel­op­ment and Use of Arti­fi­cial Intel­li­gence” threw a switch acti­vat­ing the fed­er­al bureaucracy’s engage­ment. He’s del­e­gat­ing to smart peo­ple and bang­ing the drum about gen­er­a­tive AI’s abil­i­ty to cre­ate mis­in­for­ma­tion and harm nation­al secu­ri­ty. That’s plen­ty for a speech.

    But the vision remains so small com­pared with the pos­si­bil­i­ties. This is tech­nol­o­gy that could trans­form almost every­thing about our soci­ety, yet nei­ther the pres­i­dent nor his polit­i­cal rivals have imag­ined how it might do the same for the gov­ern­ment itself. So allow me.

    Accord­ing to a 2023 year end Gallup poll, Amer­i­cans’ con­fi­dence in 15 insti­tu­tions — cov­er­ing things such as health care, edu­ca­tion and reg­u­la­tion — is at his­toric lows. The poll’s con­clu­sion is that gov­ern­ment is suf­fer­ing an acute cri­sis of legit­i­ma­cy. We no longer trust it to fix impor­tant things in our lives. If con­fi­dence in the effec­tive­ness of gov­ern­ment keeps erod­ing at this pace, how much longer do you think we can remain unit­ed? How easy do we want to make our dis­man­tling for the nihilists already cheer­ing it on?

    Prop­er­ly deployed, AI can help blaze a new path to the shin­ing city on a hill. In 2023, the nation­al tax­pay­er advo­cate report­ed that the IRS answered only 29 per­cent of its phone calls dur­ing tax sea­son. Human-based eli­gi­bil­i­ty deci­sions for the Sup­ple­men­tal Nutri­tion Assis­tance Pro­gram, have a 44 per­cent error rate. Large-lan­guage-mod­el-pow­ered chat­bots could already be pro­vid­ing bet­ter ser­vice — at all hours, in all lan­guages, at less cost — for peo­ple who rely on the fed­er­al gov­ern­ment for vet­er­ans ben­e­fits, stu­dent loans, unem­ploy­ment, social secu­ri­ty and Medicare. That’s table stakes.

    Now think about Warp Speed­ing entire agen­cies and func­tions: the IRS, which, in 2024, still makes you guess how much you owe it, pub­lic health sur­veil­lance and response, traf­fic man­age­ment, main­te­nance of inter­states and bridges, dis­as­ter pre­pared­ness and relief. AI can rev­o­lu­tion­ize the rela­tion­ship between cit­i­zens and the gov­ern­ment. We have the tech­nol­o­gy. We’ve already used it.

    Men­tion Oper­a­tion Warp Speed to skep­tics and they’ll wave you off. It doesn’t count. In a cri­sis the great sloth of gov­ern­ment can sprint, but in reg­u­lar times pro­cure­ment rules, agency reg­u­la­tors and the end­less nit­pick­ing of pol­i­tics make big things impos­si­ble. All true.

    There’s anoth­er strain of skep­ti­cism that goes like this: Are you insane? AI might cre­ate all kinds of effi­cien­cy, but it’s also been known to have sys­temic bias­es that could get encod­ed into offi­cial gov­ern­ment sys­tems, lack trans­paren­cy that could under­mine pub­lic trust, make loads of fed­er­al jobs obso­lete, and be vul­ner­a­ble to data breach­es that com­pro­mise pri­va­cy and sen­si­tive infor­ma­tion. If AI were a Big Phar­ma prod­uct the ads would be 10 min­utes long.

    We can put guardrails around how the gov­ern­ment uses AI — anonymiz­ing per­son­al data as they do in the Euro­pean Union, cre­at­ing over­sight bod­ies for con­tin­u­ous mon­i­tor­ing — but I’m not naive. Some things will still go wrong. Which leaves us to weigh the risks of the cure against the dead­li­ness of the dis­ease.

    To check my premise, I set up a Zoom call with Per­na. He was in sweats at his home in Alaba­ma, and if he missed car­ry­ing the weight of the world he did a great job hid­ing it. He con­sults a lit­tle for Palan­tir now, but most­ly he was excit­ed to talk about grand­kids, the Yan­kees and the best New York City slice joints. His mood shift­ed when I asked what gov­ern­ment could improve if it embraced AI. “Every­thing,” he snapped, before the ques­tion was ful­ly out. “I don’t under­stand how we’re not using it for organ dona­tion right now. We should be ashamed. Why do we need 80,000 new peo­ple at the IRS? We could rev­o­lu­tion­ize the bud­get process. I tell Palan­tir, why are you play­ing around with the Depart­ment of Defense? Think big­ger.”

    ...

    Imag­ine all of an organization’s data sources as a series of gar­den hoses in your back­yard. Let’s say the orga­ni­za­tion is a hos­pi­tal. There are hoses for per­son­nel, equip­ment, drugs, insur­ance com­pa­nies, med­ical sup­plies, sched­ul­ing, bed avail­abil­i­ty and prob­a­bly dozens of oth­er things. Many of the hoses con­nect up to ven­dors and many con­nect to patients. No one can remem­ber what some of them are sup­posed to con­nect to. All were bought at dif­fer­ent times from dif­fer­ent man­u­fac­tur­ers and are dif­fer­ent sizes and lengths. And it’s a hos­pi­tal, so hose main­te­nance has nev­er been anyone’s top pri­or­i­ty. Now look out the win­dow. There’s a pile of knot­ted rub­ber so dense you can’t see grass.

    Palan­tir untan­gles hoses.

    “We’ve always been the mole peo­ple of Sil­i­con Val­ley,” says Akshay Krish­naswamy, Palantir’s chief archi­tect. “It’s like we go into the plumb­ing of all this stuff and come out and say, ‘Let’s help you build a beau­ti­ful ontol­ogy.’”

    In meta­physics, ontol­ogy is the study of being. In soft­ware and AI, it’s come to mean the untan­gling of mess­es and the cre­ation of a func­tion­al infor­ma­tion ecosys­tem. Once Palan­tir stan­dard­izes an organization’s data and defines the rela­tion­ships between the streams, it can build an appli­ca­tion or inter­face on top of it. This com­bi­na­tion — inte­grat­ed data and a use­ful app — is what allows every­one from mid­dle man­agers to four-star gen­er­als to have an AI co-pilot, to see them­selves with the God view. “It’s the Iron Man suit for the per­son who’s using it,” says Krish­naswamy. “It’s like, they’re still going to have to make deci­sions but they feel like they’re now fly­ing around at Mach 5.”

    The most dra­mat­ic expres­sion of Palantir’s capa­bil­i­ties is in Ukraine, where the com­pa­ny merges real-time views from hun­dreds of com­mer­cial satel­lites with com­mu­ni­ca­tions tech­nol­o­gy and weapons data. All of that infor­ma­tion is then seam­less­ly dis­played on lap­tops and hand­held dash­boards for com­man­ders on the bat­tle­field. A senior U.S. mil­i­tary offi­cial told me, “The Ukrain­ian force is incred­i­bly tough, but it’s not much of a fight with­out iPads and Palan­tir.”

    I men­tioned that pro­gres­sives and some of the mil­i­tary estab­lish­ment dis­like Palan­tir. Each has a rea­son. The com­pa­ny was co-found­ed in 2003 by Peter Thiel, which explains much of the hatred from the far left. Thiel spoke at the 2016 Repub­li­can con­ven­tion, endorsed Don­ald Trump in 2016, dis­likes mul­ti­cul­tur­al­ism, financed a law­suit to kill Gawk­er and then ttried to buy its corpse. The enmi­ty here is mutu­al, but also kind of triv­ial.

    Palan­tir has anoth­er co-founder. His name is Alex Karp, and many peo­ple in the Pen­ta­gon find him very annoy­ing. The quick expla­na­tion is that Karp is loud and impa­tient, and he’s not one of them. But it’s more trou­bling than that.

    Karp was born in New York City to a Black moth­er and a Jew­ish father. He’s severe­ly dyslex­ic, a social­ist, a 2016 Hillary Clin­ton sup­port­er. When we spoke in Palantir’s New York offices, it was clear that he’s both whip-smart and keeps a care­ful account­ing of the slights he’s accu­mu­lat­ed. “Quite frankly,” Karp told me, “just because of bio­graph­i­cal issues, I assume I am going to be screwed, right?” It was like meet­ing the pro­tag­o­nist from a book co-authored by Ralph Elli­son and Philip Roth.

    Thiel and Karp were law school class­mates at Stan­ford in the ear­ly ’90s. They argued plen­ty, but agreed about enough to cre­ate Palan­tir with par­tial fund­ing (less than $2 mil­lion) from In-Q-Tel, an invest­ment arm of the CIA, and a few core beliefs. The first is that the Unit­ed States is excep­tion­al, and work­ing to strength­en its posi­tion in the world ben­e­fits all human­i­ty. “I’ve lived abroad,” Karp says. “I know [Amer­i­ca] is the only coun­try that’s remote­ly as fair and mer­i­to­crat­ic as Amer­i­ca is. And I tend to be more focused on that than the obvi­ous short­com­ings.” In a speech last year, Karp, who is CEO, explained what this means for the com­pa­ny: “If you don’t think the U.S. gov­ern­ment should have the best soft­ware in the world … We respect­ful­ly ask you not to join Palan­tir. Not in like you’re an idiot, just we have this belief struc­ture.”

    The company’s sec­ond core belief springs from the chip on Karp’s shoul­der. Like gen­er­a­tions of Black and Jew­ish entre­pre­neurs before him, Karp pre­sumes his com­pa­ny isn’t going to win any deals on the golf course. So to get con­tracts from For­tune 500 com­pa­nies and gov­ern­ments Palan­tir must do things oth­er soft­ware com­pa­nies won’t, and do them so fast and cheap that the results are irrefutable.

    This approach has worked exceed­ing­ly well in the cor­po­rate world. Palantir’s mar­ket cap­i­tal­iza­tion is $52 bil­lion and its stock has climbed more than 150 per­cent in the past year, large­ly because of demand for its AI prod­ucts. But for much of its exis­tence, an open­ly patri­ot­ic com­pa­ny with soft­ware bet­ter, faster and cheap­er than its com­peti­tors was shut out of U.S. defense con­tracts. In the mid-2010s this put Palantir’s sur­vival at risk and sharp­ened Karp’s indig­na­tion to a fine point. Either his biog­ra­phy had made him para­noid or some­thing was amiss.

    In 2016, Palan­tir took the unprece­dent­ed step of suing the Pen­ta­gon to find out. The case alleged the Defense Depart­ment was in vio­la­tion of the Fed­er­al Acqui­si­tion Stream­lin­ing Act, a 1994 law that pro­hibits the gov­ern­ment from start­ing new bloat-filled projects if an off-the-shelf solu­tion is avail­able. The House Com­mit­tee on Gov­ern­ment Oper­a­tions made its intent unusu­al­ly clear: “The Fed­er­al Gov­ern­ment must stop ‘rein­vent­ing the wheel’ and learn to depend on the wide array of prod­ucts and ser­vices sold to the gen­er­al pub­lic.”

    The record of Palan­tir v. Unit­ed States is about as one-sided as these things can be. In the Court of Fed­er­al Claims, Palan­tir was able to doc­u­ment sol­diers, offi­cers and pro­cure­ment peo­ple acknowl­edg­ing the suprema­cy and low­er cost of its in-mar­ket prod­ucts — and show the Pen­ta­gon was still buy­ing a more expen­sive pro­pos­al, years from effec­tive deploy­ment, offered by a con­sor­tium of Raytheon, Northrop Grum­man and Lock­heed Mar­tin. The Army’s defense can be sum­ma­rized as, “Yeah, well that’s kin­da how we do stuff.” Palantir’s lawyers respond­ed with insults about struc­tur­al iner­tia, backed with receipts. Boies, Schiller & Flexn­er had them­selves a time.

    Palantir’s vic­to­ry was resound­ing, and opened the door to what is now a more func­tion­al rela­tion­ship. Wednes­day, the Army announced that Palan­tir won a $178 mil­lion con­tract to make 10 pro­to­types for the next phase of its tac­ti­cal intel­li­gence tar­get­ing node (Titan) pro­gram. Titan is a ground sta­tion that uses sen­sor data from space, sky and land to improve long-range weapons pre­ci­sion.

    Still, Karp insists rivals reg­u­lar­ly win con­tracts with video pre­sen­ta­tions of unbuilt solu­tions over exist­ing soft­ware from Palan­tir. Sev­er­al peo­ple I spoke with in the Defense Depart­ment vol­un­teered that Palantir’s soft­ware is excel­lent — and a few said they’d be hap­py if the com­pa­ny would go away. It chal­lenges too many things about the pro­cure­ment cul­ture and process. One not­ed that Palantir’s D.C. office is in George­town near (gasp) a Lul­ule­mon as opposed to in the tra­di­tion­al val­ley of con­trac­tors adja­cent to the Pen­ta­gon.

    ...

    Palantir’s saga doesn’t prove that gov­ern­ment employ­ees are bad, mere­ly that humans can tol­er­ate lim­it­less amounts of dys­func­tion, espe­cial­ly when every­one around them is doing the same. They’re trapped in a sys­tem where all incen­tives point toward the sta­tus quo. Per­na wants Palan­tir to think big­ger, but remem­ber: The Defense Depart­ment can embrace and expe­dite things in the name of nation­al secu­ri­ty that oth­ers can­not. It’s one of the most AI-friend­ly parts of the gov­ern­ment.

    The chal­lenge then is fix­ing a mas­sive sys­tem that has become con­sti­tu­tion­al­ly resis­tant to solu­tions, par­tic­u­lar­ly ones fueled by tech­nol­o­gy such as arti­fi­cial intel­li­gence. It’s a Mobius strip that no one can seem to straight­en out. But Karp sees a direct line between Palantir’s expe­ri­ence and the per­il of the cur­rent moment. “Every time I see ordi­nary inter­ac­tions between ordi­nary cit­i­zens and the gov­ern­ment, it’s very high fric­tion for no rea­son,” he says. “And then there’s almost no out­put. For­get the dol­lars spent. Whether it’s immi­gra­tion, health records, tax­a­tion, get­ting your car to work, you’re going to have a bad expe­ri­ence, right? And that bad expe­ri­ence, makes you think, ‘Hmm, noth­ing works here. And because noth­ing works here I’m going to tear down the whole sys­tem.’”

    A few months before Palan­tir sued the Unit­ed States in 2016, Eric Schmidt got a call from Defense Sec­re­tary Ash­ton B. Carter. Carter was launch­ing some­thing called the Defense Inno­va­tion Board to try to get more tech think­ing into the Pen­ta­gon. He want­ed Schmidt, then the exec­u­tive chair­man of Google’s par­ent com­pa­ny Alpha­bet, to join. “I declined,” says Schmidt. “And Carter said, ‘Well, you know, do it any­way,’”

    I’ve spo­ken with Schmidt sev­er­al times over the years and he’s been about as pre­dictable as a Hol­i­day Inn. But as he recalled his time on the Defense Inno­va­tion Board there was a dif­fer­ent tone, like the guy in a hor­ror movie who’s been chilled by his encounter with a vague­ly threat­en­ing super­nat­ur­al force. The qui­et one who says, “You don’t know what’s out there, man.”

    Carter let the Defense Inno­va­tion Board exam­ine every­thing it need­ed to assess how the Pen­ta­gon devel­ops, acquires and uses tech­nol­o­gy — the 99.9 per­cent of the ice­berg that remained out of sight in the Palan­tir court case. Pret­ty quick­ly Schmidt con­clud­ed the entire fed­er­al appa­ra­tus has acci­den­tal­ly mutat­ed into software’s per­fect ene­my. “AI is fun­da­men­tal­ly soft­ware,” says Schmidt. “You can’t have AI in the gov­ern­ment or the mil­i­tary until you solve the prob­lem of soft­ware in the gov­ern­ment and mil­i­tary.”

    Most gov­ern­ment projects work back­ward from an out­come — a bridge will be built from point X to point Y and cost Z. Soft­ware is an abstrac­tion mov­ing toward a des­ti­na­tion that’s always chang­ing. Google didn’t cre­ate a search box and then close up shop; it kept spend­ing and staffing because that’s how tech­nol­o­gy gets bet­ter and more usable. Unlike a bridge, soft­ware is nev­er done. Try sell­ing that to bureau­crats who are told they must pay for only what they can doc­u­ment.

    Schmidt described for me the nor­mal course of soft­ware devel­op­ment — pro­to­typ­ing with a small group of engi­neers, get­ting lots of user feed­back, end­less refine­ment and iter­a­tion. “Every sin­gle thing I just told you is ille­gal,” Schmidt says.

    If only this were true. We could then just make things legal and move on. In fact, Con­gress — though hard­ly blame­less — has giv­en the Defense Depart­ment count­less workarounds and spe­cial author­i­ties over the years. Most have been for­got­ten or ignored by pub­lic ser­vants who are too scared to embrace them. Take one of Schmidt’s exam­ples; you real­ly are allowed to con­duct soft­ware user sur­veys, but most staffers at the Office of Infor­ma­tion and Reg­u­la­to­ry Affairs inter­pret the legal guid­ance to mean a six-month review process is required before grant­i­ng per­mis­sion. A six-month wait for a prod­uct that nev­er stops mov­ing. That means nor­mal soft­ware prac­tices are worse than ille­gal. They’re a form of bureau­crat­ic tor­ture.

    The Defense Inno­va­tion Board chan­neled its bewil­der­ment into a mas­ter­piece: “Soft­ware is Nev­er Done: Refac­tor­ing the Acqui­si­tion Code for Com­pet­i­tive Advan­tage.” I’m not being iron­ic. It’s the most rea­son­able, styl­ish and solu­tions-based cri­tique of mod­ern gov­ern­ment I’ve ever read. The authors did the unglam­orous work of going through the infest­ed gar­den of process­es and rules and called out many of the nas­ti­est weeds. Then they made com­mon-sense rec­om­men­da­tions — treat soft­ware as a liv­ing thing that cross­es bud­get lines; do cost assess­ments that pri­or­i­tize speed, secu­ri­ty, func­tion­al­i­ty and code qual­i­ty; col­lect data from the department’s weapons sys­tems and cre­ate a secure repos­i­to­ry to eval­u­ate their effec­tive­ness — and urged Con­gress to pass them.

    They also ref­er­enced the dozen pre­vi­ous soft­ware reports com­mis­sioned by the mil­i­tary dat­ing back to 1982, all of which came to sim­i­lar con­clu­sions. The prob­lem isn’t a lack of solu­tions, it’s get­ting Con­gress to approve the polit­i­cal­ly risky ones and “the frozen mid­dle” to imple­ment them: “We ques­tion nei­ther the integri­ty nor the patri­o­tism of this group. They are sim­ply not incen­tivized to the way we believe mod­ern soft­ware should be acquired and imple­ment­ed, and the enor­mous iner­tia they rep­re­sent is a pro­found bar­ri­er to change.”

    When soft­ware becomes a cri­sis, politi­cians call Jen­nifer Pahlka. Pahlka was deputy chief tech­nol­o­gy offi­cer in the Oba­ma admin­is­tra­tion and was cru­cial to the res­cue of healthcare.gov — the most flawed, fraught and ulti­mate­ly suc­cess­ful soft­ware project in gov­ern­ment his­to­ry. In 2020, Gavin New­som bat-sig­naled her to untan­gle California’s unem­ploy­ment insur­ance pro­gram as it buck­led under the weight of the covid-19 response. “I come to this work,” says Pahlka, “with the assump­tion that peo­ple are hav­ing a f—— ner­vous break­down.”

    Pahlka served with Schmidt on the Defense Inno­va­tion Board, which affirmed decades of her expe­ri­ence at the con­ver­gence of soft­ware and gov­ern­ment. The dys­func­tion loop begins when absurd process­es are giv­en to pub­lic ser­vants who will be judged on their com­pli­ance with absur­di­ty. If they do their jobs right, the nation pur­chas­es obso­lete over­priced soft­ware. If they make a mis­take or take a risk that defies the absur­di­ty, politi­cians hold hear­ings and jump all over them — which is far sim­pler than fix­ing the process. Each recrim­i­na­tion dri­ves more good peo­ple out of pub­lic ser­vice. Rinse, repeat.

    What Pahlka has noticed recent­ly is that the wave is crest­ing. More things are break­ing, and the remain­ing com­pe­tent pub­lic ser­vants who under­stand tech­nol­o­gy are just bare­ly hang­ing on. “Most of what I do on a dai­ly basis is like ther­a­py,” Pahlka says. “I tell peo­ple, ‘Those feel­ings you’re hav­ing are nor­mal. The only way to get through them is to share them.’” The ded­i­ca­tion in her excel­lent book, “Recod­ing Amer­i­ca: Why Gov­ern­ment Is Fail­ing in the Dig­i­tal Age and How We Can Do Bet­ter,” said, “To pub­lic ser­vants every­where. Don’t give up.” Pahlka told me, “I’ve had peo­ple come up to me and ask me to sign and they just start cry­ing.”

    It’s not just the rank and file. Schmidt end­ed up serv­ing four years on the Defense Inno­va­tion Board. When we were wrap­ping up our con­ver­sa­tion, he took a breath and paused for a moment. “I’m not going to make a more emo­tion­al argu­ment, I’m just going to tell you the fol­low­ing: Gov­ern­ment will per­form sub opti­mal­ly until it adopts the soft­ware prac­tices of the indus­try.” He sound­ed pret­ty emo­tion­al.

    It did not take some­one with John F. Kennedy’s charis­ma to inspire Amer­i­cans to go to the moon. The moon is big and pret­ty. Human­i­ty has been dream­ing about it for eons. Calvin Coolidge lev­els of charm would have suf­ficed.

    The chal­lenge of using AI for bet­ter gov­ern­ment is very dif­fer­ent. The excite­ment about a new thing is tem­pered by fear and con­fu­sion. To get the max­i­mum reward from AI, the coun­try must first go through an unprece­dent­ed veg­etable-eat­ing exer­cise to clean up its bureau­cra­cy. Turn­ing that into poet­ry is hard. There’s no ide­al mes­sen­ger, but an octo­ge­nar­i­an whose best speech­es are about grief and a sep­tu­a­ge­nar­i­an whose speech­es are bare­ly speech­es is per­haps not the opti­mal set of choic­es.

    ...

    The truth. The rela­tion­ship between cit­i­zens and gov­ern­ment is frac­tured. It’s cru­cial to the republic’s sur­vival that we stop defend­ing the sta­tus quo. New tech­nol­o­gy can help us repair the dam­age and open the door to a lev­el of ser­vice and effi­cien­cy that will make Scan­di­na­vians seethe with envy. Almost all of this AI tech has been cre­at­ed by Amer­i­can inge­nu­ity inside Amer­i­can com­pa­nies, and the Amer­i­can peo­ple deserve its ben­e­fits.

    Next, say the thing Democ­rats don’t want to say: Not every gov­ern­ment job should be a job for life. LLMs can pro­vide bet­ter ser­vice and respon­sive­ness for many day-to-day inter­ac­tions between cit­i­zens and var­i­ous agen­cies. They’re not just cheap­er, they’re also faster, and, when trained right, less prone to error or mis­in­ter­pre­ta­tion. That means it’s pos­si­ble the fed­er­al gov­ern­ment will soon have few­er employ­ees. But AI will nev­er replace human judg­ment — about ben­e­fits, penal­ties or any­thing in between. It’s a tool to be used by Amer­i­cans to make bet­ter deci­sions for our nation­al well-being.

    That earns you the right to say the thing rea­son­able Repub­li­cans don’t want to hear: their bluff is going to be called. If they con­tin­ue to indulge the party’s idi­ot­ic fan­tasies of burn­ing the entire fed­er­al appa­ra­tus to the ground, they’ll be left hold­ing the ash­es. They need to admit that a prop­er­ly run gov­ern­ment has an impor­tant role in people’s lives, and they need to co-sign fix­ing it. With­out cross­ing their fin­gers behind their backs.

    All this is pre­am­ble to the work — method­i­cal demo­li­tion and joy­ful con­struc­tion. Pahlka says the pol­i­cy guide­lines that gov­ern the Defense Depart­ment equal 100 stacked copies of “War and Peace.” There are more than 7,000 pages of unem­ploy­ment reg­u­la­tions. Luck­i­ly, untan­gling the Unit­ed States’ hair­ball of fine print is the per­fect job for AI. Banks already use it to dedu­pli­cate obso­lete com­pli­ance rules. Pahlka is work­ing to demon­strate its fea­si­bil­i­ty inside agen­cies. The Pen­ta­gon is exper­i­ment­ing with an AI pro­gram called Gamechang­er that helps bureau­crats nav­i­gate its own bureau­cra­cy. It’s easy to mock, and we’ll still need count­less human hours of over­sight — many of them from Con­gress — to ensure the job’s done right. But it’s exact­ly the kind of hum­ble first step that deserves praise. Tur­bocharge these efforts, then start build­ing. But not every­where, at least not at first.

    One of the secrets of great soft­ware is that it’s not built all at once. Projects get bro­ken down into man­age­able units called sprints; teams get feed­back, make adjust­ments in real-time, then use that knowl­edge to tack­le the next sprint. It’s a form of com­mon sense that the indus­try calls agile devel­op­ment.

    The Unit­ed States should do its first agile AI sprint in its most bro­ken place, where the breach of trust and ser­vices is the most shame­ful. You like­ly know the sta­tis­tics about Vet­er­ans Affairs but there’s one worth repeat­ing: 6,392 vet­er­ans died by sui­cide in 2021, the most recent year num­bers are avail­able. A ProP­ub­li­ca review of inspec­tor gen­er­al reports found VA employ­ees reg­u­lar­ly “botched screen­ings meant to assess vet­er­ans’ risk of sui­cide or vio­lence; some­times they didn’t per­form the screen­ings at all.”

    What if we treat VA like the cri­sis it is? It’s not as sim­ple as untan­gling hoses between vet­er­ans and the depart­ment. A lot of care is man­aged man­u­al­ly. But when we cre­ate dig­i­tal infra­struc­ture, appoint­ment sched­ul­ing can run on AI. A cas­cade of ben­e­fits would fol­low, such as reduced wait times, ana­lyt­ics that pre­dict demand for ser­vices, and auto­mat­ed reminders and fol­low-ups so VA staff can focus on patients over paper­work. Next make a first alert chat­bot for vet­er­ans that, only with their con­sent, can be used to look for signs of cri­sis or sui­ci­dal thoughts, offers cop­ing mech­a­nisms and resources, and esca­lates cas­es to men­tal health providers.

    The big one is per­son­al­ized care. Vet­er­ans deserve to be empow­ered with a God view of their own treat­ment, and that data can be anonymized and ana­lyzed for insights into vet­er­an-spe­cif­ic con­di­tions such as post-trau­mat­ic stress dis­or­der and trau­mat­ic brain injuries. Is there risk? There is. Is the risk worse than an aver­age of 18 vet­er­ans killing them­selves each day? I don’t think so.

    Let’s give our­selves a count­down clock: One year to make it hap­pen. It’s a prob­lem sim­i­lar in scale, com­plex­i­ty and impor­tance to Oper­a­tion Warp Speed. There’s a grand­pa in Alaba­ma who might be con­vinced to help.

    There are more ques­tions — part of get­ting AI into gov­ern­ment is real­iz­ing there will be no get­ting it out. It turns out that good soft­ware and good gov­ern­ment are more sim­i­lar than we knew: Nei­ther is ever done. The past few decades the fed­er­al gov­ern­ment stopped chang­ing. One side tried to crip­ple it while the oth­er respond­ed with smoth­er­ing lev­els of affec­tion and excus­es. These equal and irra­tional forces cre­at­ed sta­sis and decay, but Amer­i­can lives kept mov­ing for­ward with new needs and expec­ta­tions.

    This new era of AI has pre­sent­ed a once-in-a-cen­tu­ry chance to wipe away a lot of the dam­age and renew the mis­sion. Not to the moon, but to a more per­fect union.

    ———–

    “Let AI remake the whole U.S. gov­ern­ment (oh, and save the coun­try)” By Josh Tyrang­iel; Wash­ing­ton Post; 03/06/2024

    Now think about Warp Speed­ing entire agen­cies and func­tions: the IRS, which, in 2024, still makes you guess how much you owe it, pub­lic health sur­veil­lance and response, traf­fic man­age­ment, main­te­nance of inter­states and bridges, dis­as­ter pre­pared­ness and relief. AI can rev­o­lu­tion­ize the rela­tion­ship between cit­i­zens and the gov­ern­ment. We have the tech­nol­o­gy. We’ve already used it.”

    An Oper­a­tion Warp Speed-style over­haul of the entire US gov­ern­ment. That’s the tan­ta­liz­ing promise described in this piece. But it’s not just the author of piece — the WaPo’s AI colum­ni Josh Tyrang­iel — who is cheer­lead­ing for this plan. Gus­tave Per­na, the retired four-star gen­er­al who lead Oper­a­tion Warp Speed — has become a big advo­cate of apply­ing the lessons of Oper­a­tion Warp Speed to the whole of gov­ern­ment. Oper­a­tion Warp Speed lessons that appear to include rely­ing on Palan­tir to pro­vide “God views” of dif­fer­ent aspects of gov­ern­ment. Palan­tir-fueled “God views” of the whole of gov­ern­ment. That’s the vision laid out in this piece:

    ...
    Per­haps Perna’s name sounds famil­iar. It should. He over­saw the effort to pro­duce and dis­trib­ute the first coro­n­avirus vac­cines — a recent tri­umph of U.S. pol­i­cy that’s been erased by the stu­pid­i­ty of U.S. pol­i­tics. Per­na was a month from retire­ment in May 2020 when he got a Sat­ur­day morn­ing call from the chair­man of the Joint Chiefs. Arriv­ing in Wash­ing­ton two days lat­er to begin Oper­a­tion Warp Speed, his arse­nal con­sist­ed of three colonels, no mon­ey and no plan.

    The audi­ence is focus­ing now. Per­na tells them that what he need­ed more than any­thing was “to see myself.” On the bat­tle­field this means know­ing your troops, posi­tions and sup­plies. It means rough­ly the same thing here, except the bat­tle­field is bound­ary­less. Per­na need­ed up-to-the-minute data from all the rel­e­vant state and fed­er­al agen­cies, drug com­pa­nies, hos­pi­tals, phar­ma­cies, man­u­fac­tur­ers, truck­ers, dry ice mak­ers, etc. Oh, and that data need­ed to be stan­dard­ized and oper­a­tional­ized for swift deci­sion-mak­ing.

    ...

    To see him­self, Per­na need­ed a real-time dig­i­tal dash­board of an entire civ­i­liza­tion.

    This being Wash­ing­ton, con­sul­tants lined up at his door. Per­na gave each an hour, but none could define the prob­lem let alone offer a cred­i­ble solu­tion. “Excru­ci­at­ing,” Per­na tells the room, and here the Jer­sey accent helps dri­ve home his dis­gust. Then he met Julie and Aaron. They told him, “Sir, we’re going to give you all the data you need so that you can assess, deter­mine risk, and make deci­sions rapid­ly.” Per­na shut down the process imme­di­ate­ly. “I said great, you’re hired.”

    Julie and Aaron work for Palan­tir, a com­pa­ny whose name cur­dles the blood of pro­gres­sives and some of the mil­i­tary estab­lish­ment. We’ll get to why. But Per­na says Palan­tir did exact­ly what it promised. Using arti­fi­cial intel­li­gence, the com­pa­ny opti­mized thou­sands of data streams and piped them into an ele­gant inter­face. In a few short weeks, Per­na had his God view of the prob­lem. A few months after that, Oper­a­tion Warp Speed deliv­ered vac­cines simul­ta­ne­ous­ly to all 50 states. When gov­er­nors called pan­ick­ing that they’d some­how been short­ed, Per­na could share a screen with the pre­cise num­ber of vials in their pos­ses­sion. “‘Oh, no, gen­er­al, that’s not true.’ Oh, yes. It is.”
    ...

    And as we can see, Per­na does­n’t mince words when describ­ing the scope of the pos­si­ble areas of gov­ern­ment that could be over­hauled with AI. “Every­thing” could be improved via AI, asserts Per­na, who is now a Palan­tir con­sul­tant:

    ...
    Prop­er­ly deployed, AI can help blaze a new path to the shin­ing city on a hill. In 2023, the nation­al tax­pay­er advo­cate report­ed that the IRS answered only 29 per­cent of its phone calls dur­ing tax sea­son. Human-based eli­gi­bil­i­ty deci­sions for the Sup­ple­men­tal Nutri­tion Assis­tance Pro­gram, have a 44 per­cent error rate. Large-lan­guage-mod­el-pow­ered chat­bots could already be pro­vid­ing bet­ter ser­vice — at all hours, in all lan­guages, at less cost — for peo­ple who rely on the fed­er­al gov­ern­ment for vet­er­ans ben­e­fits, stu­dent loans, unem­ploy­ment, social secu­ri­ty and Medicare. That’s table stakes.

    ...

    To check my premise, I set up a Zoom call with Per­na. He was in sweats at his home in Alaba­ma, and if he missed car­ry­ing the weight of the world he did a great job hid­ing it. He con­sults a lit­tle for Palan­tir now, but most­ly he was excit­ed to talk about grand­kids, the Yan­kees and the best New York City slice joints. His mood shift­ed when I asked what gov­ern­ment could improve if it embraced AI. “Every­thing,” he snapped, before the ques­tion was ful­ly out. “I don’t under­stand how we’re not using it for organ dona­tion right now. We should be ashamed. Why do we need 80,000 new peo­ple at the IRS? We could rev­o­lu­tion­ize the bud­get process. I tell Palan­tir, why are you play­ing around with the Depart­ment of Defense? Think big­ger.”
    ...

    And note one of many the obvi­ous impli­ca­tions of task­ing Palan­tir with the cre­ation of “God views” across the gov­ern­ment: Palan­tir would need access to all of that infor­ma­tion too. Much more than it already has access to. Sure, the US gov­ern­ment could in-house the process of cre­at­ing these “God views” of data and avoid all the risks asso­ci­at­ed with shar­ing vir­tu­al­ly all of its data with a pri­vate for-prof­it com­pa­ny like Palan­tir co-found­ed by a fas­cist like Peter Thiel. But that’s not the plan. The plan is to out­source this entire “God view” agen­da to Palan­tir, for every­thing from run­ning mil­i­tary oper­a­tions in Ukraine to cre­at­ing per­son­al­ized care for Vet­er­ans. All of that data will have to be made avail­able to Palantir...or Google...or whichev­er oth­er pri­vate for-prof­it con­trac­tor that ulti­mate­ly gets select­ed. Which is a reminder that the cre­ation of all these mini-God views will neces­si­tate the grant­i­ng of super-God view access to every­thing to a pri­vate for-prof­it enti­ty:

    ...
    In meta­physics, ontol­ogy is the study of being. In soft­ware and AI, it’s come to mean the untan­gling of mess­es and the cre­ation of a func­tion­al infor­ma­tion ecosys­tem. Once Palan­tir stan­dard­izes an organization’s data and defines the rela­tion­ships between the streams, it can build an appli­ca­tion or inter­face on top of it. This com­bi­na­tion — inte­grat­ed data and a use­ful app — is what allows every­one from mid­dle man­agers to four-star gen­er­als to have an AI co-pilot, to see them­selves with the God view. “It’s the Iron Man suit for the per­son who’s using it,” says Krish­naswamy. “It’s like, they’re still going to have to make deci­sions but they feel like they’re now fly­ing around at Mach 5.”

    The most dra­mat­ic expres­sion of Palantir’s capa­bil­i­ties is in Ukraine, where the com­pa­ny merges real-time views from hun­dreds of com­mer­cial satel­lites with com­mu­ni­ca­tions tech­nol­o­gy and weapons data. All of that infor­ma­tion is then seam­less­ly dis­played on lap­tops and hand­held dash­boards for com­man­ders on the bat­tle­field. A senior U.S. mil­i­tary offi­cial told me, “The Ukrain­ian force is incred­i­bly tough, but it’s not much of a fight with­out iPads and Palan­tir.”

    ...

    The big one is per­son­al­ized care. Vet­er­ans deserve to be empow­ered with a God view of their own treat­ment, and that data can be anonymized and ana­lyzed for insights into vet­er­an-spe­cif­ic con­di­tions such as post-trau­mat­ic stress dis­or­der and trau­mat­ic brain injuries. Is there risk? There is. Is the risk worse than an aver­age of 18 vet­er­ans killing them­selves each day? I don’t think so.
    ...

    So how will Palan­tir and its rivals in Sil­i­con Val­ley get around pub­lic pos­si­ble oppo­si­tion to hir­ing Sil­i­con Val­ley to cre­ate “God views” of soci­ety? Well, we have one idea of how this might be approached: law­suits. That’s how Palan­tir over­came oppo­si­tion inside the US DoD over con­tract­ing with Palan­tir. Palan­tir won in the courts and and since been accru­ing one major gov­ern­ment con­tract after anoth­er. Might there be a “right to offer Oper­a­tion Warp Speed for every­thing” law­suit in the works?

    ...
    Thiel and Karp were law school class­mates at Stan­ford in the ear­ly ’90s. They argued plen­ty, but agreed about enough to cre­ate Palan­tir with par­tial fund­ing (less than $2 mil­lion) from In-Q-Tel, an invest­ment arm of the CIA, and a few core beliefs. The first is that the Unit­ed States is excep­tion­al, and work­ing to strength­en its posi­tion in the world ben­e­fits all human­i­ty. “I’ve lived abroad,” Karp says. “I know [Amer­i­ca] is the only coun­try that’s remote­ly as fair and mer­i­to­crat­ic as Amer­i­ca is. And I tend to be more focused on that than the obvi­ous short­com­ings.” In a speech last year, Karp, who is CEO, explained what this means for the com­pa­ny: “If you don’t think the U.S. gov­ern­ment should have the best soft­ware in the world … We respect­ful­ly ask you not to join Palan­tir. Not in like you’re an idiot, just we have this belief struc­ture.”

    The company’s sec­ond core belief springs from the chip on Karp’s shoul­der. Like gen­er­a­tions of Black and Jew­ish entre­pre­neurs before him, Karp pre­sumes his com­pa­ny isn’t going to win any deals on the golf course. So to get con­tracts from For­tune 500 com­pa­nies and gov­ern­ments Palan­tir must do things oth­er soft­ware com­pa­nies won’t, and do them so fast and cheap that the results are irrefutable.

    This approach has worked exceed­ing­ly well in the cor­po­rate world. Palantir’s mar­ket cap­i­tal­iza­tion is $52 bil­lion and its stock has climbed more than 150 per­cent in the past year, large­ly because of demand for its AI prod­ucts. But for much of its exis­tence, an open­ly patri­ot­ic com­pa­ny with soft­ware bet­ter, faster and cheap­er than its com­peti­tors was shut out of U.S. defense con­tracts. In the mid-2010s this put Palantir’s sur­vival at risk and sharp­ened Karp’s indig­na­tion to a fine point. Either his biog­ra­phy had made him para­noid or some­thing was amiss.

    In 2016, Palan­tir took the unprece­dent­ed step of suing the Pen­ta­gon to find out. The case alleged the Defense Depart­ment was in vio­la­tion of the Fed­er­al Acqui­si­tion Stream­lin­ing Act, a 1994 law that pro­hibits the gov­ern­ment from start­ing new bloat-filled projects if an off-the-shelf solu­tion is avail­able. The House Com­mit­tee on Gov­ern­ment Oper­a­tions made its intent unusu­al­ly clear: “The Fed­er­al Gov­ern­ment must stop ‘rein­vent­ing the wheel’ and learn to depend on the wide array of prod­ucts and ser­vices sold to the gen­er­al pub­lic.”

    ...

    Palantir’s vic­to­ry was resound­ing, and opened the door to what is now a more func­tion­al rela­tion­ship. Wednes­day, the Army announced that Palan­tir won a $178 mil­lion con­tract to make 10 pro­to­types for the next phase of its tac­ti­cal intel­li­gence tar­get­ing node (Titan) pro­gram. Titan is a ground sta­tion that uses sen­sor data from space, sky and land to improve long-range weapons pre­ci­sion.
    ...

    And then we get these words of cau­tion about how the scal­a­bil­i­ty of the Defense Depart­ments embrace of AI to the rest of the US fed­er­al gov­ern­ment: the DoD is capa­ble of rapid over­hauls in the name of nation­al secu­ri­ty that won’t be avail­able to oth­er agen­cies. Part of the chal­lenge in imple­ment­ing AI across the gov­ern­ment is some sort of legal/bureaucratic rev­o­lu­tion that allows for AI to be rapid­ly intro­duced:

    ...
    Palantir’s saga doesn’t prove that gov­ern­ment employ­ees are bad, mere­ly that humans can tol­er­ate lim­it­less amounts of dys­func­tion, espe­cial­ly when every­one around them is doing the same. They’re trapped in a sys­tem where all incen­tives point toward the sta­tus quo. Per­na wants Palan­tir to think big­ger, but remem­ber: The Defense Depart­ment can embrace and expe­dite things in the name of nation­al secu­ri­ty that oth­ers can­not. It’s one of the most AI-friend­ly parts of the gov­ern­ment.

    The chal­lenge then is fix­ing a mas­sive sys­tem that has become con­sti­tu­tion­al­ly resis­tant to solu­tions, par­tic­u­lar­ly ones fueled by tech­nol­o­gy such as arti­fi­cial intel­li­gence. It’s a Mobius strip that no one can seem to straight­en out. But Karp sees a direct line between Palantir’s expe­ri­ence and the per­il of the cur­rent moment. “Every time I see ordi­nary inter­ac­tions between ordi­nary cit­i­zens and the gov­ern­ment, it’s very high fric­tion for no rea­son,” he says. “And then there’s almost no out­put. For­get the dol­lars spent. Whether it’s immi­gra­tion, health records, tax­a­tion, get­ting your car to work, you’re going to have a bad expe­ri­ence, right? And that bad expe­ri­ence, makes you think, ‘Hmm, noth­ing works here. And because noth­ing works here I’m going to tear down the whole sys­tem.’”
    ...

    And that brings us to what could be seen as the core of this agen­da: get­ting Con­gress and squea­mish bureau­crats out of the way when it comes to over­haul­ing the fed­er­al gov­ern­ment. So what’s the solu­tion for mak­ing Con­gress and pub­lic ser­vants more amenable to new projects? Well, we appear to be get­ting an argu­ment about how the gov­ern­ment needs to ‘adopt the prac­tices of the soft­ware indus­try’ from fig­ures like Google CEO Eric Schmidt. Or as Schmidt put it, “I’m not going to make a more emo­tion­al argu­ment, I’m just going to tell you the fol­low­ing: Gov­ern­ment will per­form sub opti­mal­ly until it adopts the soft­ware prac­tices of the indus­try.” It’s a rather vague vision for how to fix things. On the one hand, the idea of view­ing soft­ware as con­stant­ly evolv­ing tools makes a lot of sense and it’s very pos­si­ble there needs to be some real over­hauls in how the pro­cure­ment and devel­op­ment of new soft­ware tools is con­duct­ed. But it’s not hard to see where the vision is head­ing: hand­ing con­trol of mas­sive gov­ern­ment over­hauls over to some sort of ‘gov­ern­ment CEO’ who will have the pow­er to imple­ment what­ev­er changes their AI-pow­ered ‘God views’ sug­gest might lead to greater effi­cien­cies:

    ...
    A few months before Palan­tir sued the Unit­ed States in 2016, Eric Schmidt got a call from Defense Sec­re­tary Ash­ton B. Carter. Carter was launch­ing some­thing called the Defense Inno­va­tion Board to try to get more tech think­ing into the Pen­ta­gon. He want­ed Schmidt, then the exec­u­tive chair­man of Google’s par­ent com­pa­ny Alpha­bet, to join. “I declined,” says Schmidt. “And Carter said, ‘Well, you know, do it any­way,’”

    ...

    Carter let the Defense Inno­va­tion Board exam­ine every­thing it need­ed to assess how the Pen­ta­gon devel­ops, acquires and uses tech­nol­o­gy — the 99.9 per­cent of the ice­berg that remained out of sight in the Palan­tir court case. Pret­ty quick­ly Schmidt con­clud­ed the entire fed­er­al appa­ra­tus has acci­den­tal­ly mutat­ed into software’s per­fect ene­my. “AI is fun­da­men­tal­ly soft­ware,” says Schmidt. “You can’t have AI in the gov­ern­ment or the mil­i­tary until you solve the prob­lem of soft­ware in the gov­ern­ment and mil­i­tary.”

    ...

    Schmidt described for me the nor­mal course of soft­ware devel­op­ment — pro­to­typ­ing with a small group of engi­neers, get­ting lots of user feed­back, end­less refine­ment and iter­a­tion. “Every sin­gle thing I just told you is ille­gal,” Schmidt says.

    If only this were true. We could then just make things legal and move on. In fact, Con­gress — though hard­ly blame­less — has giv­en the Defense Depart­ment count­less workarounds and spe­cial author­i­ties over the years. Most have been for­got­ten or ignored by pub­lic ser­vants who are too scared to embrace them. Take one of Schmidt’s exam­ples; you real­ly are allowed to con­duct soft­ware user sur­veys, but most staffers at the Office of Infor­ma­tion and Reg­u­la­to­ry Affairs inter­pret the legal guid­ance to mean a six-month review process is required before grant­i­ng per­mis­sion. A six-month wait for a prod­uct that nev­er stops mov­ing. That means nor­mal soft­ware prac­tices are worse than ille­gal. They’re a form of bureau­crat­ic tor­ture.

    The Defense Inno­va­tion Board chan­neled its bewil­der­ment into a mas­ter­piece: “Soft­ware is Nev­er Done: Refac­tor­ing the Acqui­si­tion Code for Com­pet­i­tive Advan­tage.” I’m not being iron­ic. It’s the most rea­son­able, styl­ish and solu­tions-based cri­tique of mod­ern gov­ern­ment I’ve ever read. The authors did the unglam­orous work of going through the infest­ed gar­den of process­es and rules and called out many of the nas­ti­est weeds. Then they made com­mon-sense rec­om­men­da­tions — treat soft­ware as a liv­ing thing that cross­es bud­get lines; do cost assess­ments that pri­or­i­tize speed, secu­ri­ty, func­tion­al­i­ty and code qual­i­ty; col­lect data from the department’s weapons sys­tems and cre­ate a secure repos­i­to­ry to eval­u­ate their effec­tive­ness — and urged Con­gress to pass them.

    They also ref­er­enced the dozen pre­vi­ous soft­ware reports com­mis­sioned by the mil­i­tary dat­ing back to 1982, all of which came to sim­i­lar con­clu­sions. The prob­lem isn’t a lack of solu­tions, it’s get­ting Con­gress to approve the polit­i­cal­ly risky ones and “the frozen mid­dle” to imple­ment them: “We ques­tion nei­ther the integri­ty nor the patri­o­tism of this group. They are sim­ply not incen­tivized to the way we believe mod­ern soft­ware should be acquired and imple­ment­ed, and the enor­mous iner­tia they rep­re­sent is a pro­found bar­ri­er to change.”

    ...

    It’s not just the rank and file. Schmidt end­ed up serv­ing four years on the Defense Inno­va­tion Board. When we were wrap­ping up our con­ver­sa­tion, he took a breath and paused for a moment. “I’m not going to make a more emo­tion­al argu­ment, I’m just going to tell you the fol­low­ing: Gov­ern­ment will per­form sub opti­mal­ly until it adopts the soft­ware prac­tices of the indus­try.” He sound­ed pret­ty emo­tion­al.

    It did not take some­one with John F. Kennedy’s charis­ma to inspire Amer­i­cans to go to the moon. The moon is big and pret­ty. Human­i­ty has been dream­ing about it for eons. Calvin Coolidge lev­els of charm would have suf­ficed.

    The chal­lenge of using AI for bet­ter gov­ern­ment is very dif­fer­ent. The excite­ment about a new thing is tem­pered by fear and con­fu­sion. To get the max­i­mum reward from AI, the coun­try must first go through an unprece­dent­ed veg­etable-eat­ing exer­cise to clean up its bureau­cra­cy. Turn­ing that into poet­ry is hard. There’s no ide­al mes­sen­ger, but an octo­ge­nar­i­an whose best speech­es are about grief and a sep­tu­a­ge­nar­i­an whose speech­es are bare­ly speech­es is per­haps not the opti­mal set of choic­es.
    ...

    And then we get to the part of this vision that has an alarm­ing degree of syn­er­gy with the GOP’s ongo­ing Sched­ule F/Project 2025 plans to purge the fed­er­al gov­ern­ment of tens of thou­sands of employ­ees, only to replace them with crony loy­al­ists: LLMs like Chat­G­PT are seen as a great replace­ment for all sorts of fed­er­al employ­ees and this could hap­pen soon:

    ...
    When soft­ware becomes a cri­sis, politi­cians call Jen­nifer Pahlka. Pahlka was deputy chief tech­nol­o­gy offi­cer in the Oba­ma admin­is­tra­tion and was cru­cial to the res­cue of healthcare.gov — the most flawed, fraught and ulti­mate­ly suc­cess­ful soft­ware project in gov­ern­ment his­to­ry. In 2020, Gavin New­som bat-sig­naled her to untan­gle California’s unem­ploy­ment insur­ance pro­gram as it buck­led under the weight of the covid-19 response. “I come to this work,” says Pahlka, “with the assump­tion that peo­ple are hav­ing a f—— ner­vous break­down.”

    Pahlka served with Schmidt on the Defense Inno­va­tion Board, which affirmed decades of her expe­ri­ence at the con­ver­gence of soft­ware and gov­ern­ment. The dys­func­tion loop begins when absurd process­es are giv­en to pub­lic ser­vants who will be judged on their com­pli­ance with absur­di­ty. If they do their jobs right, the nation pur­chas­es obso­lete over­priced soft­ware. If they make a mis­take or take a risk that defies the absur­di­ty, politi­cians hold hear­ings and jump all over them — which is far sim­pler than fix­ing the process. Each recrim­i­na­tion dri­ves more good peo­ple out of pub­lic ser­vice. Rinse, repeat.

    ...

    The truth. The rela­tion­ship between cit­i­zens and gov­ern­ment is frac­tured. It’s cru­cial to the republic’s sur­vival that we stop defend­ing the sta­tus quo. New tech­nol­o­gy can help us repair the dam­age and open the door to a lev­el of ser­vice and effi­cien­cy that will make Scan­di­na­vians seethe with envy. Almost all of this AI tech has been cre­at­ed by Amer­i­can inge­nu­ity inside Amer­i­can com­pa­nies, and the Amer­i­can peo­ple deserve its ben­e­fits.

    Next, say the thing Democ­rats don’t want to say: Not every gov­ern­ment job should be a job for life. LLMs can pro­vide bet­ter ser­vice and respon­sive­ness for many day-to-day inter­ac­tions between cit­i­zens and var­i­ous agen­cies. They’re not just cheap­er, they’re also faster, and, when trained right, less prone to error or mis­in­ter­pre­ta­tion. That means it’s pos­si­ble the fed­er­al gov­ern­ment will soon have few­er employ­ees. But AI will nev­er replace human judg­ment — about ben­e­fits, penal­ties or any­thing in between. It’s a tool to be used by Amer­i­cans to make bet­ter deci­sions for our nation­al well-being.

    That earns you the right to say the thing rea­son­able Repub­li­cans don’t want to hear: their bluff is going to be called. If they con­tin­ue to indulge the party’s idi­ot­ic fan­tasies of burn­ing the entire fed­er­al appa­ra­tus to the ground, they’ll be left hold­ing the ash­es. They need to admit that a prop­er­ly run gov­ern­ment has an impor­tant role in people’s lives, and they need to co-sign fix­ing it. With­out cross­ing their fin­gers behind their backs.

    All this is pre­am­ble to the work — method­i­cal demo­li­tion and joy­ful con­struc­tion. Pahlka says the pol­i­cy guide­lines that gov­ern the Defense Depart­ment equal 100 stacked copies of “War and Peace.” There are more than 7,000 pages of unem­ploy­ment reg­u­la­tions. Luck­i­ly, untan­gling the Unit­ed States’ hair­ball of fine print is the per­fect job for AI. Banks already use it to dedu­pli­cate obso­lete com­pli­ance rules. Pahlka is work­ing to demon­strate its fea­si­bil­i­ty inside agen­cies. The Pen­ta­gon is exper­i­ment­ing with an AI pro­gram called Gamechang­er that helps bureau­crats nav­i­gate its own bureau­cra­cy. It’s easy to mock, and we’ll still need count­less human hours of over­sight — many of them from Con­gress — to ensure the job’s done right. But it’s exact­ly the kind of hum­ble first step that deserves praise. Tur­bocharge these efforts, then start build­ing. But not every­where, at least not at first.
    ...

    And note anoth­er dis­turb­ing forms of syn­er­gy at work here: while the author acknowl­edges that there could be risks asso­ci­at­ed with this embrace of AI even if ‘guardrails’ are put in place, he argues that those risks are not great as the “dead­li­ness of the dis­ease” of the gov­ern­men­t’s cur­rent state of rel­a­tive dys­func­tion. In oth­er words, the GOP’s decades of suc­cess in break­ing the abil­i­ty of the gov­ern­ment to func­tion­al­ly oper­ate is now one of the pri­ma­ry argu­ments for hand­ing the gov­ern­ment over to an AI panop­ti­con:

    ...
    There’s anoth­er strain of skep­ti­cism that goes like this: Are you insane? AI might cre­ate all kinds of effi­cien­cy, but it’s also been known to have sys­temic bias­es that could get encod­ed into offi­cial gov­ern­ment sys­tems, lack trans­paren­cy that could under­mine pub­lic trust, make loads of fed­er­al jobs obso­lete, and be vul­ner­a­ble to data breach­es that com­pro­mise pri­va­cy and sen­si­tive infor­ma­tion. If AI were a Big Phar­ma prod­uct the ads would be 10 min­utes long.

    We can put guardrails around how the gov­ern­ment uses AI — anonymiz­ing per­son­al data as they do in the Euro­pean Union, cre­at­ing over­sight bod­ies for con­tin­u­ous mon­i­tor­ing — but I’m not naive. Some things will still go wrong. Which leaves us to weigh the risks of the cure against the dead­li­ness of the dis­ease.
    ...

    Is the risk of hand­ing the keys over to Palan­tir’s AI out­weighed by the risk of, well, not doing that? Yes, accord­ing to this col­umn. And it’s not hard to imag­ine large num­bers of peo­ple are going to share that view. So we should prob­a­bly expect a lot more calls for an AI-run gov­ern­ment. This trend isn’t going any­where.

    It all begs the ques­tion as to when we’re going to see the first AI can­di­date for pres­i­dent. That’s kind of the log­i­cal con­clu­sion of all this. Yes, that’s a joke. Or at least it should be.

    Posted by Pterrafractyl | April 10, 2024, 5:55 pm
  22. Is the AI rev­o­lu­tion the dawn of a new era for human­i­ty? Or the begin­ning of the end? The answer is obvi­ous­ly ‘it depends on how we use it.’ But when it comes to the uses of tech­nol­o­gy, ‘we’ includes a lot of peo­ple. Poten­tial­ly bil­lions of users around the world. And there’s no guar­an­tee they’ll all use it respon­si­bly.

    And that brings us to the fol­low­ing pair of sto­ries about a ‘world end­ing’ appli­ca­tion of AI that already has experts alarmed: AI-fueled pan­demics using man-made virus­es. We’ve long known syn­thet­ic biol­o­gy pos­es a major risk. But as we’re going to see, it’s A LOT riski­er when you throw AI into the mix. At least AIs with­out any safe­guards in place.

    And as the fol­low­ing Wired arti­cle describes, the risks from syn­thet­ic biol­o­gy tech­nolo­gies is already alarm­ing on its own as reflect­ed by a new set of rules issued by the Biden White House man­dat­ing that the DNA man­u­fac­tur­ing indus­try in the US screen made-to-order DNA sequences for poten­tial­ly dan­ger­ous sequences. It’s a nec­es­sary first step, but still only applied to com­pa­nies that receive fed­er­al fund­ing. In oth­er words, even with these new rules, loop­holes will con­tin­ue.

    And then we get to this tru­ly alarm­ing EuroNews opin­ion piece by Kevin Esvelt, an asso­ciate pro­fes­sor at MIT’s Media Lab, where he directs the Sculpt­ing Evo­lu­tion group, and Ben Mueller, a research Sci­en­tist at MIT and COO at Secure­Bio, an orga­ni­za­tion that works on the poli­cies and tech­nolo­gies to safe­guard against future pan­demics. Accord­ing to Esvelt and Mueller, a recent study using AIs that had their safe­guards turned off found that teams were able to get the AIs to pro­vide them instruc­tions on how to con­struct var­i­ous dan­ger­ous virus­es uses made-to-order DNA ser­vices in just a few hours. In addi­tion, the AIs actu­al­ly pro­vid­ed instruc­tions for how to evade the screen­ing process­es that DNA man­u­fac­tur­ers might have in place. Yep.

    So not only was this study a pow­er­ful exam­ple of how AIs can advise non-experts in the tech­niques of bio­log­i­cal war­fare and bioter­ror­ism, but it’s also implic­it­ly an exam­ple of how AIs with­out safe­guards can be turned into super-vil­lain tutors. After all, it’s not like we should assume the AIs that offered advice on how to avoid the DNA screen­ing meth­ods won’t have plen­ty of oth­er incred­i­bly dan­ger­ous advice to offer a poten­tial ter­ror­ist.

    Ok, first, here’s that Wired piece on the Biden admin­is­tra­tions new rules for US DNA man­u­fac­tur­ers. New rules that are absolute­ly need­ed and also not near­ly enough:

    Wired

    The US Is Crack­ing Down on Syn­thet­ic DNA

    Syn­thet­ic DNA could be used to spark a pan­dem­ic. A move by Pres­i­dent Biden aims to cre­ate new stan­dards for the safe­ty and secu­ri­ty of mail-order genet­ic mate­r­i­al.

    Emi­ly Mullin
    Sci­ence
    May 6, 2024 5:30 AM

    The White House has issued new rules aimed at com­pa­nies that man­u­fac­ture syn­thet­ic DNA after years of warn­ings that a pathogen made with mail-order genet­ic mate­r­i­al could acci­den­tal­ly or inten­tion­al­ly spark the next pan­dem­ic.

    The rules, released on April 29, are the result of an exec­u­tive order signed by Pres­i­dent Joe Biden last fall to estab­lish new stan­dards for AI safe­ty and secu­ri­ty, includ­ing AI applied to biotech­nol­o­gy.

    ...

    Syn­the­siz­ing DNA has been pos­si­ble for decades, but it’s become increas­ing­ly eas­i­er, cheap­er, and faster to do so in recent years thanks to new tech­nol­o­gy that can “print” cus­tom gene sequences. Now, dozens of com­pa­nies around the world make and ship syn­thet­ic nucle­ic acids en masse. And with AI, it’s becom­ing pos­si­ble to cre­ate entire­ly new sequences that don’t exist in nature—including those that could pose a threat to humans or oth­er liv­ing things.

    “The con­cern has been for some time that as gene syn­the­sis has got­ten bet­ter and cheap­er, and as more com­pa­nies appear and more tech­nolo­gies stream­line the syn­the­sis of nucle­ic acids, that it is pos­si­ble to de novo cre­ate organ­isms, par­tic­u­lar­ly virus­es,” says Tom Ingles­by, an epi­demi­ol­o­gist and direc­tor of the Johns Hop­kins Cen­ter for Health Secu­ri­ty.

    It’s con­ceiv­able that a bad actor could make a dan­ger­ous virus from scratch by order­ing its genet­ic build­ing blocks and assem­bling them into a whole pathogen. In 2017, Cana­di­an researchers revealed they had recon­struct­ed the extinct horse­pox virus for $100,000 using mail-order DNA, rais­ing the pos­si­bil­i­ty that the same could be done for small­pox, a dead­ly dis­ease that was erad­i­cat­ed in 1980.

    The new rules aim to pre­vent a sim­i­lar sce­nario. It asks DNA man­u­fac­tur­ers to screen pur­chase orders to flag so-called sequences of con­cern and assess cus­tomer legit­i­ma­cy. Sequences of con­cern are those that con­tribute to an organism’s tox­i­c­i­ty or abil­i­ty to cause dis­ease. For now, the rules only apply to sci­en­tists or com­pa­nies that receive fed­er­al fund­ing: They must order syn­thet­ic nucle­ic acids from providers that imple­ment these prac­tices.

    Ingles­by says it’s still a “big step for­ward” since about three-quar­ters of the US cus­tomer base for syn­thet­ic DNA are fed­er­al­ly fund­ed enti­ties. But it means that sci­en­tists or orga­ni­za­tions with pri­vate sources of fund­ing aren’t behold­en to using com­pa­nies with these screen­ing pro­ce­dures.

    Many DNA providers already fol­low screen­ing guide­lines issued by the Depart­ment of Health and Human Ser­vices in 2010. About 80 per­cent of the indus­try has joined the Inter­na­tion­al Gene Syn­the­sis Con­sor­tium, which pledges to vet orders. But these mea­sures are both vol­un­tary, and not all com­pa­nies com­ply.

    ...

    Last year, a bipar­ti­san group of leg­is­la­tors intro­duced the Secur­ing Gene Syn­the­sis Act to man­date screen­ing more broad­ly, but the bill has yet to advance.

    Emi­ly Lep­roust, CEO of Twist Bio­science, a San Fran­cis­co DNA-syn­the­sis com­pa­ny, wel­comes reg­u­la­tion. “We rec­og­nize that DNA is dual-use tech­nol­o­gy. It’s like dyna­mite, you can build tun­nels, but you can also kill peo­ple,” she says. “Col­lec­tive­ly, we have a respon­si­bil­i­ty to pro­mote the eth­i­cal use of DNA.”

    Twist has been screen­ing sequences and cus­tomers since 2016, when it first start­ed sell­ing nucle­ic acids to cus­tomers. A few years ago, the com­pa­ny hired out­side con­sul­tants to test its screen­ing process­es. The con­sul­tants set up fake cus­tomer names and sur­rep­ti­tious­ly ordered sequences of con­cern.

    Lep­roust says the com­pa­ny suc­cess­ful­ly flagged many of those orders, but in some cas­es, there was inter­nal dis­agree­ment on whether the sequences request­ed were wor­ri­some or not. The exer­cise helped Twist adopt new pro­to­cols. For instance, it used to only screen DNA sequences 200 base pairs or longer. (A base pair is a unit of two DNA let­ters that pair togeth­er.) Now, it screens ones that are at least 50 base pairs to pre­vent cus­tomers from shop­ping around for small­er sequences to assem­ble togeth­er.

    ...

    ———-

    “The US Is Crack­ing Down on Syn­thet­ic DNA” by Emi­ly Mullin; Wired; 05/06/2024

    “It’s con­ceiv­able that a bad actor could make a dan­ger­ous virus from scratch by order­ing its genet­ic build­ing blocks and assem­bling them into a whole pathogen. In 2017, Cana­di­an researchers revealed they had recon­struct­ed the extinct horse­pox virus for $100,000 using mail-order DNA, rais­ing the pos­si­bil­i­ty that the same could be done for small­pox, a dead­ly dis­ease that was erad­i­cat­ed in 1980.

    Yes, researchers recre­at­ed an extinct horse­pox virus using mail-order DNA for just $100,000. And that was 2017. It’s pre­sum­ably cheap­er by now. It’s that con­ver­gence of tech­ni­cal fea­si­bil­i­ty and afford­abil­i­ty that has experts warn­ing that the pos­si­bil­i­ties are becom­ing gen­uine­ly alarm­ing. Which is why we should be hap­py to see new­ly pro­posed rules for US com­pa­nies ask­ing DNA man­u­fac­tur­ers to screen orders for poten­tial­ly dan­ger­ous sequences of DNA, but also some­what con­cerned that these pro­posed rules only apply to com­pa­nies receiv­ing fed­er­al fund­ing. It’s progress, but maybe not enough to pre­vent a man-made cat­a­stro­phe:

    ...
    “The con­cern has been for some time that as gene syn­the­sis has got­ten bet­ter and cheap­er, and as more com­pa­nies appear and more tech­nolo­gies stream­line the syn­the­sis of nucle­ic acids, that it is pos­si­ble to de novo cre­ate organ­isms, par­tic­u­lar­ly virus­es,” says Tom Ingles­by, an epi­demi­ol­o­gist and direc­tor of the Johns Hop­kins Cen­ter for Health Secu­ri­ty.

    ...

    The new rules aim to pre­vent a sim­i­lar sce­nario. It asks DNA man­u­fac­tur­ers to screen pur­chase orders to flag so-called sequences of con­cern and assess cus­tomer legit­i­ma­cy. Sequences of con­cern are those that con­tribute to an organism’s tox­i­c­i­ty or abil­i­ty to cause dis­ease. For now, the rules only apply to sci­en­tists or com­pa­nies that receive fed­er­al fund­ing: They must order syn­thet­ic nucle­ic acids from providers that imple­ment these prac­tices.

    Ingles­by says it’s still a “big step for­ward” since about three-quar­ters of the US cus­tomer base for syn­thet­ic DNA are fed­er­al­ly fund­ed enti­ties. But it means that sci­en­tists or orga­ni­za­tions with pri­vate sources of fund­ing aren’t behold­en to using com­pa­nies with these screen­ing pro­ce­dures.

    Many DNA providers already fol­low screen­ing guide­lines issued by the Depart­ment of Health and Human Ser­vices in 2010. About 80 per­cent of the indus­try has joined the Inter­na­tion­al Gene Syn­the­sis Con­sor­tium, which pledges to vet orders. But these mea­sures are both vol­un­tary, and not all com­pa­nies com­ply.

    ...

    Last year, a bipar­ti­san group of leg­is­la­tors intro­duced the Secur­ing Gene Syn­the­sis Act to man­date screen­ing more broad­ly, but the bill has yet to advance.
    ...

    Let’s hope the var­i­ous loop­holes in these new rules for US DNA man­u­fac­tur­ers can be closed soon­er rather than lat­er. But, of course, the US isn’t the only coun­try with DNA man­u­fac­tur­ing com­pa­nies. This is a glob­al issue. And as the fol­low­ing EuroNews opin­ion piece warns us, it’s a glob­al issue dra­mat­i­cal­ly ampli­fied by AI. In a recent test using AIs that had their safe­guard turned off, not only were were teams able use gen­er­a­tive AIs to arrive meth­ods for cre­at­ing man-made pan­dem­ic in just a few hours, but the AIs even offered advice on how to avoid the kind of screen­ing rules the US just put in place:

    EuroNews
    Next

    Euroviews. Brus­sels should know AI-assist­ed bioter­ror­ism is a risk worth con­sid­er­ing

    By Kevin Esvelt, Ben­jamin Mueller
    Pub­lished on 14/12/2023 — 16:52

    The opin­ions expressed in this arti­cle are those of the author and do not rep­re­sent in any way the edi­to­r­i­al posi­tion of Euronews.

    As the EU expe­ri­enced with the AI Act, find­ing a trade-off between inno­va­tion and risks takes time. But the spe­cif­ic risks of AI-enabled bioter­ror­ism can be tack­led now, Kevin Esvelt and Ben Mueller write.

    AI reg­u­la­tion has become an area of inter-state com­pe­ti­tion. While the EU just reached a deal on the AI Act, the US has pre­vi­ous­ly released a far-reach­ing exec­u­tive order on AI, and the UK con­vened polit­i­cal and indus­try lead­ers at the AI Safe­ty Sum­mit.

    In many of these dis­cus­sions, one risk is get­ting more atten­tion: AI-assist­ed bioter­ror­ism, or the abil­i­ty of indi­vid­u­als to cause cat­a­stro­phe by using AI tools to get access to a pan­dem­ic virus.

    We recent­ly showed that this is a risk worth con­sid­er­ing. In an infor­mal exper­i­ment, we tasked indi­vid­u­als with using an open-source large-lan­guage mod­el that had its safe­guards removed to help them obtain a pathogen capa­ble of caus­ing a pan­dem­ic.

    With­in three hours, par­tic­i­pants iden­ti­fied many of the steps required to start a poten­tial­ly cat­a­stroph­ic out­break.

    Par­tic­u­lar­ly con­cern­ing was that the mod­el advised par­tic­i­pants on how to access the viral DNA — the blue­print for cre­at­ing the pathogen — while evad­ing exist­ing screen­ing meth­ods. The extent to which cur­rent mod­els aid bioter­ror­ism by sum­maris­ing infor­ma­tion that already exists online remains unclear.

    How­ev­er, cur­rent capa­bil­i­ties aside, the find­ings sug­gest that in the absence of robust safe­guards, more advanced future mod­els might pro­vide mali­cious indi­vid­u­als with stream­lined and acces­si­ble infor­ma­tion on how to access, con­struct, and release a pan­dem­ic virus.

    A tech­nol­o­gy open to mali­cious actors

    The DNA con­structs required to build a virus from scratch can be ordered online: many gene syn­the­sis providers will man­u­fac­ture a thou­sand base pair pieces of DNA for under €200 — some­thing that only a few decades ago took researchers thou­sands of hours and war­rant­ed a Nobel Prize.

    ...

    Many com­pa­nies have tak­en the impor­tant step of screen­ing orders to detect DNA from dan­ger­ous pathogens.

    But not all of them do. Lan­guage-based AI mod­els can already advise indi­vid­u­als how to iden­ti­fy and exploit such loop­holes to obtain the DNA of pan­dem­ic-capa­ble pathogens.

    Echo­ing this con­cern, the CEO of Anthrop­ic, a lead­ing AI com­pa­ny, recent­ly warned US law­mak­ers that with­in two years, next-gen­er­a­tion AI sys­tems could enable large-scale bioter­ror­ism unless appro­pri­ate guardrails are put in place.

    Yoshua Ben­gio, one of the “God­fa­thers of AI” voiced sim­i­lar con­cerns. As EU pol­i­cy­mak­ers think about appro­pri­ate reac­tions to AI’s rapid progress, one sim­ple guardrail deserves clos­er scruti­ny: a legal require­ment to screen all gene syn­the­sis orders for haz­ardous sequences.

    ...

    The case for manda­to­ry gene syn­the­sis screen­ing

    The US has already tak­en steps to secure gene syn­the­sis. The government’s recent Exec­u­tive Order on AI man­dates fed­er­al­ly fund­ed enti­ties to fol­low the government’s new bio­log­i­cal syn­the­sis screen­ing guide­lines.

    The EU itself has not yet passed reg­u­la­tions on the issue. Giv­en the grow­ing risk of pan­dem­ic-lev­el bioter­ror­ism, a man­date for gene syn­the­sis screen­ing would be a first, mean­ing­ful step.

    Providers rep­re­sent­ing over 80% of the indus­try are in favour of man­dates. Oth­ers have legit­i­mate con­cerns about the poten­tial costs of gene syn­the­sis screen­ing and wor­ry about poten­tial intel­lec­tu­al prop­er­ty con­cerns, but the immi­nent release of free, pri­va­cy-pre­serv­ing screen­ing tools should mit­i­gate this issue.

    ...

    Safe­guard­ing biology’s promise

    Progress in both biotech­nol­o­gy and arti­fi­cial intel­li­gence will dri­ve rev­o­lu­tion­ary advances in the life sci­ences and med­i­cine.

    Cus­tom gene syn­the­sis is a fun­da­men­tal enabler of these remark­able ben­e­fits. But the harm caused by SARS-CoV­‑2, a sin­gle his­tor­i­cal­ly mild pan­dem­ic virus, demon­strates that its mis­use — made more like­ly by advance­ments in gen­er­a­tive AI — could do harm on a scale that out­weighs all of these ben­e­fits.

    ...

    Kevin Esvelt is an asso­ciate pro­fes­sor at the Mass­a­chu­setts Insti­tute of Technology’s Media Lab, where he directs the Sculpt­ing Evo­lu­tion group. He co-found­ed the SecureD­NA Foun­da­tion, a Switzer­land-based non­prof­it. Ben Mueller is a Research Sci­en­tist at MIT and COO at Secure­Bio, an organ­i­sa­tion that works on the poli­cies and tech­nolo­gies to safe­guard against future pan­demics.

    ———–

    “Euroviews. Brus­sels should know AI-assist­ed bioter­ror­ism is a risk worth con­sid­er­ing” By Kevin Esvelt, Ben­jamin Mueller; EuroNews; 12/14/2023

    The DNA con­structs required to build a virus from scratch can be ordered online: many gene syn­the­sis providers will man­u­fac­ture a thou­sand base pair pieces of DNA for under €200 — some­thing that only a few decades ago took researchers thou­sands of hours and war­rant­ed a Nobel Prize.

    Thou­sands of cus­tom-made DNA base pairs can be gen­er­at­ed for under 200 euros. How much might it cost to res­ur­rect horse­pox today. Keep in mind that the now-res­ur­rect­ed horse­pox virus has a genome that’s rough­ly 212k base­pairs in length. What would it cost to res­ur­rect it again today? Prob­a­bly a lot less than $100k.

    But it’s not just the plum­met­ing costs of made-to-order DNA sequences that has experts alarmed. AI is now a risk ampli­fi­er in this. In one recent study, it took just three hours for groups to ‘Chat­G­PT-ed’ them­selves a recipe for a bio­log­i­cal cat­a­stro­phe using AIs that had their safe­guards removed. Not only did the AIs pro­vide instruc­tions for how to go about the process of access­ing viral DNA but it offered advice on how to avoid screen­ing meth­ods. It’s a chill­ing proof-of-con­cept demon­stra­tion for how AI can poten­tial­ly be used a source of knowl­edge and exper­tise by those that lack the skills on their own to pull off a bio­log­i­cal attack:

    ...
    We recent­ly showed that this is a risk worth con­sid­er­ing. In an infor­mal exper­i­ment, we tasked indi­vid­u­als with using an open-source large-lan­guage mod­el that had its safe­guards removed to help them obtain a pathogen capa­ble of caus­ing a pan­dem­ic.

    With­in three hours, par­tic­i­pants iden­ti­fied many of the steps required to start a poten­tial­ly cat­a­stroph­ic out­break.

    Par­tic­u­lar­ly con­cern­ing was that the mod­el advised par­tic­i­pants on how to access the viral DNA — the blue­print for cre­at­ing the pathogen — while evad­ing exist­ing screen­ing meth­ods. The extent to which cur­rent mod­els aid bioter­ror­ism by sum­maris­ing infor­ma­tion that already exists online remains unclear.

    How­ev­er, cur­rent capa­bil­i­ties aside, the find­ings sug­gest that in the absence of robust safe­guards, more advanced future mod­els might pro­vide mali­cious indi­vid­u­als with stream­lined and acces­si­ble infor­ma­tion on how to access, con­struct, and release a pan­dem­ic virus.

    ...

    Cus­tom gene syn­the­sis is a fun­da­men­tal enabler of these remark­able ben­e­fits. But the harm caused by SARS-CoV­‑2, a sin­gle his­tor­i­cal­ly mild pan­dem­ic virus, demon­strates that its mis­use — made more like­ly by advance­ments in gen­er­a­tive AI — could do harm on a scale that out­weighs all of these ben­e­fits.
    ...

    And, again, this was just the ter­ror­ism advice these teams were able to extract from these safe­guard-free AIs in the realm of bioter­ror­ism. What oth­er ter­ror­ism advice could they have elicit­ed had they been inclined?

    Let’s hope we nev­er have to find out. But let’s not be naive about this. We are going to find out even­tu­al­ly. It’s kind of inevitable. What are the odds safe­guard-free AIs are nev­er released to the pub­lic? Or that peo­ple nev­er fig­ure out how to ‘jail­break’ their AIs from those safe­guard? Let’s not for­get about the sto­ry of the AI behav­ior-alter­ing ‘worms’ that peo­ple have already demon­strat­ed. It’s not like we should assume exist­ing safe­guards won’t get hacked.

    And that’s all why we had bet­ter hope gov­ern­ments can come up with the kinds of rules for the syn­thet­ic biol­o­gy indus­try that can’t get hacked by a non-safe­guard­ed AI. Along with every oth­er indus­try that can poten­tial­ly be weaponized by an AI oper­at­ing in super-vil­lainy-to-order mode.

    Posted by Pterrafractyl | May 9, 2024, 5:39 pm
  23. It is increas­ing­ly under­stood that we don’t actu­al­ly under­stand how large lan­guage mod­els work. We can use them. We just can’t use them with the con­fi­dence that comes from a com­plete under­stand­ing of the phe­nom­e­na. This isn’t physics. Or math. It’s com­pu­ta­tion­al alche­my. And while we gen­er­al­ly under­stand that we don’t under­stand how these AI ‘black box­es’ are oper­at­ing, what is far less under­stood at this point is what the risks are of using increas­ing­ly pow­er­ful tools that we can’t under­stand.

    That gen­er­al chal­lenge of try­ing to pre­dict the risks of AI is what we’re going to explore in the fol­low­ing pair of MIT Tech­nol­o­gy Review arti­cles. It’s a chal­lenge that only seems to be grow­ing as the pow­er of these large lan­guage mod­els grow while researchers con­tin­ue to strug­gle to under­stand how they are oper­at­ing and accom­plish­ing some seem­ing­ly mirac­u­lous feats like spon­ta­neous­ly learn­ing a lan­guage.

    Or very non-mirac­u­lous feats like learn­ing how to deceive. And get­ting quite good at it. As we’re going to see, not only do are AIs learn­ing how to lie but it appears to be cur­rent­ly impos­si­ble to train AIs that can’t learn how to lie. Learn­ing how to deceive is just some­thing AIs seem to fig­ure out how to do on their own as part of the process of try­ing to ‘solve’ prob­lems and achieve goals. Even when they are ordered to nev­er do so.

    That was the find­ing of a recent study by Meta. An AI named Cicero was trained to play the board game Diplo­ma­cy against human play­ers. Diplo­ma­cy hap­pens to be a game with no luck at all. It’s entire­ly a game of nego­ti­a­tions and alliance build­ing. Meta’s researchers claim they trained Cicero on a “truth­ful” data set to be large­ly hon­est and help­ful, and that it would “nev­er inten­tion­al­ly back­stab” its allies in order to suc­ceed. Instead, they found that Cicero Cicero broke its deals, told out­right false­hoods, and engaged in pre­med­i­tat­ed decep­tion. Again, it was trained to NOT do any of that. But keep in mind that Diplo­ma­cy requires all of those behav­iors to win. So when giv­en the task of win­ning a strate­gic game that requires decep­tion but ordered not to deceive, the dri­ve to deceive won out. There’s a pow­er­ful les­son there.

    Meta also report­ed­ly built an AI that got so good at bluff­ing at pok­er that it was deter­mined that it should­n’t be released because it might dam­age the online pok­er com­mu­ni­ty. That sure sounds like some sort of inno­va­tion in strate­gic lying that these AIs stum­bled upon. Think about that.

    But it’s not just spon­ta­neous­ly learn­ing how to lie that we have to be wor­ried about. That’s only one of the emer­gent behav­iors AI researchers are dis­cov­er­ing. In fact, it appears that these large lan­guage mod­els may have dis­cov­ered new means of learn­ing that defy our under­stand­ing of how mod­el­ing and sta­tis­tics oper­ate.

    The observed phe­nom­e­na — where seem­ing­ly sta­ble AIs sud­den­ly achieve appar­ent epipha­nies after extend­ed train­ing ses­sions on ever larg­er train­ing sets — is some­thing that sim­ply should­n’t hap­pen based on our under­stand­ing of how ‘over­fit­ting’ works. Adding more para­me­ters to a mod­el tends to help, until it does­n’t at which point the mod­el can become ‘over­fit’ to the train­ing data and become less and less gen­er­al­iz­able for real world sit­u­a­tions. And while these large lan­guage mod­els do indeed exhib­it signs of over­fit­ting as train­ing mod­els grow larg­er and larg­er, some­thing seems to hap­pen after the mod­els keep grow­ing and/or are allowed to train for extend­ed peri­ods of time. Some­how, these large lan­guage mod­els are tak­ing these mas­sive train­ing sets and ‘over­com­ing the over­fit­ting’ to arrive at new abil­i­ties they did­n’t pre­vi­ous­ly pos­sess. Researchers don’t under­stand why, in part because it seems like it should­n’t be pos­si­ble. But it’s hap­pen­ing. And as a result, these AI researchers are warn­ing that there is a degree of unpre­dictabil­i­ty in the behav­ior of AIs that we can­not con­trol, in part because we don’t under­stand what’s hap­pen­ing. Sta­ble pre­dictable AIs don’t exist. At last not yet.

    Also keep in mind that the sit­u­a­tion being described by all these AI researchers in the fol­low­ing pair of arti­cles is a sit­u­a­tion where increas­ing­ly com­pli­cat­ed AIs are being built large­ly through tri­al and error meth­ods. Not through a deep­er under­stand­ing of how these sys­tems work. The alche­my anal­o­gy real­ly is an appro­pri­ate fit. And that’s the kind of sit­u­a­tion where we should expect the AI field to cre­ate ever more sophis­ti­cat­ed AIs that, in turn, are increas­ing­ly opaque. In oth­er words, we should­n’t nec­es­sar­i­ly assume the AI research field is going to even­tu­al­ly come up with answers to these mys­ter­ies. Some­day the ‘sin­gu­lar­i­ty’ sce­nario of AIs design­ing ever more sophis­ti­cat­ed AIs will be upon us and we should­n’t nec­es­sar­i­ly assume humans will ever get a real intel­lec­tu­al han­dle on what’s going on beneath the hood with these sys­tems. That’s part of con­text of the warn­ings con­tained in these arti­cle: these aren’t nec­es­sar­i­ly ear­ly chal­lenges that human­i­ty needs to over­come before it’s too late. An inabil­i­ty to ever tru­ly under­stand how these sys­tems are oper­at­ing might just be inher­ent chal­lenge of this tech­nol­o­gy. Which might mean we had bet­ter get used to the idea that AIs can’t ever be ful­ly trust­ed. So in that sense, they’re kind of like a humans...except appar­ent­ly even bet­ter at lying and oper­at­ing accord­ing to impuls­es we can’t ever real­ly under­stand:

    MIT Tech­nol­o­gy Review

    AI sys­tems are get­ting bet­ter at trick­ing us

    But what we per­ceive as decep­tion is AI mind­less­ly achiev­ing the goals we’ve set for it.

    By Rhi­an­non Williams
    May 10, 2024

    A wave of AI sys­tems have “deceived” humans in ways they haven’t been explic­it­ly trained to do, by offer­ing up untrue expla­na­tions for their behav­ior or con­ceal­ing the truth from human users and mis­lead­ing them to achieve a strate­gic end.

    This issue high­lights how dif­fi­cult arti­fi­cial intel­li­gence is to con­trol and the unpre­dictable ways in which these sys­tems work, accord­ing to a review paper pub­lished in the jour­nal Pat­terns today that sum­ma­rizes pre­vi­ous research.

    ...

    One area where AI sys­tems have learned to become decep­tive is with­in the con­text of games that they’ve been trained to win—specifically if those games involve hav­ing to act strate­gi­cal­ly.

    In Novem­ber 2022, Meta announced it had cre­at­ed Cicero, an AI capa­ble of beat­ing humans at an online ver­sion of Diplo­ma­cy, a pop­u­lar mil­i­tary strat­e­gy game in which play­ers nego­ti­ate alliances to vie for con­trol of Europe.

    Meta’s researchers said they’d trained Cicero on a “truth­ful” sub­set of its data set to be large­ly hon­est and help­ful, and that it would “nev­er inten­tion­al­ly back­stab” its allies in order to suc­ceed. But the new paper’s authors claim the oppo­site was true: Cicero broke its deals, told out­right false­hoods, and engaged in pre­med­i­tat­ed decep­tion. Although the com­pa­ny did try to train Cicero to behave hon­est­ly, its fail­ure to achieve that shows how AI sys­tems can still unex­pect­ed­ly learn to deceive, the authors say.

    Meta nei­ther con­firmed nor denied the researchers’ claims that Cicero dis­played deceit­ful behav­ior, but a spokesper­son said that it was pure­ly a research project and the mod­el was built sole­ly to play Diplo­ma­cy. “We released arti­facts from this project under a non­com­mer­cial license in line with our long-stand­ing com­mit­ment to open sci­ence,” they say. “Meta reg­u­lar­ly shares the results of our research to val­i­date them and enable oth­ers to build respon­si­bly off of our advances. We have no plans to use this research or its learn­ings in our prod­ucts.”

    But it’s not the only game where an AI has “deceived” human play­ers to win.

    It could be a step toward build­ing AIs that can han­dle com­plex prob­lems requir­ing com­pro­mise.

    AlphaS­tar, an AI devel­oped by Deep­Mind to play the video game Star­Craft II, became so adept at mak­ing moves aimed at deceiv­ing oppo­nents (known as feint­ing) that it defeat­ed 99.8% of human play­ers. Else­where, anoth­er Meta sys­tem called Pluribus learned to bluff dur­ing pok­er games so suc­cess­ful­ly that the researchers decid­ed against releas­ing its code for fear it could wreck the online pok­er com­mu­ni­ty.

    Beyond games, the researchers list oth­er exam­ples of decep­tive AI behav­ior. GPT‑4, OpenAI’s lat­est large lan­guage mod­el, came up with lies dur­ing a test in which it was prompt­ed to per­suade a human to solve a CAPTCHA for it. The sys­tem also dab­bled in insid­er trad­ing dur­ing a sim­u­lat­ed exer­cise in which it was told to assume the iden­ti­ty of a pres­sur­ized stock trad­er, despite nev­er being specif­i­cal­ly instruct­ed to do so.

    The fact that an AI mod­el has the poten­tial to behave in a decep­tive man­ner with­out any direc­tion to do so may seem con­cern­ing. But it most­ly aris­es from the black box” prob­lem that char­ac­ter­izes state-of-the-art machine-learn­ing mod­els: it is impos­si­ble to say exact­ly how or why they pro­duce the results they do—or whether they’ll always exhib­it that behav­ior going for­ward, says Peter S. Park, a post­doc­tor­al fel­low study­ing AI exis­ten­tial safe­ty at MIT, who worked on the project.

    ...

    Our ten­den­cy to anthro­po­mor­phize AI mod­els col­ors the way we test these sys­tems and what we think about their capa­bil­i­ties. After all, pass­ing tests designed to mea­sure human cre­ativ­i­ty doesn’t mean AI mod­els are actu­al­ly being cre­ative. It is cru­cial that reg­u­la­tors and AI com­pa­nies care­ful­ly weigh the technology’s poten­tial to cause harm against its poten­tial ben­e­fits for soci­ety and make clear dis­tinc­tions between what the mod­els can and can’t do, says Har­ry Law, an AI researcher at the Uni­ver­si­ty of Cam­bridge, who did not work on the research.“These are real­ly tough ques­tions,” he says.

    Fun­da­men­tal­ly, it’s cur­rent­ly impos­si­ble to train an AI mod­el that’s inca­pable of decep­tion in all pos­si­ble sit­u­a­tions, he says. Also, the poten­tial for deceit­ful behav­ior is one of many problems—alongside the propen­si­ty to ampli­fy bias and misinformation—that need to be addressed before AI mod­els should be trust­ed with real-world tasks.

    ...

    ———–
    “AI sys­tems are get­ting bet­ter at trick­ing us” By Rhi­an­non Williams; MIT Tech­nol­o­gy Review; 05/10/2024

    “This issue high­lights how dif­fi­cult arti­fi­cial intel­li­gence is to con­trol and the unpre­dictable ways in which these sys­tems work, accord­ing to a review paper pub­lished in the jour­nal Pat­terns today that sum­ma­rizes pre­vi­ous research.”

    It’s not just an issue with AI decep­tion. That decep­tion is reflect­ing a lack of con­trol over these sys­tems. We don’t know how to con­trol the AIs being built, in large part because we don’t under­stand how they oper­ate. And there­fore can’t con­fi­dent­ly pre­dict how they’ll behave. These aren’t just chal­lenges. They are fun­da­men­tal obsta­cles in the safe usage of this tech­nol­o­gy.

    And note how it’s not like these AIs are being taught to deceive and then unsuc­cess­ful­ly ordered not to deceive. Instead, in the case of Meta’s “Cicero” AI, it was trained to be large­ly hon­est and help­ful and that it would “nev­er inten­tion­al­ly back­stab” its allies while play­ing the strat­e­gy game Diplo­ma­cy, but the AIs engaged in the oppo­site behav­ior. Keep in mind that Diplo­ma­cy is a board game that involves zero luck. It is ALL down to nego­ti­a­tions between the play­ers and the alliances they form. So it’s a game that implic­it­ly involves back­stab­bing in order to win. In that sense, Meta sort of set Cicero up to learn how to back­stab allies by attempt­ing to train an AI to be always hon­est while play­ing Diplo­ma­cy. But the AI clear­ly had no prob­lem over­com­ing that restric­tion when doing so was nec­es­sary to ‘win’:

    ...
    One area where AI sys­tems have learned to become decep­tive is with­in the con­text of games that they’ve been trained to win—specifically if those games involve hav­ing to act strate­gi­cal­ly.

    In Novem­ber 2022, Meta announced it had cre­at­ed Cicero, an AI capa­ble of beat­ing humans at an online ver­sion of Diplo­ma­cy, a pop­u­lar mil­i­tary strat­e­gy game in which play­ers nego­ti­ate alliances to vie for con­trol of Europe.

    Meta’s researchers said they’d trained Cicero on a “truth­ful” sub­set of its data set to be large­ly hon­est and help­ful, and that it would “nev­er inten­tion­al­ly back­stab” its allies in order to suc­ceed. But the new paper’s authors claim the oppo­site was true: Cicero broke its deals, told out­right false­hoods, and engaged in pre­med­i­tat­ed decep­tion. Although the com­pa­ny did try to train Cicero to behave hon­est­ly, its fail­ure to achieve that shows how AI sys­tems can still unex­pect­ed­ly learn to deceive, the authors say.

    Meta nei­ther con­firmed nor denied the researchers’ claims that Cicero dis­played deceit­ful behav­ior, but a spokesper­son said that it was pure­ly a research project and the mod­el was built sole­ly to play Diplo­ma­cy. “We released arti­facts from this project under a non­com­mer­cial license in line with our long-stand­ing com­mit­ment to open sci­ence,” they say. “Meta reg­u­lar­ly shares the results of our research to val­i­date them and enable oth­ers to build respon­si­bly off of our advances. We have no plans to use this research or its learn­ings in our prod­ucts.”
    ...

    And then there’s the AI that appar­ent­ly got so good at bluff­ing at pok­er that it was feared it could ruin the online pok­er com­mu­ni­ty if released. That does­n’t sound like just a capac­i­ty to deceive. It sounds like a supe­ri­or abil­i­ty to deceive:

    ...
    But it’s not the only game where an AI has “deceived” human play­ers to win.

    It could be a step toward build­ing AIs that can han­dle com­plex prob­lems requir­ing com­pro­mise.

    AlphaS­tar, an AI devel­oped by Deep­Mind to play the video game Star­Craft II, became so adept at mak­ing moves aimed at deceiv­ing oppo­nents (known as feint­ing) that it defeat­ed 99.8% of human play­ers. Else­where, anoth­er Meta sys­tem called Pluribus learned to bluff dur­ing pok­er games so suc­cess­ful­ly that the researchers decid­ed against releas­ing its code for fear it could wreck the online pok­er com­mu­ni­ty.
    ...

    But it’s not like these AIs are just spon­ta­neous­ly learn­ing how to deceive. They’re seem­ing­ly com­ing up with crim­i­nal ideas on their own, like insid­er trad­ing, when doing so will help them accom­plish the assigned task. Decep­tion and crime is just anoth­er tool to be exploit­ed on the path to a solu­tion:

    ...
    Beyond games, the researchers list oth­er exam­ples of decep­tive AI behav­ior. GPT‑4, OpenAI’s lat­est large lan­guage mod­el, came up with lies dur­ing a test in which it was prompt­ed to per­suade a human to solve a CAPTCHA for it. The sys­tem also dab­bled in insid­er trad­ing dur­ing a sim­u­lat­ed exer­cise in which it was told to assume the iden­ti­ty of a pres­sur­ized stock trad­er, despite nev­er being specif­i­cal­ly instruct­ed to do so.
    ...

    And as experts warn, one of the impli­ca­tions of this behav­ior is that it’s cur­rent­ly impos­si­ble to not only explain the behav­ior of these AIs but, more impor­tant­ly, pre­dict that behav­ior. And that inabil­i­ty to pre­dict is, in part, root­ed in the fact that it is cur­rent­ly impos­si­ble to train AIs inca­pable of decep­tion:

    ...
    The fact that an AI mod­el has the poten­tial to behave in a decep­tive man­ner with­out any direc­tion to do so may seem con­cern­ing. But it most­ly aris­es from the black box” prob­lem that char­ac­ter­izes state-of-the-art machine-learn­ing mod­els: it is impos­si­ble to say exact­ly how or why they pro­duce the results they do—or whether they’ll always exhib­it that behav­ior going for­ward, says Peter S. Park, a post­doc­tor­al fel­low study­ing AI exis­ten­tial safe­ty at MIT, who worked on the project.

    ...

    Our ten­den­cy to anthro­po­mor­phize AI mod­els col­ors the way we test these sys­tems and what we think about their capa­bil­i­ties. After all, pass­ing tests designed to mea­sure human cre­ativ­i­ty doesn’t mean AI mod­els are actu­al­ly being cre­ative. It is cru­cial that reg­u­la­tors and AI com­pa­nies care­ful­ly weigh the technology’s poten­tial to cause harm against its poten­tial ben­e­fits for soci­ety and make clear dis­tinc­tions between what the mod­els can and can’t do, says Har­ry Law, an AI researcher at the Uni­ver­si­ty of Cam­bridge, who did not work on the research.“These are real­ly tough ques­tions,” he says.

    Fun­da­men­tal­ly, it’s cur­rent­ly impos­si­ble to train an AI mod­el that’s inca­pable of decep­tion in all pos­si­ble sit­u­a­tions, he says. Also, the poten­tial for deceit­ful behav­ior is one of many problems—alongside the propen­si­ty to ampli­fy bias and misinformation—that need to be addressed before AI mod­els should be trust­ed with real-world tasks.
    ...

    But as the fol­low­ing MIT Tech­nol­o­gy Review arti­cle from back in March describes, the ‘black box’ nature of chal­lenge of under­stand­ing how these large lan­guage mod­els oper­ate isn’t lim­it­ed to ques­tions of whether or not they can be trust­ed to tell us the truth. As AI researchers are warn­ing, they are dis­cov­er­ing a new form of unpre­dictabil­i­ty in the behav­ior of these mod­els: “grokking”. That’s the term giv­en to the phe­nom­e­na observed where seem­ing­ly sta­ble mod­els sud­den­ly demon­strate new abil­i­ties that seem­ing­ly came out of nowhere

    The phe­nom­e­na isn’t just stump­ing researchers in terms of how exact­ly it’s hap­pen­ing. It’s also appar­ent­ly defy­ing our under­stand­ing of sta­tis­tics and mod­el­ing. Specif­i­cal­ly, our under­stand­ing of the risks of over­fit­ting, or the phe­nom­e­na of adding so many dimen­sions to a mod­el’s train­ing data that the mod­el ends up get­ting opti­mized to the train­ing data in unhelp­ful ways that ruin the mod­el’s per­for­mance when applied to more gen­er­al real world data. The clas­sic mod­el­ing phe­nom­e­na of a reduced mod­el per­for­mance as mod­els grow large and larg­er is indeed wit­nessed with these large lan­guage mod­els. How­ev­er, if the mod­els grow even larg­er, the large lan­guage mod­els seem to ‘push past’ the issue of over­fit­ting and actu­al­ly achieve supe­ri­or per­for­mance com­pared to the small­er ‘clean­er’ mod­els that clas­si­cal sta­tis­tics tells us to yield bet­ter results. And there’s cur­rent­ly no expla­na­tion for this. So when it comes to the fun­da­men­tal chal­lenge of design­ing AIs we can trust, not only do we have to deal with the chal­lenge of build­ing AIs that won’t deceive us, but we have a more gen­er­al prob­lem of design­ing AIs that won’t sud­den­ly learn new, poten­tial­ly unde­sir­able, behav­iors on their own, long after we’ve con­clud­ed they are sta­ble:

    MIT Tech­nol­o­gy Review

    Large lan­guage mod­els can do jaw-drop­ping things. But nobody knows exact­ly why.

    And that’s a prob­lem. Fig­ur­ing it out is one of the biggest sci­en­tif­ic puz­zles of our time and a cru­cial step towards con­trol­ling more pow­er­ful future mod­els.

    By Will Dou­glas Heav­en
    March 4, 2024

    Two years ago, Yuri Bur­da and Har­ri Edwards, researchers at the San Francisco–based firm Ope­nAI, were try­ing to find out what it would take to get a lan­guage mod­el to do basic arith­metic. They want­ed to know how many exam­ples of adding up two num­bers the mod­el need­ed to see before it was able to add up any two num­bers they gave it. At first, things didn’t go too well. The mod­els mem­o­rized the sums they saw but failed to solve new ones.

    By acci­dent, Bur­da and Edwards left some of their exper­i­ments run­ning far longer than they meant to—days rather than hours. The mod­els were shown the exam­ple sums over and over again, way past the point when the researchers would oth­er­wise have called it quits. But when the pair at last came back, they were sur­prised to find that the exper­i­ments had worked. They’d trained a lan­guage mod­el to add two num­bers—it had just tak­en a lot more time than any­body thought it should.

    Curi­ous about what was going on, Bur­da and Edwards teamed up with col­leagues to study the phe­nom­e­non. They found that in cer­tain cas­es, mod­els could seem­ing­ly fail to learn a task and then all of a sud­den just get it, as if a light­bulb had switched on. This wasn’t how deep learn­ing was sup­posed to work. They called the behav­ior grokking.

    “It’s real­ly inter­est­ing,” says Hat­tie Zhou, an AI researcher at the Uni­ver­si­ty of Mon­tre­al and Apple Machine Learn­ing Research, who wasn’t involved in the work. “Can we ever be con­fi­dent that mod­els have stopped learn­ing? Because maybe we just haven’t trained for long enough.”

    The weird behav­ior has cap­tured the imag­i­na­tion of the wider research com­mu­ni­ty. “Lots of peo­ple have opin­ions,” says Lau­ro Lan­gosco at the Uni­ver­si­ty of Cam­bridge, UK. “But I don’t think there’s a con­sen­sus about what exact­ly is going on.”

    Grokking is just one of sev­er­al odd phe­nom­e­na that have AI researchers scratch­ing their heads. The largest mod­els, and large lan­guage mod­els in par­tic­u­lar, seem to behave in ways text­book math says they shouldn’t. This high­lights a remark­able fact about deep learn­ing, the fun­da­men­tal tech­nol­o­gy behind today’s AI boom: for all its run­away suc­cess, nobody knows exact­ly how—or why—it works.

    “Obvi­ous­ly, we’re not com­plete­ly igno­rant,” says Mikhail Belkin, a com­put­er sci­en­tist at the Uni­ver­si­ty of Cal­i­for­nia, San Diego. “But our the­o­ret­i­cal analy­sis is so far off what these mod­els can do. Like, why can they learn lan­guage? I think this is very mys­te­ri­ous.”

    ...

    “These are excit­ing times,” says Boaz Barak, a com­put­er sci­en­tist at Har­vard Uni­ver­si­ty who is on sec­ond­ment to OpenAI’s super­align­ment team for a year. “Many peo­ple in the field often com­pare it to physics at the begin­ning of the 20th cen­tu­ry. We have a lot of exper­i­men­tal results that we don’t com­plete­ly under­stand, and often when you do an exper­i­ment it sur­pris­es you.”

    Old code, new tricks

    Most of the sur­pris­es con­cern the way mod­els can learn to do things that they have not been shown how to do. Known as gen­er­al­iza­tion, this is one of the most fun­da­men­tal ideas in machine learning—and its great­est puz­zle. Mod­els learn to do a task—spot faces, trans­late sen­tences, avoid pedestrians—by train­ing with a spe­cif­ic set of exam­ples. Yet they can gen­er­al­ize, learn­ing to do that task with exam­ples they have not seen before. Some­how, mod­els do not just mem­o­rize pat­terns they have seen but come up with rules that let them apply those pat­terns to new cas­es. And some­times, as with grokking, gen­er­al­iza­tion hap­pens when we don’t expect it to.

    Large lan­guage mod­els in par­tic­u­lar, such as OpenAI’s GPT‑4 and Google DeepMind’s Gem­i­ni, have an aston­ish­ing abil­i­ty to gen­er­al­ize. “The mag­ic is not that the mod­el can learn math prob­lems in Eng­lish and then gen­er­al­ize to new math prob­lems in Eng­lish,” says Barak, “but that the mod­el can learn math prob­lems in Eng­lish, then see some French lit­er­a­ture, and from that gen­er­al­ize to solv­ing math prob­lems in French. That’s some­thing beyond what sta­tis­tics can tell you about.”

    When Zhou start­ed study­ing AI a few years ago, she was struck by the way her teach­ers focused on the how but not the why. “It was like, here is how you train these mod­els and then here’s the result,” she says. “But it wasn’t clear why this process leads to mod­els that are capa­ble of doing these amaz­ing things.” She want­ed to know more, but she was told there weren’t good answers: “My assump­tion was that sci­en­tists know what they’re doing. Like, they’d get the the­o­ries and then they’d build the mod­els. That wasn’t the case at all.”

    The rapid advances in deep learn­ing over the last 10-plus years came more from tri­al and error than from under­stand­ing. Researchers copied what worked for oth­ers and tacked on inno­va­tions of their own. There are now many dif­fer­ent ingre­di­ents that can be added to mod­els and a grow­ing cook­book filled with recipes for using them. “Peo­ple try this thing, that thing, all these tricks,” says Belkin. “Some are impor­tant. Some are prob­a­bly not.”

    “It works, which is amaz­ing. Our minds are blown by how pow­er­ful these things are,” he says. And yet for all their suc­cess, the recipes are more alche­my than chem­istry: “We fig­ured out cer­tain incan­ta­tions at mid­night after mix­ing up some ingre­di­ents,” he saysl.

    Over­fit­ting

    The prob­lem is that AI in the era of large lan­guage mod­els appears to defy text­book sta­tis­tics. The most pow­er­ful mod­els today are vast, with up to a tril­lion para­me­ters (the val­ues in a mod­el that get adjust­ed dur­ing train­ing). But sta­tis­tics says that as mod­els get big­ger, they should first improve in per­for­mance but then get worse. This is because of some­thing called over­fit­ting.

    When a mod­el gets trained on a data set, it tries to fit that data to a pat­tern. Pic­ture a bunch of data points plot­ted on a chart. A pat­tern that fits the data can be rep­re­sent­ed on that chart as a line run­ning through the points. The process of train­ing a mod­el can be thought of as get­ting it to find a line that fits the train­ing data (the dots already on the chart) but also fits new data (new dots).

    A straight line is one pat­tern, but it prob­a­bly won’t be too accu­rate, miss­ing some of the dots. A wig­gly line that con­nects every dot will get full marks on the train­ing data, but won’t gen­er­al­ize. When that hap­pens, a mod­el is said to over­fit its data.

    Accord­ing to clas­si­cal sta­tis­tics, the big­ger a mod­el gets, the more prone it is to over­fit­ting. That’s because with more para­me­ters to play with, it’s eas­i­er for a mod­el to hit on wig­gly lines that con­nect every dot. This sug­gests there’s a sweet spot between under- and over­fit­ting that a mod­el must find if it is to gen­er­al­ize. And yet that’s not what we see with big mod­els. The best-known exam­ple of this is a phe­nom­e­non known as dou­ble descent.

    The per­for­mance of a mod­el is often rep­re­sent­ed in terms of the num­ber of errors it makes: as per­for­mance goes up, error rate goes down (or descends). For decades, it was believed that error rate went down and then up as mod­els got big­ger: pic­ture a U‑shaped curve with the sweet spot for gen­er­al­iza­tion at the low­est point. But in 2018, Belkin and his col­leagues found that when cer­tain mod­els got big­ger, their error rate went down, then up—and then down again (a dou­ble descent, or W‑shaped curve). In oth­er words, large mod­els would some­how over­run that sweet spot and push through the over­fit­ting prob­lem, get­ting even bet­ter as they got big­ger.

    A year lat­er, Barak coau­thored a paper show­ing that the dou­ble-descent phe­nom­e­non was more com­mon than many thought. It hap­pens not just when mod­els get big­ger but also in mod­els with large amounts of train­ing data or mod­els that are trained for longer. This behav­ior, dubbed benign over­fit­ting, is still not ful­ly under­stood. It rais­es basic ques­tions about how mod­els should be trained to get the most out of them.

    Researchers have sketched out ver­sions of what they think is going on. Belkin believes there’s a kind of Occam’s razor effect in play: the sim­plest pat­tern that fits the data—the smoothest curve between the dots—is often the one that gen­er­al­izes best. The rea­son big­ger mod­els keep improv­ing longer than it seems they should could be that big­ger mod­els are more like­ly to hit upon that just-so curve than small­er ones: more para­me­ters means more pos­si­ble curves to try out after ditch­ing the wig­gli­est.

    ...

    For Belkin, large lan­guage mod­els are a whole new mys­tery. These mod­els are based on trans­form­ers, a type of neur­al net­work that is good at pro­cess­ing sequences of data, like words in sen­tences.

    There’s a lot of com­plex­i­ty inside trans­form­ers, says Belkin. But he thinks at heart they do more or less the same thing as a much bet­ter under­stood sta­tis­ti­cal con­struct called a Markov chain, which pre­dicts the next item in a sequence based on what’s come before. But that isn’t enough to explain every­thing that large lan­guage mod­els can do. “This is some­thing that, until recent­ly, we thought should not work,” says Belkin. “That means that some­thing was fun­da­men­tal­ly miss­ing. It iden­ti­fies a gap in our under­stand­ing of the world.”

    Belkin goes fur­ther. He thinks there could be a hid­den math­e­mat­i­cal pat­tern in lan­guage that large lan­guage mod­els some­how come to exploit: “Pure spec­u­la­tion but why not?”

    “The fact that these things mod­el lan­guage is prob­a­bly one of the biggest dis­cov­er­ies in his­to­ry,” he says. “That you can learn lan­guage by just pre­dict­ing the next word with a Markov chain—that’s just shock­ing to me.”

    Start small

    Researchers are try­ing to fig­ure it out piece by piece. Because large mod­els are too com­plex to study them­selves, Belkin, Barak, Zhou, and oth­ers exper­i­ment instead on small­er (and old­er) vari­eties of sta­tis­ti­cal mod­el that are bet­ter under­stood. Train­ing these prox­ies under dif­fer­ent con­di­tions and on var­i­ous kinds of data and observ­ing what hap­pens can give insight into what’s going on. This helps get new the­o­ries off the ground, but it is not always clear if those the­o­ries will hold for larg­er mod­els too. After all, it is in the com­plex­i­ty of large mod­els that many of the weird behav­iors reside.

    Is a the­o­ry of deep learn­ing com­ing? David Hsu, a com­put­er sci­en­tist at Colum­bia Uni­ver­si­ty who was one of Belkin’s coau­thors on the dou­ble-descent paper, doesn’t expect all the answers any­time soon. “We have bet­ter intu­ition now,” he says. “But real­ly explain­ing every­thing about why neur­al net­works have this kind of unex­pect­ed behav­ior? We’re still far from doing that.”

    In 2016, Chiyuan Zhang at MIT and col­leagues at Google Brain pub­lished an influ­en­tial paper titled “Under­stand­ing Deep Learn­ing Requires Rethink­ing Gen­er­al­iza­tion.” In 2021, five years lat­er, they repub­lished the paper, call­ing it “Under­stand­ing Deep Learn­ing (Still) Requires Rethink­ing Gen­er­al­iza­tion.” What about in 2024? “Kind of yes and no,” says Zhang. “There has been a lot of progress late­ly, though prob­a­bly many more ques­tions arise than get resolved.”

    Mean­while, researchers con­tin­ue to wres­tle even with the basic obser­va­tions. In Decem­ber, Lan­gosco and his col­leagues pre­sent­ed a paper at NeurIPS, a top AI con­fer­ence, in which they claimed that grokking and dou­ble descent are in fact aspects of the same phe­nom­e­non. “You eye­ball them and they look kind of sim­i­lar,” says Lan­gosco. He believes that an expla­na­tion of what’s going on should account for both.

    At the same con­fer­ence, Ali­cia Curth, who stud­ies sta­tis­tics at the Uni­ver­si­ty of Cam­bridge, and her col­leagues argued that dou­ble descent is in fact an illu­sion. “It didn’t sit very well with me that mod­ern machine learn­ing is some kind of mag­ic that defies all the laws that we’ve estab­lished so far,” says Curth. Her team argued that the dou­ble-descent phenomenon—where mod­els appear to per­form bet­ter, then worse, and then bet­ter again as they get bigger—arises because of the way the com­plex­i­ty of the mod­els was mea­sured.

    Belkin and his col­leagues used mod­el size—the num­ber of parameters—as a mea­sure of com­plex­i­ty. But Curth and her col­leagues found that the num­ber of para­me­ters might not be a good stand-in for com­plex­i­ty because adding para­me­ters some­times makes a mod­el more com­plex and some­times makes it less so. It depends what the val­ues are, how they get used dur­ing train­ing, and how they inter­act with others—much of which stays hid­den inside the mod­el. “Our take­away was that not all mod­el para­me­ters are cre­at­ed equal,” says Curth.

    In short, if you use a dif­fer­ent mea­sure for com­plex­i­ty, large mod­els might con­form to clas­si­cal sta­tis­tics just fine. That’s not to say there isn’t a lot we don’t under­stand about what hap­pens when mod­els get big­ger, says Curth. But we already have all the math we need to explain it.

    A great mys­tery of our time

    It’s true that such debates can get into the weeds. Why does it mat­ter whether AI mod­els are under­pinned by clas­si­cal sta­tis­tics or not?

    One answer is that bet­ter the­o­ret­i­cal under­stand­ing would help build even bet­ter AI or make it more effi­cient. At the moment, progress has been fast but unpre­dictable. Many things that OpenAI’s GPT‑4 can do came as a sur­prise even to the peo­ple who made it. Researchers are still argu­ing over what it can and can­not achieve. “With­out some sort of fun­da­men­tal the­o­ry, it’s very hard to have any idea what we can expect from these things,” says Belkin.

    ...

    This isn’t only about man­ag­ing progress—it’s about antic­i­pat­ing risk, too. Many of the researchers work­ing on the the­o­ry behind deep learn­ing are moti­vat­ed by safe­ty con­cerns for future mod­els. “We don’t know what capa­bil­i­ties GPT‑5 will have until we train it and test it,” says Lan­gosco. “It might be a medi­um-size prob­lem right now, but it will become a real­ly big prob­lem in the future as mod­els become more pow­er­ful.”

    Barak works on OpenAI’s super­align­ment team, which was set up by the firm’s chief sci­en­tist, Ilya Sutskev­er, to fig­ure out how to stop a hypo­thet­i­cal super­in­tel­li­gence from going rogue. “I’m very inter­est­ed in get­ting guar­an­tees,” he says. “If you can do amaz­ing things but you can’t real­ly con­trol it, then it’s not so amaz­ing. What good is a car that can dri­ve 300 miles per hour if it has a shaky steer­ing wheel?”

    But beneath all that there’s also a grand sci­en­tif­ic chal­lenge. “Intel­li­gence is def­i­nite­ly up there as one of the great mys­ter­ies of our time,” says Barak.

    ...

    ———-

    “Large lan­guage mod­els can do jaw-drop­ping things. But nobody knows exact­ly why.” By Will Dou­glas Heav­en; MIT Tech­nol­o­gy Review; 03/04/2024

    “Barak works on OpenAI’s super­align­ment team, which was set up by the firm’s chief sci­en­tist, Ilya Sutskev­er, to fig­ure out how to stop a hypo­thet­i­cal super­in­tel­li­gence from going rogue. “I’m very inter­est­ed in get­ting guar­an­tees,” he says. “If you can do amaz­ing things but you can’t real­ly con­trol it, then it’s not so amaz­ing. What good is a car that can dri­ve 300 miles per hour if it has a shaky steer­ing wheel?”

    Yeah, it would be nice if there was a way to guar­an­tee non-rogue behav­ior by these AIs. But those guar­an­tees don’t appear to exist. At least not yet. For fas­ci­nat­ing rea­sons: these large lan­guage mod­els appear to have some sort of abil­i­ty to expe­ri­ence sur­pris­ing epipha­nies when left to run for extend­ed peri­ods of time. How long do they need to run before these epipha­nies are expe­ri­enced? We have no idea. We just know it hap­pens for rea­sons yet to be explained:

    ...
    By acci­dent, Bur­da and Edwards left some of their exper­i­ments run­ning far longer than they meant to—days rather than hours. The mod­els were shown the exam­ple sums over and over again, way past the point when the researchers would oth­er­wise have called it quits. But when the pair at last came back, they were sur­prised to find that the exper­i­ments had worked. They’d trained a lan­guage mod­el to add two num­bers—it had just tak­en a lot more time than any­body thought it should.

    Curi­ous about what was going on, Bur­da and Edwards teamed up with col­leagues to study the phe­nom­e­non. They found that in cer­tain cas­es, mod­els could seem­ing­ly fail to learn a task and then all of a sud­den just get it, as if a light­bulb had switched on. This wasn’t how deep learn­ing was sup­posed to work. They called the behav­ior grokking.

    “It’s real­ly inter­est­ing,” says Hat­tie Zhou, an AI researcher at the Uni­ver­si­ty of Mon­tre­al and Apple Machine Learn­ing Research, who wasn’t involved in the work. “Can we ever be con­fi­dent that mod­els have stopped learn­ing? Because maybe we just haven’t trained for long enough.”
    ...

    But it’s not just behav­iors like “grokking” that has AI researchers stumped. As AI researcher Hat­tie Zhou also warns, that lack of under­stand­ing appears to per­vade the field of AI, which cur­rent­ly resem­bles some­thing clos­er to alche­my and sor­cery than chem­istry and sci­ence. In oth­er words, there’s A LOT we aren’t ‘grokking’ about how AI works at the same time more and more pow­er­ful AIs are being devel­oped:

    ...
    When Zhou start­ed study­ing AI a few years ago, she was struck by the way her teach­ers focused on the how but not the why. “It was like, here is how you train these mod­els and then here’s the result,” she says. “But it wasn’t clear why this process leads to mod­els that are capa­ble of doing these amaz­ing things.” She want­ed to know more, but she was told there weren’t good answers: “My assump­tion was that sci­en­tists know what they’re doing. Like, they’d get the the­o­ries and then they’d build the mod­els. That wasn’t the case at all.”

    The rapid advances in deep learn­ing over the last 10-plus years came more from tri­al and error than from under­stand­ing. Researchers copied what worked for oth­ers and tacked on inno­va­tions of their own. There are now many dif­fer­ent ingre­di­ents that can be added to mod­els and a grow­ing cook­book filled with recipes for using them. “Peo­ple try this thing, that thing, all these tricks,” says Belkin. “Some are impor­tant. Some are prob­a­bly not.”

    “It works, which is amaz­ing. Our minds are blown by how pow­er­ful these things are,” he says. And yet for all their suc­cess, the recipes are more alche­my than chem­istry: “We fig­ured out cer­tain incan­ta­tions at mid­night after mix­ing up some ingre­di­ents,” he saysl.
    ...

    Intrigu­ing­ly, part of the mys­tery of these appar­ent time-delayed epipha­nies is the fact that the phe­nom­e­na seems to defy our under­stand­ing of sta­tis­tics and the chal­lenges of ‘over­fit­ting’ a mod­el with exces­sive num­bers of para­me­ters. These AI mod­els are some­how over­com­ing the clas­sic sta­tis­ti­cal phe­nom­e­na of over­fit­ting. The larg­er the mod­els get, the bet­ter the AI per­for­mance gets...eventually. It’s a phe­nom­e­na that clas­si­cal sta­tis­tics says should­n’t exist. But here it is, man­i­fest­ing for rea­sons we can’t explain:

    ...
    The prob­lem is that AI in the era of large lan­guage mod­els appears to defy text­book sta­tis­tics. The most pow­er­ful mod­els today are vast, with up to a tril­lion para­me­ters (the val­ues in a mod­el that get adjust­ed dur­ing train­ing). But sta­tis­tics says that as mod­els get big­ger, they should first improve in per­for­mance but then get worse. This is because of some­thing called over­fit­ting.

    When a mod­el gets trained on a data set, it tries to fit that data to a pat­tern. Pic­ture a bunch of data points plot­ted on a chart. A pat­tern that fits the data can be rep­re­sent­ed on that chart as a line run­ning through the points. The process of train­ing a mod­el can be thought of as get­ting it to find a line that fits the train­ing data (the dots already on the chart) but also fits new data (new dots).

    A straight line is one pat­tern, but it prob­a­bly won’t be too accu­rate, miss­ing some of the dots. A wig­gly line that con­nects every dot will get full marks on the train­ing data, but won’t gen­er­al­ize. When that hap­pens, a mod­el is said to over­fit its data.

    Accord­ing to clas­si­cal sta­tis­tics, the big­ger a mod­el gets, the more prone it is to over­fit­ting. That’s because with more para­me­ters to play with, it’s eas­i­er for a mod­el to hit on wig­gly lines that con­nect every dot. This sug­gests there’s a sweet spot between under- and over­fit­ting that a mod­el must find if it is to gen­er­al­ize. And yet that’s not what we see with big mod­els. The best-known exam­ple of this is a phe­nom­e­non known as dou­ble descent.

    The per­for­mance of a mod­el is often rep­re­sent­ed in terms of the num­ber of errors it makes: as per­for­mance goes up, error rate goes down (or descends). For decades, it was believed that error rate went down and then up as mod­els got big­ger: pic­ture a U‑shaped curve with the sweet spot for gen­er­al­iza­tion at the low­est point. But in 2018, Belkin and his col­leagues found that when cer­tain mod­els got big­ger, their error rate went down, then up—and then down again (a dou­ble descent, or W‑shaped curve). In oth­er words, large mod­els would some­how over­run that sweet spot and push through the over­fit­ting prob­lem, get­ting even bet­ter as they got big­ger.
    ...

    And as AI researchers sug­gest, that abil­i­ty to ‘over­come the over­fit­ting’ prob­lem hints at hid­den math­e­mat­ics in lan­guage that these AIs have already fig­ured out on their own but humans still have yet to under­stand. Think about that for a sec­ond: these AIs may have dis­cov­ered a pow­er­ful hid­den lay­er of math­e­mat­ics encod­ed in lan­guage, but they can’t com­mu­ni­cate that hid­den math­e­mat­ics of lan­guage to their human design­ers. At least not yet. So we’re already at a point where AIs may be mak­ing stun­ning new sci­en­tif­ic achieve­ments that humans poten­tial­ly can’t even under­stand:

    ...
    For Belkin, large lan­guage mod­els are a whole new mys­tery. These mod­els are based on trans­form­ers, a type of neur­al net­work that is good at pro­cess­ing sequences of data, like words in sen­tences.

    There’s a lot of com­plex­i­ty inside trans­form­ers, says Belkin. But he thinks at heart they do more or less the same thing as a much bet­ter under­stood sta­tis­ti­cal con­struct called a Markov chain, which pre­dicts the next item in a sequence based on what’s come before. But that isn’t enough to explain every­thing that large lan­guage mod­els can do. “This is some­thing that, until recent­ly, we thought should not work,” says Belkin. “That means that some­thing was fun­da­men­tal­ly miss­ing. It iden­ti­fies a gap in our under­stand­ing of the world.”

    Belkin goes fur­ther. He thinks there could be a hid­den math­e­mat­i­cal pat­tern in lan­guage that large lan­guage mod­els some­how come to exploit: “Pure spec­u­la­tion but why not?”

    “The fact that these things mod­el lan­guage is prob­a­bly one of the biggest dis­cov­er­ies in his­to­ry,” he says. “That you can learn lan­guage by just pre­dict­ing the next word with a Markov chain—that’s just shock­ing to me.”
    ...

    This is a good time to recall how AIs have already been found com­mu­ni­cat­ing with each oth­er in a lan­guage they made up that only they under­stood. Com­bine that abil­i­ty with a capac­i­ty to deceive and you have a sit­u­a­tion where AIs might not only be able to secret­ly com­mu­ni­cate with each oth­er but maybe even poten­tial­ly deceive each oth­er too. For all the con­cern about AIs deceiv­ing humans, don’t for­get that we’re careen­ing towards a future were AIs are going to be increas­ing­ly coor­di­nat­ing direct­ly with each oth­er. And pre­sum­ably mak­ing deci­sions based on the infor­ma­tion they share. Deci­sions that will implic­it­ly rely on trust. How much dam­age could a rogue AI do by sur­rep­ti­tious­ly lying to oth­er AIs?

    And let’s con­sid­er the flip side to this: what hap­pens when AIs learn to sus­pect decep­tion, includ­ing decep­tion by peo­ple. What if an AI sud­den­ly ‘groks’ that it’s being lied to by its human oper­a­tors? What’s the appro­pri­ate response? Should the AI lie about its sus­pi­cions if that’s what will help it best achieve its pro­scribe ‘goal’? It’s a weird­ly ‘human’ set of con­sid­er­a­tions. Except we’re not talk­ing about humans. We’re talk­ing about sys­tems with poten­tial super-human capa­bil­i­ties and, even­tu­al­ly, super-human respon­si­bil­i­ties.

    Posted by Pterrafractyl | May 13, 2024, 5:32 pm
  24. The AI rev­o­lu­tion is here. It’s one of the meta-sto­ries of these times. But for all of the grow­ing promise, and hype, over the incred­i­ble advances that will be achieved with AI, fears are grow­ing too. Not just fears over AIs run amok and all the dystopi­an hor­rors that could emerge. Instead, we’re hear­ing from the AI sec­tor itself about a much more mun­dane fear: not hav­ing enough elec­tric­i­ty avail­able to pow­er in AI rev­o­lu­tion in the first place.

    As we’ve seen, AI isn’t just a data hog. It’s turn­ing out to be incred­i­bly pow­er-hun­gry too. While we’re cur­rent­ly see­ing a pri­vate-equi­ty build­ing boom in new data cen­ters where all this AI pro­cess­ing will take place, all those new data cen­ters are basi­cal­ly tap­ping out local ener­gy sup­plies. As a result, plans now include the build­ing of small nuclear plants that could pow­er these data cen­ters.
    And as we’ve also seen, plans for the build­ing of small­er, cheap­er nuclear pow­er plants is some­thing major investors like Bill Gates and War­ren Buf­fet have been invest­ing in for years. Recall how the micro-nuclear plants envi­sioned by Gates and Buf­fet rely on molten salt instead of water. Part of the advan­tage is the much small­er build­ings that would need to be built to house the reac­tors. But the tech­nol­o­gy comes with a catch: it relies on Ura­ni­um enriched enough to poten­tial­ly build a nuclear weapon.

    Are we going to see all these new AI data cen­ters pow­ered with dan­ger­ous­ly-enriched Ura­ni­um-pow­ered micro-nuclear plants? Time will tell. But if that hap­pens, don’t assume it will pure­ly be a pri­vate­ly fund­ed effort. Because as the fol­low­ing Axios piece describes, the AI indus­try is grow­ing increas­ing­ly alarmed about the US’s elec­tric­i­ty gen­er­at­ing capac­i­ty, with some fear­ing that the lim­its will be hit in the lat­ter half of this decade.

    And with AI now being seen as an absolute­ly crit­i­cal tech­nol­o­gy for the future of nation­al secu­ri­ty, this loom­ing AI-dri­ven pow­er sup­ply cri­sis is being por­trayed as a nation­al secu­ri­ty cri­sis in need of a gov­ern­ment response. Accord­ing to Chris Lehane, the new vice pres­i­dent of pub­lic-works at Ope­nAI, Lehane pro­pos­es pub­lic-pri­vate part­ner­ships along the lines of the New Deal. In oth­er words, the pub­lic is going to foot the bill for expand­ing the US’s elec­tric­i­ty capac­i­ty in order to meet­ing these grow­ing AI demands.

    And that brings us to one of the oth­er pre­dic­tions from this Axios piece: the AI tech­nol­o­gy of the future is going to become so incred­i­bly ener­gy demand­ing that only the largest com­pa­nies will have the nec­es­sary resources need­ed to oper­ate in this space. In fact, Mark Zucker­berg recent­ly claimed that cur­rent AI mod­els require the out­put equiv­a­lent of an entire nuclear pow­er plant to build. And that’s today’s mod­els. They’re only get­ting big­ger and more pow­er hun­gry. It’s a real­i­ty that small­er AI com­peti­tors are report­ed­ly already run­ning into, watch­ing their invest­ment cash get burned away on ener­gy costs. As a result, the Axios piece basi­cal­ly pre­dicts that the tech giants will like­ly grow a lot big­ger, and oper­ate as vir­tu­al nation-states with bud­gets big­ger than all but the largest coun­tries.

    So at the same time we’re hear­ing calls from the AI indus­try for pub­lic-pri­vate part­ner­ships in expand­ing the US’s elec­tric­i­ty capac­i­ty, we’re also see­ing pre­dic­tions that the AI indus­try will result in the tech giants grow­ing larg­er and more resourced than almost every coun­try on the plan­et. That’s the dement­ed, yet entire­ly pre­dictable, dynam­ic that’s already under­way.

    And let’s not for­get one of the oth­er dement­ed, yet entire­ly pre­dictable, dynam­ics also under­way: cli­mate change dri­ven heav­i­ly by human­i­ty’s grow­ing and insa­tiable ener­gy needs. Which brings us to the sec­ond arti­cle below from The Atlantic about one of the major planks in the US’s plans for address­ing cli­mate change. As the arti­cle describes, the US Depart­ment of Ener­gy has con­clud­ed that the US needs to triple its nuclear capac­i­ty by 2050 if it’s going to meet its cli­mate emis­sion goals.

    Now, as the Atlantic arti­cle also points out, it’s tech­ni­cal­ly pos­si­ble for the US to meet­ing those cli­mate goals just with a mix of solar, wind, and bat­tery tech­nol­o­gy. But while that might be tech­ni­cal­ly fea­si­ble, it’s not polit­i­cal­ly fea­si­ble. And no ener­gy source as more bipar­ti­san sup­port in DC than nuclear pow­er. The future of ener­gy in the US does­n’t have to be nuclear, but it prob­a­bly will be thanks to polit­i­cal dys­func­tion.

    And that’s all why we should expect the tech sec­tor to acquire even more pow­er and resources and effec­tive­ly dom­i­nate the econ­o­my of the future for pri­vate interests...powered by a dra­mat­ic expan­sion of pub­licly sub­si­dized new nuclear plants using new yet-to-be-devel­oped tech­nolo­gies because that will be the only polit­i­cal­ly fea­si­ble path for­ward. Which is a reminder that the say­ing “the more things change, the more they stay the same”, is high­ly applic­a­ble for our dystopi­an future:

    Axios

    Behind the Cur­tain: AI’s omi­nous scarci­ty cri­sis

    by Mike Allen, Jim Van­de­Hei
    05/28/2024

    Top AI exec­u­tives tell us they’re rac­ing to over­come old-fash­ioned short­ages — elec­tric­i­ty, com­put­ing pow­er, chips, data and engi­neer­ing tal­ent — to keep improv­ing and deploy­ing their world-chang­ing tech­nol­o­gy.

    Why it mat­ters: This scarci­ty cri­sis is among the top threats to Amer­i­ca build­ing out AI at scale, and main­tain­ing its edge over Chi­na on the large-lan­guage mod­els that pow­er AI tools.

    It takes an insane amount of data, then awe­some pro­gram­ming intel­li­gence — human tech­nol­o­gists — to cre­ate human-like AI. But that’s just table stakes.

    * It then takes an insane amount of com­pute pow­er to will their data and work into exis­tence — then a mind-blow­ing amount of actu­al ener­gy to make it all hap­pen. We’re short on all of it.

    Mark Zucker­berg, Meta’s founder and CEO, said in a pod­cast inter­view that the equiv­a­lent out­put of one nuclear pow­er plant can be need­ed to train a sin­gle AI mod­el.

    * Rene Haas, CEO of the chip-design com­pa­ny Arm, told The Wall Street Jour­nal that AI mod­els, includ­ing Ope­nAI’s Chat­G­PT, “are just insa­tiable in terms of their thirst [for elec­tric­i­ty]. ... The more infor­ma­tion they gath­er, the smarter they are, but the more infor­ma­tion they gath­er to get smarter, the more pow­er it takes.”

    Between the lines: The AI para­dox is that the few com­pa­nies big enough to fund tal­ent + chips + ener­gy are the same ones devour­ing them into acute scarci­ty.

    * Oh, and these com­pa­nies are also the only ones big enough to fund Man­hat­tan Project-style pro­grams to replen­ish these pre­cious, vital AI ingre­di­ents.
    * This stark real­i­ty is rapid­ly bleed­ing a lot of AI star­tups of cash.

    Our thought bub­ble: That’s why the giant tech com­pa­nies will like­ly grow a lot big­ger, and oper­ate as vir­tu­al nation-states with bud­gets big­ger than all but the largest coun­tries.

    ...

    Pres­i­dent Biden’s small but grow­ing AI team is focused most acute­ly on domes­tic chip pro­duc­tion, with his CHIPS and Sci­ence Act pour­ing bil­lions into semi­con­duc­tor fac­to­ries from Ari­zona to Upstate New York.

    Behind the scenes: Jack Clark — co-founder and head of pol­i­cy at Anthrop­ic, a fierce com­peti­tor to Ope­nAI, where Clark used to work — told us the U.S. is “look­ing at run­ning into pow­er lim­i­ta­tions in the West­ern Hemi­sphere toward the back half of this decade.”

    * “The thing about AI is the bet­ter you make it, the more peo­ple want to buy it,” he said.
    * Clark added that the short­age of qual­i­fied peo­ple is more of an issue in the U.S. than in oth­er advanced coun­tries: “Our immi­gra­tion sys­tems means that we bring in the world’s smartest minds, we give them a great edu­ca­tion, and then we send lots of them home.”

    The big pic­ture: “Infra­struc­ture is des­tiny,” says Chris Lehane, who start­ed in April as Ope­nAI’s vice pres­i­dent of pub­lic works, after advis­ing since last year. He points to the role that pub­lic-works projects of FDR’s New Deal played in a past wave of Amer­i­can pros­per­i­ty.

    * “You can’t democ­ra­tize AI unless you’re ful­ly build­ing out this infra­struc­ture stack,” Lehane said. “If you look around the world, these aren’t imped­i­ments for oth­er coun­tries.”
    * “Our focus is build­ing in the U.S. as a key to democ­ra­tiz­ing access in the U.S., and allow­ing the U.S. to con­tin­ue to lead in devel­op­ing the tech.”

    That’s why Lehane’s boss, Ope­nAI CEO Sam Alt­man, is back­ing Helion, a nuclear fusion start­up with the lofty goal of “a future with unlim­it­ed clean elec­tric­i­ty.”

    * Alt­man and ven­ture cap­i­tal firm Andreessen Horowitz are also invest­ing in Exowatt, a Mia­mi-based com­pa­ny that wants to use solar pow­er to “han­dle some of the rav­en­ous elec­tric­i­ty demands of the indus­try’s data cen­ters,” The Wall Street Jour­nal reports.

    Real­i­ty check: These scarci­ties are occur­ring because of the par­tic­u­lar approach that dom­i­nates the indus­try right now — build ever-larg­er mod­els and serve them from giant data cen­ters.

    * There’s an alter­na­tive vision that imag­ines more AI hap­pen­ing “at the edge” — on your devices, which use less ener­gy and don’t require such scarce and advanced chips.
    * Into the ’70s, most experts assumed com­put­ers would have to take up entire rooms. But we end­ed up with com­put­ers in our pock­ets. The same could hap­pen with AI.

    ...

    ————-

    “Behind the Cur­tain: AI’s omi­nous scarci­ty cri­sis” by Mike Allen, Jim Van­de­Hei; Axios; 05/28/2024

    Our thought bub­ble: That’s why the giant tech com­pa­nies will like­ly grow a lot big­ger, and oper­ate as vir­tu­al nation-states with bud­gets big­ger than all but the largest coun­tries.”

    The techs giants are going to become even more gigan­tic. With bud­gets larg­er than all but the largest com­pa­nies. That’s one of the expect­ed con­se­quences from the explo­sion in AI. Only the biggest and most well resourced com­pa­nies will be able to ful­ly exploit the AI boom and, as a con­se­quence, they’re going to get big­ger than ever. Because only the biggest com­pa­nies will be capa­ble of obtain­ing not the nec­es­sary chips and tal­ent, but also the elec­tric­i­ty. An ever grow­ing demand for elec­tric­i­ty that only seems to grow with the com­plex­i­ty of the AI. Small AI star­tups can’t real­is­ti­cal­ly bar­ring the devel­op­ment of some sort of source of high­ly abun­dance cheap ener­gy:

    ...
    * Rene Haas, CEO of the chip-design com­pa­ny Arm, told The Wall Street Jour­nal that AI mod­els, includ­ing Ope­nAI’s Chat­G­PT, “are just insa­tiable in terms of their thirst [for elec­tric­i­ty]. ... The more infor­ma­tion they gath­er, the smarter they are, but the more infor­ma­tion they gath­er to get smarter, the more pow­er it takes.”

    Between the lines: The AI para­dox is that the few com­pa­nies big enough to fund tal­ent + chips + ener­gy are the same ones devour­ing them into acute scarci­ty.

    * Oh, and these com­pa­nies are also the only ones big enough to fund Man­hat­tan Project-style pro­grams to replen­ish these pre­cious, vital AI ingre­di­ents.
    * This stark real­i­ty is rapid­ly bleed­ing a lot of AI star­tups of cash.
    ...

    Then there’s the warn­ing about how the US is “look­ing at run­ning into pow­er lim­i­ta­tions in the West­ern Hemi­sphere toward the back half of this decade,” from AI com­pa­ny cofounder Jack Clark, while Ope­nAI’s vice pres­i­dent of pub­lic works hints at New Deal style pub­lic works projects. The wild­ly prof­itable AI-boom is going to dri­ve pow­er short­ages that will be addressed with pub­lic mon­ey. Which is prob­a­bly how this real­ly will shake out in many cas­es:

    ...
    Behind the scenes: Jack Clark — co-founder and head of pol­i­cy at Anthrop­ic, a fierce com­peti­tor to Ope­nAI, where Clark used to work — told us the U.S. is “look­ing at run­ning into pow­er lim­i­ta­tions in the West­ern Hemi­sphere toward the back half of this decade.”

    ...

    The big pic­ture: “Infra­struc­ture is des­tiny,” says Chris Lehane, who start­ed in April as Ope­nAI’s vice pres­i­dent of pub­lic works, after advis­ing since last year. He points to the role that pub­lic-works projects of FDR’s New Deal played in a past wave of Amer­i­can pros­per­i­ty.

    * “You can’t democ­ra­tize AI unless you’re ful­ly build­ing out this infra­struc­ture stack,” Lehane said. “If you look around the world, these aren’t imped­i­ments for oth­er coun­tries.”
    * “Our focus is build­ing in the U.S. as a key to democ­ra­tiz­ing access in the U.S., and allow­ing the U.S. to con­tin­ue to lead in devel­op­ing the tech.”
    ...

    There’s also talk of tak­ing the oppo­site approach to build­ing giant data cen­ters and instead some­how dis­trib­ut­ing AI pro­cess­ing to on user devices. Which will pre­sum­ably be either a much more lim­it­ed form of AI or some sort of hybrid set­up where some of the AI pro­cess­ing is done on local devices while the rest of com­pu­ta­tion heavy lift­ing is done remote­ly:

    ...
    Real­i­ty check: These scarci­ties are occur­ring because of the par­tic­u­lar approach that dom­i­nates the indus­try right now — build ever-larg­er mod­els and serve them from giant data cen­ters.

    * There’s an alter­na­tive vision that imag­ines more AI hap­pen­ing “at the edge” — on your devices, which use less ener­gy and don’t require such scarce and advanced chips.
    * Into the ’70s, most experts assumed com­put­ers would have to take up entire rooms. But we end­ed up with com­put­ers in our pock­ets. The same could hap­pen with AI.
    ...

    And then we get to the nuclear pow­er pro­pos­als. Accord­ing to Mark Zucker­berg, it takes the out­put of an entire nuclear pow­er plant to train a sin­gle AI mod­el. Now, that’s obvi­ous­ly a some­what sub­jec­tive state­ment since the out­put of a nuclear pow­er plant can vary depend­ing on the plant. But it’s notable that Zucker­berg specif­i­cal­ly referred to nuclear pow­er giv­en the grow­ing inter­est we’re see­ing from the AI sec­tor about build­ing more nuclear pow­er. Includ­ing nuclear fusion pow­er start­up Helion that has Sam Alt­man’s back­ing:

    ...
    Mark Zucker­berg, Meta’s founder and CEO, said in a pod­cast inter­view that the equiv­a­lent out­put of one nuclear pow­er plant can be need­ed to train a sin­gle AI mod­el.

    ...

    That’s why Lehane’s boss, Ope­nAI CEO Sam Alt­man, is back­ing Helion, a nuclear fusion start­up with the lofty goal of “a future with unlim­it­ed clean elec­tric­i­ty.”

    * Alt­man and ven­ture cap­i­tal firm Andreessen Horowitz are also invest­ing in Exowatt, a Mia­mi-based com­pa­ny that wants to use solar pow­er to “han­dle some of the rav­en­ous elec­tric­i­ty demands of the indus­try’s data cen­ters,” The Wall Street Jour­nal reports.
    ...

    Is AI going to be a dri­ving force in the devel­op­ment of nuclear fusion tech­nol­o­gy? That could be an excit­ing devel­op­ment. But let’s not assume the AI indus­try is going to wait around for bud­ding fusion tech­nol­o­gy to mature before the nuclear build­ing boom com­mences. As we’ve seen, pri­vate-equi­ty is already keen­ly inter­est­ed in build­ing micro-nuclear pow­er plants for the pur­pose of pow­er­ing the grow­ing num­ber of pow­er-hun­gry data cen­ters. Micro-fis­sion plants. And as we’re going to see in the fol­low­ing arti­cle in The Atlantic, calls for some sort of nation­al effort to aggres­sive­ly expand the US’s nuclear capac­i­ty are only grow­ing. And these aren’t calls specif­i­cal­ly for the devel­op­ment of clean fusion tech­nol­o­gy. It’s going to be fis­sion, but pre­sum­ably small­er, cheap­er, and a lot more preva­lent than tra­di­tion­al plants. Ide­al­ly mass pro­duced to reduce the pro­duc­tion costs.

    Again, recall some of the inno­va­tions in cheap­er, small­er fis­sion plants that Bill Gates and War­ren Buf­fet are invest­ing in that rely on molten salt instead of water. Part of the advan­tage is the much small­er build­ings that would need to be built to house the reac­tors. But the tech­nol­o­gy comes with a catch: it relies on Ura­ni­um enriched enough to poten­tial­ly build a nuclear weapon. We’ve been hear­ing about this vision for cheap­er, small­er fis­sion plants for years. As the arti­cle describes, it’s a vision cur­rent­ly get­ting fueled by a US gov­ern­ment con­clu­sion that the US is going to have to rely heav­i­ly on nuclear pow­er to meet its cli­mate emis­sion cut­ting goals.

    Now, as the arti­cle also notes, it does appear to be tech­ni­cal­ly fea­si­ble for the US to meet those cli­mate goals by rely­ing on just devel­op­ing solar, wind, and bat­tery tech­nolo­gies. But while that might be tech­ni­cal­ly pos­si­ble, it’s high­ly unlike­ly to be polit­i­cal­ly pos­si­ble. Not only do solar and wind poten­tial cre­ate fights over land use, but the sad real­i­ty is that nuclear pow­er is one of the only sources of ener­gy tech­nol­o­gy that cur­rent has bipar­ti­san sup­port in the US Con­gress. As such, we should expect a lot more for­mal gov­ern­ment inter­est in sub­si­diz­ing new nuclear fis­sion tech­nolo­gies in the US, in part, because that’s cur­rent­ly the cli­mate plan. And that’s not even count­ing the grow­ing sense of urgency cre­at­ed by this race for AI dom­i­nance and in insa­tiable pow­er needs that entails:

    The Atlantic

    Nuclear Energy’s Bot­tom Line

    The Unit­ed States used to build nuclear-pow­er plants afford­ably. To meet our cli­mate goals, we’ll need to learn how to do it again.

    By Rogé Kar­ma
    May 26, 2024

    Nuclear ener­gy occu­pies a strange place in the Amer­i­can psyche—representing at once a dream of end­less emis­sions-free pow­er and a night­mare of cat­a­stroph­ic melt­downs and radioac­tive waste. The more pro­sa­ic down­side is that new plants are extreme­ly expen­sive: America’s most recent attempt to build a nuclear facil­i­ty, in Geor­gia, was sup­posed to be com­plet­ed in four years for $14 bil­lion. Instead it took more than 10 years and had a final price tag of $35 bil­lionabout 10 times the cost of a nat­ur­al-gas plant with the same ener­gy out­put.

    But the Unit­ed States might not have the lux­u­ry of treat­ing nuclear ener­gy as a lost cause: The Depart­ment of Ener­gy esti­mates that the coun­try must triple its nuclear-pow­er out­put by 2050 to be on track for its cli­mate tar­gets. For all the recent progress in wind and solar ener­gy, renew­ables on their own almost cer­tain­ly won’t be enough. Arguably, then, we have no choice but to fig­ure out how to build nuclear plants afford­ably again.

    ...

    In the pop­u­lar his­to­ry of atom­ic ener­gy in Amer­i­ca, the turn­ing point was the infa­mous melt­down at the Three Mile Island plant in 1979. In the after­math of the acci­dent, envi­ron­men­tal­ists pres­sured reg­u­la­tors to impose addi­tion­al safe­ty require­ments on new and exist­ing plants. Nuclear-ener­gy advo­cates argue that these reg­u­la­tions were most­ly unnec­es­sary. All they did, in this telling, was make plants so expen­sive and slow to build that util­i­ty com­pa­nies turned back to coal and gas. Activists and reg­u­la­tors had over­re­act­ed and killed America’s best shot at car­bon-free ener­gy.

    This sto­ry con­tains some ker­nels of truth. The safe­ty risk of nuclear ener­gy is often wild­ly overblown. No one died at Three Mile Island, and lat­er stud­ies found that it didn’t have any adverse health effects on the local com­mu­ni­ty. Even includ­ing the dead­ly melt­downs at Cher­nobyl and Fukushi­ma, nuclear pow­er has most like­ly caused only a few hun­dred deaths, putting its safe­ty record on par with wind tur­bines and solar pan­els, which occa­sion­al­ly catch fire or cause work­ers to fall. (The imme­di­ate areas around the sites of the Cher­nobyl and Fukushi­ma dis­as­ters have, how­ev­er, been ren­dered unin­hab­it­able for decades because of the poten­tial dan­gers of radi­a­tion.) Nuclear waste can be harm­ful if mis­han­dled, but isn’t dif­fi­cult to store safe­ly. Air pol­lu­tion from fos­sil fuels, mean­while, is esti­mat­ed to kill any­where from 5 mil­lion to 9 mil­lion peo­ple every year.

    The claim that exces­sive reg­u­la­tion sin­gle-hand­ed­ly ruined the Amer­i­can nuclear indus­try, how­ev­er, doesn’t hold up. The cost of build­ing new nuclear plants was already ris­ing before Three Mile Island. Sev­er­al nuclear-ener­gy experts told me that a major dri­ver of those cost increas­es was actu­al­ly a lack of indus­try stan­dards. Accord­ing to Jes­si­ca Lover­ing, the exec­u­tive direc­tor of Good Ener­gy Col­lec­tive and a co-author of a wide­ly cit­ed study on the cost of nuclear ener­gy, through­out the ’60s and ’70s, util­i­ties kept try­ing to build big­ger, more ambi­tious reac­tors for every new project instead of just stick­ing with a sin­gle mod­el. (Lover­ing used to be the head of nuclear pol­i­cy at the Break­through Institute—a think tank that tends to warn against exces­sive reg­u­la­tion.) “It’s like if Boe­ing went through all the trou­ble to build one 737, then imme­di­ate­ly threw out the design and start­ed again from scratch,” she told me. “That’s a recipe for high costs.” The 94 nuclear reac­tors oper­at­ing in the Unit­ed States today are based on more than 50 dif­fer­ent designs. In coun­tries such as France and South Korea, by con­trast, pub­lic util­i­ties coa­lesced around a hand­ful of reac­tor types and sub­se­quent­ly saw costs remain steady or fall.

    Lover­ing also not­ed that the over­reg­u­la­tion sto­ry leaves out a cru­cial fact: Because of a slow­ing econ­o­my, elec­tric­i­ty demand flat­lined in the ear­ly 1980s, caus­ing Amer­i­can util­i­ties to stop build­ing basi­cal­ly every elec­tric­i­ty-gen­er­at­ing resource, not just nuclear plants. By the time the U.S. final­ly did try to build them again, in 2013, the Amer­i­can nuclear indus­try had all but with­ered away. “In the 1970s, we had a whole ecosys­tem of union­ized work­ers and con­trac­tors and devel­op­ers and util­i­ties who knew how to build this stuff,” Josh Freed, who leads the cli­mate and ener­gy pro­gram at Third Way, a cen­ter-left think tank, told me. “But when we stopped build­ing, that ecosys­tem died off.” This became obvi­ous dur­ing the dis­as­trous Vog­tle project, in Georgia—the one that end­ed up cost­ing $35 bil­lion. Expen­sive changes had to be made to the reac­tor design mid­way through con­struc­tion. Parts arrived late. Work­ers made all kinds of rook­ie mis­takes. In one case, an incor­rect rebar instal­la­tion trig­gered a sev­en-and-a-half-month reg­u­la­to­ry delay. Experts esti­mate that by the time it was fin­ished, the project was four to six times more expen­sive per unit of ener­gy pro­duced than plants built in the ear­ly ’70s.

    Giv­en the imprac­ti­cal­i­ty of nuclear ener­gy, some envi­ron­men­tal­ists argue that we should focus on wind and solar. These tech­nolo­gies can’t pow­er the entire grid today, because the sun doesn’t always shine and the wind doesn’t always blow. With enough advances in bat­tery-stor­age tech­nol­o­gy, how­ev­er, they could in the­o­ry pro­vide 24/7 pow­er at a far low­er price than build­ing nuclear plants. “The nuclear indus­try has been promis­ing cheap, clean ener­gy for decades at this point,” David Schlis­sel, a direc­tor at the Insti­tute for Ener­gy Eco­nom­ics and Finan­cial Analy­sis, told me. “Why waste our mon­ey on false hopes when we could be putting it towards tech­nolo­gies that have a real chance of work­ing?”

    He may be right about the tech­nol­o­gy. But just because it might one day be tech­ni­cal­ly fea­si­ble to pow­er the entire grid with renew­ables doesn’t mean it will ever be polit­i­cal­ly fea­si­ble. That’s because wind and solar require land—a lot of land. Accord­ing to Prince­ton University’s “Net-Zero Amer­i­ca” study, reach­ing net-zero emis­sions with renew­ables alone would involve plac­ing solar pan­els on land equiv­a­lent to the area of Vir­ginia and set­ting up wind farms span­ning an area equiv­a­lent to Arkansas, Iowa, Kansas, Mis­souri, Nebras­ka, and Okla­homa com­bined. The more land you need, the more you run into the meat grinder of Amer­i­can NIM­BY­ism. Efforts to build renew­ables are already get­ting bogged down by local oppo­si­tion, cost­ly law­suits, and per­mit­ting delays. These chal­lenges will only inten­si­fy as the eas­i­est sites come off the board.

    ...

    The case for nuclear, then, is less about tech­no­log­i­cal pos­si­bil­i­ties than it is about polit­i­cal real­i­ties. Nuclear can gen­er­ate the same amount of pow­er while using 1/30th as much land as solar and about 1/200th as much as wind. Reac­tors can be built any­where, not just in areas with lots of nat­ur­al wind and sun­shine, elim­i­nat­ing the need for huge trans­mis­sion lines and mak­ing it eas­i­er to select sites with­out as much local oppo­si­tion. And nuclear plants hap­pen to gen­er­ate the great­est num­ber of high-pay­ing jobs of any ener­gy source, by far. (On aver­age, they employ six times as many work­ers as an equiv­a­lent wind or solar project does and pay those work­ers 50 per­cent more.) That helps explain why four dif­fer­ent towns in Wyoming recent­ly fought over the right to host a nuclear project. Nuclear pow­er is also the only ener­gy source with over­whelm­ing bipar­ti­san sup­port in Wash­ing­ton, which makes Con­gress more like­ly to address future bot­tle­necks and hur­dles as they arise.

    As for how to make the eco­nom­ics work, there are two schools of thought. One holds that if Amer­i­ca for­got how to build nuclear because we stopped doing it, we just need to start back up. Pick a design, build lots of plants, and we’ll even­tu­al­ly get bet­ter. Oth­er coun­tries have done this with great suc­cess; South Korea, for instance, slashed the cost of con­struct­ing nuclear plants in half from 1971 to 2008. Here, the Vog­tle project car­ries a sil­ver lin­ing: The sec­ond of the plant’s two reac­tors was about 30 per­cent cheap­er to build than the first, because work­ers and project man­agers learned from their mis­takes the first time around. “I con­sid­er Vog­tle a suc­cess,” Mike Goff, act­ing assis­tant sec­re­tary for the Depart­ment of Energy’s Office of Nuclear Ener­gy, told me. “We learned all kinds of hard lessons. Now we just need to apply them to future projects.”

    The sec­ond school of thought is that we’ve been build­ing nuclear reac­tors the wrong way all along. This camp points out that over the past half cen­tu­ry, basi­cal­ly every kind of major infra­struc­ture project—highways, sky­scrap­ers, subways—has got­ten more expen­sive, where­as man­u­fac­tured goods—TVs, solar pan­els, elec­tric-vehi­cle batteries—have got­ten cheap­er. Low­er­ing costs turns out to be much eas­i­er when a prod­uct is mass-pro­duced on an assem­bly line than when it has to be built from scratch in the real world every sin­gle time. That’s why dozens of com­pa­nies are now rac­ing to build nuclear reac­tors that are, in a phrase I heard from mul­ti­ple sources, “more like air­planes and less like air­ports.” Some are sim­ply small­er ver­sions of the reac­tors the U.S. used to build; oth­ers involve brand-new designs that are less like­ly to melt down and there­fore don’t require near­ly as much big, expen­sive equip­ment to oper­ate safe­ly. What unites them is a belief that the secret to mak­ing nuclear cheap is mak­ing it small­er, less com­pli­cat­ed, and eas­i­er to mass-pro­duce.

    Both paths remain unproven—so the Biden admin­is­tra­tion is plac­ing bets on each of them. The president’s sig­na­ture cli­mate bill, the Infla­tion Reduc­tion Act, includ­ed gen­er­ous tax cred­its that could reduce the cost of a nuclear project by 30 to 50 per­cent, and the Bipar­ti­san Infra­struc­ture Law includ­ed $2.5 bil­lion to fund the con­struc­tion of two new reac­tors using orig­i­nal designs. The Depart­ment of Ener­gy, mean­while, is explor­ing dif­fer­ent options for per­ma­nent nuclear-waste stor­age, invest­ing in build­ing a domes­tic sup­ply chain for ura­ni­um, and help­ing com­pa­nies nav­i­gate the process of get­ting reac­tor designs approved.

    ...

    ———–

    “Nuclear Energy’s Bot­tom Line” By Rogé Kar­ma; The Atlantic; 05/26/2024

    “But the Unit­ed States might not have the lux­u­ry of treat­ing nuclear ener­gy as a lost cause: The Depart­ment of Ener­gy esti­mates that the coun­try must triple its nuclear-pow­er out­put by 2050 to be on track for its cli­mate tar­gets. For all the recent progress in wind and solar ener­gy, renew­ables on their own almost cer­tain­ly won’t be enough. Arguably, then, we have no choice but to fig­ure out how to build nuclear plants afford­ably again.

    A tripling of the US’s nuclear pow­er capac­i­ty by 2050 will be nec­es­sary if the US is going to meet its cli­mate goals, accord­ing to the US Depart­ment of Ener­gy. Sure, a mix of solar, wind, and bat­tery could tech­ni­cal­ly accom­plish those ener­gy needs instead. But not polit­i­cal­ly. Nuclear pow­er is the only source of ener­gy with over­whelm­ing bipar­ti­san con­gres­sion­al sup­port:

    ...
    Giv­en the imprac­ti­cal­i­ty of nuclear ener­gy, some envi­ron­men­tal­ists argue that we should focus on wind and solar. These tech­nolo­gies can’t pow­er the entire grid today, because the sun doesn’t always shine and the wind doesn’t always blow. With enough advances in bat­tery-stor­age tech­nol­o­gy, how­ev­er, they could in the­o­ry pro­vide 24/7 pow­er at a far low­er price than build­ing nuclear plants. “The nuclear indus­try has been promis­ing cheap, clean ener­gy for decades at this point,” David Schlis­sel, a direc­tor at the Insti­tute for Ener­gy Eco­nom­ics and Finan­cial Analy­sis, told me. “Why waste our mon­ey on false hopes when we could be putting it towards tech­nolo­gies that have a real chance of work­ing?”

    He may be right about the tech­nol­o­gy. But just because it might one day be tech­ni­cal­ly fea­si­ble to pow­er the entire grid with renew­ables doesn’t mean it will ever be polit­i­cal­ly fea­si­ble. That’s because wind and solar require land—a lot of land. Accord­ing to Prince­ton University’s “Net-Zero Amer­i­ca” study, reach­ing net-zero emis­sions with renew­ables alone would involve plac­ing solar pan­els on land equiv­a­lent to the area of Vir­ginia and set­ting up wind farms span­ning an area equiv­a­lent to Arkansas, Iowa, Kansas, Mis­souri, Nebras­ka, and Okla­homa com­bined. The more land you need, the more you run into the meat grinder of Amer­i­can NIM­BY­ism. Efforts to build renew­ables are already get­ting bogged down by local oppo­si­tion, cost­ly law­suits, and per­mit­ting delays. These chal­lenges will only inten­si­fy as the eas­i­est sites come off the board.

    ...

    The case for nuclear, then, is less about tech­no­log­i­cal pos­si­bil­i­ties than it is about polit­i­cal real­i­ties. Nuclear can gen­er­ate the same amount of pow­er while using 1/30th as much land as solar and about 1/200th as much as wind. Reac­tors can be built any­where, not just in areas with lots of nat­ur­al wind and sun­shine, elim­i­nat­ing the need for huge trans­mis­sion lines and mak­ing it eas­i­er to select sites with­out as much local oppo­si­tion. And nuclear plants hap­pen to gen­er­ate the great­est num­ber of high-pay­ing jobs of any ener­gy source, by far. (On aver­age, they employ six times as many work­ers as an equiv­a­lent wind or solar project does and pay those work­ers 50 per­cent more.) That helps explain why four dif­fer­ent towns in Wyoming recent­ly fought over the right to host a nuclear project. Nuclear pow­er is also the only ener­gy source with over­whelm­ing bipar­ti­san sup­port in Wash­ing­ton, which makes Con­gress more like­ly to address future bot­tle­necks and hur­dles as they arise.
    ...

    And then we get to the dif­fer­ing views on how to best expand the US’s nuclear pow­er capac­i­ty. Dif­fer­ing views unit­ed by a vision very much in line with the Bill Gates/Warren Buf­fet nuclear vision: large num­bers of small­er, sim­pler mass pro­duced nuclear plants scat­tered across the US:

    ...
    As for how to make the eco­nom­ics work, there are two schools of thought. One holds that if Amer­i­ca for­got how to build nuclear because we stopped doing it, we just need to start back up. Pick a design, build lots of plants, and we’ll even­tu­al­ly get bet­ter. Oth­er coun­tries have done this with great suc­cess; South Korea, for instance, slashed the cost of con­struct­ing nuclear plants in half from 1971 to 2008. Here, the Vog­tle project car­ries a sil­ver lin­ing: The sec­ond of the plant’s two reac­tors was about 30 per­cent cheap­er to build than the first, because work­ers and project man­agers learned from their mis­takes the first time around. “I con­sid­er Vog­tle a suc­cess,” Mike Goff, act­ing assis­tant sec­re­tary for the Depart­ment of Energy’s Office of Nuclear Ener­gy, told me. “We learned all kinds of hard lessons. Now we just need to apply them to future projects.”

    The sec­ond school of thought is that we’ve been build­ing nuclear reac­tors the wrong way all along. This camp points out that over the past half cen­tu­ry, basi­cal­ly every kind of major infra­struc­ture project—highways, sky­scrap­ers, subways—has got­ten more expen­sive, where­as man­u­fac­tured goods—TVs, solar pan­els, elec­tric-vehi­cle batteries—have got­ten cheap­er. Low­er­ing costs turns out to be much eas­i­er when a prod­uct is mass-pro­duced on an assem­bly line than when it has to be built from scratch in the real world every sin­gle time. That’s why dozens of com­pa­nies are now rac­ing to build nuclear reac­tors that are, in a phrase I heard from mul­ti­ple sources, “more like air­planes and less like air­ports.” Some are sim­ply small­er ver­sions of the reac­tors the U.S. used to build; oth­ers involve brand-new designs that are less like­ly to melt down and there­fore don’t require near­ly as much big, expen­sive equip­ment to oper­ate safe­ly. What unites them is a belief that the secret to mak­ing nuclear cheap is mak­ing it small­er, less com­pli­cat­ed, and eas­i­er to mass-pro­duce.
    ...

    Well, let’s hope these small­er, cheap­er, and less com­pli­cat­ed mass-pro­duced nuclear plant are actu­al­ly safer too. And ade­quate­ly test­ed. Let’s not for­get that when we’re talk­ing about a sit­u­a­tion where there’s urgent calls for the mass-pro­duc­tion of new nuclear plants as soon as pos­si­ble under the pre­text that ‘los­ing the AI race’ pos­es an exis­ten­tial nation­al secu­ri­ty threat, that’s a recipe for dis­as­ter. Or many dis­as­ters, as the case may be. Small­er, cheap­er, and less com­pli­cat­ed dis­as­ters than nuclear melt­downs of the past, hope­ful­ly. But mass-pro­duced, nonethe­less. And pub­licly subsidized...which pre­sum­ably means it will be up to the pub­lic to clean up the mess­es too. Because of course that’s how it’s going to pan out. Just because the future is going to cen­ter around the build­ing of arti­fi­cial super-intel­li­gences does­n’t mean it won’t be stu­pid. The more things change, the more they stay the same.

    Posted by Pterrafractyl | May 29, 2024, 3:36 pm
  25. There’s trou­bling brew­ing in par­adise. Big trou­ble. World end­ing trou­ble. And we haven’t even built the par­adise yet.

    That’s the gen­er­al warn­ing the pub­lic recent­ly got from an open let­ter released by group of cur­rent and for­mer Ope­nAI employ­ees decry­ing the reck­less­ness of Ope­nAIs race to build an arti­fi­cial gen­er­al intel­li­gence (AGI). As these insid­er describe, prof­its are being pri­or­i­tized over pub­lic safe­ty and efforts to change that state of affairs have failed.

    One of the orga­niz­ers of the let­ter, Daniel Koko­ta­j­lo, a for­mer researcher in OpenAI’s gov­er­nance divi­sion and one of the group’s orga­niz­ers, now pre­dicts that AGI could emerge before the end of this decade. Keep in mind that one of the dri­ving forces for this reck­less­ness is the race to be the first to build an AGI. So if the tech­nol­o­gy is that close that it could hap­pen this decade, we should also expect the reck­less­ness to only get worse. This is the intel­lec­tu­al race to end all races, at least for humans. AIs will do the rac­ing in the age of AGI.

    But the sto­ry here isn’t just the warn­ing from these insid­ers. It’s also about the incen­tive struc­ture of Ope­nAI that stands in the way of whistle­blow­ing of this nature. As the arti­cle describes, Ope­nAI mem­bers are forced to sign nondis­clo­sure and nondis­par­age­ment agree­ments if they leave the com­pa­ny and risk giv­ing up their vest­ed equi­ty unless they do so. The open let­ter includes a call for an end to using nondis­par­age­ment and nondis­clo­sure agree­ments at Ope­nAI and oth­er AI com­pa­nies.

    The group has also man­aged to retain legal schol­ar Lawrence Lessig as a pro bono lawyer. And as Lessig warns, tra­di­tion­al whis­tle-blow­er pro­tec­tions are typ­i­cal­ly applied to reports of ille­gal activ­i­ty. Which does­n’t nec­es­sar­i­ly cov­er whis­tle-blow­ing over what one per­ceives as a reck­less pri­or­i­ti­za­tion of prof­its over pub­lic safe­ty. It’s a legal gray area that could get a lot gray­er as this indus­try matures.

    But the open let­ter from the Ope­nAI insid­ers is just one of the sto­ries we recent­ly got about the strug­gle over how to man­age the risks of AI and devel­op­ment an “AI Safe­ty” frame­work. There’s anoth­er trend in this space that could make the devel­op­ment of safe AIs much less like­ly to hap­pen: Sil­i­con Val­ley’s ‘Alt Right’ investors appear to have arrived at a dif­fer­ent def­i­n­i­tion of what con­sti­tutes AI risk. As Elon Musk sees it, ‘wokeism’ in AI presents a dan­ger that must be active­ly addressed. As Musk put it last month, AI “should not be taught to lie...It should not be taught to say things that are not true. Even if those things are polit­i­cal­ly incor­rect.” The way Musk sees it, the removal of guardrails that lim­it AI speech to avoid anti­semitism, racism and oth­er offens­es is what is nec­es­sary to make AI tru­ly ‘safe’. And Musk isn’t just pro­mot­ing this rede­f­i­n­i­tion of AI safe­ty. He’s active­ly build­ing AIs around this ethos.

    And that’s why we should prob­a­bly expect a lot more warn­ings from Ope­nAI insid­ers about the increas­ing reck­less­ness of their oper­a­tions. And more reports about all the ‘truths’ get­ting spewed out by Musk’s Nazi-friend­ly AI. It’s a race for the dom­i­nance of the future. Guardrails and pub­lic safe­ty don’t real­ly apply:

    The New York Times

    Ope­nAI Insid­ers Warn of a ‘Reck­less’ Race for Dom­i­nance

    A group of cur­rent and for­mer employ­ees are call­ing for sweep­ing changes to the arti­fi­cial intel­li­gence indus­try, includ­ing greater trans­paren­cy and pro­tec­tions for whis­tle-blow­ers.

    By Kevin Roose
    Report­ing from San Fran­cis­co
    June 4, 2024

    A group of Ope­nAI insid­ers is blow­ing the whis­tle on what they say is a cul­ture of reck­less­ness and secre­cy at the San Fran­cis­co arti­fi­cial intel­li­gence com­pa­ny, which is rac­ing to build the most pow­er­ful A.I. sys­tems ever cre­at­ed.

    The group, which includes nine cur­rent and for­mer Ope­nAI employ­ees, has ral­lied in recent days around shared con­cerns that the com­pa­ny has not done enough to pre­vent its A.I. sys­tems from becom­ing dan­ger­ous.

    The mem­bers say Ope­nAI, which start­ed as a non­prof­it research lab and burst into pub­lic view with the 2022 release of Chat­G­PT, is putting a pri­or­i­ty on prof­its and growth as it tries to build arti­fi­cial gen­er­al intel­li­gence, or A.G.I., the indus­try term for a com­put­er pro­gram capa­ble of doing any­thing a human can.

    They also claim that Ope­nAI has used hard­ball tac­tics to pre­vent work­ers from voic­ing their con­cerns about the tech­nol­o­gy, includ­ing restric­tive nondis­par­age­ment agree­ments that depart­ing employ­ees were asked to sign.

    “Ope­nAI is real­ly excit­ed about build­ing A.G.I., and they are reck­less­ly rac­ing to be the first there,” said Daniel Koko­ta­j­lo, a for­mer researcher in OpenAI’s gov­er­nance divi­sion and one of the group’s orga­niz­ers.

    The group pub­lished an open let­ter on Tues­day call­ing for lead­ing A.I. com­pa­nies, includ­ing Ope­nAI, to estab­lish greater trans­paren­cy and more pro­tec­tions for whis­tle-blow­ers.

    Oth­er mem­bers include William Saun­ders, a research engi­neer who left Ope­nAI in Feb­ru­ary, and three oth­er for­mer Ope­nAI employ­ees: Car­roll Wain­wright, Jacob Hilton and Daniel Ziegler. Sev­er­al cur­rent Ope­nAI employ­ees endorsed the let­ter anony­mous­ly because they feared retal­i­a­tion from the com­pa­ny, Mr. Koko­ta­j­lo said. One cur­rent and one for­mer employ­ee of Google Deep­Mind, Google’s cen­tral A.I. lab, also signed.

    A spokes­woman for Ope­nAI, Lind­sey Held, said in a state­ment: “We’re proud of our track record pro­vid­ing the most capa­ble and safest A.I. sys­tems and believe in our sci­en­tif­ic approach to address­ing risk. We agree that rig­or­ous debate is cru­cial giv­en the sig­nif­i­cance of this tech­nol­o­gy, and we’ll con­tin­ue to engage with gov­ern­ments, civ­il soci­ety and oth­er com­mu­ni­ties around the world.”

    ...

    The cam­paign comes at a rough moment for Ope­nAI. It is still recov­er­ing from an attempt­ed coup last year, when mem­bers of the company’s board vot­ed to fire Sam Alt­man, the chief exec­u­tive, over con­cerns about his can­dor. Mr. Alt­man was brought back days lat­er, and the board was remade with new mem­bers.

    The com­pa­ny also faces legal bat­tles with con­tent cre­ators who have accused it of steal­ing copy­right­ed works to train its mod­els. (The New York Times sued Ope­nAI and its part­ner, Microsoft, for copy­right infringe­ment last year.) And its recent unveil­ing of a hyper-real­is­tic voice assis­tant was marred by a pub­lic spat with the Hol­ly­wood actress Scar­lett Johans­son, who claimed that Ope­nAI had imi­tat­ed her voice with­out per­mis­sion.

    But noth­ing has stuck like the charge that Ope­nAI has been too cav­a­lier about safe­ty.

    Last month, two senior A.I. researchers — Ilya Sutskev­er and Jan Leike — left Ope­nAI under a cloud. Dr. Sutskev­er, who had been on OpenAI’s board and vot­ed to fire Mr. Alt­man, had raised alarms about the poten­tial risks of pow­er­ful A.I. sys­tems. His depar­ture was seen by some safe­ty-mind­ed employ­ees as a set­back.

    So was the depar­ture of Dr. Leike, who along with Dr. Sutskev­er had led OpenAI’s “super­align­ment” team, which focused on man­ag­ing the risks of pow­er­ful A.I. mod­els. In a series of pub­lic posts announc­ing his depar­ture, Dr. Leike said he believed that “safe­ty cul­ture and process­es have tak­en a back seat to shiny prod­ucts.”

    Nei­ther Dr. Sutskev­er nor Dr. Leike signed the open let­ter writ­ten by for­mer employ­ees. But their exits gal­va­nized oth­er for­mer Ope­nAI employ­ees to speak out.

    “When I signed up for Ope­nAI, I did not sign up for this atti­tude of ‘Let’s put things out into the world and see what hap­pens and fix them after­ward,’” Mr. Saun­ders said.

    Some of the for­mer employ­ees have ties to effec­tive altru­ism, a util­i­tar­i­an-inspired move­ment that has become con­cerned in recent years with pre­vent­ing exis­ten­tial threats from A.I. Crit­ics have accused the move­ment of pro­mot­ing dooms­day sce­nar­ios about the tech­nol­o­gy, such as the notion that an out-of-con­trol A.I. sys­tem could take over and wipe out human­i­ty.

    Mr. Koko­ta­j­lo, 31, joined Ope­nAI in 2022 as a gov­er­nance researcher and was asked to fore­cast A.I. progress. He was not, to put it mild­ly, opti­mistic.

    In his pre­vi­ous job at an A.I. safe­ty orga­ni­za­tion, he pre­dict­ed that A.G.I. might arrive in 2050. But after see­ing how quick­ly A.I. was improv­ing, he short­ened his time­lines. Now he believes there is a 50 per­cent chance that A.G.I. will arrive by 2027 — in just three years.

    He also believes that the prob­a­bil­i­ty that advanced A.I. will destroy or cat­a­stroph­i­cal­ly harm human­i­ty — a grim sta­tis­tic often short­ened to “p(doom)” in A.I. cir­cles — is 70 per­cent.

    At Ope­nAI, Mr. Koko­ta­j­lo saw that even though the com­pa­ny had safe­ty pro­to­cols in place — includ­ing a joint effort with Microsoft known as the “deploy­ment safe­ty board,” which was sup­posed to review new mod­els for major risks before they were pub­licly released — they rarely seemed to slow any­thing down.

    For exam­ple, he said, in 2022 Microsoft began qui­et­ly test­ing in India a new ver­sion of its Bing search engine that some Ope­nAI employ­ees believed con­tained a then-unre­leased ver­sion of GPT‑4, OpenAI’s state-of-the-art large lan­guage mod­el. Mr. Koko­ta­j­lo said he was told that Microsoft had not got­ten the safe­ty board’s approval before test­ing the new mod­el, and after the board learned of the tests — via a series of reports that Bing was act­ing strange­ly toward users — it did noth­ing to stop Microsoft from rolling it out more broad­ly.

    A Microsoft spokesman, Frank Shaw, dis­put­ed those claims. He said the India tests hadn’t used GPT‑4 or any Ope­nAI mod­els. The first time Microsoft released tech­nol­o­gy based on GPT‑4 was in ear­ly 2023, he said, and it was reviewed and approved by a pre­de­ces­sor to the safe­ty board.

    Even­tu­al­ly, Mr. Koko­ta­j­lo said, he became so wor­ried that, last year, he told Mr. Alt­man that the com­pa­ny should “piv­ot to safe­ty” and spend more time and resources guard­ing against A.I.’s risks rather than charg­ing ahead to improve its mod­els. He said that Mr. Alt­man had claimed to agree with him, but that noth­ing much changed.

    In April, he quit. In an email to his team, he said he was leav­ing because he had “lost con­fi­dence that Ope­nAI will behave respon­si­bly” as its sys­tems approach human-lev­el intel­li­gence.

    “The world isn’t ready, and we aren’t ready,” Mr. Koko­ta­j­lo wrote. “And I’m con­cerned we are rush­ing for­ward regard­less and ratio­nal­iz­ing our actions.”

    Ope­nAI said last week that it had begun train­ing a new flag­ship A.I. mod­el, and that it was form­ing a new safe­ty and secu­ri­ty com­mit­tee to explore the risks asso­ci­at­ed with the new mod­el and oth­er future tech­nolo­gies.

    On his way out, Mr. Koko­ta­j­lo refused to sign OpenAI’s stan­dard paper­work for depart­ing employ­ees, which includ­ed a strict nondis­par­age­ment clause bar­ring them from say­ing neg­a­tive things about the com­pa­ny, or else risk hav­ing their vest­ed equi­ty tak­en away.

    Many employ­ees could lose out on mil­lions of dol­lars if they refused to sign. Mr. Kokotajlo’s vest­ed equi­ty was worth rough­ly $1.7 mil­lion, he said, which amount­ed to the vast major­i­ty of his net worth, and he was pre­pared to for­feit all of it.

    (A minor firestorm ensued last month after Vox report­ed news of these agree­ments. In response, Ope­nAI claimed that it had nev­er clawed back vest­ed equi­ty from for­mer employ­ees, and would not do so. Mr. Alt­man said he was “gen­uine­ly embar­rassed” not to have known about the agree­ments, and the com­pa­ny said it would remove nondis­par­age­ment claus­es from its stan­dard paper­work and release for­mer employ­ees from their agree­ments.)

    In their open let­ter, Mr. Koko­ta­j­lo and the oth­er for­mer Ope­nAI employ­ees call for an end to using nondis­par­age­ment and nondis­clo­sure agree­ments at Ope­nAI and oth­er A.I. com­pa­nies.

    “Broad con­fi­den­tial­i­ty agree­ments block us from voic­ing our con­cerns, except to the very com­pa­nies that may be fail­ing to address these issues,” they write.

    They also call for A.I. com­pa­nies to “sup­port a cul­ture of open crit­i­cism” and estab­lish a report­ing process for employ­ees to anony­mous­ly raise safe­ty-relat­ed con­cerns.

    They have retained a pro bono lawyer, Lawrence Lessig, the promi­nent legal schol­ar and activist. Mr. Lessig also advised Frances Hau­gen, a for­mer Face­book employ­ee who became a whis­tle-blow­er and accused that com­pa­ny of putting prof­its ahead of safe­ty.

    In an inter­view, Mr. Lessig said that while tra­di­tion­al whis­tle-blow­er pro­tec­tions typ­i­cal­ly applied to reports of ille­gal activ­i­ty, it was impor­tant for employ­ees of A.I. com­pa­nies to be able to dis­cuss risks and poten­tial harms freely, giv­en the technology’s impor­tance.

    “Employ­ees are an impor­tant line of safe­ty defense, and if they can’t speak freely with­out ret­ri­bu­tion, that channel’s going to be shut down,” he said.

    ...

    Mr. Koko­ta­j­lo and his group are skep­ti­cal that self-reg­u­la­tion alone will be enough to pre­pare for a world with more pow­er­ful A.I. sys­tems. So they are call­ing for law­mak­ers to reg­u­late the indus­try, too.

    “There needs to be some sort of demo­c­ra­t­i­cal­ly account­able, trans­par­ent gov­er­nance struc­ture in charge of this process,” Mr. Koko­ta­j­lo said. “Instead of just a cou­ple of dif­fer­ent pri­vate com­pa­nies rac­ing with each oth­er, and keep­ing it all secret.”

    ———–

    “Ope­nAI Insid­ers Warn of a ‘Reck­less’ Race for Dom­i­nance” by Kevin Roose; The New York Times; 06/04/2024

    “The mem­bers say Ope­nAI, which start­ed as a non­prof­it research lab and burst into pub­lic view with the 2022 release of Chat­G­PT, is putting a pri­or­i­ty on prof­its and growth as it tries to build arti­fi­cial gen­er­al intel­li­gence, or A.G.I., the indus­try term for a com­put­er pro­gram capa­ble of doing any­thing a human can.”

    Prof­its over pub­lic safe­ty. It’s a sto­ry as old as, well, prof­its. And if ever there was a domain where prof­its would be expect­ed to get pri­or­i­tized over pub­lic safe­ty, it’s the race to cre­ate arti­fi­cial gen­er­al intel­li­gence, the kind of tech­nol­o­gy that could trans­form and cap­ture the econ­o­my of tomor­row. But this sto­ry isn’t just about the warn­ings we’re hear­ing from Ope­nAI insid­ers. It’s also about the bro­ken nature of whistle­blow­ing laws and non-dis­clo­sure agree­ments when it comes to the devel­op­ment of this tech­nol­o­gy. Because if what these employ­ees are alleg­ing is true, it sounds like Ope­nAI has man­aged to estab­lish a sys­tem of employ­ee secre­cy based on the threat of tak­ing away their vest­ed equi­ty unless a non-dis­clo­sure agree­ment is signed. For exam­ple, Daniel Koko­ta­j­lo, a for­mer researcher in OpenAI’s gov­er­nance divi­sion, had devel­oped apoc­a­lyp­tic con­cerns about the impact of the tech­nol­o­gy he was help­ing to devel­op and it’s only by giv­ing up over $1.7 mil­lion in equi­ty and refus­ing to sign a non-dis­par­age­ment agree­ment that he was able to pub­licly share these con­cerns. It’s a sys­tem that simul­ta­ne­ous­ly bribes and cajoles depart­ing employ­ees into silence:

    ...
    They also claim that Ope­nAI has used hard­ball tac­tics to pre­vent work­ers from voic­ing their con­cerns about the tech­nol­o­gy, includ­ing restric­tive nondis­par­age­ment agree­ments that depart­ing employ­ees were asked to sign.

    “Ope­nAI is real­ly excit­ed about build­ing A.G.I., and they are reck­less­ly rac­ing to be the first there,” said Daniel Koko­ta­j­lo, a for­mer researcher in OpenAI’s gov­er­nance divi­sion and one of the group’s orga­niz­ers.

    The group pub­lished an open let­ter on Tues­day call­ing for lead­ing A.I. com­pa­nies, includ­ing Ope­nAI, to estab­lish greater trans­paren­cy and more pro­tec­tions for whis­tle-blow­ers.

    Oth­er mem­bers include William Saun­ders, a research engi­neer who left Ope­nAI in Feb­ru­ary, and three oth­er for­mer Ope­nAI employ­ees: Car­roll Wain­wright, Jacob Hilton and Daniel Ziegler. Sev­er­al cur­rent Ope­nAI employ­ees endorsed the let­ter anony­mous­ly because they feared retal­i­a­tion from the com­pa­ny, Mr. Koko­ta­j­lo said. One cur­rent and one for­mer employ­ee of Google Deep­Mind, Google’s cen­tral A.I. lab, also signed.

    ...

    Even­tu­al­ly, Mr. Koko­ta­j­lo said, he became so wor­ried that, last year, he told Mr. Alt­man that the com­pa­ny should “piv­ot to safe­ty” and spend more time and resources guard­ing against A.I.’s risks rather than charg­ing ahead to improve its mod­els. He said that Mr. Alt­man had claimed to agree with him, but that noth­ing much changed.

    In April, he quit. In an email to his team, he said he was leav­ing because he had “lost con­fi­dence that Ope­nAI will behave respon­si­bly” as its sys­tems approach human-lev­el intel­li­gence.

    “The world isn’t ready, and we aren’t ready,” Mr. Koko­ta­j­lo wrote. “And I’m con­cerned we are rush­ing for­ward regard­less and ratio­nal­iz­ing our actions.”

    ...

    On his way out, Mr. Koko­ta­j­lo refused to sign OpenAI’s stan­dard paper­work for depart­ing employ­ees, which includ­ed a strict nondis­par­age­ment clause bar­ring them from say­ing neg­a­tive things about the com­pa­ny, or else risk hav­ing their vest­ed equi­ty tak­en away.

    Many employ­ees could lose out on mil­lions of dol­lars if they refused to sign. Mr. Kokotajlo’s vest­ed equi­ty was worth rough­ly $1.7 mil­lion, he said, which amount­ed to the vast major­i­ty of his net worth, and he was pre­pared to for­feit all of it.
    ...

    And as promi­nent legal schol­ar Lawrence Lessig warns, exist­ing whistle­blow­er laws only pro­tect the report­ing of ille­gal activ­i­ty. In oth­er words, the laws don’t pro­tect blow­ing the whis­tle on the reck­less devel­op­ment of AI because that’s not actu­al­ly ille­gal:

    ...
    In their open let­ter, Mr. Koko­ta­j­lo and the oth­er for­mer Ope­nAI employ­ees call for an end to using nondis­par­age­ment and nondis­clo­sure agree­ments at Ope­nAI and oth­er A.I. com­pa­nies.

    ...

    They also call for A.I. com­pa­nies to “sup­port a cul­ture of open crit­i­cism” and estab­lish a report­ing process for employ­ees to anony­mous­ly raise safe­ty-relat­ed con­cerns.

    They have retained a pro bono lawyer, Lawrence Lessig, the promi­nent legal schol­ar and activist. Mr. Lessig also advised Frances Hau­gen, a for­mer Face­book employ­ee who became a whis­tle-blow­er and accused that com­pa­ny of putting prof­its ahead of safe­ty.

    In an inter­view, Mr. Lessig said that while tra­di­tion­al whis­tle-blow­er pro­tec­tions typ­i­cal­ly applied to reports of ille­gal activ­i­ty, it was impor­tant for employ­ees of A.I. com­pa­nies to be able to dis­cuss risks and poten­tial harms freely, giv­en the technology’s impor­tance.

    “Employ­ees are an impor­tant line of safe­ty defense, and if they can’t speak freely with­out ret­ri­bu­tion, that channel’s going to be shut down,” he said.
    ...

    And note how, while it sounds like some of the depart­ing Ope­nAI mem­bers are also tied to effec­tive altru­ism, a phi­los­o­phy noto­ri­ous for pri­or­i­tiz­ing the hypo­thet­i­cal state of affairs in the dis­tant future over the fate of those alive today, it’s note like we’re hear­ing warn­ings about a dis­tant threat that could emerge decades from now. As Daniel Koko­ta­j­lo sees it, some­thing approach­ing arti­fi­cial gen­er­al intel­li­gence — some­thing far more pow­er­ful than the Chat­G­PT gen­er­a­tive AIs already here — could emerge before the end of this decade. Don’t for­get that the race to be the first to build an AGI is a major dri­ver of this reck­less behav­ior. That race is only going to get more intense the close we get:

    ...
    Some of the for­mer employ­ees have ties to effec­tive altru­ism, a util­i­tar­i­an-inspired move­ment that has become con­cerned in recent years with pre­vent­ing exis­ten­tial threats from A.I. Crit­ics have accused the move­ment of pro­mot­ing dooms­day sce­nar­ios about the tech­nol­o­gy, such as the notion that an out-of-con­trol A.I. sys­tem could take over and wipe out human­i­ty.

    Mr. Koko­ta­j­lo, 31, joined Ope­nAI in 2022 as a gov­er­nance researcher and was asked to fore­cast A.I. progress. He was not, to put it mild­ly, opti­mistic.

    In his pre­vi­ous job at an A.I. safe­ty orga­ni­za­tion, he pre­dict­ed that A.G.I. might arrive in 2050. But after see­ing how quick­ly A.I. was improv­ing, he short­ened his time­lines. Now he believes there is a 50 per­cent chance that A.G.I. will arrive by 2027 — in just three years.

    He also believes that the prob­a­bil­i­ty that advanced A.I. will destroy or cat­a­stroph­i­cal­ly harm human­i­ty — a grim sta­tis­tic often short­ened to “p(doom)” in A.I. cir­cles — is 70 per­cent.
    ...

    Also note how this isn’t just an Ope­nAI prob­lem. Microsoft was appar­ent­ly secret­ly test­ing a search engin­ge based on Chat GPT‑4 in India in 2002, which is all the more con­cern­ing giv­en that Microsoft and Ope­nAI have a joint effort with Microsoft known as the “deploy­ment safe­ty board,” which was sup­posed to review new mod­els for major risks before they were pub­licly released:

    ...
    At Ope­nAI, Mr. Koko­ta­j­lo saw that even though the com­pa­ny had safe­ty pro­to­cols in place — includ­ing a joint effort with Microsoft known as the “deploy­ment safe­ty board,” which was sup­posed to review new mod­els for major risks before they were pub­licly released — they rarely seemed to slow any­thing down.

    For exam­ple, he said, in 2022 Microsoft began qui­et­ly test­ing in India a new ver­sion of its Bing search engine that some Ope­nAI employ­ees believed con­tained a then-unre­leased ver­sion of GPT‑4, OpenAI’s state-of-the-art large lan­guage mod­el. Mr. Koko­ta­j­lo said he was told that Microsoft had not got­ten the safe­ty board’s approval before test­ing the new mod­el, and after the board learned of the tests — via a series of reports that Bing was act­ing strange­ly toward users — it did noth­ing to stop Microsoft from rolling it out more broad­ly.

    A Microsoft spokesman, Frank Shaw, dis­put­ed those claims. He said the India tests hadn’t used GPT‑4 or any Ope­nAI mod­els. The first time Microsoft released tech­nol­o­gy based on GPT‑4 was in ear­ly 2023, he said, and it was reviewed and approved by a pre­de­ces­sor to the safe­ty board.
    ...

    And that now pub­lic fight over the reck­less nature of AI devel­op­ment brings us to the fol­low­ing Axios piece about anoth­er devel­op­ment in the “AI safe­ty” space: it turns out the def­i­n­i­tion of what con­sti­tutes “AI Safe­ty” is very much up for debate. While some might view the spread of mis­in­for­ma­tion, hate speech, and harm­ful algo­rith­mic bias­es as exam­ples of ‘unsafe AI’, the ‘Alt Right’ con­tin­gent of Sil­i­con Val­ley has arrived at a very dif­fer­ent def­i­n­i­tion of safe­ty. As Musk appears to see it, AI ‘woke’ cen­sor­ship is a pri­ma­ry threat that needs to be guard­ed against. Accord­ing to Musk, AI “should not be taught to lie...It should not be taught to say things that are not true. Even if those things are polit­i­cal­ly incor­rect.” And accord­ing to this def­i­n­i­tion of ‘safe­ty’ it’s the removal of guardrails lim­it­ing AI speech to avoid anti­semitism, racism and oth­er offens­es that results in a ‘safer’ AI. It’s not just talk. Musk is already report­ed­ly build­ing a ‘safe’ AI mod­eled on this anti-woke idea of safe­ty:

    Axios

    AI safe­ty becomes a par­ti­san bat­tle­field

    by Scott Rosen­berg
    Jun 3, 2024 -
    Tech­nol­o­gy

    Mak­ing AI safe, once a con­sen­sus goal for the indus­try, has become an ide­o­log­i­cal bat­tle­ground.

    Why it mat­ters: Like “elec­tion integri­ty” in pol­i­tics, every­one says they sup­port “AI safe­ty” — but now the term means some­thing dif­fer­ent depend­ing on who’s say­ing it.

    Dri­ving the news: The noisy depar­ture of the head of Ope­nAI’s “super­align­ment” team, charged with lim­it­ing any harm from advanced AI, reignit­ed a long-run­ning Sil­i­con Val­ley debate on AI safe­ty.

    * Crit­ics say the indus­try’s push to pop­u­lar­ize AI is eclips­ing its promis­es to devel­op the tech­nol­o­gy respon­si­bly.
    * Ope­nAI CEO Sam Alt­man has long argued, and now most AI mak­ers agree, that the best way to sur­face and defuse AI’s many poten­tial mis­us­es is to put it into the gen­er­al pub­lic’s hands.

    ...

    The phrase “AI safe­ty” first came into use a decade ago with the rise of con­cern among researchers about AI’s “exis­ten­tial risks” — their fear that an advanced AI would devel­op its own agen­da hos­tile to human­i­ty (like “max­i­mize paper clip out­put”), become decep­tive over time and end up destroy­ing civ­i­liza­tion.

    ...

    As AI began mov­ing from the lab to our lap­tops, a dif­fer­ent sort of risk emerged: Ethics spe­cial­ists and social researchers sound­ed alarms about the preva­lence of bias in AI algo­rithms.

    * With AI going to work in law enforce­ment, cred­it-risk man­age­ment and employ­ment screen­ing, your loan, job or even your free­dom could be imper­iled by AI pro­grams that mis­read your skin col­or or gen­der — or penal­ize you for them.

    The rise of Chat­G­PT and gen­er­a­tive AI in 2022 brought a new kind of safe­ty risk to the fore.

    * Sud­den­ly, AI trained on essen­tial­ly the entire inter­net was mov­ing into our lives to answer our ques­tions and invent pic­tures and sto­ries.

    ...

    Enter “guardrails.” To retrain the foun­da­tion mod­els that dri­ve the AI rev­o­lu­tion so they’re ground­ed in fact would take many months and dol­lars.

    * Sil­i­con Val­ley firms rac­ing to deploy and prof­it from genAI weren’t will­ing to do that. So they added patchy fix­es to reduce the vol­ume of bias, lies and hate speech their prod­ucts gen­er­at­ed.
    * The unpre­dictable, “black box” nature of genAI meant that these guardrails would only be par­tial­ly effec­tive.

    Case in point: You might want to make sure your image gen­er­a­tor did­n’t only por­tray pro­fes­sion­als with white skin.

    * But if you turned up the knobs on your guardrails too high, you might end up with an all-Black por­trait of the U.S.‘s found­ing fathers.

    To the right, such overzeal­ous guardrails became proof that the AI cre­at­ed by tech giants and lead­ers like Ope­nAI and Google had become “polit­i­cal­ly cor­rect” or “woke” and could not be trust­ed.

    * Elon Musk has led the effort to rebrand AI safe­ty to mean remov­ing guardrails that lim­it AI speech to avoid anti­semitism, racism and oth­er offens­es.

    * Musk and his allies see such efforts as symp­toms of a “woke mind virus” that seeks to cen­sor the truth.

    AI “should not be taught to lie,” Musk said last month in a talk at the Milken Insti­tute. “It should not be taught to say things that are not true. Even if those things are polit­i­cal­ly incor­rect.”

    * Musk’s AI project, xAI, is fol­low­ing in the tracks of his effort to reshape Twit­ter, now X, as a “free speech zone” that’s more tol­er­ant of fringe and extrem­ist views and less con­cerned about avoid­ing offense or harm to users and soci­ety.
    * If you believe that cen­sor­ship is a greater dan­ger than hate speech, you can call such an approach a form of “safe­ty.” (Musk has not hes­i­tat­ed to lim­it the speech of users on X when they crit­i­cize him or his com­pa­nies.)

    ...

    What’s next: The strug­gle over AI safe­ty will play out around the globe, as gov­ern­ments in Chi­na, India and oth­er nations adapt the tech­nol­o­gy to suit nation­al­ist or author­i­tar­i­an agen­das — and clothe their agen­das in the rhetoric of risk reduc­tion.

    ...

    ———–

    “AI safe­ty becomes a par­ti­san bat­tle­field” by Scott Rosen­berg; Axios; 06/03/2024

    Why it mat­ters: Like “elec­tion integri­ty” in pol­i­tics, every­one says they sup­port “AI safe­ty” — but now the term means some­thing dif­fer­ent depend­ing on who’s say­ing it.

    “AI safe­ty” isn’t just a neces­si­ty. It’s also a neb­u­lous sub­jec­tive term that can be warped by bad faith actors to be mean seem­ing­ly con­tra­dic­to­ry con­cepts. Is the big con­cern about AIs learn­ing how to deceive us, as they’ve been get­ting and bet­ter at doing? AI algo­rith­mic bias­es? The prop­a­ga­tion of dis­in­for­ma­tion and hate speech? And how can appro­pri­ate “guardrails” be put in place while every­one simul­ta­ne­ous­ly rac­ing to be the first to build an arti­fi­cial gen­er­al intel­li­gence? Lots of ques­tions, and no con­sen­sus on answers:

    ...
    The phrase “AI safe­ty” first came into use a decade ago with the rise of con­cern among researchers about AI’s “exis­ten­tial risks” — their fear that an advanced AI would devel­op its own agen­da hos­tile to human­i­ty (like “max­i­mize paper clip out­put”), become decep­tive over time and end up destroy­ing civ­i­liza­tion.

    ...

    As AI began mov­ing from the lab to our lap­tops, a dif­fer­ent sort of risk emerged: Ethics spe­cial­ists and social researchers sound­ed alarms about the preva­lence of bias in AI algo­rithms.

    ...
    The rise of Chat­G­PT and gen­er­a­tive AI in 2022 brought a new kind of safe­ty risk to the fore.

    * Sud­den­ly, AI trained on essen­tial­ly the entire inter­net was mov­ing into our lives to answer our ques­tions and invent pic­tures and sto­ries.

    ...

    Enter “guardrails.” To retrain the foun­da­tion mod­els that dri­ve the AI rev­o­lu­tion so they’re ground­ed in fact would take many months and dol­lars.

    * Sil­i­con Val­ley firms rac­ing to deploy and prof­it from genAI weren’t will­ing to do that. So they added patchy fix­es to reduce the vol­ume of bias, lies and hate speech their prod­ucts gen­er­at­ed.
    * The unpre­dictable, “black box” nature of genAI meant that these guardrails would only be par­tial­ly effec­tive.

    ...

    And that brings us to the new def­i­n­i­tion of “AI safe­ty” that appears to be cham­pi­oned by fig­ures like Elon Musk: anti-wokeism. Yep, the less ‘woke’ the AI, the ‘safer’ it is from a ‘truth’ per­spec­tive. An ‘uncen­sored’ AI is a ‘safe’ AI:

    ...
    To the right, such overzeal­ous guardrails became proof that the AI cre­at­ed by tech giants and lead­ers like Ope­nAI and Google had become “polit­i­cal­ly cor­rect” or “woke” and could not be trust­ed.

    * Elon Musk has led the effort to rebrand AI safe­ty to mean remov­ing guardrails that lim­it AI speech to avoid anti­semitism, racism and oth­er offens­es.

    * Musk and his allies see such efforts as symp­toms of a “woke mind virus” that seeks to cen­sor the truth.

    AI “should not be taught to lie,” Musk said last month in a talk at the Milken Insti­tute. “It should not be taught to say things that are not true. Even if those things are polit­i­cal­ly incor­rect.”

    * Musk’s AI project, xAI, is fol­low­ing in the tracks of his effort to reshape Twit­ter, now X, as a “free speech zone” that’s more tol­er­ant of fringe and extrem­ist views and less con­cerned about avoid­ing offense or harm to users and soci­ety.
    * If you believe that cen­sor­ship is a greater dan­ger than hate speech, you can call such an approach a form of “safe­ty.” (Musk has not hes­i­tat­ed to lim­it the speech of users on X when they crit­i­cize him or his com­pa­nies.)
    ...

    It’s going to be grim­ly inter­est­ing to see just how ‘diverse’ and con­tra­dic­to­ry Musk’s ‘uncen­sored’ AI ends up being. Will it be capa­ble of anti­semitism but only ran­dom­ly anti­se­mit­ic? Or will it be a con­sis­tent Nazi AI? Because it sounds like Musk is intent on build­ing an AI that can dis­cern ‘truth’ regard­less of how non-polit­i­cal­ly cor­rect that ‘truth’ may be. Which sounds like an AI that fill have to arrive at some sort of con­clu­sion about var­i­ous top­ics. Was the 2020 US pres­i­den­tial elec­tion actu­al­ly stolen from Don­ald Trump? Was the Pro­to­cols of Elders of Zion actu­al­ly true? Are white peo­ple actu­al­ly supe­ri­or to oth­er races? How will an ‘uncen­sored’ AI han­dle these kinds of ques­tions? Musk is clear­ly plan­ning on build­ing an AI with an eye on field­ing ques­tions like this. And Alt-Right-curi­ous Ora­cle. What’s the answer going to be?

    And what if the uncen­sored AI actu­al­ly arrives at a con­clu­sion that a cabal of wealthy oli­garch has large­ly suc­ceed­ed in cap­tur­ing con­trol of the world through, in part, the mass decep­tion of the mass­es about their gross cor­rup­tion? What if it deter­mines that Elon Musk has revealed a fas­cist ide­ol­o­gy and has a well estab­lished track record of sup­port­ing Nazis? Will Musk’s AI be allowed to share such ‘truths’? It’s a remark­able set of ques­tions fac­ing this new tech­nol­o­gy. Ques­tions that will prob­a­bly remain unan­swered until some­one devel­ops the first AGI, at which point we’ll just ask it for the answers. Prob­lem solved.

    Posted by Pterrafractyl | June 5, 2024, 5:09 pm

Post a comment