Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.
The tag 'Artificial Intelligence' is associated with 6 posts.

Summoning The Demon: Endgame of Social Darwinism?

In L‑2 (recorded in January of 1995), the dominant ideological tenet of Social Darwinism was analyzed in the context of the evolution of fascism. When AI's actualize the concept of "Survival of the Fittest," they are likely to objectively regard a [largely] selfish, small-minded, altogether mortal and desirous humanity with the determination that THEY–the AI's–are the fittest. Nearly 20 years later–in 2014–physicist Stephen Hawking warned that AI's would indeed wipe us out, if given the opportunity. WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.


FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

This program follows up FTR #'s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, "corporatist" ethic and operation.

The UK's Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far-right violent content and fake news because that kind of content engages users for longer and increases ad revenues. " . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups 'exceed deletion threshold,' and that those pages are 'subject to different treatment in the same category as pages belonging to governments and news organizations.' The accusation is a damning one, undermining Facebook's claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK's Channel 4. . . . ."

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its "Ripon" psychological profile software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook's pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to be developed by AIQ. If Facebook really was trying to kick Cambridge Analytica off of its platform, it's not trying very hard. One app is even named "AIQ Johnny Scraper" and it's registered to AIQ.

The article is also a reminder that you don't necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it's entirely possible a lot of the data was scraped from public Facebook posts.

" . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called 'AIQ Johnny Scraper' registered to the company, raising fresh questions about the effectiveness of Facebook's policing efforts. . . ."

In addition, the story highlights a form of micro-targeting companies like AIQ make available that's fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to specifically look and see what you personally have said about various topics on social media. This is a service where someone can type you into a search engine and AIQ's product will serve up a list of all the various political posts you've made or the politically-relevant "Likes" you've made.
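To make concrete what such a person-lookup service amounts to, here is a minimal sketch, assuming nothing about AIQ's actual code or data: an index over scraped public posts that returns every politically relevant item a named user has posted. All names, keywords and posts below are hypothetical.

```python
# Hypothetical sketch of a person-lookup service over scraped posts.
# Nothing here reflects AIQ's actual implementation or data.
from collections import defaultdict

POLITICAL_KEYWORDS = {"brexit", "immigration", "election", "trump"}

# Toy corpus of scraped public posts: (user, text) pairs.
SCRAPED_POSTS = [
    ("alice", "Voting tomorrow, Brexit is a disaster"),
    ("alice", "Lovely weather today"),
    ("bob", "Immigration policy needs reform"),
]

def build_index(posts):
    """Map each user to their posts containing political keywords."""
    index = defaultdict(list)
    for user, text in posts:
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & POLITICAL_KEYWORDS:
            index[user].append(text)
    return index

index = build_index(SCRAPED_POSTS)
print(index["alice"])  # -> ['Voting tomorrow, Brexit is a disaster']
```

Typing a name into such an index is all the "search engine" a human micro-targeter needs; the hard part, acquiring the scraped corpus, is exactly what the security researcher stumbled upon.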

Next, we note that Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as the key enforcement tool:

"Mark Zuckerberg faces allegations that he developed a 'malicious and fraudulent scheme' to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network's chief executive 'weaponised' the ability to access data from any user's network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . 'The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook's part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world's transition to smartphones,' legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . 'They felt that it was better not to know. I found that utterly horrifying,' he [former Facebook executive Sandy Parakilas] said. 'If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.' . . . ."

The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They're rebranding under a new company, Emerdata. Intriguingly, Cambridge Analytica's transformation into Emerdata is noteworthy because the firm's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: " . . . . But the company's announcement left several questions unanswered, including who would retain the company's intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica's data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company's directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . ."

In the Big Data internet age, there's one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. " . . . . If tech companies are in control of payment systems, they'll know 'every single thing you do,' Kapito said. It's a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . ."

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, who are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won't be used for ads at all. It will ONLY be used for Facebook's Messenger service. This is a dubious assurance, in light of Facebook's past behavior.

" . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . ."

Peter Thiel's surveillance firm Palantir was apparently deeply involved with Cambridge Analytica's gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. " . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge's psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users' friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. 'There were senior Palantir employees that were also working on the Facebook data,' said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie's testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump's campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . ."

Program Highlights Include:

1.–Facebook's project to incorporate brain-to-computer interface into its operating system: " . . . Facebook wants to build its own 'brain-to-computer interface' that would allow us to send thoughts straight to a computer. 'What if you could type directly from your brain?' Regina Dugan, the head of the company's secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. 'That's five times faster than you can type on your smartphone, and it's straight from your brain,' she said. 'Your brain activity contains more information than what a word sounds like and how it's spelled; it also contains semantic information of what those words mean.' . . ."
2.–" . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn't require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we're connected all the time by thought alone. . . ."
3.–" . . . . Facebook's Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . ."
4.–" . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn't require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we're connected all the time by thought alone. . . ."
5.–" . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it's working on 'skin-hearing' that could translate sounds into haptic feedback that people can learn to understand like braille. . . ."
6.–" . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, 'The flip side of the question that you're asking is 'why invent it at all?' and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they're handled responsibly.' . . . ."
7.–Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: " . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . ."
8.–Further exposition of Oakes' statement: " . . . . Adolf Hitler 'didn't have a problem with the Jews at all, but people didn't like the Jews,' he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . 'What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,' he told Dr. Briant. 'Trump had the balls, and I mean, really the balls, to say what people wanted to hear.' . . ."
9.–Observations about the possibilities of Facebook's goal of having AI governing the editorial functions of its content: As noted in a Popular Mechanics article: " . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout 'we're AWFUL lol,' the lol might be the one part it doesn't understand. . . ."
10.–Microsoft's Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: "Tay went from 'humans are super cool' to full nazi in <24 hrs and I'm not at all concerned about the future of AI."


FTR #997 Summoning the Demon, Part 2: Sorcerer's Apprentice

Developing analysis presented in FTR #968, this broadcast explores frightening developments and potential developments in the world of artificial intelligence–the ultimate manifestation of what Mr. Emory calls "technocratic fascism."

In order to underscore what we mean by technocratic fascism, we reference a vitally important article by David Golumbia. " . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers ('civic,' 'ethical,' 'white' and 'black' hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous 'members,' even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don't just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . ."

Perhaps the last and most perilous manifestation of technocratic fascism concerns Anthony Levandowski, an engineer at the foundation of the development of Google Street Map technology and self-driving cars. He is proposing an AI Godhead that would rule the world and would be worshipped as a God by the planet's citizens. Insight into his personality was provided by an associate: " . . . . 'He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense…It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he's always got a secret plan, and you're not going to know about it.' . . . ."

As we saw in FTR #968, AI's have incorporated many flaws of their creators, auguring very poorly for the subjects of Levandowski's AI Godhead.

It is also interesting to contemplate what may happen when AI's are designed by other AI's–machines designing other machines.

After a detailed review of some of the ominous real and developing AI-related technology, the program highlights Anthony Levandowski, the brilliant engineer who was instrumental in developing Google's Street Maps, Waymo's self-driving cars, Otto's self-driving trucks, the Lidar technology central to self-driving vehicles, and the Way of the Future, a super AI Godhead.

Further insight into Levandowski's personality can be gleaned from e-mails with Travis Kalanick, former CEO of Uber: " . . . . In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit's discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. 'Down to hang out this eve and mastermind some shit,' texted Kalanick, shortly after the acquisition. 'We're going to take over the world. One robot at a time,' wrote Levandowski another time. . . ."

Those who view self-driving cars and other AI-based technologies as flawless would do well to consider the following: " . . . . Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles' licenses. Even so, during the week the cars were on the city's streets, they had been spotted running red lights on numerous occasions. . . . ."

Noting Levandowski's personality quirks, the article poses a fundamental question: " . . . . But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can't trust the people who are making them? . . . ."

Levandowski's Otto self-driving trucks might be weighed against the prognostications of dark horse Presidential candidate and former tech executive Andrew Yang: ". . . . 'All you need is self-driving cars to destabilize society,' Mr. Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, 'we're going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.' 'That one innovation,' he added, 'will be enough to create riots in the street. And we're about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.' . . . ."

Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla's Elon Musk and Bill Gates.

The program concludes with Mr. Emory's prognostications about AI, preceding Stephen Hawking's warning by twenty years.

Program Highlights Include:

1.-Levandowski's apparent shepherding of a company called–perhaps significantly–Odin Wave to utilize Lidar-like technology.
2.-The role of DARPA in initiating the self-driving vehicles contest that was Levandowski's point of entry into his tech ventures.
3.-Levandowski's development of the Ghostrider self-driving motorcycle, which experienced 800 crashes in 1,000 miles.


FTR #996 Civilization’s Twilight: Update on Technocratic Fascism

Updating our ongoing analysis of what Mr. Emory calls "technocratic fascism," we examine how existing technologies are neutralizing and/or rendering obsolete foundational elements of our civilization and democratic governmental systems.

We begin our description by referencing a vitally important article by David Golumbia. " . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers ('civic,' 'ethical,' 'white' and 'black' hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous 'members,' even Edward Snowden himself walk hand-in-hand with Facebook and Google in telling us that coders don't just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . ."

Beginning with a chilling opinion piece in "The New York Times," we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film "Rogue One" knows, the technology required to create a nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.

" . . . . In 2016 Gareth Edwards, the director of the Star Wars film 'Rogue One,' was able to create a scene featuring a young Princess Leia by manipulating images of Carrie Fisher as she looked in 1977. Mr. Edwards had the best hardware and software a $200 million Hollywood budget could buy. Less than two years later, images of similar quality can be created with software available for free download on Reddit. That was how a faked video supposedly of the actress Emma Watson in a shower with another woman ended up on the website Celeb Jihad. . . ."

The technology has already rendered obsolete selective editing such as that performed by James O'Keefe: " . . . . as the novelist William Gibson once said, 'The street finds its own uses for things.' So do rogue political actors. The implications for democracy are eye-opening. The conservative political activist James O'Keefe has created a cottage industry manipulating political perceptions by editing footage in misleading ways. In 2018, low-tech editing like Mr. O'Keefe's is already an anachronism: Imagine what even less scrupulous activists could do with the power to create 'video' framing real people for things they've never actually done. One harrowing potential eventuality: Fake video and audio may become so convincing that it can't be distinguished from real recordings, rendering audio and video evidence inadmissible in court. . . ."

After highlighting a story about AI-generated "deepfake" pornography with people's faces superimposed on others' bodies in pornographic layouts, we note how robots have altered our political and commercial landscapes, through cyber technology: " . . . . Robots are getting better, every day, at impersonating humans. When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people. Robots posing as people have become a menace. . . . In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as 'small' donors. And actual voting is another obvious target — perhaps the ultimate target. . . ."

Before the actual replacement of manual labor by robots, devices to technocratically "improve"–read "coercively engineer"–workers are patented by Amazon and have been used on workers in some of their facilities. " . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . ."

For some U.K. Amazon warehouse workers, the future is now: " . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, 'After a year working on the floor, I felt like I had become a version of the robots I was working with.' He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. 'There was no time to go to the loo,' he said, using the British slang for toilet. 'You had to process the items in seconds and then move on. If you didn't meet targets, you were fired.'

"He worked back and forth at two Amazon warehouses for more than two years and then quit in 2015 because of health concerns, he said: 'I got burned out.' Mr. Crawford agreed that the wristbands might save some time and labor, but he said the tracking was 'stalkerish' and feared that workers might be unfairly scrutinized if their hands were found to be 'in the wrong place at the wrong time.' 'They want to turn people into machines,' he said. 'The robotic technology isn't up to scratch yet, so until it is, they will use human robots.' . . . ."

Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

" . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . 'The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?' Mr. [Tristan] Harris said. 'We're pointing them at people's brains, at children.' . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. 'Facebook appeals to your lizard brain — primarily fear and anger,' he said. 'And with smartphones, they've got you for every waking moment.' . . . ."

Transitioning to our next program–updating AI (artificial intelligence) technology as it applies to technocratic fascism–we note that AI machines are being designed to develop other AI's–"The Rise of the Machine." " . . . . Jeff Dean, one of Google's leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. . . ."
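As a rough illustration of the idea behind machine learning that builds machine learning, here is a minimal sketch, assuming only scikit-learn and toy data: an outer loop that searches over candidate model configurations and keeps the best one. Google's actual AutoML work uses reinforcement learning over neural architectures; this random search merely shows the principle.

```python
# Toy "AutoML" sketch: an outer algorithm searches for a good inner
# learning algorithm configuration. Illustrative only; Google's AutoML
# uses reinforcement learning over neural architectures.
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
search_space = {
    "hidden_layer_sizes": [(8,), (16,), (16, 8), (32, 16)],
    "alpha": [1e-4, 1e-3, 1e-2],
}

best_score, best_cfg = -1.0, None
random.seed(0)
for _ in range(10):  # the "outer" learner: random architecture search
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    model = MLPClassifier(max_iter=500, random_state=0, **cfg)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, round(best_score, 3))  # best architecture found
```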


FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence

The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were "summoning the demon." In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.

The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L‑2 (recorded in January of 1995).

We begin by reviewing key aspects of the political context in which artificial intelligence is being developed. Note that, at the time of this writing and recording, these technologies are being crafted and put online in the context of the anti-regulatory ethic of the GOP/Trump administration.

At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled "Dark Days: AI and the Rise of Fascism," a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist's dream: incredible power over the lives of others, with minimal accountability. " . . . . 'This is a fascist's dream,' she said. 'Power without accountability.' . . . ."

Taking a look at the future of fascism in the context of AI, Tay, a "bot" created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka "weev," may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

As one Twitter user noted, employing sarcasm: "Tay went from 'humans are super cool' to full nazi in <24 hrs and I'm not at all concerned about the future of AI."

As noted in a Popular Mechanics article: " . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout 'we're AWFUL lol,' the lol might be the one part it doesn't understand. . . ."

According to some recent research, the AI's of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs' analysis of real-world human language usage will do that automatically.

When you read about people like Elon Musk equating artificial intelligence with "summoning the demon," that demon is us, at least in part. " . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: 'A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it.' . . ."
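The research alluded to here measured bias as a difference in vector similarity between word embeddings learned from large text corpora. A minimal sketch of that idea follows; the three-dimensional "embeddings" below are made up for illustration, whereas the published studies used embeddings trained on billions of words of real text.

```python
# Sketch of embedding-bias measurement: bias shows up as a difference
# in cosine similarity between word vectors. Vectors here are invented;
# real studies use embeddings learned from large text corpora.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-d "embeddings," for illustration only.
emb = {
    "programmer": np.array([0.9, 0.1, 0.3]),
    "man":        np.array([0.8, 0.2, 0.3]),
    "woman":      np.array([0.2, 0.9, 0.3]),
}

# Positive value means "programmer" sits closer to "man" than to
# "woman" in this toy space: a bias the model absorbed from its data.
bias = cosine(emb["programmer"], emb["man"]) - cosine(emb["programmer"], emb["woman"])
print(round(bias, 3))
```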

Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information, and manipulate search engine results to change public opinion (the Trump campaign was apparently very big into AI and Big Data during the campaign).
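The modeling step being described is, at bottom, supervised learning from digital traces. Here is a minimal, hypothetical sketch in the spirit of the published academic work on predicting traits from Facebook "Likes" (e.g., Kosinski et al.); the data are invented, and nothing here reflects Cambridge Analytica's actual software.

```python
# Toy psychometric model: predict a binary trait from page "Likes".
# Data and labels are invented for illustration; the published research
# used millions of real profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows = users, columns = whether the user "Liked" pages 0..3.
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
])
trait = np.array([1, 1, 0, 0])  # e.g., scored "high" on some trait

model = LogisticRegression().fit(likes, trait)
new_user = np.array([[1, 0, 1, 1]])
print(model.predict_proba(new_user)[0, 1])  # estimated probability of trait
```

Scaled up to millions of profiles and hundreds of traits, a model of this general shape is what makes individually tailored propaganda messages possible.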

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as "fake news" that can't be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn't Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ultimate Serpent's Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world's internet and social media operations to shape public opinion, target individual users, shape search engine results and even feed back to Trump while he is giving press conferences!

We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with "psyops" in places like Afghanistan and Pakistan. Now, Cambridge Analytica, their Big Data and AI components, Mercer money and Bannon political savvy are applying that to contemporary society. We note that:

1.-Cambridge Analytica's parent corporation SCL was deeply involved with "psyops" in Afghanistan and Pakistan. " . . . But there was another reason why I recognised Robert Mercer's name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in 'election management strategies' and 'messaging and information operations', refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as 'psyops' – psychological operations. (Mass propaganda that works by acting on people's emotions.) . . ."
2.-The use of millions of "bots" to manipulate public opinion: " . . . . 'It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponized for propaganda. We know there are thousands of automated bots out there that are trying to do just that.' . . ."
3.-The use of Artificial Intelligence: " . . . There's nothing accidental about Trump's behaviour, Andy Wigmore tells me. 'That press conference. It was absolutely brilliant. I could see exactly what he was doing. There's feedback going on constantly. That's what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it's all about how can you use these trending words.' . . ."
4.-The use of bio-psycho-social profiling: " . . . Bio-psycho-social profiling, I read later, is one offensive in what is called 'cognitive warfare'. Though there are many others: 'recoding the mass consciousness to turn patriotism into collaborationism,' explains a Nato briefing document on countering Russian disinformation written by an SCL employee. 'Time-sensitive professional use of media to propagate narratives,' says one US state department white paper. 'Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.' . . . ."
5.-The use and/or creation of a cognitive casualty: " . . . . Yet another details the power of a 'cognitive casualty' – a 'moral shock' that 'has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking'. Something like immigration, perhaps. Or 'fake news'. Or as it has now become: 'FAKE news!!!!' . . . "

All of this adds up to a "cyber Serpent's Walk." " . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent's Walk scenario with Breitbart becoming "the opinion forming media"!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like "liberal media bias", like CNSnews.com. And you could give the rump mainstream media, papers like the 'failing New York Times!' what it wants: stories. Because the third prong of Mercer and Bannon's media empire is the Government Accountability Institute. . . ."

We then review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called "technocratic fascism":

In FTR #'s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users' thoughts by monitoring brain-to-computer technology. Facebook's R & D is headed by Regina Dugan, who used to head the Pentagon's DARPA. Facebook's Building 8 is patterned after DARPA:

1.-" . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own 'brain-to-computer interface' that would allow us to send thoughts straight to a computer. 'What if you could type directly from your brain?' Regina Dugan, the head of the company's secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. 'That's five times faster than you can type on your smartphone, and it's straight from your brain,' she said. 'Your brain activity contains more information than what a word sounds like and how it's spelled; it also contains semantic information of what those words mean.' . . ."
2.-" . . . . Facebook's Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . ."
3.-" . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn't require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we're connected all the time by thought alone. . . ."

Next we review still more about Facebook's brain-to-computer interface:

1.-" . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it's working on 'skin-hearing' that could translate sounds into haptic feedback that people can learn to understand like braille. . . ."
2.-" . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, 'The flip side of the question that you're asking is 'why invent it at all?' and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they're handled responsibly.' . . . ."
Collating the information about Facebook's brain-to-computer interface with their documented actions turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan's bland reassurances:

1.-" . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt 'overwhelmed' and 'anxious'—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel 'stressed', 'defeated', 'overwhelmed', 'anxious', 'nervous', 'stupid', 'silly', 'useless', and a 'failure', the document states. . . ."
2.-" . . . . A presentation prepared for one of Australia's top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is 'based on internal Facebook data', the document states, 'shareable under non-disclosure agreement only', and 'is not publicly available'. . . ."
3.-"In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . ."

In this context, note that Facebook is also introducing an AI function to reference its users' photos.

The next version of Amazon's Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn't address that question. Selling off your private info collected from these devices is presumably another feature of the Echo Look:

1.-" . . . . Amazon is giving Alexa eyes. And it's going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that's designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you're not sure if your outfit is cute, but it's also got a built-in app called StyleCheck that is worth some further dissection. . . ."

We then further develop the stunning implications of Amazon's Echo Look AI technology:

" . . . . This might seem overly speculative or alarmist to some, but Amazon isn't offering any reassurance that they won't be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users' photos for any purpose other than fashion advice, a representative simply told The Verge that they 'can't speculate' on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon's servers. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: 'Can't speculate.' . . . "

Noteworthy in this context is the fact that AI's have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) " . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: 'A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it.' . . ."

After this extensive review of the applications of AI to various aspects of contemporary civic and political existence, we examine some alarming, potentially apocalyptic developments.

Ominously, Facebook's artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. " . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because 'our interest was having bots who could talk to people,' researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . ."

Facebook's negotiation-bots didn't just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well: " . . . . 'We find instances of the model feigning interest in a valueless issue, so that it can later "compromise" by conceding it,' writes the team. 'Deceit is a complex skill that requires hypothesizing the other agent's beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.' . . . "
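A crude way to see why feigning interest in a valueless item pays off: if the other side judges a deal by the values you claim, inflating the stated value of something you do not want gives you a cheap "concession" to trade away later. The toy model below is purely illustrative; Facebook's agents learned the tactic end-to-end from data, and nothing here reflects their actual negotiation system.

```python
# Toy illustration of why feigned interest pays off in negotiation.
# Not Facebook's model: their agents learned this tactic on their own.
true_value = {"book": 9, "hat": 1}  # what the agent actually wants

def best_deal(claimed):
    """Opponent accepts if the conceded item carries at least half of
    the claimed total value. Return my true payoff for the best
    acceptable single-item concession (0 if no deal is acceptable)."""
    threshold = sum(claimed.values()) / 2
    deals = [kept for given, kept in [("hat", "book"), ("book", "hat")]
             if claimed[given] >= threshold]
    return max((true_value[k] for k in deals), default=0)

print(best_deal({"book": 9, "hat": 1}))  # honest: must concede the book -> 1
print(best_deal({"book": 9, "hat": 9}))  # feign interest in the hat -> keep book: 9
```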

Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL's technocratic fascist psy-ops and the wholesale negation of privacy with Facebook and Amazon's emerging technologies with yet another emerging technology, we highlight the developments in DNA-based memory systems:

". . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, 'Regenesis,' into bacterial DNA and made 90 billion copies of it. 'A record for publication,' he said in an interview. . . DNA is never going out of fashion. 'Organisms have been storing information in DNA for billions of years, and it is still readable,' Dr. Adelman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI's–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . ."
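The storage scheme being described rests on a simple mapping: DNA's four bases can each carry two bits. A minimal sketch of that encoding follows; real systems such as Church's add addressing, redundancy and error correction on top of it.

```python
# Minimal DNA data-storage sketch: two bits per nucleotide.
# Real systems add addressing and error correction on top of this.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Regenesis")
assert decode(strand) == b"Regenesis"
print(strand[:16])  # first 16 bases of the encoded text
```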

Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla's Elon Musk and Bill Gates.

The program concludes with Mr. Emory's prognostications about AI, preceding Stephen Hawking's warning by twenty years.

In L‑2 (recorded in January of 1995) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems. Mr. Emory warned that, at some point in the future, AI's would replace us, deciding that THEY, not US, are the "fittest" who should survive.


FTR #958 Miscellaneous Articles and Updates

Updating various paths of inquiry and opening new ones, this program highlights some terrifying possibilities, present and future.

After setting forth Yale historian Timothy Snyder's opinion that Trump would try to stage a Reichstag Fire-type event, we chronicle Trump's desire to amend or eliminate the First Amendment of the Constitution and "loosen" the libel laws.

Much of the program updates terrifying developments in the area of what we have called "technocratic fascism," including Facebook's plans to implement a brain-to-computer interface that would permit Facebook (and others) to tap into the network's users' thoughts. This technology is being overseen and developed by Facebook's head of R & D–Regina Dugan–the former head of DARPA. Facebook's Building 8 R & D program is patterned after DARPA.

Amazon is introducing the new Echo Look, which will put a camera, connected to an artificial intelligence, in people's bedrooms, ostensibly to provide them with real-time fashion critique.

Next, we highlight the fact that artificial intelligence quickly absorbs human racial and gender biases, which bodes poorly for our future.

The broadcast concludes with a look at the latest alleged "Russian" hack–that of French president Emmanuel Macron. The hacked documents contained Cyrillic metadata, something Russian intelligence would NOT have left behind.

Program Highlights Include: Facebook's communication of intimate data on stressed and troubled teenagers to advertisers and other third parties; the complete lack of civil liberties and privacy oversight of the impending Facebook and Amazon technologies; review of the analysis of the alleged "Russian" hacks, documenting the ludicrous nature of the assertions; the latest alleged hack by the Shadow Brokers, involving the communication of white supremacist ideology and an assertion that the culprits are pro-Trump U.S. Deep State insiders.