- Spitfire List - http://spitfirelist.com -

FTR #996 Civilization’s Twilight: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE [1]. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE [2].

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE [3].

You can subscribe to the RSS feed from Spitfirelist.com HERE [3].

You can sub­scribe to the com­ments made on pro­grams and posts–an excel­lent source of infor­ma­tion in, and of, itself HERE [4].

This broad­cast was record­ed in one, 60-minute seg­ment [5].

[6]Intro­duc­tion: Updat­ing our ongo­ing analy­sis of what Mr. Emory calls “tech­no­crat­ic fas­cism,” we exam­ine how exist­ing tech­nolo­gies are neu­tral­iz­ing and/or ren­der­ing obso­lete foun­da­tion­al ele­ments of our civ­i­liza­tion and demo­c­ra­t­ic gov­ern­men­tal sys­tems.

For purposes of refreshing the line of argument presented here, we reference a vitally important article by David Golumbia. [7] ” . . . . Such technocratic beliefs are widespread in our world today, especially in the enclaves of digital enthusiasts, whether or not they are part of the giant corporate-digital leviathan. Hackers (‘civic,’ ‘ethical,’ ‘white’ and ‘black’ hat alike), hacktivists, WikiLeaks fans [and Julian Assange et al–D. E.], Anonymous ‘members,’ even Edward Snowden himself [8] walk hand-in-hand with Facebook and Google in telling us that coders don’t just have good things to contribute to the political world, but that the political world is theirs to do with what they want, and the rest of us should stay out of it: the political world is broken, they appear to think (rightly, at least in part), and the solution to that, they think (wrongly, at least for the most part), is for programmers to take political matters into their own hands. . . . [Tor co-creator] Dingledine asserts that a small group of software developers can assign to themselves that role, and that members of democratic polities have no choice but to accept them having that role. . . .”

Beginning with a chilling opinion piece in the New York Times, we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows, the technology required to create nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.

” . . . . In 2016 Gareth Edwards, the direc­tor of the Star Wars film ‘Rogue One,’ was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad. . . .”

[9]The tech­nol­o­gy has already ren­dered obso­lete selec­tive edit­ing such as that per­formed by James O’Keefe: ” . . . . as the nov­el­ist William Gib­son once said, ‘The street finds its own uses for things.’ So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing. The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate ‘video’ fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court. . . .”

After high­light­ing a sto­ry about AI-gen­er­at­ed “deep­fake” pornog­ra­phy with peo­ple’s faces super­im­posed on oth­ers’ bod­ies in porno­graph­ic lay­outs, we note how robots have altered our polit­i­cal and com­mer­cial land­scapes, through cyber tech­nol­o­gy: ” . . . . Robots are get­ting bet­ter, every day, at imper­son­at­ing humans. When direct­ed by oppor­tunists, male­fac­tors and some­times even nation-states, they pose a par­tic­u­lar threat to demo­c­ra­t­ic soci­eties, which are premised on being open to the peo­ple. Robots pos­ing as peo­ple have become a men­ace. . . . In com­ing years, cam­paign finance lim­its will be (and maybe already are) evad­ed by robot armies pos­ing as ‘small’ donors. And actu­al vot­ing is anoth­er obvi­ous tar­get — per­haps the ulti­mate tar­get. . . .”

Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities. ” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

“He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: ‘I got burned out.’ Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was ‘stalk­er­ish’ and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn peo­ple into machines,’ he said. ‘The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

Tran­si­tion­ing to our next program–updating AI (arti­fi­cial intel­li­gence) tech­nol­o­gy as it applies to tech­no­crat­ic fascism–we note that AI machines are being designed to devel­op oth­er AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data. AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. . . .”

[10]

Pro and Con on the sub­ject of Arti­fi­cial Intel­li­gence

1. There was a chilling recent opinion piece in the New York Times. Technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows well, the technology required to create nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.

” . . . . In 2016 Gareth Edwards, the direc­tor of the Star Wars film ‘Rogue One,’ was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad. . . .”

The tech­nol­o­gy has already ren­dered obso­lete selec­tive edit­ing such as that per­formed by James O’Keefe: ” . . . . as the nov­el­ist William Gib­son once said, ‘The street finds its own uses for things.’ So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing. The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate ‘video’ fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court. . . .”

“Our Hack­able Polit­i­cal Future” by Hen­ry J. Far­rell and Rick Perl­stein; The New York Times; 02/04/2018 [11]

Imag­ine it is the spring of 2019. A bot­tom-feed­ing web­site, per­haps tied to Rus­sia, “sur­faces” video of a sex scene star­ring an 18-year-old Kirsten Gilli­brand. It is soon debunked as a fake, the prod­uct of a user-friend­ly video appli­ca­tion that employs gen­er­a­tive adver­sar­i­al net­work tech­nol­o­gy to con­vinc­ing­ly swap out one face for anoth­er.

It is the sum­mer of 2019, and the sto­ry, pre­dictably, has stuck around — part talk-show joke, part right-wing talk­ing point. “It’s news,” polit­i­cal jour­nal­ists say in their own defense. “Peo­ple are talk­ing about it. How can we not?”

Then it is fall. The junior sen­a­tor from New York State announces her cam­paign for the pres­i­den­cy. At a din­er in New Hamp­shire, one “low infor­ma­tion” vot­er asks anoth­er: “Kirsten What’s‑her-name? She’s run­ning for pres­i­dent? Didn’t she have some­thing to do with pornog­ra­phy?”

Wel­come to the shape of things to come. In 2016 Gareth Edwards, the direc­tor of the Star Wars film “Rogue One,” was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad.

Pro­grams like these have many legit­i­mate appli­ca­tions. They can help com­put­er-secu­ri­ty experts probe for weak­ness­es in their defens­es and help self-dri­ving cars learn how to nav­i­gate unusu­al weath­er con­di­tions. But as the nov­el­ist William Gib­son once said, “The street finds its own uses for things.” So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing.

The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate “video” fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court.

A pro­gram called Face2Face, devel­oped at Stan­ford, films one per­son speak­ing, then manip­u­lates that person’s image to resem­ble some­one else’s. Throw in voice manip­u­la­tion tech­nol­o­gy, and you can lit­er­al­ly make any­one say any­thing — or at least seem to.

The tech­nol­o­gy isn’t quite there; Princess Leia was a lit­tle wood­en, if you looked care­ful­ly. But it’s clos­er than you might think. And even when fake video isn’t per­fect, it can con­vince peo­ple who want to be con­vinced, espe­cial­ly when it rein­forces offen­sive gen­der or racial stereo­types.

In 2007, Barack Obama’s polit­i­cal oppo­nents insist­ed that footage exist­ed of Michelle Oba­ma rant­i­ng against “whitey.” In the future, they may not have to wor­ry about whether it actu­al­ly exist­ed. If some­one called their bluff, they may sim­ply be able to invent it, using data from stock pho­tos and pre-exist­ing footage.

The next step would be one we are already famil­iar with: the exploita­tion of the algo­rithms used by social media sites like Twit­ter and Face­book to spread sto­ries viral­ly to those most inclined to show inter­est in them, even if those sto­ries are fake.

It might be impos­si­ble to stop the advance of this kind of tech­nol­o­gy. But the rel­e­vant algo­rithms here aren’t only the ones that run on com­put­er hard­ware. They are also the ones that under­gird our too eas­i­ly hacked media sys­tem, where garbage acquires the per­fumed scent of legit­i­ma­cy with all too much ease. Edi­tors, jour­nal­ists and news pro­duc­ers can play a role here — for good or for bad.

Out­lets like Fox News spread sto­ries about the mur­der of Demo­c­ra­t­ic staff mem­bers and F.B.I. con­spir­a­cies to frame the pres­i­dent. Tra­di­tion­al news orga­ni­za­tions, fear­ing that they might be left behind in the new atten­tion econ­o­my, strug­gle to max­i­mize “engage­ment with con­tent.”

This gives them a built-in incen­tive to spread infor­ma­tion­al virus­es that enfee­ble the very demo­c­ra­t­ic insti­tu­tions that allow a free media to thrive. Cable news shows con­sid­er it their pro­fes­sion­al duty to pro­vide “bal­ance” by giv­ing par­ti­san talk­ing heads free rein to spout non­sense — or ampli­fy the non­sense of our cur­rent pres­i­dent.

It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds.
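For readers who want a concrete sense of the “generative adversarial network technology” the authors mention, the sketch below shows the adversarial training loop in miniature, using toy one-dimensional data in place of face images. It is a minimal sketch that assumes PyTorch is installed; all of the names, numbers and the toy data distribution are illustrative inventions, not taken from any actual face-swapping tool.

```python
# Minimal sketch of the adversarial training idea behind "generative adversarial
# networks" (GANs): a generator learns to produce samples a discriminator cannot
# tell apart from real data. Toy 1-D data stands in for face images here.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # "Real" data: samples from a Gaussian centered at 4.0 (a stand-in for real images).
    return torch.randn(n, 1) * 0.5 + 4.0

for step in range(2000):
    # 1) Train the discriminator to label real data 1 and generated data 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator call its output "real".
    g_loss = bce(discriminator(generator(torch.randn(64, 8))), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples cluster near the "real" mean of 4.0.
print("mean of generated samples:", generator(torch.randn(256, 8)).mean().item())
```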

2. Both Twit­ter and Porn­Hub, the online pornog­ra­phy giant, are already tak­ing action to remove numer­ous “Deep­fake” videos of celebri­ties being super-imposed onto porn actors in response to the flood of such videos that are already being gen­er­at­ed [12].

“Porn­Hub, Twit­ter Ban ‘Deep­fake’ AI-Mod­i­fied Porn” by Angela Moscar­i­to­lo; PC Mag­a­zine; 02/07/2018 [12].

It might be kind of com­i­cal to see Nico­las Cage’s face on the body of a woman, but expect to see less of this type of con­tent float­ing around on Porn­Hub and Twit­ter in the future.

As Moth­er­board first report­ed [13], both sites are tak­ing action against arti­fi­cial intel­li­gence-pow­ered pornog­ra­phy, known as “deep­fakes.”

Deep­fakes, for the unini­ti­at­ed, are porn videos cre­at­ed by using a machine learn­ing algo­rithm to match someone’s face to anoth­er person’s body. Loads of celebri­ties have had their faces used in porn scenes with­out their con­sent, and the results are almost flaw­less. Check out the SFW exam­ple below for a bet­ter idea of what we’re talk­ing about.
[see chill­ing­ly real­is­tic video of Nico­las Cage’s head on a woman’s body [14]]
In a state­ment to PCMag on Wednes­day, Porn­Hub Vice Pres­i­dent Corey Price said the com­pa­ny in 2015 intro­duced a sub­mis­sion form, which lets users eas­i­ly flag non­con­sen­su­al con­tent like revenge porn for removal. Peo­ple have also start­ed using that tool to flag deep­fakes, he said.

The com­pa­ny still has a lot of clean­ing up to do. Moth­er­board report­ed there are still tons of deep­fakes on Porn­Hub.

“I was able to eas­i­ly find dozens of deep­fakes post­ed in the last few days, many under the search term ‘deep­fakes’ or with deep­fakes and the name of celebri­ties in the title of the video,” Motherboard’s Saman­tha Cole wrote.

Over on Twit­ter, mean­while, users can now be sus­pend­ed for post­ing deep­fakes and oth­er non­con­sen­su­al porn.

“We will sus­pend any account we iden­ti­fy as the orig­i­nal poster of inti­mate media that has been pro­duced or dis­trib­uted with­out the subject’s con­sent,” a Twit­ter spokesper­son told Moth­er­board [15]. “We will also sus­pend any account ded­i­cat­ed to post­ing this type of con­tent.”

The site report­ed that Dis­cord and Gfy­cat take a sim­i­lar stance on deep­fakes. For now, these types of videos appear to be pri­mar­i­ly cir­cu­lat­ing via Red­dit, where the deep­fake com­mu­ni­ty cur­rent­ly boasts around 90,000 sub­scribers.
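The “machine learning algorithm to match someone’s face to another person’s body” described above is commonly reported to be an autoencoder with one shared encoder and a separate decoder per identity; swapping faces then amounts to encoding a frame of person A and decoding it with person B’s decoder. The following is a minimal sketch of that idea under that assumption, with random tensors standing in for aligned face crops; it is an illustration, not any specific tool’s code.

```python
# Hedged sketch of the face-swap idea commonly attributed to "deepfakes":
# a shared encoder learns features common to both faces, while a separate
# decoder is trained per identity. Swap = encode person A, decode with B's decoder.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened face crop (hypothetical size)

encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 64))
decoder_a = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))
decoder_b = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

faces_a = torch.rand(32, IMG)  # placeholder for person A's aligned face crops
faces_b = torch.rand(32, IMG)  # placeholder for person B's aligned face crops

for step in range(200):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = mse(decoder_a(encoder(faces_a)), faces_a) + \
           mse(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

# "Swap": run person A's frame through the shared encoder, then B's decoder.
swapped = decoder_b(encoder(faces_a[:1]))
print(swapped.shape)  # torch.Size([1, 12288])
```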

3. No “ifs,” “ands,” or “bots!”  ” . . . . Robots are get­ting bet­ter, every day, at imper­son­at­ing humans. When direct­ed by oppor­tunists, male­fac­tors and some­times even nation-states, they pose a par­tic­u­lar threat to demo­c­ra­t­ic soci­eties, which are premised on being open to the peo­ple. Robots pos­ing as peo­ple have become a men­ace. . . . In com­ing years, cam­paign finance lim­its will be (and maybe already are) evad­ed by robot armies pos­ing as ‘small’ donors. And actu­al vot­ing is anoth­er obvi­ous tar­get — per­haps the ulti­mate tar­get. . . .”

“Please Prove You’re Not a Robot” by Tim Wu; The New York Times [16]; 7/16/2017; p. 8 (Review Sec­tion). [16]

 When sci­ence fic­tion writ­ers first imag­ined robot inva­sions, the idea was that bots would become smart and pow­er­ful enough to take over the world by force, whether on their own or as direct­ed by some evil­do­er. In real­i­ty, some­thing only slight­ly less scary is hap­pen­ing.

Robots are get­ting bet­ter, every day, at imper­son­at­ing humans. When direct­ed by oppor­tunists, male­fac­tors and some­times even nation-states, they pose a par­tic­u­lar threat to demo­c­ra­t­ic soci­eties, which are premised on being open to the peo­ple.

Robots pos­ing as peo­ple have become a men­ace. For pop­u­lar Broad­way shows (need we say “Hamil­ton”?), it is actu­al­ly bots, not humans, who do much and maybe most of the tick­et buy­ing. Shows sell out imme­di­ate­ly, and the mid­dle­men (quite lit­er­al­ly, evil robot mas­ters) reap mil­lions in ill-got­ten gains.

Philip Howard, who runs the Com­pu­ta­tion­al Pro­pa­gan­da Research Project at Oxford, stud­ied the deploy­ment of pro­pa­gan­da bots dur­ing vot­ing on Brex­it, and the recent Amer­i­can and French pres­i­den­tial elec­tions. Twit­ter is par­tic­u­lar­ly dis­tort­ed by its mil­lions of robot accounts; dur­ing the French elec­tion, it was prin­ci­pal­ly Twit­ter robots who were try­ing to make #Macron­Leaks into a scan­dal. Face­book has admit­ted it was essen­tial­ly hacked dur­ing the Amer­i­can elec­tion in Novem­ber. In Michi­gan, Mr. Howard notes, “junk news was shared just as wide­ly as pro­fes­sion­al news in the days lead­ing up to the elec­tion.”

Robots are also being used to attack the demo­c­ra­t­ic fea­tures of the admin­is­tra­tive state. This spring, the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion put its pro­posed revo­ca­tion of net neu­tral­i­ty up for pub­lic com­ment. In pre­vi­ous years such pro­ceed­ings attract­ed mil­lions of (human) com­men­ta­tors. This time, some­one with an agen­da but no actu­al pub­lic sup­port unleashed robots who imper­son­at­ed (via stolen iden­ti­ties) hun­dreds of thou­sands of peo­ple, flood­ing the sys­tem with fake com­ments against fed­er­al net neu­tral­i­ty rules.

To be sure, today’s imper­son­ation-bots are dif­fer­ent from the robots imag­ined in sci­ence fic­tion: They aren’t sen­tient, don’t car­ry weapons and don’t have phys­i­cal bod­ies. Instead, fake humans just have what­ev­er is nec­es­sary to make them seem human enough to “pass”: a name, per­haps a vir­tu­al appear­ance, a cred­it-card num­ber and, if nec­es­sary, a pro­fes­sion, birth­day and home address. They are brought to life by pro­grams or scripts that give one per­son the pow­er to imi­tate thou­sands.

The prob­lem is almost cer­tain to get worse, spread­ing to even more areas of life as bots are trained to become bet­ter at mim­ic­k­ing humans. Giv­en the degree to which prod­uct reviews have been swamped by robots (which tend to hand out five stars with aban­don), com­mer­cial sab­o­tage in the form of neg­a­tive bot reviews is not hard to pre­dict.

In com­ing years, cam­paign finance lim­its will be (and maybe already are) evad­ed by robot armies pos­ing as “small” donors. And actu­al vot­ing is anoth­er obvi­ous tar­get — per­haps the ulti­mate tar­get. So far, we’ve been con­tent to leave the prob­lem to the tech indus­try, where the focus has been on build­ing defens­es, usu­al­ly in the form of Captchas (“com­plete­ly auto­mat­ed pub­lic Tur­ing test to tell com­put­ers and humans apart”), those annoy­ing “type this” tests to prove you are not a robot. But leav­ing it all to indus­try is not a long-term solu­tion.

For one thing, the defens­es don’t actu­al­ly deter imper­son­ation bots, but per­verse­ly reward who­ev­er can beat them. And per­haps the great­est prob­lem for a democ­ra­cy is that com­pa­nies like Face­book and Twit­ter lack a seri­ous finan­cial incen­tive to do any­thing about mat­ters of pub­lic con­cern, like the mil­lions of fake users who are cor­rupt­ing the demo­c­ra­t­ic process.

Twit­ter esti­mates at least 27 mil­lion prob­a­bly fake accounts; researchers sug­gest the real num­ber is clos­er to 48 mil­lion, yet the com­pa­ny does lit­tle about the prob­lem. The prob­lem is a pub­lic as well as pri­vate one, and imper­son­ation robots should be con­sid­ered what the law calls “hostis humani gener­is”: ene­mies of mankind, like pirates and oth­er out­laws. That would allow for a bet­ter offen­sive strat­e­gy: bring­ing the pow­er of the state to bear on the peo­ple deploy­ing the robot armies to attack com­merce or democ­ra­cy.

The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.

Using robots to fake sup­port, steal tick­ets or crash democ­ra­cy real­ly is the kind of evil that sci­ence fic­tion writ­ers were warn­ing about. The use of robots takes advan­tage of the fact that polit­i­cal cam­paigns, elec­tions and even open mar­kets make human­is­tic assump­tions, trust­ing that there is wis­dom or at least legit­i­ma­cy in crowds and val­ue in pub­lic debate. But when sup­port and opin­ion can be man­u­fac­tured, bad or unpop­u­lar argu­ments can win not by log­ic but by a nov­el, dan­ger­ous form of force — the ulti­mate threat to every democ­ra­cy.
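One modest illustration of how mass-produced “robot” comments of the kind Wu describes can sometimes be spotted: many are near-duplicates of one another. The toy heuristic below flags pairs of comments whose word shingles overlap heavily. Real bot detection relies on far more than text (account metadata, timing, network structure); the sample comments and the threshold here are invented purely for illustration.

```python
# Toy illustration of flagging mass-produced bot comments via near-duplicate
# text detection (Jaccard similarity over word shingles). Not a real defense.
def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

comments = [
    "I strongly oppose the plan to repeal net neutrality protections.",
    "I strongly oppose the plan to repeal the net neutrality protections.",
    "Please keep the open internet rules exactly as they are today.",
    "I strongly oppose the plan to repeal net neutrality protections!",
]

sigs = [shingles(c) for c in comments]
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        sim = jaccard(sigs[i], sigs[j])
        if sim > 0.5:  # arbitrary threshold for this toy example
            print(f"comments {i} and {j} look machine-duplicated (similarity {sim:.2f})")
```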

4. Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities.

” . . . . What if your employ­er made you wear a wrist­band that tracked your every move, and that even nudged you via vibra­tions when it judged that you were doing some­thing wrong? What if your super­vi­sor could iden­ti­fy every time you paused to scratch or fid­get, and for how long you took a bath­room break? What may sound like dystopi­an fic­tion could become a real­i­ty for Ama­zon ware­house work­ers around the world. The com­pa­ny has won two patents for such a wrist­band. . . .”

For some U.K. Amazon warehouse workers, the future is now: ” . . . . Max Crawford, a former Amazon warehouse worker in Britain, said in a phone interview, ‘After a year working on the floor, I felt like I had become a version of the robots I was working with.’ He described having to process hundreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizziness. ‘There was no time to go to the loo,’ he said, using the British slang for toilet. ‘You had to process the items in seconds and then move on. If you didn’t meet targets, you were fired.’

He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: ‘I got burned out.’ Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was ‘stalk­er­ish’ and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn peo­ple into machines,’ he said. ‘The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

“Track Hands of Work­ers? Ama­zon Has Patents for It” by Cey­lan Yegin­su; The New York Times; 2/2/2018; P. B3 [West­ern Edi­tion]. [17]

 What if your employ­er made you wear a wrist­band that tracked your every move, and that even nudged you via vibra­tions when it judged that you were doing some­thing wrong? What if your super­vi­sor could iden­ti­fy every time you paused to scratch or fid­get, and for how long you took a bath­room break?

What may sound like dystopi­an fic­tion could become a real­i­ty for Ama­zon ware­house work­ers around the world. The com­pa­ny has won two patents for such a wrist­band, though it was unclear if Ama­zon planned to actu­al­ly man­u­fac­ture the track­ing device and have employ­ees wear it.

The online retail giant, which plans to build a second headquarters and recently shortlisted 20 potential host cities for it, has also been known to experiment in-house with new technology before selling it worldwide.

Amazon, which rarely discloses information on its patents, could not immediately be reached for comment on Thursday. But the patent disclosure goes to the heart of a global debate about privacy and security. Amazon already has a reputation for a workplace culture that thrives on a hard-hitting management style, and has experimented with how far it can push white-collar workers in order to reach its delivery targets.

Pri­va­cy advo­cates, how­ev­er, note that a lot can go wrong even with every­day track­ing tech­nol­o­gy. On Mon­day, the tech indus­try was jolt­ed by the dis­cov­ery that Stra­va, a fit­ness app that allows users to track their activ­i­ties and com­pare their per­for­mance with oth­er peo­ple run­ning or cycling in the same places, had unwit­ting­ly high­light­ed the loca­tions of Unit­ed States mil­i­tary bases and the move­ments of their per­son­nel in Iraq and Syr­ia.

The patent appli­ca­tions, filed in 2016, were pub­lished in Sep­tem­ber, and Ama­zon won them this week, accord­ing to Geek­Wire, which report­ed the patents’ pub­li­ca­tion on Tues­day. In the­o­ry, Amazon’s pro­posed tech­nol­o­gy would emit ultra­son­ic sound puls­es and radio trans­mis­sions to track where an employee’s hands were in rela­tion to inven­to­ry bins, and pro­vide “hap­tic feed­back” to steer the work­er toward the cor­rect bin.

The aim, Ama­zon says in the patent, is to stream­line “time con­sum­ing” tasks, like respond­ing to orders and pack­ag­ing them for speedy deliv­ery. With guid­ance from a wrist­band, work­ers could fill orders faster. Crit­ics say such wrist­bands raise con­cerns about pri­va­cy and would add a new lay­er of sur­veil­lance to the work­place, and that the use of the devices could result in employ­ees being treat­ed more like robots than human beings.

Cur­rent and for­mer Ama­zon employ­ees said the com­pa­ny already used sim­i­lar track­ing tech­nol­o­gy in its ware­hous­es and said they would not be sur­prised if it put the patents into prac­tice.

Max Craw­ford, a for­mer Ama­zon ware­house work­er in Britain, said in a phone inter­view, “After a year work­ing on the floor, I felt like I had become a ver­sion of the robots I was work­ing with.” He described hav­ing to process hun­dreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizzi­ness. “There was no time to go to the loo,” he said, using the British slang for toi­let. “You had to process the items in sec­onds and then move on. If you didn’t meet tar­gets, you were fired.”

He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: “I got burned out.” Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was “stalk­er­ish” and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be “in the wrong place at the wrong time.” “They want to turn peo­ple into machines,” he said. “The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.”

Many com­pa­nies file patents for prod­ucts that nev­er see the light of day. And Ama­zon would not be the first employ­er to push bound­aries in the search for a more effi­cient, speedy work force. Com­pa­nies are increas­ing­ly intro­duc­ing arti­fi­cial intel­li­gence into the work­place to help with pro­duc­tiv­i­ty, and tech­nol­o­gy is often used to mon­i­tor employ­ee where­abouts.

One com­pa­ny in Lon­don is devel­op­ing arti­fi­cial intel­li­gence sys­tems to flag unusu­al work­place behav­ior, while anoth­er used a mes­sag­ing appli­ca­tion to track its employ­ees. In Wis­con­sin, a tech­nol­o­gy com­pa­ny called Three Square Mar­ket offered employ­ees an oppor­tu­ni­ty to have microchips implant­ed under their skin in order, it said, to be able to use its ser­vices seam­less­ly. Ini­tial­ly, more than 50 out of 80 staff mem­bers at its head­quar­ters in Riv­er Falls, Wis., vol­un­teered.
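As the article describes them, the patented wristbands would locate a worker’s hand relative to inventory bins via ultrasonic pulses and radio, then steer it with “haptic feedback.” The sketch below is only a guess at what such a guidance loop might look like in the abstract; the bin coordinates, the distance threshold and the vibrate() stub are all hypothetical and are not drawn from the patent filings themselves.

```python
# Hypothetical sketch of the guidance loop the patents reportedly describe:
# estimate the hand's position relative to inventory bins and vibrate the
# wristband when the hand strays from the bin assigned to the current order.
import math

BINS = {"A1": (0.0, 0.0), "A2": (1.0, 0.0), "B1": (0.0, 1.0)}  # invented layout
WRONG_BIN_THRESHOLD = 0.3  # metres; illustrative value only

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def vibrate(strength):
    # Stand-in for the haptic actuator described in the reporting.
    print(f"buzz (strength {strength:.2f})")

def guide(hand_position, target_bin):
    """Nudge the wearer when the tracked hand is away from the target bin."""
    if distance(hand_position, BINS[target_bin]) <= WRONG_BIN_THRESHOLD:
        return  # hand is at the correct bin: stay silent
    # Buzz harder the closer the hand is to some *other* bin.
    nearest = min(BINS.values(), key=lambda b: distance(hand_position, b))
    vibrate(1.0 - min(distance(hand_position, nearest), 1.0))

guide(hand_position=(0.95, 0.05), target_bin="B1")  # wrong bin -> buzz
guide(hand_position=(0.05, 0.95), target_bin="B1")  # correct bin -> silence
```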

5. Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

“Ear­ly Face­book and Google Employ­ees Join Forces to Fight What They Built” by Nel­lie Bowles; The New York Times; 2/5/2018; p. B6 [West­ern Edi­tion]. [18]

A group of Sil­i­con Val­ley tech­nol­o­gists who were ear­ly employ­ees at Face­book and Google, alarmed over the ill effects of social net­works and smart­phones, are band­ing togeth­er to chal­lenge the com­pa­nies they helped build. The cohort is cre­at­ing a union of con­cerned experts called the Cen­ter for Humane Tech­nol­o­gy. Along with the non­prof­it media watch­dog group Com­mon Sense Media, it also plans an anti-tech addic­tion lob­by­ing effort and an ad cam­paign at 55,000 pub­lic schools in the Unit­ed States.

The cam­paign, titled The Truth About Tech, will be fund­ed with $7 mil­lion from Com­mon Sense and cap­i­tal raised by the Cen­ter for Humane Tech­nol­o­gy. Com­mon Sense also has $50 mil­lion in donat­ed media and air­time from part­ners includ­ing Com­cast and DirecTV. It will be aimed at edu­cat­ing stu­dents, par­ents and teach­ers about the dan­gers of tech­nol­o­gy, includ­ing the depres­sion that can come from heavy use of social media.

“We were on the inside,” said Tris­tan Har­ris, a for­mer in-house ethi­cist at Google who is head­ing the new group. “We know what the com­pa­nies mea­sure. We know how they talk, and we know how the engi­neer­ing works.”

The effect of tech­nol­o­gy, espe­cial­ly on younger minds, has become hot­ly debat­ed in recent months. In Jan­u­ary, two big Wall Street investors asked Apple to study the health effects of its prod­ucts and to make it eas­i­er to lim­it children’s use of iPhones and iPads. Pedi­atric and men­tal health experts called on Face­book last week to aban­don a mes­sag­ing ser­vice the com­pa­ny had intro­duced for chil­dren as young as 6.

Par­ent­ing groups have also sound­ed the alarm about YouTube Kids, a prod­uct aimed at chil­dren that some­times fea­tures dis­turb­ing con­tent. “The largest super­com­put­ers in the world are inside of two com­pa­nies — Google and Face­book — and where are we point­ing them?” Mr. Har­ris said. “We’re point­ing them at people’s brains, at chil­dren.” Sil­i­con Val­ley exec­u­tives for years posi­tioned their com­pa­nies as tight-knit fam­i­lies and rarely spoke pub­licly against one anoth­er.

That has changed. Chamath Pal­i­hapi­tiya, a ven­ture cap­i­tal­ist who was an ear­ly employ­ee at Face­book, said in Novem­ber that the social net­work was “rip­ping apart the social fab­ric of how soci­ety works.” The new Cen­ter for Humane Tech­nol­o­gy includes an unprece­dent­ed alliance of for­mer employ­ees of some of today’s biggest tech com­pa­nies.

Apart from Mr. Har­ris, the cen­ter includes Sandy Parak­i­las, a for­mer Face­book oper­a­tions man­ag­er; Lynn Fox, a for­mer Apple and Google com­mu­ni­ca­tions exec­u­tive; Dave Morin, a for­mer Face­book exec­u­tive; Justin Rosen­stein, who cre­at­ed Facebook’s Like but­ton and is a co-founder of Asana; Roger McNamee, an ear­ly investor in Face­book; and Renée DiRes­ta, a tech­nol­o­gist who stud­ies bots. The group expects its num­bers to grow.

Its first project to reform the indus­try will be to intro­duce a Ledger of Harms — a web­site aimed at guid­ing rank-and-file engi­neers who are con­cerned about what they are being asked to build. The site will include data on the health effects of dif­fer­ent tech­nolo­gies and ways to make prod­ucts that are health­i­er.

Jim Stey­er, chief exec­u­tive and founder of Com­mon Sense, said the Truth About Tech cam­paign was mod­eled on anti­smok­ing dri­ves and focused on chil­dren because of their vul­ner­a­bil­i­ty. That may sway tech chief exec­u­tives to change, he said. Already, Apple’s chief exec­u­tive, Tim­o­thy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Face­book investor Sean Park­er also recent­ly said of the social net­work that “God only knows what it’s doing to our children’s brains.”

Mr. Stey­er said, “You see a degree of hypocrisy with all these guys in Sil­i­con Val­ley.” The new group also plans to begin lob­by­ing for laws to cur­tail the pow­er of big tech com­pa­nies. It will ini­tial­ly focus on two pieces of leg­is­la­tion: a bill being intro­duced by Sen­a­tor Edward J. Markey, Demo­c­rat of Mass­a­chu­setts, that would com­mis­sion research on technology’s impact on children’s health, and a bill in Cal­i­for­nia by State Sen­a­tor Bob Hertzberg, a Demo­c­rat, which would pro­hib­it the use of dig­i­tal bots with­out iden­ti­fi­ca­tion.

Mr. McNamee said he had joined the Cen­ter for Humane Tech­nol­o­gy because he was hor­ri­fied by what he had helped enable as an ear­ly Face­book investor. “Face­book appeals to your lizard brain — pri­mar­i­ly fear and anger,” he said. “And with smart­phones, they’ve got you for every wak­ing moment.” He said the peo­ple who made these prod­ucts could stop them before they did more harm. “This is an oppor­tu­ni­ty for me to cor­rect a wrong,” Mr. McNamee said.

6. Tran­si­tion­ing to our next program–updating AI (arti­fi­cial intel­li­gence) tech­nol­o­gy as it applies to tech­no­crat­ic fascism–we note that AI machines are being designed to devel­op oth­er AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data. AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. . . .”

“The Rise of the Machine” by Cade Metz; The New York Times; 11/6/2017; p. B1 [West­ern Edi­tion]. [19]

 They are a dream of researchers but per­haps a night­mare for high­ly skilled com­put­er pro­gram­mers: arti­fi­cial­ly intel­li­gent machines that can build oth­er arti­fi­cial­ly intel­li­gent machines. With recent speech­es in both Sil­i­con Val­ley and Chi­na, Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data.

AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. The project is part of a much larg­er effort to bring the lat­est and great­est A.I. tech­niques to a wider col­lec­tion of com­pa­nies and soft­ware devel­op­ers.

The tech indus­try is promis­ing every­thing from smart­phone apps that can rec­og­nize faces to cars that can dri­ve on their own. But by some esti­mates, only 10,000 peo­ple world­wide have the edu­ca­tion, expe­ri­ence and tal­ent need­ed to build the com­plex and some­times mys­te­ri­ous math­e­mat­i­cal algo­rithms that will dri­ve this new breed of arti­fi­cial intel­li­gence.

The world’s largest tech busi­ness­es, includ­ing Google, Face­book and Microsoft, some­times pay mil­lions of dol­lars a year to A.I. experts, effec­tive­ly cor­ner­ing the mar­ket for this hard-to-find tal­ent. The short­age isn’t going away any­time soon, just because mas­ter­ing these skills takes years of work. The indus­try is not will­ing to wait. Com­pa­nies are devel­op­ing all sorts of tools that will make it eas­i­er for any oper­a­tion to build its own A.I. soft­ware, includ­ing things like image and speech recog­ni­tion ser­vices and online chat­bots. “We are fol­low­ing the same path that com­put­er sci­ence has fol­lowed with every new type of tech­nol­o­gy,” said Joseph Sirosh, a vice pres­i­dent at Microsoft, which recent­ly unveiled a tool to help coders build deep neur­al net­works, a type of com­put­er algo­rithm that is dri­ving much of the recent progress in the A.I. field. “We are elim­i­nat­ing a lot of the heavy lift­ing.” This is not altru­ism.

Researchers like Mr. Dean believe that if more peo­ple and com­pa­nies are work­ing on arti­fi­cial intel­li­gence, it will pro­pel their own research. At the same time, com­pa­nies like Google, Ama­zon and Microsoft see seri­ous mon­ey in the trend that Mr. Sirosh described. All of them are sell­ing cloud-com­put­ing ser­vices that can help oth­er busi­ness­es and devel­op­ers build A.I. “There is real demand for this,” said Matt Scott, a co-founder and the chief tech­ni­cal offi­cer of Mal­ong, a start-up in Chi­na that offers sim­i­lar ser­vices. “And the tools are not yet sat­is­fy­ing all the demand.”

This is most like­ly what Google has in mind for AutoML, as the com­pa­ny con­tin­ues to hail the project’s progress. Google’s chief exec­u­tive, Sun­dar Pichai, boast­ed about AutoML last month while unveil­ing a new Android smart­phone.

Even­tu­al­ly, the Google project will help com­pa­nies build sys­tems with arti­fi­cial intel­li­gence even if they don’t have exten­sive exper­tise, Mr. Dean said. Today, he esti­mat­ed, no more than a few thou­sand com­pa­nies have the right tal­ent for build­ing A.I., but many more have the nec­es­sary data. “We want to go from thou­sands of orga­ni­za­tions solv­ing machine learn­ing prob­lems to mil­lions,” he said.

Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I. researchers, it has a means of jump-starting this engine.

Neur­al net­works are rapid­ly accel­er­at­ing the devel­op­ment of A.I. Rather than build­ing an image-recog­ni­tion ser­vice or a lan­guage trans­la­tion app by hand, one line of code at a time, engi­neers can much more quick­ly build an algo­rithm that learns tasks on its own. By ana­lyz­ing the sounds in a vast col­lec­tion of old tech­ni­cal sup­port calls, for instance, a machine-learn­ing algo­rithm can learn to rec­og­nize spo­ken words.

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In build­ing a neur­al net­work, researchers run dozens or even hun­dreds of exper­i­ments across a vast net­work of machines, test­ing how well an algo­rithm can learn a task like rec­og­niz­ing an image or trans­lat­ing from one lan­guage to anoth­er. Then they adjust par­tic­u­lar parts of the algo­rithm over and over again, until they set­tle on some­thing that works. Some call it a “dark art,” just because researchers find it dif­fi­cult to explain why they make par­tic­u­lar adjust­ments.

But with AutoML, Google is try­ing to auto­mate this process. It is build­ing algo­rithms that ana­lyze the devel­op­ment of oth­er algo­rithms, learn­ing which meth­ods are suc­cess­ful and which are not. Even­tu­al­ly, they learn to build more effec­tive machine learn­ing. Google said AutoML could now build algo­rithms that, in some cas­es, iden­ti­fied objects in pho­tos more accu­rate­ly than ser­vices built sole­ly by human experts. Bar­ret Zoph, one of the Google researchers behind the project, believes that the same method will even­tu­al­ly work well for oth­er tasks, like speech recog­ni­tion or machine trans­la­tion. This is not always an easy thing to wrap your head around. But it is part of a sig­nif­i­cant trend in A.I. research. Experts call it “learn­ing to learn” or “met­alearn­ing.”

Many believe such meth­ods will sig­nif­i­cant­ly accel­er­ate the progress of A.I. in both the online and phys­i­cal worlds. At the Uni­ver­si­ty of Cal­i­for­nia, Berke­ley, researchers are build­ing tech­niques that could allow robots to learn new tasks based on what they have learned in the past. “Com­put­ers are going to invent the algo­rithms for us, essen­tial­ly,” said a Berke­ley pro­fes­sor, Pieter Abbeel. “Algo­rithms invent­ed by com­put­ers can solve many, many prob­lems very quick­ly — at least that is the hope.”

This is also a way of expand­ing the num­ber of peo­ple and busi­ness­es that can build arti­fi­cial intel­li­gence. These meth­ods will not replace A.I. researchers entire­ly. Experts, like those at Google, must still do much of the impor­tant design work.

But the belief is that the work of a few experts can help many oth­ers build their own soft­ware. Rena­to Negrin­ho, a researcher at Carnegie Mel­lon Uni­ver­si­ty who is explor­ing tech­nol­o­gy sim­i­lar to AutoML, said this was not a real­i­ty today but should be in the years to come. “It is just a mat­ter of when,” he said.
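To make the “algorithms that build other algorithms” idea concrete, the sketch below shows the bare skeleton: an outer loop proposes candidate model configurations, trains each one, and keeps the best performer. Google’s AutoML reportedly uses a far more sophisticated controller than this; the random-search version here, built on scikit-learn, is only an illustration, and every parameter choice in it is arbitrary.

```python
# Bare-bones illustration of "learning to learn": an outer loop samples model
# configurations, trains each candidate, and keeps whichever scores best on a
# held-out set. Real systems such as AutoML use much smarter search strategies.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def propose_config():
    # The "meta" step: sample an architecture and training setting at random.
    return {
        "hidden_layer_sizes": tuple(random.choice([8, 16, 32, 64])
                                    for _ in range(random.randint(1, 3))),
        "learning_rate_init": random.choice([1e-2, 1e-3, 1e-4]),
    }

best_score, best_config = -1.0, None
for trial in range(10):
    config = propose_config()
    model = MLPClassifier(max_iter=300, random_state=0, **config)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_config = score, config

print("best validation accuracy:", round(best_score, 3))
print("best configuration found:", best_config)
```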