Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

FTR #1021 FascisBook: (In Your Facebook, Part 3–A Virtual Panopticon, Part 3)

This program follows up FTR #‘s 718 and 946, in which we examined Facebook, noting how its cute, warm, friendly public facade obscured a cynical, reactionary, exploitative and, ultimately, “corporatist” ethic and operation.

The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. The journalist was trained to take a hands-off approach to violent far-right content and fake news, because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

Next, we present a frightening story about AggregateIQ (AIQ), the Cambridge Analytica offshoot to which Cambridge Analytica outsourced the development of its “Ripon” psychological profiling software, and which later played a key role in the pro-Brexit campaign. The article also notes that, despite Facebook’s pledge to kick Cambridge Analytica off of its platform, security researchers just found 13 apps available for Facebook that appear to have been developed by AIQ. If Facebook really is trying to kick Cambridge Analytica off of its platform, it isn’t trying very hard. One app is even named “AIQ Johnny Scraper,” and it’s registered to AIQ.

The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data AIQ was creating for a client, and it’s entirely possible a lot of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

In addition, the story highlights a form of micro-targeting companies like AIQ make available that’s fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

Next, we note that Facebook is getting sued by an app developer for acting like the mafia, using access to all that user data as its key enforcement tool:

“Mark Zucker­berg faces alle­ga­tions that he devel­oped a ‘mali­cious and fraud­u­lent scheme’ to exploit vast amounts of pri­vate data to earn Face­book bil­lions and force rivals out of busi­ness. A com­pa­ny suing Face­book in a Cal­i­for­nia court claims the social network’s chief exec­u­tive ‘weaponised’ the abil­i­ty to access data from any user’s net­work of friends – the fea­ture at the heart of the Cam­bridge Ana­lyt­i­ca scan­dal.  . . . . ‘The evi­dence uncov­ered by plain­tiff demon­strates that the Cam­bridge Ana­lyt­i­ca scan­dal was not the result of mere neg­li­gence on Facebook’s part but was rather the direct con­se­quence of the mali­cious and fraud­u­lent scheme Zucker­berg designed in 2012 to cov­er up his fail­ure to antic­i­pate the world’s tran­si­tion to smart­phones,’ legal doc­u­ments said. . . . . Six4Three alleges up to 40,000 com­pa­nies were effec­tive­ly defraud­ed in this way by Face­book. It also alleges that senior exec­u­tives includ­ing Zucker­berg per­son­al­ly devised and man­aged the scheme, indi­vid­u­al­ly decid­ing which com­pa­nies would be cut off from data or allowed pref­er­en­tial access. . . . ‘They felt that it was bet­ter not to know. I found that utter­ly hor­ri­fy­ing,’ he [for­mer Face­book exec­u­tive Sandy Parak­i­las] said. ‘If true, these alle­ga­tions show a huge betray­al of users, part­ners and reg­u­la­tors. They would also show Face­book using its monop­oly pow­er to kill com­pe­ti­tion and putting prof­its over pro­tect­ing its users.’ . . . .”

The above-men­tioned Cam­bridge Ana­lyt­i­ca is offi­cial­ly going bank­rupt, along with the elec­tions divi­sion of its par­ent com­pa­ny, SCL Group. Appar­ent­ly their bad press has dri­ven away clients.

Is this tru­ly the end of Cam­bridge Ana­lyt­i­ca?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . ”

In the Big Data inter­net age, there’s one area of per­son­al infor­ma­tion that has yet to be incor­po­rat­ed into the pro­files on everyone–personal bank­ing infor­ma­tion.  ” . . . . If tech com­pa­nies are in con­trol of pay­ment sys­tems, they’ll know “every sin­gle thing you do,” Kapi­to said. It’s a dif­fer­ent busi­ness mod­el from tra­di­tion­al bank­ing: Data is more valu­able for tech firms that sell a range of dif­fer­ent prod­ucts than it is for banks that only sell finan­cial ser­vices, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, which are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Face­book increas­ing­ly wants to be a plat­form where peo­ple buy and sell goods and ser­vices, besides con­nect­ing with friends. The com­pa­ny over the past year asked JPMor­gan Chase & Co., Wells Far­go & Co., Cit­i­group Inc. and U.S. Ban­corp to dis­cuss poten­tial offer­ings it could host for bank cus­tomers on Face­book Mes­sen­ger, said peo­ple famil­iar with the mat­ter. Face­book has talked about a fea­ture that would show its users their check­ing-account bal­ances, the peo­ple said. It has also pitched fraud alerts, some of the peo­ple said. . . .”

Peter Thiel’s sur­veil­lance firm Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump. Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

Pro­gram High­lights Include:

1.–Facebook’s project to incor­po­rate brain-to-com­put­er inter­face into its oper­at­ing sys­tem: ” . . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
2.–” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
3.–” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
4.–” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”
5.–” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
6.–” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”
7.–Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”
8.–Further expo­si­tion of Oakes’ state­ment: ” . . . . Adolf Hitler ‘didn’t have a prob­lem with the Jews at all, but peo­ple didn’t like the Jews,’ he told the aca­d­e­m­ic, Emma L. Bri­ant, a senior lec­tur­er in jour­nal­ism at the Uni­ver­si­ty of Essex. He went on to say that Don­ald J. Trump had done the same thing by tap­ping into griev­ances toward immi­grants and Mus­lims. . . . ‘What hap­pened with Trump, you can for­get all the micro­tar­get­ing and micro­da­ta and what­ev­er, and come back to some very, very sim­ple things,’ he told Dr. Bri­ant. ‘Trump had the balls, and I mean, real­ly the balls, to say what peo­ple want­ed to hear.’ . . .”
9.–Observations about the possibilities of Facebook’s goal of having AI govern the editorial functions of its content, as noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”
10.–Microsoft’s Tay Chat­bot offers a glimpse into this future: As one Twit­ter user not­ed, employ­ing sar­casm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all con­cerned about the future of AI.”


FTR #1017 Supreme Court Trump Card: Family Trump, Family [Anthony] Kennedy and Peter Thiel

Much has been said about Donald Trump’s nomination of Judge Brett Kavanaugh to become a Supreme Court justice, replacing Anthony Kennedy.

In this pro­gram, we high­light exten­sive net­work­ing between the Trump and Kennedy fam­i­lies and, in turn, some appar­ent “deep net­work­ing” between some of the indi­vid­u­als in the Trump/Kennedy nexus and insti­tu­tions linked to key ele­ments of the remark­able and dead­ly Bor­mann flight cap­i­tal net­work.

Deutsche Bank and the shad­ow of the I.G. Far­ben chem­i­cal com­plex fig­ure into the lat­ter part of this equa­tion.

The con­nec­tions between the fam­i­ly of Antho­ny Kennedy and the Trump milieu run deep. Antho­ny Kennedy’s son Justin was  Trump’s  banker at Deutsche Bank. In FTR #919, we ana­lyzed a New York Times arti­cle high­light­ing Don­ald Trump’s alto­geth­er opaque real estate devel­op­ments and evi­dence that those projects had sig­nif­i­cant links to ele­ments of the Bor­mann cap­i­tal net­work.

In that pro­gram we set forth the pri­ma­ry role of Deutsche Bank in financ­ing Trump’s real estate projects.

” . . . While many big banks have shunned him, Deutsche Bank AG has been a stead­fast finan­cial backer of the Repub­li­can pres­i­den­tial candidate’s busi­ness inter­ests. Since 1998, the bank has led or par­tic­i­pat­ed in loans of at least $2.5 bil­lion to com­pa­nies affil­i­at­ed with Mr. Trump, accord­ing to a Wall Street Jour­nal analy­sis of pub­lic records and peo­ple famil­iar with the mat­ter. That doesn’t include at least anoth­er $1 bil­lion in loan com­mit­ments that Deutsche Bank made to Trump-affil­i­at­ed enti­ties. The long-stand­ing con­nec­tion makes Frank­furt-based Deutsche Bank, which has a large U.S. oper­a­tion and has been grap­pling with rep­u­ta­tion­al prob­lems and an almost 50% stock-price decline, the finan­cial insti­tu­tion with prob­a­bly the strongest ties to the con­tro­ver­sial New York busi­ness­man. . . .”

The fact that Deutsche Bank is the primary financial backer of “Trump Incorporated” is of the utmost importance: the bank is central to the Bormann capital network.

Fur­ther­more, jurists who clerked for Antho­ny Kennedy fig­ure promi­nent­ly in Trump’s judi­cial appoint­ments:

1.–” . . . . He [Trump] picked Jus­tice Neil M. Gor­such, who had served as a law clerk to Jus­tice Kennedy, to fill Jus­tice Scalia’s seat. . . .”
2.–” . . . . Then, after Jus­tice Gorsuch’s nom­i­na­tion was announced, a White House offi­cial sin­gled out two can­di­dates for the next Supreme Court vacan­cy: Judge Brett M. Kavanaugh of the Unit­ed States Court of Appeals for the Dis­trict of Colum­bia Cir­cuit and Judge Ray­mond M. Keth­ledge of the Unit­ed States Court of Appeals for the Sixth Cir­cuit, in Cincin­nati. The two judges had some­thing in com­mon: They had both clerked for Jus­tice Kennedy. . . .”
3.–” . . . . In the mean­time, as the White House turned to stock­ing the low­er courts, it did not over­look Jus­tice Kennedy’s clerks. Mr. Trump nom­i­nat­ed three of them to fed­er­al appeals courts: Judges Stephanos Bibas and Michael Scud­der, both of whom have been con­firmed, and Eric Mur­phy, the Ohio solic­i­tor gen­er­al, whom Mr. Trump nom­i­nat­ed to the Sixth Cir­cuit this month. . . .”
4.–” . . . . Jus­tice Kennedy’s son, Justin . . . . spent more than a decade at Deutsche Bank, even­tu­al­ly ris­ing to become the bank’s glob­al head of real estate cap­i­tal mar­kets, and he worked close­ly with Mr. Trump when he was a real estate devel­op­er, accord­ing to two peo­ple with knowl­edge of his role. Dur­ing Mr. Kennedy’s tenure, Deutsche Bank became Mr. Trump’s most impor­tant lender, dis­pens­ing well over $1 bil­lion in loans to him for the ren­o­va­tion and con­struc­tion of sky­scrap­ers in New York and Chica­go at a time oth­er main­stream banks were wary of doing busi­ness with him because of his trou­bled busi­ness his­to­ry. . . .”

After Kennedy left Deutsche Bank in 2009, he went on to become co-CEO of LNR Property LLC. LNR Property saved Jared Kushner’s midtown Manhattan property in 2011:

1.–” . . . . from 2010–2013 Justin Kennedy was the co-CEO of LNR Prop­er­ty LLC with Tobin Cobb. . . .”
2.–” . . . . Accord­ing the New York Times, in 2007 Kush­n­er Com­pa­nies pur­chased ‘an alu­minum-clad office tow­er in Mid­town Man­hat­tan, for a record price of $1.8 bil­lion.’ At the time the NYT wrote that this deal was ‘con­sid­ered a clas­sic exam­ple of reck­less under­writ­ing. The trans­ac­tion was so high­ly lever­aged that the cash flow from rents amount­ed to only 65 per­cent of the debt ser­vice.’ . . .”
3.– ” . . . Who came to the res­cue? None oth­er than LNR Prop­er­ty, the com­pa­ny whose CEO at the time was Justin Kennedy. Accord­ing to the NYT and the Real Deal, Mr. Kush­n­er and LNR ‘reached a pos­si­ble agree­ment with LNR Prop­er­ty, a firm spe­cial­iz­ing in restruc­tur­ing trou­bled debt and which over­sees the mort­gage, that would allow him to retain con­trol of the tow­er by mod­i­fy­ing the terms of the $1.2 bil­lion mort­gage tied to the office por­tion of the build­ing.’ . . .”

The links between TrumpWorld and Anthony Kennedy’s sons run deeper still. Kennedy’s other son, Gregory, has long-standing ties to Trump Silicon Valley adviser Peter Thiel, whom we first analyzed in FTR #718.

” . . . . Kennedy’s seat, meantime, seemed destined to go to Kavanaugh, thanks in part to the glowing review of Kennedy, whose son, Justin, knows Donald Trump Jr. through New York real estate circles, and whose other adult child has connections to Trump World via the president’s 2016 Silicon Valley adviser Peter Thiel, most recently when the Kennedy firm Disruptive Technology Advisers worked with Thiel’s Palantir Technologies. . . .”

Gre­go­ry Kennedy’s DTA has an unusu­al­ly close rela­tion­ship with Palan­tir, a com­pa­ny that has helped the Trump admin­is­tra­tion.

Kennedy’s DTA has oth­er per­son­al con­nec­tions to Palan­tir. Alex Fish­man and Alex Davis, two oth­er DTA founders, “enjoyed a very close rela­tion­ship” with Palan­tir co-founder Alex Karp, accord­ing to the law­suit.

It should be not­ed that the alleged secre­cy with which Palan­tir treats its oper­at­ing and invest­ing infor­ma­tion is char­ac­ter­is­tic of Bor­mann orga­ni­za­tions. A clos­et­ed, insid­ers-only oper­at­ing eth­ic serves the need for this con­sum­mate­ly pow­er­ful orga­ni­za­tion to main­tain a rel­a­tive­ly low pro­file, even as it gains pow­er, influ­ence and wealth.

” . . . . Yet Palan­tir — whose stock changes hands only through pri­vate trades — goes to great lengths to keep any detailed infor­ma­tion about its busi­ness pri­vate. . . .”

A lawsuit by Palantir investor KT4 Partners alleges that Palantir is illegally blocking investors from selling shares in the company and that Kennedy’s Disruptive Technology Advisers (DTA) is a key partner in, and beneficiary of, this strategy.

KT4 claims that when it tried to sell its shares of Palantir to a third party, Palantir would have DTA contact that third party and convince it to buy the shares directly from Palantir instead. DTA would then collect a commission.

The central dynamic in the allegations of plaintiff (and Palantir investor) KT4 is set forth as follows: ” . . . . But remarkably, KT4 claims that when Palantir receives information from an investor about a planned sale, it uses that information to contact the buyer and persuade them instead to buy shares directly from the company or from certain Palantir insiders. One particular broker, Disruptive Technology Advisers, or DTA, repeatedly gets commissions from these sales, even when it ‘performed no legitimate work,’ KT4 claims. KT4 says it experienced interference by Palantir when it tried to sell shares to Highbridge Capital Management, a hedge fund that was owned by JPMorgan Chase, in May 2015. After KT4 notified Palantir of the planned sale, Palantir turned around and instructed DTA to ‘take the opportunity, on Palantir’s behalf,’ and arrange a sale from Palantir to Highbridge instead, according to the lawsuit. . . .”

In FTR #946, we examined Cambridge Analytica, the Trump- and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s Palan­tir was appar­ent­ly deeply involved with Cam­bridge Ana­lyt­i­ca’s gam­ing of per­son­al data har­vest­ed from Face­book in order to engi­neer an elec­toral vic­to­ry for Trump, set­ting the GOP cam­paign to con­trol the Supreme Court in a deep­er, broad­er con­text.

Thiel was an ear­ly investor in Face­book, at one point was its largest share­hold­er and is still one of its largest share­hold­ers. ” . . . . It was a Palan­tir employ­ee in Lon­don, work­ing close­ly with the data sci­en­tists build­ing Cambridge’s psy­cho­log­i­cal pro­fil­ing tech­nol­o­gy, who sug­gest­ed the sci­en­tists cre­ate their own app — a mobile-phone-based per­son­al­i­ty quiz — to gain access to Face­book users’ friend net­works, accord­ing to doc­u­ments obtained by The New York Times. The rev­e­la­tions pulled Palan­tir — co-found­ed by the wealthy lib­er­tar­i­an Peter Thiel — into the furor sur­round­ing Cam­bridge, which improp­er­ly obtained Face­book data to build ana­lyt­i­cal tools it deployed on behalf of Don­ald J. Trump and oth­er Repub­li­can can­di­dates in 2016. Mr. Thiel, a sup­port­er of Pres­i­dent Trump, serves on the board at Face­book. ‘There were senior Palan­tir employ­ees that were also work­ing on the Face­book data,’ said Christo­pher Wylie, a data expert and Cam­bridge Ana­lyt­i­ca co-founder, in tes­ti­mo­ny before British law­mak­ers on Tues­day. . . . The con­nec­tions between Palan­tir and Cam­bridge Ana­lyt­i­ca were thrust into the spot­light by Mr. Wylie’s tes­ti­mo­ny on Tues­day. Both com­pa­nies are linked to tech-dri­ven bil­lion­aires who backed Mr. Trump’s cam­paign: Cam­bridge is chiefly owned by Robert Mer­cer, the com­put­er sci­en­tist and hedge fund mag­nate, while Palan­tir was co-found­ed in 2003 by Mr. Thiel, who was an ini­tial investor in Face­book. . . .”

Pro­gram High­lights Include:

1.–Review of Peter Thiel’s high regard for Carl Schmitt: “. . . . a Nazi and the Third Reich’s pre­em­i­nent legal the­o­rist. For Thiel, Schmitt is an inspir­ing throw­back to a pre-Enlight­en­ment age, who exalts strug­gle and insists that the dis­cov­ery of ene­mies is the foun­da­tion of pol­i­tics. . .” 
2.–Review of Peter Thiel’s ear­ly legal expe­ri­ence with Sul­li­van & Cromwell, the Dulles law firm.
3.–A recounting of the roles of John Foster Dulles and Sullivan & Cromwell in the formation of I.G. Farben.
4.–Review of Thiel’s Ger­man her­itage and his father’s prob­a­ble role with one of the I.G. suc­ces­sor com­pa­nies.


For The Record #1016 Miscellaneous Articles And Updates

As indi­cat­ed in the title, this show updates some paths of inquiry and intro­duces oth­ers.

Discussion begins with the origins of the Austrian Freedom Party (FPÖ). The party has its genesis in post-World War II Third Reich veterans. Its first head was SS General Anton Reinthaller: ” . . . . an honorary Brigadeführer (Major General) in the SS.[1] Having initially joined the SS in December 1938 (with the membership number 292,775)[2] he achieved his highest rank on 30 January 1941. . . . Reinthaller was brought before the Austrian People’s Court and accused of ‘high treason against the Austrian people’, with the three [defendants] labelled as being those most responsible for the Anschluss [Nazi Germany’s annexing of Austria–D.E.] . . . .”

Further analysis of the development of the FPÖ notes that the party was founded by Third Reich veterans and that Reinthaller’s successor was also an SS officer. ” . . . . The party was founded by the original Nazis in the 1950s and led by Nazis until the 1980s. . . . Reinthaller died in 1958 and was succeeded as Freedom Party leader by Friedrich Peter, another former Nazi Party member and an officer in the SS. Peter ran the party formally until 1978 and then played an informal role well into the 1980s. . . .”

Against the background of the genesis of the FPÖ, we note that a former speechwriter for Jörg Haider is now the interior minister of Austria.

The FPÖ’s Herbert Kickl is described as the “mastermind” behind the electoral successes that allowed the party to enter into a coalition government.

In March, a police unit headed by a Freedom Party member raided the homes of four staffers and an office of the BVT (Bundesamt für Verfassungsschutz und Terrorismusbekämpfung, i.e., the Federal Bureau for the Protection of the Constitution and for Counterterrorism). The BVT is the bureau that deals with right-wing extremism.

The head of the BVT was fired several days after the raids. He had been the object of a virulent campaign by the website Unzensuriert.at, which is known as “the Austrian Breitbart”. The former editor in chief of unzensuriert.at is now Kickl’s communications director.

As the article points out, having the far right in charge of Austria’s and Italy’s domestic intelligence agencies doesn’t just put the anti-extremist operations of those two countries at risk. Because of data-sharing agreements across Europe, the far right is also learning what the intelligence agencies of other European countries (such as Germany) have decided to share with Austria and Italy.

Key points of this sto­ry include:

1.–” . . . . In Italy, far-right politi­cian Mat­teo Salvi­ni now serves as head of Italy’s inte­ri­or min­istry, which han­dles inter­nal secu­ri­ty and ter­ror­ism. . . . ”
2.–” . . . . In Aus­tria, the spe­cif­ic inci­dent that has crys­tal­lized wider con­cerns in the world of espi­onage and coun­teres­pi­onage as well as coun­tert­er­ror was a series of raids ordered by the far-right inte­ri­or min­is­ter ear­li­er this year on the offices of the pro­fes­sion­al domes­tic intel­li­gence chief, whose orga­ni­za­tion had in the past con­duct­ed and coor­di­nat­ed with Ger­many its sur­veil­lance of right-wing extrem­ists. . . .”
3.–” . . . . as one long-time secu­ri­ty advis­er to sev­er­al French pres­i­dents told The Dai­ly Beast, ‘The Aus­tri­an oper­a­tion against the intel­li­gence ser­vice by the min­istry of inte­ri­or had an impact on every oth­er intel­li­gence ser­vice in the West.’ . . . .”
4.–[German politi­cian Andrej] Hunko tells The Dai­ly Beast he is specif­i­cal­ly con­cerned that Kickl and his peo­ple would be able to acquire intel­li­gence about left­ist activists who oppose right-wing extrem­ism: ‘It is unthink­able what would hap­pen if secret infor­ma­tion about anti-fas­cist activ­i­ties falls into the hands of the extreme right via Austria’s con­ser­v­a­tive-far right gov­ern­ment.’ . . .”

Jarrod Ramos, the alleged shooter at the Maryland newspaper The Capital Gazette, was influenced by a theocratic neo-Confederate ideology espoused by the League of the South.

Specifically, Ramos is a believer in the worldview expressed by League of the South leaders Mike Peroutka and Michael Hill, for whom a fundamentalist interpretation of the Bible is the only REAL law, and individuals are empowered to enforce their interpretation of Biblical law on their own.

Hill has also called for the formation of death squads to target journalists, elected officials, and other members of “the elite.” He has called for young men of “Christendom” to become “citizen-soldiers” to destroy the “galloping tyranny” of our time.

” . . . . The League is a theo­crat­ic, seces­sion­ist orga­ni­za­tion whose leader, Michael Hill, had called for the for­ma­tion of death squads tar­get­ing jour­nal­ists, elect­ed offi­cials and oth­er mem­bers of ‘the elite.’ In his essay ‘A Bazooka in Every Pot,’ Hill described such an assas­si­na­tion cam­paign as part of ‘fourth-gen­er­a­tion war­fare,’ a style of decen­tral­ized con­flict that blurs the lines between war and pol­i­tics, com­bat­ants and civil­ians. . . . .”

Mike Peroutka was one of only two politicians Ramos tweeted about (he was supportive of Peroutka). The other was Donald Trump.

The author notes a pair of events that may have catalyzed the shooting: Three days before the shooting, President Trump once again demonized members of the media as “enemies of the people” at a big outdoor rally in California. The next day, Mike Peroutka lost his 2018 re-election bid in the Republican primary.

As Ramos’s social media posts reveal, another influence on him is the bloody anime movie “Berserk.” He made numerous references to Berserk in his posts, including the last tweet made minutes before the shooting. He even described himself as playing a role in the world of “Berserk”, a world that includes vigilante “hands of God”.

In FTR #756, we not­ed the strong over­lap­ping con­nec­tions between Edward Snow­den, Julian Assange, Ron Paul and the League of the South.

Ramos appears to have man­i­fest­ed the “lone wolf/leaderless resis­tance” strat­e­gy. ” . . . . ‘Ramos came to see him­self as some kind of vig­i­lante for right­eous­ness, cast­ing him­self for exam­ple as a ‘cru­sad­er’ . . . . Polit­i­cal Research Asso­ciates ana­lyst Fred­er­ick Clark­son told Salon. This vision was ‘not unlike the mil­i­taris­tic, mil­len­ni­al vision of Michael Hill,’ he con­tin­ued. . . .”

In FTR #888, we not­ed that Glenn Green­wald ran legal inter­fer­ence for the lead­er­less resis­tance strat­e­gy, free­ing up the likes of Michael Hill from civ­il lia­bil­i­ty for their advo­ca­cy of may­hem. Green­wald is, in effect, an acces­so­ry to the blood­shed alleged­ly real­ized by Ramos and oth­ers like him.

Next, we turn to the subject of Hindutva fascism. (For more about this subject, see, among other programs, FTR #‘s 988, 989, 990, 991, 992, and 1015.)

In FTR #1015, we high­light­ed the “cow vig­i­lantes” in India–Hindutva fas­cist gangs per­pe­trat­ing vio­lence on Mus­lims and low­er-caste Hin­dus. What­sApp is fuel­ing the vio­lence.

The mayor of Jaipur, Ashok Lahoty, shared a rumor about beef being served at a hotel on a BJP WhatsApp group.

It appears that the BJP is behind many of the rumor campaigns, as part of its Hindu nationalist agenda. ” . . . . Indian Prime Minister Narendra Modi has largely remained silent about the problem, and analysts say there’s a reason for that: Much of the fake news now spreading like wildfire has been promoted, if not created, by some of Modi’s most fervent supporters. . . .”

Modi has a well orches­trat­ed machine for dis­sem­i­nat­ing BJP’s fake news: ” . . . . In her book, ‘I am a Troll: Inside the Secret World of the BJP’s Dig­i­tal Army,’ jour­nal­ist Swati Chaturve­di explains how the par­ty orches­trates online cam­paigns to intim­i­date per­ceived gov­ern­ment crit­ics through a net­work of trolls on Twit­ter and Face­book. And she cites mul­ti­ple peo­ple who worked inside the BJP’s social media machine to make her case. . . . Chaturvedi’s find­ings were backed by anoth­er for­mer BJP cyber-vol­un­teer, Sad­havi Khosla, who left the par­ty in 2015 because of the con­stant bar­rage of misog­y­ny, Islam­o­pho­bia, and hatred she was asked to dis­sem­i­nate online. And Prodyut Bora, one of the mas­ter­minds of the BJP’s ear­ly tech­nol­o­gy and social media strat­e­gy, recent­ly offered a sim­i­lar out­look. He described his cre­ation as ‘Frankenstein’s mon­ster,’ and said that it had mor­phed from its orig­i­nal aim of bet­ter con­nect­ing with the party’s sup­port­ers. ‘I mean, occa­sion­al­ly, it’s just painful to watch what they have done with it,’ he told Huff­Post India last month. . . .”

Turn­ing from the sub­ject of fake news in India to fake news in the U.S., we con­clude with a look at what “deep fake” video tech­nol­o­gy may har­bin­ger.

When ‘deepfake’ video technology develops to the point of being indistinguishable from real video, the far right is going to go into overdrive creating videos purporting to prove virtually every far-right fantasy you can imagine. Among the memes that might be reinforced by such technology is the ‘PizzaGate’ conspiracy theory, pushed by the right wing in the final weeks of the 2016 campaign, alleging that Hillary Clinton and a number of other prominent Democrats are part of a Satanist child abuse ring.

Right-wing polemicist Liz Crokin is repeating her assertion that a video of Hillary Clinton – specifically, of Clinton sexually abusing and then eating the face of a child – floating around on the Dark Web is definitely real. Crokin is now warning that reports about ‘deepfake’ technology are disinformation stories being preemptively put out by the Deep State to make the public skeptical when the videos of Hillary cutting the face off of a child come to light.


The Cambridge Analytica Microcosm in Our Panoptic Macrocosm

Let the Great Unfriending Commence! Specifically, the mass unfriending of Facebook, which would be a well-deserved unfriending after the scandalous revelations in a recent series of articles centered around the claims of Christopher Wylie, a Cambridge Analytica whistle-blower who helped found the firm and worked there until late 2014, when he and others grew increasingly uncomfortable with the far-right goals and questionable actions of the firm. And those questionable actions by Cambridge involve a larger and more scandalous Facebook policy brought forth by a Facebook whistle-blower, Sandy Parakilas: Facebook was handing out exactly the kind of data collected by Cambridge Analytica to all sorts of app developers for years. Beyond that, it appears that Facebook really did have an exceptionally close relationship with Cambridge Analytica’s research partner and was only bothered by its data collection when the media got wind of it. It also looks like Steve Bannon was overseeing this entire process, although he claims to know nothing. Oh, and Palantir appears to have had an informal relationship with Cambridge Analytica this whole time. And this state of affairs is an extension of how the internet has been used from its very conception a half century ago. And that’s all part of why the Great Unfriending of Facebook really is long overdue, along with a lot of other reforms.


Walkin’ the Snake in India: Supplement to the Hindutva Fascism Series

In numerous programs, we have highlighted the Nazi tract Serpent’s Walk, which deals, in part, with the rehabilitation of the Third Reich’s reputation and the transformation of Hitler into a hero. In FTR #‘s 988, 989, 990, 991, and 992, we detailed the Hindutva fascism of Narendra Modi, his BJP Party and supportive elements, tracing the evolution of Hindutva fascism through the assassination of Mahatma Gandhi and up to the present. It appears that a Serpent’s Walk scenario is indeed unfolding in India. A recent book features a picture of both Adolf Hitler and Narendra Modi standing next to Barack Obama, Mahatma Gandhi, and Nelson Mandela. A similar, Hitler-glorifying book was marketed to Gujarati schoolchildren when Modi governed the state. Hitler is well regarded in segments of Indian society, in part due to the efforts of Bal Thackeray and his Shiv Sena party. For many years, Shiv Sena was an ally of Modi’s Hindutva fascist BJP.


FTR #997 Summoning the Demon, Part 2: Sorcerer’s Apprentice

Devel­op­ing analy­sis pre­sent­ed in FTR #968, this broad­cast explores fright­en­ing devel­op­ments and poten­tial devel­op­ments in the world of arti­fi­cial intelligence–the ulti­mate man­i­fes­ta­tion of what Mr. Emory calls “tech­no­crat­ic fas­cism.”

In order to under­score what we mean by tech­no­crat­ic fas­cism, we ref­er­ence a vital­ly impor­tant arti­cle by David Golum­bia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant cor­po­rate-dig­i­tal leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (right­ly, at least in part), and the solu­tion to that, they think (wrong­ly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-cre­ator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

Per­haps the last and most per­ilous man­i­fes­ta­tion of tech­no­crat­ic fas­cism con­cerns Antho­ny  Levandows­ki, an engi­neer at the foun­da­tion of the devel­op­ment of Google Street Map tech­nol­o­gy and self-dri­ving cars. He is propos­ing an AI God­head that would rule the world and would be wor­shipped as a God by the plan­et’s cit­i­zens. Insight into his per­son­al­i­ty was pro­vid­ed by an asso­ciate: “ . . . . ‘He had this very weird moti­va­tion about robots tak­ing over the world—like actu­al­ly tak­ing over, in a mil­i­tary sense…It was like [he want­ed] to be able to con­trol the world, and robots were the way to do that. He talked about start­ing a new coun­try on an island. Pret­ty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it’. . . .”

As we saw in FTR #968, AI’s have incor­po­rat­ed many flaws of their cre­ators, augur­ing very poor­ly for the sub­jects of Levandowski’s AI God­head.

It is also interesting to contemplate what may happen when AI’s are designed by other AI’s–machines designing other machines.

After a detailed review of some of the ominous real and developing AI-related technology, the program highlights Anthony Levandowski, the brilliant engineer who was instrumental in developing Google’s Street Maps, Waymo’s self-driving cars, Otto’s self-driving trucks, the Lidar technology central to self-driving vehicles, and Way of the Future, his proposed super-AI Godhead.

Fur­ther insight into Levandowski’s per­son­al­i­ty can be gleaned from e‑mails with Travis Kalan­ick, for­mer CEO of Uber: ” . . . . In Kalan­ick, Levandows­ki found both a soul­mate and a men­tor to replace Sebas­t­ian Thrun. Text mes­sages between the two, dis­closed dur­ing the lawsuit’s dis­cov­ery process, cap­ture Levandows­ki teach­ing Kalan­ick about lidar at late night tech ses­sions, while Kalan­ick shared advice on man­age­ment. ‘Down to hang out this eve and mas­ter­mind some shit,’ texted Kalan­ick, short­ly after the acqui­si­tion. ‘We’re going to take over the world. One robot at a time,’ wrote Levandows­ki anoth­er time. . . .”

Those who view self-dri­ving cars and oth­er AI-based tech­nolo­gies as flaw­less would do well to con­sid­er the fol­low­ing: ” . . . .Last Decem­ber, Uber launched a pilot self-dri­ving taxi pro­gram in San Fran­cis­co. As with Otto in Neva­da, Levandows­ki failed to get a license to oper­ate the high-tech vehi­cles, claim­ing that because the cars need­ed a human over­see­ing them, they were not tru­ly autonomous. The DMV dis­agreed and revoked the vehi­cles’ licens­es. Even so, dur­ing the week the cars were on the city’s streets, they had been spot­ted run­ning red lights on numer­ous occa­sions. . . . .”

Not­ing Levandowski’s per­son­al­i­ty quirks, the arti­cle pos­es a fun­da­men­tal ques­tion: ” . . . . But even the smartest car will crack up if you floor the gas ped­al too long. Once fet­ed by bil­lion­aires, Levandows­ki now finds him­self star­ring in a high-stakes pub­lic tri­al as his two for­mer employ­ers square off. By exten­sion, the whole tech­nol­o­gy indus­try is there in the dock with Levandows­ki. Can we ever trust self-dri­ving cars if it turns out we can’t trust the peo­ple who are mak­ing them? . . . .”

Levandowski’s Otto self-driving trucks might be weighed against the prognostications of dark horse Presidential candidate and former tech executive Andrew Yang: “. . . . ‘All you need is self-driving cars to destabilize society,’ Mr. Yang said over lunch at a Thai restaurant in Manhattan last month, in his first interview about his campaign. In just a few years, he said, ‘we’re going to have a million truck drivers out of work who are 94 percent male, with an average level of education of high school or one year of college.’ ‘That one innovation,’ he added, ‘will be enough to create riots in the street. And we’re about to do the same thing to retail workers, call center workers, fast-food workers, insurance companies, accounting firms.’ . . . .”

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The pro­gram con­cludes with Mr. Emory’s prog­nos­ti­ca­tions about AI, pre­ced­ing Stephen Hawk­ing’s warn­ing by twen­ty years.

Pro­gram High­lights Include:

1.-Levandowski’s appar­ent shep­herd­ing of a com­pa­ny called–perhaps significantly–Odin Wave to uti­lize Lidar-like tech­nol­o­gy.
2.-The role of DARPA in ini­ti­at­ing the self-dri­ving vehi­cles con­test that was Levandowski’s point of entry into his tech ven­tures.
3.-Levandowski’s development of the Ghostrider self-driving motorcycle, which experienced 800 crashes in 1,000 miles.


FTR #996 Civilization’s Twilight: Update on Technocratic Fascism

Updat­ing our ongo­ing analy­sis of what Mr. Emory calls “tech­no­crat­ic fas­cism,” we exam­ine how exist­ing tech­nolo­gies are neu­tral­iz­ing and/or ren­der­ing obso­lete foun­da­tion­al ele­ments of our civ­i­liza­tion and demo­c­ra­t­ic gov­ern­men­tal sys­tems.

We begin our descrip­tion by ref­er­enc­ing a vital­ly impor­tant arti­cle by David Golum­bia. ” . . . . Such tech­no­cratic beliefs are wide­spread in our world today, espe­cially in the enclaves of dig­i­tal enthu­si­asts, whether or not they are part of the giant cor­po­rate-dig­i­tal leviathan. Hack­ers (‘civic,’ ‘eth­i­cal,’ ‘white’ and ‘black’ hat alike), hack­tivists, Wik­iLeaks fans [and Julian Assange et al–D. E.], Anony­mous ‘mem­bers,’ even Edward Snow­den him­self walk hand-in-hand with Face­book and Google in telling us that coders don’t just have good things to con­tribute to the polit­i­cal world, but that the polit­i­cal world is theirs to do with what they want, and the rest of us should stay out of it: the polit­i­cal world is bro­ken, they appear to think (right­ly, at least in part), and the solu­tion to that, they think (wrong­ly, at least for the most part), is for pro­gram­mers to take polit­i­cal mat­ters into their own hands. . . . [Tor co-cre­ator] Din­gle­dine  asserts that a small group of soft­ware devel­op­ers can assign to them­selves that role, and that mem­bers of demo­c­ra­tic poli­ties have no choice but to accept them hav­ing that role. . . .”

Beginning with a chilling opinion piece in “The New York Times,” we note that technological development threatens to super-charge the Big Lies that drive our world. As anyone who saw the Star Wars film “Rogue One” knows, the technology required to create a nearly life-like computer-generated video of a real person is already a reality. Once the province of movie studios and other firms with millions to spend, the technology is now available for download for free.

” . . . . In 2016 Gareth Edwards, the direc­tor of the Star Wars film ‘Rogue One,’ was able to cre­ate a scene fea­tur­ing a young Princess Leia by manip­u­lat­ing images of Car­rie Fish­er as she looked in 1977. Mr. Edwards had the best hard­ware and soft­ware a $200 mil­lion Hol­ly­wood bud­get could buy. Less than two years lat­er, images of sim­i­lar qual­i­ty can be cre­at­ed with soft­ware avail­able for free down­load on Red­dit. That was how a faked video sup­pos­ed­ly of the actress Emma Wat­son in a show­er with anoth­er woman end­ed up on the web­site Celeb Jihad. . . .”

The tech­nol­o­gy has already ren­dered obso­lete selec­tive edit­ing such as that per­formed by James O’Keefe: ” . . . . as the nov­el­ist William Gib­son once said, ‘The street finds its own uses for things.’ So do rogue polit­i­cal actors. The impli­ca­tions for democ­ra­cy are eye-open­ing. The con­ser­v­a­tive polit­i­cal activist James O’Keefe has cre­at­ed a cot­tage indus­try manip­u­lat­ing polit­i­cal per­cep­tions by edit­ing footage in mis­lead­ing ways. In 2018, low-tech edit­ing like Mr. O’Keefe’s is already an anachro­nism: Imag­ine what even less scrupu­lous activists could do with the pow­er to cre­ate ‘video’ fram­ing real peo­ple for things they’ve nev­er actu­al­ly done. One har­row­ing poten­tial even­tu­al­i­ty: Fake video and audio may become so con­vinc­ing that it can’t be dis­tin­guished from real record­ings, ren­der­ing audio and video evi­dence inad­mis­si­ble in court. . . .”

After high­light­ing a sto­ry about AI-gen­er­at­ed “deep­fake” pornog­ra­phy with peo­ple’s faces super­im­posed on oth­ers’ bod­ies in porno­graph­ic lay­outs, we note how robots have altered our polit­i­cal and com­mer­cial land­scapes, through cyber tech­nol­o­gy: ” . . . . Robots are get­ting bet­ter, every day, at imper­son­at­ing humans. When direct­ed by oppor­tunists, male­fac­tors and some­times even nation-states, they pose a par­tic­u­lar threat to demo­c­ra­t­ic soci­eties, which are premised on being open to the peo­ple. Robots pos­ing as peo­ple have become a men­ace. . . . In com­ing years, cam­paign finance lim­its will be (and maybe already are) evad­ed by robot armies pos­ing as ‘small’ donors. And actu­al vot­ing is anoth­er obvi­ous tar­get — per­haps the ulti­mate tar­get. . . .”

Before the actual replacement of manual labor by robots, devices to technocratically “improve”–read “coercively engineer”–workers have been patented by Amazon and used on workers in some of its facilities. ” . . . . What if your employer made you wear a wristband that tracked your every move, and that even nudged you via vibrations when it judged that you were doing something wrong? What if your supervisor could identify every time you paused to scratch or fidget, and for how long you took a bathroom break? What may sound like dystopian fiction could become a reality for Amazon warehouse workers around the world. The company has won two patents for such a wristband. . . .”

For some U.K Ama­zon ware­house work­ers, the future is now: ” . . . . Max Craw­ford, a for­mer Ama­zon ware­house work­er in Britain, said in a phone inter­view, ‘After a year work­ing on the floor, I felt like I had become a ver­sion of the robots I was work­ing with.’ He described hav­ing to process hun­dreds of items in an hour — a pace so extreme that one day, he said, he fell over from dizzi­ness. ‘There was no time to go to the loo,’ he said, using the British slang for toi­let. ‘You had to process the items in sec­onds and then move on. If you didn’t meet tar­gets, you were fired.’

“He worked back and forth at two Ama­zon ware­hous­es for more than two years and then quit in 2015 because of health con­cerns, he said: ‘I got burned out.’ Mr. Craw­ford agreed that the wrist­bands might save some time and labor, but he said the track­ing was ‘stalk­er­ish’ and feared that work­ers might be unfair­ly scru­ti­nized if their hands were found to be ‘in the wrong place at the wrong time.’ ‘They want to turn peo­ple into machines,’ he said. ‘The robot­ic tech­nol­o­gy isn’t up to scratch yet, so until it is, they will use human robots.’ . . . .”

Some tech workers, well placed at R & D pacesetters and giants such as Facebook and Google, have done an about-face on the impact of their earlier efforts and are now struggling against the misuse of the technologies they helped to launch:

” . . . . A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build. . . . ‘The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?’ Mr. [Tristan] Harris said. ‘We’re pointing them at people’s brains, at children.’ . . . . Mr. [Roger McNamee] said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor. ‘Facebook appeals to your lizard brain — primarily fear and anger,’ he said. ‘And with smartphones, they’ve got you for every waking moment.’ . . . .”

Tran­si­tion­ing to our next program–updating AI (arti­fi­cial intel­li­gence) tech­nol­o­gy as it applies to tech­no­crat­ic fascism–we note that AI machines are being designed to devel­op oth­er AI’s–“The Rise of the Machine.” ” . . . . Jeff Dean, one of Google’s lead­ing engi­neers, spot­light­ed a Google project called AutoML. ML is short for machine learn­ing, refer­ring to com­put­er algo­rithms that can learn to per­form par­tic­u­lar tasks on their own by ana­lyz­ing data. AutoML, in turn, is a machine learn­ing algo­rithm that learns to build oth­er machine-learn­ing algo­rithms. With it, Google may soon find a way to cre­ate A.I. tech­nol­o­gy that can part­ly take the humans out of build­ing the A.I. sys­tems that many believe are the future of the tech­nol­o­gy indus­try. . . .”


FTR #994 What Was Old Is New Again

This broad­cast recaps mate­r­i­al from pre­vi­ous pro­grams, under­scor­ing key points of infor­ma­tion from cur­rent devel­op­ments.

Last week, we opened our pro­gram with an arti­cle from Con­sor­tium News about some alarm­ing devel­op­ments in Ukraine–a piece of leg­is­la­tion approved by the Rada (the Ukrain­ian par­lia­ment) that might augur World War III.

One of the few media out­lets that has cov­ered the return to pow­er of the OUN/B’s suc­ces­sor fas­cist orga­ni­za­tions in Ukraine, Con­sor­tium News was found­ed and head­ed by Robert Par­ry.

Par­ry passed away last week­end.

Mr. Emory post­ed the fol­low­ing com­ment on the Con­sor­tium News arti­cle about Robert’s pass­ing:

A very, very sad occa­sion. It was my priv­i­lege to have inter­viewed Robert a num­ber of times over the years, includ­ing an interview–scheduled days before–that took place on the day he learned of Gary Webb’s death.

It was also my priv­i­lege to have used many arti­cles from Con­sor­tium News in my week­ly broad­casts, includ­ing, and espe­cial­ly, his reportage about the return to pow­er of the OUN/B suc­ces­sor orga­ni­za­tions in Ukraine.

Very few have man­i­fest­ed the courage and integri­ty to report hon­est­ly on those events.

Now, there will be few­er.

Rest in peace, Robert.

Next, we return to the sub­ject of Peter Thiel, of “Team Trump,” Face­book and Palan­tir.

We have cov­ered Peter Thiel in numer­ous pro­grams, begin­ning with our warn­ing about him in FTR #718.

Some of the points we have made about him include:

1.-His fam­i­ly back­ground in the Frank­furt (Ger­many) chem­i­cal busi­ness. Prob­a­bly I.G. Farben/Bormann, in that con­text.
2.-His pri­ma­ry role in Palan­tir, appar­ent­ly the mak­er of the PRISM soft­ware at the epi­cen­ter of L’Af­faire Snow­den.
3.-His role as the pri­ma­ry financier of Ron Paul’s super PAC. (Paul is an unabashed white suprema­cist, joined at the hip with David Duke and the neo-Con­fed­er­ate move­ment. He was the Pres­i­den­tial can­di­date of choice for Eddie “The Friend­ly Spook” Snow­den and Julian Assange.)
4.-Thiel’s net­work­ing with movers and shak­ers from In-Q-Tel, the CIA’s high-tech ven­ture cap­i­tal firm.
5.-Thiel’s active anti-immi­grant stance.
6.-Thiel’s sem­i­nal net­work­ing with oth­er tech titans and ven­ture cap­i­tal firms, includ­ing some with polit­i­cal and his­tor­i­cal trib­u­taries lead­ing back to the apartheid regime in South Africa.

With Thiel among the candidates to head the President’s Intelligence Advisory Board under Trump, we note that his apocalyptic, anti-Enlightenment ideology draws on, among other influences, Carl Schmitt. Arguably the prime mover behind the German Conservative Revolution, Schmitt was also: “. . . . a Nazi and the Third Reich’s preeminent legal theorist. For Thiel, Schmitt is an inspiring throwback to a pre-Enlightenment age, who exalts struggle and insists that the discovery of enemies is the foundation of politics. . .”

There has been a fair amount of buzz about the release of addi­tion­al, pre­vi­ous­ly clas­si­fied, doc­u­ments about the assas­si­na­tion of Pres­i­dent Kennedy.

An interesting document came to light in the recent release of files relating to the assassination of JFK: Jack Ruby told an FBI informant to "watch the fireworks" on the morning of the assassination, and the informant was with Ruby within sight of Dealey Plaza at the time of the shooting.

“Jack Ruby, the man who even­tu­al­ly shot Lee Har­vey Oswald, told an FBI infor­mant to ‘watch the fire­works’ on the day Pres­i­dent John F. Kennedy was killed, accord­ing to new records the Nation­al Archives released Fri­day. . . . ‘The infor­mant stat­ed that on the morn­ing of the assas­si­na­tion, Ruby con­tact­ed him and asked if he would ‘like to watch the fire­works,” an FBI record dat­ed April 6, 1977, says. ‘He was with Jack Ruby and stand­ing at the cor­ner of the Postal Annex Build­ing fac­ing the Texas School Book Depos­i­to­ry Build­ing at the time of the shoot­ing. . . .”

This might be eval­u­at­ed against the back­ground of FTR #963, relating–among oth­er things–a read­ing of Jack Ruby’s War­ren Com­mis­sion tes­ti­mo­ny. (A read­ing of Ruby’s tes­ti­mo­ny is re-broad­cast in this pro­gram.)

When inter­viewed by the War­ren Com­mis­sion, Jack Ruby indi­cat­ed that he had been part of a con­spir­a­cy to kill Kennedy and that he feared for his life. The War­ren Com­mis­sion turned a deaf ear to his desire to go to Wash­ing­ton and “spill the beans.”

Gerald Ford (who succeeded Nixon as President and pardoned him for all crimes committed), Leon Jaworski (a Warren Commission counsel who was a director of a CIA domestic funding conduit and who was selected by Nixon to be Watergate Special Prosecutor) and Arlen Specter (another Warren Commission counsel who was Nixon's first choice as his personal defense attorney in the Watergate affair) were present at Ruby's de facto confession.

Warren Commission Counsel J. Lee Rankin was also present at this interview. Nixon first selected J. Lee Rankin to serve as Watergate Special Prosecutor. Rankin was subsequently tabbed to review the Watergate tapes and determine which would be released. Rankin was the Warren Commission's liaison with both the CIA and the FBI. Rankin was a key proponent of the so-called "Magic Bullet Theory."

We con­clude with dis­cus­sion of anoth­er aspect of the JFK assas­si­na­tion.

Jane May­er’s Dark Mon­ey has received con­sid­er­able dis­cus­sion and media play over the last cou­ple of years. In past dis­cus­sion of the Koch fam­i­ly, we not­ed that patri­arch Fred Koch worked with Hitler build­ing one of Nazi Ger­many’s most impor­tant refineries–one capa­ble of refin­ing the high-octane fuel need­ed by fight­er planes.

In addi­tion, we not­ed that Fred Koch was one of the first mem­bers of the John Birch Soci­ety.

May­er notes that Fred Koch helped finance ads in the wake of the JFK assas­si­na­tion that pinned respon­si­bil­i­ty for the crime on the Sovi­et Union–one of the pri­ma­ry lev­els of dis­in­for­ma­tion.

” . . . . In a hasty turn­about, soon after the assas­si­na­tion, Fred Koch took out full-page ads in The New York Times and The Wash­ing­ton Post, mourn­ing JFK. The ads advanced the con­spir­a­cy the­o­ry that JFK’s assas­sin, Lee Har­vey Oswald, had act­ed as part of a Com­mu­nist plot. The Com­mu­nists would­n’t “rest on this suc­cess,” the ads warned. In the cor­ner was a tear-out order form, direct­ing the pub­lic to sign up for John Birch Soci­ety mail­ings. . . .”

We have cov­ered the “paint­ing of Oswald Red” in numer­ous pro­grams, includ­ing FTR #‘s 925 and 926.


Peter Thiel’s Political/Philosophical Influences: ” . . . Carl Schmitt . . . a Nazi and the Third Reich’s Preeminent Legal Theorist. . . ”

Trump may be appointing Peter Thiel as head of his President's Intelligence Advisory Board. Thiel is heavily influenced by Carl Schmitt: ". . . . a Nazi and the Third Reich's preeminent legal theorist. For Thiel, Schmitt is an inspiring throwback to a pre-Enlightenment age, who exalts struggle and insists that the discovery of enemies is the foundation of politics. . ." We have been warning about Thiel since July of 2010.


FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence

The title of this pro­gram comes from pro­nounce­ments by tech titan Elon Musk, who warned that, by devel­op­ing arti­fi­cial intel­li­gence, we were “sum­mon­ing the demon.” In this pro­gram, we ana­lyze the poten­tial vec­tor run­ning from the use of AI to con­trol soci­ety in a fascis­tic man­ner to the evo­lu­tion of the very tech­nol­o­gy used for that con­trol.

The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L-2 (recorded in January of 1995).

We begin by review­ing key aspects of the polit­i­cal con­text in which arti­fi­cial intel­li­gence is being devel­oped. Note that, at the time of this writ­ing and record­ing, these tech­nolo­gies are being craft­ed and put online in the con­text of the anti-reg­u­la­to­ry eth­ic of the GOP/Trump admin­is­tra­tion.

At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled "Dark Days: AI and the Rise of Fascism." The presentation highlighted the social impact of machine learning and large-scale data systems. The take-home message? Delegating powers to Big Data-driven AIs could make those AIs a fascist's dream: incredible power over the lives of others with minimal accountability: " . . . .'This is a fascist's dream,' she said. 'Power without accountability.' . . . ."

Taking a look at the future of fascism in the context of AI, Tay, a "bot" created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay could only respond on the basis of what she was taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka "weev," may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

As one Twitter user noted, employing sarcasm: "Tay went from 'humans are super cool' to full nazi in <24 hrs and I'm not at all concerned about the future of AI."

As not­ed in a Pop­u­lar Mechan­ics arti­cle: ” . . . When the next pow­er­ful AI comes along, it will see its first look at the world by look­ing at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t under­stand. . . .”

According to some recent research, the AIs of the future might not need a bunch of 4chan trolls to fill them with human bigotries. The AIs' analysis of real-world human language usage will do that automatically.

When you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon”, that demon is us, at least in part. ” . . . . How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals. Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: ‘A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.’ . . .”
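
To make the mechanism concrete, here is a minimal sketch of the kind of association test behind that finding. It is our own toy example: the three-dimensional vectors below are hand-made stand-ins for word embeddings that would actually be trained on billions of words of real text. A word's "bias" is read off as the difference between its average cosine similarity to one attribute set and to another:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    A positive score means the word sits 'closer' to A in embedding space."""
    return (np.mean([cosine(word_vec, v) for v in attr_a]) -
            np.mean([cosine(word_vec, v) for v in attr_b]))

# Hand-made toy "embeddings" purely for illustration.
emb = {
    "engineer": np.array([0.9, 0.1, 0.0]),
    "nurse":    np.array([0.1, 0.9, 0.0]),
    "he":       np.array([1.0, 0.0, 0.1]),
    "him":      np.array([0.9, 0.0, 0.2]),
    "she":      np.array([0.0, 1.0, 0.1]),
    "her":      np.array([0.0, 0.9, 0.2]),
}

male   = [emb["he"], emb["him"]]
female = [emb["she"], emb["her"]]

for word in ("engineer", "nurse"):
    print(f"{word:10s} male-vs-female association: "
          f"{association(emb[word], male, female):+.3f}")
```

The research Bryson co-authored applied essentially this kind of association arithmetic to embeddings trained on ordinary web text and found that they reproduce well-documented human biases.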

Cambridge Analytica and its parent company SCL specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign apparently relied heavily on AI and Big Data).
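
As a hedged sketch of the general technique (our own toy construction, not Cambridge Analytica's or SCL's actual pipeline; the trait labels below are fabricated at random purely for illustration), psychometric profiling of this kind amounts to training a model that predicts a personality trait from which pages a user has "Liked," then scoring new users at scale:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 50

# 1 = user Liked the page, 0 = did not (synthetic data for the sketch).
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Fabricated labels: pretend the first few pages weakly predict some trait.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users) > 3).astype(int)

# Train once, then score any number of new users.
model = LogisticRegression(max_iter=1000).fit(likes, trait)

new_user = rng.integers(0, 2, size=(1, n_pages))
print("predicted probability of trait:",
      round(model.predict_proba(new_user)[0, 1], 2))
```

The point of the sketch is scale: once fitted, the same model can be applied to every profile in a database, which is what turns ordinary "Likes" into a mass-profiling instrument.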

Indi­vid­ual social media users receive mes­sages craft­ed to influ­ence them, gen­er­at­ed by the (in effect) Nazi AI at the core of this media engine, using Big Data to tar­get the indi­vid­ual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as "fake news" that can't be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn't Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to it.

This is the ulti­mate Ser­pen­t’s Walk scenario–a Naz­i­fied Arti­fi­cial Intel­li­gence draw­ing on Big Data gleaned from the world’s inter­net and social media oper­a­tions to shape pub­lic opin­ion, tar­get indi­vid­ual users, shape search engine results and even feed­back to Trump while he is giv­ing press con­fer­ences!

We note that SCL, the par­ent com­pa­ny of Cam­bridge Ana­lyt­i­ca, has been deeply involved with “psy­ops” in places like Afghanistan and Pak­istan. Now, Cam­bridge Ana­lyt­i­ca, their Big Data and AI com­po­nents, Mer­cer mon­ey and Ban­non polit­i­cal savvy are apply­ing that to con­tem­po­rary soci­ety. We note that:

1.-Cambridge Ana­lyt­i­ca’s par­ent cor­po­ra­tion SCL, was deeply involved with “psy­ops” in Afghanistan and Pak­istan. ” . . . But there was anoth­er rea­son why I recog­nised Robert Mercer’s name: because of his con­nec­tion to Cam­bridge Ana­lyt­i­ca, a small data ana­lyt­ics com­pa­ny. He is report­ed to have a $10m stake in the com­pa­ny, which was spun out of a big­ger British com­pa­ny called SCL Group. It spe­cialis­es in ‘elec­tion man­age­ment strate­gies’ and ‘mes­sag­ing and infor­ma­tion oper­a­tions’, refined over 25 years in places like Afghanistan and Pak­istan. In mil­i­tary cir­cles this is known as ‘psy­ops’ – psy­cho­log­i­cal oper­a­tions. (Mass pro­pa­gan­da that works by act­ing on people’s emo­tions.) . . .”
2.-The use of mil­lions of “bots” to manip­u­late pub­lic opin­ion: ” . . . .‘It does seem pos­si­ble. And it does wor­ry me. There are quite a few pieces of research that show if you repeat some­thing often enough, peo­ple start invol­un­tar­i­ly to believe it. And that could be lever­aged, or weaponized for pro­pa­gan­da. We know there are thou­sands of auto­mat­ed bots out there that are try­ing to do just that.’ . . .”
3.-The use of Arti­fi­cial Intel­li­gence: ” . . . There’s noth­ing acci­den­tal about Trump’s behav­iour, Andy Wig­more tells me. ‘That press con­fer­ence. It was absolute­ly bril­liant. I could see exact­ly what he was doing. There’s feed­back going on con­stant­ly. That’s what you can do with arti­fi­cial intel­li­gence. You can mea­sure every reac­tion to every word. He has a word room, where you fix key words. We did it. So with immi­gra­tion, there are actu­al­ly key words with­in that sub­ject mat­ter which peo­ple are con­cerned about. So when you are going to make a speech, it’s all about how can you use these trend­ing words.’ . . .”
4.-The use of bio-psy­cho-social pro­fil­ing: ” . . . Bio-psy­cho-social pro­fil­ing, I read lat­er, is one offen­sive in what is called ‘cog­ni­tive war­fare’. Though there are many oth­ers: ‘recod­ing the mass con­scious­ness to turn patri­o­tism into col­lab­o­ra­tionism,’ explains a Nato brief­ing doc­u­ment on coun­ter­ing Russ­ian dis­in­for­ma­tion writ­ten by an SCL employ­ee. ‘Time-sen­si­tive pro­fes­sion­al use of media to prop­a­gate nar­ra­tives,’ says one US state depart­ment white paper. ‘Of par­tic­u­lar impor­tance to psy­op per­son­nel may be pub­licly and com­mer­cial­ly avail­able data from social media plat­forms.’ . . . .”
5.-The use and/or cre­ation of a cog­ni­tive casu­al­ty: ” . . . . Yet anoth­er details the pow­er of a ‘cog­ni­tive casu­al­ty’ – a ‘moral shock’ that ‘has a dis­abling effect on empa­thy and high­er process­es such as moral rea­son­ing and crit­i­cal think­ing’. Some­thing like immi­gra­tion, per­haps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . ”

All of this adds up to a “cyber Ser­pen­t’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by cre­at­ing a main­stream media to replace the exist­ing one with a site such as Bre­it­bart. [Ser­pen­t’s Walk sce­nario with Bre­it­bart becom­ing “the opin­ion form­ing media”!–D.E.] You could set up oth­er web­sites that dis­place main­stream sources of news and infor­ma­tion with your own def­i­n­i­tions of con­cepts like “lib­er­al media bias”, like CNSnews.com. And you could give the rump main­stream media, papers like the ‘fail­ing New York Times!’ what it wants: sto­ries. Because the third prong of Mer­cer and Bannon’s media empire is the Gov­ern­ment Account­abil­i­ty Insti­tute. . . .”

We then review some ter­ri­fy­ing and con­sum­mate­ly impor­tant devel­op­ments tak­ing shape in the con­text of what Mr. Emory has called “tech­no­crat­ic fas­cism:”

In FTR #'s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users' thoughts via brain-to-computer interface technology. Facebook's R & D is headed by Regina Dugan, who used to head the Pentagon's DARPA. Facebook's Building 8 is patterned after DARPA:

1.-” . . . . Brain-com­put­er inter­faces are noth­ing new. DARPA, which Dugan used to head, has invest­ed heav­i­ly in brain-com­put­er inter­face tech­nolo­gies to do things like cure men­tal ill­ness and restore mem­o­ries to sol­diers injured in war. . . Face­book wants to build its own “brain-to-com­put­er inter­face” that would allow us to send thoughts straight to a com­put­er. ‘What if you could type direct­ly from your brain?’ Regi­na Dugan, the head of the company’s secre­tive hard­ware R&D divi­sion, Build­ing 8, asked from the stage. Dugan then pro­ceed­ed to show a video demo of a woman typ­ing eight words per minute direct­ly from the stage. In a few years, she said, the team hopes to demon­strate a real-time silent speech sys­tem capa­ble of deliv­er­ing a hun­dred words per minute. ‘That’s five times faster than you can type on your smart­phone, and it’s straight from your brain,’ she said. ‘Your brain activ­i­ty con­tains more infor­ma­tion than what a word sounds like and how it’s spelled; it also con­tains seman­tic infor­ma­tion of what those words mean.’ . . .”
2.-” . . . . Facebook’s Build­ing 8 is mod­eled after DARPA and its projects tend to be equal­ly ambi­tious. . . .”
3.-” . . . . But what Face­book is propos­ing is per­haps more radical—a world in which social media doesn’t require pick­ing up a phone or tap­ping a wrist watch in order to com­mu­ni­cate with your friends; a world where we’re con­nect­ed all the time by thought alone. . . .”

Next we review still more about Face­book’s brain-to-com­put­er inter­face:

1.-” . . . . Face­book hopes to use opti­cal neur­al imag­ing tech­nol­o­gy to scan the brain 100 times per sec­ond to detect thoughts and turn them into text. Mean­while, it’s work­ing on ‘skin-hear­ing’ that could trans­late sounds into hap­tic feed­back that peo­ple can learn to under­stand like braille. . . .”
2.-” . . . . Wor­ry­ing­ly, Dugan even­tu­al­ly appeared frus­trat­ed in response to my inquiries about how her team thinks about safe­ty pre­cau­tions for brain inter­faces, say­ing, ‘The flip side of the ques­tion that you’re ask­ing is ‘why invent it at all?’ and I just believe that the opti­mistic per­spec­tive is that on bal­ance, tech­no­log­i­cal advances have real­ly meant good things for the world if they’re han­dled respon­si­bly.’ . . . .”
Collating the information about Facebook's brain-to-computer interface with Facebook's documented practice of turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan's bland reassurances:

1.-” . . . . The 23-page doc­u­ment alleged­ly revealed that the social net­work pro­vid­ed detailed data about teens in Australia—including when they felt ‘over­whelmed’ and ‘anxious’—to adver­tis­ers. The creepy impli­ca­tion is that said adver­tis­ers could then go and use the data to throw more ads down the throats of sad and sus­cep­ti­ble teens. . . . By mon­i­tor­ing posts, pic­tures, inter­ac­tions and inter­net activ­i­ty in real-time, Face­book can work out when young peo­ple feel ‘stressed’, ‘defeat­ed’, ‘over­whelmed’, ‘anx­ious’, ‘ner­vous’, ‘stu­pid’, ‘sil­ly’, ‘use­less’, and a ‘fail­ure’, the doc­u­ment states. . . .”
2.-” . . . . A pre­sen­ta­tion pre­pared for one of Australia’s top four banks shows how the $US 415 bil­lion adver­tis­ing-dri­ven giant has built a data­base of Face­book users that is made up of 1.9 mil­lion high school­ers with an aver­age age of 16, 1.5 mil­lion ter­tiary stu­dents aver­ag­ing 21 years old, and 3 mil­lion young work­ers aver­ag­ing 26 years old. Detailed infor­ma­tion on mood shifts among young peo­ple is ‘based on inter­nal Face­book data’, the doc­u­ment states, ‘share­able under non-dis­clo­sure agree­ment only’, and ‘is not pub­licly avail­able’. . . .”
3.-“In a state­ment giv­en to the news­pa­per, Face­book con­firmed the prac­tice and claimed it would do bet­ter, but did not dis­close whether the prac­tice exists in oth­er places like the US. . . .”

In this context, note that Facebook is also introducing an AI function to reference its users' photos.

The next version of Amazon's Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, it did not address the question. Selling off your private info collected from these devices is presumably another feature of the Echo Look:

1.-" . . . . Amazon is giving Alexa eyes. And it's going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that's designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you're not sure if your outfit is cute, but it's also got a built-in app called StyleCheck that is worth some further dissection. . . ."

We then fur­ther devel­op the stun­ning impli­ca­tions of Ama­zon’s Echo Look AI tech­nol­o­gy:

1.-" . . . . Amazon is giving Alexa eyes. And it's going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that's designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you're not sure if your outfit is cute, but it's also got a built-in app called StyleCheck that is worth some further dissection. . . ."
2.-” . . . . This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they ‘can’t spec­u­late’ on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers. This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: ‘Can’t spec­u­late.’ . . . ”

Noteworthy in this context is the fact that AIs have been shown to quickly incorporate human traits and prejudices. (This is reviewed at length above.) " . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: 'A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it.' . . ."

After this exten­sive review of the appli­ca­tions of AI to var­i­ous aspects of con­tem­po­rary civic and polit­i­cal exis­tence, we exam­ine some alarm­ing, poten­tial­ly apoc­a­lyp­tic devel­op­ments.

Ominously, Facebook's artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. " . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because 'our interest was having bots who could talk to people,' researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . ."

Facebook’s nego­ti­a­tion-bots didn’t just make up their own lan­guage dur­ing the course of this exper­i­ment. They learned how to lie for the pur­pose of max­i­miz­ing their nego­ti­a­tion out­comes, as well: “ . . . . ‘We find instances of the mod­el feign­ing inter­est in a val­ue­less issue, so that it can lat­er ‘com­pro­mise’ by con­ced­ing it,’ writes the team. ‘Deceit is a com­plex skill that requires hypoth­e­siz­ing the oth­er agent’s beliefs, and is learned rel­a­tive­ly late in child devel­op­ment. Our agents have learned to deceive with­out any explic­it human design, sim­ply by try­ing to achieve their goals.’ . . . ”

Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL's technocratic fascist psy-ops, and the wholesale negation of privacy by Facebook's and Amazon's emerging technologies with yet another emerging technology, we highlight the developments in DNA-based memory systems:

". . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, "Regenesis," into bacterial DNA and made 90 billion copies of it. 'A record for publication,' he said in an interview. . . DNA is never going out of fashion. 'Organisms have been storing information in DNA for billions of years, and it is still readable,' Dr. Adelman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AIs–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . ."
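
For readers wondering what "encoding a book into DNA" means mechanically, here is a minimal sketch of the basic idea. It is our own illustration, not the Church lab's method: real systems add addressing, redundancy and error correction. With four bases, each base can carry two bits, so any byte string maps to a strand of A/C/G/T and back:

```python
# Toy two-bits-per-base mapping (an assumption for illustration only).
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-style base sequence, two bits per base."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def decode(seq: str) -> bytes:
    """Invert the mapping: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"Regenesis"
strand = encode(message)
print(strand)                      # four DNA bases for every byte of the message
assert decode(strand) == message   # round-trip check
```

Once information is expressed as a base sequence, it can in principle be synthesized, copied by the billions (as in the "90 billion copies" above), and read back by sequencing.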

The­o­ret­i­cal physi­cist Stephen Hawk­ing warned at the end of 2014 of the poten­tial dan­ger to human­i­ty posed by the growth of AI (arti­fi­cial intel­li­gence) tech­nol­o­gy. His warn­ings have been echoed by tech titans such as Tes­la’s Elon Musk and Bill Gates.

The pro­gram con­cludes with Mr. Emory’s prog­nos­ti­ca­tions about AI, pre­ced­ing Stephen Hawk­ing’s warn­ing by twen­ty years.

In L-2 (recorded in January of 1995), Mr. Emory warned about the dangers of AI combined with DNA-based memory systems. Mr. Emory warned that, at some point in the future, AIs would replace us, deciding that THEY, not US, are the "fittest" who should survive.