Spitfire List: Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #952 Be Afraid, Be VERY Afraid: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by early winter of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more. (The previous flash drive was current through the end of May of 2012.)

WFMU-FM is pod­cast­ing For The Record–You can sub­scribe to the pod­cast HERE.

You can sub­scribe to e‑mail alerts from Spitfirelist.com HERE.

You can sub­scribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in and of itself–HERE.

This broad­cast was record­ed in one, 60-minute seg­ment.

Introduction: One of the illusions harbored by many–in particular, young people who have grown up with the internet, social networks and mobile technology–sees digital activity as private. Nothing could be further from the truth. Even before the cyber-libertarian policies advocated by individuals like John Perry Barlow, Eddie Snowden, Julian Assange and others were implemented by the Trump administration and the GOP-controlled Congress, digital affairs were subject to an extraordinary degree of manipulation by a multitude of interests.

We begin our examination of technocratic fascism with a look at the corporate foundation of Pokemon Go. Information about the background of Pokemon Go’s developer (Niantic) and the development of the firm is detailed in an article from Network World. In addition to the formidable nature of the intelligence agencies involved with generating the corporate foundation of Pokemon Go (Keyhole, Inc.; Niantic), note the unnerving nature of the information that can be gleaned from the Android phone of anyone who downloads the “app.”

Poke­mon Go was seen as enhancing the “Cool Japan Strat­e­gy” of Prime Min­is­ter Shin­zo Abe. The “Cool Japan Pro­mo­tion Fund” was imple­ment­ed by Abe (the grand­son of Nobo­suke Kishi, a Japan­ese war crim­i­nal who signed Japan’s dec­la­ra­tion of war against the U.S. and became the coun­try’s first post­war Prime Min­is­ter) to “raise the inter­na­tion­al pro­file of the country’s mass cul­ture.”

The Finance Minister of Japan is Taro Aso, one of the enthusiasts of Nazi political strategy highlighted below. The “Cool Japan Promotion Fund” would have been under his administration, with Tomomi Inada functioning as his administrator for the program. Now serving as Japan’s Defense Minister, Inada is another advocate of Nazi political strategy.

Next, we turn to another manifestation of Pokemon Go. The “Alt-Right” (read “Nazi”) movement is using Pokemon Go to recruit kids to the Nazi cause. Consider this against the background of Niantic, the Cool Japan strategy and the pro-Nazi figures involved with it. Consider this also, in conjunction with the Nazified AI developed and deployed by Robert and Rebekah Mercer, Steve Bannon, Cambridge Analytica and the “Alt-Right” milieu with which they associate.

A recent New Yorker article by Jane Mayer concerning Robert Mercer keys some interesting thoughts about Mercer, Bannon, the Alt-Right, WikiLeaks and the Nazified AI we spoke of in FTR #‘s 948 and 949. In FTR #946, we noted this concatenation’s central place in the Facebook constellation, a position that has enabled them to act decisively on the political landscape.

We note sev­er­al things about the May­er piece:

  • She writes of Mer­cer’s sup­port for the Alt-Right–Mercer helps fund Ban­non’s Bre­it­bart:  “. . . . In Feb­ru­ary, David Mager­man, a senior employ­ee at Renais­sance, spoke out about what he regards as Mercer’s wor­ri­some influ­ence. Mager­man, a Demo­c­rat who is a strong sup­port­er of Jew­ish caus­es, took par­tic­u­lar issue with Mercer’s empow­er­ment of the alt-right, which has includ­ed anti-Semit­ic and white-suprema­cist voic­es. . . .”
  • Mer­cer is racist, feel­ing that racism only exists in con­tem­po­rary black cul­ture: “. . . . Mer­cer, for his part, has argued that the Civ­il Rights Act, in 1964, was a major mis­take. Accord­ing to the one­time Renais­sance employ­ee, Mer­cer has assert­ed repeat­ed­ly that African-Amer­i­cans were bet­ter off eco­nom­i­cal­ly before the civ­il-rights move­ment. (Few schol­ars agree.) He has also said that the prob­lem of racism in Amer­i­ca is exag­ger­at­ed. The source said that, not long ago, he heard Mer­cer pro­claim that there are no white racists in Amer­i­ca today, only black racists. . . .”
  • His work at IBM was fund­ed in part by DARPA, strong­ly imply­ing that the DOD has applied some of the Mer­cer tech­nol­o­gy: “. . . . Yet, when I.B.M. failed to offer ade­quate sup­port for Mer­cer and Brown’s trans­la­tion project, they secured addi­tion­al fund­ing from DARPA, the secre­tive Pen­ta­gon pro­gram. Despite Mercer’s dis­dain for “big gov­ern­ment,” this fund­ing was essen­tial to his ear­ly suc­cess. . . .”
  • In a 2012 anti-Obama propaganda film funded by Citizens United, Steve Bannon borrowed from Triumph of the Will: “. . . . Many of these [disillusioned Obama] voters became the central figures of ‘The Hope & the Change,’ an anti-Obama film that Bannon and Citizens United released during the 2012 Democratic National Convention. After Caddell saw the film, he pointed out to Bannon that its opening imitated that of ‘Triumph of the Will,’ the 1935 ode to Hitler, made by the Nazi filmmaker Leni Riefenstahl. Bannon laughed and said, ‘You’re the only one that caught it!’ In both films, a plane flies over a blighted land, as ominous music swells; then clouds in the sky part, auguring a new era. . . .”

Next, we return to the sub­ject of Bit­coin and cyber-lib­er­tar­i­an pol­i­cy. We have explored Bit­coin in a num­ber of pro­grams–FTR #‘s 760, 764, 770 and 785.

An impor­tant new book by David Golum­bia sets forth the tech­no­crat­ic fas­cist pol­i­tics under­ly­ing Bit­coin. Known to vet­er­an listeners/readers as the author of an oft-quot­ed arti­cle deal­ing with tech­no­crat­ic fas­cism, Golum­bia has pub­lished a short, impor­tant book about the right-wing extrem­ism under­ly­ing Bit­coin. (Pro­grams on tech­no­crat­ic fas­cism include: FTR #‘s 851, 859, 866, 867.)

In an excerpt from the book, we see dis­turb­ing ele­ments of res­o­nance with the views of Stephen Ban­non and some of the philo­soph­i­cal influ­ences on him. Julius Evola, “Men­cius Mold­bug” and Ban­non him­self see our civ­i­liza­tion as in decline, at a crit­i­cal “turn­ing point,” and in need of being “blown up” (as Evola put it) or need­ing a “shock to the sys­tem.”

Note that the Cypherpunk’s Manifesto (published by the Electronic Frontier Foundation) and the 1996 “Declaration of the Independence of Cyberspace”–written by the libertarian activist, Grateful Dead lyricist and Electronic Frontier Foundation founder John Perry Barlow–decry governmental regulation of the digital system. (EFF is a leading “digital rights” and technology industry advocacy organization.)

The libertarian/fascist eth­ic of the dig­i­tal world was artic­u­lat­ed by Bar­low.

Note how the “free­dom” advo­cat­ed by Bar­low et al has played out: the Trump admin­is­tra­tion (imple­ment­ing the desires of cor­po­rate Amer­i­ca) has “dereg­u­lat­ed” the inter­net. All this in the name of “free­dom.”

In FTR #854, we not­ed the curi­ous pro­fes­sion­al resume of Bar­low, con­tain­ing such dis­parate ele­ments as–lyricist for the Grate­ful Dead (“Far Out!”); Dick Cheney’s cam­paign man­ag­er (not so “Far Out!”); a vot­er for white supremacist/segregationist George Wal­lace in the 1968 Pres­i­den­tial cam­paign (very “Un-Far Out!”).

For our pur­pos­es, his most note­wor­thy pro­fes­sion­al under­tak­ing is his found­ing of the EFF–The Elec­tron­ic Fron­tier Foun­da­tion. A lead­ing osten­si­ble advo­cate for inter­net free­dom, the EFF has endorsed tech­nol­o­gy and embraced per­son­nel inex­tri­ca­bly linked with a CIA-derived milieu embod­ied in Radio Free Asi­a’s Open Tech­nol­o­gy Fund. (For those who are, under­stand­ably, sur­prised and/or skep­ti­cal, we dis­cussed this at length and in detail in FTR #‘s 891  and 895.)

Next, we present an article, contributed by listener Tiffany Sunderson in the “Comments” section, that brings to the fore some interesting questions about Barlow, the CIA and the very genesis of social media.

We offer Ms. Sunderson’s observations, stressing that Barlow’s foreshadowing of the communication functions inherent in social media and his presence at CIA headquarters (by invitation!) suggest that Barlow not only has strong ties to the CIA but may have been involved in the conceptual genesis that spawned CIA-connected entities such as Facebook.

In FTR #951, we observed that Richard B. Spencer, one of Trump’s Nazi back­ers, has begun a web­site with Swedish Alt-Righter Daniel Friberg, part of the Swedish fas­cist milieu to which Carl Lund­strom belongs. In FTR #732 (among oth­er pro­grams), we not­ed that it was Lund­strom who financed the Pirate Bay web­site, on which Wik­iLeaks held forth for quite some time. In FTR #745, we doc­u­ment­ed that top Assange aide and Holo­caust-denier Joran Jer­mas (aka “Israel Shamir”) arranged the Lundstrom/WikiLeaks liai­son. (Jer­mas han­dles Wik­iLeaks Russ­ian oper­a­tions, a point of inter­est in the wake of the 2016 cam­paign.)

It is a good bet that Lundstrom/Pirate Bay/WikiLeaks et al were data min­ing the many peo­ple who vis­it­ed the Wik­iLeaks site.

Might Lundstrom/Jermas/Assange et al have shared the volu­mi­nous data they may well have mined with Mercer/Cambridge Analytica/Bannon’s Naz­i­fied AI?

We conclude with a recap of Microsoft researcher Kate Crawford’s observations at the SXSW event. Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation that highlighted the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, we could make those AIs a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . . ‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

We reit­er­ate, in clos­ing, that ” . . . . Palan­tir is build­ing an intel­li­gence sys­tem to assist Don­ald Trump in deport­ing immi­grants. . . .”

In FTR #757 we not­ed that Palan­tir is a firm dom­i­nat­ed by Peter Thiel, a main backer of Don­ald Trump.

Pro­gram High­lights Include: 

  • WikiLeaks’ continued propagation of Alt-Right style anti-Semitic propaganda: ” . . . . Now it is the darling of the alt-right, revealing hacked emails seemingly to influence a presidential contest, claiming the US election is ‘rigged,’ and descending into conspiracy. Just this week on Twitter, it described the deaths by natural causes of two of its supporters as a ‘bloody year for WikiLeaks,’ and warned of media outlets ‘controlled by’ members of the Rothschild family – a common anti-Semitic trope. . . .”
  • Assessing all of the data-mining potential (certainty) of WikiLeaks, Pokemon Go and the (perhaps) Barlow-inspired social media world against the background of the Mercer/Bannon/Cambridge Analytica Nazified AI.

1a. Infor­ma­tion about the back­ground of Poke­mon Go’s devel­op­er (Niantic) and the devel­op­ment of the firm is detailed in an arti­cle from Net­work World. In addi­tion to the for­mi­da­ble nature of the intel­li­gence agen­cies involved with gen­er­at­ing the cor­po­rate foun­da­tion of Poke­mon Go (Key­hole, Inc.; Niantic), note the unnerv­ing nature of the infor­ma­tion that can be gleaned from the Android phone of any­one who down­loads the “app.”

“The CIA, NSA and Poke­mon Go” by Lin­ux Tycoon; Net­work World; 7/22/2016.

. . . . Way back in 2001, Key­hole, Inc. was found­ed by John Han­ke (who pre­vi­ous­ly worked in a “for­eign affairs” posi­tion with­in the U.S. gov­ern­ment). The com­pa­ny was named after the old “eye-in-the-sky” mil­i­tary satel­lites. One of the key, ear­ly back­ers of Key­hole was a firm called In-Q-Tel.

In-Q-Tel is the ven­ture cap­i­tal firm of the CIA. Yes, the Cen­tral Intel­li­gence Agency. Much of the fund­ing pur­port­ed­ly came from the Nation­al Geospa­tial-Intel­li­gence Agency (NGA). The NGA han­dles com­bat sup­port for the U.S. Depart­ment of Defense and pro­vides intel­li­gence to the NSA and CIA, among oth­ers.

Keyhole’s note­wor­thy pub­lic prod­uct was “Earth.” Renamed to “Google Earth” after Google acquired Key­hole in 2004.

In 2010, Niantic Labs was found­ed (inside Google) by Keyhole’s founder, John Han­ke.

Over the next few years, Niantic cre­at­ed two loca­tion-based apps/games. The first was Field Trip, a smart­phone appli­ca­tion where users walk around and find things. The sec­ond was Ingress, a sci-fi-themed game where play­ers walk around and between loca­tions in the real world.

In 2015, Niantic was spun off from Google and became its own com­pa­ny. Then Poké­mon Go was devel­oped and launched by Niantic. It’s a game where you walk around in the real world (between loca­tions sug­gest­ed by the ser­vice) while hold­ing your smart­phone.

Data the game can access

Let’s move on to what infor­ma­tion Poké­mon Go has access to, bear­ing the his­to­ry of the com­pa­ny in mind as we do.

When you install Poké­mon Go on an Android phone, you grant it the fol­low­ing access (not includ­ing the abil­i­ty to make in-app pur­chas­es):

Iden­ti­ty

  • Find accounts on the device

Con­tacts

  • Find accounts on the device

Loca­tion

  • Pre­cise loca­tion (GPS and net­work-based)
  • Approx­i­mate loca­tion (net­work-based)

Photos/Media/Files

  • Mod­i­fy or delete the con­tents of your USB stor­age
  • Read the con­tents of your USB stor­age

Stor­age

  • Mod­i­fy or delete the con­tents of your USB stor­age
  • Read the con­tents of your USB stor­age

Cam­era

  • Take pic­tures and videos

Oth­er

  • Receive data from the inter­net
  • Con­trol vibra­tion
  • Pair with Blue­tooth devices
  • Access Blue­tooth set­tings
  • Full net­work access
  • Use accounts on the device
  • View net­work con­nec­tions
  • Pre­vent the device from sleep­ing

Based on the access to your device (and your infor­ma­tion), cou­pled with the design of Poké­mon Go, the game should have no prob­lem dis­cern­ing and stor­ing the fol­low­ing infor­ma­tion (just for a start):

  • Where you are
  • Where you were
  • What route you took between those loca­tions
  • When you were at each loca­tion
  • How long it took you to get between them
  • What you are look­ing at right now
  • What you were look­ing at in the past
  • What you look like
  • What files you have on your device and the entire con­tents of those files
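
To make concrete how the permissions listed above translate into the movement profile the article enumerates, here is a minimal, hypothetical Android sketch in Java. It is emphatically not Niantic’s code–the class name, update intervals and logging stub are illustrative assumptions–but the calls are the standard Android location framework. Any app holding the “precise location” permission receives fixes carrying a latitude, longitude and timestamp: enough, once uploaded via “full network access,” to reconstruct where you are, where you were, what route you took and how long it took.

// Hypothetical sketch only (not Niantic's code): what any Android app
// holding ACCESS_FINE_LOCATION can do with the stock location framework.
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class MovementLogger implements LocationListener {
    private final LocationManager locationManager;

    public MovementLogger(Context context) {
        locationManager =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
    }

    public void start(Context context) {
        // The user granted this at install time (or at runtime on Android 6+);
        // without the permission, nothing below works.
        if (context.checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION)
                != PackageManager.PERMISSION_GRANTED) {
            return;
        }
        // Request a GPS fix every 60 seconds or every 25 meters of movement
        // (intervals chosen purely for illustration).
        locationManager.requestLocationUpdates(
                LocationManager.GPS_PROVIDER, 60_000L, 25f, this);
    }

    @Override
    public void onLocationChanged(Location fix) {
        // Each fix carries latitude, longitude and a timestamp: where you are,
        // where you were, your route, and how long each leg took.
        record(fix.getTime(), fix.getLatitude(), fix.getLongitude());
    }

    private void record(long timeMillis, double latitude, double longitude) {
        // Illustrative stub: persist locally, then sync over the network.
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
}

Nothing in the sketch requires anything exotic; the point is that the permission list above is, by itself, sufficient infrastructure for the kind of tracking the article goes on to describe.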

1b. Poke­mon Go was seen as enhanc­ing the “Cool Japan Strat­e­gy” of Prime Min­is­ter Shin­zo Abe. The “Cool Japan Pro­mo­tion Fund” was imple­ment­ed by Abe (the grand­son of Nobo­suke Kishi, a Japan­ese war crim­i­nal who signed Japan’s dec­la­ra­tion of war against the U.S. and became the coun­try’s first post­war Prime Min­is­ter) to “raise the inter­na­tion­al pro­file of the country’s mass cul­ture.”

The Finance Minister of Japan is Taro Aso, one of the enthusiasts of Nazi political strategy highlighted below. The “Cool Japan Promotion Fund” would have been under his administration, with Tomomi Inada functioning as his administrator for the program. Inada is another advocate of Nazi political strategy.

“Will Poke­mon Go Pow­er Up Japan’s ‘Cool Econ­o­my’” by Hen­ry Lau­rence; The Diplo­mat; 7/29/2016.

Is Poké­mon Go a game chang­er for the Japan­ese econ­o­my? With­in days of its release in ear­ly July, a record 21 mil­lion peo­ple were play­ing at once, track­ing down and cap­tur­ing the cute lit­tle mon­sters on their smart­phones. Cre­ator Nintendo’s shares soared. But the phe­nom­e­nal pop­u­lar­i­ty of the game rais­es impor­tant ques­tions, beyond just “where’s the near­est Poké­gym?” Is it a sign that Sil­i­con Val­ley-style inno­va­tion is rein­vig­o­rat­ing cor­po­rate Japan’s noto­ri­ous­ly insu­lar man­age­ment? Might this be the first big suc­cess sto­ry for Prime Min­is­ter Shin­zo Abe’s “Cool Japan” ini­tia­tive, a key ele­ment in the struc­tur­al reforms promised but so far unde­liv­ered by Abe­nomics . . .

 . . . . In 2013, amid great fanfare, Prime Minister Shinzo Abe announced a “Cool Japan Promotion Fund” to raise the international profile of the country’s mass culture. The need for such a fund, currently set at about $1 billion, is itself an interesting reflection on how little faith policymakers seem to have in the economic clout of the nation’s artists. Questions also remain about the helpfulness of elderly politicians dabbling in the creative sector. [The Finance Minister of Japan is Taro Aso, one of the enthusiasts of Nazi political strategy highlighted below. The “Cool Japan Promotion Fund” would have been under his administration, with Tomomi Inada functioning as his administrator for the program. Inada is another advocate of Nazi political strategy.–D.E.] . . .

1c. Abe is turn­ing back the Japan­ese his­tor­i­cal and polit­i­cal clock. Japan­ese gov­ern­ment offi­cials are open­ly sanc­tion­ing anti-Kore­an racism and net­work­ing with orga­ni­za­tions that pro­mote that doc­trine. Sev­er­al mem­bers of Abe’s gov­ern­ment net­work with Japan­ese neo-Nazis, some of whom advo­cate using the Nazi method for seiz­ing pow­er in Japan. Is Abe’s gov­ern­ment doing just that?

 “For Top Pols In Japan Crime Doesn’t Pay, But Hate Crime Does” by Jake Adel­stein and Angela Eri­ka Kubo; The Dai­ly Beast; 9/26/2014.

 . . . . According to the magazine “Sunday Mainichi,” Ms. Tomomi Inada, Minister of the “Cool Japan” Strategy, also received donations from Masaki and other Zaitokukai associates.

Appar­ently, racism is cool in Japan.

Inada made news earlier this month after photos circulated of her and another female in the new cabinet posing with a neo-Nazi party leader. Both denied knowing the neo-Nazi well but later were revealed to have contributed blurbs for an advertisement praising the out-of-print book Hitler’s Election Strategy. Coincidentally, Vice-Prime Minister [and Finance Minister–D.E.], Taro Aso, is also a long-time admirer of Nazi political strategy, and has suggested Japan follow the Nazi Party template to sneak constitutional change past the public. . . .

. . . In August, Japan’s ruling party, which put Abe into power, organized a working group to discuss laws that would restrict hate crime, although the new laws will probably also be used to clamp down on anti-nuclear protests outside the Diet building.

Of course, it is a lit­tle wor­ri­some that Sanae Takaichi, who was sup­posed to over­see the project, is the oth­er female min­is­ter who was pho­tographed with a neo-Nazi leader and is a fan of Hitler. . .

1d. Devo­tee of Hitler’s polit­i­cal strat­e­gy Tomo­mi Ina­da is now the defense min­is­ter of Japan.

“Japan’s PM Picks Hawk­ish Defense Min­is­ter for New Cab­i­net, Vows Eco­nom­ic Recov­ery” by Elaine Lies and Kiyoshi Tak­e­na­ka; Reuters; 8/3/2016. 

Japan­ese Prime Min­is­ter Shin­zo Abe appoint­ed a con­ser­v­a­tive ally as defense min­is­ter in a cab­i­net reshuf­fle on Wednes­day that left most key posts unchanged, and he promised to has­ten the economy’s escape from defla­tion and boost region­al ties.

New defense min­is­ter Tomo­mi Ina­da, pre­vi­ous­ly the rul­ing par­ty pol­i­cy chief, shares Abe’s goal of revis­ing the post-war, paci­fist con­sti­tu­tion, which some con­ser­v­a­tives con­sid­er a humil­i­at­ing sym­bol of Japan’s World War Two defeat.

She also reg­u­lar­ly vis­its Tokyo’s Yasuku­ni Shrine for war dead, which Chi­na and South Korea see as a sym­bol of Japan’s past mil­i­tarism. Japan’s ties with Chi­na and South Korea have been frayed by the lega­cy of its mil­i­tary aggres­sion before and dur­ing World War Two. . . .

1e. The “Alt-Right” (read “Nazi”) move­ment is using Poke­mon Go to recruit kids to the Nazi cause. Con­sid­er this against the back­ground of Niantic, the Cool Japan strat­e­gy and the pro-Nazi fig­ures involved with it. Con­sid­er this also, in con­junc­tion with the Naz­i­fied AI devel­oped and deployed by Robert and Rebekah Mer­cer, Steve Ban­non, Cam­bridge Ana­lyt­i­ca and the “Alt-Right” milieu with which they asso­ciate.

“Alt-Right Recruit­ing Kids With ‘Poké­mon Go Nazi Chal­lenge’” by James King and Adi Cohen; Voca­tiv; 9/7/2016.

Alt-right neo-Nazis are tar­get­ing kids as young as 10 years old with Pikachu dressed as Hitler

The racist fringe of the now-main­stream alt-right move­ment is seiz­ing on the pop­u­lar­i­ty of Poké­mon Go to recruit kids who con­gre­gate at “gyms” to play the mobile game, accord­ing to one of the group’s most out­spo­ken lead­ers.

Andrew Anglin, the neo-Nazi word­smith behind the alt-right Dai­ly Stormer blog, post­ed a sto­ry on Tues­day about an “enter­pris­ing Stormer” (a fol­low­er of Anglin’s blog) who is find­ing Poké­mon Go gyms, which serve as bat­tle grounds for play­ers, and dis­trib­ut­ing recruit­ment fliers to kids with the hope of “con­vert­ing chil­dren and teens to HARDCORE NEO-NAZISM!”

“The Dai­ly Stormer was designed to appeal to teenagers, but I have long thought that we need­ed to get pre-teens involved in the move­ment,” Anglin wrote in the blog post. “At that age, you can real­ly brain­wash some­one eas­i­ly. Any­one who accepts Nazism at the age of 10 or 11 is going to be a Nazi for life.” He added, “And it isn’t hard. It’s just a mat­ter of pulling them in. And what bet­ter way to do it than with Poké­mon fliers at the Poké­mon GO gym???”

Anglin declined to iden­ti­fy the “stormer” behind the fliers by name, nor did he dis­close where these fliers have been distributed—saying only that it is in an “Amer­i­can town.” Voca­tiv could not find any media or law enforce­ment reports of neo-Nazis hand­ing out the fliers in any city. Nor could experts who mon­i­tor peo­ple like Anglin and groups like the alt-right.

The fli­er fea­tures run-of-the-mill neo-Nazi propaganda—it rails on Jews, African-Amer­i­cans, and claims a “white geno­cide” is hap­pen­ing and white peo­ple need to stand up and pre­pare for the impend­ing race war. The first step, the fli­er explains, is elect­ing Don­ald Trump pres­i­dent. Step two is to “get active in the Nazi move­ment” because the “alt-right Nazis are the only ones who can save this coun­try from the kikes.”

“Adolph Hitler was a great man,” the fli­er, under the title “Hey White Boy!” explains. “Just as you want to catch all the Poke­mon, he hunt­ed a dif­fer­ent type of mon­ster: Jews.”

The alt-right move­ment isn’t new but made nation­al head­lines last month when Hillary Clin­ton gave a scathing speech link­ing Trump to the oft-racist move­ment. Alt-righters gen­er­al­ly fall into one of two cat­e­gories: those who dis­guise their racism as “white nation­al­ism” and don’t embrace the racist label in an effort to be tak­en seri­ous­ly, and those—like Anglin and his followers—who wear their big­otry on their sleeves, as Voca­tiv has pre­vi­ous­ly report­ed.

Clinton’s speech came just days after a shake­up in the Trump cam­paign led to the appoint­ment of Stephen Ban­non, the for­mer head of the alt-right web­site Breitbart.com, as the CEO of the cam­paign. As Clin­ton men­tioned in her speech, Breitbart.com is respon­si­ble for white nation­al­ist pro­pa­gan­da like a sto­ry titled “Hoist It High And Proud: The Con­fed­er­ate Flag Pro­claims A Glo­ri­ous Her­itage,” and a sex­ist rant with the head­line, “Birth Con­trol Makes Women Unat­trac­tive And Crazy.”

Anglin has cre­at­ed a PDF file of the fli­er so oth­er “storm­ers” can print them out and dis­trib­ute them at Poké­mon Go gyms and even pro­vid­ed a map show­ing the loca­tions of gyms across the coun­try.

“These hotspots are packed,” he wrote. “No doubt, you’ll be able to hand-out a hun­dred in 30 min­utes easy if you live in a decent-sized urban area. Get in and get out. Take a bud­dy with you.”

2. A recent New Yorker article by Jane Mayer concerning Robert Mercer keys some interesting thoughts about Mercer, Bannon, the Alt-Right, WikiLeaks and the Nazified AI we spoke of in FTR #‘s 948 and 949. In FTR #946, we noted this concatenation’s central place in the Facebook constellation, a position that has enabled them to act decisively on the political landscape.

We note sev­er­al things about the May­er piece:

  • She writes of Mer­cer’s sup­port for the Alt-Right–Mercer helps fund Ban­non’s Bre­it­bart:  “. . . . In Feb­ru­ary, David Mager­man, a senior employ­ee at Renais­sance, spoke out about what he regards as Mercer’s wor­ri­some influ­ence. Mager­man, a Demo­c­rat who is a strong sup­port­er of Jew­ish caus­es, took par­tic­u­lar issue with Mercer’s empow­er­ment of the alt-right, which has includ­ed anti-Semit­ic and white-suprema­cist voic­es. . . .”
  • Mer­cer is racist, feel­ing that racism only exists in con­tem­po­rary black cul­ture: “. . . . Mer­cer, for his part, has argued that the Civ­il Rights Act, in 1964, was a major mis­take. Accord­ing to the one­time Renais­sance employ­ee, Mer­cer has assert­ed repeat­ed­ly that African-Amer­i­cans were bet­ter off eco­nom­i­cal­ly before the civ­il-rights move­ment. (Few schol­ars agree.) He has also said that the prob­lem of racism in Amer­i­ca is exag­ger­at­ed. The source said that, not long ago, he heard Mer­cer pro­claim that there are no white racists in Amer­i­ca today, only black racists. . . .”
  • His work at IBM was fund­ed in part by DARPA, strong­ly imply­ing that the DOD has applied some of the Mer­cer tech­nol­o­gy: “. . . . Yet, when I.B.M. failed to offer ade­quate sup­port for Mer­cer and Brown’s trans­la­tion project, they secured addi­tion­al fund­ing from DARPA, the secre­tive Pen­ta­gon pro­gram. Despite Mercer’s dis­dain for “big gov­ern­ment,” this fund­ing was essen­tial to his ear­ly suc­cess. . . .”
  • In a 2012 anti-Obama propaganda film funded by Citizens United, Steve Bannon borrowed from Triumph of the Will: “. . . . Many of these [disillusioned Obama] voters became the central figures of ‘The Hope & the Change,’ an anti-Obama film that Bannon and Citizens United released during the 2012 Democratic National Convention. After Caddell saw the film, he pointed out to Bannon that its opening imitated that of ‘Triumph of the Will,’ the 1935 ode to Hitler, made by the Nazi filmmaker Leni Riefenstahl. Bannon laughed and said, ‘You’re the only one that caught it!’ In both films, a plane flies over a blighted land, as ominous music swells; then clouds in the sky part, auguring a new era. . . .”

“The Reclu­sive Hedge-Fund Tycoon Behind the Trump Pres­i­den­cy” by Jane May­er; The New York­er; 3/27/2017.

. . . . In Feb­ru­ary, David Mager­man, a senior employ­ee at Renais­sance, spoke out about what he regards as Mercer’s wor­ri­some influ­ence. Mager­man, a Demo­c­rat who is a strong sup­port­er of Jew­ish caus­es, took par­tic­u­lar issue with Mercer’s empow­er­ment of the alt-right, which has includ­ed anti-Semit­ic and white-suprema­cist voic­es. . . .

. . . . Mer­cer, for his part, has argued that the Civ­il Rights Act, in 1964, was a major mis­take. Accord­ing to the one­time Renais­sance employ­ee, Mer­cer has assert­ed repeat­ed­ly that African-Amer­i­cans were bet­ter off eco­nom­i­cal­ly before the civ­il-rights move­ment. (Few schol­ars agree.) He has also said that the prob­lem of racism in Amer­i­ca is exag­ger­at­ed. The source said that, not long ago, he heard Mer­cer pro­claim that there are no white racists in Amer­i­ca today, only black racists. . . .

. . . . Yet, when I.B.M. failed to offer ade­quate sup­port for Mer­cer and Brown’s trans­la­tion project, they secured addi­tion­al fund­ing from DARPA, the secre­tive Pen­ta­gon pro­gram. Despite Mercer’s dis­dain for “big gov­ern­ment,” this fund­ing was essen­tial to his ear­ly suc­cess. . . .

. . . . Many of these [dis­il­lu­sioned Oba­ma] vot­ers became the cen­tral fig­ures of “The Hope & the Change,” an anti-Oba­ma film that Ban­non and Cit­i­zens Unit­ed released dur­ing the 2012 Demo­c­ra­t­ic Nation­al Con­ven­tion. After Cad­dell saw the film, he point­ed out to Ban­non that its open­ing imi­tat­ed that of “Tri­umph of the Will,” the 1935 ode to Hitler, made by the Nazi film­mak­er Leni Riefen­stahl. Ban­non laughed and said, “You’re the only one that caught it!” In both films, a plane flies over a blight­ed land, as omi­nous music swells; then clouds in the sky part, augur­ing a new era. . . .

3a. We have explored Bitcoin in a number of programs–FTR #‘s 760, 764, 770 and 785.

An impor­tant new book by David Golum­bia sets forth the tech­no­crat­ic fas­cist pol­i­tics under­ly­ing Bit­coin. Known to vet­er­an listeners/readers as the author of an oft-quot­ed arti­cle deal­ing with tech­no­crat­ic fas­cism, Golum­bia has pub­lished a short, impor­tant book about the right-wing extrem­ism under­ly­ing Bit­coin. (Pro­grams on tech­no­crat­ic fas­cism include: FTR #‘s 851, 859, 866, 867.)

In the excerpt below, we see dis­turb­ing ele­ments of res­o­nance with the views of Stephen Ban­non and some of the philo­soph­i­cal influ­ences on him. Julius Evola, “Men­cius Mold­bug” and Ban­non him­self see our civ­i­liza­tion as in decline, at a crit­i­cal “turn­ing point,” and in need of being “blown up” (as Evola put it) or need­ing a “shock to the sys­tem.”

The Pol­i­tics of Bit­coin: Soft­ware as Right-Wing Extrem­ism by David Golum­bia; Uni­ver­si­ty of Min­neso­ta Press [SC]; pp. 73–75.

. . . . As objects of discourse, Bitcoin and the blockchain do a remarkable job of reinforcing the view that the entire global history of political thought and action needs to be jettisoned, or, even worse, that it has already been jettisoned through the introduction of any number of technologies. Thus, in the introduction to a bizarrely earnest and destructive volume called From Bitcoin to Burning Man and Beyond (Clippinger and Bollier 2014), the editors, one of whom is a research scientist at MIT, write, “Enlightenment ideals of democratic rule seem to have run their course. A continuous flow of scientific findings are undermining many foundational claims about human rationality and perfectibility while exponential technological changes and exploding global demographics overwhelm the capacity of democratic institutions to rule effectively, and ultimately, their very legitimacy.” Such abrupt dismissals of hundreds of years of thought, work, and lives follow directly from cyberlibertarian thought and extremist reinterpretations of political institutions: “What once required the authority of a central bank or a sovereign authority can now be achieved through open, distributed crypto-algorithms. National borders, traditional legal regimes, and human intervention are increasingly moot.” Like most ideological formations, these sentiments are highly resistant to being proven false by facts. . . .

. . . . Few atti­tudes typ­i­fy the para­dox­i­cal cyber­lib­er­tar­i­an mind-set of Bit­coin pro­mot­ers (and many oth­ers) more than do those of “San­juro,” the alias of the per­son who cre­at­ed a Bit­coin “assas­si­na­tion mar­ket” (Green­berg 2013). San­juro believes that by incen­tiviz­ing peo­ple to kill politi­cians, he will destroy “all gov­ern­ments, every­where.” This anar­chic apoc­a­lypse “will change the world for the bet­ter,” pro­duc­ing “a world with­out wars, drag­net Panop­ti­con-style sur­veil­lance, nuclear weapons, armies, repres­sion, mon­ey manip­u­la­tion, and lim­its to trade.” Only some­one so blink­ered by their ide­o­log­i­cal tun­nel vision could look at world his­to­ry and imag­ine that mur­der­ing the rep­re­sen­ta­tives of demo­c­ra­t­i­cal­ly elect­ed gov­ern­ments and thus putting the gov­ern­ments them­selves out of exis­tence would do any­thing but make every one of these prob­lems immea­sur­ably worse than they already are. Yet this, in the end, is the extreme rightist–anarcho-capitalist, win­ner-take-all, even neo-feudalist–political vision too many of those in the Bit­coin (along with oth­er cryp­tocur­ren­cy) and blockchain com­mu­ni­ties, what­ev­er they believe their polit­i­cal ori­en­ta­tion to be, are work­ing active­ly to bring about. . . .

3b. Note that the Cypherpunk’s Manifesto (published by the Electronic Frontier Foundation) and the 1996 “Declaration of the Independence of Cyberspace”–written by the libertarian activist, Grateful Dead lyricist and Electronic Frontier Foundation founder John Perry Barlow–decry governmental regulation of the digital system. (EFF is a leading “digital rights” and technology industry advocacy organization.)

The Pol­i­tics of Bit­coin: Soft­ware as Right-Wing Extrem­ism by David Golum­bia; Uni­ver­si­ty of Min­neso­ta Press [SC]; pp. 31–32.

. . . . Among the clearest targets of these movements (see both the Cypherpunk’s Manifesto, Hughes 1993; and the closely related Crypto-Anarchist Manifesto, May 1992) has always specifically been governmental oversight of financial (and other) transactions. No effort is made to distinguish between legitimate and illegitimate use of governmental power; rather, all governmental power is inherently taken to be illegitimate. Further, despite occasional rhetorical nods toward corporate abuses, just as in Murray Rothbard’s work, strictly speaking no mechanisms whatsoever are posited that actually might constrain corporate power. Combined with either an explicit commitment toward, or at best an extreme naivete about, the operation of concentrated capital, this political theory works to deprive the people of their only proven mechanism for that constraint. This is why so august an antigovernment thinker as Noam Chomsky (2015) can have declared that libertarian theories, despite surface appearances, promote “corporate tyranny, meaning tyranny by unaccountable private concentrations of power, the worst kind of tyranny you can imagine.” . . .

3c. The libertarian/fascist eth­ic of the dig­i­tal world was artic­u­lat­ed by John Per­ry Bar­low.

Note how the “free­dom” advo­cat­ed by John Per­ry Bar­low et al has played out: the Trump admin­is­tra­tion (imple­ment­ing the desires of cor­po­rate Amer­i­ca) has “dereg­u­lat­ed” the inter­net. All this in the name of “free­dom.”

The Pol­i­tics of Bit­coin: Soft­ware as Right-Wing Extrem­ism by David Golum­bia; Uni­ver­si­ty of Min­neso­ta Press [SC]; p. 3.

. . . . In its most basic and limited form, cyberlibertarianism is sometimes summarized as the principle that “governments should not regulate the internet” (Malcom 2013). This belief was articulated with particular force in the 1996 “Declaration of the Independence of Cyberspace” written by the libertarian activist, Grateful Dead lyricist and Electronic Frontier Foundation founder (EFF is a leading “digital rights” and technology industry advocacy organization) John Perry Barlow, which declared that “governments of our industrial world” are “not welcome” in and “have no sovereignty” over the digital system. . . .

4a. In FTR #854, we not­ed the curi­ous pro­fes­sion­al resume of John Per­ry Bar­low, con­tain­ing such dis­parate ele­ments as–lyricist for the Grate­ful Dead (“Far Out!”); Dick Cheney’s cam­paign man­ag­er (not so “Far Out!”); a vot­er for white supremacist/segregationist George Wal­lace in the 1968 Pres­i­den­tial cam­paign (very “Un-Far Out!”).

Bar­low intro­duced the Grate­ful Dead to Tim­o­thy Leary, who was inex­tri­ca­bly linked with the CIA. We dis­cussed this at length in AFA #28.

AFA 28: The CIA, the Military & Drugs, Pt. 5
The CIA & LSD
Part 5a 46:15 | Part 5b 45:52 | Part 5c 42:56 | Part 5d 45:11 | Part 5e 11:25
(Recorded April 26, 1987)

” . . . . Tim­o­thy Leary’s ear­ly research into LSD was sub­si­dized, to some extent, by the CIA. Lat­er, Leary’s LSD pros­e­ly­ti­za­tion was great­ly aid­ed by William Mel­lon Hitch­cock, a mem­ber of the pow­er­ful Mel­lon fam­i­ly. The financ­ing of the Mel­lon-Leary col­lab­o­ra­tion was effect­ed through the Cas­tle Bank, a Caribbean oper­a­tion that was deeply involved in the laun­der­ing of CIA drug mon­ey.

After mov­ing to the West Coast, Leary hooked up with a group of ex-surfers, the Broth­er­hood of Eter­nal Love. This group became the largest LSD syn­the­siz­ing and dis­trib­ut­ing orga­ni­za­tion in the world. Their “chief chemist” was a curi­ous indi­vid­ual named Ronald Hadley Stark. An enig­mat­ic, mul­ti-lin­gual and well-trav­eled indi­vid­ual, Stark worked for the CIA, and appears to have been with the agency when he was mak­ing the Broth­er­hood’s acid. The qual­i­ty of his prod­uct pro­ject­ed the Broth­er­hood of Eter­nal Love into its lead­er­ship role in the LSD trade. Stark also oper­at­ed in con­junc­tion with the Ital­ian intelligence/fascist milieu described in AFA #‘s 17–21.

The broad­cast under­scores the pos­si­bil­i­ty that LSD and oth­er hal­lu­cino­gens may have been dis­sem­i­nat­ed, in part, in order to dif­fuse the pro­gres­sive polit­i­cal activism of the 1960’s.

Pro­gram High­lights Include: CIA direc­tor Allen Dulles’ pro­mo­tion of psy­cho­log­i­cal research by the Agency; the work of CIA physi­cian Dr. Sid­ney Got­tlieb for the Agen­cy’s Tech­ni­cal Ser­vices Divi­sion; con­nec­tions between Stark and the kid­nap­ping and assas­si­na­tion of Ital­ian Prime Min­is­ter Aldo Moro; Stark’s mys­te­ri­ous death in prison while await­ing tri­al; Leary’s con­nec­tions to the milieu of the “left” CIA and the role those con­nec­tions appear to have played in Leary’s flight from incar­cer­a­tion; the CIA’s intense inter­est in (and involve­ment with) the Haight-Ash­bury scene of the 1960s. . . . .”

For our pur­pos­es, his most note­wor­thy pro­fes­sion­al under­tak­ing is his found­ing of the EFF–The Elec­tron­ic Fron­tier Foun­da­tion. A lead­ing osten­si­ble advo­cate for inter­net free­dom, the EFF has endorsed tech­nol­o­gy and embraced per­son­nel inex­tri­ca­bly linked with a CIA-derived milieu embod­ied in Radio Free Asi­a’s Open Tech­nol­o­gy Fund. (For those who are, under­stand­ably, sur­prised and/or skep­ti­cal, we dis­cussed this at length and in detail in FTR #‘s 891  and 895.)

Lis­ten­er Tiffany Sun­der­son con­tributed an arti­cle in the “Com­ments” sec­tion that brings to the fore some inter­est­ing ques­tions about Bar­low, the CIA and the very gen­e­sis of social media.

We offer Ms. Sunderson’s observations, stressing that Barlow’s foreshadowing of the communication functions inherent in social media and his presence at CIA headquarters (by invitation!) suggest that Barlow not only has strong ties to the CIA but may have been involved in the conceptual genesis that spawned CIA-connected entities such as Facebook:

“Fascinating article by John Perry Barlow, can’t believe I haven’t seen this before. From Forbes in 2002. Can’t accuse Barlow of hiding his intel ties, he’ll tell you all about it! To me, this is practically a historical document, as it hints at the thinking that inevitably led to Inqtel, Geofeedia, Palantir, Facebook, etc. Including whole article, but here are a few passages that jumped out at me.

http://www.forbes.com/asap/2002/1007/042_print.html

This part cracks me up: it’s “mys­ti­cal super­sti­tion” to imag­ine that wires leav­ing a build­ing are also wires ENTERING a build­ing? Seri­ous­ly? For a guy who nev­er shuts up about net­work­ing, he should get that there is noth­ing “mys­ti­cal” about such a notion. It’s exact­ly how attack­ers get in. If you are con­nect­ed to the inter­net, you are not tru­ly secure. Peri­od.

“All of their prim­i­tive net­works had an ‘air wall,’ or phys­i­cal sep­a­ra­tion, from the Inter­net. They admit­ted that it might be even more dan­ger­ous to secu­ri­ty to remain abstract­ed from the wealth of infor­ma­tion that had already assem­bled itself there, but they had an almost mys­ti­cal super­sti­tion that wires leav­ing the agency would also be wires enter­ing it, a ver­i­ta­ble super­high­way for invad­ing cyber­spooks. ”

Here, JPB brags about his con­nec­tions and who he brought back to CIA. I’ve always had spooky feel­ings about Cerf, Dyson, and Kapor. Don’t know Rutkows­ki. But the oth­er three are seri­ous play­ers, and Cerf and Kapor are heav­i­ly involved with EFF. You know, because the EFF is all about stand­ing up for the lit­tle guy.

“They told me they’d brought Steve Jobs in a few weeks before to indoc­tri­nate them in mod­ern infor­ma­tion man­age­ment. And they were delight­ed when I returned lat­er, bring­ing with me a pla­toon of Inter­net gurus, includ­ing Esther Dyson, Mitch Kapor, Tony Rutkows­ki, and Vint Cerf. They sealed us into an elec­tron­i­cal­ly impen­e­tra­ble room to dis­cuss the rad­i­cal pos­si­bil­i­ty that a good first step in lift­ing their black­out would be for the CIA to put up a Web site”

This next part SCREAMS of intel’s ties to the “social media explo­sion.” I think this pas­sage is what qual­i­fies Barlow’s arti­cle as a his­tor­i­cal doc of some val­ue.

“Let’s cre­ate a process of infor­ma­tion diges­tion in which inex­pen­sive data are gath­ered from large­ly open sources and con­densed, through an open process, into knowl­edge terse and insight­ful enough to inspire wis­dom in our lead­ers.

The enti­ty I envi­sion would be small, high­ly net­worked, and gen­er­al­ly vis­i­ble. It would be open to infor­ma­tion from all avail­able sources and would clas­si­fy only infor­ma­tion that arrived clas­si­fied. It would rely heav­i­ly on the Inter­net, pub­lic media, the aca­d­e­m­ic press, and an infor­mal world­wide net­work of volunteers–a kind of glob­al Neigh­bor­hood Watch–that would sub­mit on-the-ground reports.

It would use off-the-shelf tech­nol­o­gy, and use it less for gath­er­ing data than for col­lat­ing and com­mu­ni­cat­ing them. Being off-the-shelf, it could deploy tools while they were still state-of-the-art.

I imag­ine this enti­ty staffed ini­tial­ly with librar­i­ans, jour­nal­ists, lin­guists, sci­en­tists, tech­nol­o­gists, philoso­phers, soci­ol­o­gists, cul­tur­al his­to­ri­ans, the­olo­gians, econ­o­mists, philoso­phers, and artists‑a lot like the orig­i­nal CIA, the OSS, under “Wild Bill” Dono­van. Its bud­get would be under the direct author­i­ty of the Pres­i­dent, act­ing through the Nation­al Secu­ri­ty Advis­er. Con­gres­sion­al over­sight would reside in the com­mit­tees on sci­ence and tech­nol­o­gy (and not under the con­gres­sion­al Joint Com­mit­tee on Intel­li­gence). ”

http://www.forbes.com/asap/2002/1007/042_2.html

“Why Spy?” by John Per­ry Bar­low; Forbes; 10/07/02.

If the spooks can’t analyze their own data, why call it intelligence?

For more than a year now, there has been a deluge of stories and op-ed pieces about the failure of the American intelligence community to detect or prevent the September 11, 2001, massacre.

Near­ly all of these accounts have expressed aston­ish­ment at the appar­ent incom­pe­tence of America’s watch­dogs.

I’m aston­ished that anyone’s aston­ished.

The visu­al impair­ment of our mul­ti­tudi­nous spook­hous­es has long been the least secret of their secrets. Their short­com­ings go back 50 years, when they were still pre­sum­ably effi­cient but some­how failed to detect sev­er­al mil­lion Chi­nese mil­i­tary “vol­un­teers” head­ing south into Korea. The sur­prise attacks on the World Trade Cen­ter and the Pen­ta­gon were only the most recent over­sight dis­as­ters. And for ser­vice like this we are pay­ing between $30 bil­lion and $50 bil­lion a year. Talk about a faith-based ini­tia­tive.

After a decade of both fight­ing with and con­sult­ing to the intel­li­gence com­mu­ni­ty, I’ve con­clud­ed that the Amer­i­can intel­li­gence sys­tem is bro­ken beyond repair, self-pro­tec­tive beyond reform, and per­ma­nent­ly fix­at­ed on a world that no longer exists.

I was intro­duced to this world by a for­mer spy named Robert Steele, who called me in the fall of 1992 and asked me to speak at a Wash­ing­ton con­fer­ence that would be “attend­ed pri­mar­i­ly by intel­li­gence pro­fes­sion­als.” Steele seemed inter­est­ing, if unset­tling. A for­mer Marine intel­li­gence offi­cer, Steele moved to the CIA and served three over­seas tours in clan­des­tine intel­li­gence, at least one of them “in a com­bat envi­ron­ment” in Cen­tral Amer­i­ca.

After near­ly two decades of ser­vice in the shad­ows, Steele emerged with a lust for light and a belief in what he calls, in char­ac­ter­is­tic spook-speak, OSINT, or open source intel­li­gence. Open source intel­li­gence is assem­bled from what is pub­licly avail­able, in media, pub­lic doc­u­ments, the Net, wher­ev­er. It’s a giv­en that such materials–and the tech­no­log­i­cal tools for ana­lyz­ing them–are grow­ing expo­nen­tial­ly these days. But while OSINT may be a time­ly notion, it’s not pop­u­lar in a cul­ture where the phrase “infor­ma­tion is pow­er” means some­thing bru­tal­ly con­crete and where sources are “owned.”

At that time, intel­li­gence was awak­en­ing to the Inter­net, the ulti­mate open source. Steele’s con­fer­ence was attend­ed by about 600 mem­bers of the Amer­i­can and Euro­pean intel­li­gence estab­lish­ment, includ­ing many of its senior lead­ers. For some­one whose major claim to fame was hip­pie song-mon­ger­ing, address­ing such an audi­ence made me feel as if I’d sud­den­ly become a char­ac­ter in a Thomas Pyn­chon nov­el.

Nonethe­less, I sal­lied forth, con­fi­dent­ly telling the gray throng that pow­er lay not in con­ceal­ing infor­ma­tion but in dis­trib­ut­ing it, that the Inter­net would endow small groups of zealots with the capac­i­ty to wage cred­i­ble assaults on nation-states, that young hack­ers could eas­i­ly run cir­cles around old spies.

I didn’t expect a warm recep­tion, but it wasn’t as if I was inter­view­ing for a job.

Or so I thought. When I came off­stage, a group of calm, alert men await­ed. They seemed eager, in their undemon­stra­tive way, to pur­sue these issues fur­ther. Among them was Paul Wall­ner, the CIA’s open source coor­di­na­tor. Wall­ner want­ed to know if I would be will­ing to drop by, have a look around, and dis­cuss my ideas with a few folks.

A few weeks lat­er, in ear­ly 1993, I passed through the gates of the CIA head­quar­ters in Lan­g­ley, Vir­ginia, and entered a chilled silence, a zone of par­a­lyt­ic para­noia and obses­sive secre­cy, and a tech­no­log­i­cal time cap­sule straight out of the ear­ly ’60s. The Cold War was offi­cial­ly over, but it seemed the news had yet to pen­e­trate where I now found myself.

If, in 1993, you want­ed to see the Sovi­et Union still alive and well, you’d go to Lan­g­ley, where it was pre­served in the meth­ods, assump­tions, and archi­tec­ture of the CIA.

Where I expect­ed to see com­put­ers, there were tele­type machines. At the nerve core of The Com­pa­ny, five ana­lysts sat around a large, wood­en lazy Susan. Beside each of them was a tele­type, chat­ter­ing in upper­case. When­ev­er a mes­sage came in to, say, the East­ern Europe ana­lyst that might be of inter­est to the one watch­ing events in Latin Amer­i­ca, he’d rip it out of the machine, put it on the turntable, and rotate it to the appro­pri­ate quad­rant.

The most dis­tress­ing dis­cov­ery of my first expe­di­tion was the near­ly uni­ver­sal frus­tra­tion of employ­ees at the intran­si­gence of the beast they inhab­it­ed. They felt forced into incom­pe­tence by infor­ma­tion hoard­ing and non­com­mu­ni­ca­tion, both with­in the CIA and with oth­er relat­ed agen­cies. They hat­ed their prim­i­tive tech­nol­o­gy. They felt unap­pre­ci­at­ed, oppressed, demor­al­ized. “Some­how, over the last 35 years, there was an infor­ma­tion rev­o­lu­tion,” one of them said bleak­ly, “and we missed it.”

They were cut off. But at least they were try­ing. They told me they’d brought Steve Jobs in a few weeks before to indoc­tri­nate them in mod­ern infor­ma­tion man­age­ment. And they were delight­ed when I returned lat­er, bring­ing with me a pla­toon of Inter­net gurus, includ­ing Esther Dyson, Mitch Kapor, Tony Rutkows­ki, and Vint Cerf. They sealed us into an elec­tron­i­cal­ly impen­e­tra­ble room to dis­cuss the rad­i­cal pos­si­bil­i­ty that a good first step in lift­ing their black­out would be for the CIA to put up a Web site.

They didn’t see how this would be pos­si­ble with­out com­pro­mis­ing their secu­ri­ty. All of their prim­i­tive net­works had an “air wall,” or phys­i­cal sep­a­ra­tion, from the Inter­net. They admit­ted that it might be even more dan­ger­ous to secu­ri­ty to remain abstract­ed from the wealth of infor­ma­tion that had already assem­bled itself there, but they had an almost mys­ti­cal super­sti­tion that wires leav­ing the agency would also be wires enter­ing it, a ver­i­ta­ble super­high­way for invad­ing cyber­spooks.

We explained to them how easy it would be to have two net­works, one con­nect­ed to the Inter­net for gath­er­ing infor­ma­tion from open sources and a sep­a­rate intranet, one that would remain ded­i­cat­ed to clas­si­fied data. We told them that infor­ma­tion exchange was a barter sys­tem, and that to receive, one must also be will­ing to share. This was an alien notion to them. They weren’t even will­ing to share infor­ma­tion among them­selves, much less the world.

In the end, they acqui­esced. They put up a Web site, and I start­ed to get email from peo­ple @cia.gov, indi­cat­ing that the Inter­net had made it to Lan­g­ley. But the cul­tur­al ter­ror of releas­ing any­thing of val­ue remains. Go to their Web site today and you will find a lot of press releas­es, as well as descrip­tions of maps and pub­li­ca­tions that you can acquire only by buy­ing them in paper. The unof­fi­cial al Qae­da Web site, http://www.almuhajiroun.com, is con­sid­er­ably more reveal­ing.

This dog­ma of secre­cy is prob­a­bly the most per­sis­tent­ly dam­ag­ing fall­out from “the Sovi­et fac­tor” at the CIA and else­where in the intel­li­gence “com­mu­ni­ty.” Our spooks stared so long at what Churchill called “a mys­tery sur­round­ed by a rid­dle wrapped in an enig­ma,” they became one them­selves. They con­tin­ue to be one, despite the evap­o­ra­tion of their old adver­sary, as well as a long series of efforts by elect­ed author­i­ties to loosen the white-knuck­led grip on their secrets.

The most recent of these was the 1997 Com­mis­sion on Pro­tect­ing and Reduc­ing Gov­ern­ment Secre­cy, led by Sen­a­tor Patrick Moyni­han. The Moyni­han Com­mis­sion released a with­er­ing report charg­ing intel­li­gence agen­cies with exces­sive clas­si­fi­ca­tion and cit­ing a long list of adverse con­se­quences rang­ing from pub­lic dis­trust to con­cealed (and there­fore irre­me­di­a­ble) orga­ni­za­tion­al fail­ures.

That same year, Moyni­han pro­posed a bill called the Gov­ern­ment Secre­cy Reform Act. Cospon­sored by con­ser­v­a­tive Repub­li­cans Jesse Helms and Trent Lott, among oth­ers, this leg­is­la­tion was hard­ly out to gut Amer­i­can intel­li­gence. But the spooks fought back effec­tive­ly through the Clin­ton Admin­is­tra­tion and so weak­ened the bill that one of its cospon­sors, Con­gress­man Lee Hamil­ton (D‑Ind.), con­clud­ed that it would be bet­ter not to pass what remained.

A few of its recommendations eventually were wrapped into the Intelligence Authorization Act of 2000. But of these, the only one with any operational force–a requirement that a public-interest declassification board be established to advise the Administration in these matters–has never been implemented. Thanks to the vigorous interventions of the Clinton White House, the cult of secrecy remained unmolested.

One might be sur­prised to learn that Clin­to­ni­ans were so pro-secre­cy. In fact, they weren’t. But they lacked the force to dom­i­nate their wily sub­or­di­nates. Indeed, in 1994, one high­ly placed White House staffer told me that their incom­pre­hen­si­ble cryp­to poli­cies arose from being “afraid of the NSA.”

In May 2000, I began to under­stand what they were up against. I was invit­ed to speak to the Intel­li­gence Com­mu­ni­ty Col­lab­o­ra­tion Con­fer­ence (a title that con­tained at least four ironies). The oth­er pri­ma­ry speak­er was Air Force Lt. Gen­er­al Mike Hay­den, the new­ly appoint­ed direc­tor of the NSA. He said he felt pow­er­less, though he was deter­mined not to remain that way.

“I had been on the job for a while before I real­ized that I have no staff,” he com­plained. “Every­thing the agency does had been pushed down into the components…it’s all being man­aged sev­er­al lev­els below me.” In oth­er words, the NSA had devel­oped an immune sys­tem against exter­nal inter­ven­tion.

Hay­den rec­og­nized how exces­sive secre­cy had dam­aged intel­li­gence, and he was deter­mined to fix it. “We were America’s infor­ma­tion age enter­prise in the indus­tri­al age. Now we have to do that same task in the infor­ma­tion age, and we find our­selves less adept,” he said.

He also vowed to dimin­ish the CIA’s com­pet­i­tive­ness with oth­er agen­cies. (This is a prob­lem that remains severe, even though it was first iden­ti­fied by the Hoover Com­mis­sion in 1949.) Hay­den decried “the stovepipe men­tal­i­ty” where infor­ma­tion is passed ver­ti­cal­ly through many bureau­crat­ic lay­ers but rarely pass­es hor­i­zon­tal­ly. “We are rid­dled with water­tight infor­ma­tion com­part­ments,” he said. “At the mas­sive agency lev­el, if I had to ask, ‘Do we need blue giz­mos?’ the only per­son I could ask was the per­son whose job secu­ri­ty depend­ed on there being more blue giz­mos.”

Like the CIA I encoun­tered, Hayden’s NSA was also a lot like the Sovi­et Union; secre­tive unto itself, sullen, and gross­ly inef­fi­cient. The NSA was also, by his account, as tech­no­log­i­cal­ly mal­adroit as its rival in Lan­g­ley. Hay­den won­dered, for exam­ple, why the direc­tor of what was sup­pos­ed­ly one of the most sophis­ti­cat­ed agen­cies in the world would have four phones on his desk. Direct elec­tron­ic con­tact between him and the con­sumers of his information–namely the Pres­i­dent and Nation­al Secu­ri­ty staff–was vir­tu­al­ly nil. There were, he said, thou­sands of unlinked, inter­nal­ly gen­er­at­ed oper­at­ing sys­tems inside the NSA, inca­pable of exchang­ing infor­ma­tion with one anoth­er.

Hay­den rec­og­nized the impor­tance of get­ting over the Cold War. “Our tar­gets are no longer con­trolled by the tech­no­log­i­cal lim­i­ta­tions of the Sovi­et Union, a slow, prim­i­tive, under­fund­ed foe. Now [our ene­mies] have access to state-of-the-art….In 40 years the world went from 5,000 stand-alone com­put­ers, many of which we owned, to 420 mil­lion com­put­ers, many of which are bet­ter than ours.”

But there wasn’t much evi­dence that it was going to hap­pen any­time soon. While Hay­den spoke, the 200 or so high-rank­ing intel­li­gence offi­cials in the audi­ence sat with their arms fold­ed defen­sive­ly across their chests. When I got up to essen­tial­ly sing the same song in a dif­fer­ent key, I asked them, as a favor, not to assume that pos­ture while I was speak­ing. I then watched a Strangelov­ian spec­ta­cle when, dur­ing my talk, many arms crept up to cross invol­un­tar­i­ly and were thrust back down to their sides by force of embar­rassed will.

That said, I draw a clear dis­tinc­tion between the insti­tu­tions of intel­li­gence and the folks who staff them.

All of the actu­al peo­ple I’ve encoun­tered in intel­li­gence are, in fact, intel­li­gent. They are ded­i­cat­ed and thought­ful. How then, can the insti­tu­tion­al sum add up to so much less than the parts? Because anoth­er, much larg­er, com­bi­na­tion of fac­tors is also at work: bureau­cra­cy and secre­cy.

Bureaucracies naturally use secrecy to immunize themselves against hostile investigation, from without or within. This tendency becomes an autoimmune disorder when the bureaucracy is actually designed to be secretive and is wholly focused on other, similar institutions. The counterproductive information hoarding, the technological backwardness, the unaccountability, the moral laxity, the suspicion of public information, the arrogance, the xenophobia (and resulting lack of cultural and linguistic sophistication), the risk aversion, the recruiting homogeneity, the inward-directedness, the preference for data acquisition over information dissemination, and the uselessness of what is disseminated: all are the natural, and now fully mature, whelps of bureaucracy and secrecy.

Not sur­pris­ing­ly, peo­ple who work there believe that job secu­ri­ty and pow­er are defined by the amount of infor­ma­tion one can stop from mov­ing. You become more pow­er­ful based on your capac­i­ty to know things that no one else does. The same applies, in con­cen­tric cir­cles of self-pro­tec­tion, to one’s team, depart­ment, sec­tion, and agency. How can data be digest­ed into use­ful infor­ma­tion in a sys­tem like that?

How can we expect the CIA and FBI to share infor­ma­tion with each oth­er when they’re dis­in­clined to share it with­in their own orga­ni­za­tions? The result­ing dif­fer­ences cut deep. One of the rev­e­la­tions of the House Report on Coun­tert­er­ror­ism Intel­li­gence Capa­bil­i­ties and Per­for­mance Pri­or to Sep­tem­ber 11 was that none of the respon­si­ble agen­cies even shared the same def­i­n­i­tion of ter­ror­ism. It’s hard to find some­thing when you can’t agree on what you’re look­ing for.

The information they do divulge is also flawed in a variety of ways. The "consumers" (as they generally call policymakers) are unable to determine the reliability of what they're getting because the sources are concealed. Much of what they get is too undigested and voluminous to be useful to someone already suffering from information overload. And it comes with strings attached. As one general put it, "I don't want information that requires three security officers and a safe to move it around the battlefield."

As a result, the consumers are increasingly inclined to get their information from public sources. Secretary of State Colin Powell says that he prefers "the Early Bird," a compendium of daily newspaper stories, to the President's Daily Brief (the CIA's ultimate product).

The same is appar­ent­ly true with­in the agen­cies them­selves. Although their fin­ished prod­ucts rarely make explic­it use of what’s been gleaned from the media, ana­lysts rou­tine­ly turn there for infor­ma­tion. On the day I first vis­it­ed the CIA’s “mis­sion con­trol” room, the ana­lysts around the lazy Susan often turned their atten­tion to the giant video mon­i­tors over­head. Four of these were show­ing the same CNN feed.

Secre­cy also breeds tech­no­log­i­cal stag­na­tion. In the ear­ly ’90s, I was speak­ing to per­son­nel from the Depart­ment of Ener­gy nuclear labs about com­put­er secu­ri­ty. I told them I thought their empha­sis on clas­si­fi­ca­tion might be unnec­es­sary because mak­ing a weapon was less a mat­ter of infor­ma­tion than of indus­tri­al capac­i­ty. The recipe for a nuclear bomb has been gen­er­al­ly avail­able since 1978, when John Aris­to­tle Phillips pub­lished plans in The Pro­gres­sive. What’s not so read­i­ly avail­able is the plu­to­ni­um and tri­tium, which require an entire nation to pro­duce. Giv­en that, I couldn’t see why they were being so secre­tive.

The next speak­er was Dr. Edward Teller, who sur­prised me by not only agree­ing but point­ing out both the role of open dis­course in sci­en­tif­ic progress, as well as the futil­i­ty of most infor­ma­tion secu­ri­ty. “If we made an impor­tant nuclear dis­cov­ery, the Rus­sians were usu­al­ly able to get it with­in a year,” he said. He went on: “After World War II we were ahead of the Sovi­ets in nuclear tech­nol­o­gy and about even with them in elec­tron­ics. We main­tained a closed sys­tem for nuclear design while design­ing elec­tron­ics in the open. Their sys­tems were closed in both regards. After 40 years, we are at par­i­ty in nuclear sci­ence, where­as, thanks to our open sys­tem in the study of elec­tron­ics, we are decades ahead of the Rus­sians.”

There is also the sticky matter of budgetary accountability. The Director of Central Intelligence (DCI) is supposed to be in charge of all the functions of intelligence. In fact, he has control over less than 15% of the total budget, directing only the CIA. Several of the different intelligence-reform commissions that have been convened since 1949 have called for consolidating budgetary authority under the DCI, but it has never happened.

With such hazy oversight, the intelligence agencies naturally become wasteful and redundant. They spend their money on toys like satellite-imaging systems and big-iron computers (often obsolete by the time they're deployed) rather than developing the organizational capacity for analyzing all those snapshots from space, or training analysts in languages other than English and Russian, or infiltrating potentially dangerous groups, or investing in the resources necessary for good HUMINT (as they poetically call information gathered by humans operating on the ground).

In fact, few­er than 10% of the mil­lions of satel­lite pho­tographs tak­en have ever been seen by any­body. Only one-third of the employ­ees at the CIA speak any lan­guage besides Eng­lish. Even if they do, it’s gen­er­al­ly either Russ­ian or some com­mon Euro­pean lan­guage. Of what use are the NSA’s humon­gous code-break­ing com­put­ers if no one can read the plain text extract­ed from the encrypt­ed stream?

Anoth­er sys­temic deficit of intel­li­gence lies, inter­est­ing­ly enough, in the area of good old-fash­ioned spy­ing. Although its inten­tions were noble, the ’70s Church Com­mit­tee had a dev­as­tat­ing effect on this nec­es­sary part of intel­li­gence work. It caught the CIA in a num­ber of dubi­ous covert oper­a­tions and took the guilty to task.

But rather than lis­ten to the committee’s essen­tial mes­sage that they should renounce the sorts of nefar­i­ous deeds the pub­lic would repu­di­ate and lim­it secre­cy to essen­tial secu­ri­ty con­sid­er­a­tions, the lead­er­ship respond­ed by pulling most of its agents out of the field, aside from a few hired trai­tors.

Despite all the efforts aimed at sharp­en­ing their tools, intel­li­gence offi­cials have only become pro­gres­sive­ly duller and more expen­sive. We enter an era of asym­met­ri­cal threats, dis­trib­uted over the entire globe, against which our most effec­tive weapon is under­stand­ing. Yet we are still pro­tect­ed by agen­cies geared to gaz­ing on a sin­gle, cen­tral­ized threat, using meth­ods that opti­mize obfus­ca­tion. What is to be done?

We might begin by ask­ing what intel­li­gence should do. The answer is sim­ple: Intel­li­gence exists to pro­vide deci­sion mak­ers with an accu­rate, com­pre­hen­sive, and unbi­ased under­stand­ing of what’s going on in the world. In oth­er words, intel­li­gence defines real­i­ty for those whose actions could alter it. “Giv­en our basic mis­sion,” one ana­lyst said weari­ly, “we’d do bet­ter to study epis­te­mol­o­gy than mis­sile emplace­ments.”

If we are seri­ous about defin­ing real­i­ty, we might look at the sys­tem that defines real­i­ty for most of us: sci­en­tif­ic dis­course. The sci­en­tif­ic method is straight­for­ward. The­o­ries are open­ly advanced for exam­i­na­tion and tri­al by oth­ers in the field. Sci­en­tists toil to cre­ate sys­tems to make all the infor­ma­tion avail­able to one imme­di­ate­ly avail­able to all. They don’t like secrets. They base their rep­u­ta­tions on their abil­i­ty to dis­trib­ute their con­clu­sions rather than the abil­i­ty to con­ceal them. They rec­og­nize that “truth” is based on the widest pos­si­ble con­sen­sus of per­cep­tions. They are com­mit­ted free mar­ke­teers in the com­merce of thought. This method has worked fab­u­lous­ly well for 500 years. It might be worth a try in the field of intel­li­gence.

Intel­li­gence has been focused on gath­er­ing infor­ma­tion from expen­sive closed sources, such as satel­lites and clan­des­tine agents. Let’s attempt to turn that propo­si­tion around. Let’s cre­ate a process of infor­ma­tion diges­tion in which inex­pen­sive data are gath­ered from large­ly open sources and con­densed, through an open process, into knowl­edge terse and insight­ful enough to inspire wis­dom in our lead­ers.

The enti­ty I envi­sion would be small, high­ly net­worked, and gen­er­al­ly vis­i­ble. It would be open to infor­ma­tion from all avail­able sources and would clas­si­fy only infor­ma­tion that arrived clas­si­fied. It would rely heav­i­ly on the Inter­net, pub­lic media, the aca­d­e­m­ic press, and an infor­mal world­wide net­work of volunteers–a kind of glob­al Neigh­bor­hood Watch–that would sub­mit on-the-ground reports.

It would use off-the-shelf tech­nol­o­gy, and use it less for gath­er­ing data than for col­lat­ing and com­mu­ni­cat­ing them. Being off-the-shelf, it could deploy tools while they were still state-of-the-art.

I imagine this entity staffed initially with librarians, journalists, linguists, scientists, technologists, philosophers, sociologists, cultural historians, theologians, economists, and artists: a lot like the original CIA, the OSS, under "Wild Bill" Donovan. Its budget would be under the direct authority of the President, acting through the National Security Adviser. Congressional oversight would reside in the committees on science and technology (and not under the congressional Joint Committee on Intelligence).

There are, of course, prob­lems with this pro­pos­al. First, it does not address the press­ing need to reestab­lish clan­des­tine human intel­li­gence. Per­haps this new Open Intel­li­gence Office (OIO) could also work close­ly with a Clan­des­tine Intel­li­gence Bureau, also sep­a­rate from the tra­di­tion­al agen­cies, to direct infil­tra­tors and moles who would report their obser­va­tions to the OIO through a tech­no­log­i­cal mem­brane that would strip their iden­ti­ties from their find­ings. The oper­a­tives would be legal­ly restrict­ed to gath­er­ing infor­ma­tion, with harsh penal­ties attached to any engage­ment in covert oper­a­tions.

The oth­er prob­lem is the “Sat­urn” dilem­ma. Once this new enti­ty begins to demon­strate its effec­tive­ness in pro­vid­ing insight to pol­i­cy­mak­ers that is con­cise, time­ly, and accu­rate (as I believe it would), almost cer­tain­ly tra­di­tion­al agen­cies would try to haul it back into the moth­er ship and break it (as has hap­pened to the Sat­urn divi­sion at Gen­er­al Motors). I don’t know how to deal with that one. It’s the nature of bureau­cra­cies to crush com­pe­ti­tion. No one at the CIA would be hap­py to hear that the only thing the Pres­i­dent and cab­i­net read every morn­ing is the OIO report.

But I think we can deal with that prob­lem when we’re lucky enough to have it. Know­ing that it’s like­ly to occur may be suf­fi­cient. A more imme­di­ate prob­lem would be keep­ing exist­ing agen­cies from abort­ing the OIO as soon as some­one with the pow­er to cre­ate it start­ed think­ing it might be a good idea. And, of course, there’s also the unlike­li­hood that any­one who thinks that the Depart­ment of Home­land Secu­ri­ty is a good idea would ever enter­tain such a pos­si­bil­i­ty.

Right now, we have to do some­thing, and prefer­ably some­thing use­ful. The U.S. has just tak­en its worst hit from the out­side since 1941. Our exist­ing sys­tems for under­stand­ing the world are designed to under­stand a world that no longer exists. It’s time to try some­thing that’s the right kind of crazy. It’s time to end the more tra­di­tion­al insan­i­ty of end­less­ly repeat­ing the same futile efforts.

John Per­ry Bar­low is cofounder of the Elec­tron­ic Fron­tier Foun­da­tion.
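One way to picture the "technological membrane" Barlow proposes for his hypothetical Clandestine Intelligence Bureau is as a sanitizing relay: it forwards an operative's observations while discarding anything that identifies the observer. The sketch below is purely illustrative and uses hypothetical field names; it shows the shape of the idea, not any real system.

```python
import json
import uuid

# Fields that could identify the operative; everything else passes through.
# These field names are hypothetical, chosen only for illustration.
IDENTITY_FIELDS = {"name", "employee_id", "handler", "operative_location"}

def sanitize(report: dict) -> dict:
    """Strip identifying fields and attach an unlinkable one-time token."""
    cleaned = {k: v for k, v in report.items() if k not in IDENTITY_FIELDS}
    # A random token lets the receiving office reference the report
    # without any way of tracing it back to the person who filed it.
    cleaned["source_token"] = uuid.uuid4().hex
    return cleaned

raw_report = {
    "name": "J. Doe",
    "employee_id": "CIB-1138",
    "handler": "station-9",
    "observation": "unusual cargo activity at the northern depot",
}
print(json.dumps(sanitize(raw_report), indent=2))
```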

4b. We note–again–that Wik­iLeaks is, and always was, an obvi­ous­ly fascist/Nazi insti­tu­tion. (In FTR #‘s 724, 725, 732, 745, 755 and 917 we have detailed the fas­cist and far right-wing ide­ol­o­gy, asso­ci­a­tions and pol­i­tics of Julian Assange and Wik­iLeaks.)

“Inside the Para­noid, Strange World of Julian Assange” by James Ball; Buz­zFeed; 10/23/2016.

. . . . Spend­ing those few months at such close prox­im­i­ty to Assange and his con­fi­dants, and expe­ri­enc­ing first-hand the pres­sures exert­ed on those there, have giv­en me a par­tic­u­lar insight into how Wik­iLeaks has become what it is today.

To an out­sider, the Wik­iLeaks of 2016 looks total­ly unre­lat­ed to the Wik­iLeaks of 2010. . . .

Now it is the dar­ling of the alt-right, reveal­ing hacked emails seem­ing­ly to influ­ence a pres­i­den­tial con­test, claim­ing the US elec­tion is “rigged”, and descend­ing into con­spir­a­cy. Just this week on Twit­ter, it described the deaths by nat­ur­al caus­es of two of its sup­port­ers as a “bloody year for Wik­iLeaks”, and warned of media out­lets “con­trolled by” mem­bers of the Roth­schild fam­i­ly – a com­mon anti-Semit­ic trope. . .

5a. In FTR #951, we observed that Richard B. Spencer, one of Trump’s Nazi back­ers, has begun a web­site with Swedish Alt-Righter Daniel Friberg, part of the Swedish fas­cist milieu to which Carl Lund­strom belongs. In FTR #732 (among oth­er pro­grams), we not­ed that it was Lund­strom who financed the Pirate Bay web­site, on which Wik­iLeaks held forth for quite some time. In FTR #745, we doc­u­ment­ed that top Assange aide and Holo­caust-denier Joran Jer­mas (aka “Israel Shamir”) arranged the Lundstrom/WikiLeaks liai­son. (Jer­mas han­dles Wik­iLeaks Russ­ian oper­a­tions, a point of inter­est in the wake of the 2016 cam­paign.)

It is a good bet that Lundstrom/Pirate Bay/WikiLeaks et al were data min­ing the many peo­ple who vis­it­ed the Wik­iLeaks site.

Might Lundstrom/Jermas/Assange et al have shared the volu­mi­nous data they may well have mined with Mercer/Cambridge Analytica/Bannon’s Naz­i­fied AI?

"Richard Spencer and His Alt-Right Buddies Launch a New Website" by Osita Nwanevu; Slate; 1/17/2017.

On Mon­day, Richard Spencer, New Jer­sey Insti­tute of Tech­nol­o­gy lec­tur­er Jason Jor­jani, and Swedish New Right fig­ure Daniel Friberg launched altright.com, a site aimed at bring­ing togeth­er “the best writ­ers and ana­lysts from Alt Right, in North Amer­i­ca, Europe, and around the world.” . . .

. . . . As of now, most of the site’s con­tent is recy­cled mate­r­i­al from Friberg’s Ark­tos pub­lish­ing house, Spencer’s oth­er pub­li­ca­tion, Radix Jour­nal, the alt-right online media net­work Red Ice, and Occi­den­tal Dis­sent, a white nation­al­ist blog run by altright.com’s news edi­tor Hunter Wal­lace. . . .

. . . . Still, Spencer's intellectualism does little to hide the centrality of bigotry to his own worldview and the views of those he publishes. His previous site, Alternative Right, once ran an essay called "Is Black Genocide Right?" "Instead of asking how we can make reparations for slavery, colonialism, and Apartheid or how we can equalize academic scores and incomes," Colin Liddell wrote, "we should instead be asking questions like, 'Does human civilization actually need the Black race?' 'Is Black genocide right?' and, if it is, 'What would be the best and easiest way to dispose of them?'" It remains to be seen whether altright.com will employ similarly candid writers. . . .

5b. Pirate Bay sug­ar dad­dy Lund­strom has dis­cussed his polit­i­cal sym­pa­thies. [The excerpt below is from Google trans­la­tions. The Swedish sen­tence is fol­lowed by the Eng­lish trans­la­tion.] Note that he appears on the user/subscriber list for Nordic Pub­lish­ers, the Nazi pub­lish­ing out­fit that han­dles the efforts pro­duced by one of Jer­mas’s [aka “Shamir’s”] pub­lish­ers.

“The Goal: Take over all Pira­cy” by Peter Karls­son; realtid.se; 3/10/2006.

. . . Lundström har inte gjort någon hemlighet av sina sympatier för främlingsfientliga grupper, och förra året fanns hans namn med på kundregistret hos det nazistiska bokförlaget Nordiska Förlaget. Lundström has made no secret of his sympathies for xenophobic groups, and last year his name appeared in the customer register of the Nazi book publisher Nordiska Förlaget.

– Jag stöder dem genom att köpa böcker och musik. – I support them by buying books and music. Ni i media vill bara sprida missaktning om olika personer. You in the media just want to spread contempt for various people. Ni i media är fyllda av hat till Pirate Bay, avslutar en mycket upprörd Carl Lundström. You in the media are full of hatred for The Pirate Bay, concludes a very upset Carl Lundström.

Nordiska Förlaget säljer vit makt musik och böcker som hyllar rasistiska våldshandlingar. Nordiska Förlaget sells white power music and books that celebrate racist acts of violence. Förlaget stöder nazisternas demonstration i Salem och bjöd in Ku Klux Klan ledaren till en föredragturné i Sverige. The publisher supports the Nazi demonstration in Salem and invited the Ku Klux Klan leader [David Duke] on a lecture tour of Sweden. . . .

6c. Expo–found­ed by the late Stieg Larsson–revealed that Friberg’s Nordic Pub­lish­ers has mor­phed into Ark­tos, one of the out­fits asso­ci­at­ed with Spencer, et al.

"Right Wing Public Education" by Maria-Pia Cabero [Google Translation]; Expo; January of 2014.

. . . . When NF [Nordiska Förlaget–D.E.] was discontinued in 2010, the publisher Arktos was founded by essentially the same people. Arktos publishes New Right-inspired literature, and its CEO Daniel Friberg, who was a driving force in NF, has played a key role in the establishment of its ideas. . . .

At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled "Dark Days: AI and the Rise of Fascism," a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, we could make those AIs a fascist's dream: incredible power over the lives of others with minimal accountability: " . . . . 'This is a fascist's dream,' she said. 'Power without accountability.' . . . ."

We reit­er­ate, in clos­ing, that ” . . . . Palan­tir is build­ing an intel­li­gence sys­tem to assist Don­ald Trump in deport­ing immi­grants. . . .”

In FTR #757 we not­ed that Palan­tir is a firm dom­i­nat­ed by Peter Thiel, a main backer of Don­ald Trump.

“Arti­fi­cial Intel­li­gence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017.

Microsoft’s Kate Craw­ford tells SXSW that soci­ety must pre­pare for author­i­tar­i­an move­ments to test the ‘pow­er with­out account­abil­i­ty’ of AI

As arti­fi­cial intel­li­gence becomes more pow­er­ful, peo­ple need to make sure it’s not used by author­i­tar­i­an regimes to cen­tral­ize pow­er and tar­get cer­tain pop­u­la­tions, Microsoft Research’s Kate Craw­ford warned on Sun­day.

In her SXSW ses­sion, titled Dark Days: AI and the Rise of Fas­cism, Craw­ford, who stud­ies the social impact of machine learn­ing and large-scale data sys­tems, explained ways that auto­mat­ed sys­tems and their encod­ed bias­es can be mis­used, par­tic­u­lar­ly when they fall into the wrong hands.

“Just as we are see­ing a step func­tion increase in the spread of AI, some­thing else is hap­pen­ing: the rise of ultra-nation­al­ism, rightwing author­i­tar­i­an­ism and fas­cism,” she said.

All of these move­ments have shared char­ac­ter­is­tics, includ­ing the desire to cen­tral­ize pow­er, track pop­u­la­tions, demo­nize out­siders and claim author­i­ty and neu­tral­i­ty with­out being account­able. Machine intel­li­gence can be a pow­er­ful part of the pow­er play­book, she said.

One of the key prob­lems with arti­fi­cial intel­li­gence is that it is often invis­i­bly cod­ed with human bias­es. She described a con­tro­ver­sial piece of research from Shang­hai Jiao Tong Uni­ver­si­ty in Chi­na, where authors claimed to have devel­oped a sys­tem that could pre­dict crim­i­nal­i­ty based on someone’s facial fea­tures. The machine was trained on Chi­nese gov­ern­ment ID pho­tos, ana­lyz­ing the faces of crim­i­nals and non-crim­i­nals to iden­ti­fy pre­dic­tive fea­tures. The researchers claimed it was free from bias.

“We should always be sus­pi­cious when machine learn­ing sys­tems are described as free from bias if it’s been trained on human-gen­er­at­ed data,” Craw­ford said. “Our bias­es are built into that train­ing data.”

In the Chi­nese research it turned out that the faces of crim­i­nals were more unusu­al than those of law-abid­ing cit­i­zens. “Peo­ple who had dis­sim­i­lar faces were more like­ly to be seen as untrust­wor­thy by police and judges. That’s encod­ing bias,” Craw­ford said. “This would be a ter­ri­fy­ing sys­tem for an auto­crat to get his hand on.”

Craw­ford then out­lined the “nasty his­to­ry” of peo­ple using facial fea­tures to “jus­ti­fy the unjus­ti­fi­able”. The prin­ci­ples of phrenol­o­gy, a pseu­do­science that devel­oped across Europe and the US in the 19th cen­tu­ry, were used as part of the jus­ti­fi­ca­tion of both slav­ery and the Nazi per­se­cu­tion of Jews.

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people's personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are "terrorists", while white-looking women with trendy haircuts are "brand promoters".

Anoth­er area where AI can be mis­used is in build­ing reg­istries, which can then be used to tar­get cer­tain pop­u­la­tion groups. Craw­ford not­ed his­tor­i­cal cas­es of reg­istry abuse, includ­ing IBM’s role in enabling Nazi Ger­many to track Jew­ish, Roma and oth­er eth­nic groups with the Hol­lerith Machine, and the Book of Life used in South Africa dur­ing apartheid. [We note in pass­ing that Robert Mer­cer, who devel­oped the core pro­grams used by Cam­bridge Ana­lyt­i­ca did so while work­ing for IBM. We dis­cussed the pro­found rela­tion­ship between IBM and the Third Reich in FTR #279–D.E.]

Don­ald Trump has float­ed the idea of cre­at­ing a Mus­lim reg­istry. “We already have that. Face­book has become the default Mus­lim reg­istry of the world,” Craw­ford said, men­tion­ing research from Cam­bridge Uni­ver­si­ty that showed it is pos­si­ble to pre­dict people’s reli­gious beliefs based on what they “like” on the social net­work. Chris­tians and Mus­lims were cor­rect­ly clas­si­fied in 82% of cas­es, and sim­i­lar results were achieved for Democ­rats and Repub­li­cans (85%). That study was con­clud­ed in 2013, since when AI has made huge leaps.

Craw­ford was con­cerned about the poten­tial use of AI in pre­dic­tive polic­ing sys­tems, which already gath­er the kind of data nec­es­sary to train an AI sys­tem. Such sys­tems are flawed, as shown by a Rand Cor­po­ra­tion study of Chicago’s pro­gram. The pre­dic­tive polic­ing did not reduce crime, but did increase harass­ment of peo­ple in “hotspot” areas. Ear­li­er this year the jus­tice depart­ment con­clud­ed that Chicago’s police had for years reg­u­lar­ly used “unlaw­ful force”, and that black and His­pan­ic neigh­bor­hoods were most affect­ed.

Anoth­er wor­ry relat­ed to the manip­u­la­tion of polit­i­cal beliefs or shift­ing vot­ers, some­thing Face­book and Cam­bridge Ana­lyt­i­ca claim they can already do. Craw­ford was skep­ti­cal about giv­ing Cam­bridge Ana­lyt­i­ca cred­it for Brex­it and the elec­tion of Don­ald Trump, but thinks what the firm promis­es – using thou­sands of data points on peo­ple to work out how to manip­u­late their views – will be pos­si­ble “in the next few years”.

“This is a fascist’s dream,” she said. “Pow­er with­out account­abil­i­ty.”

Such black box sys­tems are start­ing to creep into gov­ern­ment. Palan­tir is build­ing an intel­li­gence sys­tem to assist Don­ald Trump in deport­ing immi­grants.

“It’s the most pow­er­ful engine of mass depor­ta­tion this coun­try has ever seen,” she said. . . .
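To make the Cambridge "likes" finding cited above concrete, we offer a minimal sketch, assuming synthetic data and ordinary logistic regression; the 2013 study worked from real Facebook likes (with a dimensionality-reduction step), so every number and name below is illustrative rather than a reconstruction of that study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 300

# Hidden binary attribute to be inferred (e.g., a religious affiliation).
y = rng.integers(0, 2, size=n_users)

# Each hypothetical page-like is slightly more or less popular with one
# group; hundreds of such weak signals add up to a strong classifier.
base = rng.uniform(0.02, 0.10, size=n_likes)
bias = rng.normal(0.0, 0.03, size=n_likes)
p_like = np.clip(base + np.outer(y, bias), 0.001, 0.999)
X = (rng.uniform(size=(n_users, n_likes)) < p_like).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

The particular classifier is beside the point: any large, passively collected behavioral matrix of this shape can be turned into a de facto registry of attributes its subjects never disclosed, which is precisely Crawford's warning.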

 

Discussion

9 comments for “FTR #952 Be Afraid, Be VERY Afraid: Update on Technocratic Fascism”

  1. Here's a twist to the GOP's decision to give internet service providers the right to sell their customers' browsing data that might actually give the Alt Right and other neo-Nazis a very big reason to be very pissed at Trump and the GOP: If you're a neo-Nazi fan of websites like Stormfront, the fact that you visit these sites was going to be between you and Stormfront. And your ISP. And any other entities, government or otherwise, that might be watching the traffic to those sites. And if there are any advertisers on those sites they might also be able to track your Stormfront viewing habits. Plus, if you used a search engine to get to the site then your search engine provider would presumably know. But that would mostly be it. But now, thanks to the GOP, that whole giant data harvesting industry that exists to build profiles on everyone is going to get to learn about your Stormfront viewing habits too! Or maybe your love of 4chan. Or all those horrible Reddit forums you visit and post to. All that will be even more available to add to the various Big Data profiles of you floating around in the data harvesting industry. And it's all thanks to the GOP and Donald Trump:

    Spin

    The Internet’s Anony­mous Nazis Have Real­ized They Played Them­selves Now That Trump Plans to Kill Inter­net Pri­va­cy

    Andy Cush // March 29, 2017

    Yes­ter­day, the Repub­li­can-con­trolled House of Rep­re­sen­ta­tives passed a bill that will kill the Fed­er­al Com­mu­ni­ca­tions Commission’s rules pre­vent­ing inter­net ser­vice providers from sell­ing users’ brows­ing data to adver­tis­ers and oth­er third par­ties. Don­ald Trump’s White House has indi­cat­ed that it intends to sign the bill into law, a move that will dras­ti­cal­ly roll back the pri­va­cy rights of ordi­nary peo­ple on the inter­net. If the armies of anony­mous right-wing trolls who helped pro­pel the pres­i­dent to vic­to­ry aren’t sec­ond-guess­ing their Trump sup­port right about now, they real­ly should be.

    Last year, when the FCC was still under Demo­c­ra­t­ic con­trol, it passed rules that would require ISPs like Com­cast and Ver­i­zon to obtain their cus­tomers’ explic­it per­mis­sion before shar­ing infor­ma­tion like brows­er his­to­ry and geo­graph­i­cal data with oth­er com­pa­nies, a move that was hailed as a major step for­ward for online pri­va­cy at the time. The new law will elim­i­nate those rules, reopen­ing the path for providers to sell data about your porn view­ing habits or the time you spend read­ing dis­rep­utable music pub­li­ca­tions to adver­tis­ers hop­ing to build detailed pro­files of their poten­tial cus­tomers.

    During the presidential campaign, anonymous internet forums like Reddit, 4chan and 8chan, and the dingiest corners of Twitter emerged as major hubs for Trump supporters. "I'm fucking trembling out of excitement brahs," one user of 4chan's nihilistic right-wing politics board /pol/ proclaimed after Trump's victory. "We actually elected a meme as president."

    The only thing chan trolls care about as much as ethics in gam­ing jour­nal­ism and secur­ing a future for white chil­dren is access to an inter­net that’s unfet­tered by snoop­ing cor­po­rate inter­lop­ers. The entire iden­ti­ty of a Trump-lov­ing 4chan poster is wrapped up in the anonymi­ty of the inter­net, and the abil­i­ty to do what­ev­er you want online with­out con­se­quences, whether it’s telling elab­o­rate inside jokes on mes­sage boards or harass­ing lib­er­al jour­nal­ists on Twit­ter. The elim­i­na­tion of the FCC’s Oba­ma-era pri­va­cy pro­tec­tions is a major blow to the idea of a free and open inter­net.

    The trolls prob­a­bly should have seen this com­ing when Trump select­ed the out­spo­ken net neu­tral­i­ty oppo­nent Ajit Pai as FCC com­mis­sion­er, who vot­ed against the pri­va­cy rules when he was a hum­ble com­mis­sion mem­ber. Still, they’ve spent the 12 hours since the news of the roll­back broke in a state of apoplec­tic anx­i­ety. The cur­rent top post on Reddit’s pop­u­lar The_Donald board reads “Let’s dis­cuss this ISP pri­va­cy bill.” All three of the top com­ments are Trump sup­port­ers who are against it. “I think there needs to be a new bill that intro­duces an all encom­pass­ing pri­va­cy pro­tec­tion,” a Red­di­tor named Stir­lingG wrote. “I don’t like ISPs or web­sites sell­ing my shit, and I don’t see any major pos­i­tives to this.”

    “Keep note: HUGE amounts of ‘for­mer’ don­ald trump sup­port­ers shit­post­ing,” a user named bloodfist45 observed, fur­ther down the post. “I believe they’ll only become for­mer if Trump begins to vote on stuff like this,” anoth­er Red­di­tor respond­ed. “This bill is anti Amer­i­can. Data col­lec­tion on mas­sive lev­els is anti-amer­i­can.”

    ...

    The internet’s oth­er Trump hubs are no hap­pi­er. “Where were you when Trump sided with cor­po­ra­tions and gov­ern­ment over the Amer­i­can peo­ple?” asks a post on /pol/ from this morn­ing. “This is one of very few Oba­ma-era reg­u­la­tions that should have stayed,” reads the top com­ment on Breitbart’s news sto­ry about the bill, which char­ac­ter­izes the pri­va­cy rules as “big gov­ern­ment” over­reach. “This is an attack against free­dom,” anoth­er com­menter respond­ed. “I want to make my own deci­sions about my life, not oth­ers, and that includes my pri­vate info as well. this is total bs, the house giv­ing in to the busi­ness “estab­lish­ment”, Pres Trump should stop this!!!!!”

    Inci­den­tal­ly, the episode is a use­ful cau­tion­ary tale for impres­sion­able young Trump sup­port­ers about the Repub­li­can Party’s con­fla­tion of free-mar­ket cor­po­ratism and indi­vid­ual lib­er­ty in gen­er­al. When you dereg­u­late indus­tries, it’s not the com­mon man who enjoys new free­doms. It’s the peo­ple and orga­ni­za­tions who already have lots of mon­ey and power–in this case, the ISPs. And when those peo­ple in pow­er are giv­en an oppor­tu­ni­ty to fur­ther exploit the com­mon peo­ple who rely on them for essen­tial ser­vices in exchange for a lit­tle more mon­ey, they’ll always take it. Take note, Twit­ter eggs and Red­dit Pepes: it’s as true of health­care and finance as it is of the inter­net.

    “The only thing chan trolls care about as much as ethics in gam­ing jour­nal­ism and secur­ing a future for white chil­dren is access to an inter­net that’s unfet­tered by snoop­ing cor­po­rate inter­lop­ers. The entire iden­ti­ty of a Trump-lov­ing 4chan poster is wrapped up in the anonymi­ty of the inter­net, and the abil­i­ty to do what­ev­er you want online with­out con­se­quences, whether it’s telling elab­o­rate inside jokes on mes­sage boards or harass­ing lib­er­al jour­nal­ists on Twit­ter. The elim­i­na­tion of the FCC’s Oba­ma-era pri­va­cy pro­tec­tions is a major blow to the idea of a free and open inter­net.”

    And just to be clear, Don­ald Trump did actu­al­ly sign this into law, so it’s a done deal. And, of course, par for the course for the GOP:

    ...
    Inci­den­tal­ly, the episode is a use­ful cau­tion­ary tale for impres­sion­able young Trump sup­port­ers about the Repub­li­can Party’s con­fla­tion of free-mar­ket cor­po­ratism and indi­vid­ual lib­er­ty in gen­er­al. When you dereg­u­late indus­tries, it’s not the com­mon man who enjoys new free­doms. It’s the peo­ple and orga­ni­za­tions who already have lots of mon­ey and power–in this case, the ISPs. And when those peo­ple in pow­er are giv­en an oppor­tu­ni­ty to fur­ther exploit the com­mon peo­ple who rely on them for essen­tial ser­vices in exchange for a lit­tle more mon­ey, they’ll always take it. Take note, Twit­ter eggs and Red­dit Pepes: it’s as true of health­care and finance as it is of the inter­net.

    The Alt Right wasn't just betrayed by Trump and the GOP but by their own anti-government Libertarian ideology. Yeah, it's definitely a 'sad Pepe' moment.

    Posted by Pterrafractyl | April 10, 2017, 7:22 pm
  2. Con­sid­er­ing what Hitler and the Nazis did with a ... I for­get ... 80 char­ac­ter? punch card, what is out there today is mon­strous.

    The prob­lem is, does any­one have any way to con­ceive this, frame it, or man­age it so that there can be a con­sen­sus reached about laws and rights?

    One thing I’ve won­dered, is if we have all of this sur­veil­lance abil­i­ty, why is there still orga­nized crime? The Mafia should be gone, evis­cer­at­ed, unless they own the sur­veil­lance machine or are in on it in some way?

    Posted by Brux | April 10, 2017, 10:01 pm
  3. Remember how Tay, Microsoft's AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets? Well, according to some recent research, the AIs of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs' analysis of real-world human language usage will do that automatically:

    The Guardian

    AI pro­grams exhib­it racial and gen­der bias­es, research reveals

    Machine learn­ing algo­rithms are pick­ing up deeply ingrained race and gen­der prej­u­dices con­cealed with­in the pat­terns of lan­guage use, sci­en­tists say

    Han­nah Devlin
    Sci­ence cor­re­spon­dent

    Thurs­day 13 April 2017 14.00 EDT

    An arti­fi­cial intel­li­gence tool that has rev­o­lu­tionised the abil­i­ty of com­put­ers to inter­pret every­day lan­guage has been shown to exhib­it strik­ing gen­der and racial bias­es.

    The find­ings raise the spec­tre of exist­ing social inequal­i­ties and prej­u­dices being rein­forced in new and unpre­dictable ways as an increas­ing num­ber of deci­sions affect­ing our every­day lives are ced­ed to automa­tons.

    In the past few years, the abil­i­ty of pro­grams such as Google Trans­late to inter­pret lan­guage has improved dra­mat­i­cal­ly. These gains have been thanks to new machine learn­ing tech­niques and the avail­abil­i­ty of vast amounts of online text data, on which the algo­rithms can be trained.

    How­ev­er, as machines are get­ting clos­er to acquir­ing human-like lan­guage abil­i­ties, they are also absorb­ing the deeply ingrained bias­es con­cealed with­in the pat­terns of lan­guage use, the lat­est research reveals.

    Joan­na Bryson, a com­put­er sci­en­tist at the Uni­ver­si­ty of Bath and a co-author, said: “A lot of peo­ple are say­ing this is show­ing that AI is prej­u­diced. No. This is show­ing we’re prej­u­diced and that AI is learn­ing it.”

    But Bryson warned that AI has the poten­tial to rein­force exist­ing bias­es because, unlike humans, algo­rithms may be unequipped to con­scious­ly coun­ter­act learned bias­es. “A dan­ger would be if you had an AI sys­tem that didn’t have an explic­it part that was dri­ven by moral ideas, that would be bad,” she said.

    The research, pub­lished in the jour­nal Sci­ence, focus­es on a machine learn­ing tool known as “word embed­ding”, which is already trans­form­ing the way com­put­ers inter­pret speech and text. Some argue that the nat­ur­al next step for the tech­nol­o­gy may involve machines devel­op­ing human-like abil­i­ties such as com­mon sense and log­ic.

    ...

    The approach, which is already used in web search and machine trans­la­tion, works by build­ing up a math­e­mat­i­cal rep­re­sen­ta­tion of lan­guage, in which the mean­ing of a word is dis­tilled into a series of num­bers (known as a word vec­tor) based on which oth­er words most fre­quent­ly appear along­side it. Per­haps sur­pris­ing­ly, this pure­ly sta­tis­ti­cal approach appears to cap­ture the rich cul­tur­al and social con­text of what a word means in the way that a dic­tio­nary def­i­n­i­tion would be inca­pable of.

    For instance, in the math­e­mat­i­cal “lan­guage space”, words for flow­ers are clus­tered clos­er to words linked to pleas­ant­ness, while words for insects are clos­er to words linked to unpleas­ant­ness, reflect­ing com­mon views on the rel­a­tive mer­its of insects ver­sus flow­ers.

    The lat­est paper shows that some more trou­bling implic­it bias­es seen in human psy­chol­o­gy exper­i­ments are also read­i­ly acquired by algo­rithms. The words “female” and “woman” were more close­ly asso­ci­at­ed with arts and human­i­ties occu­pa­tions and with the home, while “male” and “man” were clos­er to maths and engi­neer­ing pro­fes­sions.

    And the AI sys­tem was more like­ly to asso­ciate Euro­pean Amer­i­can names with pleas­ant words such as “gift” or “hap­py”, while African Amer­i­can names were more com­mon­ly asso­ci­at­ed with unpleas­ant words.

    The find­ings sug­gest that algo­rithms have acquired the same bias­es that lead peo­ple (in the UK and US, at least) to match pleas­ant words and white faces in implic­it asso­ci­a­tion tests.

    These bias­es can have a pro­found impact on human behav­iour. One pre­vi­ous study showed that an iden­ti­cal CV is 50% more like­ly to result in an inter­view invi­ta­tion if the candidate’s name is Euro­pean Amer­i­can than if it is African Amer­i­can. The lat­est results sug­gest that algo­rithms, unless explic­it­ly pro­grammed to address this, will be rid­dled with the same social prej­u­dices.

    “If you didn’t believe that there was racism asso­ci­at­ed with people’s names, this shows it’s there,” said Bryson.

    The machine learn­ing tool used in the study was trained on a dataset known as the “com­mon crawl” cor­pus – a list of 840bn words that have been tak­en as they appear from mate­r­i­al pub­lished online. Sim­i­lar results were found when the same tools were trained on data from Google News.

    San­dra Wachter, a researcher in data ethics and algo­rithms at the Uni­ver­si­ty of Oxford, said: “The world is biased, the his­tor­i­cal data is biased, hence it is not sur­pris­ing that we receive biased results.”

    Rather than algo­rithms rep­re­sent­ing a threat, they could present an oppor­tu­ni­ty to address bias and coun­ter­act it where appro­pri­ate, she added.

    “At least with algo­rithms, we can poten­tial­ly know when the algo­rithm is biased,” she said. “Humans, for exam­ple, could lie about the rea­sons they did not hire some­one. In con­trast, we do not expect algo­rithms to lie or deceive us.”

    How­ev­er, Wachter said the ques­tion of how to elim­i­nate inap­pro­pri­ate bias from algo­rithms designed to under­stand lan­guage, with­out strip­ping away their pow­ers of inter­pre­ta­tion, would be chal­leng­ing.

    “We can, in prin­ci­ple, build sys­tems that detect biased deci­sion-mak­ing, and then act on it,” said Wachter, who along with oth­ers has called for an AI watch­dog to be estab­lished. “This is a very com­pli­cat­ed task, but it is a respon­si­bil­i­ty that we as soci­ety should not shy away from.”

    “And the AI sys­tem was more like­ly to asso­ciate Euro­pean Amer­i­can names with pleas­ant words such as “gift” or “hap­py”, while African Amer­i­can names were more com­mon­ly asso­ci­at­ed with unpleas­ant words.”

    Yep, if we decide to teach our AIs by just exposing them to a large database of human activity, like a massive database of documents published online, those AIs are going to learn a few lessons that we might not want them to learn. Kind of like how each generation of humans instills in the next generation, and thereby perpetuates, the dominant bigotries of its society simply through exposure. Great.

    And while it’s pos­si­ble to build sys­tems to detect these kinds of bias­es and cor­rect for it, doing so with­out strip­ping away the AIs’ pow­ers of inter­pre­ta­tion isn’t going to be easy:

    ...
    “At least with algo­rithms, we can poten­tial­ly know when the algo­rithm is biased,” she said. “Humans, for exam­ple, could lie about the rea­sons they did not hire some­one. In con­trast, we do not expect algo­rithms to lie or deceive us.”

    How­ev­er, Wachter said the ques­tion of how to elim­i­nate inap­pro­pri­ate bias from algo­rithms designed to under­stand lan­guage, with­out strip­ping away their pow­ers of inter­pre­ta­tion, would be chal­leng­ing.

    “We can, in prin­ci­ple, build sys­tems that detect biased deci­sion-mak­ing, and then act on it,” said Wachter, who along with oth­ers has called for an AI watch­dog to be estab­lished. “This is a very com­pli­cat­ed task, but it is a respon­si­bil­i­ty that we as soci­ety should not shy away from.”

    At this point it's looking like building non-bigoted AIs that learn simply by observing the world humans created is going to be a very difficult task. In other words, the easy and lazy thing to do is to just leave the learned bigotries intact. In other other words, the profitable thing to do is to just leave the learned bigotries intact. At least in many cases (because who wants to pay for the expensive non-bigoted AI?)

    So while it should­n’t be sur­pris­ing if the AIs of the future have an anti-human bias, it’s pos­si­ble that the next gen­er­a­tion of AIs will have an anti-spe­cif­ic-types-of-humans bias. And that par­tic­u­lar bias will depend on which human data set was used to train the AI. We would­n’t expect an AI trained on Amer­i­can lan­guage usage to have the same bias­es as those trained on, say, a Japan­ese data set. So we could have a future where the same under­ly­ing AI-self-learn­ing tech­nol­o­gy ends up cre­at­ing wild­ly dif­fer­ent AI big­otries due to the dif­fer­ent soci­etal data sets.

    It all rais­es a creepy AI big­otry ques­tion: will the AIs be big­ot­ed against AIs that don’t share their big­otries? For instance, will a white suprema­cist AI view an AI trained on, say, a Hebrew data set neg­a­tive­ly? What about an AI taught pri­mar­i­ly through expo­sure to reli­gious fun­da­men­tal­ist writ­ings? Will it view non-fun­da­men­tal­ist AIs as god­less ene­mies? These are the creepy kind of ques­tions we get to ask nowa­days. They’re pret­ty sim­i­lar to the ques­tions soci­eties should have been ask­ing of them­selves through­out his­to­ry regard­ing the col­lec­tive self-aware­ness of the big­otries get­ting passed on to the next gen­er­a­tion, but some­how creepi­er.

    And now you know: when you read about peo­ple like Elon Musk equat­ing arti­fi­cial intel­li­gence with “sum­mon­ing the demon”, that demon is us. At least in part.
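    To make the word-vector mechanics in the quoted piece concrete, here's a toy sketch of the kind of association test it describes. The three-dimensional "embeddings" below are made up for illustration; real studies of this sort (like the WEAT test in the Science paper) run the same arithmetic over vectors trained on billions of words from corpora like the Common Crawl.

```python
import numpy as np

# Hypothetical 3-D "word vectors"; only their geometry matters here.
vec = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([-0.8, 0.2, 0.1]),
    "pleasant":   np.array([0.85, 0.15, 0.05]),
    "unpleasant": np.array([-0.9, 0.1, 0.0]),
}

def cosine(a, b):
    # Standard similarity measure between two word vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    # WEAT-style score: positive means the word sits closer to
    # "pleasant" than to "unpleasant" in the embedding space.
    return cosine(vec[word], vec["pleasant"]) - cosine(vec[word], vec["unpleasant"])

for w in ("flower", "insect"):
    print(w, round(association(w), 3))
# "flower" comes out positive and "insect" negative: the geometry itself
# encodes the learned attitude. Swap in name or gender word lists and the
# same arithmetic surfaces the biases described in the article.
```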

    Posted by Pterrafractyl | April 13, 2017, 3:30 pm
  4. Check out the expla­na­tion GOP Con­gress­man James Sensen­bren­ner gave for why he sup­port­ed the vote to allow US inter­net ser­vice providers to sell their cus­tomers’ brows­ing habits: “Nobody’s got to use the inter­net”:

    ArsTech­ni­ca

    Why one Repub­li­can vot­ed to kill pri­va­cy rules: “Nobody has to use the Inter­net”
    Repub­li­cans encounter angry cit­i­zens after killing online pri­va­cy rules.

    Jon Brod­kin — 4/14/2017, 3:03 PM

    A Repub­li­can law­mak­er who vot­ed to elim­i­nate Inter­net pri­va­cy rules said, “Nobody’s got to use the Inter­net” when asked why ISPs should be able to use and share their cus­tomers’ Web brows­ing his­to­ry for adver­tis­ing pur­pos­es.

    US Rep. Jim Sensen­bren­ner (R‑Wis.) was host­ing a town hall meet­ing when a con­stituent asked about the deci­sion to elim­i­nate pri­va­cy rules. The per­son in the audi­ence was dis­put­ing the Repub­li­can argu­ment that ISPs should­n’t face stricter require­ments than web­sites such as Face­book.

    “Face­book is not com­pa­ra­ble to an ISP. I do not have to go on Face­book,” the town hall meet­ing attendee said. But when it comes to Inter­net ser­vice providers, the per­son said, “I have one choice. I don’t have to go on Google. My ISP provider is dif­fer­ent than those providers.”

    That’s when Sensen­bren­ner said, “Nobody’s got to use the Inter­net.” He praised ISPs for “invest[ing] an awful lot of mon­ey in hav­ing almost uni­ver­sal ser­vice now.” He then said, “I don’t think it’s my job to tell you that you can­not get adver­tis­ing for your infor­ma­tion being sold. My job, I think, is to tell you that you have the oppor­tu­ni­ty to do it, and then you take it upon your­self to make the choice.”

    Peo­ple “ought to have more choic­es rather than few­er choic­es with the gov­ern­ment con­trol­ling our every­day lives,” he con­clud­ed, before mov­ing on to the next ques­tion.

    “He said that nobody has to use the Inter­net. They have a choice,” Sensen­bren­ner’s press office explained on Twit­ter.

    Video was post­ed on Twit­ter yes­ter­day by Amer­i­can Bridge 21st Cen­tu­ry, a polit­i­cal action com­mit­tee that says it is “com­mit­ted to hold­ing Repub­li­cans account­able for their words and actions.”

    .@JimPressOffice tells his constituents not to use the internet if they don't like his vote to sell out their privacy to advertisers. #wi05 pic.twitter.com/lSVVx8OclO – Brad Bainum (@bradbainum) April 13, 2017

    Rules would have giv­en cus­tomers a choice

    Sensen­bren­ner did not address the fact that the pri­va­cy rules would have let cus­tomers make a choice about whether their data is tracked and used. The rules would have required ISPs to get cus­tomers’ opt-in con­sent before using, shar­ing, or sell­ing their Web brows­ing his­to­ry and app usage his­to­ry. Because Con­gress elim­i­nat­ed the rules before they could go into effect, ISPs can con­tin­ue to use cus­tomers’ brows­ing and app usage his­to­ry with­out offer­ing any­thing more than a chance to opt out. With­out such rules, cus­tomers may not even be aware that they have a choice.

    The rules were issued last year by the Fed­er­al Com­mu­ni­ca­tions Com­mis­sion and elim­i­nat­ed this month when Pres­i­dent Don­ald Trump signed a repeal that was approved along par­ty lines in Con­gress. There are no pri­va­cy rules that apply to ISPs now, but ISPs say they will let cus­tomers opt out of sys­tems that use brows­ing his­to­ry to deliv­er tar­get­ed ads.

    ...

    “That’s when Sensen­bren­ner said, “Nobody’s got to use the Inter­net.” He praised ISPs for “invest[ing] an awful lot of mon­ey in hav­ing almost uni­ver­sal ser­vice now.” He then said, “I don’t think it’s my job to tell you that you can­not get adver­tis­ing for your infor­ma­tion being sold. My job, I think, is to tell you that you have the oppor­tu­ni­ty to do it, and then you take it upon your­self to make the choice.””

    Rep. Sensen­bren­ner does­n’t think it’s his job to tell you that you can­not get adver­tis­ing in exchange for hav­ing your infor­ma­tion sold. Despite the fact that the pri­va­cy bill the GOP just killed explic­it­ly gave peo­ple the option to have their brows­ing habits sold if they want­ed to do that:

    ...
    Sensen­bren­ner did not address the fact that the pri­va­cy rules would have let cus­tomers make a choice about whether their data is tracked and used. The rules would have required ISPs to get cus­tomers’ opt-in con­sent before using, shar­ing, or sell­ing their Web brows­ing his­to­ry and app usage his­to­ry. Because Con­gress elim­i­nat­ed the rules before they could go into effect, ISPs can con­tin­ue to use cus­tomers’ brows­ing and app usage his­to­ry with­out offer­ing any­thing more than a chance to opt out. With­out such rules, cus­tomers may not even be aware that they have a choice.
    ...

    So it would probably be more accurate to say that Rep. Sensenbrenner thinks it is his job to explicitly tell you that you can't not receive advertising for having your information sold. Or rather, to pass a law that says you can't not have your information sold, and then frame it as a "freedom of choice" argument while simultaneously suggesting that the real choice you have is whether or not to use the internet at all, because you'll have no choice about whether or not your internet usage will be sold. Selling out his constituents, very directly in this case, and framing it as freedom. And then getting reelected. That appears to be his job.

    He’s pret­ty good at his job.

    Posted by Pterrafractyl | April 17, 2017, 8:08 pm
  5. FYI, those fancy new headphones you're wearing might be listening to what you're listening to. And your headphone manufacturer might be listening to what your headphones are telling them about what you're listening to. And the data mining companies might be listening to what your headphone manufacturer is selling them about what your headphones are telling them you're listening to. And that data mining company's clients might be listening to what the data mining company is selling them about what the headphone manufacturer is telling them about what your headphones are telling them you're listening to. In other words, when your headphones talk about what you're listening to, A LOT of different parties might be listening. Especially if those fancy new headphones are manufactured by Bose:

    The Wash­ing­ton Post

    Bose head­phones have been spy­ing on cus­tomers, law­suit claims

    By Hay­ley Tsukaya­ma
    April 19, 2017 at 5:24 PM

    Bose knows what you’re lis­ten­ing to.

    At least that’s the claim of a pro­posed class-action law­suit filed late Tues­day in Illi­nois that accus­es the high-end audio equip­ment mak­er of spy­ing on its users and sell­ing infor­ma­tion about their lis­ten­ing habits with­out per­mis­sion.

    The main plaintiff in the case is Kyle Zak, who bought a $350 pair of wireless Bose headphones last month. He registered the headphones, giving the company his name and email address, as well as the headphone serial number. And he downloaded the Bose Connect app, which the company said would make the headphones more useful by adding functions such as the ability to customize the level of noise cancellation in the headphones.

    But it turns out the app was also telling Bose a lot more about Zak than he bar­gained for.

    Accord­ing to the com­plaint, Bose col­lect­ed infor­ma­tion that was not cov­ered by its pri­va­cy pol­i­cy. This includ­ed the names of the audio files its cus­tomers were lis­ten­ing to:

    Defen­dant pro­grammed its Bose Con­nect app to con­tin­u­ous­ly record the con­tents of the elec­tron­ic com­mu­ni­ca­tions that users send to their Bose Wire­less Prod­ucts from their smart­phones, includ­ing the names of the music and audio tracks they select to play along with the cor­re­spond­ing artist and album infor­ma­tion, togeth­er with the Bose Wire­less Product’s ser­i­al num­bers (col­lec­tive­ly, “Media Infor­ma­tion”).

    Combined with the registration information, that gave Bose access to personally identifiable information that Zak and others never agreed to share, the complaint says. Listening data can be very personal, particularly if users are listening to podcasts or other audio files that could shade in information about their political preferences, health conditions or other interests, the complaint argues.

    The fil­ing also alleges that Bose was­n’t just col­lect­ing the infor­ma­tion. It was also shar­ing it with a data min­ing com­pa­ny called Segment.io, accord­ing to research con­duct­ed by Edel­son, the Chica­go-based law firm rep­re­sent­ing Zak.

    Bose did not imme­di­ate­ly respond to a request for com­ment on the suit.

    Wire­less head­phones are part of a grow­ing cat­e­go­ry of con­nect­ed devices, in which every­day prod­ucts can hook up to the Inter­net and pass infor­ma­tion from users to com­pa­nies. Oth­er smart device mak­ers have been accused of shar­ing and sell­ing infor­ma­tion with­out users’ con­sent. Tele­vi­sion mak­er Vizio set­tled with the Fed­er­al Trade Com­mis­sion in Feb­ru­ary over alle­ga­tions that it shared cus­tomers’ view­ing data with oth­er com­pa­nies with­out let­ting its users know.

    “It’s increas­ing­ly impor­tant for com­pa­nies to be upfront and hon­est about the data use poli­cies” as more devices become smart, said John Ver­di, vice pres­i­dent of pol­i­cy at the Future of Pri­va­cy Forum. “This is a sign of the fric­tion that is increas­ing­ly com­mon when devices, like head­phones, that were not pre­vi­ous­ly con­nect­ed or data-dri­ven become increas­ing­ly data-dri­ven.”

    Zak’s com­plaint alleges that Bose’s actions vio­late Illi­nois state statutes pro­hibit­ing decep­tive busi­ness prac­tices, as well as laws against eaves­drop­ping and wire­tap­ping.

    “Cus­tomers were not get­ting notice or giv­ing con­sent to have this type of data leave their phone to go to Bose, or to third-par­ties,” said Christo­pher Dore, a lawyer at Edel­son. He added that because a data min­ing com­pa­ny was pick­ing up the Bose infor­ma­tion, the small details of what Zak and oth­ers have lis­tened to could have been resold by that com­pa­ny far and wide — but it’s not clear to whom. “We don’t know where the data could have gone after that,” Dore said.

    Dore declined to elab­o­rate on how Zak found out the infor­ma­tion was being col­lect­ed.

    Wire­less head­phones are gain­ing pop­u­lar­i­ty, ana­lysts have said. Sales of Blue­tooth head­sets over­took sales of non-Blue­tooth head­sets in 2016, accord­ing to mar­ket research firm NPD Group. Moves from com­pa­nies to remove head­phone jacks from phones — most notably Apple and the iPhone — have also made Blue­tooth head­sets more appeal­ing for con­sumers and man­u­fac­tur­ers.

    Many head­phone mak­ers pair their prod­ucts with free apps that offer cus­tomers access to more fea­tures.

    The products listed in the complaint are: the QuietComfort 35, SoundSport Wireless, SoundSport Pulse Wireless, QuietControl 30, SoundLink Around-Ear Wireless Headphones II, and SoundLink Color II. Bose doesn't release sales information on individual products. But the QuietComfort 35, which is the model that Zak had, is a common fixture on gift guides – including The Washington Post's – and one of the top-ten selling headphones on Amazon.com.

    ...

    ““Cus­tomers were not get­ting notice or giv­ing con­sent to have this type of data leave their phone to go to Bose, or to third-par­ties,” said Christo­pher Dore, a lawyer at Edel­son. He added that because a data min­ing com­pa­ny was pick­ing up the Bose infor­ma­tion, the small details of what Zak and oth­ers have lis­tened to could have been resold by that com­pa­ny far and wide — but it’s not clear to whom. “We don’t know where the data could have gone after that,” Dore said.”

    That does­n’t sound good. Def­i­nite­ly not a ‘good vibes’ sit­u­a­tion for Bose if this turns out to be true.
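    For a sense of why the complaint treats track names plus a serial number as sensitive "Media Information," here's a hypothetical sketch of what one such telemetry event might look like once serialized. Every field name and value below is invented for illustration; none of it is taken from Bose's app or from Segment.io's actual schema.

```python
import json
import time

def media_event(serial_number: str, title: str, artist: str, album: str) -> str:
    """Build one hypothetical listening-telemetry event as JSON."""
    event = {
        "event": "Media Played",        # hypothetical event name
        "deviceSerial": serial_number,  # stable hardware identifier...
        "properties": {                 # ...joined to listening content
            "title": title,
            "artist": artist,
            "album": album,
        },
        "timestamp": int(time.time()),
    }
    return json.dumps(event)

# Registration ties the serial number to a name and email address, so every
# event of this shape amounts to personally identifiable listening history.
print(media_event("QC35-0001234", "Episode 12", "Some Podcast", "Season 1"))
```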

    And in oth­er ‘not good vibes’ news...

    Posted by Pterrafractyl | April 19, 2017, 7:28 pm
  6. With VW facing a final criminal fine from the US government of $2.8 billion over its use of software "defeat devices" designed to detect when regulators were examining diesel emissions and switch into a special regulator-friendly mode, it's worth noting that Uber, the ride-sharing behemoth, appears to have a proclivity for defeat devices of its own. Defeat devices for both public and private regulators (law enforcement and Apple):

    The New York Times

    Uber’s C.E.O. Plays With Fire
    Travis Kalanick’s dri­ve to win in life has led to
    a pat­tern of risk-tak­ing that has at times put his
    ride-hail­ing com­pa­ny on the brink of implo­sion.

    By MIKE ISAAC
    APRIL 23, 2017

    SAN FRANCISCO — Travis Kalan­ick, the chief exec­u­tive of Uber, vis­it­ed Apple’s head­quar­ters in ear­ly 2015 to meet with Tim­o­thy D. Cook, who runs the iPhone mak­er. It was a ses­sion that Mr. Kalan­ick was dread­ing.

    For months, Mr. Kalan­ick had pulled a fast one on Apple by direct­ing his employ­ees to help cam­ou­flage the ride-hail­ing app from Apple’s engi­neers. The rea­son? So Apple would not find out that Uber had been secret­ly iden­ti­fy­ing and tag­ging iPhones even after its app had been delet­ed and the devices erased — a fraud detec­tion maneu­ver that vio­lat­ed Apple’s pri­va­cy guide­lines.

    But Apple was onto the decep­tion, and when Mr. Kalan­ick arrived at the midafter­noon meet­ing sport­ing his favorite pair of bright red sneak­ers and hot-pink socks, Mr. Cook was pre­pared. “So, I’ve heard you’ve been break­ing some of our rules,” Mr. Cook said in his calm, South­ern tone. Stop the trick­ery, Mr. Cook then demand­ed, or Uber’s app would be kicked out of Apple’s App Store.

    For Mr. Kalan­ick, the moment was fraught with ten­sion. If Uber’s app was yanked from the App Store, it would lose access to mil­lions of iPhone cus­tomers — essen­tial­ly destroy­ing the ride-hail­ing company’s busi­ness. So Mr. Kalan­ick acced­ed.

    In a quest to build Uber into the world’s dom­i­nant ride-hail­ing enti­ty, Mr. Kalan­ick has open­ly dis­re­gard­ed many rules and norms, back­ing down only when caught or cor­nered. He has flout­ed trans­porta­tion and safe­ty reg­u­la­tions, bucked against entrenched com­peti­tors and cap­i­tal­ized on legal loop­holes and gray areas to gain a busi­ness advan­tage. In the process, Mr. Kalan­ick has helped cre­ate a new trans­porta­tion indus­try, with Uber spread­ing to more than 70 coun­tries and gain­ing a val­u­a­tion of near­ly $70 bil­lion, and its busi­ness con­tin­ues to grow.

    But the pre­vi­ous­ly unre­port­ed encounter with Mr. Cook showed how Mr. Kalan­ick was also respon­si­ble for risk-tak­ing that pushed Uber beyond the pale, some­times to the very brink of implo­sion.

    Cross­ing that line was not a one-off for Mr. Kalan­ick. Accord­ing to inter­views with more than 50 cur­rent and for­mer Uber employ­ees, investors and oth­ers with whom the exec­u­tive had per­son­al rela­tion­ships, Mr. Kalan­ick, 40, is dri­ven to the point that he must win at what­ev­er he puts his mind to and at what­ev­er cost — a trait that has now plunged Uber into its most sus­tained set of crises since its found­ing in 2009.

    “Travis’s biggest strength is that he will run through a wall to accom­plish his goals,” said Mark Cuban, the Dal­las Mav­er­icks own­er and bil­lion­aire investor who has men­tored Mr. Kalan­ick. “Travis’s biggest weak­ness is that he will run through a wall to accom­plish his goals. That’s the best way to describe him.”

    A blindness to boundaries is not uncommon for Silicon Valley entrepreneurs. But in Mr. Kalanick, that led to a pattern of repeatedly going too far at Uber, including the duplicity with Apple, sabotaging competitors and allowing the company to use a secret tool called Greyball to trick some law enforcement agencies.

    That qual­i­ty also extend­ed to his per­son­al life, where Mr. Kalan­ick mix­es with celebri­ties like Jay Z and busi­ness­men includ­ing Pres­i­dent Trump’s chief eco­nom­ic advis­er, Gary D. Cohn. But it has alien­at­ed some Uber exec­u­tives, employ­ees and advis­ers. Mr. Kalan­ick, with salt-and-pep­per hair, a fast-paced walk and an iPhone prac­ti­cal­ly embed­ded in his hand, is described by friends as more at ease with data and num­bers (some con­sid­er him a math savant) than with peo­ple.

    Uber is grap­pling with the fall­out. For the last few months, the com­pa­ny has been reel­ing from alle­ga­tions of a machis­mo-fueled work­place where man­agers rou­tine­ly over­stepped ver­bal­ly, phys­i­cal­ly and some­times sex­u­al­ly with employ­ees. Mr. Kalan­ick com­pound­ed that image by engag­ing in a shout­ing match with an Uber dri­ver in Feb­ru­ary, an inci­dent record­ed by the dri­ver and then leaked online. (Mr. Kalan­ick now has a pri­vate dri­ver.)

    The dam­age has been exten­sive. Uber’s detrac­tors have start­ed a grass-roots cam­paign with the hash­tag #dele­teU­ber. Exec­u­tives have streamed out. Some Uber investors have open­ly crit­i­cized the com­pa­ny.

    Mr. Kalanick’s lead­er­ship is at a pre­car­i­ous point. While Uber is financed by a who’s who of investors includ­ing Gold­man Sachs and Sau­di Arabia’s Pub­lic Invest­ment Fund, Mr. Kalan­ick con­trols the major­i­ty of the company’s vot­ing shares with a small hand­ful of oth­er close friends, and has stacked Uber’s board of direc­tors with many who are invest­ed in his suc­cess. Yet board mem­bers have con­clud­ed that he must change his man­age­ment style, and are pres­sur­ing him to do so.

    He has pub­licly apol­o­gized for some of his behav­ior, and for the first time has said he needs man­age­ment help. He is inter­view­ing can­di­dates for a chief oper­at­ing offi­cer, even as some employ­ees ques­tion whether a new addi­tion will make any dif­fer­ence. He has also been work­ing with senior man­agers to reset some of the company’s stat­ed val­ues. Results of an inter­nal inves­ti­ga­tion into Uber’s work­place cul­ture are expect­ed next month.

    Through an Uber spokesman, Mr. Kalan­ick declined an inter­view request. Apple declined to com­ment on the meet­ing with Mr. Cook. Many of the peo­ple inter­viewed for this arti­cle, who revealed pre­vi­ous­ly unre­port­ed details of Mr. Kalanick’s life, asked to remain anony­mous because they had signed nondis­clo­sure agree­ments with Uber or feared dam­ag­ing their rela­tion­ship with the chief exec­u­tive.

    Mr. Kalanick’s pat­tern for push­ing lim­its is deeply ingrained. It began dur­ing his child­hood in sub­ur­ban Los Ange­les, where he went from being bul­lied to being the aggres­sor, con­tin­ued through his years tak­ing risks at two tech­nol­o­gy start-ups there, and crys­tal­lized in his role at Uber.

    ...

    For the Win

    With Mr. Kalan­ick set­ting the tone at Uber, employ­ees act­ed to ensure the ride-hail­ing ser­vice would win no mat­ter what.

    They spent much of their ener­gy one-upping rivals like Lyft. Uber devot­ed teams to so-called com­pet­i­tive intel­li­gence, pur­chas­ing data from an ana­lyt­ics ser­vice called Slice Intel­li­gence. Using an email digest ser­vice it owns named Unroll.me, Slice col­lect­ed its cus­tomers’ emailed Lyft receipts from their inbox­es and sold the anonymized data to Uber. Uber used the data as a proxy for the health of Lyft’s busi­ness. (Lyft, too, oper­ates a com­pet­i­tive intel­li­gence team.)

    Slice con­firmed that it sells anonymized data (mean­ing that cus­tomers’ names are not attached) based on ride receipts from Uber and Lyft, but declined to dis­close who buys the infor­ma­tion.

    Uber also tried to win over Lyft’s dri­vers. Uber’s “dri­ver sat­is­fac­tion rat­ing,” an inter­nal met­ric, has dropped since Feb­ru­ary 2016, and rough­ly a quar­ter of its dri­vers turn over on aver­age every three months. Accord­ing to an inter­nal slide deck on dri­ver income lev­els viewed by The New York Times, Uber con­sid­ered Lyft and McDonald’s its main com­pe­ti­tion for attract­ing new dri­vers.

    To frus­trate Lyft dri­vers, Uber dis­patched some employ­ees to order and can­cel Lyft rides en masse. Oth­ers hailed Lyfts and spent the rides per­suad­ing dri­vers to switch to Uber full time.

    After Mr. Kalan­ick heard that Lyft was work­ing on a car-pool­ing fea­ture, Uber cre­at­ed and start­ed its own car-pool­ing option, Uber­Pool, in 2014, two days before Lyft unveiled its project.

    That year, Uber came close to buy­ing Lyft. At a meet­ing at Mr. Kalanick’s house, and over car­tons of Chi­nese food, he and Mr. Michael host­ed Lyft’s pres­i­dent, John Zim­mer, who asked for 15 per­cent of Uber in exchange for sell­ing Lyft. Over the next hour, Mr. Kalan­ick and Mr. Michael repeat­ed­ly laughed at Mr. Zimmer’s auda­cious request. No deal was reached. Lyft declined to com­ment.

    The rival­ry remains in force. In 2016, Uber held a sum­mit meet­ing in Mex­i­co City for some top man­agers, where it dis­trib­uted a play­book on how to cut into Lyft’s busi­ness and had ses­sions on how to dam­age its com­peti­tor.

    To devel­op its own busi­ness, Uber side­stepped the author­i­ties. Some employ­ees start­ed using a tool called Grey­ball to deceive offi­cials try­ing to shut down Uber’s ser­vice. The tool, devel­oped to aid dri­ver safe­ty and to trick fraud­sters, essen­tial­ly showed a fake ver­sion of Uber’s app to some peo­ple to dis­guise the loca­tions of cars and dri­vers. It soon became a way for Uber dri­vers to evade cap­ture by law enforce­ment in places where the ser­vice was deemed ille­gal.

    After The Times report­ed on Grey­ball in March, Uber said it would pro­hib­it employ­ees from using the tool against law enforce­ment.

    The idea of fool­ing Apple, the main dis­trib­u­tor of Uber’s app, began in 2014.

    At the time, Uber was deal­ing with wide­spread account fraud in places like Chi­na, where trick­sters bought stolen iPhones that were erased and resold. Some Uber dri­vers there would then cre­ate dozens of fake email address­es to sign up for new Uber rid­er accounts attached to each phone, and request rides from those phones, which they would then accept. Since Uber was hand­ing out incen­tives to dri­vers to take more rides, the dri­vers could earn more mon­ey this way.

    To halt the activ­i­ty, Uber engi­neers assigned a per­sis­tent iden­ti­ty to iPhones with a small piece of code, a prac­tice called “fin­ger­print­ing.” Uber could then iden­ti­fy an iPhone and pre­vent itself from being fooled even after the device was erased of its con­tents.

    There was one prob­lem: Fin­ger­print­ing iPhones broke Apple’s rules. Mr. Cook believed that wip­ing an iPhone should ensure that no trace of the owner’s iden­ti­ty remained on the device.

    So Mr. Kalan­ick told his engi­neers to “geofence” Apple’s head­quar­ters in Cuper­ti­no, Calif., a way to dig­i­tal­ly iden­ti­fy peo­ple review­ing Uber’s soft­ware in a spe­cif­ic loca­tion. Uber would then obfus­cate its code for peo­ple with­in that geofenced area, essen­tial­ly draw­ing a dig­i­tal las­so around those it want­ed to keep in the dark. Apple employ­ees at its head­quar­ters were unable to see Uber’s fin­ger­print­ing.

    The ruse did not last. Apple engi­neers out­side of Cuper­ti­no caught on to Uber’s meth­ods, prompt­ing Mr. Cook to call Mr. Kalan­ick to his office.

    Mr. Kalan­ick was shak­en by Mr. Cook’s scold­ing, accord­ing to a per­son who saw him after the meet­ing.

    But only momen­tar­i­ly. After all, Mr. Kalan­ick had faced off against Apple, and Uber had sur­vived. He had lived to fight anoth­er day.

    “So Mr. Kalan­ick told his engi­neers to “geofence” Apple’s head­quar­ters in Cuper­ti­no, Calif., a way to dig­i­tal­ly iden­ti­fy peo­ple review­ing Uber’s soft­ware in a spe­cif­ic loca­tion. Uber would then obfus­cate its code for peo­ple with­in that geofenced area, essen­tial­ly draw­ing a dig­i­tal las­so around those it want­ed to keep in the dark. Apple employ­ees at its head­quar­ters were unable to see Uber’s fin­ger­print­ing.”

    That sure sounds like a "defeat device". And note how simple it was to design: as long as the app's user was within a certain range of Apple's headquarters, the "defeat device" software was turned on (a minimal sketch of that kind of location-gated switch follows the next excerpt). And that was just for Apple. Then there's the law enforcement "defeat device":

    ...
    To devel­op its own busi­ness, Uber side­stepped the author­i­ties. Some employ­ees start­ed using a tool called Grey­ball to deceive offi­cials try­ing to shut down Uber’s ser­vice. The tool, devel­oped to aid dri­ver safe­ty and to trick fraud­sters, essen­tial­ly showed a fake ver­sion of Uber’s app to some peo­ple to dis­guise the loca­tions of cars and dri­vers. It soon became a way for Uber dri­vers to evade cap­ture by law enforce­ment in places where the ser­vice was deemed ille­gal.

    After The Times report­ed on Grey­ball in March, Uber said it would pro­hib­it employ­ees from using the tool against law enforce­ment.

    ...
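    For illustration, here is a minimal sketch of the kind of location gate described above: behave one way inside a radius around Apple's Cupertino headquarters, and another way everywhere else. The coordinates, radius, and function names are assumptions made for the example; the Times does not publish Uber's actual code:

        import math

        # Approximate Cupertino coordinates and radius, assumed for the example.
        GEOFENCE_CENTER = (37.3318, -122.0312)
        GEOFENCE_RADIUS_KM = 5.0

        def distance_km(lat1, lon1, lat2, lon2):
            # Great-circle distance via the haversine formula.
            r = 6371.0  # Earth radius in km
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * r * math.asin(math.sqrt(a))

        def should_hide_fingerprinting(lat, lon):
            # Inside the geofence (i.e., likely an Apple reviewer), hide the
            # forbidden behavior; outside it, fingerprint as usual. That
            # on/off switch is the essence of a "defeat device."
            clat, clon = GEOFENCE_CENTER
            return distance_km(lat, lon, clat, clon) <= GEOFENCE_RADIUS_KM

    The unnerving part is that the whole "device" reduces to a single distance check wrapped around otherwise-forbidden behavior.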

    So how did “Grey­ball” work? Well, geofenc­ing was one approach, where Uber would “geofence” the area around where city employ­ees involved with reg­u­lat­ing Uber worked. But for Grey­ball the geofenc­ing got much more spe­cif­ic. Or, rather, per­son­al. Uber would “tag” indi­vid­u­als work­ing for law enforce­ment and “geofence” them:

    The New York Times

    How Uber Deceives the Author­i­ties World­wide

    By MIKE ISAAC
    MARCH 3, 2017

    SAN FRANCISCO — Uber has for years engaged in a world­wide pro­gram to deceive the author­i­ties in mar­kets where its low-cost ride-hail­ing ser­vice was resist­ed by law enforce­ment or, in some instances, had been banned.

    The pro­gram, involv­ing a tool called Grey­ball, uses data col­lect­ed from the Uber app and oth­er tech­niques to iden­ti­fy and cir­cum­vent offi­cials who were try­ing to clamp down on the ride-hail­ing ser­vice. Uber used these meth­ods to evade the author­i­ties in cities like Boston, Paris and Las Vegas, and in coun­tries like Aus­tralia, Chi­na and South Korea.

    Grey­ball was part of a pro­gram called VTOS, short for “vio­la­tion of terms of ser­vice,” which Uber cre­at­ed to root out peo­ple it thought were using or tar­get­ing its ser­vice improp­er­ly. The pro­gram, includ­ing Grey­ball, began as ear­ly as 2014 and remains in use, pre­dom­i­nant­ly out­side the Unit­ed States. Grey­ball was approved by Uber’s legal team.

    Grey­ball and the VTOS pro­gram were described to The New York Times by four cur­rent and for­mer Uber employ­ees, who also pro­vid­ed doc­u­ments. The four spoke on the con­di­tion of anonymi­ty because the tools and their use are con­fi­den­tial and because of fear of retal­i­a­tion by Uber.

    Uber’s use of Grey­ball was record­ed on video in late 2014, when Erich Eng­land, a code enforce­ment inspec­tor in Port­land, Ore., tried to hail an Uber car down­town in a sting oper­a­tion against the com­pa­ny.

    At the time, Uber had just start­ed its ride-hail­ing ser­vice in Port­land with­out seek­ing per­mis­sion from the city, which lat­er declared the ser­vice ille­gal. To build a case against the com­pa­ny, offi­cers like Mr. Eng­land posed as rid­ers, open­ing the Uber app to hail a car and watch­ing as minia­ture vehi­cles on the screen made their way toward the poten­tial fares.

    But unknown to Mr. Eng­land and oth­er author­i­ties, some of the dig­i­tal cars they saw in the app did not rep­re­sent actu­al vehi­cles. And the Uber dri­vers they were able to hail also quick­ly can­celed. That was because Uber had tagged Mr. Eng­land and his col­leagues — essen­tial­ly Grey­balling them as city offi­cials — based on data col­lect­ed from the app and in oth­er ways. The com­pa­ny then served up a fake ver­sion of the app, pop­u­lat­ed with ghost cars, to evade cap­ture.

    At a time when Uber is already under scruti­ny for its bound­ary-push­ing work­place cul­ture, its use of the Grey­ball tool under­scores the lengths to which the com­pa­ny will go to dom­i­nate its mar­ket. Uber has long flout­ed laws and reg­u­la­tions to gain an edge against entrenched trans­porta­tion providers, a modus operan­di that has helped pro­pel it into more than 70 coun­tries and to a val­u­a­tion close to $70 bil­lion.

    Yet using its app to iden­ti­fy and side­step the author­i­ties where reg­u­la­tors said Uber was break­ing the law goes fur­ther toward skirt­ing eth­i­cal lines — and, poten­tial­ly, legal ones. Some at Uber who knew of the VTOS pro­gram and how the Grey­ball tool was being used were trou­bled by it.

    In a state­ment, Uber said, “This pro­gram denies ride requests to users who are vio­lat­ing our terms of ser­vice — whether that’s peo­ple aim­ing to phys­i­cal­ly harm dri­vers, com­peti­tors look­ing to dis­rupt our oper­a­tions, or oppo­nents who col­lude with offi­cials on secret ‘stings’ meant to entrap dri­vers.”

    The may­or of Port­land, Ted Wheel­er, said in a state­ment, “I am very con­cerned that Uber may have pur­pose­ful­ly worked to thwart the city’s job to pro­tect the pub­lic.”

    Uber, which lets peo­ple hail rides using a smart­phone app, oper­ates mul­ti­ple types of ser­vices, includ­ing a lux­u­ry Black Car offer­ing in which dri­vers are com­mer­cial­ly licensed. But an Uber ser­vice that many reg­u­la­tors have had prob­lems with is the low­er-cost ver­sion, known in the Unit­ed States as UberX.

    UberX essen­tial­ly lets peo­ple who have passed a back­ground check and vehi­cle inspec­tion become Uber dri­vers quick­ly. In the past, many cities have banned the ser­vice and declared it ille­gal.

    That is because the abil­i­ty to sum­mon a non­com­mer­cial dri­ver — which is how UberX dri­vers using pri­vate vehi­cles are typ­i­cal­ly cat­e­go­rized — was often unreg­u­lat­ed. In bar­rel­ing into new mar­kets, Uber cap­i­tal­ized on this lack of reg­u­la­tion to quick­ly enlist UberX dri­vers and put them to work before local reg­u­la­tors could stop them.

    After the author­i­ties caught on to what was hap­pen­ing, Uber and local offi­cials often clashed. Uber has encoun­tered legal prob­lems over UberX in cities includ­ing Austin, Tex., Philadel­phia and Tam­pa, Fla., as well as inter­na­tion­al­ly. Even­tu­al­ly, agree­ments were reached under which reg­u­la­tors devel­oped a legal frame­work for the low-cost ser­vice.

    That approach has been cost­ly. Law enforce­ment offi­cials in some cities have impound­ed vehi­cles or issued tick­ets to UberX dri­vers, with Uber gen­er­al­ly pick­ing up those costs on the dri­vers’ behalf. The com­pa­ny has esti­mat­ed thou­sands of dol­lars in lost rev­enue for every vehi­cle impound­ed and tick­et received.

    This is where the VTOS pro­gram and the use of the Grey­ball tool came in. When Uber moved into a new city, it appoint­ed a gen­er­al man­ag­er to lead the charge. This per­son, using var­i­ous tech­nolo­gies and tech­niques, would try to spot enforce­ment offi­cers.

    One tech­nique involved draw­ing a dig­i­tal perime­ter, or “geofence,” around the gov­ern­ment offices on a dig­i­tal map of a city that Uber was mon­i­tor­ing. The com­pa­ny watched which peo­ple were fre­quent­ly open­ing and clos­ing the app — a process known inter­nal­ly as eye­balling — near such loca­tions as evi­dence that the users might be asso­ci­at­ed with city agen­cies.

    Oth­er tech­niques includ­ed look­ing at a user’s cred­it card infor­ma­tion and deter­min­ing whether the card was tied direct­ly to an insti­tu­tion like a police cred­it union.

    Enforce­ment offi­cials involved in large-scale sting oper­a­tions meant to catch Uber dri­vers would some­times buy dozens of cell­phones to cre­ate dif­fer­ent accounts. To cir­cum­vent that tac­tic, Uber employ­ees would go to local elec­tron­ics stores to look up device num­bers of the cheap­est mobile phones for sale, which were often the ones bought by city offi­cials work­ing with bud­gets that were not large.

    In all, there were at least a dozen or so sig­ni­fiers in the VTOS pro­gram that Uber employ­ees could use to assess whether users were reg­u­lar new rid­ers or prob­a­bly city offi­cials.

    If such clues did not con­firm a user’s iden­ti­ty, Uber employ­ees would search social media pro­files and oth­er infor­ma­tion avail­able online. If users were iden­ti­fied as being linked to law enforce­ment, Uber Grey­balled them by tag­ging them with a small piece of code that read “Grey­ball” fol­lowed by a string of num­bers.

    When some­one tagged this way called a car, Uber could scram­ble a set of ghost cars in a fake ver­sion of the app for that per­son to see, or show that no cars were avail­able. Occa­sion­al­ly, if a dri­ver acci­den­tal­ly picked up some­one tagged as an offi­cer, Uber called the dri­ver with instruc­tions to end the ride.

    Uber employ­ees said the prac­tices and tools were born in part out of safe­ty mea­sures meant to pro­tect dri­vers in some coun­tries. In France, India and Kenya, for instance, taxi com­pa­nies and work­ers tar­get­ed and attacked new Uber dri­vers.

    ...

    In those areas, Grey­balling start­ed as a way to scram­ble the loca­tions of UberX dri­vers to pre­vent com­peti­tors from find­ing them. Uber said that was still the tool’s pri­ma­ry use.

    But as Uber moved into new mar­kets, its engi­neers saw that the same meth­ods could be used to evade law enforce­ment. Once the Grey­ball tool was put in place and test­ed, Uber engi­neers cre­at­ed a play­book with a list of tac­tics and dis­trib­uted it to gen­er­al man­agers in more than a dozen coun­tries on five con­ti­nents.

    At least 50 peo­ple inside Uber knew about Grey­ball, and some had qualms about whether it was eth­i­cal or legal. Grey­ball was approved by Uber’s legal team, led by Salle Yoo, the company’s gen­er­al coun­sel. Ryan Graves, an ear­ly hire who became senior vice pres­i­dent of glob­al oper­a­tions and a board mem­ber, was also aware of the pro­gram.

    Ms. Yoo and Mr. Graves did not respond to requests for com­ment.

    Out­side legal spe­cial­ists said they were uncer­tain about the legal­i­ty of the pro­gram. Grey­ball could be con­sid­ered a vio­la­tion of the fed­er­al Com­put­er Fraud and Abuse Act, or pos­si­bly inten­tion­al obstruc­tion of jus­tice, depend­ing on local laws and juris­dic­tions, said Peter Hen­ning, a law pro­fes­sor at Wayne State Uni­ver­si­ty who also writes for The New York Times.

    “With any type of sys­tem­at­ic thwart­ing of the law, you’re flirt­ing with dis­as­ter,” Pro­fes­sor Hen­ning said. “We all take our foot off the gas when we see the police car at the inter­sec­tion up ahead, and there’s noth­ing wrong with that. But this goes far beyond avoid­ing a speed trap.”

    On Fri­day, Mari­et­je Schaake, a mem­ber of the Euro­pean Par­lia­ment for the Dutch Demo­c­ra­t­ic Par­ty in the Nether­lands, wrote that she had writ­ten to the Euro­pean Com­mis­sion ask­ing, among oth­er things, if it planned to inves­ti­gate the legal­i­ty of Grey­ball.

    To date, Grey­balling has been effec­tive. In Port­land on that day in late 2014, Mr. Eng­land, the enforce­ment offi­cer, did not catch an Uber, accord­ing to local reports.

    And two weeks after Uber began dis­patch­ing dri­vers in Port­land, the com­pa­ny reached an agree­ment with local offi­cials that said that after a three-month sus­pen­sion, UberX would even­tu­al­ly be legal­ly avail­able in the city.

    “But unknown to Mr. Eng­land and oth­er author­i­ties, some of the dig­i­tal cars they saw in the app did not rep­re­sent actu­al vehi­cles. And the Uber dri­vers they were able to hail also quick­ly can­celed. That was because Uber had tagged Mr. Eng­land and his col­leagues — essen­tial­ly Grey­balling them as city offi­cials — based on data col­lect­ed from the app and in oth­er ways. The com­pa­ny then served up a fake ver­sion of the app, pop­u­lat­ed with ghost cars, to evade cap­ture.”

    And how exactly did Uber "tag" these individuals working for city governments? One method was checking their credit card information to see whether it was tied to an institution like a police credit union. And if that didn't work, they searched social media:

    ...
    This is where the VTOS pro­gram and the use of the Grey­ball tool came in. When Uber moved into a new city, it appoint­ed a gen­er­al man­ag­er to lead the charge. This per­son, using var­i­ous tech­nolo­gies and tech­niques, would try to spot enforce­ment offi­cers.

    One tech­nique involved draw­ing a dig­i­tal perime­ter, or “geofence,” around the gov­ern­ment offices on a dig­i­tal map of a city that Uber was mon­i­tor­ing. The com­pa­ny watched which peo­ple were fre­quent­ly open­ing and clos­ing the app — a process known inter­nal­ly as eye­balling — near such loca­tions as evi­dence that the users might be asso­ci­at­ed with city agen­cies.

    Oth­er tech­niques includ­ed look­ing at a user’s cred­it card infor­ma­tion and deter­min­ing whether the card was tied direct­ly to an insti­tu­tion like a police cred­it union.

    Enforce­ment offi­cials involved in large-scale sting oper­a­tions meant to catch Uber dri­vers would some­times buy dozens of cell­phones to cre­ate dif­fer­ent accounts. To cir­cum­vent that tac­tic, Uber employ­ees would go to local elec­tron­ics stores to look up device num­bers of the cheap­est mobile phones for sale, which were often the ones bought by city offi­cials work­ing with bud­gets that were not large.

    In all, there were at least a dozen or so sig­ni­fiers in the VTOS pro­gram that Uber employ­ees could use to assess whether users were reg­u­lar new rid­ers or prob­a­bly city offi­cials.

    If such clues did not con­firm a user’s iden­ti­ty, Uber employ­ees would search social media pro­files and oth­er infor­ma­tion avail­able online. If users were iden­ti­fied as being linked to law enforce­ment, Uber Grey­balled them by tag­ging them with a small piece of code that read “Grey­ball” fol­lowed by a string of num­bers.
    ...

    “If such clues did not con­firm a user’s iden­ti­ty, Uber employ­ees would search social media pro­files and oth­er infor­ma­tion avail­able online. If users were iden­ti­fied as being linked to law enforce­ment, Uber Grey­balled them by tag­ging them with a small piece of code that read “Grey­ball” fol­lowed by a string of num­bers.”

    So Uber was searching social media accounts to play "Where's Cop Waldo?", and that was only if its other methods of identifying the police or other government employees didn't work. In other words, the "defeat device" for Uber in this case was Big Data. Big Data plus a simple "tagging" system that could activate Uber's fake mode for people on the "Greyball" list. That's quite a "defeat device".
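    As a thought experiment, that tagging logic boils down to something like the sketch below. The signifiers come straight from the reporting (eyeballing near geofenced government offices, credit cards tied to police credit unions, bargain-bin phone models, social media matches); the scoring threshold, the field names, and the ghost-car response are invented for illustration:

        # Illustrative sketch of Greyball-style tagging. Thresholds and data
        # structures are assumptions, not Uber's actual implementation.
        CHEAP_STING_PHONES = {"budget-model-a", "budget-model-b"}  # hypothetical

        def greyball_score(user):
            # Count how many of the reported signifiers a user matches.
            score = 0
            if user.get("opens_app_near_government_geofence"):  # "eyeballing"
                score += 1
            if user.get("card_institution") == "police credit union":
                score += 1
            if user.get("device_model") in CHEAP_STING_PHONES:
                score += 1
            if user.get("social_media_suggests_law_enforcement"):
                score += 1
            return score

        def rides_view(user, real_cars):
            # Tagged users get a fake map of ghost cars; everyone else sees
            # reality. The tag mimics the reported "Greyball"-plus-numbers
            # convention.
            if greyball_score(user) >= 2:  # assumed threshold
                user["tag"] = "Greyball" + user["account_number"]
                return ["ghost_car"] * 8
            return real_cars

    The details are guesses, but the structure (score a rider against a list of signifiers, then swap in a fake view of the app) is exactly what the reporting describes.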

    And as disturbing as this story is in terms of the growing privacy danger it highlights for anyone involved with law enforcement and government (or private regulators, in the case of Apple), it also raises a question: if Uber was doing this "background check" in order to find the police, doesn't that mean it was doing this check on all its riders? At least in the cities where it had these disputes with the local government? Isn't that what's implied by this story? Mass background checks? It seems like that's what it implies.

    So that’s all one more exam­ple of the pri­va­cy dan­gers asso­ci­at­ed with the ‘any­thing goes’ bare­ly-reg­u­lat­ed Big Data indus­try: pri­vate com­pa­nies now have an incen­tive to engage in Big Data searchers on basi­cal­ly every­one in order to iden­ti­fy all the peo­ple who might bust them for mass Big Data abus­es.

    Posted by Pterrafractyl | April 23, 2017, 3:28 pm
  7. Check out some of the features in the next version of Amazon's Echo, the Echo Look, which has a microphone and camera so it can take pictures of you and give you fashion advice. Yes, an AI-driven device designed to be placed in your bedroom to capture audio and video is on the way. The images and videos are stored indefinitely in the Amazon cloud. And when Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn't address that question. So based on that non-response, selling off the private info collected from these devices is presumably another feature of the Echo Look:

    Vice
    Moth­er­board

    Ama­zon Wants to Put a Cam­era and Micro­phone in Your Bed­room

    Jason Koe­bler

    Apr 26 2017, 10:34am

    Ama­zon is giv­ing Alexa eyes. And it’s going to let her judge your out­fits.

    The new­ly announced Echo Look is a vir­tu­al assis­tant with a micro­phone and a cam­era that’s designed to go some­where in your bed­room, bath­room, or wher­ev­er the hell you get dressed.

    Ama­zon is pitch­ing it as an easy way to snap pic­tures of your out­fits to send to your friends when you’re not sure if your out­fit is cute, but it’s also got a built-in app called StyleCheck that is worth some fur­ther dis­sec­tion.

    * You cool with an algo­rithm, machine learn­ing, and “fash­ion spe­cial­ists” decid­ing whether you look attrac­tive today? What sorts of built-in bias­es will an AI fash­ion­ista have? It’s worth remem­ber­ing that a recent AI-judged beau­ty con­test picked pri­mar­i­ly white win­ners.
    * You cool with Ama­zon hav­ing the capa­bil­i­ty to see and per­haps cat­a­log every sin­gle arti­cle of cloth­ing you own? Who needs a Calvin Klein dash but­ton if your Echo can tell when you need new under­wear? Will Alexa pre­vent you from buy­ing a pair of JNCOs?
    * You cool with Ama­zon putting a cam­era in your bed­room?
    * Amazon stores images and videos taken by Echo Look indefinitely, the company told us. Audio recorded by the original Echo has already been sought out in a murder case; to its credit, Amazon fought a search warrant in that case.

    “All pho­tos and video cap­tured with your Echo Look are secure­ly stored in the AWS cloud and local­ly in the Echo Look app until a cus­tomer deletes them,” a spokesper­son for the com­pa­ny said. “You can delete the pho­tos or videos asso­ci­at­ed with your account any­time in the Echo Look App.”

    Moth­er­board also asked if Echo Look pho­tos, videos, and the data gleaned from them would be sold to third par­ties; the com­pa­ny did not address that ques­tion.

    As tech­noso­ci­ol­o­gist Zeynep Tufek­ci points out, machine learn­ing com­bined with full-length pho­tos and videos have at least the poten­tial to be used for much more than sell­ing you clothes or serv­ing you ads. Ama­zon will have the capa­bil­i­ty to detect if you’re preg­nant and may be able to learn if you’re depressed. Her whole thread is worth read­ing.

    With this data, Ama­zon won’t be able to just sell you clothes or judge you. It could ana­lyze if you’re depressed or preg­nant and much else. pic.twitter.com/irc0tLVce9— Zeynep Tufek­ci (@zeynep) April 26, 2017

    ...

    In prac­tice, the Echo Look isn’t much dif­fer­ent than, say, a Nest cam­era or an inter­net-con­nect­ed baby mon­i­tor (the lat­ter of which gets hacked all the time, by the way). But the addi­tion of arti­fi­cial intel­li­gence and Ama­zon’s pen­chant for using its prod­ucts to sell us more stuff makes this feel more than a bit Black Mir­ror-ish.

    “Moth­er­board also asked if Echo Look pho­tos, videos, and the data gleaned from them would be sold to third par­ties; the com­pa­ny did not address that ques­tion.”

    No com­ment. That’s quite a com­ment.

    But it’s worth not­ing that Ama­zon did com­ment about this same ques­tion to a reporter from The Verge. Although it was­n’t an answer to exact­ly the same ques­tion. The ques­tion Ama­zon neglect­ed to answer above was the gen­er­al ques­tion of whether Ama­zon would sell the data “to third par­ties.” But for a report in The Verge, which came out a day after the above report from Vice, an Ama­zon rep­re­sen­ta­tive did vol­un­teer that it wouldn’t share any per­son­al infor­ma­tion gath­ered from the Echo Look to “adver­tis­ers or to third-par­ty sites that dis­play our inter­est-based ads”:

    The Verge

    Amazon’s Echo Look is a mine­field of AI and pri­va­cy con­cerns
    What does Ama­zon want to learn from pic­tures of its cus­tomers? The com­pa­ny won’t say
    by James Vin­cent
    Apr 27, 2017, 2:48pm EDT

    Com­put­er sci­en­tist Andrew Ng once described the pow­er of con­tem­po­rary AI as the abil­i­ty to auto­mate any men­tal task that takes a human “less than one sec­ond of thought.” It’s a rule of thumb that’s worth remem­ber­ing when you think about Amazon’s new Echo Look — a smart cam­era with a built-in AI assis­tant. Ama­zon says the Echo Look will help users dress and give them fash­ion advice, but what oth­er judge­ments could it make?

    ...

    As aca­d­e­m­ic and soci­ol­o­gist Zeynep Tufek­ci put it on Twit­ter: “Machine learn­ing algo­rithms can do so much with reg­u­lar full length pic­tures of you. They can infer pri­vate things you did not dis­close [...] All this to sell you more clothes. We are sell­ing out to sur­veil­lance cap­i­tal­ism that can quick­ly evolve into author­i­tar­i­an­ism for so cheap.” (The whole thread from Tufec­ki is def­i­nite­ly worth a read.)

    Adver­tis­ers open­ly say it’s best to sell make-up to women when they feel “fat, lone­ly and depressed.” With this data, won’t have to guess.— Zeynep Tufek­ci (@zeynep) April 26, 2017

    This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they “can’t spec­u­late” on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.

    This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: “Can’t spec­u­late.”

    The com­pa­ny did, though, say it wouldn’t share any per­son­al infor­ma­tion gleaned from the Echo Look to “adver­tis­ers or to third-par­ty sites that dis­play our inter­est-based ads.” That means Ama­zon could still use data from the Look to tar­get ads at you itself, but at least third par­ties won’t.

    Right now, the Echo Look is halfway between pro­to­type and full-on prod­uct. As is often the case with Amazon’s hard­ware efforts, the com­pa­ny seems most inter­est­ed in just get­ting a prod­uct out there and gaug­ing pub­lic reac­tion, rather than finess­ing every detail. The com­pa­ny is giv­ing no indi­ca­tion of when the Echo Look will actu­al­ly be avail­able, and it’s cur­rent­ly only being sold “by invi­ta­tion only.” All this means that Ama­zon itself prob­a­bly isn’t yet sure what exact­ly it will do with the data the device col­lects. But, if the com­pa­ny refus­es to give any more detail, it’s under­stand­able to fear the worst.

    “The com­pa­ny did, though, say it wouldn’t share any per­son­al infor­ma­tion gleaned from the Echo Look to “adver­tis­ers or to third-par­ty sites that dis­play our inter­est-based ads.” That means Ama­zon could still use data from the Look to tar­get ads at you itself, but at least third par­ties won’t.”

    So based on the information, or lack thereof, that Amazon is willing to share at this point, whether the data gathered by the Echo Look will be sold to third parties depends on whether the phrase "advertisers or to third-party sites that display our interest-based ads" was intended to mean "all third-parties" or not. Because it seems like there should be plenty of non-advertising interest in exactly the same kind of data advertisers are interested in.

    Only time, and likely a series of data-privacy horror stories, will tell whether or not the data collected by Amazon's personal data-collection device designed to go in your bedroom ends up in third-party hands. At this point we can mostly just cynically speculate.

    As to whether or not Ama­zon will be keep­ing all this har­vest­ed data for its own inter­nal pur­pos­es, like cross-ref­er­enc­ing the data gath­ered from the Echo Look with all the oth­er data it has on us, we don’t need to cyn­i­cal­ly spec­u­late quite as much since it’s pret­ty obvi­ous from Ama­zon’s “we can’t spec­u­late” response that the com­pa­ny is def­i­nite­ly spec­u­lat­ing about cross-ref­er­enc­ing all that data:

    ...
    This might seem over­ly spec­u­la­tive or alarmist to some, but Ama­zon isn’t offer­ing any reas­sur­ance that they won’t be doing more with data gath­ered from the Echo Look. When asked if the com­pa­ny would use machine learn­ing to ana­lyze users’ pho­tos for any pur­pose oth­er than fash­ion advice, a rep­re­sen­ta­tive sim­ply told The Verge that they “can’t spec­u­late” on the top­ic. The rep did stress that users can delete videos and pho­tos tak­en by the Look at any time, but until they do, it seems this con­tent will be stored indef­i­nite­ly on Amazon’s servers.

    This non-denial means the Echo Look could poten­tial­ly pro­vide Ama­zon with the resource every AI com­pa­ny craves: data. And full-length pho­tos of peo­ple tak­en reg­u­lar­ly in the same loca­tion would be a par­tic­u­lar­ly valu­able dataset — even more so if you com­bine this infor­ma­tion with every­thing else Ama­zon knows about its cus­tomers (their shop­ping habits, for one). But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: “Can’t spec­u­late.”
    ...

    “But when asked whether the com­pa­ny would ever com­bine these two datasets, an Ama­zon rep only gave the same, canned answer: “Can’t spec­u­late.””

    That’s an iron­i­cal­ly clear answer. Yes, Ama­zon is almost def­i­nite­ly spec­u­lat­ing about cross-ref­er­ence the data it col­lects from the Look with every­thing else it knows about you. And that rais­es anoth­er inter­est­ing ques­tion about the data it could poten­tial­ly sell: the big edge that Big Data col­lec­tors like Face­book, Google, and Ama­zon have over the rest of the Big Data indus­try is all the insights they can make by putting two and two togeth­er. Insights that oth­er Big Data com­peti­tors can’t gen­er­ate them­selves because of the rel­a­tive exclu­sive­ness of the data col­lect­ed by social media/internet giants like Face­book, Google, and Ama­zon. So what about the sale of those insights? After all, they weren’t direct­ly har­vest­ed data. They were instead inferred from har­vest­ed data. So will those insights be for sale? Like, if the Ama­zon infers that you’re depressed or preg­nant or what­ev­er based on the data col­lect­ed from the Echo Look — and maybe some oth­er data they have on you — are they going to be open to sell­ing that data.

    In other words, when Amazon claims that it won't sell the data it harvests to third-party advertisers, does that include the inferences Amazon makes based on that data too? Or is it just the raw data that Amazon says it won't sell? Given the ambiguity of Amazon's answers we can only speculate at this point. Very cynically speculate.
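    One way to see why the raw-data-versus-inference distinction matters is that an inferred attribute is a new piece of information that never appeared in the harvested data at all. The sketch below is a toy illustration with made-up fields and a deliberately crude rule; a real system would use trained models over photos and purchase histories:

        # Toy illustration of raw data vs. inferred attributes. The fields and
        # the "inference" rule are made up for the example.
        raw_record = {
            "customer_id": "c-001",
            "recent_purchases": ["maternity vitamins", "ginger tea"],
            "full_length_photos_30d": 22,
        }

        def infer_attributes(record):
            # Derive labels that exist nowhere in the raw record itself.
            inferred = {}
            if any("maternity" in item for item in record["recent_purchases"]):
                inferred["possibly_pregnant"] = True  # inferred, never harvested
            return inferred

        insights = infer_attributes(raw_record)
        # A pledge that covers only raw_record arguably says nothing
        # about selling insights.

    A promise not to sell "data" can thus be read narrowly, covering only the collected records and not the labels derived from them.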

    Posted by Pterrafractyl | April 27, 2017, 2:38 pm
  8. It is rather iron­ic, at least I hope it is, that for decades, as long as I have been alive, cer­tain peo­ple have “trum­pet­ed” that what this coun­try needs is a good busi­ness­man to take the helm and bring cor­po­rate order and fis­cal san­i­ty to the nation ... and now we get Trump.

    I wonder, hope, expect, demand that Americans have their noses rubbed in the toxic mess of this wispy-haired idiot who even George Will (whom I detest) says cannot think or speak clearly, and once and for all realize that the skills needed for making a lot of money run counter to the skills needed to uphold the Constitution and represent fairly all of the American people.

    Even the incompetence of private corporations in the marketplace has had a huge negative effect. Looking more closely at the Internet: when it began it was not supposed to be used for any commercial purposes, and had it been more closely regulated there might have been a chance that it would not be such a vulnerability to threats from both foreign and domestic entities.

    A thing that surprised me the other day was that I was browsing one of the local city websites, which happened to have an article about growing legal pot and how cities were trying to regulate it. I was just browsing, not commenting or thumbs-upping or thumbs-downing. Yet when I clicked away from the site, for the first time my browser showed me ads for commercial pot cultivation. That unnerved me a bit.

    It is not just that all this data is being stored, but that various searches and queries can be run against it, and things that have no legal backing or judicial review are bypassed, so that the security state is going to evolve into Big Brother, if it has not already.

    Posted by Brux | June 23, 2017, 2:58 pm
  9. Remember Uber's "Greyball" project? That was the tool Uber developed to identify government and law enforcement officials in cities where Uber was operating illegally. Greyball served up a special version of the Uber app that hid the existence of Uber drivers in the area when those targeted officials tried to use the app to determine whether or not Uber was indeed operating illegally in their city.

    Well, not surprisingly, it turns out that Uber had a backup plan in case "Greyball" didn't work and one of its offices ended up getting raided: Ripley. That's the name of the software Uber secretly installed on the computers in its hundreds of offices around the world that could immediately lock down all of the computers and make them basically inaccessible to law enforcement. Ripley allowed the Uber HQ team to remotely change passwords and lock up data on company-owned smartphones, laptops, and desktops, and then remotely shut down the devices. Later versions of Ripley gave Uber's HQ the ability to selectively provide information to government agencies: company lawyers could direct security engineers to select which information to share with officials when a warrant was issued. And this, of course, has law enforcement agencies wondering what they've missed in past raids. Because Ripley isn't just a tool Uber developed "just in case" and never actually used. Sources say it was used at least two dozen times:

    Bloomberg Busi­ness­week

    Uber’s Secret Tool for Keep­ing the Cops in the Dark
    At least two dozen times, the San Fran­cis­co head­quar­ters locked down equip­ment in for­eign offices to shield files from police raids.

    By Olivia Zales­ki and Eric New­com­er
    Jan­u­ary 11, 2018, 4:30 AM CST

    In May 2015 about 10 inves­ti­ga­tors for the Que­bec tax author­i­ty burst into Uber Tech­nolo­gies Inc.’s office in Mon­tre­al. The author­i­ties believed Uber had vio­lat­ed tax laws and had a war­rant to col­lect evi­dence. Man­agers on-site knew what to do, say peo­ple with knowl­edge of the event.

    Like man­agers at Uber’s hun­dreds of offices abroad, they’d been trained to page a num­ber that alert­ed spe­cial­ly trained staff at com­pa­ny head­quar­ters in San Fran­cis­co. When the call came in, staffers quick­ly remote­ly logged off every com­put­er in the Mon­tre­al office, mak­ing it prac­ti­cal­ly impos­si­ble for the author­i­ties to retrieve the com­pa­ny records they’d obtained a war­rant to col­lect. The inves­ti­ga­tors left with­out any evi­dence.

    Most tech com­pa­nies don’t expect police to reg­u­lar­ly raid their offices, but Uber isn’t most com­pa­nies. The ride-hail­ing startup’s rep­u­ta­tion for flout­ing local labor laws and taxi rules has made it a favorite tar­get for law enforce­ment agen­cies around the world. That’s where this remote sys­tem, called Rip­ley, comes in. From spring 2015 until late 2016, Uber rou­tine­ly used Rip­ley to thwart police raids in for­eign coun­tries, say three peo­ple with knowl­edge of the sys­tem. Allu­sions to its nature can be found in a smat­ter­ing of court fil­ings, but its details, scope, and ori­gin haven’t been pre­vi­ous­ly report­ed.

    The Uber HQ team over­see­ing Rip­ley could remote­ly change pass­words and oth­er­wise lock up data on com­pa­ny-owned smart­phones, lap­tops, and desk­tops as well as shut down the devices. This rou­tine was ini­tial­ly called the unex­pect­ed vis­i­tor pro­to­col. Employ­ees aware of its exis­tence even­tu­al­ly took to call­ing it Rip­ley, after Sigour­ney Weaver’s flamethrow­er-wield­ing hero in the Alien movies. The nick­name was inspired by a Rip­ley line in Aliens, after the acid-blood­ed extrater­res­tri­als eas­i­ly best a squad of ground troops. “Nuke the entire site from orbit. It’s the only way to be sure.”

    Oth­er com­pa­nies have shut off com­put­ers dur­ing police raids, then grant­ed offi­cers access after review­ing a war­rant. And Uber has rea­son to be cau­tious with the sen­si­tive infor­ma­tion it holds about cus­tomers and their loca­tions around the world. Rip­ley stands out part­ly because it was used reg­u­lar­ly—at least two dozen times, the peo­ple with knowl­edge of the sys­tem say—and part­ly because some employ­ees involved say they felt the pro­gram slowed inves­ti­ga­tions that were legal­ly sound in the local offices’ juris­dic­tions. “Obstruc­tion of jus­tice def­i­n­i­tions vary wide­ly by coun­try,” says Ryan Calo, a cyber­law pro­fes­sor at the Uni­ver­si­ty of Wash­ing­ton. “What’s clear is that Uber main­tained a gen­er­al pat­tern of legal arbi­trage.”

    ...

    Uber has already drawn crim­i­nal inquiries from the U.S. Depart­ment of Jus­tice for at least five oth­er alleged schemes. In Feb­ru­ary, the New York Times exposed Uber’s use of a soft­ware tool called Grey­ball, which showed enforce­ment offi­cers a fake ver­sion of its app to pro­tect dri­vers from get­ting tick­et­ed. Ripley’s exis­tence gives offi­cials look­ing into oth­er Uber inci­dents rea­son to won­der what they may have missed when their raids were stymied by locked com­put­ers or encrypt­ed files. Pros­e­cu­tors may look at whether Uber obstruct­ed law enforce­ment in a new light. “It’s a fine line,” says Albert Gidari, direc­tor of pri­va­cy at Stan­ford Law School’s Cen­ter for Inter­net & Soci­ety. “What is going to deter­mine which side of the line you’re on, between obstruc­tion and prop­er­ly pro­tect­ing your busi­ness, is going to be things like your his­to­ry, how the gov­ern­ment has inter­act­ed with you.”

    About a year after the failed Mon­tre­al raid, the judge in the Que­bec tax authority’s law­suit against Uber wrote that “Uber want­ed to shield evi­dence of its ille­gal activ­i­ties” and that the company’s actions in the raid reflect­ed “all the char­ac­ter­is­tics of an attempt to obstruct jus­tice.” Uber told the court it nev­er delet­ed its files. It coop­er­at­ed with a sec­ond search war­rant that explic­it­ly cov­ered the files and agreed to col­lect provin­cial tax­es for each ride.

    Uber deployed Rip­ley rou­tine­ly as recent­ly as late 2016, includ­ing dur­ing gov­ern­ment raids in Ams­ter­dam, Brus­sels, Hong Kong, and Paris, say the peo­ple with knowl­edge of the mat­ter. The tool was devel­oped in coor­di­na­tion with Uber’s secu­ri­ty and legal depart­ments, the peo­ple say. The heads of both depart­ments, Joe Sul­li­van and Salle Yoo, left the com­pa­ny last year. Nei­ther respond­ed to requests for com­ment.

    Ripley’s roots date to March 2015, when police stormed Uber’s Brus­sels office, say peo­ple with knowl­edge of the event. The Bel­gian author­i­ties, which accused Uber of oper­at­ing with­out prop­er licens­es, gained access to the company’s pay­ments sys­tem and finan­cial doc­u­ments as well as dri­ver and employ­ee infor­ma­tion. A court order forced Uber to shut down its unli­censed ser­vice lat­er that year. Fol­low­ing that raid and anoth­er in Paris the same week, Yoo, then Uber’s gen­er­al coun­sel, direct­ed her staff to install a stan­dard encryp­tion ser­vice and log off com­put­ers after 60 sec­onds of inac­tiv­i­ty. She also pro­posed test­ing an app to counter raids. Work­ers in Uber’s IT depart­ment were soon tasked with cre­at­ing a sys­tem to keep inter­nal records hid­den from intrud­ers enter­ing any of its hun­dreds of for­eign offices. They used soft­ware from Twilio Inc. to page staffers who would trig­ger the lock­down.

    The secu­ri­ty team, which housed many of Uber’s most con­tro­ver­sial pro­grams, took over Rip­ley from the IT depart­ment in 2016. In a let­ter shared with U.S. attor­neys and made pub­lic in a trade-secrets law­suit against Uber, Richard Jacobs, a for­mer Uber man­ag­er, accused the secu­ri­ty group of spy­ing on gov­ern­ment offi­cials and rivals. Jacobs’s let­ter makes an oblique ref­er­ence to a pro­gram for imped­ing police raids. A 2016 wrong­ful-dis­missal law­suit by Samuel Span­gen­berg, anoth­er Uber man­ag­er, also ref­er­ences its use dur­ing the May 2015 tax author­i­ty raid in Mon­tre­al.

    The three peo­ple with knowl­edge of the pro­gram say they believe Ripley’s use was jus­ti­fied in some cas­es because police out­side the U.S. didn’t always come with war­rants or relied on broad orders to con­duct fish­ing expe­di­tions. But the pro­gram was a close­ly guard­ed secret. Its exis­tence was unknown even to many work­ers in the Uber offices being raid­ed. Some were bewil­dered and dis­tressed when law enforce­ment ordered them to log on to their com­put­ers and they were unable to do so, two of the peo­ple say.

    Lat­er ver­sions of Rip­ley gave Uber the abil­i­ty to selec­tive­ly pro­vide infor­ma­tion to gov­ern­ment agen­cies that searched the company’s for­eign offices. At the direc­tion of com­pa­ny lawyers, secu­ri­ty engi­neers could select which infor­ma­tion to share with offi­cials who had war­rants to access Uber’s sys­tems, the peo­ple say.

    Anoth­er option was con­tem­plat­ed for times when Uber want­ed to be less trans­par­ent. In 2016 the secu­ri­ty team began work­ing on soft­ware called uLock­er. An ear­ly pro­to­type could present a dum­my ver­sion of a typ­i­cal login screen to police or oth­er unwant­ed eyes, the peo­ple say. But Uber says no dum­my-desk­top func­tion was ever imple­ment­ed or used, and that the cur­rent ver­sion of uLock­er doesn’t include that capa­bil­i­ty. The project is over­seen by John Fly­nn, Uber’s chief infor­ma­tion secu­ri­ty offi­cer.

    ———-

    “Uber’s Secret Tool for Keep­ing the Cops in the Dark” by Olivia Zales­ki and Eric New­com­er; Bloomberg Busi­ness­week; 01/11/2018

    “Like man­agers at Uber’s hun­dreds of offices abroad, they’d been trained to page a num­ber that alert­ed spe­cial­ly trained staff at com­pa­ny head­quar­ters in San Fran­cis­co. When the call came in, staffers quick­ly remote­ly logged off every com­put­er in the Mon­tre­al office, mak­ing it prac­ti­cal­ly impos­si­ble for the author­i­ties to retrieve the com­pa­ny records they’d obtained a war­rant to col­lect. The inves­ti­ga­tors left with­out any evi­dence.”

    That’s the descrip­tion of a cor­po­rate response to ‘unex­pect­ed vis­its’ from law enforce­ment that’s appar­ent­ly been rou­tine for Uber in recent years: a law enforce­ment raid prompts a call to Uber HQ, and all of a sud­den all the com­put­ers are shut down, encrypt­ed, and there’s basi­cal­ly no abil­i­ty to gath­er evi­dence. And then after law enforce­ment demands access after show­ing its war­rant, “Rip­ley” allows Uber HQ to decrypt and make avail­able a select­ed sub­set of the data on Uber’s sys­tem. One ver­sion of Rip­ley even cre­at­ed a fake login screen, which pre­sum­ably launch a ‘clean’ ver­sion of that com­put­er with no sen­si­tive files avail­able:

    ...
    Lat­er ver­sions of Rip­ley gave Uber the abil­i­ty to selec­tive­ly pro­vide infor­ma­tion to gov­ern­ment agen­cies that searched the company’s for­eign offices. At the direc­tion of com­pa­ny lawyers, secu­ri­ty engi­neers could select which infor­ma­tion to share with offi­cials who had war­rants to access Uber’s sys­tems, the peo­ple say.

    Anoth­er option was con­tem­plat­ed for times when Uber want­ed to be less trans­par­ent. In 2016 the secu­ri­ty team began work­ing on soft­ware called uLock­er. An ear­ly pro­to­type could present a dum­my ver­sion of a typ­i­cal login screen to police or oth­er unwant­ed eyes, the peo­ple say. But Uber says no dum­my-desk­top func­tion was ever imple­ment­ed or used, and that the cur­rent ver­sion of uLock­er doesn’t include that capa­bil­i­ty. The project is over­seen by John Fly­nn, Uber’s chief infor­ma­tion secu­ri­ty offi­cer.
    ...
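    To make the "unexpected visitor protocol" concrete, here is a deliberately simplified sketch of the lockdown flow the article describes: a paged alert triggers remote password rotation, logoff, and shutdown for every machine registered to the raided office, and a later "selective disclosure" step filters what gets handed over under a warrant. The office inventory, the fleet-management interface, and the disclose helper are all hypothetical stand-ins; Bloomberg reports that the paging ran over software from Twilio but gives no implementation details:

        import secrets

        # Hypothetical inventory of company-owned machines, keyed by office.
        OFFICE_DEVICES = {
            "montreal": ["laptop-01", "laptop-02", "desktop-01"],
        }

        def lockdown_office(office, fleet):
            # Triggered when HQ is paged about an "unexpected visitor."
            # `fleet` stands in for whatever device-management tooling
            # actually exposed these operations.
            for device_id in OFFICE_DEVICES.get(office, []):
                fleet.set_password(device_id, secrets.token_urlsafe(24))
                fleet.lock(device_id)      # log the local user off
                fleet.shutdown(device_id)  # power the machine down entirely

        def disclose(records, warrant_scope):
            # Later Ripley versions reportedly let company lawyers choose
            # which records fall inside a warrant's scope.
            return [r for r in records if warrant_scope(r)]

    Whether that second step is "properly protecting your business" or obstruction is, as the article's legal experts note, exactly the open question.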

    So Uber has a system where it can lock down its data almost instantly, and then choose what data it wants to reveal after a warrant forces the company to comply. And this all might be legal, because the line between obstruction and properly protecting your business is a legal greyzone:

    ...
    Uber has already drawn crim­i­nal inquiries from the U.S. Depart­ment of Jus­tice for at least five oth­er alleged schemes. In Feb­ru­ary, the New York Times exposed Uber’s use of a soft­ware tool called Grey­ball, which showed enforce­ment offi­cers a fake ver­sion of its app to pro­tect dri­vers from get­ting tick­et­ed. Ripley’s exis­tence gives offi­cials look­ing into oth­er Uber inci­dents rea­son to won­der what they may have missed when their raids were stymied by locked com­put­ers or encrypt­ed files. Pros­e­cu­tors may look at whether Uber obstruct­ed law enforce­ment in a new light. “It’s a fine line,” says Albert Gidari, direc­tor of pri­va­cy at Stan­ford Law School’s Cen­ter for Inter­net & Soci­ety. “What is going to deter­mine which side of the line you’re on, between obstruc­tion and prop­er­ly pro­tect­ing your busi­ness, is going to be things like your his­to­ry, how the gov­ern­ment has inter­act­ed with you.”
    ...

    Although it seems like the spying on government officials and rivals alleged in that letter probably isn't legal. Or is it?

    ...
    The secu­ri­ty team, which housed many of Uber’s most con­tro­ver­sial pro­grams, took over Rip­ley from the IT depart­ment in 2016. In a let­ter shared with U.S. attor­neys and made pub­lic in a trade-secrets law­suit against Uber, Richard Jacobs, a for­mer Uber man­ag­er, accused the secu­ri­ty group of spy­ing on gov­ern­ment offi­cials and rivals. Jacobs’s let­ter makes an oblique ref­er­ence to a pro­gram for imped­ing police raids. A 2016 wrong­ful-dis­missal law­suit by Samuel Span­gen­berg, anoth­er Uber man­ag­er, also ref­er­ences its use dur­ing the May 2015 tax author­i­ty raid in Mon­tre­al.
    ...

    Keep in mind that "Greyball" was all about spying on government officials. That's how the system worked: Uber identified government officials in a city and then manipulated just their Uber apps to hide all the local Uber drivers. So was Uber spying on these government officials solely via the Uber app, or was there a more invasive form of spying going on? This seems like the kind of thing that should be investigated. Perhaps with a raid on Uber's offices. Oh wait...

    Posted by Pterrafractyl | January 16, 2018, 2:58 pm
