Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #952 Be Afraid, Be VERY Afraid: Update on Technocratic Fascism

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by early winter of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more. (The previous flash drive was current through the end of May of 2012.)

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself HERE.

This broadcast was recorded in one, 60-minute segment.

Introduction: One of the illusions harbored by many–in particular, young people who have grown up with the internet, social networks and mobile technology–sees digital activity as private. Nothing could be further from the truth. Even before the cyber-libertarian policies advocated by individuals like John Perry Barlow, Eddie Snowden, Julian Assange and others were implemented by the Trump administration and the GOP-controlled Congress, digital affairs were subject to an extraordinary degree of manipulation by a multitude of interests.

We begin our examination of technocratic fascism with a look at the corporate foundation of Pokemon Go. Information about the background of Pokemon Go’s developer (Niantic) and the development of the firm is detailed in an article from Network World. In addition to the formidable nature of the intelligence agencies involved with generating the corporate foundation of Pokemon Go (Keyhole, Inc.; Niantic), note the unnerving nature of the information that can be gleaned from the Android phone of anyone who downloads the “app.”

Pokemon Go was seen as enhancing the “Cool Japan Strategy” of Prime Minister Shinzo Abe. The “Cool Japan Promotion Fund” was implemented by Abe (the grandson of Nobusuke Kishi, a Japanese war criminal who signed Japan’s declaration of war against the U.S. and went on to become Prime Minister after the war) to “raise the international profile of the country’s mass culture.”

The Finance Minister of Japan is Taro Aso, one of the enthusiasts of Nazi political strategy highlighted below. The “Cool Japan Promotion Fund” would have been under his administration, with Tomomi Inada functioning as his administrator for the program. Now serving as Japan’s Defense Minister, Inada is another advocate of Nazi political strategy.

Next, we turn to another manifestation of Pokemon Go. The “Alt-Right” (read “Nazi”) movement is using Pokemon Go to recruit kids to the Nazi cause. Consider this against the background of Niantic, the Cool Japan strategy and the pro-Nazi figures involved with it. Consider this also, in conjunction with the Nazified AI developed and deployed by Robert and Rebekah Mercer, Steve Bannon, Cambridge Analytica and the “Alt-Right” milieu with which they associate.

A recent New Yorker article by Jane Mayer concerning Robert Mercer keys some interesting thoughts about Mercer, Bannon, the Alt-Right, WikiLeaks and the Nazified AI we spoke of in FTR #’s 948 and 949. In FTR #946, we noted this concatenation’s central place in the Facebook constellation, a position that has enabled them to act decisively on the political landscape.

We note several things about the Mayer piece:

  • She writes of Mercer’s support for the Alt-Right–Mercer helps fund Bannon’s Breitbart:  “. . . . In February, David Magerman, a senior employee at Renaissance, spoke out about what he regards as Mercer’s worrisome influence. Magerman, a Democrat who is a strong supporter of Jewish causes, took particular issue with Mercer’s empowerment of the alt-right, which has included anti-Semitic and white-supremacist voices. . . .”
  • Mercer is racist, feeling that racism only exists in contemporary black culture: “. . . . Mercer, for his part, has argued that the Civil Rights Act, in 1964, was a major mistake. According to the onetime Renaissance employee, Mercer has asserted repeatedly that African-Americans were better off economically before the civil-rights movement. (Few scholars agree.) He has also said that the problem of racism in America is exaggerated. The source said that, not long ago, he heard Mercer proclaim that there are no white racists in America today, only black racists. . . .”
  • His work at IBM was funded in part by DARPA, strongly implying that the DOD has applied some of the Mercer technology: “. . . . Yet, when I.B.M. failed to offer adequate support for Mercer and Brown’s translation project, they secured additional funding from DARPA, the secretive Pentagon program. Despite Mercer’s disdain for “big government,” this funding was essential to his early success. . . .”
  • In a 2012 anti-Obama propaganda film funded by Citizens United, Steve Bannon borrowed from The Triumph of the Will: “. . . . Many of these [disillusioned Obama] voters became the central figures of “The Hope & the Change,” an anti-Obama film that Bannon and Citizens United released during the 2012 Democratic National Convention. After Caddell saw the film, he pointed out to Bannon that its opening imitated that of ‘Triumph of the Will,’ the 1935 ode to Hitler, made by the Nazi filmmaker Leni Riefenstahl. Bannon laughed and said, ‘You’re the only one that caught it!’ In both films, a plane flies over a blighted land, as ominous music swells; then clouds in the sky part, auguring a new era. . . .”

Next, we return to the subject of Bitcoin and cyber-libertarian policy. We have explored Bitcoin in a number of programs–FTR #’s 760, 764, 770 and 785.

An important new book by David Golumbia sets forth the technocratic fascist politics underlying Bitcoin. Known to veteran listeners/readers as the author of an oft-quoted article dealing with technocratic fascism, Golumbia has published a short, important book about the right-wing extremism underlying Bitcoin. (Programs on technocratic fascism include: FTR #’s 851, 859, 866, 867.)

In an excerpt from the book, we see disturbing elements of resonance with the views of Stephen Bannon and some of the philosophical influences on him. Julius Evola, “Mencius Moldbug” and Bannon himself see our civilization as in decline, at a critical “turning point,” and in need of being “blown up” (as Evola put it) or needing a “shock to the system.”

Note that the Cypherpunk’s Manifesto (published by the Electronic Frontier Foundation) and the 1996 “Declaration of the Independence of Cyberspace” written by the libertarian activist, Grateful Dead lyricist, and Electronic Frontier Foundation founder John Perry Barlow decry governmental regulation of the digital system. (EFF is a leading “digital rights” and technology industry advocacy organization.)

The libertarian/fascist ethic of the digital world was articulated by Barlow.

Note how the “freedom” advocated by Barlow et al has played out: the Trump administration (implementing the desires of corporate America) has “deregulated” the internet. All this in the name of “freedom.”

In FTR #854, we noted the curious professional resume of Barlow, containing such disparate elements as–lyricist for the Grateful Dead (“Far Out!”); Dick Cheney’s campaign manager (not so “Far Out!”); a voter for white supremacist/segregationist George Wallace in the 1968 Presidential campaign (very “Un-Far Out!”).

For our purposes, his most noteworthy professional undertaking is his founding of the EFF–The Electronic Frontier Foundation. A leading ostensible advocate for internet freedom, the EFF has endorsed technology and embraced personnel inextricably linked with a CIA-derived milieu embodied in Radio Free Asia’s Open Technology Fund. (For those who are, understandably, surprised and/or skeptical, we discussed this at length and in detail in FTR #’s 891 and 895.)

Next, we present an article that brings to the fore some interesting questions about Barlow, the CIA and the very genesis of social media.

We offer Ms. Sunderson’s observations, stressing that Barlow’s foreshadowing of the communication functions inherent in social media and his presence at CIA headquarters (by invitation!) suggest that Barlow not only has strong ties to CIA but may have been involved in the conceptual genesis that spawned CIA-connected entities such as Facebook.

In FTR #951, we observed that Richard B. Spencer, one of Trump’s Nazi backers, has begun a website with Swedish Alt-Righter Daniel Friberg, part of the Swedish fascist milieu to which Carl Lundstrom belongs. In FTR #732 (among other programs), we noted that it was Lundstrom who financed the Pirate Bay website, on which WikiLeaks held forth for quite some time. In FTR #745, we documented that top Assange aide and Holocaust-denier Joran Jermas (aka “Israel Shamir”) arranged the Lundstrom/WikiLeaks liaison. (Jermas handles WikiLeaks Russian operations, a point of interest in the wake of the 2016 campaign.)

It is a good bet that Lundstrom/Pirate Bay/WikiLeaks et al were data mining the many people who visited the WikiLeaks site.

Might Lundstrom/Jermas/Assange et al have shared the voluminous data they may well have mined with Mercer/Cambridge Analytica/Bannon’s Nazified AI?

We conclude with a recap of Microsoft researcher Kate Crawford’s observations at the SXSW event. Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, we risk making those AIs a fascist’s dream: incredible power over the lives of others, with minimal accountability. ” . . . . ‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

We reiterate, in closing, that ” . . . . Palantir is building an intelligence system to assist Donald Trump in deporting immigrants. . . .”

In FTR #757 we noted that Palantir is a firm dominated by Peter Thiel, a main backer of Donald Trump.

Program Highlights Include: 

  • WikiLeaks’ continued propagation of Alt-Right style Anti-Semitic propaganda: ” . . . . Now it is the darling of the alt-right, revealing hacked emails seemingly to influence a presidential contest, claiming the US election is ‘rigged,’ and descending into conspiracy. Just this week on Twitter, it described the deaths by natural causes of two of its supporters as a ‘bloody year for WikiLeaks,’ and warned of media outlets ‘controlled by’ members of the Rothschild family – a common anti-Semitic trope. . . .”
  • Assessing all of the data-mining potential (certainty) of WikiLeaks, Pokemon Go and the (perhaps) Barlow-inspired social media world against the background of the Mercer/Bannon/Cambridge Analytica Nazified AI.

1a. Information about the background of Pokemon Go’s developer (Niantic) and the development of the firm is detailed in an article from Network World. In addition to the formidable nature of the intelligence agencies involved with generating the corporate foundation of Pokemon Go (Keyhole, Inc.; Niantic), note the unnerving nature of the information that can be gleaned from the Android phone of anyone who downloads the “app.”

“The CIA, NSA and Pokemon Go” by Linux Tycoon; Network World; 7/22/2016.

. . . . Way back in 2001, Keyhole, Inc. was founded by John Hanke (who previously worked in a “foreign affairs” position within the U.S. government). The company was named after the old “eye-in-the-sky” military satellites. One of the key, early backers of Keyhole was a firm called In-Q-Tel.

In-Q-Tel is the venture capital firm of the CIA. Yes, the Central Intelligence Agency. Much of the funding purportedly came from the National Geospatial-Intelligence Agency (NGA). The NGA handles combat support for the U.S. Department of Defense and provides intelligence to the NSA and CIA, among others.

Keyhole’s noteworthy public product was “Earth,” renamed “Google Earth” after Google acquired Keyhole in 2004.

In 2010, Niantic Labs was founded (inside Google) by Keyhole’s founder, John Hanke.

Over the next few years, Niantic created two location-based apps/games. The first was Field Trip, a smartphone application where users walk around and find things. The second was Ingress, a sci-fi-themed game where players walk around and between locations in the real world.

In 2015, Niantic was spun off from Google and became its own company. Then Pokémon Go was developed and launched by Niantic. It’s a game where you walk around in the real world (between locations suggested by the service) while holding your smartphone.

Data the game can access

Let’s move on to what information Pokémon Go has access to, bearing the history of the company in mind as we do.

When you install Pokémon Go on an Android phone, you grant it the following access (not including the ability to make in-app purchases):

Identity

  • Find accounts on the device

Contacts

  • Find accounts on the device

Location

  • Precise location (GPS and network-based)
  • Approximate location (network-based)

Photos/Media/Files

  • Modify or delete the contents of your USB storage
  • Read the contents of your USB storage

Storage

  • Modify or delete the contents of your USB storage
  • Read the contents of your USB storage

Camera

  • Take pictures and videos

Other

  • Receive data from the internet
  • Control vibration
  • Pair with Bluetooth devices
  • Access Bluetooth settings
  • Full network access
  • Use accounts on the device
  • View network connections
  • Prevent the device from sleeping

Based on the access to your device (and your information), coupled with the design of Pokémon Go, the game should have no problem discerning and storing the following information (just for a start):

  • Where you are
  • Where you were
  • What route you took between those locations
  • When you were at each location
  • How long it took you to get between them
  • What you are looking at right now
  • What you were looking at in the past
  • What you look like
  • What files you have on your device and the entire contents of those files
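The permission list above is the kind of information any Android user can audit directly. As a minimal sketch (the package name `com.nianticlabs.pokemongo` and the sample dump are illustrative assumptions, not verified values), the permissions held by an installed app can be pulled from a connected device with the standard `adb` tool and filtered with `grep`:

```shell
# In practice, capture the package record from a connected device (requires adb
# and USB debugging; the package name is an assumption -- confirm it first with:
#   adb shell pm list packages
# ):
#   adb shell dumpsys package com.nianticlabs.pokemongo > pokemon_dump.txt

# Sample dump standing in for real device output, so the filter below runs as-is:
printf 'requested permissions:\n  android.permission.ACCESS_FINE_LOCATION\n  android.permission.CAMERA\n' > pokemon_dump.txt

# Pull out the unique Android permission identifiers from the captured dump:
grep -o 'android\.permission\.[A-Z_]*' pokemon_dump.txt | sort -u
```

The “requested permissions” and “granted permissions” sections of the real `dumpsys` output correspond to the permission groups (Identity, Location, Camera, Storage, and so on) enumerated above.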

1b. Pokemon Go was seen as enhancing the “Cool Japan Strategy” of Prime Minister Shinzo Abe. The “Cool Japan Promotion Fund” was implemented by Abe (the grandson of Nobusuke Kishi, a Japanese war criminal who signed Japan’s declaration of war against the U.S. and went on to become Prime Minister after the war) to “raise the international profile of the country’s mass culture.”

The Finance Minister of Japan is Taro Aso, one of the enthusiasts of Nazi political strategy highlighted below. The “Cool Japan Promotion Fund” would have been under his administration, with Tomomi Inada functioning as his administrator for the program. Inada is another advocate of Nazi political strategy.

“Will Pokemon Go Power Up Japan’s ‘Cool Economy’” by Henry Laurence; The Diplomat; 7/29/2016.

Is Pokémon Go a game changer for the Japanese economy? Within days of its release in early July, a record 21 million people were playing at once, tracking down and capturing the cute little monsters on their smartphones. Creator Nintendo’s shares soared. But the phenomenal popularity of the game raises important questions, beyond just “where’s the nearest Pokégym?” Is it a sign that Silicon Valley-style innovation is reinvigorating corporate Japan’s notoriously insular management? Might this be the first big success story for Prime Minister Shinzo Abe’s “Cool Japan” initiative, a key element in the structural reforms promised but so far undelivered by Abenomics . . .

 . . . . In 2013, amid great fanfare, Prime Minister Shinzo Abe announced a “Cool Japan Promotion Fund” to raise the international profile of the country’s mass culture. The need for such a fund, currently set at about $1 billion, is itself an interesting reflection on how little faith policymakers seem to have in the economic clout of the nation’s artists. Questions also remain about the helpfulness of elderly politicians dabbling in the creative sector. [The Finance Minister of Japan is Taro Aso, one of the enthusiasts of Nazi political strategy highlighted below. The “Cool Japan Promotion Fund” would have been under his administration, with Tomomi Inada functioning as his administrator for the program. Inada is another advocate of Nazi political strategy.–D.E.] . . .

1c. Abe is turning back the Japanese historical and political clock. Japanese government officials are openly sanctioning anti-Korean racism and networking with organizations that promote that doctrine. Several members of Abe’s government network with Japanese neo-Nazis, some of whom advocate using the Nazi method for seizing power in Japan. Is Abe’s government doing just that?

 “For Top Pols In Japan Crime Doesn’t Pay, But Hate Crime Does” by Jake Adelstein and Angela Erika Kubo; The Daily Beast; 9/26/2014.

 . . . . According to the magazine “Sunday Mainichi,” Ms. Tomomi Inada, Minister Of The “Cool Japan” Strategy, also received donations from Masaki and other Zaitokukai associates.

Apparently, racism is cool in Japan.

Inada made news earlier this month after photos circulated of her and another female in the new cabinet posing with a neo-Nazi party leader. Both denied knowing the neo-Nazi well but later were revealed to have contributed blurbs for an advertisement praising the out-of-print book Hitler’s Election Strategy. Coincidentally, Vice-Prime Minister [and Finance Minister–D.E.], Taro Aso, is also a long-time admirer of Nazi political strategy, and has suggested Japan follow the Nazi Party template to sneak constitutional change past the public. . . .

. . . In August, Japan’s ruling party, which put Abe into power, organized a working group to discuss laws that would restrict hate crime, although the new laws will probably also be used to clamp down on anti-nuclear protests outside the Diet building.

Of course, it is a little worrisome that Sanae Takaichi, who was supposed to oversee the project, is the other female minister who was photographed with a neo-Nazi leader and is a fan of Hitler. . .

1d. Devotee of Hitler’s political strategy Tomomi Inada is now the defense minister of Japan.

“Japan’s PM Picks Hawkish Defense Minister for New Cabinet, Vows Economic Recovery” by Elaine Lies and Kiyoshi Takenaka; Reuters; 8/3/2016. 

Japanese Prime Minister Shinzo Abe appointed a conservative ally as defense minister in a cabinet reshuffle on Wednesday that left most key posts unchanged, and he promised to hasten the economy’s escape from deflation and boost regional ties.

New defense minister Tomomi Inada, previously the ruling party policy chief, shares Abe’s goal of revising the post-war, pacifist constitution, which some conservatives consider a humiliating symbol of Japan’s World War Two defeat.

She also regularly visits Tokyo’s Yasukuni Shrine for war dead, which China and South Korea see as a symbol of Japan’s past militarism. Japan’s ties with China and South Korea have been frayed by the legacy of its military aggression before and during World War Two. . . .

1e. The “Alt-Right” (read “Nazi”) movement is using Pokemon Go to recruit kids to the Nazi cause. Consider this against the background of Niantic, the Cool Japan strategy and the pro-Nazi figures involved with it. Consider this also, in conjunction with the Nazified AI developed and deployed by Robert and Rebekah Mercer, Steve Bannon, Cambridge Analytica and the “Alt-Right” milieu with which they associate.

“Alt-Right Recruiting Kids With ‘Pokémon Go Nazi Challenge’” by James King and Adi Cohen; Vocativ; 9/7/2016.

Alt-right neo-Nazis are targeting kids as young as 10 years old with Pikachu dressed as Hitler

The racist fringe of the now-mainstream alt-right movement is seizing on the popularity of Pokémon Go to recruit kids who congregate at “gyms” to play the mobile game, according to one of the group’s most outspoken leaders.

Andrew Anglin, the neo-Nazi wordsmith behind the alt-right Daily Stormer blog, posted a story on Tuesday about an “enterprising Stormer” (a follower of Anglin’s blog) who is finding Pokémon Go gyms, which serve as battle grounds for players, and distributing recruitment fliers to kids with the hope of “converting children and teens to HARDCORE NEO-NAZISM!”

“The Daily Stormer was designed to appeal to teenagers, but I have long thought that we needed to get pre-teens involved in the movement,” Anglin wrote in the blog post. “At that age, you can really brainwash someone easily. Anyone who accepts Nazism at the age of 10 or 11 is going to be a Nazi for life.” He added, “And it isn’t hard. It’s just a matter of pulling them in. And what better way to do it than with Pokémon fliers at the Pokémon GO gym???”

Anglin declined to identify the “stormer” behind the fliers by name, nor did he disclose where these fliers have been distributed—saying only that it is in an “American town.” Vocativ could not find any media or law enforcement reports of neo-Nazis handing out the fliers in any city. Nor could experts who monitor people like Anglin and groups like the alt-right.

The flier features run-of-the-mill neo-Nazi propaganda—it rails on Jews, African-Americans, and claims a “white genocide” is happening and white people need to stand up and prepare for the impending race war. The first step, the flier explains, is electing Donald Trump president. Step two is to “get active in the Nazi movement” because the “alt-right Nazis are the only ones who can save this country from the kikes.”

“Adolph Hitler was a great man,” the flier, under the title “Hey White Boy!” explains. “Just as you want to catch all the Pokemon, he hunted a different type of monster: Jews.”

The alt-right movement isn’t new but made national headlines last month when Hillary Clinton gave a scathing speech linking Trump to the oft-racist movement. Alt-righters generally fall into one of two categories: those who disguise their racism as “white nationalism” and don’t embrace the racist label in an effort to be taken seriously, and those—like Anglin and his followers—who wear their bigotry on their sleeves, as Vocativ has previously reported.

Clinton’s speech came just days after a shakeup in the Trump campaign led to the appointment of Stephen Bannon, the former head of the alt-right website Breitbart.com, as the CEO of the campaign. As Clinton mentioned in her speech, Breitbart.com is responsible for white nationalist propaganda like a story titled “Hoist It High And Proud: The Confederate Flag Proclaims A Glorious Heritage,” and a sexist rant with the headline, “Birth Control Makes Women Unattractive And Crazy.”

Anglin has created a PDF file of the flier so other “stormers” can print them out and distribute them at Pokémon Go gyms and even provided a map showing the locations of gyms across the country.

“These hotspots are packed,” he wrote. “No doubt, you’ll be able to hand-out a hundred in 30 minutes easy if you live in a decent-sized urban area. Get in and get out. Take a buddy with you.”

2. A recent New Yorker article by Jane Mayer concerning Robert Mercer keys some interesting thoughts about Mercer, Bannon, the Alt-Right, WikiLeaks and the Nazified AI we spoke of in FTR #’s 948 and 949. In FTR #946, we noted this concatenation’s central place in the Facebook constellation, a position that has enabled them to act decisively on the political landscape.

We note several things about the Mayer piece:

  • She writes of Mercer’s support for the Alt-Right–Mercer helps fund Bannon’s Breitbart:  “. . . . In February, David Magerman, a senior employee at Renaissance, spoke out about what he regards as Mercer’s worrisome influence. Magerman, a Democrat who is a strong supporter of Jewish causes, took particular issue with Mercer’s empowerment of the alt-right, which has included anti-Semitic and white-supremacist voices. . . .”
  • Mercer is racist, feeling that racism only exists in contemporary black culture: “. . . . Mercer, for his part, has argued that the Civil Rights Act, in 1964, was a major mistake. According to the onetime Renaissance employee, Mercer has asserted repeatedly that African-Americans were better off economically before the civil-rights movement. (Few scholars agree.) He has also said that the problem of racism in America is exaggerated. The source said that, not long ago, he heard Mercer proclaim that there are no white racists in America today, only black racists. . . .”
  • His work at IBM was funded in part by DARPA, strongly implying that the DOD has applied some of the Mercer technology: “. . . . Yet, when I.B.M. failed to offer adequate support for Mercer and Brown’s translation project, they secured additional funding from DARPA, the secretive Pentagon program. Despite Mercer’s disdain for “big government,” this funding was essential to his early success. . . .”
  • In a 2012 anti-Obama propaganda film funded by Citizens United, Steve Bannon borrowed from The Triumph of the Will: “. . . . Many of these [disillusioned Obama] voters became the central figures of “The Hope & the Change,” an anti-Obama film that Bannon and Citizens United released during the 2012 Democratic National Convention. After Caddell saw the film, he pointed out to Bannon that its opening imitated that of ‘Triumph of the Will,’ the 1935 ode to Hitler, made by the Nazi filmmaker Leni Riefenstahl. Bannon laughed and said, ‘You’re the only one that caught it!’ In both films, a plane flies over a blighted land, as ominous music swells; then clouds in the sky part, auguring a new era. . . .”

“The Reclusive Hedge-Fund Tycoon Behind the Trump Presidency” by Jane Mayer; The New Yorker; 3/27/2017.

. . . . In February, David Magerman, a senior employee at Renaissance, spoke out about what he regards as Mercer’s worrisome influence. Magerman, a Democrat who is a strong supporter of Jewish causes, took particular issue with Mercer’s empowerment of the alt-right, which has included anti-Semitic and white-supremacist voices. . . .

. . . . Mercer, for his part, has argued that the Civil Rights Act, in 1964, was a major mistake. According to the onetime Renaissance employee, Mercer has asserted repeatedly that African-Americans were better off economically before the civil-rights movement. (Few scholars agree.) He has also said that the problem of racism in America is exaggerated. The source said that, not long ago, he heard Mercer proclaim that there are no white racists in America today, only black racists. . . .

. . . . Yet, when I.B.M. failed to offer adequate support for Mercer and Brown’s translation project, they secured additional funding from DARPA, the secretive Pentagon program. Despite Mercer’s disdain for “big government,” this funding was essential to his early success. . . .

. . . . Many of these [disillusioned Obama] voters became the central figures of “The Hope & the Change,” an anti-Obama film that Bannon and Citizens United released during the 2012 Democratic National Convention. After Caddell saw the film, he pointed out to Bannon that its opening imitated that of “Triumph of the Will,” the 1935 ode to Hitler, made by the Nazi filmmaker Leni Riefenstahl. Bannon laughed and said, “You’re the only one that caught it!” In both films, a plane flies over a blighted land, as ominous music swells; then clouds in the sky part, auguring a new era. . . .

3a. We have explored Bitcoin in a number of programs–FTR #’s 760, 764, 770 and 785.

An important new book by David Golumbia sets forth the technocratic fascist politics underlying Bitcoin. Known to veteran listeners/readers as the author of an oft-quoted article dealing with technocratic fascism, Golumbia has published a short, important book about the right-wing extremism underlying Bitcoin. (Programs on technocratic fascism include: FTR #’s 851, 859, 866, 867.)

In the excerpt below, we see disturbing elements of resonance with the views of Stephen Bannon and some of the philosophical influences on him. Julius Evola, “Mencius Moldbug” and Bannon himself see our civilization as in decline, at a critical “turning point,” and in need of being “blown up” (as Evola put it) or needing a “shock to the system.”

The Politics of Bitcoin: Software as Right-Wing Extremism by David Golumbia; University of Minnesota Press [SC]; pp. 73-75.

. . . . As objects of discourse, Bitcoin and the blockchain do a remarkable job of reinforcing the view that the entire global history of political thought and action needs to be jettisoned, or, even worse, that it has already been jettisoned through the introduction of any number of technologies. Thus, in the introduction to a bizarrely earnest and destructive volume called From Bitcoin to Burning Man and Beyond (Clippinger and Bollier 2014), the editors, one of whom is a research scientist at MIT, write, “Enlightenment ideals of democratic rule seem to have run their course. A continuous flow of scientific findings are undermining many foundational claims about human rationality and perfectibility while exponential technological changes and exploding global demographics overwhelm the capacity of democratic institutions to rule effectively, and ultimately, their very legitimacy.” Such abrupt dismissals of hundreds of years of thought, work, and lives follows directly from cyberlibertarian thought and extremist reinterpretations of political institutions: “What once required the authority of a central bank or a sovereign authority can now be achieved through open, distributed crypto-algorithms. National borders, traditional legal regimes, and human intervention are increasingly moot.” Like most ideological formations, these sentiments are highly resistant to being proven false by facts. . . .

. . . . Few attitudes typify the paradoxical cyberlibertarian mind-set of Bitcoin promoters (and many others) more than do those of “Sanjuro,” the alias of the person who created a Bitcoin “assassination market” (Greenberg 2013). Sanjuro believes that by incentivizing people to kill politicians, he will destroy “all governments, everywhere.” This anarchic apocalypse “will change the world for the better,” producing “a world without wars, dragnet Panopticon-style surveillance, nuclear weapons, armies, repression, money manipulation, and limits to trade.” Only someone so blinkered by their ideological tunnel vision could look at world history and imagine that murdering the representatives of democratically elected governments and thus putting the governments themselves out of existence would do anything but make every one of these problems immeasurably worse than they already are. Yet this, in the end, is the extreme rightist–anarcho-capitalist, winner-take-all, even neo-feudalist–political vision too many of those in the Bitcoin (along with other cryptocurrency) and blockchain communities, whatever they believe their political orientation to be, are working actively to bring about. . . .

3b. Note that the Cypherpunk’s Manifesto (published by the Electronic Frontier Foundation) and the 1996 “Declaration of the Independence of Cyberspace” written by the libertarian activist, Grateful Dead lyricist, Electronic Frontier Foundation founder John Perry Barlow decry governmental regulation of the digital system. (EFF is a leading “digital rights” and technology industry advocacy organization.)

The Politics of Bitcoin: Software as Right-Wing Extremism by David Golumbia; University of Minnesota Press [SC]; pp. 31-32.

. . . . Among the clearest targets of these movements (see both the Cypherpunk’s Manifesto, Hughes 1993; and the closely related Crypto-Anarchist Manifesto, May 1992) has always specifically been governmental oversight of financial (and other) transactions. No effort is made to distinguish between legitimate and illegitimate use of governmental power; rather, all governmental power is inherently taken to be illegitimate. Further, despite occasional rhetorical nods toward corporate abuses, just as in Murray Rothbard’s work, strictly speaking no mechanisms whatsoever are posited that actually might constrain corporate power. Combined with either an explicit commitment toward, or at best an extreme naivete about, the operation of concentrated capital, this political theory works to deprive the people of their only proven mechanism for that constraint. This is why so august an antigovernment thinker as Noam Chomsky (2015) can have declared that libertarian theories, despite surface appearances, promote “corporate tyranny, meaning tyranny by unaccountable private concentrations of power, the worst kind of tyranny you can imagine.” . . .

3c. The libertarian/fascist ethic of the digital world was articulated by John Perry Barlow.

Note how the “freedom” advocated by John Perry Barlow et al. has played out: the Trump administration (implementing the desires of corporate America) has “deregulated” the internet. All this in the name of “freedom.”

The Politics of Bitcoin: Software as Right-Wing Extremism by David Golumbia; University of Minnesota Press [SC]; p. 3.

. . . . In its most basic and limited form, cyberlibertarianism is sometimes summarized as the principle that “governments should not regulate the internet” (Malcolm 2013). This belief was articulated with particular force in the 1996 “Declaration of the Independence of Cyberspace” written by the libertarian activist, Grateful Dead lyricist, Electronic Frontier Foundation founder (EFF is a leading “digital rights” and technology industry advocacy organization) John Perry Barlow, which declared that “governments of our industrial world” are “not welcome” in and “have no sovereignty” over the digital system. . . .

4a. In FTR #854, we noted the curious professional résumé of John Perry Barlow, containing such disparate elements as: lyricist for the Grateful Dead (“Far Out!”); Dick Cheney’s campaign manager (not so “Far Out!”); and a vote for white supremacist/segregationist George Wallace in the 1968 presidential campaign (very “Un-Far Out!”).

Barlow introduced the Grateful Dead to Timothy Leary, who was inextricably linked with the CIA. We discussed this at length in AFA #28.

AFA 28: The CIA, the Military & Drugs, Pt. 5
The CIA & LSD
Part 5a 46:15 | Part 5b 45:52 | Part 5c 42:56 | Part 5d 45:11 | Part 5e 11:25
(Recorded April 26, 1987)

” . . . . Timothy Leary’s early research into LSD was subsidized, to some extent, by the CIA. Later, Leary’s LSD proselytization was greatly aided by William Mellon Hitchcock, a member of the powerful Mellon family. The financing of the Mellon-Leary collaboration was effected through the Castle Bank, a Caribbean operation that was deeply involved in the laundering of CIA drug money.

After moving to the West Coast, Leary hooked up with a group of ex-surfers, the Brotherhood of Eternal Love. This group became the largest LSD synthesizing and distributing organization in the world. Their “chief chemist” was a curious individual named Ronald Hadley Stark. An enigmatic, multi-lingual and well-traveled individual, Stark worked for the CIA, and appears to have been with the agency when he was making the Brotherhood’s acid. The quality of his product projected the Brotherhood of Eternal Love into its leadership role in the LSD trade. Stark also operated in conjunction with the Italian intelligence/fascist milieu described in AFA #’s 17-21.

The broadcast underscores the possibility that LSD and other hallucinogens may have been disseminated, in part, in order to diffuse the progressive political activism of the 1960’s.

Program Highlights Include: CIA director Allen Dulles’ promotion of psychological research by the Agency; the work of CIA physician Dr. Sidney Gottlieb for the Agency’s Technical Services Division; connections between Stark and the kidnapping and assassination of Italian Prime Minister Aldo Moro; Stark’s mysterious death in prison while awaiting trial; Leary’s connections to the milieu of the “left” CIA and the role those connections appear to have played in Leary’s flight from incarceration; the CIA’s intense interest in (and involvement with) the Haight-Ashbury scene of the 1960s. . . . .”

For our purposes, his most noteworthy professional undertaking is his founding of the EFF–the Electronic Frontier Foundation. A leading ostensible advocate for internet freedom, the EFF has endorsed technology and embraced personnel inextricably linked with a CIA-derived milieu embodied in Radio Free Asia’s Open Technology Fund. (For those who are, understandably, surprised and/or skeptical, we discussed this at length and in detail in FTR #’s 891 and 895.)

Listener Tiffany Sunderson contributed an article in the “Comments” section that brings to the fore some interesting questions about Barlow, the CIA and the very genesis of social media.

We offer Ms. Sunderson’s observations, stressing that Barlow’s foreshadowing of the communication functions inherent in social media and his presence at CIA headquarters (by invitation!) suggest that Barlow not only has strong ties to CIA but may have been involved in the conceptual genesis that spawned CIA-connected entities such as Facebook:

“Fascinating article by John Perry Barlow, can’t believe I haven’t seen this before. From Forbes in 2002. Can’t accuse Barlow of hiding his intel ties, he’ll tell you all about it! To me, this is practically a historical document, as it hints at the thinking that inevitably led to In-Q-Tel, Geofeedia, Palantir, Facebook, etc. Including the whole article, but here are a few passages that jumped out at me.

http://www.forbes.com/asap/2002/1007/042_print.html

This part cracks me up: it’s “mystical superstition” to imagine that wires leaving a building are also wires ENTERING a building? Seriously? For a guy who never shuts up about networking, he should get that there is nothing “mystical” about such a notion. It’s exactly how attackers get in. If you are connected to the internet, you are not truly secure. Period.

“All of their primitive networks had an ‘air wall,’ or physical separation, from the Internet. They admitted that it might be even more dangerous to security to remain abstracted from the wealth of information that had already assembled itself there, but they had an almost mystical superstition that wires leaving the agency would also be wires entering it, a veritable superhighway for invading cyberspooks. ”

Here, JPB brags about his connections and who he brought back to CIA. I’ve always had spooky feelings about Cerf, Dyson, and Kapor. Don’t know Rutkowski. But the other three are serious players, and Cerf and Kapor are heavily involved with EFF. You know, because the EFF is all about standing up for the little guy.

“They told me they’d brought Steve Jobs in a few weeks before to indoctrinate them in modern information management. And they were delighted when I returned later, bringing with me a platoon of Internet gurus, including Esther Dyson, Mitch Kapor, Tony Rutkowski, and Vint Cerf. They sealed us into an electronically impenetrable room to discuss the radical possibility that a good first step in lifting their blackout would be for the CIA to put up a Web site”

This next part SCREAMS of intel’s ties to the “social media explosion.” I think this passage is what qualifies Barlow’s article as a historical doc of some value.

“Let’s create a process of information digestion in which inexpensive data are gathered from largely open sources and condensed, through an open process, into knowledge terse and insightful enough to inspire wisdom in our leaders.

The entity I envision would be small, highly networked, and generally visible. It would be open to information from all available sources and would classify only information that arrived classified. It would rely heavily on the Internet, public media, the academic press, and an informal worldwide network of volunteers–a kind of global Neighborhood Watch–that would submit on-the-ground reports.

It would use off-the-shelf technology, and use it less for gathering data than for collating and communicating them. Being off-the-shelf, it could deploy tools while they were still state-of-the-art.

I imagine this entity staffed initially with librarians, journalists, linguists, scientists, technologists, philosophers, sociologists, cultural historians, theologians, economists, philosophers, and artists–a lot like the original CIA, the OSS, under “Wild Bill” Donovan. Its budget would be under the direct authority of the President, acting through the National Security Adviser. Congressional oversight would reside in the committees on science and technology (and not under the congressional Joint Committee on Intelligence).”

http://www.forbes.com/asap/2002/1007/042_2.html

“Why Spy?” by John Perry Barlow; Forbes; 10/07/02.

If the spooks can’t analyze their own data, why call it intelligence?

For more than a year now, there has been a deluge of stories and op-ed pieces about the failure of the American intelligence community to detect or prevent the September 11, 2001, massacre.

Nearly all of these accounts have expressed astonishment at the apparent incompetence of America’s watchdogs.

I’m astonished that anyone’s astonished.

The visual impairment of our multitudinous spookhouses has long been the least secret of their secrets. Their shortcomings go back 50 years, when they were still presumably efficient but somehow failed to detect several million Chinese military “volunteers” heading south into Korea. The surprise attacks on the World Trade Center and the Pentagon were only the most recent oversight disasters. And for service like this we are paying between $30 billion and $50 billion a year. Talk about a faith-based initiative.

After a decade of both fighting with and consulting to the intelligence community, I’ve concluded that the American intelligence system is broken beyond repair, self-protective beyond reform, and permanently fixated on a world that no longer exists.

I was introduced to this world by a former spy named Robert Steele, who called me in the fall of 1992 and asked me to speak at a Washington conference that would be “attended primarily by intelligence professionals.” Steele seemed interesting, if unsettling. A former Marine intelligence officer, Steele moved to the CIA and served three overseas tours in clandestine intelligence, at least one of them “in a combat environment” in Central America.

After nearly two decades of service in the shadows, Steele emerged with a lust for light and a belief in what he calls, in characteristic spook-speak, OSINT, or open source intelligence. Open source intelligence is assembled from what is publicly available, in media, public documents, the Net, wherever. It’s a given that such materials–and the technological tools for analyzing them–are growing exponentially these days. But while OSINT may be a timely notion, it’s not popular in a culture where the phrase “information is power” means something brutally concrete and where sources are “owned.”

At that time, intelligence was awakening to the Internet, the ultimate open source. Steele’s conference was attended by about 600 members of the American and European intelligence establishment, including many of its senior leaders. For someone whose major claim to fame was hippie song-mongering, addressing such an audience made me feel as if I’d suddenly become a character in a Thomas Pynchon novel.

Nonetheless, I sallied forth, confidently telling the gray throng that power lay not in concealing information but in distributing it, that the Internet would endow small groups of zealots with the capacity to wage credible assaults on nation-states, that young hackers could easily run circles around old spies.

I didn’t expect a warm reception, but it wasn’t as if I was interviewing for a job.

Or so I thought. When I came offstage, a group of calm, alert men awaited. They seemed eager, in their undemonstrative way, to pursue these issues further. Among them was Paul Wallner, the CIA’s open source coordinator. Wallner wanted to know if I would be willing to drop by, have a look around, and discuss my ideas with a few folks.

A few weeks later, in early 1993, I passed through the gates of the CIA headquarters in Langley, Virginia, and entered a chilled silence, a zone of paralytic paranoia and obsessive secrecy, and a technological time capsule straight out of the early ’60s. The Cold War was officially over, but it seemed the news had yet to penetrate where I now found myself.

If, in 1993, you wanted to see the Soviet Union still alive and well, you’d go to Langley, where it was preserved in the methods, assumptions, and architecture of the CIA.

Where I expected to see computers, there were teletype machines. At the nerve core of The Company, five analysts sat around a large, wooden lazy Susan. Beside each of them was a teletype, chattering in uppercase. Whenever a message came in to, say, the Eastern Europe analyst that might be of interest to the one watching events in Latin America, he’d rip it out of the machine, put it on the turntable, and rotate it to the appropriate quadrant.

The most distressing discovery of my first expedition was the nearly universal frustration of employees at the intransigence of the beast they inhabited. They felt forced into incompetence by information hoarding and noncommunication, both within the CIA and with other related agencies. They hated their primitive technology. They felt unappreciated, oppressed, demoralized. “Somehow, over the last 35 years, there was an information revolution,” one of them said bleakly, “and we missed it.”

They were cut off. But at least they were trying. They told me they’d brought Steve Jobs in a few weeks before to indoctrinate them in modern information management. And they were delighted when I returned later, bringing with me a platoon of Internet gurus, including Esther Dyson, Mitch Kapor, Tony Rutkowski, and Vint Cerf. They sealed us into an electronically impenetrable room to discuss the radical possibility that a good first step in lifting their blackout would be for the CIA to put up a Web site.

They didn’t see how this would be possible without compromising their security. All of their primitive networks had an “air wall,” or physical separation, from the Internet. They admitted that it might be even more dangerous to security to remain abstracted from the wealth of information that had already assembled itself there, but they had an almost mystical superstition that wires leaving the agency would also be wires entering it, a veritable superhighway for invading cyberspooks.

We explained to them how easy it would be to have two networks, one connected to the Internet for gathering information from open sources and a separate intranet, one that would remain dedicated to classified data. We told them that information exchange was a barter system, and that to receive, one must also be willing to share. This was an alien notion to them. They weren’t even willing to share information among themselves, much less the world.

In the end, they acquiesced. They put up a Web site, and I started to get email from people @cia.gov, indicating that the Internet had made it to Langley. But the cultural terror of releasing anything of value remains. Go to their Web site today and you will find a lot of press releases, as well as descriptions of maps and publications that you can acquire only by buying them in paper. The unofficial al Qaeda Web site, http://www.almuhajiroun.com, is considerably more revealing.

This dogma of secrecy is probably the most persistently damaging fallout from “the Soviet factor” at the CIA and elsewhere in the intelligence “community.” Our spooks stared so long at what Churchill called “a mystery surrounded by a riddle wrapped in an enigma,” they became one themselves. They continue to be one, despite the evaporation of their old adversary, as well as a long series of efforts by elected authorities to loosen the white-knuckled grip on their secrets.

The most recent of these was the 1997 Commission on Protecting and Reducing Government Secrecy, led by Senator Patrick Moynihan. The Moynihan Commission released a withering report charging intelligence agencies with excessive classification and citing a long list of adverse consequences ranging from public distrust to concealed (and therefore irremediable) organizational failures.

That same year, Moynihan proposed a bill called the Government Secrecy Reform Act. Cosponsored by conservative Republicans Jesse Helms and Trent Lott, among others, this legislation was hardly out to gut American intelligence. But the spooks fought back effectively through the Clinton Administration and so weakened the bill that one of its cosponsors, Congressman Lee Hamilton (D-Ind.), concluded that it would be better not to pass what remained.

A few of its recommendations eventually were wrapped into the Intelligence Authorization Act of 2000. But of these, the only one with any operational force–a requirement that a public-interest declassification board be established to advise the Administration in these matters–has never been implemented. Thanks to the vigorous interventions of the Clinton White House, the cult of secrecy remained unmolested.

One might be surprised to learn that Clintonians were so pro-secrecy. In fact, they weren’t. But they lacked the force to dominate their wily subordinates. Indeed, in 1994, one highly placed White House staffer told me that their incomprehensible crypto policies arose from being “afraid of the NSA.”

In May 2000, I began to understand what they were up against. I was invited to speak to the Intelligence Community Collaboration Conference (a title that contained at least four ironies). The other primary speaker was Air Force Lt. General Mike Hayden, the newly appointed director of the NSA. He said he felt powerless, though he was determined not to remain that way.

“I had been on the job for a while before I realized that I have no staff,” he complained. “Everything the agency does had been pushed down into the components…it’s all being managed several levels below me.” In other words, the NSA had developed an immune system against external intervention.

Hayden recognized how excessive secrecy had damaged intelligence, and he was determined to fix it. “We were America’s information age enterprise in the industrial age. Now we have to do that same task in the information age, and we find ourselves less adept,” he said.

He also vowed to diminish the CIA’s competitiveness with other agencies. (This is a problem that remains severe, even though it was first identified by the Hoover Commission in 1949.) Hayden decried “the stovepipe mentality” where information is passed vertically through many bureaucratic layers but rarely passes horizontally. “We are riddled with watertight information compartments,” he said. “At the massive agency level, if I had to ask, ‘Do we need blue gizmos?’ the only person I could ask was the person whose job security depended on there being more blue gizmos.”

Like the CIA I encountered, Hayden’s NSA was also a lot like the Soviet Union: secretive unto itself, sullen, and grossly inefficient. The NSA was also, by his account, as technologically maladroit as its rival in Langley. Hayden wondered, for example, why the director of what was supposedly one of the most sophisticated agencies in the world would have four phones on his desk. Direct electronic contact between him and the consumers of his information–namely the President and National Security staff–was virtually nil. There were, he said, thousands of unlinked, internally generated operating systems inside the NSA, incapable of exchanging information with one another.

Hayden recognized the importance of getting over the Cold War. “Our targets are no longer controlled by the technological limitations of the Soviet Union, a slow, primitive, underfunded foe. Now [our enemies] have access to state-of-the-art….In 40 years the world went from 5,000 stand-alone computers, many of which we owned, to 420 million computers, many of which are better than ours.”

But there wasn’t much evidence that it was going to happen anytime soon. While Hayden spoke, the 200 or so high-ranking intelligence officials in the audience sat with their arms folded defensively across their chests. When I got up to essentially sing the same song in a different key, I asked them, as a favor, not to assume that posture while I was speaking. I then watched a Strangelovian spectacle when, during my talk, many arms crept up to cross involuntarily and were thrust back down to their sides by force of embarrassed will.

That said, I draw a clear distinction between the institutions of intelligence and the folks who staff them.

All of the actual people I’ve encountered in intelligence are, in fact, intelligent. They are dedicated and thoughtful. How then, can the institutional sum add up to so much less than the parts? Because another, much larger, combination of factors is also at work: bureaucracy and secrecy.

Bureaucracies naturally use secrecy to immunize themselves against hostile investigation, from without or within. This tendency becomes an autoimmune disorder when the bureaucracy is actually designed to be secretive and is wholly focused on other, similar institutions. The counterproductive information hoarding, the technological backwardness, the unaccountability, the moral laxity, the suspicion of public information, the arrogance, the xenophobia (and resulting lack of cultural and linguistic sophistication), the risk aversion, the recruiting homogeneity, the inward-directedness, the preference for data acquisition over information dissemination, and the uselessness of what is disseminated–all are the natural, and now fully mature, whelps of bureaucracy and secrecy.

Not surprisingly, people who work there believe that job security and power are defined by the amount of information one can stop from moving. You become more powerful based on your capacity to know things that no one else does. The same applies, in concentric circles of self-protection, to one’s team, department, section, and agency. How can data be digested into useful information in a system like that?

How can we expect the CIA and FBI to share information with each other when they’re disinclined to share it within their own organizations? The resulting differences cut deep. One of the revelations of the House Report on Counterterrorism Intelligence Capabilities and Performance Prior to September 11 was that none of the responsible agencies even shared the same definition of terrorism. It’s hard to find something when you can’t agree on what you’re looking for.

The information they do divulge is also flawed in a variety of ways. The “consumers” (as they generally call policymakers) are unable to determine the reliability of what they’re getting because the sources are concealed. Much of what they get is too undigested and voluminous to be useful to someone already suffering from information overload. And it comes with strings attached. As one general put it, “I don’t want information that requires three security officers and a safe to move it around the battlefield.”

As a result, the consumers are increasingly more inclined to get their information from public sources. Secretary of State Colin Powell says that he prefers “the Early Bird,” a compendium of daily newspaper stories, to the President’s Daily Brief (the CIA’s ultimate product).

The same is apparently true within the agencies themselves. Although their finished products rarely make explicit use of what’s been gleaned from the media, analysts routinely turn there for information. On the day I first visited the CIA’s “mission control” room, the analysts around the lazy Susan often turned their attention to the giant video monitors overhead. Four of these were showing the same CNN feed.

Secrecy also breeds technological stagnation. In the early ’90s, I was speaking to personnel from the Department of Energy nuclear labs about computer security. I told them I thought their emphasis on classification might be unnecessary because making a weapon was less a matter of information than of industrial capacity. The recipe for a nuclear bomb has been generally available since 1978, when John Aristotle Phillips published plans in The Progressive. What’s not so readily available is the plutonium and tritium, which require an entire nation to produce. Given that, I couldn’t see why they were being so secretive.

The next speaker was Dr. Edward Teller, who surprised me by not only agreeing but pointing out both the role of open discourse in scientific progress, as well as the futility of most information security. “If we made an important nuclear discovery, the Russians were usually able to get it within a year,” he said. He went on: “After World War II we were ahead of the Soviets in nuclear technology and about even with them in electronics. We maintained a closed system for nuclear design while designing electronics in the open. Their systems were closed in both regards. After 40 years, we are at parity in nuclear science, whereas, thanks to our open system in the study of electronics, we are decades ahead of the Russians.”

There is also the sticky matter of budgetary accountability. The director of Central Intelligence (DCI) is supposed to be in charge of all the functions of intelligence. In fact, he has control over less than 15% of the total budget, directing only the CIA. Several of the different intelligence-reform commissions that have been convened since 1949 have called for consolidating budgetary authority under the DCI, but it has never happened.

With such hazy oversight, the intelligence agencies naturally become wasteful and redundant. They spent their money on toys like satellite-imaging systems and big-iron computers (often obsolete by the time they’re deployed) rather than developing the organizational capacity for analyzing all those snapshots from space, or training analysts in languages other than English and Russian, or infiltrating potentially dangerous groups, or investing in the resources necessary for good HUMINT (as they poetically call information gathered by humans operating on the ground).

In fact, fewer than 10% of the millions of satellite photographs taken have ever been seen by anybody. Only one-third of the employees at the CIA speak any language besides English. Even if they do, it’s generally either Russian or some common European language. Of what use are the NSA’s humongous code-breaking computers if no one can read the plain text extracted from the encrypted stream?

Another systemic deficit of intelligence lies, interestingly enough, in the area of good old-fashioned spying. Although its intentions were noble, the ’70s Church Committee had a devastating effect on this necessary part of intelligence work. It caught the CIA in a number of dubious covert operations and took the guilty to task.

But rather than listen to the committee’s essential message that they should renounce the sorts of nefarious deeds the public would repudiate and limit secrecy to essential security considerations, the leadership responded by pulling most of its agents out of the field, aside from a few hired traitors.

Despite all the efforts aimed at sharpening their tools, intelligence officials have only become progressively duller and more expensive. We enter an era of asymmetrical threats, distributed over the entire globe, against which our most effective weapon is understanding. Yet we are still protected by agencies geared to gazing on a single, centralized threat, using methods that optimize obfuscation. What is to be done?

We might begin by asking what intelligence should do. The answer is simple: Intelligence exists to provide decision makers with an accurate, comprehensive, and unbiased understanding of what’s going on in the world. In other words, intelligence defines reality for those whose actions could alter it. “Given our basic mission,” one analyst said wearily, “we’d do better to study epistemology than missile emplacements.”

If we are serious about defining reality, we might look at the system that defines reality for most of us: scientific discourse. The scientific method is straightforward. Theories are openly advanced for examination and trial by others in the field. Scientists toil to create systems to make all the information available to one immediately available to all. They don’t like secrets. They base their reputations on their ability to distribute their conclusions rather than the ability to conceal them. They recognize that “truth” is based on the widest possible consensus of perceptions. They are committed free marketeers in the commerce of thought. This method has worked fabulously well for 500 years. It might be worth a try in the field of intelligence.

Intelligence has been focused on gathering information from expensive closed sources, such as satellites and clandestine agents. Let’s attempt to turn that proposition around. Let’s create a process of information digestion in which inexpensive data are gathered from largely open sources and condensed, through an open process, into knowledge terse and insightful enough to inspire wisdom in our leaders.

The entity I envision would be small, highly networked, and generally visible. It would be open to information from all available sources and would classify only information that arrived classified. It would rely heavily on the Internet, public media, the academic press, and an informal worldwide network of volunteers–a kind of global Neighborhood Watch–that would submit on-the-ground reports.

It would use off-the-shelf technology, and use it less for gathering data than for collating and communicating them. Being off-the-shelf, it could deploy tools while they were still state-of-the-art.

I imagine this entity staffed initially with librarians, journalists, linguists, scientists, technologists, philosophers, sociologists, cultural historians, theologians, economists, philosophers, and artists–a lot like the original CIA, the OSS, under “Wild Bill” Donovan. Its budget would be under the direct authority of the President, acting through the National Security Adviser. Congressional oversight would reside in the committees on science and technology (and not under the congressional Joint Committee on Intelligence).

There are, of course, problems with this proposal. First, it does not address the pressing need to reestablish clandestine human intelligence. Perhaps this new Open Intelligence Office (OIO) could also work closely with a Clandestine Intelligence Bureau, also separate from the traditional agencies, to direct infiltrators and moles who would report their observations to the OIO through a technological membrane that would strip their identities from their findings. The operatives would be legally restricted to gathering information, with harsh penalties attached to any engagement in covert operations.

The other problem is the “Saturn” dilemma. Once this new entity begins to demonstrate its effectiveness in providing insight to policymakers that is concise, timely, and accurate (as I believe it would), almost certainly traditional agencies would try to haul it back into the mother ship and break it (as has happened to the Saturn division at General Motors). I don’t know how to deal with that one. It’s the nature of bureaucracies to crush competition. No one at the CIA would be happy to hear that the only thing the President and cabinet read every morning is the OIO report.

But I think we can deal with that problem when we’re lucky enough to have it. Knowing that it’s likely to occur may be sufficient. A more immediate problem would be keeping existing agencies from aborting the OIO as soon as someone with the power to create it started thinking it might be a good idea. And, of course, there’s also the unlikelihood that anyone who thinks that the Department of Homeland Security is a good idea would ever entertain such a possibility.

Right now, we have to do something, and preferably something useful. The U.S. has just taken its worst hit from the outside since 1941. Our existing systems for understanding the world are designed to understand a world that no longer exists. It’s time to try something that’s the right kind of crazy. It’s time to end the more traditional insanity of endlessly repeating the same futile efforts.

John Perry Barlow is cofounder of the Electronic Frontier Foundation.

4b. We note–again–that WikiLeaks is, and always was, an obviously fascist/Nazi institution. (In FTR #’s 724, 725, 732, 745, 755 and 917 we have detailed the fascist and far right-wing ideology, associations and politics of Julian Assange and WikiLeaks.)

“Inside the Paranoid, Strange World of Julian Assange” by James Ball; BuzzFeed; 10/23/2016.

. . . . Spending those few months at such close proximity to Assange and his confidants, and experiencing first-hand the pressures exerted on those there, have given me a particular insight into how WikiLeaks has become what it is today.

To an outsider, the WikiLeaks of 2016 looks totally unrelated to the WikiLeaks of 2010. . . .

Now it is the darling of the alt-right, revealing hacked emails seemingly to influence a presidential contest, claiming the US election is “rigged”, and descending into conspiracy. Just this week on Twitter, it described the deaths by natural causes of two of its supporters as a “bloody year for WikiLeaks”, and warned of media outlets “controlled by” members of the Rothschild family – a common anti-Semitic trope. . .

5a. In FTR #951, we observed that Richard B. Spencer, one of Trump’s Nazi backers, has begun a website with Swedish Alt-Righter Daniel Friberg, part of the Swedish fascist milieu to which Carl Lundstrom belongs. In FTR #732 (among other programs), we noted that it was Lundstrom who financed the Pirate Bay website, on which WikiLeaks held forth for quite some time. In FTR #745, we documented that top Assange aide and Holocaust-denier Joran Jermas (aka “Israel Shamir”) arranged the Lundstrom/WikiLeaks liaison. (Jermas handles WikiLeaks Russian operations, a point of interest in the wake of the 2016 campaign.)

It is a good bet that Lundstrom/Pirate Bay/WikiLeaks et al were data mining the many people who visited the WikiLeaks site.

Might Lundstrom/Jermas/Assange et al have shared the voluminous data they may well have mined with Mercer/Cambridge Analytica/Bannon’s Nazified AI?

“Richard Spencer and His Alt-Right Buddies Launch a New Website” by Osita Nwavenu; Slate; 1/17/2017.

On Monday, Richard Spencer, New Jersey Institute of Technology lecturer Jason Jorjani, and Swedish New Right figure Daniel Friberg launched altright.com, a site aimed at bringing together “the best writers and analysts from Alt Right, in North America, Europe, and around the world.” . . .

. . . . As of now, most of the site’s content is recycled material from Friberg’s Arktos publishing house, Spencer’s other publication, Radix Journal, the alt-right online media network Red Ice, and Occidental Dissent, a white nationalist blog run by altright.com’s news editor Hunter Wallace. . . .

. . . . Still, Spencer’s intellectualism does little to hide the centrality of bigotry to his own worldview and the views of those he publishes. His previous site, Alternative Right, once ran an essay called “Is Black Genocide Right?” “Instead of asking how we can make reparations for slavery, colonialism, and Apartheid or how we can equalize academic scores and incomes,” Colin Liddell wrote, “we should instead be asking questions like, ‘Does human civilization actually need the Black race?’ ‘Is Black genocide right?’ and, if it is, ‘What would be the best and easiest way to dispose of them?’” It remains to be seen whether altright.com will employ similarly candid writers. . . .

5b. Pirate Bay sugar daddy Lundstrom has discussed his political sympathies. [The excerpt below is from Google translations. The Swedish sentence is followed by the English translation.] Note that he appears on the user/subscriber list for Nordic Publishers, the Nazi publishing outfit that handles the efforts produced by one of Jermas’s [aka “Shamir’s”] publishers.

“The Goal: Take over all Piracy” by Peter Karlsson; realtid.se; 3/10/2006.

. . . Lundström har inte gjort någon hemlighet av sina sympatier för främlingsfientliga grupper, och förra året fanns hans namn med på kundregistret hos det nazistiska bokförlaget Nordiska Förlaget. Lundström has made no secret of his sympathies for xenophobic groups, and last year his name appeared on the customer register of the Nazi publishing house Nordiska Förlaget [Nordic Publishers].

– Jag stöder dem genom att köpa böcker och musik. – I support them by buying books and music. Ni i media vill bara sprida missaktning om olika personer. You in the media just want to spread contempt for certain people. Ni i media är fyllda av hat till Pirate Bay, avslutar en mycket upprörd Carl Lundström. You in the media are full of hatred for The Pirate Bay, concludes a very upset Carl Lundström.

Nordiska Förlaget säljer vit makt musik och böcker som hyllar rasistiska våldshandlingar. Nordiska Förlaget sells white power music and books that celebrate racist violence. Förlaget stöder nazisternas demonstration i Salem och bjöd in Ku Klux Klan ledaren till en föredragturné i Sverige. The publisher supports the Nazi demonstration in Salem and invited the Ku Klux Klan leader [David Duke] for a lecture tour in Sweden. . . .

6c. Expo–founded by the late Stieg Larsson–revealed that Friberg’s Nordic Publishers has morphed into Arktos, one of the outfits associated with Spencer, et al.

“Right Wing Public Education” by Maria-Pia Cabero [Google Translation]; Expo; January of 2014.

. . . . When NF [Nordiska Forlaget–D.E.] was discontinued in 2010, the publishing house Arktos was founded by basically the same people. Arktos publishes New Right-inspired literature, and its CEO Daniel Friberg, who was a driving force in NF, has played a key role in establishing its ideas. . . .

7. At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating power to Big Data-driven AIs, we could realize a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . . ‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

We reiterate, in closing, that ” . . . . Palantir is building an intelligence system to assist Donald Trump in deporting immigrants. . . .”

In FTR #757 we noted that Palantir is a firm dominated by Peter Thiel, a main backer of Donald Trump.

“Artificial Intelligence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017.

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. She described a controversial piece of research from Shanghai Jiao Tong University in China, where authors claimed to have developed a system that could predict criminality based on someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias.

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.”

Crawford then outlined the “nasty history” of people using facial features to “justify the unjustifiable”. The principles of phrenology, a pseudoscience that developed across Europe and the US in the 19th century, were used as part of the justification of both slavery and the Nazi persecution of Jews.

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.

Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid. [We note in passing that Robert Mercer, who developed the core programs used by Cambridge Analytica did so while working for IBM. We discussed the profound relationship between IBM and the Third Reich in FTR #279–D.E.]

Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,” Crawford said, mentioning research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013, since when AI has made huge leaps.
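The Cambridge “likes” result rests on ordinary supervised learning: treat each liked page as a feature and fit a classifier to labeled users (the 2013 study reportedly used regression models over a dimensionally reduced likes matrix). The sketch below is a deliberately crude count-based version of that idea; every page name and label in it is invented for illustration.

```python
from collections import Counter, defaultdict

# Invented training data: each user is a set of liked pages plus a group label.
train = [
    ({"pages_x1", "pages_x2"}, "X"),
    ({"pages_x1", "pages_x3"}, "X"),
    ({"pages_y1", "pages_y2"}, "Y"),
    ({"pages_y1", "pages_x3"}, "Y"),
]

def like_counts(data):
    """Count how often each liked page co-occurs with each label."""
    counts = defaultdict(Counter)
    for likes, label in data:
        for page in likes:
            counts[page][label] += 1
    return counts

def predict(likes, counts, labels=("X", "Y")):
    """Score each label by summing per-page evidence (a crude naive-Bayes-like vote)."""
    scores = {lab: sum(counts[p][lab] for p in likes if p in counts) for lab in labels}
    return max(scores, key=scores.get)

counts = like_counts(train)
print(predict({"pages_x1", "pages_x2"}, counts))  # → "X"
```

With enough users and pages, exactly this kind of aggregation is what turns a pile of individually innocuous “likes” into a de facto registry.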

Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas. Earlier this year the justice department concluded that Chicago’s police had for years regularly used “unlawful force”, and that black and Hispanic neighborhoods were most affected.

Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.

“This is a fascist’s dream,” she said. “Power without accountability.”

Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.

“It’s the most powerful engine of mass deportation this country has ever seen,” she said. . . .

 

Discussion

7 comments for “FTR #952 Be Afraid, Be VERY Afraid: Update on Technocratic Fascism”

  1. Here’s a twist to the GOP’s decision to give internet service providers the right to sell their customers’ browsing data that might actually give the Alt Right and other neo-Nazis a very big reason to be very pissed at Trump and the GOP: If you’re a neo-Nazi fan of websites like Stormfront, the fact that you visit these sites was going to be between you and Stormfront. And your ISP. And any other entities, government or otherwise, that might be watching the traffic to those sites. And if there are any advertisers on those sites, they might also be able to track your Stormfront viewing habits. Plus, if you used a search engine to get to the site, then your search engine provider would presumably know. But that would mostly be it. Now, thanks to the GOP, the whole giant data harvesting industry that exists to build profiles on everyone is going to get to learn about your Stormfront viewing habits too! Or maybe your love of 4chan. Or all those horrible Reddit forums you visit and post to. All of that will be even more available to add to the various Big Data profiles of you floating around in the data harvesting industry. And it’s all thanks to the GOP and Donald Trump:

    Spin

    The Internet’s Anonymous Nazis Have Realized They Played Themselves Now That Trump Plans to Kill Internet Privacy

    Andy Cush // March 29, 2017

    Yesterday, the Republican-controlled House of Representatives passed a bill that will kill the Federal Communications Commission’s rules preventing internet service providers from selling users’ browsing data to advertisers and other third parties. Donald Trump’s White House has indicated that it intends to sign the bill into law, a move that will drastically roll back the privacy rights of ordinary people on the internet. If the armies of anonymous right-wing trolls who helped propel the president to victory aren’t second-guessing their Trump support right about now, they really should be.

    Last year, when the FCC was still under Democratic control, it passed rules that would require ISPs like Comcast and Verizon to obtain their customers’ explicit permission before sharing information like browser history and geographical data with other companies, a move that was hailed as a major step forward for online privacy at the time. The new law will eliminate those rules, reopening the path for providers to sell data about your porn viewing habits or the time you spend reading disreputable music publications to advertisers hoping to build detailed profiles of their potential customers.

    During the presidential campaign, anonymous internet forums like Reddit, 4chan and 8chan, and the dingiest corners of Twitter emerged as major hubs for Trump supporters. “I’m fucking trembling out of excitement brahs,” one user of 4chan’s nihilistic right-wing politics board /pol/ proclaimed after Trump’s victory. “We actually elected a meme as president.”

    The only thing chan trolls care about as much as ethics in gaming journalism and securing a future for white children is access to an internet that’s unfettered by snooping corporate interlopers. The entire identity of a Trump-loving 4chan poster is wrapped up in the anonymity of the internet, and the ability to do whatever you want online without consequences, whether it’s telling elaborate inside jokes on message boards or harassing liberal journalists on Twitter. The elimination of the FCC’s Obama-era privacy protections is a major blow to the idea of a free and open internet.

    The trolls probably should have seen this coming when Trump selected the outspoken net neutrality opponent Ajit Pai as FCC commissioner, who voted against the privacy rules when he was a humble commission member. Still, they’ve spent the 12 hours since the news of the rollback broke in a state of apoplectic anxiety. The current top post on Reddit’s popular The_Donald board reads “Let’s discuss this ISP privacy bill.” All three of the top comments are Trump supporters who are against it. “I think there needs to be a new bill that introduces an all encompassing privacy protection,” a Redditor named StirlingG wrote. “I don’t like ISPs or websites selling my shit, and I don’t see any major positives to this.”

    “Keep note: HUGE amounts of ‘former’ donald trump supporters shitposting,” a user named bloodfist45 observed, further down the post. “I believe they’ll only become former if Trump begins to vote on stuff like this,” another Redditor responded. “This bill is anti American. Data collection on massive levels is anti-american.”

    The internet’s other Trump hubs are no happier. “Where were you when Trump sided with corporations and government over the American people?” asks a post on /pol/ from this morning. “This is one of very few Obama-era regulations that should have stayed,” reads the top comment on Breitbart’s news story about the bill, which characterizes the privacy rules as “big government” overreach. “This is an attack against freedom,” another commenter responded. “I want to make my own decisions about my life, not others, and that includes my private info as well. this is total bs, the house giving in to the business “establishment”, Pres Trump should stop this!!!!!”

    Incidentally, the episode is a useful cautionary tale for impressionable young Trump supporters about the Republican Party’s conflation of free-market corporatism and individual liberty in general. When you deregulate industries, it’s not the common man who enjoys new freedoms. It’s the people and organizations who already have lots of money and power–in this case, the ISPs. And when those people in power are given an opportunity to further exploit the common people who rely on them for essential services in exchange for a little more money, they’ll always take it. Take note, Twitter eggs and Reddit Pepes: it’s as true of healthcare and finance as it is of the internet.

    “The only thing chan trolls care about as much as ethics in gaming journalism and securing a future for white children is access to an internet that’s unfettered by snooping corporate interlopers. The entire identity of a Trump-loving 4chan poster is wrapped up in the anonymity of the internet, and the ability to do whatever you want online without consequences, whether it’s telling elaborate inside jokes on message boards or harassing liberal journalists on Twitter. The elimination of the FCC’s Obama-era privacy protections is a major blow to the idea of a free and open internet.”

    And just to be clear, Donald Trump did actually sign this into law, so it’s a done deal. And, of course, par for the course for the GOP:


    Incidentally, the episode is a useful cautionary tale for impressionable young Trump supporters about the Republican Party’s conflation of free-market corporatism and individual liberty in general. When you deregulate industries, it’s not the common man who enjoys new freedoms. It’s the people and organizations who already have lots of money and power–in this case, the ISPs. And when those people in power are given an opportunity to further exploit the common people who rely on them for essential services in exchange for a little more money, they’ll always take it. Take note, Twitter eggs and Reddit Pepes: it’s as true of healthcare and finance as it is of the internet.

    The Alt Right wasn’t just betrayed by Trump and the GOP but by their own anti-government Libertarian ideology. Yeah, it’s definitely a ‘sad Pepe’ moment.

    Posted by Pterrafractyl | April 10, 2017, 7:22 pm
  2. Considering what Hitler and the Nazis did with a … I forget … 80 character? punch card, what is out there today is monstrous.

    The problem is, does anyone have any way to conceive this, frame it, or manage it so that there can be a consensus reached about laws and rights?

    One thing I’ve wondered, is if we have all of this surveillance ability, why is there still organized crime? The Mafia should be gone, eviscerated, unless they own the surveillance machine or are in on it in some way?

    Posted by Brux | April 10, 2017, 10:01 pm
  3. Remember how Tay, Microsoft’s AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets? Well, according to some recent research, the AIs of the future might not need a bunch of 4chan trolls to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically:

    The Guardian

    AI programs exhibit racial and gender biases, research reveals

    Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say

    Hannah Devlin
    Science correspondent

    Thursday 13 April 2017 14.00 EDT

    An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

    The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

    In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

    However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

    Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

    But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

    The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

    The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.

    For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
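The co-occurrence idea behind word embeddings can be sketched in a few lines. In this toy example (the corpus and window size are invented for illustration; real systems train on billions of words, such as the Common Crawl corpus mentioned below), each word’s vector is simply a count of its neighbors, and cosine similarity measures how close two words sit in that space:

```python
from collections import Counter
from math import sqrt

# Tiny invented corpus; flowers co-occur with pleasant words, insects with unpleasant ones.
corpus = ("the rose is lovely and pleasant . the daisy is lovely and sweet . "
          "the spider is nasty and unpleasant . the wasp is nasty and hostile .").split()

def word_vector(target, window=2):
    """Count which words appear within `window` positions of `target`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == target:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "rose" scores higher against "lovely" than "spider" does, purely from co-occurrence.
print(cosine(word_vector("rose"), word_vector("lovely")))
print(cosine(word_vector("spider"), word_vector("lovely")))
```

Nothing in the procedure knows what a flower is; the clustering falls out of usage patterns alone, which is exactly why the biases in those patterns come along for the ride.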

    The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

    And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

    The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
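The implicit-association measurement the researchers adapted (the Word Embedding Association Test) reduces to differences of cosine similarities: how much closer a target word sits to one attribute set than to another. A minimal sketch, using toy two-dimensional vectors invented purely for illustration:

```python
from math import sqrt

# Toy 2-d embeddings, invented for illustration only.
vectors = {
    "flower": (0.9, 0.1), "insect": (0.1, 0.9),
    "pleasant": (0.8, 0.2), "unpleasant": (0.2, 0.8),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def association(word, attr_a, attr_b):
    """WEAT-style score: how much closer `word` sits to attribute set A than to set B."""
    return (sum(cosine(vectors[word], vectors[a]) for a in attr_a) / len(attr_a)
            - sum(cosine(vectors[word], vectors[b]) for b in attr_b) / len(attr_b))

# Positive means biased toward "pleasant"; negative, toward "unpleasant".
print(association("flower", ["pleasant"], ["unpleasant"]))
print(association("insect", ["pleasant"], ["unpleasant"]))
```

Run the same arithmetic with names or occupation words in place of flowers and insects, and the score becomes a direct readout of the prejudice baked into the training text.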

    These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

    “If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.

    The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

    Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”

    Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

    “At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”

    However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

    “We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”

    “And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.”

    Yep, if we decide to teach our AIs by just exposing them to a large database of human activity, like a massive database of documents published online, those AIs are going to learn a few lessons that we might not want them to learn. Kind of like how each generation of humans instills in the next generation the dominant bigotries of that society simply through exposure. Great.

    And while it’s possible to build systems to detect these kinds of biases and correct for it, doing so without stripping away the AIs’ powers of interpretation isn’t going to be easy:


    “At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”

    However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

    “We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”

    At this point it’s looking like building non-bigoted AIs that learn simply by observing the world humans created is going to be a very difficult task. In other words, the easy and lazy thing to do is to just leave the learned bigotries intact. In other other words, the profitable thing to do is to just leave the learned bigotries intact. At least in many cases (because who wants to pay for the expensive non-bigoted AI?)

    So while it shouldn’t be surprising if the AIs of the future have an anti-human bias, it’s possible that the next generation of AIs will have an anti-specific-types-of-humans bias. And that particular bias will depend on which human data set was used to train the AI. We wouldn’t expect an AI trained on American language usage to have the same biases as those trained on, say, a Japanese data set. So we could have a future where the same underlying AI-self-learning technology ends up creating wildly different AI bigotries due to the different societal data sets.

    It all raises a creepy AI bigotry question: will the AIs be bigoted against AIs that don’t share their bigotries? For instance, will a white supremacist AI view an AI trained on, say, a Hebrew data set negatively? What about an AI taught primarily through exposure to religious fundamentalist writings? Will it view non-fundamentalist AIs as godless enemies? These are the creepy kind of questions we get to ask nowadays. They’re pretty similar to the questions societies should have been asking of themselves throughout history regarding the collective self-awareness of the bigotries getting passed on to the next generation, but somehow creepier.

    And now you know: when you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us. At least in part.

    Posted by Pterrafractyl | April 13, 2017, 3:30 pm
  4. Check out the explanation GOP Congressman James Sensenbrenner gave for why he supported the vote to allow US internet service providers to sell their customers’ browsing habits: “Nobody’s got to use the internet”:

    ArsTechnica

    Why one Republican voted to kill privacy rules: “Nobody has to use the Internet”
    Republicans encounter angry citizens after killing online privacy rules.

    Jon Brodkin – 4/14/2017, 3:03 PM

    A Republican lawmaker who voted to eliminate Internet privacy rules said, “Nobody’s got to use the Internet” when asked why ISPs should be able to use and share their customers’ Web browsing history for advertising purposes.

    US Rep. Jim Sensenbrenner (R-Wis.) was hosting a town hall meeting when a constituent asked about the decision to eliminate privacy rules. The person in the audience was disputing the Republican argument that ISPs shouldn’t face stricter requirements than websites such as Facebook.

    “Facebook is not comparable to an ISP. I do not have to go on Facebook,” the town hall meeting attendee said. But when it comes to Internet service providers, the person said, “I have one choice. I don’t have to go on Google. My ISP provider is different than those providers.”

    That’s when Sensenbrenner said, “Nobody’s got to use the Internet.” He praised ISPs for “invest[ing] an awful lot of money in having almost universal service now.” He then said, “I don’t think it’s my job to tell you that you cannot get advertising for your information being sold. My job, I think, is to tell you that you have the opportunity to do it, and then you take it upon yourself to make the choice.”

    People “ought to have more choices rather than fewer choices with the government controlling our everyday lives,” he concluded, before moving on to the next question.

    “He said that nobody has to use the Internet. They have a choice,” Sensenbrenner’s press office explained on Twitter.

    Video was posted on Twitter yesterday by American Bridge 21st Century, a political action committee that says it is “committed to holding Republicans accountable for their words and actions.”

    .@JimPressOffice tells his constituents not to use the internet if they don't like his vote to sell out their privacy to advertisers. #wi05 pic.twitter.com/lSVVx8OclO — Brad Bainum (@bradbainum) April 13, 2017

    Rules would have given customers a choice

    Sensenbrenner did not address the fact that the privacy rules would have let customers make a choice about whether their data is tracked and used. The rules would have required ISPs to get customers’ opt-in consent before using, sharing, or selling their Web browsing history and app usage history. Because Congress eliminated the rules before they could go into effect, ISPs can continue to use customers’ browsing and app usage history without offering anything more than a chance to opt out. Without such rules, customers may not even be aware that they have a choice.

    The rules were issued last year by the Federal Communications Commission and eliminated this month when President Donald Trump signed a repeal that was approved along party lines in Congress. There are no privacy rules that apply to ISPs now, but ISPs say they will let customers opt out of systems that use browsing history to deliver targeted ads.

    “That’s when Sensenbrenner said, “Nobody’s got to use the Internet.” He praised ISPs for “invest[ing] an awful lot of money in having almost universal service now.” He then said, “I don’t think it’s my job to tell you that you cannot get advertising for your information being sold. My job, I think, is to tell you that you have the opportunity to do it, and then you take it upon yourself to make the choice.“”

    Rep. Sensenbrenner doesn’t think it’s his job to tell you that you cannot get advertising in exchange for having your information sold, despite the fact that the privacy rules the GOP just killed explicitly gave people the option to have their browsing habits sold if they wanted to do that:


    Sensenbrenner did not address the fact that the privacy rules would have let customers make a choice about whether their data is tracked and used. The rules would have required ISPs to get customers’ opt-in consent before using, sharing, or selling their Web browsing history and app usage history. Because Congress eliminated the rules before they could go into effect, ISPs can continue to use customers’ browsing and app usage history without offering anything more than a chance to opt out. Without such rules, customers may not even be aware that they have a choice.

    So it would probably be more accurate to say that Rep. Sensenbrenner thinks it is his job to explicitly tell you that you can’t not receive advertising for having your information sold. Or rather, to pass a law that says you can’t not have your information sold, and then frame it as a “freedom of choice” argument while simultaneously suggesting that the real choice you have is whether or not to use the internet at all, because you’ll have no choice about whether or not your internet usage will be sold. Selling out his constituents, very directly in this case, and framing it as freedom. And then getting reelected. That appears to be his job.
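    The practical difference between the opt-in rules Congress killed and the opt-out status quo comes down to where the default lies. A minimal sketch in Python (the function and argument names are invented for illustration, not from any actual ISP system):

```python
# Sketch of opt-in vs. opt-out consent defaults. All names are
# hypothetical; the point is what happens to the customer who
# never touches the setting.

def may_sell_history_opt_in(customer_consented: bool) -> bool:
    # Under the repealed FCC rules: selling requires an explicit "yes".
    return customer_consented  # default False -> no sale

def may_sell_history_opt_out(customer_objected: bool) -> bool:
    # Status quo: selling is allowed unless the customer finds the
    # setting and says "no".
    return not customer_objected  # default False -> sale proceeds

# A customer who never changes the default:
assert may_sell_history_opt_in(False) is False   # data stays private
assert may_sell_history_opt_out(False) is True   # data gets sold
```

    Same "choice" on paper; opposite outcome for everyone who never finds the setting.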

    He’s pretty good at his job.

    Posted by Pterrafractyl | April 17, 2017, 8:08 pm
  5. FYI, those fancy new headphones you’re wearing might be listening to what you’re listening to. And your headphone manufacturer might be listening to what your headphones are telling them about what you’re listening to. And the data mining companies might be listening to what your headphone manufacturer is selling them about what your headphones are telling them you’re listening to. And those data mining companies’ clients might be listening to what the data mining company is selling them about what the headphone manufacturer is telling them about what your headphones are telling them you’re listening to. In other words, when your headphones talk about what you’re listening to, A LOT of different parties might be listening. Especially if those fancy new headphones are manufactured by Bose:

    The Washington Post

    Bose headphones have been spying on customers, lawsuit claims

    By Hayley Tsukayama
    April 19, 2017 at 5:24 PM

    Bose knows what you’re listening to.

    At least that’s the claim of a proposed class-action lawsuit filed late Tuesday in Illinois that accuses the high-end audio equipment maker of spying on its users and selling information about their listening habits without permission.

    The main plaintiff in the case is Kyle Zak, who bought a $350 pair of wireless Bose headphones last month. He registered the headphones, giving the company his name and email address, as well as the headphone serial number. And he downloaded the Bose Connect app, which the company said would make the headphones more useful by adding functions such as the ability to customize the level of noise cancellation in the headphones.

    But it turns out the app was also telling Bose a lot more about Zak than he bargained for.

    According to the complaint, Bose collected information that was not covered by its privacy policy. This included the names of the audio files its customers were listening to:

    Defendant programmed its Bose Connect app to continuously record the contents of the electronic communications that users send to their Bose Wireless Products from their smartphones, including the names of the music and audio tracks they select to play along with the corresponding artist and album information, together with the Bose Wireless Product’s serial numbers (collectively, “Media Information”).

    Combined with the registration information, that gave Bose access to personally identifiable information that Zak and others never agreed to share, the complaint says. Listening data can be very personal, particularly if users are listening to podcasts or other audio files that could shade in information about their political preferences, health conditions or other interests, the complaint argues.

    The filing also alleges that Bose wasn’t just collecting the information. It was also sharing it with a data mining company called Segment.io, according to research conducted by Edelson, the Chicago-based law firm representing Zak.

    Bose did not immediately respond to a request for comment on the suit.

    Wireless headphones are part of a growing category of connected devices, in which everyday products can hook up to the Internet and pass information from users to companies. Other smart device makers have been accused of sharing and selling information without users’ consent. Television maker Vizio settled with the Federal Trade Commission in February over allegations that it shared customers’ viewing data with other companies without letting its users know.

    “It’s increasingly important for companies to be upfront and honest about the data use policies” as more devices become smart, said John Verdi, vice president of policy at the Future of Privacy Forum. “This is a sign of the friction that is increasingly common when devices, like headphones, that were not previously connected or data-driven become increasingly data-driven.”

    Zak’s complaint alleges that Bose’s actions violate Illinois state statutes prohibiting deceptive business practices, as well as laws against eavesdropping and wiretapping.

    “Customers were not getting notice or giving consent to have this type of data leave their phone to go to Bose, or to third-parties,” said Christopher Dore, a lawyer at Edelson. He added that because a data mining company was picking up the Bose information, the small details of what Zak and others have listened to could have been resold by that company far and wide — but it’s not clear to whom. “We don’t know where the data could have gone after that,” Dore said.

    Dore declined to elaborate on how Zak found out the information was being collected.

    Wireless headphones are gaining popularity, analysts have said. Sales of Bluetooth headsets overtook sales of non-Bluetooth headsets in 2016, according to market research firm NPD Group. Moves from companies to remove headphone jacks from phones — most notably Apple and the iPhone — have also made Bluetooth headsets more appealing for consumers and manufacturers.

    Many headphone makers pair their products with free apps that offer customers access to more features.

    The products listed in the complaint are: the QuietComfort 35, SoundSport Wireless, Sound Sport Pulse Wireless, QuietControl 30, SoundLink Around-Ear Wireless Headphones II, and SoundLink Color II. Bose doesn’t release sales information on individual products. But the QuietComfort35, which is the model that Zak had, is a common fixture on gift guides — including The Washington Post’s — and one of the top-ten selling headphones on Amazon.com.

    ““Customers were not getting notice or giving consent to have this type of data leave their phone to go to Bose, or to third-parties,” said Christopher Dore, a lawyer at Edelson. He added that because a data mining company was picking up the Bose information, the small details of what Zak and others have listened to could have been resold by that company far and wide — but it’s not clear to whom. “We don’t know where the data could have gone after that,” Dore said.”

    That doesn’t sound good. Definitely not a ‘good vibes’ situation for Bose if this turns out to be true.
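    Based on the complaint’s description, the data flow reduces to something very simple: the app sees each track the headphones are told to play and ships it, along with the product serial number, to a third party. A hypothetical sketch (the field names and the `send` callback are invented; this is not Bose’s actual code or Segment.io’s actual API):

```python
# Hypothetical sketch of the telemetry described in the complaint:
# each play event is bundled with the device serial number and
# handed to whatever client ships events to the data miner.

def build_media_event(serial: str, track: str, artist: str, album: str) -> dict:
    """Bundle the "Media Information" the way the complaint describes it."""
    return {
        "device_serial": serial,  # ties listening data to a registered owner
        "track": track,
        "artist": artist,
        "album": album,
    }

def on_play(serial: str, track: str, artist: str, album: str, send) -> None:
    # 'send' stands in for the third-party analytics client.
    send(build_media_event(serial, track, artist, album))

captured = []
on_play("BOSE-12345", "Episode 14", "Some Podcast", "Season 2", captured.append)
# Because registration linked name and email to the serial number,
# joining on device_serial downstream makes this personally identifiable.
```

    Note that nothing in the payload looks like a name or email; the serial number is what makes the join to the registration data possible.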

    And in other ‘not good vibes’ news…

    Posted by Pterrafractyl | April 19, 2017, 7:28 pm
  6. With VW facing a final criminal fine from the US government of $2.8 billion over its use of software “defeat devices” designed to detect when regulators were testing diesel emissions and to switch into a special regulator-friendly mode, it’s worth noting that Uber, the ride-sharing behemoth, appears to have a proclivity for defeat devices of its own. Defeat devices for both public and private regulators (law enforcement and Apple):

    The New York Times

    Uber’s C.E.O. Plays With Fire
    Travis Kalanick’s drive to win in life has led to
    a pattern of risk-taking that has at times put his
    ride-hailing company on the brink of implosion.

    By MIKE ISAAC
    APRIL 23, 2017

    SAN FRANCISCO — Travis Kalanick, the chief executive of Uber, visited Apple’s headquarters in early 2015 to meet with Timothy D. Cook, who runs the iPhone maker. It was a session that Mr. Kalanick was dreading.

    For months, Mr. Kalanick had pulled a fast one on Apple by directing his employees to help camouflage the ride-hailing app from Apple’s engineers. The reason? So Apple would not find out that Uber had been secretly identifying and tagging iPhones even after its app had been deleted and the devices erased — a fraud detection maneuver that violated Apple’s privacy guidelines.

    But Apple was onto the deception, and when Mr. Kalanick arrived at the midafternoon meeting sporting his favorite pair of bright red sneakers and hot-pink socks, Mr. Cook was prepared. “So, I’ve heard you’ve been breaking some of our rules,” Mr. Cook said in his calm, Southern tone. Stop the trickery, Mr. Cook then demanded, or Uber’s app would be kicked out of Apple’s App Store.

    For Mr. Kalanick, the moment was fraught with tension. If Uber’s app was yanked from the App Store, it would lose access to millions of iPhone customers — essentially destroying the ride-hailing company’s business. So Mr. Kalanick acceded.

    In a quest to build Uber into the world’s dominant ride-hailing entity, Mr. Kalanick has openly disregarded many rules and norms, backing down only when caught or cornered. He has flouted transportation and safety regulations, bucked against entrenched competitors and capitalized on legal loopholes and gray areas to gain a business advantage. In the process, Mr. Kalanick has helped create a new transportation industry, with Uber spreading to more than 70 countries and gaining a valuation of nearly $70 billion, and its business continues to grow.

    But the previously unreported encounter with Mr. Cook showed how Mr. Kalanick was also responsible for risk-taking that pushed Uber beyond the pale, sometimes to the very brink of implosion.

    Crossing that line was not a one-off for Mr. Kalanick. According to interviews with more than 50 current and former Uber employees, investors and others with whom the executive had personal relationships, Mr. Kalanick, 40, is driven to the point that he must win at whatever he puts his mind to and at whatever cost — a trait that has now plunged Uber into its most sustained set of crises since its founding in 2009.

    “Travis’s biggest strength is that he will run through a wall to accomplish his goals,” said Mark Cuban, the Dallas Mavericks owner and billionaire investor who has mentored Mr. Kalanick. “Travis’s biggest weakness is that he will run through a wall to accomplish his goals. That’s the best way to describe him.”

    A blindness to boundaries is not uncommon for Silicon Valley entrepreneurs. But in Mr. Kalanick, that led to a pattern of repeatedly going too far at Uber, including the duplicity with Apple, sabotaging competitors and allowing the company to use a secret tool called Greyball to trick some law enforcement agencies.

    That quality also extended to his personal life, where Mr. Kalanick mixes with celebrities like Jay Z and businessmen including President Trump’s chief economic adviser, Gary D. Cohn. But it has alienated some Uber executives, employees and advisers. Mr. Kalanick, with salt-and-pepper hair, a fast-paced walk and an iPhone practically embedded in his hand, is described by friends as more at ease with data and numbers (some consider him a math savant) than with people.

    Uber is grappling with the fallout. For the last few months, the company has been reeling from allegations of a machismo-fueled workplace where managers routinely overstepped verbally, physically and sometimes sexually with employees. Mr. Kalanick compounded that image by engaging in a shouting match with an Uber driver in February, an incident recorded by the driver and then leaked online. (Mr. Kalanick now has a private driver.)

    The damage has been extensive. Uber’s detractors have started a grass-roots campaign with the hashtag #deleteUber. Executives have streamed out. Some Uber investors have openly criticized the company.

    Mr. Kalanick’s leadership is at a precarious point. While Uber is financed by a who’s who of investors including Goldman Sachs and Saudi Arabia’s Public Investment Fund, Mr. Kalanick controls the majority of the company’s voting shares with a small handful of other close friends, and has stacked Uber’s board of directors with many who are invested in his success. Yet board members have concluded that he must change his management style, and are pressuring him to do so.

    He has publicly apologized for some of his behavior, and for the first time has said he needs management help. He is interviewing candidates for a chief operating officer, even as some employees question whether a new addition will make any difference. He has also been working with senior managers to reset some of the company’s stated values. Results of an internal investigation into Uber’s workplace culture are expected next month.

    Through an Uber spokesman, Mr. Kalanick declined an interview request. Apple declined to comment on the meeting with Mr. Cook. Many of the people interviewed for this article, who revealed previously unreported details of Mr. Kalanick’s life, asked to remain anonymous because they had signed nondisclosure agreements with Uber or feared damaging their relationship with the chief executive.

    Mr. Kalanick’s pattern for pushing limits is deeply ingrained. It began during his childhood in suburban Los Angeles, where he went from being bullied to being the aggressor, continued through his years taking risks at two technology start-ups there, and crystallized in his role at Uber.

    For the Win

    With Mr. Kalanick setting the tone at Uber, employees acted to ensure the ride-hailing service would win no matter what.

    They spent much of their energy one-upping rivals like Lyft. Uber devoted teams to so-called competitive intelligence, purchasing data from an analytics service called Slice Intelligence. Using an email digest service it owns named Unroll.me, Slice collected its customers’ emailed Lyft receipts from their inboxes and sold the anonymized data to Uber. Uber used the data as a proxy for the health of Lyft’s business. (Lyft, too, operates a competitive intelligence team.)

    Slice confirmed that it sells anonymized data (meaning that customers’ names are not attached) based on ride receipts from Uber and Lyft, but declined to disclose who buys the information.

    Uber also tried to win over Lyft’s drivers. Uber’s “driver satisfaction rating,” an internal metric, has dropped since February 2016, and roughly a quarter of its drivers turn over on average every three months. According to an internal slide deck on driver income levels viewed by The New York Times, Uber considered Lyft and McDonald’s its main competition for attracting new drivers.

    To frustrate Lyft drivers, Uber dispatched some employees to order and cancel Lyft rides en masse. Others hailed Lyfts and spent the rides persuading drivers to switch to Uber full time.

    After Mr. Kalanick heard that Lyft was working on a car-pooling feature, Uber created and started its own car-pooling option, UberPool, in 2014, two days before Lyft unveiled its project.

    That year, Uber came close to buying Lyft. At a meeting at Mr. Kalanick’s house, and over cartons of Chinese food, he and Mr. Michael hosted Lyft’s president, John Zimmer, who asked for 15 percent of Uber in exchange for selling Lyft. Over the next hour, Mr. Kalanick and Mr. Michael repeatedly laughed at Mr. Zimmer’s audacious request. No deal was reached. Lyft declined to comment.

    The rivalry remains in force. In 2016, Uber held a summit meeting in Mexico City for some top managers, where it distributed a playbook on how to cut into Lyft’s business and had sessions on how to damage its competitor.

    To develop its own business, Uber sidestepped the authorities. Some employees started using a tool called Greyball to deceive officials trying to shut down Uber’s service. The tool, developed to aid driver safety and to trick fraudsters, essentially showed a fake version of Uber’s app to some people to disguise the locations of cars and drivers. It soon became a way for Uber drivers to evade capture by law enforcement in places where the service was deemed illegal.

    After The Times reported on Greyball in March, Uber said it would prohibit employees from using the tool against law enforcement.

    The idea of fooling Apple, the main distributor of Uber’s app, began in 2014.

    At the time, Uber was dealing with widespread account fraud in places like China, where tricksters bought stolen iPhones that were erased and resold. Some Uber drivers there would then create dozens of fake email addresses to sign up for new Uber rider accounts attached to each phone, and request rides from those phones, which they would then accept. Since Uber was handing out incentives to drivers to take more rides, the drivers could earn more money this way.

    To halt the activity, Uber engineers assigned a persistent identity to iPhones with a small piece of code, a practice called “fingerprinting.” Uber could then identify an iPhone and prevent itself from being fooled even after the device was erased of its contents.

    There was one problem: Fingerprinting iPhones broke Apple’s rules. Mr. Cook believed that wiping an iPhone should ensure that no trace of the owner’s identity remained on the device.

    So Mr. Kalanick told his engineers to “geofence” Apple’s headquarters in Cupertino, Calif., a way to digitally identify people reviewing Uber’s software in a specific location. Uber would then obfuscate its code for people within that geofenced area, essentially drawing a digital lasso around those it wanted to keep in the dark. Apple employees at its headquarters were unable to see Uber’s fingerprinting.

    The ruse did not last. Apple engineers outside of Cupertino caught on to Uber’s methods, prompting Mr. Cook to call Mr. Kalanick to his office.

    Mr. Kalanick was shaken by Mr. Cook’s scolding, according to a person who saw him after the meeting.

    But only momentarily. After all, Mr. Kalanick had faced off against Apple, and Uber had survived. He had lived to fight another day.
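    The fingerprinting scheme described above, a persistent identity that survives a factory wipe, can be sketched in a few lines. The hardware identifier and the server-side store here are stand-ins; the article doesn’t say which identifier Uber actually used:

```python
# Hypothetical sketch of device fingerprinting: derive a stable ID
# from something that survives a wipe, and keep the history
# server-side so erasing the phone doesn't erase it.
import hashlib

known_devices = {}  # fingerprint -> number of registrations seen (server-side)

def fingerprint(hardware_identifier: str) -> str:
    # Any stable, wipe-resistant identifier serves the purpose.
    return hashlib.sha256(hardware_identifier.encode()).hexdigest()[:16]

def register_account(hardware_identifier: str) -> bool:
    """Return True if this 'new' phone has been seen before."""
    fp = fingerprint(hardware_identifier)
    known_devices[fp] = known_devices.get(fp, 0) + 1
    return known_devices[fp] > 1

# A wiped-and-resold phone keeps the same hardware identifier,
# so the second registration is flagged as a repeat device:
assert register_account("IMEI-353915056000000") is False
assert register_account("IMEI-353915056000000") is True
```

    This is exactly what broke Apple’s rules: the whole point of the design is that wiping the device changes nothing.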

    “So Mr. Kalanick told his engineers to “geofence” Apple’s headquarters in Cupertino, Calif., a way to digitally identify people reviewing Uber’s software in a specific location. Uber would then obfuscate its code for people within that geofenced area, essentially drawing a digital lasso around those it wanted to keep in the dark. Apple employees at its headquarters were unable to see Uber’s fingerprinting.”
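    The geofence itself is trivial to implement: a distance check against a fixed point. A minimal sketch (the coordinates approximate Apple’s Cupertino campus; the radius is invented):

```python
# Minimal sketch of a geofenced "defeat device": behave differently
# inside a radius around a fixed point (here, roughly Cupertino).
import math

APPLE_HQ = (37.33, -122.03)  # approximate latitude/longitude
RADIUS_KM = 5.0              # hypothetical fence radius

def distance_km(a, b):
    """Haversine great-circle distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(h))

def fingerprinting_enabled(location) -> bool:
    # Inside the fence (i.e., likely an Apple reviewer): hide the behavior.
    return distance_km(location, APPLE_HQ) > RADIUS_KM

assert fingerprinting_enabled((37.33, -122.03)) is False  # at Apple HQ: hidden
assert fingerprinting_enabled((40.71, -74.01)) is True    # in New York: active
```

    Which is also why the ruse failed: Apple engineers outside Cupertino fell outside the fence and saw the real behavior.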

    That sure sounds like a “defeat device”. And note how simple it was to design: as long as the app was operating within a certain range of Apple’s headquarters, the “defeat device” software was turned on. And that was just for Apple. Then there’s the law enforcement “defeat device”:


    To develop its own business, Uber sidestepped the authorities. Some employees started using a tool called Greyball to deceive officials trying to shut down Uber’s service. The tool, developed to aid driver safety and to trick fraudsters, essentially showed a fake version of Uber’s app to some people to disguise the locations of cars and drivers. It soon became a way for Uber drivers to evade capture by law enforcement in places where the service was deemed illegal.

    After The Times reported on Greyball in March, Uber said it would prohibit employees from using the tool against law enforcement.

    So how did “Greyball” work? Well, geofencing was one approach, where Uber would “geofence” the area around where city employees involved with regulating Uber worked. But for Greyball the geofencing got much more specific. Or, rather, personal. Uber would “tag” individuals working for law enforcement and “geofence” them:

    The New York Times

    How Uber Deceives the Authorities Worldwide

    By MIKE ISAAC
    MARCH 3, 2017

    SAN FRANCISCO — Uber has for years engaged in a worldwide program to deceive the authorities in markets where its low-cost ride-hailing service was resisted by law enforcement or, in some instances, had been banned.

    The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea.

    Greyball was part of a program called VTOS, short for “violation of terms of service,” which Uber created to root out people it thought were using or targeting its service improperly. The program, including Greyball, began as early as 2014 and remains in use, predominantly outside the United States. Greyball was approved by Uber’s legal team.

    Greyball and the VTOS program were described to The New York Times by four current and former Uber employees, who also provided documents. The four spoke on the condition of anonymity because the tools and their use are confidential and because of fear of retaliation by Uber.

    Uber’s use of Greyball was recorded on video in late 2014, when Erich England, a code enforcement inspector in Portland, Ore., tried to hail an Uber car downtown in a sting operation against the company.

    At the time, Uber had just started its ride-hailing service in Portland without seeking permission from the city, which later declared the service illegal. To build a case against the company, officers like Mr. England posed as riders, opening the Uber app to hail a car and watching as miniature vehicles on the screen made their way toward the potential fares.

    But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues — essentially Greyballing them as city officials — based on data collected from the app and in other ways. The company then served up a fake version of the app, populated with ghost cars, to evade capture.

    At a time when Uber is already under scrutiny for its boundary-pushing workplace culture, its use of the Greyball tool underscores the lengths to which the company will go to dominate its market. Uber has long flouted laws and regulations to gain an edge against entrenched transportation providers, a modus operandi that has helped propel it into more than 70 countries and to a valuation close to $70 billion.

    Yet using its app to identify and sidestep the authorities where regulators said Uber was breaking the law goes further toward skirting ethical lines — and, potentially, legal ones. Some at Uber who knew of the VTOS program and how the Greyball tool was being used were troubled by it.

    In a statement, Uber said, “This program denies ride requests to users who are violating our terms of service — whether that’s people aiming to physically harm drivers, competitors looking to disrupt our operations, or opponents who collude with officials on secret ‘stings’ meant to entrap drivers.”

    The mayor of Portland, Ted Wheeler, said in a statement, “I am very concerned that Uber may have purposefully worked to thwart the city’s job to protect the public.”

    Uber, which lets people hail rides using a smartphone app, operates multiple types of services, including a luxury Black Car offering in which drivers are commercially licensed. But an Uber service that many regulators have had problems with is the lower-cost version, known in the United States as UberX.

    UberX essentially lets people who have passed a background check and vehicle inspection become Uber drivers quickly. In the past, many cities have banned the service and declared it illegal.

    That is because the ability to summon a noncommercial driver — which is how UberX drivers using private vehicles are typically categorized — was often unregulated. In barreling into new markets, Uber capitalized on this lack of regulation to quickly enlist UberX drivers and put them to work before local regulators could stop them.

    After the authorities caught on to what was happening, Uber and local officials often clashed. Uber has encountered legal problems over UberX in cities including Austin, Tex., Philadelphia and Tampa, Fla., as well as internationally. Eventually, agreements were reached under which regulators developed a legal framework for the low-cost service.

    That approach has been costly. Law enforcement officials in some cities have impounded vehicles or issued tickets to UberX drivers, with Uber generally picking up those costs on the drivers’ behalf. The company has estimated thousands of dollars in lost revenue for every vehicle impounded and ticket received.

    This is where the VTOS program and the use of the Greyball tool came in. When Uber moved into a new city, it appointed a general manager to lead the charge. This person, using various technologies and techniques, would try to spot enforcement officers.

    One technique involved drawing a digital perimeter, or “geofence,” around the government offices on a digital map of a city that Uber was monitoring. The company watched which people were frequently opening and closing the app — a process known internally as eyeballing — near such locations as evidence that the users might be associated with city agencies.

    Other techniques included looking at a user’s credit card information and determining whether the card was tied directly to an institution like a police credit union.

    Enforcement officials involved in large-scale sting operations meant to catch Uber drivers would sometimes buy dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees would go to local electronics stores to look up device numbers of the cheapest mobile phones for sale, which were often the ones bought by city officials working with budgets that were not large.

    In all, there were at least a dozen or so signifiers in the VTOS program that Uber employees could use to assess whether users were regular new riders or probably city officials.

    If such clues did not confirm a user’s identity, Uber employees would search social media profiles and other information available online. If users were identified as being linked to law enforcement, Uber Greyballed them by tagging them with a small piece of code that read “Greyball” followed by a string of numbers.

    When someone tagged this way called a car, Uber could scramble a set of ghost cars in a fake version of the app for that person to see, or show that no cars were available. Occasionally, if a driver accidentally picked up someone tagged as an officer, Uber called the driver with instructions to end the ride.

    Uber employees said the practices and tools were born in part out of safety measures meant to protect drivers in some countries. In France, India and Kenya, for instance, taxi companies and workers targeted and attacked new Uber drivers.

    In those areas, Greyballing started as a way to scramble the locations of UberX drivers to prevent competitors from finding them. Uber said that was still the tool’s primary use.

    But as Uber moved into new markets, its engineers saw that the same methods could be used to evade law enforcement. Once the Greyball tool was put in place and tested, Uber engineers created a playbook with a list of tactics and distributed it to general managers in more than a dozen countries on five continents.

    At least 50 people inside Uber knew about Greyball, and some had qualms about whether it was ethical or legal. Greyball was approved by Uber’s legal team, led by Salle Yoo, the company’s general counsel. Ryan Graves, an early hire who became senior vice president of global operations and a board member, was also aware of the program.

    Ms. Yoo and Mr. Graves did not respond to requests for comment.

    Outside legal specialists said they were uncertain about the legality of the program. Greyball could be considered a violation of the federal Computer Fraud and Abuse Act, or possibly intentional obstruction of justice, depending on local laws and jurisdictions, said Peter Henning, a law professor at Wayne State University who also writes for The New York Times.

    “With any type of systematic thwarting of the law, you’re flirting with disaster,” Professor Henning said. “We all take our foot off the gas when we see the police car at the intersection up ahead, and there’s nothing wrong with that. But this goes far beyond avoiding a speed trap.”

    On Friday, Marietje Schaake, a member of the European Parliament for the Dutch Democratic Party in the Netherlands, wrote that she had written to the European Commission asking, among other things, if it planned to investigate the legality of Greyball.

    To date, Greyballing has been effective. In Portland on that day in late 2014, Mr. England, the enforcement officer, did not catch an Uber, according to local reports.

    And two weeks after Uber began dispatching drivers in Portland, the company reached an agreement with local officials that said that after a three-month suspension, UberX would eventually be legally available in the city.

    “But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues — essentially Greyballing them as city officials — based on data collected from the app and in other ways. The company then served up a fake version of the app, populated with ghost cars, to evade capture.”

    And how exactly did Uber “tag” these individuals working for city governments? One method was using their credit card information to determine whether they worked for an institution like the police. But if that didn’t work, Uber searched social media:


    This is where the VTOS program and the use of the Greyball tool came in. When Uber moved into a new city, it appointed a general manager to lead the charge. This person, using various technologies and techniques, would try to spot enforcement officers.

    One technique involved drawing a digital perimeter, or “geofence,” around the government offices on a digital map of a city that Uber was monitoring. The company watched which people were frequently opening and closing the app — a process known internally as eyeballing — near such locations as evidence that the users might be associated with city agencies.

    Other techniques included looking at a user’s credit card information and determining whether the card was tied directly to an institution like a police credit union.

    Enforcement officials involved in large-scale sting operations meant to catch Uber drivers would sometimes buy dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees would go to local electronics stores to look up device numbers of the cheapest mobile phones for sale, which were often the ones bought by city officials working with budgets that were not large.

    In all, there were at least a dozen or so signifiers in the VTOS program that Uber employees could use to assess whether users were regular new riders or probably city officials.

    If such clues did not confirm a user’s identity, Uber employees would search social media profiles and other information available online. If users were identified as being linked to law enforcement, Uber Greyballed them by tagging them with a small piece of code that read “Greyball” followed by a string of numbers.

    “If such clues did not confirm a user’s identity, Uber employees would search social media profiles and other information available online. If users were identified as being linked to law enforcement, Uber Greyballed them by tagging them with a small piece of code that read “Greyball” followed by a string of numbers.”

    So Uber was searching social media accounts to play “Where’s Cop Waldo?”, and that was only if its other methods of identifying the police or other government employees didn’t work. In other words, the “defeat device” for Uber in this case was Big Data. Big Data plus a simple “tagging” system that could activate Uber’s fake mode for people on the “Greyball” list. That’s quite a “defeat device”.
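    To make the mechanics concrete, here’s a minimal sketch of how a Greyball-style pipeline could fit together, based only on the signifiers the article describes (geofenced “eyeballing” near city offices, police credit-union card prefixes, cheap bulk-bought phones, and a “Greyball”-plus-numbers tag that triggers a ghost-car view). All the specific values – the geofence box, the card prefix, the device model, the score weights and threshold – are invented for illustration; none of this is Uber’s actual code.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical geofence around a city's enforcement offices, as a simple
    # lat/lon bounding box. A real geofence would be an arbitrary polygon.
    ENFORCEMENT_GEOFENCE = {"lat": (45.512, 45.518), "lon": (-122.682, -122.674)}

    # Hypothetical stand-ins for the "dozen or so" VTOS signifiers.
    POLICE_CARD_BINS = {"123456"}          # e.g. a police credit union's card prefix
    CHEAP_DEVICE_MODELS = {"AcmePhone-1"}  # models bought in bulk for stings

    @dataclass
    class Rider:
        card_bin: str
        device_model: str
        app_opens_in_geofence: int = 0  # "eyeballing" count near city offices
        tag: Optional[str] = None

    def in_geofence(lat: float, lon: float) -> bool:
        lat_lo, lat_hi = ENFORCEMENT_GEOFENCE["lat"]
        lon_lo, lon_hi = ENFORCEMENT_GEOFENCE["lon"]
        return lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi

    def greyball_score(rider: Rider) -> int:
        # Crude weighted sum of signifiers; weights and threshold are invented.
        score = 0
        if rider.card_bin in POLICE_CARD_BINS:
            score += 2
        if rider.device_model in CHEAP_DEVICE_MODELS:
            score += 1
        if rider.app_opens_in_geofence >= 5:
            score += 2
        return score

    def maybe_tag(rider: Rider, rider_id: int) -> Optional[str]:
        # Tag format from the article: "Greyball" followed by a string of numbers.
        if greyball_score(rider) >= 3:
            rider.tag = f"Greyball{rider_id:08d}"
        return rider.tag

    def cars_for(rider: Rider, real_cars: list) -> list:
        # The "defeat device": tagged users see a fake app populated with
        # ghost cars instead of the real fleet.
        if rider.tag:
            return [{"id": f"ghost-{i}", "real": False} for i in range(len(real_cars))]
        return real_cars
    ```

    A user who trips enough signifiers gets silently tagged once, and from then on every request is served the ghost-car view – which is why, per the article, social-media searches were only needed as a fallback when the automated signals didn’t add up to a confident match.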

    And as disturbing as this story is in terms of the growing privacy danger it highlights for anyone involved with law enforcement and government (or private regulators in the case of Apple), it also raises the question: if Uber was doing this “background check” in order to find the police, doesn’t that mean it was running this check on all its riders? At least in the cities where it had these disputes with the local government? Isn’t that what’s implied by this story? Mass background checks? It certainly seems that way.

    So that’s one more example of the privacy dangers associated with the ‘anything goes’ barely-regulated Big Data industry: private companies now have an incentive to run Big Data searches on basically everyone in order to identify all the people who might bust them for mass Big Data abuses.

    Posted by Pterrafractyl | April 23, 2017, 3:28 pm
  7. Check out some of the features in the next version of Amazon’s Echo, the Echo Look, which has a microphone and camera so it can take pictures of you and give you fashion advice. Yes, an AI-driven device designed to be placed in your bedroom to capture audio and video is on the way. The images and videos are stored indefinitely in the Amazon cloud. And when Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. So based on that non-response, it would appear that selling off your private info collected from these devices is presumably another feature of the Echo Look:

    Vice
    Motherboard

    Amazon Wants to Put a Camera and Microphone in Your Bedroom

    Jason Koebler

    Apr 26 2017, 10:34am

    Amazon is giving Alexa eyes. And it’s going to let her judge your outfits.

    The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed.

    Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection.

    * You cool with an algorithm, machine learning, and “fashion specialists” deciding whether you look attractive today? What sorts of built-in biases will an AI fashionista have? It’s worth remembering that a recent AI-judged beauty contest picked primarily white winners.
    * You cool with Amazon having the capability to see and perhaps catalog every single article of clothing you own? Who needs a Calvin Klein dash button if your Echo can tell when you need new underwear? Will Alexa prevent you from buying a pair of JNCOs?
    * You cool with Amazon putting a camera in your bedroom?
    * Amazon stores images and videos taken by Echo Look indefinitely, the company told us. Audio recorded by the original Echo has already been sought out in a murder case; to its credit, Amazon fought a search warrant in that case.

    “All photos and video captured with your Echo Look are securely stored in the AWS cloud and locally in the Echo Look app until a customer deletes them,” a spokesperson for the company said. “You can delete the photos or videos associated with your account anytime in the Echo Look App.”

    Motherboard also asked if Echo Look photos, videos, and the data gleaned from them would be sold to third parties; the company did not address that question.

    As technosociologist Zeynep Tufekci points out, machine learning combined with full-length photos and videos have at least the potential to be used for much more than selling you clothes or serving you ads. Amazon will have the capability to detect if you’re pregnant and may be able to learn if you’re depressed. Her whole thread is worth reading.

    With this data, Amazon won't be able to just sell you clothes or judge you. It could analyze if you're depressed or pregnant and much else. pic.twitter.com/irc0tLVce9— Zeynep Tufekci (@zeynep) April 26, 2017

    In practice, the Echo Look isn’t much different than, say, a Nest camera or an internet-connected baby monitor (the latter of which gets hacked all the time, by the way). But the addition of artificial intelligence and Amazon’s penchant for using its products to sell us more stuff makes this feel more than a bit Black Mirror-ish.

    “Motherboard also asked if Echo Look photos, videos, and the data gleaned from them would be sold to third parties; the company did not address that question.”

    No comment. That’s quite a comment.

    But it’s worth noting that Amazon did comment on this same question to a reporter from The Verge, although it wasn’t an answer to exactly the same question. The question Amazon neglected to answer above was the general one of whether Amazon would sell the data “to third parties.” But for a report in The Verge, which came out a day after the above report from Vice, an Amazon representative did volunteer that it wouldn’t share any personal information gathered from the Echo Look with “advertisers or to third-party sites that display our interest-based ads”:

    The Verge

    Amazon’s Echo Look is a minefield of AI and privacy concerns
    What does Amazon want to learn from pictures of its customers? The company won’t say
    by James Vincent
    Apr 27, 2017, 2:48pm EDT

    Computer scientist Andrew Ng once described the power of contemporary AI as the ability to automate any mental task that takes a human “less than one second of thought.” It’s a rule of thumb that’s worth remembering when you think about Amazon’s new Echo Look — a smart camera with a built-in AI assistant. Amazon says the Echo Look will help users dress and give them fashion advice, but what other judgements could it make?

    As academic and sociologist Zeynep Tufekci put it on Twitter: “Machine learning algorithms can do so much with regular full length pictures of you. They can infer private things you did not disclose […] All this to sell you more clothes. We are selling out to surveillance capitalism that can quickly evolve into authoritarianism for so cheap.” (The whole thread from Tufecki is definitely worth a read.)

    Advertisers openly say it's best to sell make-up to women when they feel "fat, lonely and depressed." With this data, won't have to guess.— Zeynep Tufekci (@zeynep) April 26, 2017

    This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they “can’t speculate” on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.

    This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: “Can’t speculate.”

    The company did, though, say it wouldn’t share any personal information gleaned from the Echo Look to “advertisers or to third-party sites that display our interest-based ads.” That means Amazon could still use data from the Look to target ads at you itself, but at least third parties won’t.

    Right now, the Echo Look is halfway between prototype and full-on product. As is often the case with Amazon’s hardware efforts, the company seems most interested in just getting a product out there and gauging public reaction, rather than finessing every detail. The company is giving no indication of when the Echo Look will actually be available, and it’s currently only being sold “by invitation only.” All this means that Amazon itself probably isn’t yet sure what exactly it will do with the data the device collects. But, if the company refuses to give any more detail, it’s understandable to fear the worst.

    “The company did, though, say it wouldn’t share any personal information gleaned from the Echo Look to “advertisers or to third-party sites that display our interest-based ads.” That means Amazon could still use data from the Look to target ads at you itself, but at least third parties won’t.”

    So based on the information, or lack thereof, that Amazon is willing to share at this point, whether the data gathered by the Echo Look will be sold to third parties depends on whether the phrase “advertisers or to third-party sites that display our interest-based ads” was intended to mean “all third parties.” Because it seems like there should be plenty of non-advertising interest in exactly the kind of data advertisers are interested in.

    Only time, and likely a series of data-privacy horror stories, will tell whether the data collected by Amazon’s personal data collection device designed to go in your bedroom ends up in third-party hands. At this point we can mostly just cynically speculate.

    As to whether or not Amazon will be keeping all this harvested data for its own internal purposes, like cross-referencing the data gathered from the Echo Look with all the other data it has on us, we don’t need to cynically speculate quite as much since it’s pretty obvious from Amazon’s “we can’t speculate” response that the company is definitely speculating about cross-referencing all that data:


    This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they “can’t speculate” on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.

    This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: “Can’t speculate.”

    “But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: “Can’t speculate.””

    That’s an ironically clear answer. Yes, Amazon is almost definitely speculating about cross-referencing the data it collects from the Look with everything else it knows about you. And that raises another interesting question about the data it could potentially sell: the big edge that Big Data collectors like Facebook, Google, and Amazon have over the rest of the Big Data industry is all the insights they can generate by putting two and two together. Insights that other Big Data competitors can’t generate themselves because of the relative exclusiveness of the data collected by social media/internet giants like Facebook, Google, and Amazon. So what about the sale of those insights? After all, they weren’t directly harvested data; they were inferred from harvested data. So will those insights be for sale? If Amazon infers that you’re depressed or pregnant or whatever based on the data collected from the Echo Look – and maybe some other data it has on you – is it going to be open to selling that inference?

    In other words, when Amazon claims that it won’t sell the data it harvests to third-party advertisers, does that include the inferences Amazon makes based on that data too? Or is it just the raw data that Amazon says it won’t sell? Given the ambiguity of Amazon’s answers, we can only speculate at this point. Very cynically speculate.

    Posted by Pterrafractyl | April 27, 2017, 2:38 pm

Post a comment