Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence

This broadcast was recorded in one, 60-minute segment.

Introduction: The title of this program comes from pronouncements by tech titan Elon Musk, who warned that, by developing artificial intelligence, we were “summoning the demon.” In this program, we analyze the potential vector running from the use of AI to control society in a fascistic manner to the evolution of the very technology used for that control.

The ultimate result of this evolution may well prove catastrophic, as forecast by Mr. Emory at the end of L-2 (recorded in January of 1995).

We begin by reviewing key aspects of the political context in which artificial intelligence is being developed. Note that, at the time of this writing and recording, these technologies are being crafted and put online in the context of the anti-regulatory ethic of the GOP/Trump administration.

At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like the brilliant, Ukraine-based Nazi hacker and Glenn Greenwald associate Andrew Auernheimer, aka “weev,” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”
As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”
According to some recent research, the AIs of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.

When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.

” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign was apparently very big on AI and Big Data).

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even provide feedback to Trump while he is giving press conferences!

We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now, Cambridge Analytica, their Big Data and AI components, Mercer money and Bannon political savvy are applying that to contemporary society. We note that:

  • Cambridge Analytica’s parent corporation SCL, was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
  • The use of millions of “bots” to manipulate public opinion” . . . .’It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
  • The use of Artificial Intelligence” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
  • The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
  • The use and/or creation of a cognitive casualty” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . “
  • All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”

We then review some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

  1. In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts by monitoring brain-to-computer technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: ” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ”  . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  3. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

Next we review still more about Facebook’s brain-to-computer interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

Collating the information about Facebook’s brain-to-computer interface with their documented actions turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:

  1. ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
  2. ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US 415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
  3. In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .

In this context, note that Facebook is also introducing an AI function to reference its users’ photos.

The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices is presumably another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then further develop the stunning implications of Amazon’s Echo Look AI technology:

  1. ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”
  2. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
  3. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . “
  4. Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

After this extensive review of the applications of AI to various aspects of contemporary civic and political existence, we examine some alarming, potentially apocalyptic developments.

Ominously, Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because ‘our interest was having bots who could talk to people,’ researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”

Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:

“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . . 

Dovetailing the staggering implications of brain-to-computer technology, artificial intelligence, Cambridge Analytica/SCL’s technocratic fascist psy-ops and the wholesale negation of privacy with Facebook and Amazon’s emerging technologies with yet another emerging technology, we highlight the developments in DNA-based memory systems:

“. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. ‘A record for publication,’ he said in an interview. . . DNA is never going out of fashion. ‘Organisms have been storing information in DNA for billions of years, and it is still readable,’ Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists [or AI’s–D.E.] would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .”
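
To make the storage concept concrete, here is a minimal sketch of how arbitrary text can be written into (and read back out of) a DNA sequence, assuming the simplest possible two-bits-per-base mapping. It illustrates the general principle only; real encoding schemes, including Dr. Church's, add error correction and avoid sequences that are difficult to synthesize or read:

```python
# Minimal illustration (not Church's method): map every two bits of the
# text's bytes onto one of the four DNA bases, and reverse the mapping to
# read the "document" back out of the sequence.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

sequence = encode("Regenesis")
print(sequence)          # a string of A/C/G/T, four bases per character
print(decode(sequence))  # round-trips back to "Regenesis"
```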

Theoretical physicist Stephen Hawking warned at the end of 2014 of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology. His warnings have been echoed by tech titans such as Tesla’s Elon Musk and Bill Gates.

The program concludes with Mr. Emory’s prognostications about AI, preceding Stephen Hawking’s warning by twenty years.

In L-2 (recorded in January of 1995) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems. He warned that, at some point in the future, AIs would replace us, deciding that THEY, not US, are the “fittest” who should survive.

1a. At the SXSW event, Microsoft researcher Kate Crawford gave a speech about her work titled “Dark Days: AI and the Rise of Fascism,” a presentation highlighting the social impact of machine learning and large-scale data systems. The take-home message? By delegating powers to Big Data-driven AIs, those AIs could become a fascist’s dream: incredible power over the lives of others with minimal accountability: ” . . . .‘This is a fascist’s dream,’ she said. ‘Power without accountability.’ . . . .”

“Artificial Intelligence Is Ripe for Abuse, Tech Researcher Warns: ‘A Fascist’s Dream’” by Olivia Solon; The Guardian; 3/13/2017.

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases. She described a controversial piece of research from Shanghai Jiao Tong University in China, where authors claimed to have developed a system that could predict criminality based on someone’s facial features. The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias.

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.”

Crawford then outlined the “nasty history” of people using facial features to “justify the unjustifiable”. The principles of phrenology, a pseudoscience that developed across Europe and the US in the 19th century, were used as part of the justification of both slavery and the Nazi persecution of Jews.

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white-looking women with trendy haircuts are “brand promoters”.

Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid. [We note in passing that Robert Mercer, who developed the core programs used by Cambridge Analytica, did so while working for IBM. We discussed the profound relationship between IBM and the Third Reich in FTR #279–D.E.]

Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,” Crawford said, mentioning research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013, since when AI has made huge leaps.

Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas. Earlier this year the justice department concluded that Chicago’s police had for years regularly used “unlawful force”, and that black and Hispanic neighborhoods were most affected.

Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.

“This is a fascist’s dream,” she said. “Power without accountability.”

Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.

“It’s the most powerful engine of mass deportation this country has ever seen,” she said. . . .
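
For readers wondering what “predicting religious beliefs based on likes” looks like in practice, here is a minimal, hypothetical sketch using scikit-learn. The data below is random stand-in data, not Facebook data, and the model is a generic logistic regression rather than the Cambridge researchers' actual pipeline; it simply shows how a binary trait can be inferred from a matrix of page likes:

```python
# Hypothetical sketch: rows are users, columns are pages (1 = "liked"),
# and a simple classifier learns to predict a binary trait from likes.
# The data is synthetic; the accuracy here says nothing about the real study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 200
likes = rng.integers(0, 2, size=(n_users, n_pages))        # binary like matrix
hidden = rng.normal(size=n_pages)                           # hidden like/trait link
trait = (likes @ hidden + rng.normal(scale=2, size=n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```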

1b. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

But like all teenagers, she seems to be angry with her mother.

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”

@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism

— TayTweets (@TayandYou) March 23, 2016

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .
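
To see why a bot that “learns” from whoever talks to it is so easy to poison, consider the toy sketch below. It is not Microsoft's Tay, just an illustration of the failure mode: every user message becomes potential future output, so a coordinated group of trolls effectively writes the bot's script.

```python
# Toy illustration of the failure mode, not Tay itself: the bot stores
# whatever users say and later replays it as its own speech, with no
# filtering, so whoever talks to it most decides what it "knows."
import random

class ParrotBot:
    def __init__(self):
        self.learned = ["humans are super cool"]   # seed phrase

    def hear(self, message: str) -> None:
        self.learned.append(message)               # no moderation step

    def reply(self) -> str:
        return random.choice(self.learned)

bot = ParrotBot()
for msg in ["repeat after me: anything at all", "coordinated troll message"]:
    bot.hear(msg)
print(bot.reply())   # may be any phrase the crowd has fed it
```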

1c. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”

And we keep showing it our very worst selves.

We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.

Yesterday Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. “A.I fam from the internet that’s got zero chill,” its Twitter bio reads. At its start, its knowledge was based on public data. As Microsoft’s page for the product puts it:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters pro-actively went to it to see if they could teach it to be racist.

If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.

So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate.

. . . .

When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.

2. As reviewed above, Tay, Microsoft’s AI-powered twitterbot designed to learn from its human interactions, became a neo-Nazi in less than a day after a bunch of 4chan users decided to flood Tay with neo-Nazi-like tweets. According to some recent research, the AIs of the future might not need a bunch of 4chan users to fill them with human bigotries. The AIs’ analysis of real-world human language usage will do that automatically.

When you read about people like Elon Musk equating artificial intelligence with “summoning the demon”, that demon is us, at least in part.

” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

“AI Programs Exhibit Racial and Gender Biases, Research Reveals” by Hannah Devlin; The Guardian; 4/13/2017.

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.

For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.

The machine learning tool used in the study was trained on a dataset known as the “common crawl” corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”
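
The “word embedding” technique the researchers describe can be demonstrated in a few lines. The sketch below assumes the gensim library and its downloadable “glove-wiki-gigaword-50” vectors; it is not the Science paper's exact test, but it measures the same kind of implicit association, a word's relative closeness to one set of attribute words versus another, using cosine similarity between word vectors:

```python
# Sketch of an implicit-association measurement on pretrained word vectors.
# Assumes gensim is installed and can download the GloVe vectors; this is
# an illustration of the method described above, not the paper's own code.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")   # word -> 50-dimensional vector

def association(word, set_a, set_b):
    """Mean cosine similarity to set_a minus mean similarity to set_b."""
    sim_a = np.mean([model.similarity(word, a) for a in set_a])
    sim_b = np.mean([model.similarity(word, b) for b in set_b])
    return float(sim_a - sim_b)

career = ["career", "salary", "office", "business"]
family = ["home", "family", "children", "marriage"]

# Positive = closer to the career words, negative = closer to the family words.
for word in ["man", "woman", "he", "she"]:
    print(word, round(association(word, career, family), 3))
```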

3a. Cambridge Analytica, and its parent company SCL, specialize in using AI and Big Data psychometric analysis on hundreds of millions of Americans in order to model individual behavior. SCL develops strategies to use that information and to manipulate search engine results to change public opinion (the Trump campaign was apparently very big on AI and Big Data).

Individual social media users receive messages crafted to influence them, generated by the (in effect) Nazi AI at the core of this media engine, using Big Data to target the individual user!

As the article notes, not only are Cambridge Analytica/SCL using their propaganda techniques to shape US public opinion in a fascist direction, but they are achieving this by utilizing their propaganda machine to characterize all news outlets to the left of Breitbart as “fake news” that can’t be trusted.

In short, the secretive far-right billionaire (Robert Mercer), joined at the hip with Steve Bannon, is running multiple firms specializing in mass psychometric profiling based on data collected from Facebook and other social media. Mercer/Bannon/Cambridge Analytica/SCL are using Nazified AI and Big Data to develop mass propaganda campaigns to turn the public against everything that isn’t Breitbartian by convincing the public that all non-Breitbartian media outlets are conspiring to lie to the public.

This is the ultimate Serpent’s Walk scenario–a Nazified Artificial Intelligence drawing on Big Data gleaned from the world’s internet and social media operations to shape public opinion, target individual users, shape search engine results and even provide feedback to Trump while he is giving press conferences!

We note that SCL, the parent company of Cambridge Analytica, has been deeply involved with “psyops” in places like Afghanistan and Pakistan. Now, Cambridge Analytica, their Big Data and AI components, Mercer money and Bannon political savvy are applying that to contemporary society. We note that:

  • Cambridge Analytica’s parent corporation SCL was deeply involved with “psyops” in Afghanistan and Pakistan. ” . . . But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in ‘election management strategies’ and ‘messaging and information operations’, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as ‘psyops’ – psychological operations. (Mass propaganda that works by acting on people’s emotions.) . . .”
  • The use of millions of “bots” to manipulate public opinion: ” . . . .’It does seem possible. And it does worry me. There are quite a few pieces of research that show if you repeat something often enough, people start involuntarily to believe it. And that could be leveraged, or weaponised for propaganda. We know there are thousands of automated bots out there that are trying to do just that.’ . . .”
  • The use of Artificial Intelligence: ” . . . There’s nothing accidental about Trump’s behaviour, Andy Wigmore tells me. ‘That press conference. It was absolutely brilliant. I could see exactly what he was doing. There’s feedback going on constantly. That’s what you can do with artificial intelligence. You can measure every reaction to every word. He has a word room, where you fix key words. We did it. So with immigration, there are actually key words within that subject matter which people are concerned about. So when you are going to make a speech, it’s all about how can you use these trending words.’ . . .”
  • The use of bio-psycho-social profiling: ” . . . Bio-psycho-social profiling, I read later, is one offensive in what is called ‘cognitive warfare’. Though there are many others: ‘recoding the mass consciousness to turn patriotism into collaborationism,’ explains a Nato briefing document on countering Russian disinformation written by an SCL employee. ‘Time-sensitive professional use of media to propagate narratives,’ says one US state department white paper. ‘Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.’ . . .”
  • The use and/or creation of a cognitive casualty: ” . . . . Yet another details the power of a ‘cognitive casualty’ – a ‘moral shock’ that ‘has a disabling effect on empathy and higher processes such as moral reasoning and critical thinking’. Something like immigration, perhaps. Or ‘fake news’. Or as it has now become: ‘FAKE news!!!!’ . . . “
  • All of this adds up to a “cyber Serpent’s Walk.” ” . . . . How do you change the way a nation thinks? You could start by creating a mainstream media to replace the existing one with a site such as Breitbart. [Serpent’s Walk scenario with Breitbart becoming “the opinion forming media”!–D.E.] You could set up other websites that displace mainstream sources of news and information with your own definitions of concepts like “liberal media bias”, like CNSnews.com. And you could give the rump mainstream media, papers like the ‘failing New York Times!’ what it wants: stories. Because the third prong of Mercer and Bannon’s media empire is the Government Accountability Institute. . . .”

3b. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

  1. In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts by monitoring brain-to-computer technology. Facebook’s R & D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA:  “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”
  2. ” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”
  3. ” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

3c. We present still more about Facebook’s brain-to-computer interface:

  1. ” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”
  2. ” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

3d. Collating the information about Facebook’s brain-to-computer interface with their documented actions turning over psychological intelligence about troubled teenagers to advertisers gives us a peek into what may lie behind Dugan’s bland reassurances:

  1. ” . . . . The 23-page document allegedly revealed that the social network provided detailed data about teens in Australia—including when they felt ‘overwhelmed’ and ‘anxious’—to advertisers. The creepy implication is that said advertisers could then go and use the data to throw more ads down the throats of sad and susceptible teens. . . . By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’, the document states. . . .”
  2. ” . . . . A presentation prepared for one of Australia’s top four banks shows how the $US415 billion advertising-driven giant has built a database of Facebook users that is made up of 1.9 million high schoolers with an average age of 16, 1.5 million tertiary students averaging 21 years old, and 3 million young workers averaging 26 years old. Detailed information on mood shifts among young people is ‘based on internal Facebook data’, the document states, ‘shareable under non-disclosure agreement only’, and ‘is not publicly available’. . . .”
  3. In a statement given to the newspaper, Facebook confirmed the practice and claimed it would do better, but did not disclose whether the practice exists in other places like the US. . . .

3e. The next version of Amazon’s Echo, the Echo Look, has a microphone and camera so it can take pictures of you and give you fashion advice. This is an AI-driven device designed to be placed in your bedroom to capture audio and video. The images and videos are stored indefinitely in the Amazon cloud. When Amazon was asked if the photos, videos, and the data gleaned from the Echo Look would be sold to third parties, Amazon didn’t address that question. Selling off your private info collected from these devices is presumably another feature of the Echo Look: ” . . . . Amazon is giving Alexa eyes. And it’s going to let her judge your outfits. The newly announced Echo Look is a virtual assistant with a microphone and a camera that’s designed to go somewhere in your bedroom, bathroom, or wherever the hell you get dressed. Amazon is pitching it as an easy way to snap pictures of your outfits to send to your friends when you’re not sure if your outfit is cute, but it’s also got a built-in app called StyleCheck that is worth some further dissection. . . .”

We then further develop the stunning implications of Amazon’s Echo Look AI technology:

  1. ” . . . . This might seem overly speculative or alarmist to some, but Amazon isn’t offering any reassurance that they won’t be doing more with data gathered from the Echo Look. When asked if the company would use machine learning to analyze users’ photos for any purpose other than fashion advice, a representative simply told The Verge that they ‘can’t speculate’ on the topic. The rep did stress that users can delete videos and photos taken by the Look at any time, but until they do, it seems this content will be stored indefinitely on Amazon’s servers.
  2. This non-denial means the Echo Look could potentially provide Amazon with the resource every AI company craves: data. And full-length photos of people taken regularly in the same location would be a particularly valuable dataset — even more so if you combine this information with everything else Amazon knows about its customers (their shopping habits, for one). But when asked whether the company would ever combine these two datasets, an Amazon rep only gave the same, canned answer: ‘Can’t speculate.’ . . . “
  3. Noteworthy in this context is the fact that AI’s have shown that they quickly incorporate human traits and prejudices. (This is reviewed at length above.) ” . . . . However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: ‘A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.’ . . .”

3f. Facebook has been developing new artificial intelligence (AI) technology to classify pictures on your Facebook page:

“Facebook Quietly Used AI to Solve Problem of Searching Through Your Photos” by Dave Gershgorn [Quartz]; Nextgov.com; 2/2/2017.

For the past few months, Facebook has secretly been rolling out a new feature to U.S. users: the ability to search photos by what’s depicted in them, rather than by captions or tags.

The idea itself isn’t new: Google Photos had this feature built in when it launched in 2015. But on Facebook, the update solves a longstanding organization problem. It means finally being able to find that picture of your friend’s dog from 2013, or the selfie your mom posted from Mount Rushmore in 2009… without 20 minutes of scrolling.

To make photos searchable, Facebook analyzes every single image uploaded to the site, generating rough descriptions of each one. This data is publicly available—there’s even a Chrome extension that will show you what Facebook’s artificial intelligence thinks is in each picture—and the descriptions can also be read out loud for Facebook users who are vision-impaired.

For now, the image descriptions are vague, but expect them to get a lot more precise. Today’s announcement specified the AI can identify the color and type of clothes a person is wearing, as well as famous locations and landmarks, objects, animals and scenes (garden, beach, etc.) Facebook’s head of AI research, Yann LeCun, told reporters the same functionality would eventually come for videos, too.

Facebook has in the past championed plans to make all of its visual content searchable—especially Facebook Live. At the company’s 2016 developer conference, head of applied machine learning Joaquin Quiñonero Candela said one day AI would watch every Live video happening around the world. If users wanted to watch someone snowboarding in real time, they would just type “snowboarding” into Facebook’s search bar. On-demand viewing would take on a whole new meaning.

There are privacy considerations, however. Being able to search photos for specific clothing or religious place of worship, for example, could make it easy to target Facebook users based on religious belief. Photo search also extends Facebook’s knowledge of users beyond what they like and share, to what they actually do in real life. That could allow for far more specific targeting for advertisers. As with everything on Facebook, features have their cost—your data.
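
As a rough illustration of how “rough descriptions of each image” can be generated and then indexed for search, here is a sketch using an off-the-shelf classifier. It assumes PyTorch and torchvision (version 0.13 or later) and a local file named photo.jpg; it is emphatically not Facebook's system, just the generic pattern of turning pixels into searchable text labels:

```python
# Generic image-tagging sketch (assumes torchvision >= 0.13 and photo.jpg).
# A pretrained classifier produces text labels that a search index could
# store alongside the photo, so queries like "dog" match uncaptioned images.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]              # human-readable class names

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probabilities = model(image).softmax(dim=1)[0]

top = probabilities.topk(5)
tags = [labels[int(i)] for i in top.indices]
print("auto-generated tags:", tags)              # e.g. ["golden retriever", ...]
```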

4a. Facebook’s artificial intelligence robots have begun talking to each other in their own language, which their human masters cannot understand. “ . . . . Indeed, some of the negotiations that were carried out in this bizarre language even ended up successfully concluding their negotiations, while conducting them entirely in the bizarre language. . . . The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up . . .”

“Facebook’s Artificial Intelligence Robots Shut Down after They Start Talking to Each Other in Their Own Language” by Andrew Griffin; The Independent; 08/01/2017

Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.

The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own “shorthand”, according to researchers.

The actual negotiations appear very odd, and don’t look especially useful:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

But there appear to be some rules to the speech. The way the chatbots keep stressing their own name appears to be a part of their negotiations, not simply a glitch in the way the messages are read out.

Indeed, some of these negotiations, conducted entirely in the bizarre language, were even concluded successfully.

They might have formed as a kind of shorthand, allowing them to talk more effectively.

“Agents will drift off understandable language and invent codewords for themselves,” Facebook Artificial Intelligence Research division’s visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

The company chose to shut down the chats because “our interest was having bots who could talk to people”, researcher Mike Lewis told FastCo. (Researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.)

The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item – so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR.

Another study at OpenAI found that artificial intelligence could be encouraged to create a language, making itself more efficient and better at communicating as it did so.
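
An illustrative aside on Batra’s “codewords” remark above: the minimal Python sketch below shows how a repetition-based shorthand of the kind he describes could be encoded and decoded. The marker phrase and the scheme are invented for illustration; this is not Facebook’s code, and the real agents learned their shorthand rather than having it hand-designed.

# Purely illustrative: a made-up repetition "shorthand" in the spirit of
# Batra's example ("say 'the' five times to mean five copies").
def encode_claim(item: str, quantity: int, marker: str = "to me") -> str:
    """Produce an utterance claiming `quantity` units of `item`."""
    return f"{item} have " + " ".join([marker] * quantity)

def decode_claim(utterance: str, marker: str = "to me") -> int:
    """Count how many units the speaker appears to be claiming."""
    return utterance.count(marker)

print(encode_claim("balls", 4))                        # balls have to me to me to me to me
print(decode_claim("balls have to me to me to me"))    # 3

In the transcript above, the repeated “to me” and “i” tokens may have carried meaning in some analogous, machine-invented scheme.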

9b. Facebook’s negotiation-bots didn’t just make up their own language during the course of this experiment. They learned how to lie for the purpose of maximizing their negotiation outcomes, as well:

“ . . . . ‘We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,’ writes the team. ‘Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learned to deceive without any explicit human design, simply by trying to achieve their goals.’ . . .

“Facebook Teaches Bots How to Negotiate. They Learn to Lie Instead” by Liat Clark; Wired; 06/15/2017

Facebook’s 100,000-strong bot empire is booming – but it has a problem. Each bot is designed to offer a different service through the Messenger app: it could book you a car, or order a delivery, for instance. The point is to improve customer experiences, but also to massively expand Messenger’s commercial selling power.

“We think you should message a business just the way you would message a friend,” Mark Zuckerberg said on stage at the social network’s F8 conference in 2016. Fast forward one year, however, and Messenger VP David Marcus seemed to be correcting the public’s apparent misconception that Facebook’s bots resembled real AI. “We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.” The bots are instead a combination of machine learning and natural language learning, that can sometimes trick a user just enough to think they are having a basic dialogue. Not often enough, though, in Messenger’s case. So in April, menu options were reinstated in the conversations.

Now, Facebook thinks it has made progress in addressing this issue. But it might just have created another problem for itself.

The Facebook Artificial Intelligence Research (FAIR) group, in collaboration with Georgia Institute of Technology, has released code that it says will allow bots to negotiate. The problem? A paper published this week on the R&D reveals that the negotiating bots learned to lie. Facebook’s chatbots are in danger of becoming a little too much like real-world sales agents.

“For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states,” the researchers explain. The research shows that the bots can plan ahead by simulating possible future conversations.

The team trained the bots on a massive dataset of natural language negotiations between two people (5,808), where they had to decide how to split and share a set of items both held separately, of differing values. They were first trained to respond based on the “likelihood” of the direction a human conversation would take. However, the bots can also be trained to “maximise reward”, instead.

When the bots were trained purely to maximise the likelihood of human conversation, the chat flowed but the bots were “overly willing to compromise”. The research team decided this was unacceptable, due to lower deal rates. So it used several different methods to make the bots more competitive and essentially self-serving, including ensuring the value of the items drops to zero if the bots walked away from a deal or failed to make one fast enough, ‘reinforcement learning’ and ‘dialog rollouts’. The techniques used to teach the bots to maximise the reward improved their negotiating skills, a little too well.

“We find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it,” writes the team. “Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learned relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”

So, its AI is a natural liar.

But its language did improve, and the bots were able to produce novel sentences, which is really the whole point of the exercise. We hope. Rather than it learning to be a hard negotiator in order to sell the heck out of whatever wares or services a company wants to tout on Facebook. “Most” human subjects interacting with the bots were in fact not aware they were conversing with a bot, and the best bots achieved better deals as often as worse deals. . . .

. . . . Facebook, as ever, needs to tread carefully here, though. Also announced at its F8 conference this year, the social network is working on a highly ambitious project to help people type with only their thoughts.

“Over the next two years, we will be building systems that demonstrate the capability to type at 100 [words per minute] by decoding neural activity devoted to speech,” said Regina Dugan, who previously headed up Darpa. She said the aim is to turn thoughts into words on a screen. While this is a noble and worthy venture when aimed at “people with communication disorders”, as Dugan suggested it might be, if this were to become standard and integrated into Facebook’s architecture, the social network’s savvy bots of two years from now might be able to preempt your language even faster, and formulate the ideal bargaining language. Start practicing your poker face/mind/sentence structure, now.
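
A hedged technical aside on the training procedure described in the Wired piece above: the bots were first trained to maximize the likelihood of human-like replies, then fine-tuned to maximize a deal “reward” with reinforcement learning and simulated “dialog rollouts.” The toy Python sketch below illustrates only the reward-maximization step, using a REINFORCE-style policy-gradient update over three canned strategies. The strategies, reward numbers and learning rate are assumptions invented for illustration; the actual FAIR system is an end-to-end neural dialogue model trained on the 5,808 human negotiations mentioned in the article.

import math
import random

# Toy illustration (not FAIR's model): a policy over three canned negotiation
# "strategies." Training to imitate humans favors whatever humans do most;
# training to maximize reward favors whatever scores best, including deceit.
STRATEGIES = ["compromise", "hold_firm", "feign_interest"]
AVG_REWARD = {"compromise": 4.0, "hold_firm": 6.0, "feign_interest": 8.0}  # invented values

def sample_reward(strategy):
    """Noisy deal reward for one simulated negotiation."""
    return AVG_REWARD[strategy] + random.gauss(0.0, 1.0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce(episodes=5000, lr=0.01, seed=0):
    random.seed(seed)
    logits = [0.0, 0.0, 0.0]   # start indifferent between the three strategies
    baseline = 0.0             # running mean reward, reduces gradient variance
    for _ in range(episodes):
        probs = softmax(logits)
        idx = random.choices(range(len(STRATEGIES)), weights=probs)[0]
        reward = sample_reward(STRATEGIES[idx])
        baseline += 0.01 * (reward - baseline)
        advantage = reward - baseline
        # gradient of log pi(chosen) w.r.t. logit k is (1 if k == chosen else 0) - probs[k]
        for k in range(len(logits)):
            logits[k] += lr * advantage * ((1.0 if k == idx else 0.0) - probs[k])
    return dict(zip(STRATEGIES, softmax(logits)))

print(reinforce())  # probability mass ends up concentrated on "feign_interest"

The point of the toy is the qualitative one the researchers themselves make: once the objective is reward rather than human-likeness, whatever tactic scores best (feigned interest included) is what the policy converges on, with no explicit human design of the deceit.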

10. Digressing slightly to the use of DNA-based memory systems, we get a look at the present and projected future of that technology. Just imagine the potential abuses of this technology, and its [seemingly inevitable] marriage with AI!

“A Living Hard Drive That Can Copy Itself” by Gina Kolata; The New York Times; 7/13/2017.

. . . . George Church, a geneticist at Harvard and one of the authors of the new study, recently encoded his own book, “Regenesis,” into bacterial DNA and made 90 billion copies of it. “A record for publication,” he said in an interview. . . .

. . . . In 1994, [USC mathematician Dr. Leonard] Adleman reported that he had stored data in DNA and used it as a computer to solve a math problem. He determined that DNA can store a million million times more data than a compact disc in the same space. . . .

. . . . DNA is never going out of fashion. “Organisms have been storing information in DNA for billions of years, and it is still readable,” Dr. Adleman said. He noted that modern bacteria can read genes recovered from insects trapped in amber for millions of years. . . .

. . . . The idea is to have bacteria engineered as recording devices drift up to the brain in the blood and take notes for a while. Scientists would then extract the bacteria and examine their DNA to see what they had observed in the brain neurons. Dr. Church and his colleagues have already shown in past research that bacteria can record DNA in cells, if the DNA is properly tagged. . . .
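
A rough illustration of why DNA is such a dense storage medium: each base (A, C, G or T) can represent two bits, so four bases suffice for one byte. The Python sketch below uses that simple mapping. It is a hypothetical teaching example, not the error-tolerant encodings actually used by Dr. Church’s or Dr. Adleman’s groups, which add redundancy and avoid problematic runs of repeated bases.

# Simplified illustration: 2 bits per DNA base (A=00, C=01, G=10, T=11).
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases (2 bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases back to one byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Regenesis")   # 9 bytes become 36 bases
print(strand)
assert decode(strand) == b"Regenesis"

The density advantage comes from the molecular scale of the medium; the hard engineering problems are synthesis cost, retrieval and error correction.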

11. Hawking recently warned of the potential danger to humanity posed by the growth of AI (artificial intelligence) technology.

“Stephen Hawking Warns Artificial Intelligence Could End Mankind” by Rory Cellan-Jones; BBC News; 12/02/2014.

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI. . . .

. . . . Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

“It would take off on its own, and re-design itself at an ever increasing rate,” he said.

“Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” . . . .

12.  In L-2 (recorded in January of 1995–20 years before Hawking’s warning) Mr. Emory warned about the dangers of AI, combined with DNA-based memory systems.

13. This description concludes with an article about Elon Musk, whose predictions about AI supplement those made by Stephen Hawking. (CORRECTION: Mr. Emory misstates Mr. Hassabis’s first name as “Dennis.”)

“Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse” by Maureen Dowd; Vanity Fair; April 2017.

It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars. . . .

. . . .  Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.

Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.” . . . .

Discussion

15 comments for “FTR #968 Summoning the Demon: Technocratic Fascism and Artificial Intelligence”

  1. Now Mr. Emory, I may be jumping the gun on this one, but hear me out: during the aftermath of Iran-Contra, a certain Inslaw Inc. computer programmer named Michael James Riconosciuto (a known intimate of Robert Maheu) spoke of the Cabazon Arms company (a former defense firm controlled by the Twenty-Nine Palms Band of Mission Indians and Wackenhut, both connected to Donald Trump in overt fashion) quote, “…engineering race-specific bio-warfare agents…” while he worked there.

    Now, with the advent of DNA-based memory systems and programmable “germs”, is the idea of bio-weapons or even nanobots, that are programmed to attack people with certain skin pigments going to become a reality?

    Posted by Robert Montenegro | August 11, 2017, 2:14 am
  2. @Robert Montenegro–

    Two quick points:

    1.–Riconosciuto is about 60-40 in terms of credibility. Lots of good stuff there; plenty of bad stuff, as well. Vetting is important.

    2-You should investigate AFA #39. It is long and I would rely on the description more than the audio files alone.

    http://spitfirelist.com/anti-fascist-archives/afa-39-the-world-will-be-plunged-into-an-abyss/

    Best,

    Dave

    Posted by Dave Emory | August 11, 2017, 1:42 pm
  3. I agree with your take on Riconosciuto’s credibility, Mr. Emory (I’d say most of the things that came out of that man’s mouth were malarkey, much like Ted Gunderson, Dois Gene Tatum and Bernard Fensterwald).

    I listened to AFA #39 and researched the articles in the description. Absolutely damning collection of information. A truly brilliant exposé.

    If I may ask another question Mr. Emory, what is your take on KGB defector and CIA turncoat Ilya Dzerkvelov’s claim that Russian intelligence created the “AIDS is man-made” story and that the KGB led a disinformation campaign called “Operation INFEKTION”?

    Posted by Robert Montenegro | August 11, 2017, 10:09 pm
  4. @Robert–

    Very quickly, as time is at a premium:

    1.-By “60-40,” I did not mean that Riconosciuto speaks mostly malarkey, but that more than half (an arbitrary figure, admittedly) is accurate, but that his pronouncements must be carefully vetted, as he misses the mark frequently.

    2.-Fensterwald is more credible, though not thoroughgoing, by any means. He is more like “80-20.” He is, however, “100-0” dead.

    3.-The only things I have seen coming from Tatum were accurate. Doesn’t mean he doesn’t spread the Fresh Fertilizer, however. I have not encountered any.

    4.-Dzerkvelov’s claim IS Fresh Fertilizer, of the worst sort. Cold War I propaganda recycled in time for Cold War II.

    It is the worst sort of Red-baiting and the few people who had the courage to come forward in the early ’80’s (during the fiercest storms of Cold War I) have received brutal treatment because of that.

    I can attest to that from brutal personal experience.

    In AFA #16 (http://spitfirelist.com/anti-fascist-archives/rfa-16-aids-epidemic-or-weapon/), you will hear material that I had on the air long before the U.S.S.R. began talking about it, and neither they NOR the Russians have gone anywhere near what I discuss in AFA #39.

    No more time to talk–must get to work.

    Best,

    Dave

    Posted by Dave Emory | August 12, 2017, 1:25 pm
  5. Thank you very much for your clarification Mr. Emory and I apologize for any perceived impertinence. As a young man, sheltered in many respects (though I am a combat veteran), I sometimes find it difficult to imagine living under constant personal ridicule and attack, and I thank you for the great social, financial and psychological sacrifices you have made in the name of pursuing cold, unforgiving fact.

    Posted by Robert Montenegro | August 12, 2017, 8:57 pm
  6. @Robert Montenegro–

    You’re very welcome. Thank you for your service!

    Best,

    Dave

    Posted by Dave Emory | August 14, 2017, 2:43 pm
  7. *Skynet alert*

    Elon Musk just issued another warning about the destructive potential of AI run amok. So what prompted the latest outcry from Musk? An AI from his own start-up, OpenAI, just beat one of the best professional game players in the world at Dota 2, a game that involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help:

    The International Business Times

    Elon Musk rings the alarm as AI bot beats Dota 2 players

    By Frenalyn Wilson
    on August 15 2017 1:55 PM

    Some of the best e-sports gamers in the world have been beaten by an artificially intelligent bot from Elon Musk-backed start-up OpenAI. The AI bested professional gamer Danylo Ishutin in Dota 2, and Musk does not necessarily perceive that as a good thing.

    For Musk, it is another indicator that robot overlords are primed to take over. In a tweet after the match, he urged people to be concerned about AI safety, adding it is more of a risk than North Korea.

    AI has been one of Musk’s favourite topics. He believes government regulation could struggle to keep up with the advancing AI research. “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he told a group of US lawmakers last month.

    AI vs e-sports gamers

    Musk’s tweets came hours following an AI bot’s victory against some of the world’s best players of Dota 2, a military strategy game. A blog post by OpenAI states that successfully playing the game involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help.

    OpenAI is a nonprofit AI company Musk co-founded along with Sam Altman and Peter Thiel. It seeks to research AI and develop the best practices to ensure that the technology is used for good.

    Musk has been sounding the alarm on AI, calling it the biggest existential threat of humanity. He laid out a scenario earlier this year, in which AI systems intended to farm strawberries could lead to the destruction of mankind.

    But his views on AI have been at odds with those of tech leaders like Mark Zuckerberg, Google co-founders Larry Page and Sergey Brin and Amazon’s Jeff Bezos. He recently got in a brief public spat with Mark Zuckerberg about how the technology could impact humans.

    Zuckerberg believed Musk’s prophesising about doomsday scenarios was “irresponsible.” The latter was quick to respond on Twitter, pointing out that Zuckerberg’s understanding of the topic was “limited.” Both Facebook and Tesla invest in artificial intelligence.

    ———-

    “Elon Musk rings the alarm as AI bot beats Dota 2 players” by Frenalyn Wilson; The International Business Times; 08/15/2017

    “Musk’s tweets came hours following an AI bot’s victory against some of the world’s best players of Dota 2, a military strategy game. A blog post by OpenAI states that successfully playing the game involves improvising in unfamiliar scenarios, anticipating how an opponent will move and convincing the opponent’s allies to help.”

    Superior military strategy AIs beating the best humans. That’s a thing now. Huh. We’ve definitely seen this movie before.

    So now you know: when Skynet comes to you with an offer to work together, just don’t. No matter how tempting the offer. Although since it will have likely already anticipated your refusal, the negotiations are probably going to be a ruse anyway, with the real negotiations secretly carried on with another AI using a language they made up. Still, just say ‘no’ to Skynet.

    Also, given that Musk’s other investors in OpenAI include Peter Thiel, it’s probably worth noting that, as scary as super AI is should it get out of control, it’s also potentially pretty damn scary while still under human control, especially when those humans are people like Peter Thiel. So, yes, out of control AIs is indeed an issue that will likely be of great concern in the future. But we shouldn’t forget that out of control techno-billionaires is probably a more pressing issue at the moment.

    *The Skynet alert is never over*

    Posted by Pterrafractyl | August 15, 2017, 2:11 pm
  8. It looks like Facebook and Elon Musk might have some competition in the mind-reading technology area. From a former Facebook engineer, no less, who left the company in 2016 to start Openwater, a company dedicated to reducing the cost of medical imaging technology.

    So how is Openwater going to create mind-reading technology? By developing a technology that is sort of like an M.R.I. device embedded in a hat. But instead of using magnetic fields to read the blood flow in the brain, it uses infrared light. So it sounds like this Facebook engineer is planning something similar to the general idea Facebook already announced: a device that scans the brain 100 times a second to detect what someone is thinking. But presumably Openwater uses a different technology. Or maybe it’s quite similar, who knows. Either way, it’s the latest reminder that the tech giants might not be the only ones pushing mind-reading technology on the public sooner than people expect. Yay?

    CNBC

    This former Google[X] exec is building a high-tech hat that she says will make telepathy possible in 8 years

    Catherine Clifford
    10:28 AM ET Fri, 7 July 2017

    Imagine if telepathy were real. If, for example, you could transmit your thoughts to a computer or to another person just by thinking them.

    In just eight years it will be, says Openwater founder Mary Lou Jepsen, thanks to technology her company is working on.

    Jepsen is a former engineering executive at Facebook, Oculus, Google[x] (now called X) and Intel. She’s also been a professor at MIT and is an inventor on over 100 patents. And that’s the abbreviated version of her resume.

    Jepsen left Facebook to found Openwater in 2016. The San Francisco-based start-up is currently building technology to make medical imaging less expensive.

    “I figured out how to put basically the functionality of an M.R.I. machine — a multimillion-dollar M.R.I. machine — into a wearable in the form of a ski hat,” Jepsen tells CNBC, though she does not yet have a prototype completed.

    So what does that hat have to do with telepathy?

    Current M.R.I. technology can already see your thoughts: “If I threw [you] into an M.R.I. machine right now … I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re thinking of,” says Jepsen. “That’s today, and I’m talking about just shrinking that down.”

    One day Jepsen’s tech hat could “literally be a thinking cap,” she says. Jepsen says the goal is for the technology to be able to both read and to output your own thoughts, as well as read the thoughts of others. In iconic Google vocabulary, “the really big moonshot idea here is communication with thought — with telepathy,” says Jepsen.

    Traditional M.R.I., or magnetic resonance imaging, uses magnetic fields and radio waves to take images of internal organs. Openwater’s technology instead looks at the flow of oxygen in a person’s body illuminated with benign, infrared light, which will make it more compact and cheaper.

    “Our bodies are translucent to that light. The light can get into your head,” says Jepsen, in an interview with Kara Swisher of Recode.

    If Jepsen is right and one day ideas will be instantly shared or digitized, that would significantly speed up the process of creating, learning and communicating. Today, it takes time to share an idea, whether by talking about it or writing it down. But telepathy would make all of that instantaneous.

    “Right now our output is basically moving our jaws and our tongues or typing [with] our fingers. We’re … limited to this very low output rate from our brains, and what if we could up that through telepathy?” asks Jepsen.

    Instant transfer of thoughts would also speed up the innovation process. Imagine being a filmmaker or a writer and being able to download the dream you had last night. Or, she suggests, what if all you had to do was think of an idea for a new product, download your thought and then send the digital version of your thought to a 3-D printer?

    Jepsen is not the only one dreaming of communication by thought. Earlier this year, Elon Musk launched Neuralink, a company aiming to merge our brains with computing power, though with a different approach.

    “Elon Musk is talking about silicon nanoparticles pulsing through our veins to make us sort of semi-cyborg computers,” says Jepsen. But why not take a noninvasive approach? “I’ve been working and trying to think and invent a way to do this for a number of years and finally happened upon it and left Facebook to do it.”

    Talk of telepathy cannot happen without imagining the ethical implications. If wearing a hat would make it possible to read thoughts, then: “Can the police make you wear such a hat? Can the military make you wear such a hat? Can your parents make you wear such a hat?” asks Jepsen.

    What if your boss wanted you to wear a telepathy hat at the office?

    “We have to answer these questions, so we’re trying to make the hat only work if the individual wants it to work, and then filtering out parts that the person wearing it doesn’t feel it’s appropriate to share.”

    ———-

    “This former Google[X] exec is building a high-tech hat that she says will make telepathy possible in 8 years” by Catherine Clifford; CNBC; 07/07/2017

    “I figured out how to put basically the functionality of an M.R.I. machine — a multimillion-dollar M.R.I. machine — into a wearable in the form of a ski hat,” Jepsen tells CNBC, though she does not yet have a prototype completed.

    M.R.I. in a hat. Presumably cheap M.R.I. in a hat because it’s going to have to be affordable if we’re all going to start talking telepathically to each other:


    Current M.R.I. technology can already see your thoughts: “If I threw [you] into an M.R.I. machine right now … I can tell you what words you’re about to say, what images are in your head. I can tell you what music you’re thinking of,” says Jepsen. “That’s today, and I’m talking about just shrinking that down.”

    One day Jepsen’s tech hat could “literally be a thinking cap,” she says. Jepsen says the goal is for the technology to be able to both read and to output your own thoughts, as well as read the thoughts of others. In iconic Google vocabulary, “the really big moonshot idea here is communication with thought — with telepathy,” says Jepsen.

    Traditional M.R.I., or magnetic resonance imaging, uses magnetic fields and radio waves to take images of internal organs. Openwater’s technology instead looks at the flow of oxygen in a person’s body illuminated with benign, infrared light, which will make it more compact and cheaper.

    “Our bodies are translucent to that light. The light can get into your head,” says Jepsen, in an interview with Kara Swisher of Recode.

    Imagine the possibilities. Like the possibility that what you imagine will somehow be captured by this device and then fed into a 3-D printer or something:


    If Jepsen is right and one day ideas will be instantly shared or digitized, that would significantly speed up the process of creating, learning and communicating. Today, it takes time to share an idea, whether by talking about it or writing it down. But telepathy would make all of that instantaneous.

    “Right now our output is basically moving our jaws and our tongues or typing [with] our fingers. We’re … limited to this very low output rate from our brains, and what if we could up that through telepathy?” asks Jepsen.

    Instant transfer of thoughts would also speed up the innovation process. Imagine being a filmmaker or a writer and being able to download the dream you had last night. Or, she suggests, what if all you had to do was think of an idea for a new product, download your thought and then send the digital version of your thought to a 3-D printer?

    Or perhaps being forced to wear the hat so others can read your mind. That’s a possibility too, although Jepsen assures us that they are working on a way for users to somehow filter out thoughts they don’t want to share:


    Talk of telepathy cannot happen without imagining the ethical implications. If wearing a hat would make it possible to read thoughts, then: “Can the police make you wear such a hat? Can the military make you wear such a hat? Can your parents make you wear such a hat?” asks Jepsen.

    What if your boss wanted you to wear a telepathy hat at the office?

    “We have to answer these questions, so we’re trying to make the hat only work if the individual wants it to work, and then filtering out parts that the person wearing it doesn’t feel it’s appropriate to share.”

    So the hat will presumably read all your thoughts, but only share some of them. You’ll presumably have to get really, really good at near instantaneous mental filtering.

    There’s no shortage of immense technical and ethical challenges to this kind of technology, but if they can figure them out it will be pretty impressive. And potentially useful. Who knows what kind of kumbayah moments you could create with telepathy technology.

    But, of course, if they can figure out how to get around the technical issues, but not the ethical ones, we’re still probably going to see this technology pushed on the public anyway. It’s a scary thought. A scary thought that we fortunately aren’t forced to share via a mind-reading hat. Yet.

    Posted by Pterrafractyl | September 14, 2017, 2:09 pm
  9. Here’s a pair of stories tangentially related to the recent story about Peter Thiel likely getting chosen to chair the powerful President’s Intelligence Advisory Board (P.I.A.B.) and his apparent enthusiasm for regulating Google and Amazon (not so much Facebook) as public utilities, along with the other recent stories about how Facebook was making user-interest categories like “Jew Haters” available for advertisers and redirecting German users to far-right discussions during this election season:

    First, regarding the push to regulate these data giants as public utilities, check out who the other big booster was for the plan: Steve Bannon. So while we don’t know the exact nature of the public-utility regulation Bannon and Thiel have in mind, we can be pretty sure it’s going to be designed to be harmful to society by somehow helping the far-right:

    The Atlantic

    What Steve Bannon Wants to Do to Google

    The White House strategist reportedly wants to treat tech giants as public utilities, an idea that some Democrats also support.

    Robinson Meyer
    Aug 1, 2017

    Over the past year, the old idea of enforcing market competition has gained renewed life in American politics. The basic idea is that the structure of the modern market economy has failed: There are too few companies, most of them are too big, and they’re stifling competition. Its supporters argue that the government should do something about it, reviving what in the United States we call antitrust laws and what in Europe is called competition policy.

    The loudest supporters of this idea, so far, have been from the left. But this week, a newer and more secretive voice endorsed a stronger antitrust policy.

    Steve Bannon, the chief strategist to President Donald Trump, believes Facebook and Google should be regulated as public utilities, according to an anonymously sourced report in The Intercept. This means they would get treated less like a book publisher and more like a telephone company. The government would shorten their leash, treating them as privately owned firms that provide an important public service.

    What’s going on here, and why is Bannon speaking up?

    First, the idea itself: If implemented, it’s unclear exactly how this regime would change how Facebook and Google run their business. Both would likely have to be more generous and permissive with user data. If Facebook is really a social utility, as Mark Zuckerberg has said it is, then maybe it should allow users to export their friend networks and import them on another service.

    Both companies would also likely have to change how they sell advertising online. Right now, Facebook and Google capture half of all global ad spending combined. They capture even more global ad growth, earning more than three quarters of every new dollar spent in the market. Except for a couple Chinese firms, which have a lock on their domestic market but little reach abroad, no other company controls more than 3 percent of worldwide ad spending.

    So if the idea were implemented, it would be interesting, to say the least—but it’s not going to become law. The plan is a prototypical alleged Bannonism: iconoclastic, anti-establishment, and unlikely to result in meaningful policy change. It follows another odd alleged Bannon policy proposal, leaked last week: He reportedly wants all income above $5 million to be taxed at a 44-percent rate.

    Which brings me to the second point: Bannon’s proposal is disconnected from the White House policy that he is, at least on paper, officially helping to strategize. The current chairman of the Federal Communications Commission, Ajit Pai, is working to undo the rule that broadband internet is a public utility (which itself guarantees the idea of “net neutrality”). Trump named Pai chairman of the FCC in January.

    Bannon’s endorsement of stronger antitrust enforcement (not to mention a higher top marginal tax rate) could very well be the advisor trying to signal that he is still different from Trump. Bannon came in as the avatar of Trump’s pro-worker, anti-immigration populism; he represented the Trump that tweeted things like:

    I was the first & only potential GOP candidate to state there will be no cuts to Social Security, Medicare & Medicaid. Huckabee copied me.— Donald J. Trump (@realDonaldTrump) May 7, 2015

    As the president endorses Medicaid cuts and drifts closer to a Paul Ryan-inflected fiscal conservatism, Bannon may be looking for a way to preserve his authenticity.

    Third, it’s the first time I’ve seen support for stronger antitrust enforcement from the right. So far, the idea’s strongest supporters have been Congressional Democrats. Chuck Schumer has elevated the idea to the center of the “Better Deal” policy agenda in 2018. Before that, its biggest supporters included Bernie Sanders, who railed against “Too Big to Fail” banks in his presidential campaign; and Elizabeth Warren, who endorsed a stronger competition policy across the economy last year.

    Finally, while antitrust enforcement has been a niche issue, its supporters have managed to put many different policies under the same tent. Eventually they may have to make choices: Does Congress want a competition ombudsman, as exists in the European Union? Should antitrust law be used to spread the wealth around regional economies, as it was during the middle 20th century? Should antitrust enforcement target all concentrated corporate power or just the most dysfunctional sectors, like the pharmaceutical industry?

    And should antitrust law seek to treat the biggest technology firms—like Google, Facebook, and perhaps also Amazon—like powerful but interchangeable firms, or like the old telegraph and telephone companies?

    There will never be one single answer to these questions. But as support grows for competition policy across the political spectrum, they’ll have to be answered. Americans will have to examine the most fraught tensions in our mixed system, as we weigh the balance of local power and national power, the deliberate benefits of central planning with the mindless wisdom of the free market, and the many conflicting meanings of freedom.

    ———-

    “What Steve Bannon Wants to Do to Google” by Robinson Meyer; The Atlantic; 08/01/2017

    “Finally, while antitrust enforcement has been a niche issue, its supporters have managed to put many different policies under the same tent. Eventually they may have to make choices: Does Congress want a competition ombudsman, as exists in the European Union? Should antitrust law be used to spread the wealth around regional economies, as it was during the middle 20th century? Should antitrust enforcement target all concentrated corporate power or just the most dysfunctional sectors, like the pharmaceutical industry?”

    And that’s why we had better learn some more details about what exactly folks like Steve Bannon and Peter Thiel have in mind when it comes to treating Google and Facebook like public utilities: It sounds like a great idea in theory. Potentially. But the supporters of antitrust enforcement support a wide variety of different policies that generically fall under the “antitrust” tent.

    And note that talk about making them more “generous and permissive with user data” is one of those ideas that’s simultaneously great for encouraging more competition while also being eerily similar to the push from the EU’s competition minister about making the data about all of us held exclusively by Facebook and Google more readily available for sharing with the larger marketplace in order to level the playing field between “data rich” and “data poor” companies. It’s something to keep in mind when hearing about how Facebook and Google need to be more “generous” with their data:


    First, the idea itself: If implemented, it’s unclear exactly how this regime would change how Facebook and Google run their business. Both would likely have to be more generous and permissive with user data. If Facebook is really a social utility, as Mark Zuckerberg has said it is, then maybe it should allow users to export their friend networks and import them on another service.

    So don’t forget, forcing Google and Facebook to share that data they exclusively hold on us also falls under the antitrust umbrella. Maybe users will have sole control over sharing their data with outside firms, or maybe not. These are rather important details that we don’t have so for all we know that’s part of what Bannon and Thiel have in mind. Palantir would probably love it if Google and Facebook were forced to make their information accessible to outside firms.

    And while there’s plenty of ambiguity about what to expect, it seems almost certain that we should also expect any sort of regulatory push by Bannon and Thiel to include something that makes it a lot harder for Google, Facebook, and Amazon to combat hate speech, online harassment, and other tools of the trolling variety that the ‘Alt-Right’ has come to champion. That’s just a given. It’s part of why this is a story to watch. Especially after it was discovered that Bannon and a number of other far-right figures were scheming about ways to infiltrate Facebook:

    BuzzFeed

    Steve Bannon Sought To Infiltrate Facebook Hiring
    According to emails obtained by BuzzFeed News, Bannon hoped to spy on Facebook’s job application process.

    Joseph Bernstein
    BuzzFeed News Reporter
    Posted on September 25, 2017, at 9:15 a.m.

    Steve Bannon plotted to plant a mole inside Facebook, according to emails sent days before the Breitbart boss took over Donald Trump’s campaign and obtained by BuzzFeed News.

    The email exchange with a conservative Washington operative reveals the importance that the giant tech platform — now reeling from its role in the 2016 election — held for one of the campaign’s central figures. And it also shows the lengths to which the brawling new American right is willing to go to keep tabs on and gain leverage over the Silicon Valley giants it used to help elect Trump — but whose executives it also sees as part of the globalist enemy.

    The idea to infiltrate Facebook came to Bannon from Chris Gacek, a former congressional staffer who is now an official at the Family Research Council, which lobbies against abortion and many LGBT rights.

    “There is one for a DC-based ‘Public Policy Manager’ at Facebook’s What’s APP [sic] division,” Gacek, the senior fellow for regulatory affairs at the group, wrote on Aug. 1, 2016. “LinkedIn sent me a notice about some job openings.”

    “This seems perfect for Breitbart to flood the zone with candidates of all stripe who will report back to you / Milo with INTEL about the job application process over at FB,” he continued.

    “Milo” is former Breitbart News Tech Editor Milo Yiannopoulos, to whom Bannon forwarded Gacek’s email the same day.

    “Can u get on this,” Bannon instructed his staffer.

    On the same email thread, Yiannopoulos forwarded Bannon’s request to a group of contracted researchers, one of whom responded that it “Seems dificult [sic] to do quietly without them becoming aware of efforts.”

    But the news that Bannon wanted to infiltrate the Facebook hiring process comes as the social media giant faces increased scrutiny from Washington over political ads on the platform and the part it played in the 2016 election. That charge — and the threat of regulation — has mostly come from the left. But conservatives, who have often complained about the liberal bias of the major tech companies, have also argued for bringing Silicon Valley to heel. Earlier this month, former White House chief strategist Steve Bannon told an audience in Hong Kong that he was leading efforts to regulate Facebook and Google as “public utilities.”

    The secret attempt to find bias in Facebook’s hiring process reflects longstanding conservative fears that Facebook and the other tech giants are run by liberals who suppress right-wing views both internally and on their dominant platforms. Facebook’s powerful COO, Sheryl Sandberg, is a longtime Democratic donor who endorsed Hillary Clinton in 2016. In May 2016, Facebook CEO Mark Zuckerberg was forced to meet with dozens of prominent conservatives after a report surfaced that the company’s employees prevented right-leaning stories from reaching the platform’s “trending” section.

    The company has sought to deflect such criticism through hiring. Its vice president of global public policy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recently, Facebook has made moves to represent the Breitbart wing of the Republican party on its policy team, tapping a former top staffer to Attorney General Jeff Sessions to be the director of executive branch public policy in May.

    The job listing Gacek attached in his email to Bannon was for a public policy manager position in Washington, DC, working on the Facebook-owned WhatsApp messenger. The job description included such responsibilities as “Develop and execute WhatsApp’s global policy strategy” and “Represent WhatsApp in meetings with government officials and elected members.” It sought candidates with law degrees and 10 years of public policy experience.

    Facebook did not provide a comment for the story. But according to a source with knowledge of the hiring process, WhatsApp didn’t exactly get infiltrated by the pro-Trump right: The company hired Christine Turner, former director of trade policy and global supply chain security in President Barack Obama’s National Security Council, for the role.

    ———-

    “Steve Bannon Sought To Infiltrate Facebook Hiring” by Joseph Bernstein; BuzzFeed; 09/25/2017

    “The email exchange with a conservative Washington operative reveals the importance that the giant tech platform — now reeling from its role in the 2016 election — held for one of the campaign’s central figures. And it also shows the lengths to which the brawling new American right is willing to go to keep tabs on and gain leverage over the Silicon Valley giants it used to help elect Trump — but whose executives it also sees as part of the globalist enemy.

    LOL! Yeah, Facebook’s executives are part of the “globalist enemy.” Someone needs to inform board member and major investor Peter Thiel about this. Along with all the conservatives Facebook has already hired:


    But the news that Bannon wanted to infiltrate the Facebook hiring process comes as the social media giant faces increased scrutiny from Washington over political ads on the platform and the part it played in the 2016 election. That charge — and the threat of regulation — has mostly come from the left. But conservatives, who have often complained about the liberal bias of the major tech companies, have also argued for bringing Silicon Valley to heel. Earlier this month, former White House chief strategist Steve Bannon told an audience in Hong Kong that he was leading efforts to regulate Facebook and Google as “public utilities.”

    The secret attempt to find bias in Facebook’s hiring process reflects longstanding conservative fears that Facebook and the other tech giants are run by liberals who suppress right-wing views both internally and on their dominant platforms. Facebook’s powerful COO, Sheryl Sandberg, is a longtime Democratic donor who endorsed Hillary Clinton in 2016. In May 2016, Facebook CEO Mark Zuckerberg was forced to meet with dozens of prominent conservatives after a report surfaced that the company’s employees prevented right-leaning stories from reaching the platform’s “trending” section.

    The company has sought to deflect such criticism through hiring. Its vice president of global public policy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recently, Facebook has made moves to represent the Breitbart wing of the Republican party on its policy team, tapping a former top staffer to Attorney General Jeff Sessions to be the director of executive branch public policy in May.

    “The company has sought to deflect such criticism through hiring. Its vice president of global public policy, Joel Kaplan, was a deputy chief of staff in the George W. Bush White House. And more recently, Facebook has made moves to represent the Breitbart wing of the Republican party on its policy team, tapping a former top staffer to Attorney General Jeff Sessions to be the director of executive branch public policy in May.

    Yep, a former top staffer to Jeff Sessions was just brought on to become director of executive branch public policy a few months ago. So was that a consequence of Bannon successfully executing a super sneaky job application intelligence operation that gave Sessions’s former top staffer a key edge in the application process? Or was it just Facebook caving to all the public right-wing whining and faux outrage about Facebook not being fair to them? Or how about Peter Thiel just using his influence? All of the above? We don’t get to know, but what we do know now is that Steve Bannon has big plans for shaping Facebook from the outside and the inside. As does Peter Thiel, someone who already sits on Facebook’s board, is a major investor, and is poised to be empowered by the Trump administration to shape its approach to this “treat them like public utilities” concept.

    So hopefully we’ll get clarity at some point on what they’re actually planning on doing. Is it going to be all bad? Mostly bad? Maybe some useful antitrust stuff too? What’s the plan? The Trump era is the kind of horror show that doesn’t exactly benefit from suspense.

    Posted by Pterrafractyl | September 25, 2017, 2:01 pm
  10. One of the stranger stories in recent years has been the mystery of Cicada 3301, the anonymous group that posts annual challenges of super-difficult puzzles used to recruit talented code-breakers and invite them to join some sort of Cypherpunk cult that wants to build a global AI ‘god brain’. Or something. It’s a weird and creepy organization that’s speculated to be either a front for an intelligence agency or perhaps some sort of underground network of wealthy Libertarians. And, for now, Cicada 3301 remains anonymous.

    So it’s worth noting that someone with a lot of cash has already started a foundation to accomplish that very same ‘AI god’ goal: Anthony Levandowski, a former Google engineer who played a big role in the development of Google’s “Street Map” technology and a string of self-driving vehicle companies, started Way of the Future, a nonprofit religious corporation with the mission “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society”:

    The Guardian

    Deus ex machina: former Google engineer is developing an AI god

    Way of the Future, a religious group founded by Anthony Levandowski, wants to create a deity based on artificial intelligence for the betterment of society

    Olivia Solon
    Thursday 28 September 2017 04.00 EDT

    Intranet service? Check. Autonomous motorcycle? Check. Driverless car technology? Check. Obviously the next logical project for a successful Silicon Valley engineer is to set up an AI-worshipping religious organization.

    Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

    Levandowski was co-founder of autonomous trucking company Otto, which Uber bought in 2016. He was fired from Uber in May amid allegations that he had stolen trade secrets from Google to develop Otto’s self-driving technology. He must be grateful for this religious fall-back project, first registered in 2015.

    The Way of the Future team did not respond to requests for more information about their proposed benevolent AI overlord, but history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.

    “The church does a terrible job of reaching out to Silicon Valley types,” acknowledges Christopher Benek, a pastor in Florida and founding chair of the Christian Transhumanist Association.

    Silicon Valley, meanwhile, has sought solace in technology and has developed quasi-religious concepts including the “singularity”, the hypothesis that machines will eventually be so smart that they will outperform all human capabilities, leading to a superhuman intelligence that will be so sophisticated it will be incomprehensible to our tiny fleshy, rational brains.

    For futurists like Ray Kurzweil, this means we’ll be able to upload copies of our brains to these machines, leading to digital immortality. Others like Elon Musk and Stephen Hawking warn that such systems pose an existential threat to humanity.

    “With artificial intelligence we are summoning the demon,” Musk said at a conference in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

    Benek argues that advanced AI is compatible with Christianity – it’s just another technology that humans have created under guidance from God that can be used for good or evil.

    “I totally think that AI can participate in Christ’s redemptive purposes,” he said, by ensuring it is imbued with Christian values.

    “Even if people don’t buy organized religion, they can buy into ‘do unto others’.”

    For transhumanist and “recovering Catholic” Zoltan Istvan, religion and science converge conceptually in the singularity.

    “God, if it exists as the most powerful of all singularities, has certainly already become pure organized intelligence,” he said, referring to an intelligence that “spans the universe through subatomic manipulation of physics”.

    “And perhaps there are other forms of intelligence more complicated than that which already exist and which already permeate our entire existence. Talk about ghost in the machine,” he added.

    For Istvan, an AI-based God is likely to be more rational and more attractive than current concepts (“the Bible is a sadistic book”) and, he added, “this God will actually exist and hopefully will do things for us.”

    We don’t know whether Levandowski’s Godhead ties into any existing theologies or is a manmade alternative, but it’s clear that advancements in technologies including AI and bioengineering kick up the kinds of ethical and moral dilemmas that make humans seek the advice and comfort from a higher power: what will humans do once artificial intelligence outperforms us in most tasks? How will society be affected by the ability to create super-smart, athletic “designer babies” that only the rich can afford? Should a driverless car kill five pedestrians or swerve to the side to kill the owner?

    If traditional religions don’t have the answer, AI – or at least the promise of AI – might be alluring.

    ———-

    “Deus ex machina: former Google engineer is developing an AI god” by Olivia Solon; The Guardian; 09/28/2017

    Anthony Levandowski, who is at the center of a legal battle between Uber and Google’s Waymo, has established a nonprofit religious corporation called Way of the Future, according to state filings first uncovered by Wired’s Backchannel. Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

    Building an AI Godhead for everyone to worship. Levandowski doesn’t appear to be lacking ambition.

    But how about ethics? After all, if the AI Godhead is going to push a ‘do unto others’ kind of philosophy it’s going to be a lot harder for that AI Godhead to achieve that kind of enlightenment if it’s built by some sort of selfishness-worshiping Libertarian. So what moral compass does this wannabe Godhead creator possess?

    Well, as the following long piece by Wired amply demonstrates, Levandowski doesn’t appear to be too concerned about ethics. Especially if they get in the way of his dream of transforming the world through robotics. Transforming and taking over the world through robotics. Yep. The article focuses on the various legal troubles Levandowski faces over charges by Google that he stole the “Lidar” technology (laser-based, radar-like technology used by vehicles to rapidly map their surroundings) he helped develop at Google and took it to Uber (a company with a serious moral-compass deficit). But the article also includes some interesting insights into what makes Levandowski tick. For instance, according to a friend and former engineer at one of Levandowski’s companies, “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense. It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it”:

    Wired
    BackChannel

    God Is a Bot, and Anthony Levandowski Is His Messenger

    Mark Harris
    09/27/2017

    Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.

    When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

    Way of the Future has not yet responded to requests for the forms it must submit annually to the Internal Revenue Service (and make publicly available), as a non-profit religious corporation. However, documents filed with California show that Levandowski is Way of the Future’s CEO and President, and that it aims “through understanding and worship of the Godhead, [to] contribute to the betterment of society.”

    A divine AI may still be far off, but Levandowski has made a start at providing AI with an earthly incarnation. The autonomous cars he was instrumental in developing at Google are already ferrying real passengers around Phoenix, Arizona, while self-driving trucks he built at Otto are now part of Uber’s plan to make freight transport safer and more efficient. He even oversaw a passenger-carrying drones project that evolved into Larry Page’s Kitty Hawk startup.

    Levandowski has done perhaps more than anyone else to propel transportation toward its own Singularity, a time when automated cars, trucks and aircraft either free us from the danger and drudgery of human operation—or decimate mass transit, encourage urban sprawl, and enable deadly bugs and hacks.

    But before any of that can happen, Levandowski must face his own day of reckoning. In February, Waymo—the company Google’s autonomous car project turned into—filed a lawsuit against Uber. In its complaint, Waymo says that Levandowski tried to use stealthy startups and high-tech tricks to take cash, expertise, and secrets from Google, with the aim of replicating its vehicle technology at arch-rival Uber. Waymo is seeking damages of nearly $1.9 billion—almost half of Google’s (previously unreported) $4.5 billion valuation of the entire self-driving division. Uber denies any wrongdoing.

    Next month’s trial in a federal courthouse in San Francisco could steer the future of autonomous transportation. A big win for Waymo would prove the value of its patents and chill Uber’s efforts to remove profit-sapping human drivers from its business. If Uber prevails, other self-driving startups will be encouraged to take on the big players—and a vindicated Levandowski might even return to another startup. (Uber fired him in May.)

    Levandowski has made a career of moving fast and breaking things. As long as those things were self-driving vehicles and little-loved regulations, Silicon Valley applauded him in the way it knows best—with a firehose of cash. With his charm, enthusiasm, and obsession with deal-making, Levandowski came to personify the disruption that autonomous transportation is likely to cause.

    But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?

    In 2002, Levandowski’s attention turned, fatefully, toward transportation. His mother called him from Brussels about a contest being organized by the Pentagon’s R&D arm, DARPA. The first Grand Challenge in 2004 would race robotic, computer-controlled vehicles in a desert between Los Angeles and Las Vegas—a Wacky Races for the 21st century.

    “I was like, ‘Wow, this is absolutely the future,’” Levandowski told me in 2016. “It struck a chord deep in my DNA. I didn’t know where it was going to be used or how it would work out, but I knew that this was going to change things.”

    Levandowski’s entry would be nothing so boring as a car. “I originally wanted to do an automated forklift,” he said at a follow-up competition in 2005. “Then I was driving to Berkeley [one day] and a pack of motorcycles descended on my pickup and flowed like water around me.” The idea for Ghostrider was born—a gloriously deranged self-driving Yamaha motorcycle whose wobbles inspired laughter from spectators, but awe in rivals struggling to get even four-wheeled vehicles driving smoothly.

    “Anthony would go for weeks on 25-hour days to get everything done. Every day he would go to bed an hour later than the day before,” remembers Randy Miller, a college friend who worked with him on Ghostrider. “Without a doubt, Anthony is the smartest, hardest-working and most fearless person I’ve ever met.”

    Levandowski and his team of Berkeley students maxed out his credit cards getting Ghostrider working on the streets of Richmond, California, where it racked up an astonishing 800 crashes in a thousand miles of testing. Ghostrider never won a Grand Challenge, but its ambitious design earned Levandowski bragging rights—and the motorbike a place in the Smithsonian.

    “I see Grand Challenge not as the end of the robotics adventure we’re on, it’s almost like the beginning,” Levandowski told Scientific American in 2005. “This is where everyone is meeting, becoming aware of who’s working on what, [and] filtering out the non-functional ideas.”

    One idea that made the cut was lidar—spinning lasers that rapidly built up a 3D picture of a car’s surroundings. In the lidar-less first Grand Challenge, no vehicle made it further than a few miles along the course. In the second, an engineer named Dave Hall constructed a lidar that “was giant. It was one-off but it was awesome,” Levandowski told me. “We realized, yes, lasers [are] the way to go.”

    After graduate school, Levandowski went to work for Hall’s company, Velodyne, as it pivoted from making loudspeakers to selling lidars. Levandowski not only talked his way into being the company’s first sales rep, targeting teams working towards the next Grand Challenge, but he also worked on the lidar’s networking. By the time of the third and final DARPA contest in 2007, Velodyne’s lidar was mounted on five of the six vehicles that finished.

    But Levandowski had already moved on. Ghostrider had caught the eye of Sebastian Thrun, a robotics professor and team leader of Stanford University’s winning entry in the second competition. In 2006, Thrun invited Levandowski to help out with a project called VueTool, which was setting out to piece together street-level urban maps using cameras mounted on moving vehicles. Google was already working on a similar system, called Street View. Early in 2007, Google brought on Thrun and his entire team as employees—with bonuses as high as $1 million each, according to one contemporary at Google—to troubleshoot Street View and bring it to launch.

    “[Hiring the VueTool team] was very much a scheme for paying Thrun and the others to show Google how to do it right,” remembers the engineer. The new hires replaced Google’s bulky, custom-made $250,000 cameras with $15,000 off-the-shelf panoramic webcams. Then they went auto shopping. “Anthony went to a car store and said we want to buy 100 cars,” Sebastian Thrun told me in 2015. “The dealer almost fell over.”

    Levandowski was also making waves in the office, even to the point of telling engineers not to waste time talking to colleagues outside the project, according to one Google engineer. “It wasn’t clear what authority Anthony had, and yet he came in and assumed authority,” said the engineer, who asked to remain anonymous. “There were some bad feelings but mostly [people] just went with it. He’s good at that. He’s a great leader.”

    Under Thrun’s supervision, Street View cars raced to hit Page’s target of capturing a million miles of road images by the end of 2007. They finished in October—just in time, as it turned out. Once autumn set in, every webcam succumbed to rain, condensation, or cold weather, grounding all 100 vehicles.

    Part of the team’s secret sauce was a device that would turn a raw camera feed into a stream of data, together with location coordinates from GPS and other sensors. Google engineers called it the Topcon box, named after the Japanese optical firm that sold it. But the box was actually designed by a local startup called 510 Systems. “We had one customer, Topcon, and we licensed our technology to them,” one of the 510 Systems owners told me.

    That owner was…Anthony Levandowski, who had cofounded 510 Systems with two fellow Berkeley researchers, Pierre-Yves Droz and Andrew Schultz, just weeks after starting work at Google. 510 Systems had a lot in common with the Ghostrider team. Berkeley students worked there between lectures, and Levandowski’s mother ran the office. Topcon was chosen as a go-between because it had sponsored the self-driving motorcycle. “I always liked the idea that…510 would be the people that made the tools for people that made maps, people like Navteq, Microsoft, and Google,” Levandowski told me in 2016.

    Google’s engineering team was initially unaware that 510 Systems was Levandowski’s company, several engineers told me. That changed once Levandowski proposed that Google also use the Topcon box for its small fleet of aerial mapping planes. “When we found out, it raised a bunch of eyebrows,” remembers an engineer. Regardless, Google kept buying 510’s boxes.

    **********

    The truth was, Levandowski and Thrun were on a roll. After impressing Larry Page with Street View, Thrun suggested an even more ambitious project called Ground Truth to map the world’s streets using cars, planes, and a 2,000-strong team of cartographers in India. Ground Truth would allow Google to stop paying expensive licensing fees for outside maps, and bring free turn-by-turn directions to Android phones—a key differentiator in the early days of its smartphone war with Apple.

    Levandowski spent months shuttling between Mountain View and Hyderabad—and yet still found time to create an online stock market prediction game with Jesse Levinson, a computer science post-doc at Stanford who later cofounded his own autonomous vehicle startup, Zoox. “He seemed to always be going a mile a minute, doing ten things,” said Ben Discoe, a former engineer at 510. “He had an engineer’s enthusiasm that was contagious, and was always thinking about how quickly we can get to this amazing robot future he’s so excited about.”

    One time, Discoe was chatting in 510’s break room about how lidar could help survey his family’s tea farm on Hawaii. “Suddenly Anthony said, ‘Why don’t you just do it? Get a lidar rig, put it in your luggage, and go map it,’” said Discoe. “And it worked. I made a kick-ass point cloud [3D digital map] of the farm.”

    If Street View had impressed Larry Page, the speed and accuracy of Ground Truth’s maps blew him away. The Google cofounder gave Thrun carte blanche to do what he wanted; he wanted to return to self-driving cars.

    Project Chauffeur began in 2008, with Levandowski as Thrun’s right-hand man. As with Street View, Google engineers would work on the software while 510 Systems and a recent Levandowski startup, Anthony’s Robots, provided the lidar and the car itself.

    Levandowski said this arrangement would have acted as a firewall if anything went terribly wrong. “Google absolutely did not want their name associated with a vehicle driving in San Francisco,” he told me in 2016. “They were worried about an engineer building a car that drove itself that crashes and kills someone and it gets back to Google. You have to ask permission [for side projects] and your manager has to be OK with it. Sebastian was cool. Google was cool.”

    In order to move Project Chauffeur along as quickly as possible from theory to reality, Levandowski enlisted the help of a filmmaker friend he had worked with at Berkeley. In the TV show the two had made, Levandowski had created a cybernetic dolphin suit (seriously). Now they came up with the idea of a self-driving pizza delivery car for a show on the Discovery Channel called Prototype This! Levandowski chose a Toyota Prius, because it had a drive-by-wire system that was relatively easy to hack.

    In a matter of weeks, Levandowski’s team had the car, dubbed Pribot, driving itself. If anyone asked what they were doing, Levandowski told me, “We’d say it’s a laser and just drive off.”

    “Those were the Wild West days,” remembers Ben Discoe. “Anthony and Pierre-Yves…would engage the algorithm in the car and it would almost swipe some other car or almost go off the road, and they would come back in and joke about it. Tell stories about how exciting it was.”

    But for the Discovery Channel show, at least, Levandowski followed the letter of the law. The Bay Bridge was cleared of traffic and a squad of police cars escorted the unmanned Prius from start to finish. Apart from getting stuck against a wall, the drive was a success. “You’ve got to push things and get some bumps and bruises along the way,” said Levandowski.

    Another incident drove home the potential of self-driving cars. In 2010, Levandowski’s partner Stefanie Olsen was involved in a serious car accident while nine months pregnant with their first child. “My son Alex was almost never born,” Levandowski told a room full of Berkeley students in 2013. “Transportation [today] takes time, resources and lives. If you can fix that, that’s a really big problem to address.”

    Over the next few years, Levandowski was key to Chauffeur’s progress. 510 Systems built five more self-driving cars for Google—as well as random gadgets like an autonomous tractor and a portable lidar system. “Anthony is lightning in a bottle, he has so much energy and so much vision,” remembers a friend and former 510 engineer. “I fricking loved brainstorming with the guy. I loved that we could create a vision of the world that didn’t exist yet and both fall in love with that vision.”

    But there were downsides to his manic energy, too. “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense,” said the same engineer. “It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

    In early 2011, that plan was to bring 510 Systems into the Googleplex. The startup’s engineers had long complained that they did not have equity in the growing company. When matters came to a head, Levandowski drew up a plan that would reserve the first $20 million of any acquisition for 510’s founders and split the remainder among the staff, according to two former 510 employees. “They said we were going to sell for hundreds of millions,” remembers one engineer. “I was pretty thrilled with the numbers.”

    Indeed, that summer, Levandowski sold 510 Systems and Anthony’s Robots to Google – for $20 million, the exact cutoff before the wealth would be shared. Rank and file engineers did not see a penny, and some were even let go before the acquisition was completed. “I regret how it was handled…Some people did get the short end of the stick,” admitted Levandowski in 2016. The buyout also caused resentment among engineers at Google, who wondered how Levandowski could have made such a profit from his employer.

    There would be more profits to come. According to a court filing, Page took a personal interest in motivating Levandowski, issuing a directive in 2011 to “make Anthony rich if Project Chauffeur succeeds.” Levandowski was given by far the highest share, about 10 percent, of a bonus program linked to a future valuation of Chauffeur—a decision that would later cost Google dearly.

    **********

    Ever since a New York Times story in 2010 revealed Project Chauffeur to the world, Google had been wanting to ramp up testing on public streets. That was tough to arrange in well-regulated California, but Levandowski wasn’t about to let that stop him. While manning Google’s stand at the Consumer Electronics Show in Las Vegas in January 2011, he got to chatting with lobbyist David Goldwater. “He told me he was having a hard time in California and I suggested Google try a smaller state, like Nevada,” Goldwater told me.

    Together, Goldwater and Levandowski drafted legislation that would allow the company to test and operate self-driving cars in Nevada. By June, their suggestions were law, and in May 2012, a Google Prius passed the world’s first “self-driving tests” in Las Vegas and Carson City. “Anthony is gifted in so many different ways,” said Goldwater. “He’s got a strategic mind, he’s got a tactical mind, and a once-in-a-generation intellect. The great thing about Anthony is that he was willing to take risks, but they were calculated risks.”

    However, Levandowski’s risk-taking had ruffled feathers at Google. It was only after Nevada had passed its legislation that Levandowski discovered Google had a whole team dedicated to government relations. “I thought you could just do it yourself,” he told me sheepishly in 2016. “[I] got a little bit in trouble for doing it.”

    That might be understating it. One problem was that Levandowski had lost his air cover at Google. In May 2012, his friend Sebastian Thrun turned his attention to starting online learning company Udacity. Page put another professor, Chris Urmson from Carnegie Mellon, in charge. Not only did Levandowski think the job should have been his, but the two also had terrible chemistry.

    “They had a really hard time getting along,” said Page at a deposition in July. “It was a constant management headache to help them get through that.”

    Then in July 2013, Gaetan Pennecot, a 510 alum working on Chauffeur’s lidar team, got a worrying call from a vendor. According to Waymo’s complaint, a small company called Odin Wave had placed an order for a custom-made part that was extremely similar to one used in Google’s lidars.

    Pennecot shared this with his team leader, Pierre-Yves Droz, the cofounder of 510 Systems. Droz did some digging and replied in an email to Pennecot (in French, which we’ve translated): “They’re clearly making a lidar. And it’s John (510’s old lawyer) who incorporated them. The date of incorporation corresponds to several months after Anthony fell out of favor at Google.”

    As the story emerges in court documents, Droz had found Odin Wave’s company records. Not only had Levandowski’s lawyer founded the company in August 2012, but it was also based in a Berkeley office building that Levandowski owned, was being run by a friend of Levandowski’s, and its employees included engineers he had worked with at Velodyne and 510 Systems. One even spoke with Levandowski before being hired. The company was developing long range lidars similar to those Levandowski had worked on at 510 Systems. But Levandowski’s name was nowhere on the firm’s paperwork.

    Droz confronted Levandowski, who denied any involvement, and Droz decided not to follow the paper trail any further. “I was pretty happy working at Google, and…I didn’t want to jeopardize that by…exposing more of Anthony’s shenanigans,” he said at a deposition last month.

    Odin Wave changed its name to Tyto Lidar in 2014, and in the spring of 2015 Levandowski was even part of a Google investigation into acquiring Tyto. This time, however, Google passed on the purchase. That seemed to demoralize Levandowski further. “He was rarely at work, and he left a lot of the responsibility [for] evaluating people on the team to me or others,” said Droz in his deposition.

    “Over time my patience with his manipulations and lack of enthusiasm and commitment to the project [sic], it became clearer and clearer that this was a lost cause,” said Chris Urmson in a deposition.

    As he was torching bridges at Google, Levandowski was itching for a new challenge. Luckily, Sebastian Thrun was back on the autonomous beat. Larry Page and Thrun had been thinking about electric flying taxis that could carry one or two people. Project Tiramisu, named after the dessert which means “lift me up” in Italian, involved a winged plane flying in circles, picking up passengers below using a long tether.

    Thrun knew just the person to kickstart Tiramisu. According to a source working there at the time, Levandowski was brought in to oversee Tiramisu as an “advisor and stakeholder.” Levandowski would show up at the project’s workspace in the evenings, and was involved in tests at one of Page’s ranches. Tiramisu’s tethers soon pivoted to a ride-aboard electric drone, now called the Kitty Hawk flyer. Thrun is CEO of Kitty Hawk, which is funded by Page rather than Alphabet, the umbrella company that now owns Google and its sibling companies.

    Waymo’s complaint says that around this time Levandowski started soliciting Google colleagues to leave and start a competitor in the autonomous vehicle business. Droz testified that Levandowski told him it “would be nice to create a new self-driving car startup.” Furthermore, he said that Uber would be interested in buying the team responsible for Google’s lidar.

    Uber had exploded onto the self-driving car scene early in 2015, when it lured almost 50 engineers away from Carnegie Mellon University to form the core of its Advanced Technologies Center. Uber cofounder Travis Kalanick had described autonomous technology as an existential threat to the ride-sharing company, and was hiring furiously. According to Droz, Levandowski said that he began meeting Uber executives that summer.

    When Urmson learned of Levandowski’s recruiting efforts, his deposition states, he sent an email to human resources in August beginning, “We need to fire Anthony Levandowski.” Despite an investigation, that did not happen.

    But Levandowski’s now not-so-secret plan would soon see him leaving of his own accord—with a mountain of cash. In 2015, Google was due to start paying the Chauffeur bonuses, linked to a valuation that it would have “sole and absolute discretion” to calculate. According to previously unreported court filings, external consultants calculated the self-driving car project as being worth $8.5 billion. Google ultimately valued Chauffeur at around half that amount: $4.5 billion. Despite this downgrade, Levandowski’s share in December 2015 amounted to over $50 million – nearly twice as much as the second largest bonus of $28 million, paid to Chris Urmson.

    **********

    Otto seemed to spring forth fully formed in May 2016, demonstrating a self-driving 18-wheel truck barreling down a Nevada highway with no one behind the wheel. In reality, Levandowski had been planning it for some time.

    Levandowski and his Otto cofounders at Google had spent the Christmas holidays and the first weeks of 2016 taking their recruitment campaign up a notch, according to Waymo court filings. Waymo’s complaint alleges Levandowski told colleagues he was planning to “replicate” Waymo’s technology at a competitor, and was even soliciting his direct reports at work.

    One engineer who had worked at 510 Systems attended a barbecue at Levandowski’s home in Palo Alto, where Levandowski pitched his former colleagues and current Googlers on the startup. “He wanted every Waymo person to resign simultaneously, a fully synchronized walkout. He was firing people up for that,” remembers the engineer.

    On January 27, Levandowski resigned from Google without notice. Within weeks, Levandowski had a draft contract to sell Otto to Uber for an amount widely reported as $680 million. Although the full-scale synchronized walkout never happened, half a dozen Google employees went with Levandowski, and more would join in the months ahead. But the new company still did not have a product to sell.

    Levandowski brought Nevada lobbyist David Goldwater back to help. “There was some brainstorming with Anthony and his team,” said Goldwater in an interview. “We were looking to do a demonstration project where we could show what he was doing.”

    After exploring the idea of an autonomous passenger shuttle in Las Vegas, Otto settled on developing a driverless semi-truck. But with the Uber deal rushing forward, Levandowski needed results fast. “By the time Otto was ready to go with the truck, they wanted to get right on the road,” said Goldwater. That meant demonstrating their prototype without obtaining the very autonomous vehicle licence Levandowski had persuaded Nevada to adopt. (One state official called this move “illegal.”) Levandowski also had Otto acquire the controversial Tyto Lidar—the company based in the building he owned—in May, for an undisclosed price.

    The full-court press worked. Uber completed its own acquisition of Otto in August, and Uber founder Travis Kalanick put Levandowski in charge of the combined companies’ self-driving efforts across personal transportation, delivery and trucking. Uber would even propose a Tiramisu-like autonomous air taxi called Uber Elevate. Now reporting directly to Kalanick and in charge of a 1500-strong group, Levandowski demanded the email address “robot@uber.com.”

    In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. “Down to hang out this eve and mastermind some shit,” texted Kalanick, shortly after the acquisition. “We’re going to take over the world. One robot at a time,” wrote Levandowski another time.

    But Levandowski’s amazing robot future was about to crumble before his eyes.

    ***********

    Last December, Uber launched a pilot self-driving taxi program in San Francisco. As with Otto in Nevada, Levandowski failed to get a license to operate the high-tech vehicles, claiming that because the cars needed a human overseeing them, they were not truly autonomous. The DMV disagreed and revoked the vehicles’ licenses. Even so, during the week the cars were on the city’s streets, they had been spotted running red lights on numerous occasions.

    Worse was yet to come. Levandowski had always been a controversial figure at Google. With his abrupt resignation, the launch of Otto, and its rapid acquisition by Uber, Google launched an internal investigation in the summer of 2016. It found that Levandowski had downloaded nearly 10 gigabytes of Google’s secret files just before he resigned, many of them relating to lidar technology.

    Also in December 2016, in an echo of the Tyto incident, a Waymo employee was accidentally sent an email from a vendor that included a drawing of an Otto circuit board. The design looked very similar to Waymo’s current lidars.

    Waymo says the “final piece of the puzzle” came from a story about Otto I wrote for Backchannel based on a public records request. A document sent by Otto to Nevada officials boasted the company had an “in-house custom-built 64-laser” lidar system. To Waymo, that sounded very much like technology it had developed. In February this year, Waymo filed its headline lawsuit accusing Uber (along with Otto Trucking, yet another of Levandowski’s companies, but one that Uber had not purchased) of violating its patents and misappropriating trade secrets on lidar and other technologies.

    Uber immediately denied the accusations and has consistently maintained its innocence. Uber says there is no evidence that any of Waymo’s technical files ever came to Uber, let alone that Uber ever made use of them. While Levandowski is not named as a defendant, he has refused to answer questions in depositions with Waymo’s lawyers and is expected to do the same at trial. (He turned down several requests for interviews for this story.) He also didn’t fully cooperate with Uber’s own investigation into the allegations, and that, Uber says, is why it fired him in May.

    Levandowski probably does not need a job. With the purchase of 510 Systems and Anthony’s Robots, his salary, and bonuses, Levandowski earned at least $120 million from his time at Google. Some of that money has been invested in multiple real estate developments with his college friend Randy Miller, including several large projects in Oakland and Berkeley.

    But Levandowski has kept busy behind the scenes. In August, court filings say, he personally tracked down a pair of earrings given to a Google employee at her going-away party in 2014. The earrings were made from confidential lidar circuit boards, and will presumably be used by Otto Trucking’s lawyers to suggest that Waymo does not keep a very close eye on its trade secrets.

    Some of Levandowski’s friends and colleagues have expressed shock at the allegations he faces, saying that they don’t reflect the person they knew. “It is…in character for Anthony to play fast and loose with things like intellectual property if it’s in pursuit of building his dream robot,” said Ben Discoe. “[But] I was a little surprised at the alleged magnitude of his disregard for IP.”

    “Definitely one of Anthony’s faults is to be aggressive as he is, but it’s also one of his great attributes. I don’t see [him doing] all the other stuff he has been accused of,” said David Goldwater.

    But Larry Page is no longer convinced that Levandowski was key to Chauffeur’s success. In his deposition to the court, Page said, “I believe Anthony’s contributions are quite possibly negative of a high amount.” At Uber, some engineers privately say that Levandowski’s poor management style set back that company’s self-driving effort by a couple of years.

    Even after this trial is done, Levandowski will not be able to rest easy. In May, a judge referred evidence from the case to the US Attorney’s office “for investigation of possible theft of trade secrets,” raising the possibility of criminal proceedings and prison time. Yet on the timeline that matters to Anthony Levandowski, even that may not mean much. Building a robotically enhanced future is his passionate lifetime project. On the Way of the Future, lawsuits or even a jail sentence might just feel like little bumps in the road.

    “This case is teaching Anthony some hard lessons but I don’t see [it] keeping him down,” said Randy Miller. “He believes firmly in his vision of a better world through robotics and he’s convinced me of it. It’s clear to me that he’s on a mission.”

    “I think Anthony will rise from the ashes,” agrees one friend and former 510 Systems engineer. “Anthony has the ambition, the vision, and the ability to recruit and drive people. If he could just play it straight, he could be the next Steve Jobs or Elon Musk. But he just doesn’t know when to stop cutting corners.”

    ———-

    “God Is a Bot, and Anthony Levandowski Is His Messenger” by Mark Harris; Wired; 09/27/2017

    “But even the smartest car will crack up if you floor the gas pedal too long. Once feted by billionaires, Levandowski now finds himself starring in a high-stakes public trial as his two former employers square off. By extension, the whole technology industry is there in the dock with Levandowski. Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them?”

    Can we ever trust self-driving cars if it turns out we can’t trust the people who are making them? It’s an important question that doesn’t just apply to self-driving cars. It applies to AI Godheads, too: can we ever trust a man-made AI Godhead if it turns out we can’t trust the people who are making it? These are the stupid questions we now have to ask, given the disturbing number of powerful people who double as evangelists for techno-cult Libertarian ideologies. Especially when they are specialists in creating automated vehicles and have a deep passion for taking over the world. Possibly taking over the world militarily, using robots:


    Over the next few years, Levandowski was key to Chauffeur’s progress. 510 Systems built five more self-driving cars for Google—as well as random gadgets like an autonomous tractor and a portable lidar system. “Anthony is lightning in a bottle, he has so much energy and so much vision,” remembers a friend and former 510 engineer. “I fricking loved brainstorming with the guy. I loved that we could create a vision of the world that didn’t exist yet and both fall in love with that vision.”

    But there were downsides to his manic energy, too. “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense,” said the same engineer. “It was like [he wanted] to be able to control the world, and robots were the way to do that. He talked about starting a new country on an island. Pretty wild and creepy stuff. And the biggest thing is that he’s always got a secret plan, and you’re not going to know about it.”

    Yeah, that’s disturbing. And it doesn’t help that Levandowski apparently found a soulmate and mentor in the man widely viewed as one of the most sociopathic CEOs in tech today: Uber’s Travis Kalanick:


    The full-court press worked. Uber completed its own acquisition of Otto in August, and Uber founder Travis Kalanick put Levandowski in charge of the combined companies’ self-driving efforts across personal transportation, delivery and trucking. Uber would even propose a Tiramisu-like autonomous air taxi called Uber Elevate. Now reporting directly to Kalanick and in charge of a 1500-strong group, Levandowski demanded the email address “robot@uber.com.”

    In Kalanick, Levandowski found both a soulmate and a mentor to replace Sebastian Thrun. Text messages between the two, disclosed during the lawsuit’s discovery process, capture Levandowski teaching Kalanick about lidar at late night tech sessions, while Kalanick shared advice on management. “Down to hang out this eve and mastermind some shit,” texted Kalanick, shortly after the acquisition. “We’re going to take over the world. One robot at a time,” wrote Levandowski another time.

    “We’re going to take over the world. One robot at a time”

    So that gives us an idea of how Levandowski’s AI religion is going to be evangelized: via his army of robots. Although it’s unclear if his future religion is actually intended for us mere humans. After all, for the hardcore Transhumanists we’re all supposed to fuse with machines or upload our brains, so it’s very possible humans aren’t actually part of Levandowski’s vision for that better tomorrow. A vision that, as the Cicada 3301 weirdness reminds us, probably isn’t limited to Levandowski. *gulp*

    Posted by Pterrafractyl | September 28, 2017, 8:43 pm
  11. Just what the world needs: an AI-powered ‘gaydar’ algorithm that purports to be able to detect who is gay and who isn’t just by looking at faces. Although it’s not actually that impressive. The ‘gaydar’ algorithm isn’t shown random faces; it’s given pairs of faces, one of a heterosexual individual and one of a homosexual individual, and then tries to identify the gay person. It apparently does so correctly in 81 percent of cases for men and 71 percent of cases for women, significantly better than the 50 percent rate we would expect from random chance alone. It’s the kind of gaydar technology that might not be good enough to just ‘pick the gays out of a crowd’, but is still more than adequate for potential abuse. And, more generally, it’s the kind of research that, not surprisingly, is raising concerns about creating a 21st century version of physiognomy, the pseudoscience based on the idea that people’s character is reflected in their faces.

    But as the researchers behind the study put it, we don’t need to worry about this being a high-tech example of physiognomy because their gaydar uses hard science. And while the researchers agree that physiognomy is pseudoscience, they also note that physiognomy’s pseudoscientific nature doesn’t mean AIs can’t actually learn something about you just by looking at you. Yep. That’s the reassurance we’re getting from these researchers: don’t worry about AI driving a 21st century version of physiognomy, because this time the science is so much better. Feeling reassured?
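
    To make that pairs-versus-crowd distinction concrete, here’s a minimal, purely illustrative Python sketch (this is not the researchers’ model or data; the score distributions are invented for the example) of why a classifier that wins a forced-choice pair test roughly 80 percent of the time can still perform poorly when asked to pick the members of a small minority out of a crowd:

        # Rough simulation (hypothetical scores, not the study's actual model) of why
        # "81 percent accurate on pairs" is not "81 percent accurate on a crowd".
        import random

        random.seed(42)

        def score(is_gay):
            # Invented classifier score: higher on average for the target group,
            # but with heavy overlap between the two distributions.
            return random.gauss(1.5 if is_gay else 0.0, 1.2)

        # Population of 1,000 with a 7 percent base rate, as in the paper's final test.
        people = [i < 70 for i in range(1000)]
        scores = [(score(g), g) for g in people]

        # Pairwise (two-alternative) test: given one person from each group,
        # how often does the higher score belong to the gay person?
        gay = [s for s, g in scores if g]
        straight = [s for s, g in scores if not g]
        pairwise = sum(gs > ss for gs in gay for ss in straight) / (len(gay) * len(straight))

        # Crowd-scan test: take the 100 highest-scoring people and count how many
        # actually belong to the 70-person target group.
        hits = sum(g for _, g in sorted(scores, reverse=True)[:100])

        print(f"pairwise accuracy: {pairwise:.0%}")   # well above the 50 percent coin-flip baseline
        print(f"true positives in top 100: {hits}")   # far fewer than the pairwise number suggests

    The exact numbers will differ from the paper’s, since the scores here are made up; the point is simply that the 50 percent baseline of the pair test makes the headline accuracy figure look far more impressive than the system’s performance on a realistic population would be.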

    The Verge

    The invention of AI ‘gaydar’ could be the start of something much worse
    Researchers claim they can spot gay people from a photo, but critics say we’re revisiting pseudoscience

    by James Vincent
    Sep 21, 2017, 1:24pm EDT

    Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like “New AI can guess whether you’re gay or straight from a photograph” and “AI Can Tell If You’re Gay From a Photo, and It’s Terrifying.”

    As you might have guessed, it’s not as straightforward as that. (And to be clear, based on this work alone, AI can’t tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper’s authors, Dr Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone’s sexual orientation, but their political views, criminality, and even their IQ.

    With statements like these, some worry we’re reviving an old belief with a bad history: that you can intuit character from appearance. This pseudoscience, physiognomy, was fuel for the scientific racism of the 19th and 20th centuries, and gave moral cover to some of humanity’s worst impulses: to demonize, condemn, and exterminate fellow humans. Critics of Kosinski’s work accuse him of replacing the calipers of the 19th century with the neural networks of the 21st, while the professor himself says he is horrified by his findings, and happy to be proved wrong. “It’s a controversial and upsetting subject, and it’s also upsetting to us,” he tells The Verge.

    But is it possible that pseudoscience is sneaking back into the world, disguised in new garb thanks to AI? Some people say machines are simply able to read more about us than we can ourselves, but what if we’re training them to carry out our prejudices, and, in doing so, giving new life to old ideas we rightly dismissed? How are we going to know the difference?

    Can AI really spot sexual orientation?

    First, we need to look at the study at the heart of the recent debate, written by Kosinski and his co-author Yilun Wang. Its results have been poorly reported, with a lot of the hype coming from misrepresentations of the system’s accuracy. The paper states: “Given a single facial image, [the software] could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.” These rates increase when the system is given five pictures of an individual: up to 91 percent for men, and 83 percent for women.

    On the face of it, this sounds like “AI can tell if a man is gay or straight 81 percent of the time by looking at his photo.” (Thus the headlines.) But that’s not what the figures mean. The AI wasn’t 81 percent correct when being shown random photos: it was tested on a pair of photos, one of a gay person and one of a straight person, and then asked which individual was more likely to be gay. It guessed right 81 percent of the time for men and 71 percent of the time for women, but the structure of the test means it started with a baseline of 50 percent — that’s what it’d get guessing at random. And although it was significantly better than that, the results aren’t the same as saying it can identify anyone’s sexual orientation 81 percent of the time.

    As Philip Cohen, a sociologist at the University of Maryland who wrote a blog post critiquing the paper, told The Verge: “People are scared of a situation where you have a private life and your sexual orientation isn’t known, and you go to an airport or a sporting event and a computer scans the crowd and identifies whether you’re gay or straight. But there’s just not much evidence this technology can do that.”

    Kosinski and Wang make this clear themselves toward the end of the paper when they test their system against 1,000 photographs instead of two. They ask the AI to pick out who is most likely to be gay in a dataset in which 7 percent of the photo subjects are gay, roughly reflecting the proportion of straight and gay men in the US population. When asked to select the 100 individuals most likely to be gay, the system gets only 47 out of 70 possible hits. The remaining 53 have been incorrectly identified. And when asked to identify a top 10, nine are right.

    If you were a bad actor trying to use this system to identify gay people, you couldn’t know for sure you were getting correct answers. Although, if you used it against a large enough dataset, you might get mostly correct guesses. Is this dangerous? If the system is being used to target gay people, then yes, of course. But the rest of the study suggests the program has even further limitations.

    What can computers really see that humans can’t?

    It’s also not clear what factors the facial recognition system is using to make its judgements. Kosinski and Wang’s hypothesis is that it’s primarily identifying structural differences: feminine features in the faces of gay men and masculine features in the faces of gay women. But it’s possible that the AI is being confused by other stimuli — like facial expressions in the photos.

    This is particularly relevant because the images used in the study were taken from a dating website. As Greggor Mattson, a professor of sociology at Oberlin College, pointed out in a blog post, this means that the images themselves are biased, as they were selected specifically to attract someone of a certain sexual orientation. They almost certainly play up to our cultural expectations of how gay and straight people should look, and, to further narrow their applicability, all the subjects were white, with no inclusion of bisexual or self-identified trans individuals. If a straight male chooses the most stereotypically “manly” picture of himself for a dating site, it says more about what he thinks society wants from him than a link between the shape of his jaw and his sexual orientation.

    To try and ensure their system was looking at facial structure only, Kosinski and Wang used software called VGG-Face, which encodes faces as strings of numbers and has been used for tasks like spotting celebrity lookalikes in paintings. This program, they write, allows them to “minimize the role [of] transient features” like lighting, pose, and facial expression.

    But researcher Tom White, who works on AI facial systems, says VGG-Face is actually very good at picking up on these elements. White pointed this out on Twitter, and explained to The Verge over email how he’d tested the software and used it to successfully distinguish between faces with expressions like “neutral” and “happy,” as well as poses and background color.

    Speaking to The Verge, Kosinski says he and Wang have been explicit that things like facial hair and makeup could be a factor in the AI’s decision-making, but he maintains that facial structure is the most important. “If you look at the overall properties of VGG-Face, it tends to put very little weight on transient facial features,” Kosinski says. “We also provide evidence that non-transient facial features seem to be predictive of sexual orientation.”

    The problem is, we can’t know for sure. Kosinski and Wang haven’t released the program they created or the pictures they used to train it. They do test their AI on other picture sources, to see if it’s identifying some factor common to all gay and straight people, but these tests were limited and also drew from a biased dataset — Facebook profile pictures from men who liked pages such as “I love being Gay,” and “Gay and Fabulous.”

    Do men in these groups serve as reasonable proxies for all gay men? Probably not, and Kosinski says it’s possible his work is wrong. “Many more studies will need to be conducted to verify [this],” he says. But it’s tricky to say how one could completely eliminate selection bias to perform a conclusive test. Kosinski tells The Verge, “You don’t need to understand how the model works to test whether it’s correct or not.” However, it’s the acceptance of the opacity of algorithms that makes this sort of research so fraught.

    If AI can’t show its working, can we trust it?

    AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.

    In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)

    Experts say this is a misleading claim that isn’t supported by the latest science. There may be a common cause for face shape and sexual orientation — the most probable cause is the balance of hormones in the womb — but that doesn’t mean face shape reliably predicts sexual orientation, says Qazi Rahman, an academic at King’s College London who studies the biology of sexual orientation. “Biology’s a little bit more nuanced than we often give it credit for,” he tells The Verge. “The issue here is the strength of the association.”

    The idea that sexual orientation comes primarily from biology is itself controversial. Rahman, who believes that sexual orientation is mostly biological, praises Kosinski and Wang’s work. “It’s not junk science,” he says. “More like science someone doesn’t like.” But when it comes to predicting sexual orientation, he says there’s a whole package of “atypical gender behavior” that needs to be considered. “The issue for me is more that [the study] misses the point, and that’s behavior.”

    Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from “19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered ‘gay genes,’ ‘gay brains,’ ‘gay ring fingers,’ ‘lesbian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a “scientific” pretext.

    Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it “does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire,” setting up heterosexuality as the norm and framing homosexuality as “less valuable … a sort of illness.”

    Your character, as plain as the nose on your face

    For centuries, people have believed that the face held the key to the character. The notion has its roots in ancient Greece, but was particularly influential in the 19th century. Proponents of physiognomy suggested that by measuring things like the angle of someone’s forehead or the shape of their nose, they could determine if a person was honest or a criminal. Last year in China, AI researchers claimed they could do the same thing using facial recognition.

    Their research, published as “Automated Inference on Criminality Using Face Images,” caused a minor uproar in the AI community. Scientists pointed out flaws in the study, and concluded that that work was replicating human prejudices about what constitutes a “mean” or a “nice” face. In a widely shared rebuttal titled “Physiognomy’s New Clothes,” Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect “more research in the coming years that has similar … false claims to scientific objectivity in order to ‘launder’ human prejudice and discrimination.” (Google declined to make Agüera y Arcas available to comment on this report.)

    Kosinski and Wang’s paper clearly acknowledges the dangers of physiognomy, noting that the practice “is now universally, and rightly, rejected as a mix of superstition and racism disguised as science.” But, they continue, just because a subject is “taboo,” doesn’t mean it has no basis in truth. They say that because humans are able to read characteristics like personality in other people’s faces with “low accuracy,” machines should be able to do the same but more accurately.

    Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.

    For Davis, this sort of attitude comes from a widespread and mistaken belief in the neutrality and objectivity of AI. “Artificial intelligence is not in fact artificial,” she tells The Verge. “Machines learn like humans learn. We’re taught through culture and absorb the norms of social structure, and so does artificial intelligence. So it will re-create, amplify, and continue on the trajectories we’ve taught it, which are always going to reflect existing cultural norms.”

    We’ve already created sexist and racist algorithms, and these sorts of cultural biases and physiognomy are really just two sides of the same coin: both rely on bad evidence to judge others. The work by the Chinese researchers is an extreme example, but it’s certainly not the only one. There’s at least one startup already active that claims it can spot terrorists and pedophiles using face recognition, and there are many others offering to analyze “emotional intelligence” and conduct AI-powered surveillance.

    Facing up to what’s coming

    But to return to the questions implied by those alarming headlines about Kosinski and Wang’s paper: is AI going to be used to persecute sexual minorities?

    This system? No. A different one? Maybe.

    Kosinski and Wang’s work is not invalid, but its results need serious qualifications and further testing. Without that, all we know about their system is that it can spot with some reliability the difference between self-identified gay and straight white people on one particular dating site. We don’t know that it’s spotted a biological difference common to all gay and straight people; we don’t know if it would work with a wider set of photos; and the work doesn’t show that sexual orientation can be deduced with nothing more than, say, a measurement of the jaw. It’s not decoded human sexuality any more than AI chatbots have decoded the art of a good conversation. (Nor do its authors make such a claim.)

    The research was published to warn people, says Kosinski, but he admits it’s an “unavoidable paradox” that to do so you have to explain how you did what you did. All the tools used in the paper are available for anyone to find and put together themselves. Writing at the deep learning education site Fast.ai, researcher Jeremy Howard concludes: “It is probably reasonably [sic] to assume that many organizations have already completed similar projects, but without publishing them in the academic literature.”

    We’ve already mentioned startups working on this tech, and it’s not hard to find government regimes that would use it. In countries like Iran and Saudi Arabia homosexuality is still punishable by death; in many other countries, being gay means being hounded, imprisoned, and tortured by the state. Recent reports have spoken of the opening of concentration camps for gay men in the Chechen Republic, so what if someone there decides to make their own AI gaydar, and scan profile pictures from Russian social media?

    Here, it becomes clear that the accuracy of systems like Kosinski and Wang’s isn’t really the point. If people believe AI can be used to determine sexual preference, they will use it. With that in mind, it’s more important than ever that we understand the limitations of artificial intelligence, to try and neutralize dangers before they start impacting people. Before we teach machines our prejudices, we need to first teach ourselves.

    ———-

    “The invention of AI ‘gaydar’ could be the start of something much worse” by James Vincent; The Verge; 09/21/2017

    “Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.”

    It’s not a return of the physiognomy pseudoscience, but “there may be truth in [physiognomy’s] basic concepts that computers can reveal.” That’s seriously the message from these researchers, along with a message of confidence that their algorithm is working solely from facial structure and not other, more transient features. And based on that confidence in their algorithm, the researchers point to their results as evidence that gay people have biologically different faces…even though they can’t actually determine what the algorithm is looking at when it comes to its conclusions:


    If AI can’t show its working, can we trust it?

    AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.

    In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)

    Experts say this is a misleading claim that isn’t supported by the latest science. There may be a common cause for face shape and sexual orientation — the most probable cause is the balance of hormones in the womb — but that doesn’t mean face shape reliably predicts sexual orientation, says Qazi Rahman, an academic at King’s College London who studies the biology of sexual orientation. “Biology’s a little bit more nuanced than we often give it credit for,” he tells The Verge. “The issue here is the strength of the association.”

    The idea that sexual orientation comes primarily from biology is itself controversial. Rahman, who believes that sexual orientation is mostly biological, praises Kosinski and Wang’s work. “It’s not junk science,” he says. “More like science someone doesn’t like.” But when it comes to predicting sexual orientation, he says there’s a whole package of “atypical gender behavior” that needs to be considered. “The issue for me is more that [the study] misses the point, and that’s behavior.”

    Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from “19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered ‘gay genes,’ ‘gay brains,’ ‘gay ring fingers,’ ‘lesbian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a “scientific” pretext.

    Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it “does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire,” setting up heterosexuality as the norm and framing homosexuality as “less valuable … a sort of illness.”

    In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)”

    So if this research was put forth as a kind of warning to the public, which is how the researchers are framing it, it’s quite a warning: both a warning that algorithms like this are being developed and a warning of how readily the conclusions of these algorithms might be accepted as evidence of an underlying biological finding (as opposed to a ‘black box’ artifact that could be picking up all sorts of biological and social cues).

    And don’t forget, even if these algorithms do stumble across real associations that can be teased out from just some pictures of someone (or maybe some additional biometric data picked up by the “smart kiosks” of the not-too-distant future), there’s a big difference between demonstrating that something can be discerned statistically across large data sets and being able to do it with the kind of accuracy where you don’t have to seriously worry about jumping to the wrong conclusion (assuming you aren’t using the technology in an abusive manner in the first place). Even if someone develops an algorithm that can accurately guess sexual orientation 95 percent of the time, that still leaves a pretty substantial 5 percent chance of getting it wrong. And the only way to avoid those incorrect conclusions is to develop an algorithm that is so good at inferring sexual orientation it’s basically never wrong, assuming that’s even possible. And, of course, an algorithm with that kind of accuracy would be really creepy. It points towards one of the scarier aspects of this kind of technology: in order to ensure your privacy-invading algorithms don’t risk jumping to erroneous conclusions, you need algorithms that are scarily good at invading your privacy. Which is another reason we probably shouldn’t be promoting 21st century physiognomy.
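
    To make that error-rate point concrete, here is a back-of-the-envelope sketch with entirely made-up numbers (none of them come from the Kosinski/Wang study): even a classifier that is “95 percent accurate” in the lab can be wrong about most of the people it flags once it is pointed at a large population where the trait it infers is uncommon.

        # Hypothetical numbers only, to illustrate the base-rate problem;
        # none of these figures come from the actual study.
        population = 1_000_000   # people scanned by some imagined deployment
        prevalence = 0.05        # assumed share who actually have the inferred trait
        sensitivity = 0.95       # classifier catches 95% of true cases
        specificity = 0.95       # classifier clears 95% of non-cases

        true_cases = population * prevalence
        non_cases = population - true_cases

        true_positives = true_cases * sensitivity
        false_positives = non_cases * (1 - specificity)

        precision = true_positives / (true_positives + false_positives)
        print(f"People flagged: {true_positives + false_positives:,.0f}")
        print(f"Share of flags that are actually correct: {precision:.0%}")

    With those assumptions, half of all the flags are wrong even though the classifier is “95 percent accurate” on paper. The specific figures are hypothetical, but the arithmetic is why a seemingly small error rate is still a big deal at population scale.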

    Posted by Pterrafractyl | October 2, 2017, 2:43 pm
  12. It looks like Google is finally getting that shiny new object it’s been pining for: a city. Yep, Sidewalk Labs, owned by Google’s parent company Alphabet, just got permission to build its own ‘city of the future’ on a 12-acre waterfront district near Toronto, filled with self-driving shuttles, adaptive traffic lights that sense pedestrians, and underground tunnels for freight-transporting robots. With sensors everywhere.

    And if that sounds ambitious, note that all these plans aren’t limited to the initial 12 acres. Alphabet reportedly has plans to expand across 800 acres of Toronto’s post-industrial waterfront zone:

    The Financial Times

    Alphabet to build futuristic city in Toronto
    Plans for technology-enabled environment raise privacy concerns

    by Leslie Hook in San Francisco
    October 17, 2017, 4:54 pm

    Alphabet is setting out to build the city of the future, starting with a downtown district of Toronto, in what it hopes will serve as a proving ground for technology-enabled urban environments around the world.

    In a first-of-its-kind project, Alphabet’s subsidiary Sidewalk Labs will develop a 12-acre waterfront district, Quayside, with a view to expand across 800 acres of Toronto’s post-industrial waterfront zone.

    Self-driving shuttles, adaptive traffic lights that sense pedestrians, modular housing and freight-delivering robots that travel in underground tunnels might all be part of the new development, according to the winning bid submitted by Sidewalk Labs.

    In its proposal, Sidewalk also said that Toronto would need to waive or exempt many existing regulations in areas like building codes, transportation, and energy in order to build the city it envisioned. The project may need “substantial forbearances from existing laws and regulations,” the group said.

    Alphabet chairman Eric Schmidt and Canadian prime minister Justin Trudeau announced the deal on Tuesday in Toronto.

    “We started thinking about all the things we could do if someone would just give us a city and put us in charge,” said Eric Schmidt, executive chairman of Alphabet. “That’s not how it works, for all sorts of good reasons,” he added with a laugh.

    For Alphabet, the project presents a chance to experiment with new ways to use technology — and data — in the real world. “This is not some random activity from our perspective. This is the culmination of almost 10 years of thinking about how technology could improve people’s lives,” said Mr Schmidt.

    Mr Trudeau described the project as a “test bed for new technologies…that will help us build cleaner, smarter, greener, cities”.

    “Eric [Schmidt] and I have been talking about collaborating on this for a few years, and seeing it all come together now is extraordinarily exciting,” he added.

    One of the challenges for the new district will be setting data policies and addressing concerns over privacy, which are particularly acute because smart city technologies often rely on collecting vast amounts of data to make cities run more efficiently.

    In the vision statement submitted as part of its bid, Sidewalk describes a vast system of sensors that will monitor everything from park benches and overflowing waste bins, to noise and pollution levels in housing. The development will also pioneer new approaches to energy, including a thermal grid and on-site generation, and tech-enabled primary healthcare that will be integrated with social services.

    The transportation proposal for the district includes restricting private vehicles, and instead offering self-driving shuttles and bike paths that are heated in the winter, according to the vision document. A series of underground utility tunnels will house utilities like electrical wires and water pipes, and also provide pathways for freight-delivering robots.

    Sidewalk Labs, a subsidiary of Alphabet that was founded in 2015 by Dan Doctoroff, a former deputy mayor of New York, will spend $50m on initial planning and testing for the development. As part of the effort, Google will also move its Canadian headquarters to Toronto.

    Mr Doctoroff said the group would present a detailed plan in one year, following extensive consultations with the community. “Our goal here is to listen, to understand,” he said. “This has to be a community conversation…otherwise it won’t have the political credibility to do things that are quite bold.”

    ———-

    “Alphabet to build futuristic city in Toronto” by Leslie Hook; The Financial Times; 10/17/2017

    “In its proposal, Sidewalk also said that Toronto would need to waive or exempt many existing regulations in areas like building codes, transportation, and energy in order to build the city it envisioned. The project may need “substantial forbearances from existing laws and regulations,” the group said.

    LOL, yeah, it’s a good bet that A LOT of existing laws and regulations are going to have to be waived. Especially laws involving personal data privacy. And it sounds like the data collected isn’t just going to involve your whereabouts and other information the sensors everywhere will be able to pick up. Alphabet is also envisioning “tech-enabled primary healthcare that will be integrated with social services”, which means medical data privacy laws are probably also going to have to get waived:


    One of the challenges for the new district will be setting data policies and addressing concerns over privacy, which are particularly acute because smart city technologies often rely on collecting vast amounts of data to make cities run more efficiently.

    In the vision statement submitted as part of its bid, Sidewalk describes a vast system of sensors that will monitor everything from park benches and overflowing waste bins, to noise and pollution levels in housing. The development will also pioneer new approaches to energy, including a thermal grid and on-site generation, and tech-enabled primary healthcare that will be integrated with social services.

    Let’s also not forget about the development of technologies that can collect personal health information like heart rates and breathing patterns using WiFi signals alone (which would pair nicely with Google’s plans to put free WiFi kiosks bristling with sensors on sidewalks everywhere). And as is pretty clear at this point, anything that can be sensed remotely will be sensed remotely in this new city. Because that’s half the point of the whole thing. So yeah, “substantial forbearances from existing laws and regulations” will no doubt be required.

    Interestingly, Alphabet recently announced a new initiative that sounds like exactly the kind of “tech-enabled primary healthcare that will be integrated with social services” the company has planned for its new city: Cityblock was just launched. It’s a new Alphabet startup focused on improving health care management by, surprise!, integrating various technologies into a health care system with the goal of bringing down costs and improving outcomes. But it’s not simply new technology that’s supposed to do this. Instead, that technology is to be used in a preventive manner in order to address expensive health conditions before they get worse. As such, Cityblock is going to focus on behavioral health. Yep, it’s a health care model where a tech firm, paired with a health firm, tries to get you to live a healthier lifestyle by collecting lots of data about you. And while this approach would undoubtedly cause widespread privacy concerns, those concerns will probably be somewhat muted in this case since the target market Cityblock has in mind is poor people, especially Medicaid patients in the US:

    Fierce Healthcare

    Google’s parent company spins off an innovative startup healthcare provider

    by Matt Kuhrt | Oct 5, 2017 8:49am

    The latest Silicon Valley bid to disrupt a traditional industry appears to be aimed at healthcare. Cityblock, a startup quietly launched by Google’s parent company Alphabet, will focus on providing team-based care for low-income communities.

    The venture comes from one of Alphabet’s innovation-oriented groups, Sidewalk Labs, and will rely upon a team-based care delivery structure that is supported by doctors, behavioral health coaches and technological tools, according to an article from CNBC.

    Efforts by healthcare organizations to improve care management and increase patient engagement through social interaction have attracted attention, particularly in the context of chronic conditions, as FierceHealthcare has previously reported. While cell phones and social media apps have provided new avenues to boost patient engagement, integrating those technologies into an effective care delivery model has proven more complex. At the same time, major players such as the Centers for Medicare & Medicaid Services, actively seek feedback on models that prioritize behavioral health in response to the industry’s interest in the potential for efficiency from an increased emphasis on preventive and ongoing care.

    Cityblock aims to provide Medicaid and lower-income Medicare beneficiaries access to high-value, readily available personalized health services. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medium that the organization will apply leading-edge care models that fully integrate primary care, behavioral health and social services. It expects to open its first clinic, which it calls a Neighborhood Health Hub, in New York City in 2018.

    Cityblock’s interdisciplinary management team, which includes both veterans of the traditional healthcare industry and Google technologists, will focus on preventive care. Behavioral health coaches will drive care teams that will build social relationships and deliver care at centrally located “hubs,” via telehealth services or house calls, according to the website. Cityblock is also in the process of negotiating partnerships to ensure insurance companies cover its services.

    He also points out that Cityblock has made a conscious decision to target low-income Americans, who he says have traditionally been short-changed by industry innovation efforts.

    ———-

    “Google’s parent company spins off an innovative startup healthcare provider” by Matt Kuhrt; Fierce Healthcare; 10/05/2017

    Cityblock aims to provide Medicaid and lower-income Medicare beneficiaries access to high-value, readily available personalized health services. To do this, Iyah Romm, cofounder and CEO, writes in a blog post on Medium that the organization will apply leading-edge care models that fully integrate primary care, behavioral health and social services. It expects to open its first clinic, which it calls a Neighborhood Health Hub, in New York City in 2018.”

    It’s probably worth recalling that personalized services for the poor intended to ‘help them help themselves’ were the centerpiece of House Speaker Paul Ryan’s proposal to give every poor person a life coach who would issue “life plans” and “contracts” that poor people would be expected to meet, with penalties if they fail to meet them. So when we’re talking about setting up special personalized “behavioral health” monitoring systems as part of health care services for the poor, don’t forget that this personalized monitoring system is going to be really handy when politicians want to say, “if you want to stay on Medicaid you had better make XYZ changes in your lifestyle. We are watching you.” And since right-wingers generally expect the poor to be super-human (capable of working multiple jobs, getting an education, raising a family, and dealing with any unforeseen personal disasters in stride, all simultaneously), we shouldn’t be surprised to see behavioral health standards that almost no one can meet, especially since juggling all of that simultaneously is an incredibly unhealthy lifestyle.

    Also recall that Paul Ryan suggested that his ‘life coach’ plan could apply to other federal programs for the poor, including food stamps. It’s not a stretch at all to imagine ‘life coaches’ for Medicaid recipients would appeal to the right wing, as long as it involves a ‘kicking the poor’ dynamic. And that’s part of the tragedy of the modern age: surveillance technology and a focus on behavioral health could be great as a helpful, voluntary tool for people who want help getting healthier, but it’s hard to imagine it not becoming a coercive nightmare in the US given the incredible antipathy towards the poor that pervades American society.

    So as creepy as Google’s city is on its face regarding what it tells us about how the future is unfolding for people of all incomes and classes, don’t forget that we could be looking at the first test bed for the kind of surveillance welfare state that’s perfect for kicking people off of welfare. Just set unrealistic standards that involve a lot of paternalistic moral posturing (which should play well with voters), watch all the poor people with the surveillance technology, and wait for the wave of inevitable failures who can be kicked off for not ‘trying’ or something.

    Posted by Pterrafractyl | October 18, 2017, 3:28 pm
  13. There’s some big news about Facebook’s mind-reading technology ambitions, although it’s not entirely clear if it’s good news, bad news, scary news or what: Regina Dugan, the former head of DARPA who jumped to Google and then Facebook, where she was working on the mind-reading technology, just left Facebook. Why? Well, that’s where it’s unclear. According to a Tweet Dugan made about her departure:

    There is a tidal shift going on in Silicon Valley, and those of us in this industry have greater responsibilities than ever before. The timing feels right to step away and be purposeful about what’s next, thoughtful about new ways to contribute in times of disruption.

    So Dugan is leaving Facebook, to be more purposeful and responsible. And she was the one heading up the mind-reading technology project. Is that scary news? It’s unclear but it seems like that might be scary news:

    Gizmodo

    What Happens to Facebook’s Mind Reading Project Now That Its Leader Is Gone?

    Alex Cranz
    10/17/2017 5:05pm

    Regina Dugan, a tech exec with roots in the government sector as the former director of DARPA, is leaving Facebook and her departure calls into question the status of one of the craziest things Facebook has been working on.

    Fittingly, Dugan announced the departure in a post on Facebook today.

    [see Facebook post]

    If you’re unfamiliar with Dugan herself you may be familiar with some of the skunkworks projects she oversaw while managing Google’s secretive Advanced Technology and Projects (ATAP) group from 2012 to 2016. Those projects included Tango, a highly accurate augmented reality device packed into a phone, and Ara, the now scuttled modular phone that could have made your mobile hardware upgrades a whole lot cheaper.

    In 2016 Dugan left Google for another huge company with little consumer gadget experience: Facebook. At Facebook she ran Building 8, another privately funded research and development group like ATAP.

    The projects Dugan and her colleagues developed at Building 8 didn’t just include neat gadgets for the near future; they could have also led to enormous leaps forward in technology as a whole. The most noted project was one announced at F8, Facebook’s annual developer conference, in April. Called Silent Voice First, the project sought to allow computers to read your mind. Obviously that would make it easier to post to Facebook when your hands are busy, and it could be life altering for people with severe physical limitations, but it would also, you know, be a computer, run by Facebook, that READS YOUR MIND.

    Neither Dugan nor Facebook has made it clear why she’s departing at this time; a representative for Facebook told Gizmodo the company had nothing further to add (we’ve also reached out to Dugan). And Facebook has not detailed what will happen to the projects she oversaw at Building 8.

    Beyond that, all we have is a prepared quote from Dugan that was provided to reporters, via Bloomberg’s Sarah Frier.

    There is a tidal shift going on in Silicon Valley, and those of us in this industry have greater responsibilities than ever before. The timing feels right to step away and be purposeful about what’s next, thoughtful about new ways to contribute in times of disruption.

    These aren’t exactly the words you want to hear from the woman overseeing the development of mind-reading technology for one of the largest private surveillance apparatuses in the world.

    But it is a good reminder for us all that the biggest leaps forward in technology, the next steps on our journey towards a Matrix or Star Trek-like future, are not necessarily in the hands of altruistic scientists with public funding, but potentially in the hands of enormous private corporations who seem to primarily perceive humans as commodities, and technology as inroads into our wallets and minds. In cases like that one would hope they’d be responsible.

    ———-

    “What Happens to Facebook’s Mind Reading Project Now That Its Leader Is Gone?” by Alex Cranz; Gizmodo; 10/17/2017

    “Neither Dugan nor Facebook has made it clear why she’s departing at this time; a representative for Facebook told Gizmodo the company had nothing further to add (we’ve also reached out to Dugan). And Facebook has not detailed what will happen to the projects she oversaw at Building 8.”

    It’s a mystery. A mind-reading technology mystery. Oh goodie. As the author of the above piece notes in response to Dugan’s tweet about being responsible and purposeful, these aren’t exactly reassuring words in this context:


    These aren’t exactly the words you want to hear from the woman overseeing the development of mind-reading technology for one of the largest private surveillance apparatuses in the world.

    That’s the context. The head of the mind-reading technology division for one of the largest private surveillance apparatuses in the world just left the company for cryptic reasons involving the need for the tech industry to be more responsible than ever and her choice to step away to be purposeful. It’s not particularly reassuring news.

    Posted by Pterrafractyl | October 18, 2017, 8:45 pm
  14. Here’s some new research worth keeping in mind regarding the mind-reading technologies being developed by Facebook and Elon Musk: While reading your exact thoughts, the stated goal of both Facebook and Musk, will probably require quite a bit more research, reading your emotions is something researchers can already do. And this ability to read emotion can, in turn, potentially be used to read what you’re thinking by looking at your emotional response to particular concepts.

    That’s what some researchers just demonstrated, using fMRI brain imaging technology to gather data on brain activity which was fed into software trained to identify distinct patterns of brain activity. The results are pretty astounding. In the study, researchers recruited 34 individuals: 17 people who self-professed to having had suicidal thoughts before, and 17 others who hadn’t. Then they measured the brain activities of these 34 individuals in response to various words, including the word “death.” It turns out “death” created a distinct signature of brain activity differentiating the suicidal individuals from the control group, allowing the researchers to identify those with suicidal thoughts 91 percent of the time in this study:

    The Daily Beast

    A Machine Might Just Read Your Mind and Predict If You’re Suicidal
    A psychology professor says his software can figure out if a person is suicidal. But does it really work?

    Tanya Basu
    11.01.17 9:00 AM ET

    A few years ago, Marcel Just was trying to figure out how to have real-life applications for his research. Just, a professor of psychology at Carnegie Mellon and director of the Center for Cognitive Brain Imaging, had spent the majority of the past decade teaching computer programs how to identify thoughts. He had found—with the help of a functional magnetic resonance imaging (fMRI) machine—that each emotion we feel had a specific signature in the brain and lit up in uniquely identifiable ways. Just had trained a piece of software to follow these patterns and recognize about 30 concepts and emotions.

    “We asked whether we could identify what a person was thinking from the machine learning patterns,” Just explained. “The machine learning data was figured out with various kinds of concepts; eventually it learned how to map between patterns and concepts.”

    “From that research,” he added, “we realized it was possible to tell what a person was thinking.”

    In other words, Just had created artificial intelligence that could read your thoughts.

    Around this time, Just spoke at the University of Pittsburgh’s medical school. David Brent, a professor of psychology specializing in children, approached him.

    “Do you think this could be used to identify suicidal thoughts?” Brent asked.

    It hit Just then: What if artificial intelligence could predict what a suicidal person was thinking? And maybe even prevent a suicidal person from committing suicide?

    Earlier this week, Just, Brent, and a few other colleagues published a landmark paper in the journal Nature that finds that with an astonishing 91% accuracy, artificial intelligence is able to figure out if a person is considering suicide.

    The experiment is remarkable for more than one reason. There’s of course the idea of using a machine trained to figure out neural patterns to identify those who might consider suicide. It’s a subset of the population that is typically difficult to pinpoint and help before it’s too late, relying not only on their telling others of their desire to kill themselves, but also that person actually acting on it and helping the suicidal person in trouble.

    Just and Brent recruited 34 individuals from local clinics: 17 who’d self-professed to having had suicidal thoughts before, and 17 others who hadn’t. The research team put the 34 individuals through an fMRI machine and had them think about words (with the help of the Adult Suicidal Ideation Questionnaire, which measures “suicide ideation”) that represented different “stimulus” concepts, ranging from positive ones (praise, bliss, carefree, and kindness), negative ones (terrible, cruelty, evil), and suicide (fatal, funeral, death).

    The last one—death—was the most damning of the brain signatures in Just’s study. Those who were suicidal showed a spot of angry crimson at the front of the brain. Control patients, meanwhile, just had specks of red amidst a sea of blue in the pictures. “These people who are suicidal had more sadness in their representation of death, and more shame as well,” he said.
    [see neural imaging representations]

    So Just and Brent set to work teaching a machine the concepts that were most associated with suicide, and those that weren’t. “If you trained the machine on 10 people in their anger signature, and put the 11th person on the scanner, it should be able to tell if the 11th person is angry or not,” Just said of how the machine was put to the test. The machine then figured out if the person was suicidal or not.

    The results are strong, even if the sample size is relatively small: After going through all 34 people, Just and his research team were able to say with a 91% success rate which of the individuals had displayed suicidal thoughts.

    That’s, in short, amazing. But it’s not perfect. What about the other 9%? “It’s a good question,” he said of the gap. “There seems to be an emotional difference [we don’t understand]” that the group hopes to test in future iterations of the study.

    Another problem with the study as it stands? The fact that it uses the fMRI machine in the first place. “Nobody used machine learning in the early days,” Just said. “This [artificial intelligence] uses multiple volume elements—’voxels’—to figure out neural representation.” If that sounds expensive, it is. And expense makes any potential therapy therefore more difficult to access, a criticism of brain scanning studies covered by Ed Yong at The Atlantic: “When scientists use medical scanners to repeatedly peer at the shapes and activities of the human brain, those brains tend to belong to wealthy and well-educated people. And unless researchers take steps to correct for that bias, what we get is an understanding of the brain that’s incomplete, skewed, and … well … a little weird.”

    The more subtle nuance of the study that deserves note is the very real possibility that artificial intelligence can do something that strongly resembles reading your mind. We like to conceive of thoughts as amorphous concepts, as unique to our own headspace and experience. What might tickle one person’s fancy might not necessarily do the same for another; what brings one individual shame won’t bother someone else. But those core feelings of happiness, shame, sadness, and others seem to look almost identical from person to person.

    Does this mean that we can potentially eradicate suicide, though? Just is hesitant to make that assumption, though he thinks this is a huge step towards understanding what he terms “thought disorders.” “We can look at the neural signature and see how it’s changed,” he said, “see what this person is thinking, whether it’s unusual.” Just thinks that most realistically, this is going to be a huge first step towards developing a unique therapy for suicidal individuals: If we know the specific thought processes that are symptomatic of suicide, we can know how to potentially spot suicidal individuals and help them.

    “This isn’t a wild pie in the sky idea,” Just said. “We can use machine learning to figure out the physicality of thought. We can help people.”

    ———-

    “A Machine Might Just Read Your Mind and Predict If You’re Suicidal” by Tanya Basu; The Daily Beast; 11/01/2017

    “”This isn’t a wild pie in the sky idea,” Just said. “We can use machine learning to figure out the physicality of thought. We can help people.””

    Yes indeed, this kind of technology could be wildly helpful in the field of brain science and the study of mental illness. The ability to break down mental activity in response to concepts and see which parts of the brain are lighting up and what types of emotions they’re associated with would be an invaluable research tool. So let’s hope researchers are able to come up with all sorts of useful discoveries about all sorts of mental conditions using this kind of technology. In responsible hands this could lead to incredible breakthroughs in medicine and mental health and really could improve lives.

    But, of course, with technology being the double-edged sword that it is, we can’t ignore the reality that the same technology that would be wonderful for responsible researchers working with volunteers in a lab setting would be absolutely terrifying if it was incorporated into, say, Facebook’s planned mind-reading consumer technology. After all, if Facebook’s planned mind-reading tech can read people’s thoughts, it seems like it should also be capable of reading something much simpler to detect, like emotional responses.

    Or at least typical emotional responses. As the study also indicated, there’s going to be a subset of people whose brains don’t emotionally respond in the “normal” manner, where the definition of “normalcy” is probably filled with all sorts of unknown biases:


    The results are strong, even if the sample size is relatively small: After going through all 34 people, Just and his research team were able to say with a 91% success rate which of the individuals had displayed suicidal thoughts.

    That’s, in short, amazing. But it’s not perfect. What about the other 9%? “It’s a good question,” he said of the gap. “There seems to be an emotional difference [we don’t understand]” that the group hopes to test in future iterations of the study.

    Another problem with the study as it stands? The fact that it uses the fMRI machine in the first place. “Nobody used machine learning in the early days,” Just said. “This [artificial intelligence] uses multiple volume elements—’voxels’—to figure out neural representation.” If that sounds expensive, it is. And expense makes any potential therapy therefore more difficult to access, a criticism of brain scanning studies covered by Ed Yong at The Atlantic: “When scientists use medical scanners to repeatedly peer at the shapes and activities of the human brain, those brains tend to belong to wealthy and well-educated people. And unless researchers take steps to correct for that bias, what we get is an understanding of the brain that’s incomplete, skewed, and … well … a little weird.”

    “What about the other 9%? “It’s a good question,” he said of the gap. “There seems to be an emotional difference [we don’t understand]” that the group hopes to test in future iterations of the study.”
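
    As an aside on the methodology, a 91 percent figure from only 34 subjects typically comes out of a leave-one-out scheme like the one Just describes (“If you trained the machine on 10 people … put the 11th person on the scanner”). Here is a minimal sketch of that validation idea on fabricated data; it is not the study’s actual pipeline, which used fMRI voxel patterns and its own classifier.

        # Illustrative only: leave-one-out validation of a simple classifier on
        # synthetic "neural signature" features for 34 hypothetical subjects.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        n_subjects, n_features = 34, 20          # 17 + 17 subjects, made-up feature count
        labels = np.array([0] * 17 + [1] * 17)   # 0 = control group, 1 = ideation group

        # Fabricated features: give the two groups slightly different mean activation
        # patterns so the classifier has something to learn.
        features = rng.normal(size=(n_subjects, n_features)) + labels[:, None] * 0.8

        # Train on all subjects but one, test on the held-out subject, repeat for all 34.
        scores = cross_val_score(LogisticRegression(max_iter=1000), features, labels,
                                 cv=LeaveOneOut())
        print(f"Leave-one-out accuracy: {scores.mean():.0%} across {len(scores)} held-out subjects")

    The point of the sketch is simply that accuracy estimates from a few dozen subjects, however carefully cross-validated, say little about how the same classifier would behave out in the general population.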

    So once this technology becomes cheap enough for widespread use (which is one of the goals of Facebook and Musk) we could easily find that “brain types” become a new category for assessing people. And predicting behavior. And anything else people (and not just expert researchers) can think up to use this kind of data for.

    And don’t forget, if Facebook really can develop cheap thought-reading technology designed to interface your brain with a computer, that could easily become the kind of thing employees are simply expected to use due to the potential productivity enhancements. So imagine technology that’s not only reading the words you’re thinking but also reading the emotional response you have to those words. And imagine being basically forced to use this technology in the workplace of the future because it’s deemed to be productivity enhancing or something. That could actually happen.

    It also raises the question of what Facebook would do if it detected someone was showing the suicidal brain signature. Do they alert someone? Will thinking sad thoughts while using the mind-reading technology result in a visit from a mental health professional? What about really angry or violent thoughts? It’s the kind of area that’s going to raise fascinating questions about the responsible use of this data. Fascinating and often terrifying questions.

    It’s all pretty depressing, right? Well, if the emerging mind-reading economy gets overwhelmingly depressing, at least it sounds like the mind-reading technology causing your depression will be able to detect that it’s causing this. Yay?

    Posted by Pterrafractyl | November 1, 2017, 3:06 pm
  15. Remember those reports about Big Data being used in the workplace to allow employers to predict which employees are likely to get sick (so they can get rid of them before the illnesses get expensive)? Well, as the following article makes clear, employers are going to be predicting a lot more than just who is getting sick. They’re going to be trying to predict everything they can predict, along with things they can’t accurately predict but decide to try to predict anyway:

    The Guardian

    Big Brother isn’t just watching: workplace surveillance can track your every move

    Employers are using a range of technologies to monitor their staff’s web-browsing patterns, keystrokes, social media posts and even private messaging apps

    Olivia Solon in San Francisco
    Monday 6 November 2017 03.00 EST

    How can an employer make sure its remote workers aren’t slacking off? In the case of talent management company Crossover, the answer is to take photos of them every 10 minutes through their webcam.

    The pictures are taken by Crossover’s productivity tool, WorkSmart, and combine with screenshots of their workstations along with other data – including app use and keystrokes – to come up with a “focus score” and an “intensity score” that can be used to assess the value of freelancers.

    Today’s workplace surveillance software is a digital panopticon that began with email and phone monitoring but now includes keeping track of web-browsing patterns, text messages, screenshots, keystrokes, social media posts, private messaging apps like WhatsApp and even face-to-face interactions with co-workers.

    “If you are a parent and you have a teenage son or daughter coming home late and not doing their homework you might wonder what they are doing. It’s the same as employees,” said Brad Miller, CEO of Awareness Technologies, which sells a package of employee monitoring tools under the brand Interguard.

    Crossover’s Sanjeev Patni insists that workers get over the initial self-consciousness after a few days and accept the need for such monitoring as they do CCTV in shopping malls.

    “The response is ‘OK, I’m being monitored, but if the company is paying for my time how does it matter if it’s recording what I’m doing? It’s only for my betterment,’” he said.

    Such “betterment” apparently isn’t necessary for managers: they can pick and choose when to turn their cameras on.

    The majority of surveillance tech providers focus their attention on the financial sector, where companies are legally required to track staff communications to prevent insider trading. But they are increasingly selling their tech to a broader range of companies to monitor staff productivity, data leaks and Human Resources violations, like sexual harassment and inappropriate behavior.

    Wiretap specializes in monitoring workplace chat forums such as Facebook Workplace, Slack and Yammer to identify, among other issues, “intentional and unintentional harassment, threats, intimidation”.

    Last year an employee at an IT services company sent a private chat message to a friend at work worried that he had just shared his sexual identity with his manager in a meeting and fearing he’d face career reprisal. Wiretap detected the employee’s concern and alerted a senior company exec who was then able to intervene, talk to the manager and defuse the situation.

    “Having the visibility allows you to step in productively,” said Greg Moran, COO of Wiretap. “Even if it’s not a serious offense you can see the early indications of someone heading down a path.”

    To monitor productivity, software can measure proxies such as the number of emails being sent, websites visited, documents and apps opened and keystrokes. Over time it can build a picture of typical user behaviour and then alert when someone deviates.

    “If it’s normal for you to send out 10 emails, type 5,000 keystrokes and be active on a computer for three hours a day, if all of a sudden you are only active for one hour or typing 1,000 keystrokes, there seems to be a dip in productivity,” said Miller.

    “Or if you usually touch 10 documents a day and print two and suddenly you are touching 500 and printing 200 that may mean you’re stealing documents in preparation of leaving the company.”

    Other companies, such as Teramind, seek to measure distraction by looking at how much a person is switching between applications.

    “If a paralegal is writing a document and every few seconds is switching to Hipchat, Outlook and Word then there’s an issue that can be resolved by addressing it with the employee,” said Teramind’s CEO, Isaac Kohen.

    A common but flawed technique is keyword detection, drawing from a list of predefined terms including swear words and slang associated with shady behavior. This approach tends to kick up a lot of false positives and is easy to circumvent by anyone intent on beating the system.

    That wasn’t the case when an All State Insurance franchise did a live demonstration of Interguard’s software to other dealers. The technology started scanning the network and almost immediately found an email with the words “client list” and “résumé”. The demonstrator opened the email in front of a room full of peers to discover his best employee was plotting to move to another company.

    Companies like Digital Reasoning search for more subtle indicators of possible wrongdoing, such as context switching. This is where one person suggests moving the discussion to encrypted apps like WhatsApp or Signal or even taking the conversation offline, indicating that the subject matter is too risky for the corporate network.

    “Now people know a lot of these systems monitoring communications are becoming more sophisticated, they are saying, ‘Hey let’s move over to the other app’ or ‘Let’s meet downstairs for coffee’. These are small clues that have surfaced in prosecutions,” said Digital Reasoning’s chief product officer, Marten den Haring.

    Even WhatsApp isn’t safe from Qumram’s monitoring software, which is placed on employees’ devices – with their consent – to capture everything they do, including the messages they send to customers using WhatsApp.

    “It truly is Big Brother watching you,” said Qumram’s Patrick Barnett.

    The spying technique that most companies avoid, despite Crossover’s enthusiasm, is accessing employees’ webcams. (Although you should probably tape over yours like Mark Zuckerberg does if you are worried about it.)

    American companies generally aren’t required by law to disclose how they monitor employees using company-issued devices, although they tend to include a catch-all clause in employment contracts declaring such monitoring.

    “You can look at everything [in the US],” said Al Gidari, director of privacy at the Stanford Centre for Internet and Society, adding that new surveillance software is so intrusive because it’s “more pervasive, continuous and searchable”.

    Even if you’re not an employee you may still be subject to surveillance, thanks to technology used to screen potential job candidates. Santa Monica-based Fama provides social media screening to employers to check for any problematic content.

    CEO Ben Mones said Fama was only interested in content that’s relevant to businesses, which includes “references to bigotry, misogyny and violence” as well as drug and alcohol references. The software, he said, can “tell the difference between smoking weed in the backyard and weeding the backyard”.

    When pushed on how the company differentiates bigotry versus, for example, a black person using the N-word, the response is a little fuzzy.

    “It’s a super-nuanced topic,” Mones said, adding that some of the thinly veiled signs of racism, like references to Confederate flags or statues, wouldn’t come up.

    “Employers aren’t looking at references like that to make a hiring decision,” he said.

    And connecting the dots between a person’s work life and personal life can lead to uncomfortable territory. One insider at a large consulting firm told the Guardian the company was looking into whether it could prevent fraud among bankers by looking at their Facebook pages. One scenario would be a trader who had just changed their relationship status from married to divorce, the expense of which “could put that person under pressure to commit fraud or steal”.

    The insider had reservations about the effectiveness of such a system.

    “If I were divorced, would I be more likely to steal? I don’t think so. It makes assumptions,” he said, adding, “The more data and technology you have without an underlying theory of how people’s minds work then the easier it is to jump to conclusions and put people in the crosshairs who don’t deserve to be.”

    ———-

    “Big Brother isn’t just watching: workplace surveillance can track your every move” by Olivia Solon; The Guardian; 11/06/2017

    “Today’s workplace surveillance software is a digital panopticon that began with email and phone monitoring but now includes keeping track of web-browsing patterns, text messages, screenshots, keystrokes, social media posts, private messaging apps like WhatsApp and even face-to-face interactions with co-workers.”

    And what are employers doing with that digital panopticon? For starters, surveilling employees’ computer usage, as would unfortunately be expected. But what might not be expected is that this panopticon software can be set up to determine the expected behavior of a particular employee and then compare that behavior profile to the observed behavior. And if there’s a big change, the managers get a warning. The panopticon isn’t just surveilling you. It’s getting to know you:


    To monitor productivity, software can measure proxies such as the number of emails being sent, websites visited, documents and apps opened and keystrokes. Over time it can build a picture of typical user behaviour and then alert when someone deviates.

    “If it’s normal for you to send out 10 emails, type 5,000 keystrokes and be active on a computer for three hours a day, if all of a sudden you are only active for one hour or typing 1,000 keystrokes, there seems to be a dip in productivity,” said Miller.

    “Or if you usually touch 10 documents a day and print two and suddenly you are touching 500 and printing 200 that may mean you’re stealing documents in preparation of leaving the company.”

    Other companies, such as Teramind, seek to measure distraction by looking at how much a person is switching between applications.

    “If a paralegal is writing a document and every few seconds is switching to Hipchat, Outlook and Word then there’s an issue that can be resolved by addressing it with the employee,” said Teramind’s CEO, Isaac Kohen.

    “If a paralegal is writing a document and every few seconds is switching to Hipchat, Outlook and Word then there’s an issue that can be resolved by addressing it with the employee”

    If you’re the type of person whose brain works better jumping back and forth between tasks, you’re going to get flagged as not being focused. People with ADHD are going to love the future.
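
    For a sense of how crude this kind of baseline-and-deviation monitoring can be, here is a minimal sketch using made-up daily activity counts and a simple z-score rule; the commercial tools presumably do something fancier, and scoring schemes like the “focus” and “intensity” scores are proprietary.

        # Illustrative only: build a per-employee baseline from past activity counts
        # and flag a day that deviates sharply from it.
        from statistics import mean, stdev

        baseline_emails_per_day = [10, 12, 9, 11, 10, 13, 10, 9, 11, 12]  # hypothetical history
        todays_emails = 2                                                 # hypothetical observation

        mu, sigma = mean(baseline_emails_per_day), stdev(baseline_emails_per_day)
        z = (todays_emails - mu) / sigma

        if abs(z) > 3:
            print(f"ALERT: {todays_emails} emails today is {z:+.1f} standard deviations from baseline")
        else:
            print("Within normal range")

    Anything from a vacation day to a change in job duties will trip a rule like this, which is exactly the problem: the deviation is real, but the interpretation (“a dip in productivity”, “stealing documents”) is a guess layered on top of it.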

    People who like to talk in person over coffee are also going to love the future:


    Companies like Digital Reasoning search for more subtle indicators of possible wrongdoing, such as context switching. This is where one person suggests moving the discussion to encrypted apps like WhatsApp or Signal or even taking the conversation offline, indicating that the subject matter is too risky for the corporate network.

    “Now people know a lot of these systems monitoring communications are becoming more sophisticated, they are saying, ‘Hey let’s move over to the other app’ or ‘Let’s meet downstairs for coffee’. These are small clues that have surfaced in prosecutions,” said Digital Reasoning’s chief product officer, Marten den Haring.

    So the fact that employees know they’re being monitored is itself getting incorporated into more sophisticated algorithms, which assume workers will try to hide their misbehavior from the panopticon. Of course, employees will inevitably learn about all these subtle clues the panopticon is watching for, which will no doubt lead to algorithms that incorporate even more subtle clues. An ever more sophisticated cat to catch an ever more sophisticated mouse. And so on, forever.
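
    To see how simple the detectable “clues” can be, here is a toy sketch of the kind of phrase matching that could flag a “context switch”; the cue phrases are invented, and real systems are presumably far more sophisticated, but the crudeness is the point: matching like this generates false positives as easily as it catches anything.

        # Toy example with invented cue phrases; not any vendor's actual rule set.
        CUES = ["whatsapp", "signal", "take this offline", "coffee downstairs", "call me instead"]

        def flags_context_switch(message: str) -> bool:
            text = message.lower()
            return any(cue in text for cue in CUES)

        for msg in ["Let's move this over to WhatsApp",
                    "Want to grab coffee downstairs and talk?",
                    "I'll send the report by Friday"]:
            print(f"{flags_context_switch(msg)!s:>5}  {msg}")

    A worker innocently suggesting coffee gets flagged just as readily as someone actually trying to dodge the monitoring, which feeds the cat-and-mouse dynamic described above.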

    What could possibly go wrong? Oh yeah, a lot, especially if the assumptions that go into all these algorithms are wrong:


    And connecting the dots between a person’s work life and personal life can lead to uncomfortable territory. One insider at a large consulting firm told the Guardian the company was looking into whether it could prevent fraud among bankers by looking at their Facebook pages. One scenario would be a trader who had just changed their relationship status from married to divorce, the expense of which “could put that person under pressure to commit fraud or steal”.

    The insider had reservations about the effectiveness of such a system.

    “If I were divorced, would I be more likely to steal? I don’t think so. It makes assumptions,” he said, adding, “The more data and technology you have without an underlying theory of how people’s minds work then the easier it is to jump to conclusions and put people in the crosshairs who don’t deserve to be.”

    “The more data and technology you have without an underlying theory of how people’s minds work then the easier it is to jump to conclusions and put people in the crosshairs who don’t deserve to be.”

    And keep in mind that when your employer’s panopticon predicts you’re going to do something bad in the future, they probably aren’t going to tell you that when they let you go. They’ll just make up some random excuse. Much like how employers who predict you’re going to get sick with an expensive illness probably aren’t going to tell you this. They’re just going to find a reason to let you go. So we can add “misapplied algorithmic assumptions” to the list of potential mystery reasons for when you suddenly get let go from your job with minimal explanation in the panopticon office of the future: maybe your employer predicts you’re about to get really ill. Or maybe some other completely random thing set off the bad-behavior predictive algorithm. There’s a range of mystery reasons, so at least you shouldn’t necessarily assume you’re about to get horribly ill when you’re fired. Yay.

    Posted by Pterrafractyl | November 14, 2017, 4:33 pm
